Baidu PaddlePaddle (飞桨) 1.7 Deep Learning Platform Tutorial
2020-05-02 02:08:28
How to Contribute Code
Release Notes
Installation Guide
Install on macOS
Compile from Source
Compile from Source on Ubuntu
Compile from Source on Windows
Compile from Source on macOS
Compile from Source on CentOS
Appendix
Install with Docker
Install on Windows
Install on Ubuntu
Install on CentOS
Install with Conda
FAQ
Installation and Compilation
Inference Engine
Network Construction and Training
Advanced Guide
Secondary Development
Adding a New OP
Notes on C++ OPs
How to Customize a C++ OP Outside the Framework
How to Write a New C++ OP
How to Write a New Python OP
How to Contribute Code
Notes on Submitting a PR
Local Development Guide
FAQ
Design Philosophy
Environment Variable FLAGS
Memory Management
Numerical Computation
Executor
cudnn
Distributed
Others
Debugging
check nan inf Tool
Device Management
Inference Deployment
Model Compression
Mobile Deployment
Paddle-Lite
Server-Side Deployment
Introduction to the C Inference API
Introduction to the C++ Inference API
Install and Compile the Windows Inference Library
Install and Compile the Linux Inference Library
Introduction to the Python Inference API
Performance Tuning
Best Practices for Distributed GPU Training
Memory Allocation and Optimization
Best Practices for Distributed CPU Training
Performance Analysis and Tools
Introduction to the timeline Tool
CPU Performance Tuning
Heap Memory Profiling and Optimization
Inference with the Paddle-TensorRT Library
Recompute: A Large-Batch Training Feature
Best Practices for Single-Machine Training
Model Evaluation/Debugging
VisualDL Tool
VisualDL User Guide
Introduction to the VisualDL Tool
Model Evaluation
Distributed Training
Quick Start for Distributed Training
Distributed Training with the Fleet API
Distributed Training Manual
Preparing Data
Asynchronous Data Reading
Preparation Steps
Synchronous Data Reading
Data Preprocessing Tools
Typical Cases
Natural Language Processing
Semantic Role Labeling
Machine Translation
Sentiment Analysis
Computer Vision
Generative Adversarial Networks
Image Classification
Simple Cases
Linear Regression
Digit Recognition
Word Embeddings
Tool Components
ELASTIC CTR
Recommendation
Personalized Recommendation
API Reference
fluid.backward
append_backward
gradients
API Categories by Function
Evaluation Metrics
Caffe-Fluid Common Layer Mapping Table
Data-Parallel Execution Engine
Distributed Training
Large-Scale Sparse Feature Model Training
Distributed Asynchronous Training
Distributed Synchronous Training
Preparing a reader for Distributed Training
Execution Engine
Complex Networks
CompiledProgram
Backpropagation
Basic Concepts
Model Parameters
Inference Engine
Model Saving and Loading
Optimizers
Neural Network Layers
Sequence
Pooling
Sparse Updates
Activation Functions
Control Flow
Math Operations
Image Detection
Tensor
Feeding Training/Inference Data with DataFeeder
Convolution
Learning Rate Schedulers
Loss Functions
Data Input/Output
TensorFlow-Fluid Common API Mapping Table
fluid.executor
scope_guard
Executor
global_scope
fluid.clip
ErrorClipByValue
GradientClipByNorm
GradientClipByValue
set_gradient_clip
GradientClipByGlobalNorm
fluid.io
save_inference_model
buffered
chain
get_program_parameter
get_program_persistable_vars
save_persistables
cache
batch
save
load
set_program_state
load_program_state
load_inference_model
DataLoader
load_vars
shuffle
save_params
ComposeNotAligned
PyReader
load_persistables
save_vars
multiprocess_reader
compose
xmap_readers
load_params
map_readers
firstn
fluid.dygraph
Conv2D
ExponentialDecay
GroupNorm
FC
NoamDecay
GRUUnit
Layer
Pool2D
PiecewiseDecay
Embedding
Sequential
NaturalExpDecay
save_dygraph
Linear
PolynomialDecay
LayerNorm
no_grad
prepare_context
CosineDecay
Conv3DTranspose
BackwardStrategy
NCE
PRelu
Conv2DTranspose
TreeConv
InverseTimeDecay
to_variable
ParameterList
guard
Conv3D
LayerList
BatchNorm
TracedLayer
BilinearTensorProduct
SpectralNorm
load_dygraph
fluid.metrics
CompositeMetric
Auc
DetectionMAP
MetricBase
Precision
ChunkEvaluator
Accuracy
EditDistance
Recall
fluid.transpiler
HashName
DistributeTranspilerConfig
RoundRobin
memory_optimize
DistributeTranspiler
release_memory
fluid
CUDAPinnedPlace
BuildStrategy
Variable
data
release_memory
cuda_places
memory_optimize
DistributeTranspilerConfig
is_compiled_with_cuda
ExecutionStrategy
create_lod_tensor
WeightNormParamAttr
cpu_places
DataFeeder
CUDAPlace
LoDTensorArray
ParallelExecutor
CompiledProgram
Executor
scope_guard
embedding
Tensor
program_guard
Program
ParamAttr
gradients
DataFeedDesc
global_scope
LoDTensor
CPUPlace
save
name_scope
load
in_dygraph_mode
load_op_library
require_version
default_startup_program
one_hot
cuda_pinned_places
DistributeTranspiler
default_main_program
create_random_int_lodtensor
fluid.nets
simple_img_conv_pool
sequence_conv_pool
img_conv_group
scaled_dot_product_attention
glu
fluid.profiler
stop_profiler
profiler
cuda_profiler
start_profiler
reset_profiler
fluid.layers
equal
unfold
StaticRNN
zeros_like
split
sequence_concat
sigmoid_focal_loss
logical_and
asin
polynomial_decay
sequence_reshape
affine_channel
retinanet_target_assign
matmul
DynamicRNN
hsigmoid
unique
Print
natural_exp_decay
space_to_depth
prior_box
crop_tensor
polygon_box_transform
where
detection_output
lod_reset
BeamSearchDecoder
floor
logsigmoid
strided_slice
gather_tree
sum
linear_chain_crf
LSTMCell
rank_loss
density_prior_box
im2sequence
row_conv
elementwise_min
cos_sim
dice_loss
temporal_shift
relu6
shuffle
cos
less_equal
margin_rank_loss
continuous_value_model
brelu
beam_search
rsqrt
softmax
reduce_any
resize_nearest
bilinear_tensor_product
inverse_time_decay
yolov3_loss
tensor_array_to_tensor
logical_xor
IfElse
image_resize
adaptive_pool3d
layer_norm
label_smooth
lod_append
less_than
RNNCell
gather
py_func
shuffle_channel
conv3d_transpose
pad2d
crf_decoding
array_read
softshrink
reorder_lod_tensor_by_rank
sequence_slice
data_norm
swish
spectral_norm
dropout
get_tensor_from_selected_rows
py_reader
dynamic_decode
GRUCell
elementwise_add
beam_search_decode
elementwise_pow
unsqueeze
dynamic_gru
log_loss
elementwise_max
auc
prroi_pool
soft_relu
Normal
target_assign
eye
edit_distance
exp
shard_index
fc
sequence_enumerate
resize_trilinear
abs
center_loss
reverse
lstm
chunk_eval
pixel_shuffle
smooth_l1
greater_than
rnn
npair_loss
selu
sequence_pad
psroi_pool
instance_norm
argmax
double_buffer
box_decoder_and_assign
ctc_greedy_decoder
clip
ones_like
conv3d
sequence_pool
unstack
argmin
linspace
ones
random_crop
gaussian_random_batch_size_like
adaptive_pool2d
roi_perspective_transform
reduce_all
generate_proposals
teacher_student_sigmoid_loss
cross_entropy
create_parameter
gather_nd
atan
cumsum
cosine_decay
Decoder
stack
distribute_fpn_proposals
square
multiclass_nms
affine_grid
relu
hard_shrink
mse_loss
accuracy
logical_not
array_length
hard_swish
iou_similarity
erf
concat
unique_with_counts
squeeze
deformable_conv
sequence_softmax
greater_equal
grid_sampler
fsp_matrix
while_loop
tanh_shrink
sequence_conv
embedding
gelu
roi_align
tanh
sequence_expand_as
cast
box_coder
shape
elementwise_sub
multi_box_head
linear_lr_warmup
elementwise_floordiv
generate_mask_labels
sigmoid_cross_entropy_with_logits
sequence_scatter
ssd_loss
array_write
reduce_max
filter_by_instag
warpctc
mul
anchor_generator
scale
hard_sigmoid
sign
transpose
mean
pow
multiplex
elementwise_div
hash
huber_loss
zeros
argsort
create_global_var
rank
elu
sequence_reverse
elementwise_mul
crop
masked_select
sampled_softmax_with_cross_entropy
scatter_nd
range
sequence_first_step
pad_constant_like
bipartite_match
Switch
thresholded_relu
nce
read_file
box_clip
increment
flatten
is_empty
assign
slice
switch_case
diag
create_tensor
retinanet_detection_output
logical_or
softmax_with_cross_entropy
reciprocal
topk
While
has_nan
reshape
roi_pool
resize_bilinear
acos
Uniform
sums
add_position_encoding
sin
sqrt
expand_as
rpn_target_assign
bpr_loss
batch_norm
dynamic_lstmp
softplus
conv2d_transpose
reduce_sum
sequence_mask
image_resize_short
clip_by_norm
fill_constant_batch_size_like
group_norm
dynamic_lstm
similarity_focus
noam_decay
one_hot
kldiv_loss
stanh
isfinite
create_array
yolo_box
reduce_prod
not_equal
sequence_unpad
fill_constant
conv2d
sequence_last_step
mean_iou
lstm_unit
MultivariateNormalDiag
elementwise_mod
gru_unit
lrn
generate_proposal_labels
expand
prelu
uniform_random_batch_size_like
scatter
pad
load
create_py_reader_by_data
sequence_expand
reduce_mean
exponential_decay
Categorical
round
sampling_id
l2_normalize
sigmoid
data
gaussian_random
scatter_nd_add
autoincreased_step_counter
maxout
ceil
collect_fpn_proposals
has_inf
merge_selected_rows
uniform_random
case
softsign
reduce_min
piecewise_decay
pool3d
cond
square_error_cost
pool2d
deformable_roi_pooling
leaky_relu
log
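Several of the `fluid.layers` entries above (`exponential_decay`, `piecewise_decay`, `inverse_time_decay`, ...) are learning-rate schedules. As a rough, framework-independent sketch of the math two of them compute (plain Python for illustration; these are not the fluid APIs themselves, and the function names here are ours):

```python
import math

def exponential_decay(lr0, step, decay_steps, decay_rate, staircase=False):
    """lr = lr0 * decay_rate ** (step / decay_steps).

    With staircase=True the exponent is floored, so the rate drops in
    discrete steps every `decay_steps` iterations instead of smoothly.
    """
    exponent = step / decay_steps
    if staircase:
        exponent = math.floor(exponent)
    return lr0 * decay_rate ** exponent

def piecewise_decay(step, boundaries, values):
    """Return values[i] while step is below boundaries[i]; values[-1] afterwards."""
    for boundary, value in zip(boundaries, values):
        if step < boundary:
            return value
    return values[-1]

# Example: start at 0.1, halve every 100 steps; drop to 0.05 then 0.01 at fixed steps.
print(exponential_decay(0.1, 100, 100, 0.5))
print(piecewise_decay(15000, [10000, 20000], [0.1, 0.05, 0.01]))
```

The fluid versions build these computations into the graph as tensors rather than returning plain floats, but the schedule values follow the same formulas.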
fluid.dataset
QueueDataset
InMemoryDataset
DatasetFactory
fluid.initializer
XavierInitializer
MSRAInitializer
force_init_on_cpu
Normal
NormalInitializer
Xavier
Uniform
init_on_cpu
ConstantInitializer
TruncatedNormalInitializer
NumpyArrayInitializer
Bilinear
TruncatedNormal
UniformInitializer
BilinearInitializer
MSRA
Constant
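The `Xavier`/`XavierInitializer` entries above refer to Glorot initialization, which scales initial weights by the layer's fan-in and fan-out to keep activation variance stable. A minimal pure-Python sketch of the uniform variant's bound (illustrative only, not the fluid API):

```python
import math
import random

def xavier_uniform(fan_in, fan_out, n):
    """Draw n weights uniformly from [-limit, limit], limit = sqrt(6 / (fan_in + fan_out))."""
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [random.uniform(-limit, limit) for _ in range(n)]

# Example: initial weights for a 256-in, 128-out fully connected layer.
weights = xavier_uniform(256, 128, 10)
print(max(abs(w) for w in weights))  # never exceeds the limit
```

The normal variant instead samples from a Gaussian whose variance is derived from the same fan-in/fan-out sum.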
fluid.unique_name
guard
generate
switch
fluid.optimizer
LarsMomentum
SGDOptimizer
ExponentialMovingAverage
DecayedAdagrad
FtrlOptimizer
AdamOptimizer
AdadeltaOptimizer
LambOptimizer
Adadelta
Adagrad
SGD
LarsMomentumOptimizer
AdagradOptimizer
Dpsgd
DpsgdOptimizer
DGCMomentumOptimizer
RecomputeOptimizer
AdamaxOptimizer
ModelAverage
LookaheadOptimizer
PipelineOptimizer
DecayedAdagradOptimizer
Adamax
Adam
RMSPropOptimizer
Ftrl
Momentum
MomentumOptimizer
fluid.regularizer
L2DecayRegularizer
L2Decay
L1Decay
L1DecayRegularizer
Quick Start
Basic Concepts
Variable
Tensor
Operator
Executor
Dynamic Graph Mechanism - DyGraph
Programming Guide
LoDTensor
Program
Programming Practice
Single-Machine Training
Evaluating the Model During Training
Configuring a Simple Network
Saving/Loading Models and Variables, and Incremental Training
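The LoDTensor entry above denotes a tensor carrying level-of-detail (LoD) information, which packs variable-length sequences into one flat data array plus a list of offsets. A minimal pure-Python sketch of that offset convention (illustrative only; not the fluid API):

```python
def lengths_to_offsets(lengths):
    """Convert per-sequence lengths, e.g. [2, 3, 1], to LoD offsets [0, 2, 5, 6]."""
    offsets = [0]
    for n in lengths:
        offsets.append(offsets[-1] + n)
    return offsets

def split_by_lod(data, offsets):
    """Recover the individual sequences from the flat data using the offsets."""
    return [data[offsets[i]:offsets[i + 1]] for i in range(len(offsets) - 1)]

offsets = lengths_to_offsets([2, 3, 1])
print(offsets)                                    # [0, 2, 5, 6]
print(split_by_lod([1, 2, 3, 4, 5, 6], offsets))  # [[1, 2], [3, 4, 5], [6]]
```

A multi-level LoD simply nests this scheme: a higher level's offsets index into the level below it rather than into the raw data.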