Baidu PaddlePaddle 1.8 Deep Learning Platform Tutorial
2020-12-18 23:10:26
Release Note
Installation Guide
Compile from Source
Compile from Source on macOS
Compile from Source on Linux
Compile from Source on Windows
Appendix
Docker Installation
Install with Docker
Pip Installation
Pip Installation on Windows
Pip Installation on Linux
Pip Installation on macOS
Conda Installation
Conda Installation on macOS
Conda Installation on Windows
Conda Installation on Linux
FAQ
Installation FAQ
Framework FAQ
Other FAQs
Advanced Guide
Custom Development
Adding a New OP
Notes on C++ OPs
How to Define a Custom C++ OP Outside the Framework
How to Write a New C++ OP
How to Write a New Python OP
How to Contribute Code
Notes on Submitting a PR
Local Development Guide
FAQ
Design Principles
Environment Variable FLAGS
Memory Management
Numerical Computation
Executor
cuDNN
Distributed
Others
Debugging
The check nan inf Tool
Device Management
Inference and Deployment
Model Compression
Mobile Deployment
Paddle-Lite
Server-Side Deployment
Introduction to the C Inference API
Introduction to the C++ Inference API
Install and Compile the Inference Library on Windows
Install and Compile the Inference Library on Linux
Introduction to the Python Inference API
Performance Tuning
Best Practices for Distributed GPU Training
Memory Allocation and Optimization
Best Practices for Distributed CPU Training
Performance Analysis and Tools
Introduction to the timeline Tool
CPU Performance Tuning
Inference with the Paddle-TensorRT Library
Best Practices for Mixed-Precision Training
Recompute: A Large-Batch Training Feature
Best Practices for Single-Node Training
Runtime Device Switching
Model Evaluation/Debugging
The VisualDL Tool
VisualDL User Guide
Introduction to VisualDL
Model Evaluation
Distributed Training
Quick Start for Distributed Training
Distributed Training with the Fleet API
Preparing Data
Imperative Programming Mode (Dynamic Graph)
Data Preparation, Loading, and Acceleration
Declarative Programming Mode (Static Graph)
Asynchronous Data Reading
Data Preprocessing Tools
Preparation Steps
Synchronous Data Reading
Typical Cases
Natural Language Processing
Semantic Role Labeling
Machine Translation
Sentiment Analysis
Computer Vision
Generative Adversarial Networks
Image Classification
Simple Cases
Linear Regression
Digit Recognition
Word Embeddings
Tool Components
Introduction to PaddlePaddle's Large-Scale Classification Library
ELASTIC CTR
Recommendation
Personalized Recommendation
API Reference
fluid.backward
append_backward
gradients
API Categories by Function
Evaluation Metrics
Caffe-Fluid Common Layer Mapping Table
Data-Parallel Execution Engine
Distributed Training
Large-Scale Sparse-Feature Model Training
Distributed Asynchronous Training
Distributed Synchronous Training
Preparing Readers for Distributed Training
Execution Engine
Complex Networks
CompiledProgram
Backpropagation
Basic Concepts
Model Parameters
Inference Engine
Model Saving and Loading
Optimizers
Neural Network Layers
Sequence
Pooling
Sparse Updates
Activation Functions
Control Flow
Math Operations
Image Detection
Tensor
Feeding Training/Inference Data with DataFeeder
Convolution
Learning Rate Schedulers
Loss Functions
Data Input/Output
TensorFlow-Fluid Common API Mapping Table
fluid.executor
scope_guard
Executor
global_scope
fluid.clip
ErrorClipByValue
GradientClipByNorm
GradientClipByValue
set_gradient_clip
GradientClipByGlobalNorm
fluid.io
save_inference_model
buffered
chain
get_program_parameter
get_program_persistable_vars
save_persistables
cache
batch
save
load
set_program_state
load_program_state
load_inference_model
DataLoader
load_vars
shuffle
save_params
ComposeNotAligned
PyReader
load_persistables
save_vars
multiprocess_reader
compose
xmap_readers
load_params
map_readers
firstn
fluid.dygraph
Conv2D
ExponentialDecay
GroupNorm
NoamDecay
GRUUnit
disable_dygraph
grad
enable_imperative
Layer
Pool2D
PiecewiseDecay
Embedding
Sequential
NaturalExpDecay
save_dygraph
Linear
PolynomialDecay
declarative
enable_dygraph
LayerNorm
no_grad
prepare_context
ParallelEnv
ProgramTranslator
BCELoss
CosineDecay
Conv3DTranspose
enabled
L1Loss
MSELoss
BackwardStrategy
NLLLoss
disable_imperative
NCE
PRelu
Conv2DTranspose
TreeConv
InverseTimeDecay
Dropout
DataParallel
to_variable
ParameterList
guard
Conv3D
LayerList
BatchNorm
TracedLayer
BilinearTensorProduct
SpectralNorm
load_dygraph
InstanceNorm
fluid.metrics
CompositeMetric
Auc
DetectionMAP
MetricBase
Precision
ChunkEvaluator
Accuracy
EditDistance
Recall
fluid.transpiler
HashName
DistributeTranspilerConfig
memory_optimize
DistributeTranspiler
release_memory
fluid
CUDAPinnedPlace
BuildStrategy
Variable
data
release_memory
cuda_places
set_flags
memory_optimize
DistributeTranspilerConfig
is_compiled_with_cuda
ExecutionStrategy
create_lod_tensor
WeightNormParamAttr
cpu_places
DataFeeder
CUDAPlace
get_flags
LoDTensorArray
ParallelExecutor
CompiledProgram
Executor
scope_guard
embedding
Tensor
program_guard
Program
ParamAttr
enable_dygraph
gradients
DataFeedDesc
global_scope
LoDTensor
CPUPlace
save
name_scope
load
disable_imperative
in_dygraph_mode
device_guard
load_op_library
ComplexVariable
require_version
default_startup_program
one_hot
cuda_pinned_places
DistributeTranspiler
default_main_program
disable_dygraph
create_random_int_lodtensor
enable_imperative
fluid.nets
simple_img_conv_pool
sequence_conv_pool
img_conv_group
scaled_dot_product_attention
glu
fluid.profiler
stop_profiler
profiler
cuda_profiler
start_profiler
reset_profiler
fluid.layers
equal
unfold
StaticRNN
zeros_like
split
sequence_concat
sigmoid_focal_loss
logical_and
asin
polynomial_decay
sequence_reshape
affine_channel
retinanet_target_assign
matmul
DynamicRNN
hsigmoid
unique
randint
Print
natural_exp_decay
space_to_depth
prior_box
size
crop_tensor
polygon_box_transform
where
detection_output
lod_reset
BeamSearchDecoder
floor
logsigmoid
strided_slice
gather_tree
sum
linear_chain_crf
LSTMCell
rank_loss
density_prior_box
im2sequence
row_conv
elementwise_min
dist
cos_sim
dice_loss
temporal_shift
relu6
log1p
cos
less_equal
margin_rank_loss
cross
continuous_value_model
brelu
beam_search
rsqrt
softmax
reduce_any
resize_nearest
bilinear_tensor_product
inverse_time_decay
yolov3_loss
tensor_array_to_tensor
logical_xor
IfElse
image_resize
GreedyEmbeddingHelper
adaptive_pool3d
layer_norm
kron
label_smooth
lod_append
less_than
RNNCell
gather
py_func
shuffle_channel
conv3d_transpose
pad2d
crf_decoding
array_read
softshrink
reorder_lod_tensor_by_rank
sequence_slice
data_norm
swish
dropout
get_tensor_from_selected_rows
logsumexp
trace
py_reader
dynamic_decode
GRUCell
elementwise_add
beam_search_decode
elementwise_pow
unsqueeze
dynamic_gru
log_loss
elementwise_max
interpolate
auc
prroi_pool
soft_relu
Normal
target_assign
eye
edit_distance
exp
shard_index
fc
sequence_enumerate
resize_trilinear
randn
abs
center_loss
reverse
lstm
full_like
chunk_eval
pixel_shuffle
smooth_l1
greater_than
rnn
npair_loss
selu
sequence_pad
flip
psroi_pool
instance_norm
argmax
double_buffer
box_decoder_and_assign
ctc_greedy_decoder
clip
ones_like
conv3d
sequence_pool
unstack
argmin
linspace
ones
randperm
random_crop
adaptive_pool2d
roi_perspective_transform
reduce_all
generate_proposals
teacher_student_sigmoid_loss
cross_entropy
create_parameter
gather_nd
atan
cumsum
cosine_decay
Decoder
stack
distribute_fpn_proposals
square
multiclass_nms
affine_grid
relu
hard_shrink
addcmul
mse_loss
accuracy
logical_not
array_length
hard_swish
iou_similarity
erf
concat
unique_with_counts
squeeze
deformable_conv
sequence_softmax
greater_equal
grid_sampler
fsp_matrix
while_loop
tanh_shrink
sequence_conv
embedding
gelu
roi_align
tanh
sequence_expand_as
cast
box_coder
shape
elementwise_sub
multi_box_head
linear_lr_warmup
elementwise_floordiv
generate_mask_labels
sigmoid_cross_entropy_with_logits
sequence_scatter
ssd_loss
triu
array_write
reduce_max
log_softmax
filter_by_instag
warpctc
mul
anchor_generator
scale
hard_sigmoid
roll
BasicDecoder
sign
inplace_abn
transpose
mean
pow
multiplex
elementwise_div
hash
huber_loss
zeros
argsort
create_global_var
rank
elu
t
sequence_reverse
elementwise_mul
dot
crop
sampled_softmax_with_cross_entropy
scatter_nd
range
sequence_first_step
bmm
pad_constant_like
allclose
bipartite_match
Switch
thresholded_relu
nce
read_file
box_clip
increment
flatten
is_empty
assign
slice
switch_case
diag
create_tensor
retinanet_detection_output
logical_or
softmax_with_cross_entropy
reciprocal
topk
While
has_nan
reshape
roi_pool
resize_bilinear
acos
Uniform
sums
add_position_encoding
sin
sqrt
TrainingHelper
expand_as
rpn_target_assign
bpr_loss
meshgrid
batch_norm
dynamic_lstmp
softplus
conv2d_transpose
diag_embed
reduce_sum
nonzero
sequence_mask
image_resize_short
clip_by_norm
group_norm
dynamic_lstm
similarity_focus
noam_decay
one_hot
kldiv_loss
stanh
isfinite
clamp
create_array
yolo_box
reduce_prod
not_equal
DecodeHelper
sequence_unpad
fill_constant
conv2d
sequence_last_step
mean_iou
locality_aware_nms
lstm_unit
MultivariateNormalDiag
elementwise_mod
gru_unit
lrn
generate_proposal_labels
full
expand
prelu
scatter
index_select
pad
load
create_py_reader_by_data
elementwise_equal
sequence_expand
reduce_mean
arange
exponential_decay
Categorical
round
sampling_id
l2_normalize
sigmoid
data
gaussian_random
scatter_nd_add
autoincreased_step_counter
maxout
ceil
collect_fpn_proposals
SampleEmbeddingHelper
has_inf
merge_selected_rows
uniform_random
case
softsign
reduce_min
piecewise_decay
pool3d
cond
square_error_cost
pool2d
deformable_roi_pooling
addmm
tril
leaky_relu
log
fluid.dataset
QueueDataset
InMemoryDataset
DatasetFactory
dataset
uci_housing
imikolov
wmt16
cifar
imdb
movielens
mnist
wmt14
Conll05
sentiment
fluid.initializer
XavierInitializer
MSRAInitializer
Normal
NormalInitializer
Xavier
Uniform
ConstantInitializer
TruncatedNormalInitializer
NumpyArrayInitializer
Bilinear
TruncatedNormal
UniformInitializer
BilinearInitializer
MSRA
Constant
fluid.unique_name
guard
generate
switch
fluid.optimizer
LarsMomentum
SGDOptimizer
ExponentialMovingAverage
DecayedAdagrad
FtrlOptimizer
AdamOptimizer
AdadeltaOptimizer
LambOptimizer
Adadelta
Adagrad
SGD
LarsMomentumOptimizer
AdagradOptimizer
Dpsgd
DpsgdOptimizer
DGCMomentumOptimizer
RecomputeOptimizer
AdamaxOptimizer
ModelAverage
LookaheadOptimizer
DecayedAdagradOptimizer
Adamax
Adam
RMSPropOptimizer
Ftrl
Momentum
MomentumOptimizer
fluid.regularizer
L2DecayRegularizer
L2Decay
L1Decay
L1DecayRegularizer
Quick Start
Basic Concepts
Variable
Tensor
Operator
Executor
Imperative Programming Tutorial
Programming Guide
LoDTensor
Program
Programming Practice
Single-Node Training
Evaluating Models During Training
Configuring a Simple Network
Saving/Loading Models and Variables, and Incremental Training
Reader