Chainer

Chainer Tutorial
    Introduction to Chainer
        Core Concept
        Forward/Backward Computation
        Parameterized functions
        FunctionSet
        Optimizer
        Example: Multi-layer Perceptron on MNIST
    Recurrent Nets and their Computational Graph
        Recurrent Nets
        Truncate the Graph by Unchaining
        Network Evaluation without Storing the Computation History
    Using GPU(s) in Chainer
        Relationship between Chainer and PyCUDA
        Basics of GPUArray in Chainer
        Run Neural Networks on a Single GPU
        Model-parallel Computation on Multiple GPUs
        Data-parallel Computation on Multiple GPUs
    Define your own function
        Non-parameterized Functions
        Write an Elementwise Kernel Function
        Parameterized Functions
        Testing Function

Chainer Reference Manual
    Core functionalities
        Variable
        Function
        FunctionSet
        Optimizer
        Utilities
    CUDA utilities
        Initialization and global states
        Devices and contexts
        GPUArray allocation and copy
        Random number generators
        Kernel definition utilities
        Interprocess communication on GPU
    Gradient checking utilities
    Standard Function implementations
        Learnable connections
        Array manipulation functions
        Activation functions
        Pooling functions
        Normalization functions
        Loss, evaluation and aggregation
        Reusable subnetwork of complex architectures
    Optimizers
Chainer – A flexible framework of neural networks

This is the Chainer documentation.
Chainer Tutorial
Chainer Reference Manual
Indices and tables
Index
Module Index
Search Page