Training: Model, Loss and Optimization
Validation, Testing, and Deployment
Supervised, Semi-Supervised, and Unsupervised Learning
Optional: Inductive Bias (Maybe Next Year)
From Perceptron to Multi-Layer Perceptron
Neural Network Is a Layered Approximator
Neural Network Can Fold Paper
Neural Network Consists of Logic Gates
Neural Network as Template Matching
Code - Assign Parameters to Neural Network
Code - Fit a Function Using an MLP
Gradient Calculation and Gradient Descent for General Functions
Gradient-Based Algorithm (Old)
Read - Finite Difference Gradient Approximation (needs refinement)
Loss Function: Bridge Between Neural Network and Optimization
Example: Gradient Descent for Logistic Regression
Convexity and Neural Network Loss Surface
Batch and Stochastic Gradient Descent
Hand - Optimize a Neural Network
Read - Inversion of Multiplication
Read - Jacobian Matrix and Weight Gradient Matrix (Next Year)
Computational Graph and Chain Rule
Gradient Calculation using Backpropagation
Subgradient for Non-Differentiable Functions
Retain Graph and Checkpoint Scheme (Next Year)
(Deprecated) Modularized Functions
Read - Reverse Automatic Differentiation: A Tutorial with NumPy Implementation