Intro

What is Machine Learning?

Fundamental Concepts in Machine Learning

Training: Model, Loss, and Optimization

Validation, Testing, and Deployment

Supervised Learning

Unsupervised Learning

Semi-Supervised Learning

Building Intuitions about Neural Networks

From Perceptron to Multi-Layer Perceptron

Neural Network: A Layered Approximator

Neural Network as a Folding Process

Neural Network as Logic Gates

Neural Network as Template Matching

Why Deep Neural Networks?

Neural Network Optimization (Basic)

A Brief Intro to Numerical Optimization

Gradient-based Algorithms

Optimization Modeling for Neural Networks

Loss Surface and Convexity

Neural Network Optimization (Advanced)

Batch and Stochastic Gradient Descent

Gradient Calculation using Backpropagation

Subgradients for Non-Differentiable Functions

Neural Network Optimization (Improvements)

First-Moment Improvement - Momentum

Second-Moment Improvement - AdaGrad

Learning Rate Schedules

Loss Function (Basic: Maximum Likelihood Estimation)

Loss and Probability

Intro to Maximum Likelihood Estimation (MLE)

Maximum Likelihood Estimation for Classification

Maximum Likelihood Estimation for Regression

Cross-Entropy for Multi-class Classification

Loss Function (Advanced: Bayesian Estimation)

Review of Probability and Bayes' Theorem

Maximum A Posteriori (MAP) Estimation

Bayesian Estimation Framework and Regularization Techniques

Advanced Topics in Statistical Modeling (Next Year)

Generative and Discriminative Models

Generalized Linear Models

Miscellaneous Loss Function Designs

Case Study: Solving Soft WCSS Loss with Gradient Descent

Integrating Specificity and Precision into the Loss Function

Probability Distribution Comparison

Sequence-with-Sequence Comparison

Linear Models and Fully Connected Layers

Matrix Multiplication and Linear Models

Why Linear Layers?

Perceptron and Fully Connected Layer