LMU Summer Semester 2018 Machine Learning Tutorial Materials
a)
ADALINE gradient-descent-based learning rule (Batch Gradient Descent):
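The equation appears to have been lost in extraction; the following is a sketch of the standard ADALINE batch formulation (η denotes the learning rate and the superscript indexes training samples, both notational assumptions here). ADALINE minimizes the sum-of-squared-errors cost over the whole training set, and each update uses the gradient over all samples:

```latex
% Sum-of-squared-errors cost over the full training set:
\[
J(\mathbf{w}) = \frac{1}{2}\sum_{i}\bigl(y^{(i)} - \mathbf{w}^\top \mathbf{x}^{(i)}\bigr)^2
\]
% Batch gradient descent: one weight update per full pass over the data:
\[
\mathbf{w} \leftarrow \mathbf{w} + \eta \sum_{i}\bigl(y^{(i)} - \mathbf{w}^\top \mathbf{x}^{(i)}\bigr)\,\mathbf{x}^{(i)}
\]
```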
Perceptron learning rule:
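The rule itself seems to be missing; a sketch of the standard perceptron update (same notation as above, with η an assumed learning rate). Note that unlike ADALINE, the error term uses the thresholded prediction, not the raw linear activation:

```latex
% Perceptron update: driven by the thresholded (class-label) output.
\[
\mathbf{w} \leftarrow \mathbf{w} + \eta\,\bigl(y^{(i)} - \hat{y}^{(i)}\bigr)\,\mathbf{x}^{(i)},
\qquad
\hat{y}^{(i)} = \operatorname{sign}\bigl(\mathbf{w}^\top \mathbf{x}^{(i)}\bigr)
\]
```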
b)
Sample-based rule for ADALINE (Stochastic Gradient Descent or Delta Rule):
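The formula appears to have been lost; a sketch of the standard delta rule, which applies the same gradient step as the batch version but to one sample at a time (η again an assumed learning rate):

```latex
% Stochastic (per-sample) update: same gradient as batch GD,
% but computed from a single training example.
\[
\mathbf{w} \leftarrow \mathbf{w} + \eta\,\bigl(y^{(i)} - \mathbf{w}^\top \mathbf{x}^{(i)}\bigr)\,\mathbf{x}^{(i)}
\]
```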
c)
With SGD the model can be learned on the fly: it is updated sample by sample, so there is no need to recompute the update over the whole training set. This makes it especially well suited to large datasets and streaming (online) data.
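The on-the-fly behavior can be sketched as follows (a minimal illustration, not the tutorial's own code; the function name and learning rate `eta` are assumptions):

```python
import numpy as np

def adaline_sgd_step(w, x, y, eta=0.01):
    """Delta-rule update from a single sample (x, y)."""
    error = y - np.dot(w, x)      # continuous error (linear activation)
    return w + eta * error * x    # gradient step on this sample only

# Online learning: each incoming sample updates the model immediately,
# so the full dataset never has to be reprocessed.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])    # hypothetical target weights
w = np.zeros(2)
for _ in range(2000):
    x = rng.normal(size=2)
    y = np.dot(w_true, x)         # noiseless linear target for illustration
    w = adaline_sgd_step(w, x, y, eta=0.05)
print(w)
```

On this noiseless linear stream the weights converge toward `w_true` without ever storing the past samples, which is exactly why SGD scales to large datasets.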
d)
Striking difference: the objective function.
Be aware of the distinction between the error / loss / cost function and the objective function.
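To make the contrast concrete (a hedged sketch of the usual formulation): the perceptron update is driven by the thresholded output, so no smooth cost is being minimized, whereas ADALINE optimizes an explicit differentiable objective:

```latex
\[
\text{Perceptron:}\quad \hat{y} = \operatorname{sign}(\mathbf{w}^\top \mathbf{x})
\quad\text{(update driven by misclassifications; no differentiable cost)}
\]
\[
\text{ADALINE:}\quad J(\mathbf{w}) = \frac{1}{2}\sum_{i}\bigl(y^{(i)} - \mathbf{w}^\top \mathbf{x}^{(i)}\bigr)^2
\quad\text{(explicit differentiable objective, enabling gradient descent)}
\]
```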
a)
b)
c)
a)
b)
c)