Learning Objectives
- Explain the benefits of the empirical risk minimization framework
- Design supervised learning gradient descent algorithms
- Design unsupervised learning gradient descent algorithms
- Design reinforcement learning gradient descent algorithms
Chapter Summary
The objective of Chapter 1 is to introduce the empirical risk minimization framework and show that a large class of commonly used supervised, unsupervised, and reinforcement machine learning algorithms can be naturally represented within it. The method of gradient descent is introduced as a general-purpose method for designing machine learning algorithms that minimize empirical risk functions. Empirical risk functions are constructed for a large class of important supervised learning machines, including linear regression, logistic regression, multilayer perceptrons, deep learning architectures, and recurrent neural network architectures. Empirical risk functions are also constructed for a large class of important unsupervised learning machines, including the nonlinear denoising autoencoder, K-means clustering, latent semantic indexing, general dissimilarity measure clustering algorithms, stochastic neighborhood embedding algorithms, and Markov logic nets. Finally, empirical risk functions are constructed to support the design of value function and policy gradient reinforcement learning algorithms for a large class of linear and nonlinear learning machines.
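As a minimal illustration of the chapter's central recipe, the sketch below minimizes an empirical risk function by gradient descent for the simplest supervised case, linear regression with squared-error loss. The data, learning rate, and iteration count are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def empirical_risk(w, X, y):
    """Mean squared-error empirical risk over the training sample."""
    residuals = X @ w - y
    return np.mean(residuals ** 2)

def risk_gradient(w, X, y):
    """Gradient of the empirical risk with respect to the parameters w."""
    n = X.shape[0]
    return (2.0 / n) * X.T @ (X @ w - y)

def gradient_descent(X, y, lr=0.1, steps=500):
    """Batch gradient descent on the empirical risk function."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * risk_gradient(w, X, y)
    return w

# Hypothetical synthetic data: targets generated by y = 1 + 2*x,
# with a column of ones appended so w[0] plays the role of an intercept.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x

w_hat = gradient_descent(X, y)
```

The same two-step pattern, write down an empirical risk function, then descend its gradient, carries over to the other architectures the chapter surveys; only the risk function and its gradient change.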
The podcast LM101-079: Ch1: How to View Learning as Risk Minimization provides an overview of the main ideas of this chapter, tips to help students read it, and guidance for instructors teaching it.