This module was also a warp-speed introduction to optimization, convex functions, and some linear algebra
in addition to kernel methods. You are done with this module when (short sketches for each item follow the list):
You can write out Lagrangians for constrained optimization
You understand the primal (min-max) and dual (max-min) formulations obtained from the Lagrangian
You can write out the support vector classification (SVC) formulations, in both the linearly separable and the non-separable case
You understand the view of SVC as hinge-loss minimization with an \(\ell_2\) penalty
You understand quadratic forms and positive definite matrices
You understand positive definite kernels and kernel lifting
You understand how kernel lifting introduces non-linearity into algorithms that depend on the data only through dot products
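
As a reference for the first item, here is the generic construction in standard (assumed) notation: objective \(f\), inequality constraints \(g_i\), equality constraints \(h_j\), and multipliers \(\lambda, \nu\). Given the problem

\[
\min_{x} \; f(x) \quad \text{s.t.} \quad g_i(x) \le 0,\ i = 1, \dots, m, \qquad h_j(x) = 0,\ j = 1, \dots, p,
\]

the Lagrangian folds the constraints into the objective:

\[
\mathcal{L}(x, \lambda, \nu) = f(x) + \sum_{i=1}^{m} \lambda_i \, g_i(x) + \sum_{j=1}^{p} \nu_j \, h_j(x), \qquad \lambda_i \ge 0.
\]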
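The primal and dual formulations then differ only in the order of optimization, and the max-min value never exceeds the min-max value (weak duality):

\[
p^\star = \min_{x} \, \max_{\lambda \ge 0,\, \nu} \, \mathcal{L}(x, \lambda, \nu), \qquad d^\star = \max_{\lambda \ge 0,\, \nu} \, \min_{x} \, \mathcal{L}(x, \lambda, \nu), \qquad d^\star \le p^\star.
\]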
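The two SVC primals, in the usual notation (training pairs \((x_i, y_i)\) with \(y_i \in \{-1, +1\}\); the trade-off constant \(C > 0\) is an assumed symbol): a hard margin for linearly separable data, and slack variables \(\xi_i\) otherwise.

\[
\text{(separable)} \quad \min_{w, b} \ \tfrac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y_i (w^\top x_i + b) \ge 1 \ \ \forall i
\]

\[
\text{(non-separable)} \quad \min_{w, b, \xi} \ \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i \quad \text{s.t.} \quad y_i (w^\top x_i + b) \ge 1 - \xi_i, \ \ \xi_i \ge 0.
\]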
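The hinge-loss view comes from eliminating the slacks, since at the optimum \(\xi_i = \max(0,\, 1 - y_i(w^\top x_i + b))\); dividing the objective by \(C\) and writing \(\lambda = 1/(2C)\) gives the equivalent penalized form:

\[
\min_{w, b} \ \sum_{i=1}^{n} \max\bigl(0,\ 1 - y_i (w^\top x_i + b)\bigr) + \lambda \|w\|_2^2.
\]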
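For the linear-algebra items, a minimal NumPy sketch (the function name and example matrix are my own choices): it evaluates the quadratic form \(x^\top A x\) and tests positive definiteness through a Cholesky factorization, which exists exactly when \(A\) is symmetric positive definite.

```python
import numpy as np

def is_positive_definite(A, tol=1e-10):
    """Return True iff A is symmetric and x^T A x > 0 for all x != 0."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T, atol=tol):  # a quadratic form only sees the symmetric part
        return False
    try:
        np.linalg.cholesky(A)              # succeeds iff A is symmetric positive definite
        return True
    except np.linalg.LinAlgError:
        return False

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])                # eigenvalues 1 and 3, so A is PD
x = np.array([1.0, 3.0])
print(x @ A @ x)                           # the quadratic form x^T A x = 14.0 > 0
print(is_positive_definite(A))             # True
```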
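A positive definite kernel is one whose Gram matrix \(K_{ij} = k(x_i, x_j)\) is positive semidefinite on every finite point set; the sketch below checks this numerically for the RBF kernel (the \(\gamma\) value and sample data are arbitrary choices).

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """k(x, y) = exp(-gamma * ||x - y||^2), evaluated pairwise."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))               # any point set works
K = rbf_kernel(X, X)                       # Gram matrix K_ij = k(x_i, x_j)
eigvals = np.linalg.eigvalsh(K)            # eigenvalues of the symmetric K
print(eigvals.min() >= -1e-10)             # True: K is PSD up to round-off
```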
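Finally, kernel lifting in action: the perceptron touches its inputs only through dot products, so swapping \(x_i^\top x_j\) for \(k(x_i, x_j)\) yields a non-linear classifier without ever forming the lifted features explicitly. The dual-form kernel perceptron below is a standard illustration, not code from this course.

```python
import numpy as np

def kernel_perceptron(X, y, kernel, epochs=10):
    """Learn mistake counts alpha_i; predict sign(sum_i alpha_i y_i k(x_i, x))."""
    K = kernel(X, X)                       # Gram matrix: all dot products we ever need
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            if np.sign((alpha * y) @ K[:, i]) != y[i]:
                alpha[i] += 1.0            # same mistake-driven update as the linear case
    return alpha

def rbf(A, B, gamma=3.0):
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1))

# XOR labels: not linearly separable in the plane, but separable after the RBF lift.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
alpha = kernel_perceptron(X, y, rbf, epochs=20)
print(np.sign((alpha * y) @ rbf(X, X)))    # matches y on this toy set
```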