Representer theorem of L2 regularized logistic regression


Let $\left\{\left(x_i, y_i\right)\right\}_{i=1}^n$ be a set of training data, where $x_i \in \mathbb{R}^d$ for all $i$, and $y_i \in\{-1,1\}$. Consider the $\ell_2$ regularized logistic regression model with parameter $\theta$, where we want to find an $f_\theta$ that minimizes the loss function: $$ \sum_{i=1}^n \ln \left(1+\exp \left(-y_i f_\theta\left(x_i\right)\right)\right)+\lambda\left\|f_\theta\right\|_{\mathcal{H}}^2 $$ where $f_\theta(x)$ is of the form $\theta^T x$, and $\left\|f_\theta\right\|_{\mathcal{H}}^2=\theta^T \theta$.

Let $\mathcal{H}$ be the reproducing kernel Hilbert space that corresponds to the above $\ell_2$ regularized logistic regression. Using the representer theorem, I'm trying to work out the form of the optimal predictive function. I know I need to identify the appropriate reproducing kernel Hilbert space, but I'm unsure how it should be determined from the loss function and the form of $f_\theta$.
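To make the representer theorem concrete, here is a minimal numerical sketch (my own setup, not from the question): it fits the $\ell_2$-regularized logistic loss above by plain gradient descent and then checks that the minimizer $\theta$ lies in $\operatorname{span}\{x_1,\dots,x_n\}$, i.e. $\theta = \sum_i \alpha_i x_i$, which is exactly what the representer theorem predicts for the linear kernel $k(x, x') = x^T x'$. The data, step size, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

# Assumed toy setup: n < d so the span of the x_i is a proper subspace of R^d,
# which makes the representer property a nontrivial check.
rng = np.random.default_rng(0)
n, d, lam = 5, 10, 0.1
X = rng.normal(size=(n, d))            # rows are the training points x_i
y = rng.choice([-1.0, 1.0], size=n)    # labels in {-1, +1}

# Gradient descent on  sum_i ln(1 + exp(-y_i theta^T x_i)) + lam * theta^T theta.
theta = np.zeros(d)
for _ in range(20000):
    margins = y * (X @ theta)          # m_i = y_i theta^T x_i
    # d/dtheta ln(1 + exp(-m_i)) = -y_i x_i / (1 + exp(m_i))
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).sum(axis=0) + 2 * lam * theta
    theta -= 0.01 * grad

# Find the best coefficients alpha with X^T alpha ~= theta; if theta is in the
# span of the x_i, the residual of this least-squares problem is ~0.
alpha, *_ = np.linalg.lstsq(X.T, theta, rcond=None)
residual = np.linalg.norm(X.T @ alpha - theta)
print(residual)
```

The residual comes out numerically zero: setting the gradient to zero gives $2\lambda\theta = \sum_i \frac{y_i}{1+\exp(m_i)}\, x_i$, so the optimum is automatically a linear combination of the training points, with the $\alpha_i$ playing the role of the representer-theorem coefficients.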