Upper bound in a Bayesian regression setting


Let $y_i = x_i^\top \beta + \epsilon_i$, $i=1,\ldots,n$, where the $\epsilon_i$ are i.i.d. with mean zero and unit variance, i.e., $\epsilon_i \sim P_{\epsilon_i}(0,1)$, $i=1,\ldots,n$; $\beta \sim N(0, (1/\lambda) I)$ is a $p$-dimensional vector, for some fixed $\lambda > 0$; and each $x_i$ is also a $p$-dimensional vector. Let the SVD of $X$ be $US^{1/2}V^\top$, and let $s_i$ denote the $i$-th diagonal element of $S$. Let $R$ be the matrix of the same size as $S$ that is zero everywhere except on the diagonal, where its $i$-th entry is $s_i^{-1/2}$.

I am trying to find an upper bound on $\|y^\top UR\|_2^2$. My first attempt is below:
\begin{align*}
\|y^\top UR\|_2^2 & = \|(X\beta + \epsilon)^\top UR \|_2^2 \\
& = \| \beta^\top (US^{1/2}V^\top)^\top UR + \epsilon^\top UR\|_2^2 \\
& = \| \beta^\top V + \epsilon^\top UR\|_2^2 \\
& \leq 2 \left( \| \beta^\top V\|_2^2 + \|\epsilon^\top UR\|_2^2 \right) \\
& \leq 2 \left( \|\beta\|_2^2 + \|\epsilon^\top UR\|_2^2 \right),
\end{align*}
where the third line uses $U^\top U = I$ and $S^{1/2}R = I$, the first inequality is $\|a+b\|^2 \leq 2(\|a\|^2 + \|b\|^2)$, and the last line uses the orthogonality of $V$, so that $\|\beta^\top V\|_2 = \|\beta\|_2$.

However, I am trying to find a tighter bound that removes $\epsilon$. How can I use expectations or concentration inequalities here? Also, is it possible to use the trace of the covariance of $\epsilon$?
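For what it's worth, the deterministic inequality in my attempt can be sanity-checked numerically. This is a minimal sketch; the Gaussian noise and the values $n=50$, $p=5$, $\lambda=2$ are illustrative assumptions, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 5, 2.0  # illustrative choices

# Simulate the model: beta ~ N(0, (1/lam) I), eps i.i.d. mean 0, variance 1.
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(lam)
eps = rng.standard_normal(n)
y = X @ beta + eps

# SVD in the question's convention X = U S^{1/2} V^T, so the diagonal of
# S^{1/2} holds the singular values sv, and s_i = sv_i^2.
U, sv, Vt = np.linalg.svd(X, full_matrices=False)
R = np.diag(1.0 / sv)  # R_ii = s_i^{-1/2} = 1/sv_i

lhs = np.linalg.norm(y @ U @ R) ** 2
rhs = 2 * (np.linalg.norm(beta) ** 2 + np.linalg.norm(eps @ U @ R) ** 2)
print(lhs <= rhs)  # the bound holds deterministically

# Relevant to the trace question: with Cov(eps) = I,
# E||eps^T U R||^2 = tr(R^T U^T U R) = sum_i 1/s_i.
expected_noise_term = np.sum(1.0 / sv**2)
```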