Question 1: how to convert the following problem: $$\min \quad \| X \|_2$$ into a semidefinite program (SDP): $$\min \quad t$$ $$s.t. \quad -tI \preceq X \preceq tI, \quad t \ge 0$$ where $X$ is a symmetric $n \times n$ matrix?
Question 2: in fact, I'm not even sure whether it is an SDP. I tried to construct an optimization variable:
$Y={\begin{bmatrix}S_1 \\ & S_2 \\ & & t\end{bmatrix}}_{(2n+1)\times(2n+1)}$, where $S_1=tI-X$ and $S_2=X+tI$.
Notice that $S_1,S_2 \succeq 0$ by the constraints above. Now I get $Y \succeq 0$ and can write the problem as: $$\min \quad tr(CY)$$ $$s.t. \quad Y \succeq 0$$
The matrix $C$ can be easily constructed. Now I wonder: is the way I constructed the optimization variable valid? If not, what are the formal methods?
The problem comes from an example in Boyd & Vandenberghe, Convex Optimization, page 174.
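As a numerical sanity check of the claimed equivalence before the derivation below (a minimal sketch assuming only numpy; the variable names are mine): for a fixed symmetric $X$, the smallest feasible $t$ satisfying $-tI \preceq X \preceq tI$ should equal $\| X \|_2$.

```python
import numpy as np

# Build a random symmetric test matrix X.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = (A + A.T) / 2

# t*I - X >= 0 requires t >= lambda_max(X);
# X + t*I >= 0 requires t >= -lambda_min(X).
eigs = np.linalg.eigvalsh(X)
t_star = max(eigs.max(), -eigs.min())

# The smallest feasible t coincides with the spectral norm of X.
print(t_star, np.linalg.norm(X, 2))
```

The two printed values agree, which is exactly the equivalence the SDP formulation relies on.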

The Idea
The $ \left\| \cdot \right\|_{2} $ matrix norm is given by:
$$ \left\| X \right\|_{2} = \sqrt{ \lambda_{max} \left( {X}^{T} X \right) } $$
where $ \lambda_{max} $ is the maximum eigenvalue of $ {X}^{T} X $.
Hence the constraints look at the extreme conditions where the matrix $ X + t I $ is PSD and $ X - t I $ is NSD (equivalently, $ t I - X $ is PSD).
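The norm definition above can be verified numerically for a general (not necessarily square) matrix; a small sketch assuming only numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))  # works for any rectangular matrix

# ||X||_2 = sqrt(lambda_max(X^T X))
lam_max = np.linalg.eigvalsh(X.T @ X).max()

# numpy's matrix 2-norm computes the same quantity via the SVD.
print(np.sqrt(lam_max), np.linalg.norm(X, 2))
```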
Proof of $ {L}_{2} $ Matrix Norm
$$ \begin{align*} \left\| A \right\|_{2} & = \max_{x \neq 0} \frac{ \left\| A x \right\|_{2} }{ \left\| x \right\|_{2} } = \max_{x \neq 0} \frac{ \sqrt{ {x}^{T} {A}^{T} A x } }{ \left\| x \right\|_{2} } & \text{} \\ & = \max_{x \neq 0} \frac{ \sqrt{ {x}^{T} Q \Lambda {Q}^{T} x } }{ \left\| x \right\|_{2} } & \text{Where $ {A}^{T} A = Q \Lambda {Q}^{T} $ by Spectral Decomposition} \\ & = \max_{x \neq 0} \frac{ \sqrt{ \left( {Q}^{T} x \right)^{T} \Lambda \left( {Q}^{T} x \right) } }{ \left\| {Q}^{T} x \right\|_{2} } & \text{Since $ Q $ is Unitary matrix} \\ & = \max_{y \neq 0} \frac{ \sqrt{ {y}^{T} \Lambda y } }{ \left\| y \right\|_{2} } & \text{Where $ y = {Q}^{T} x $} \\ & = \max_{y \neq 0} \sqrt{ \frac{ \sum {\lambda}_{i} {y}_{i}^{2} }{ \sum {y}_{i}^{2} } } \leq \sqrt{ \frac{ \lambda_{max} \sum {y}_{i}^{2} }{ \sum {y}_{i}^{2} } } = \sqrt{{\lambda}_{max}} \end{align*} $$
The bound is achieved by choosing $ {y}_{j} = 1 $ for the index $ j $ with $ {\lambda}_{j} = {\lambda}_{max} $ and $ {y}_{j} = 0 $ otherwise.
Pay attention that $ \lambda_{max} $ is always non-negative (as are all eigenvalues of $ {A}^{T} A $, which is a PSD matrix). For a symmetric $ A $, $ \left\| A \right\|_{2} $ is therefore the maximum of the absolute values of the eigenvalues of $ A $.
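The derivation above can be checked numerically: no vector exceeds the bound $ \sqrt{\lambda_{max}} $, and the eigenvector corresponding to $ \lambda_{max} $ attains it (a sketch assuming only numpy):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

# Spectral decomposition A^T A = Q Lambda Q^T.
lam, Q = np.linalg.eigh(A.T @ A)
bound = np.sqrt(lam.max())

# Random vectors never exceed the bound ...
for _ in range(100):
    x = rng.standard_normal(4)
    assert np.linalg.norm(A @ x) / np.linalg.norm(x) <= bound + 1e-9

# ... and the eigenvector for lambda_max attains it
# (this corresponds to y having a single 1 at the maximal index).
x_star = Q[:, np.argmax(lam)]
print(np.linalg.norm(A @ x_star) / np.linalg.norm(x_star))  # equals the bound
```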
The Meaning of the Operations
All that needs to be shown is that adding a scaled identity matrix shifts the eigenvalues of the matrix.
Since $ X $ is a symmetric matrix, by Spectral Decomposition one could write:
$$ X \pm t I = Q \Lambda {Q}^{T} \pm t I = Q \left( \Lambda \pm t I \right) {Q}^{T} $$
As can be seen above, adding the term $ t I $ shifts the eigenvalues of $ X $.
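This shift can be observed directly (a minimal sketch assuming only numpy):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
X = (B + B.T) / 2  # symmetric, as in the text
t = 1.5

# eig(X + t*I) = eig(X) + t, since Q(Lambda + tI)Q^T shares X's eigenvectors.
lhs = np.linalg.eigvalsh(X + t * np.eye(4))
rhs = np.linalg.eigvalsh(X) + t
print(np.allclose(lhs, rhs))  # True
```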
Shifting a Set of Numbers
Given a set of numbers $ \left\{ {\lambda}_{1}, {\lambda}_{2}, \ldots, {\lambda}_{n} \right\} $, how could one find the maximum of their absolute values?
Assuming Non Negative Numbers
Assuming all numbers are non-negative, one could ask: what is the minimum number $ t $ such that $ {\lambda}_{j} - t \leq 0, \, \forall j $? This yields $ t = {\lambda}_{max} $. As used above, $ t $ is a non-negative number.
Assuming Non Positive Numbers
Assuming all numbers are non-positive, one could ask: what is the minimum number $ t $ such that $ {\lambda}_{j} + t \geq 0, \, \forall j $? This yields $ t = -{\lambda}_{min} = \max_{j} \left| {\lambda}_{j} \right| $. As used above, $ t $ is a non-negative number.
Summary
Hence, for any set of numbers, the smallest non-negative $ t $ which satisfies both conditions above is the maximum absolute value of the set. Applied to the eigenvalues of the symmetric matrix $ X $, this $ t $ is exactly $ \left\| X \right\|_{2} $, which is what the SDP minimizes.
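The combined rule can be sketched as follows (assuming only numpy; the example values are mine):

```python
import numpy as np

# For a mixed-sign set, the smallest non-negative t with
# lam_j - t <= 0 and lam_j + t >= 0 for all j is max_j |lam_j|.
lam = np.array([-3.0, -0.5, 1.0, 2.5])

t_upper = lam.max()   # needed so that lam_j - t <= 0 for all j
t_lower = -lam.min()  # needed so that lam_j + t >= 0 for all j
t = max(t_upper, t_lower, 0.0)

print(t, np.abs(lam).max())  # 3.0 3.0
```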