In optimization, it is common to see the so-called $\operatorname{diag}$ function.
Given a vector $x \in \mathbb{R}^n$, $\operatorname{diag}(x)$ is the $n \times n$ diagonal matrix with the components of $x$ on the diagonal.
For example, it appears in optimization problems that involve an inverse operation.
The reason for using $\operatorname{diag}$ is that it is available on several platforms such as MATLAB, and people generally understand what the function is supposed to do.
Is there a more linear-algebraic, step-by-step way of converting a vector $x \in \mathbb{R}^n$ into a diagonal matrix with its components on the diagonal, without having to define a function that directly performs the task?
I.e., given $x$, can we find a series of functions/steps $f_2 \circ f_1(x)$ which gives us the same matrix as $\operatorname{diag}(x)$?
Using tensor or Kronecker product notation, if $e_i = \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix}$ denotes the $i$-th standard basis vector of $\mathbb{R}^{n \times 1}$ and $e^i = (0, \dots, 1, \dots, 0)$ the $i$-th standard basis vector of $\mathbb{R}^{1 \times n}$, then we can represent $\operatorname{diag}(x_1, \dots, x_n)$ as
$$ \operatorname{diag}(x_1, \dots, x_n) = \sum_{i=1}^n x_i (e^i \otimes e_i). $$
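As a sanity check, this sum can be evaluated numerically, e.g. with NumPy (the variable names here are just illustrative):

```python
import numpy as np

n = 4
x = np.array([2.0, 3.0, 5.0, 7.0])

# Rows of the identity give the standard basis; reshape each into
# the 1 x n row vector e^i and the n x 1 column vector e_i.
I = np.eye(n)
D = sum(x[i] * np.kron(I[i].reshape(1, n), I[i].reshape(n, 1))
        for i in range(n))

# Each Kronecker product e^i (x) e_i is the matrix with a single 1
# at position (i, i), so the weighted sum reproduces diag(x).
assert np.array_equal(D, np.diag(x))
```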
This is of course the same as writing
$$ \operatorname{diag}(x_1, \dots, x_n) = \sum_{i=1}^n x_i e_{ii} $$
where $(e_{ij})_{i,j=1}^n$ is the basis of $M_n(\mathbb{R}) = \mathbb{R}^{n \times n}$ consisting of the matrices $e_{ij}$ which have $1$ in the $i$-th row and $j$-th column and $0$ in all other places.