From Matrix Variate Distributions by Gupta & Nagar.
1) definition of vectorization for a generic matrix (page 9)
Let $X$ be a $m\times n$ matrix and let $X_1$, $\dots$, $X_n$ be the columns of $X$ (which are column vectors in $\mathbb{R}^m$). Define the vectorization of $X$ as the column vector in $\mathbb{R}^{mn}$ \begin{equation}\operatorname{vec}[X]\triangleq \begin{bmatrix}X_1 \\ \vdots\\ X_n\end{bmatrix}\end{equation}
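To make this concrete, $\operatorname{vec}$ is just column-major (Fortran-order) flattening; a quick NumPy sketch (variable names are mine):

```python
import numpy as np

# Example 2x3 matrix: vec[X] stacks the columns of X on top of each other.
X = np.array([[1, 2, 3],
              [4, 5, 6]])

# Column-major (Fortran-order) flattening implements vec.
vec_X = X.flatten(order="F").reshape(-1, 1)  # column vector in R^(mn)

# vec[X] = [X[:,0]; X[:,1]; X[:,2]] = [1, 4, 2, 5, 3, 6]'
```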
2) definition of vectorization for a symmetric matrix - aka half vectorization (page 10)
Let $X$ be a $p\times p$ matrix and denote the typical element by $X_{ij}$. Define the vectorization of $X$ as the column vector in $\mathbb{R}^{p(p+1)/2}$ formed by the elements on and above the diagonal, taken columnwise, i.e. \begin{equation}\operatorname{vecp}[X]=\begin{bmatrix}X_{11} & X_{12} & X_{22} & \cdots & X_{1p} & \cdots & X_{pp}\end{bmatrix}'=\operatorname{vecp}[X']\end{equation} where $X'$ is the transpose of $X$.
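A small NumPy sketch of this half-vectorization (the function name `vecp` is mine, chosen to match the book's notation):

```python
import numpy as np

def vecp(X):
    """Half-vectorization: elements on and above the diagonal,
    taken column by column, as in Gupta & Nagar's vecp."""
    p = X.shape[0]
    return np.concatenate([X[: j + 1, j] for j in range(p)]).reshape(-1, 1)

X = np.array([[1., 2.],
              [2., 4.]])   # symmetric, so vecp[X] = vecp[X']
# vecp[X] = [X11, X12, X22]' = [1, 2, 4]'
```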
3) definition of transition matrix (page 11)
The matrix $B_p$ of order $p^2\times\frac{1}{2}p(p+1)$ with typical element \begin{equation}(B_p)_{ij,gh}=\frac{1}{2}(\delta_{ig}\delta_{jh}+\delta_{ih}\delta_{jg}) \qquad i\leq p,\ j\leq p,\ g\leq h\leq p\end{equation} where $\delta_{rs}$ is the Kronecker delta, is called the transition matrix.
4) theorem (page 11)
If $X$ is a symmetric matrix of order $p\times p$, then \begin{equation}\begin{aligned} \operatorname{vecp}[X]&= B_p' \operatorname{vec}[X]\\ \operatorname{vec}[X]&= (B_p^+)' \operatorname{vecp}[X]\\ \end{aligned}\end{equation} where $B_p^+\triangleq (B_p' B_p)^{-1}B_p'$.
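As a side note, for any matrix with full column rank the expression $(B'B)^{-1}B'$ is the Moore-Penrose pseudoinverse (a left inverse), which is easy to confirm numerically; a generic sketch with a random tall matrix (not the book's $B_p$):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 3))       # tall matrix, full column rank a.s.

# Left pseudoinverse as in the theorem: B+ = (B'B)^{-1} B'
B_plus = np.linalg.inv(B.T @ B) @ B.T

# It agrees with the Moore-Penrose pseudoinverse, and B+ B = I.
assert np.allclose(B_plus, np.linalg.pinv(B))
assert np.allclose(B_plus @ B, np.eye(3))
```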
5) observation (page 11)
$B_p^+$ is of order $\frac{1}{2}p(p+1)\times p^2$ with typical element \begin{equation}(B_p^+)_{gh,ij}=(2-\delta_{gh})(B_p)_{ij,gh}=\begin{cases}1 & \text{if } (i,j)=(g,h) \text{ or } (i,j)=(h,g)\\ 0 & \text{otherwise}\end{cases}\end{equation} for $i\leq p$, $j\leq p$, $g\leq h\leq p$.
my question
The definition of the transition matrix is not clear to me. First, I am not sure I have understood the meaning of the symbol $\delta_{rs}$ correctly; I believe it means \begin{equation}\delta_{rs}\triangleq \begin{cases}1 & \text{if } r=s \\ 0 & \text{otherwise} \end{cases}\end{equation} Moreover, the notation used for the indexes of the typical element $(B_p)_{ij,gh}$ seems erroneous to me, and I will give a simple example in a moment. I have reported the theorem and the observation because they may provide useful information for understanding the definition of the transition matrix. Finally, I have found something similar to my problem on this page on Wikipedia, where the "elimination matrix" seems to play the role of $B_p'$, while the "duplication matrix" seems to play the role of $(B_p^+)'$.
example
Consider the simple case $p=2$. Thanks to the theorem we can do some reverse-engineering to discover what $B_2$ is. The vectorizations in this case are \begin{equation}\operatorname{vec}[X]=\begin{bmatrix}X_{11} & X_{21} & X_{12} & X_{22}\end{bmatrix}'\end{equation} \begin{equation}\operatorname{vecp}[X]=\begin{bmatrix}X_{11} & X_{12} & X_{22}\end{bmatrix}'\end{equation} so the matrix that maps $\operatorname{vec}[X]$ into $\operatorname{vecp}[X]$ is \begin{equation}B_2'=\begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\end{equation} and consequently the transition matrix is \begin{equation}B_2=\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}\end{equation} This is the result I expect from the definition of the transition matrix. Assuming the previous interpretation of $\delta_{rs}$, and reading the index pair $ij$ as the product of $i$ and $j$ (and likewise $gh$ as the product of $g$ and $h$), I compute the elements of $B_2$ according to the definition of $B_p$: \begin{equation}\begin{aligned} i=1, j=1, g=1, h=1:& \qquad B_{11} = \frac{\delta_{11}\delta_{11}+\delta_{11}\delta_{11}}{2}=\frac{1\cdot1+1\cdot1}{2}=1\\ i=1, j=1, g=1, h=2:& \qquad B_{12} = \frac{\delta_{11}\delta_{12}+\delta_{12}\delta_{11}}{2}=\frac{1\cdot0+0\cdot1}{2}=0\\ i=1, j=1, g=2, h=2:& \qquad B_{14} = \frac{\delta_{12}\delta_{12}+\delta_{12}\delta_{12}}{2}=\frac{0\cdot0+0\cdot0}{2}=0\\ i=1, j=2, g=1, h=1:& \qquad B_{21} = \frac{\delta_{11}\delta_{21}+\delta_{11}\delta_{21}}{2}=\frac{1\cdot0+1\cdot0}{2}=0\\ i=1, j=2, g=1, h=2:& \qquad B_{22} = \frac{\delta_{11}\delta_{22}+\delta_{12}\delta_{21}}{2}=\frac{1\cdot1+0\cdot0}{2}=\frac{1}{2}\\ i=1, j=2, g=2, h=2:& \qquad B_{24} = \frac{\delta_{12}\delta_{22}+\delta_{12}\delta_{22}}{2}=\frac{0\cdot1+0\cdot1}{2}=0\\ i=2, j=1, g=1, h=1:& \qquad B_{21} = \frac{\delta_{21}\delta_{11}+\delta_{21}\delta_{11}}{2}=\frac{0\cdot1+0\cdot1}{2}=0\\ i=2, j=1, g=1, h=2:& \qquad B_{22} = \frac{\delta_{21}\delta_{12}+\delta_{22}\delta_{11}}{2}=\frac{0\cdot0+1\cdot1}{2}=\frac{1}{2}\\ i=2, j=1, g=2, h=2:& \qquad B_{24} = \frac{\delta_{22}\delta_{12}+\delta_{22}\delta_{12}}{2}=\frac{1\cdot0+1\cdot0}{2}=0\\ i=2, j=2, g=1, h=1:& \qquad B_{41} = \frac{\delta_{21}\delta_{21}+\delta_{21}\delta_{21}}{2}=\frac{0\cdot0+0\cdot0}{2}=0\\ i=2, j=2, g=1, h=2:& \qquad B_{42} = \frac{\delta_{21}\delta_{22}+\delta_{22}\delta_{21}}{2}=\frac{0\cdot1+1\cdot0}{2}=0\\ i=2, j=2, g=2, h=2:& \qquad B_{44} = \frac{\delta_{22}\delta_{22}+\delta_{22}\delta_{22}}{2}=\frac{1\cdot1+1\cdot1}{2}=1\\ \end{aligned}\end{equation} so I conclude that \begin{equation}B_2 = \begin{bmatrix} 1 & 0 & ? & 0\\ 0 & 1/2 & ? & 0 \\ ? & ? & ? & ? \\ 0 & 0 & ? & 1 \end{bmatrix}\end{equation} which is clearly wrong.
update
I've found this paper, which gives some insight into the notation.
First, $(B_p)_{ij,gh}$ is a "4-index" notation, not a "2-index" notation where one has to take the products $ij$ and $gh$ to reduce the four indexes to two.
Second, equation 4 says that in the special case of the direct product (which I believe is the Kronecker product), this "4-index" notation takes the meaning \begin{equation}[A\otimes B]_{ij, gh}=a_{jh}b_{ig}\end{equation} where, I believe, $a_{jh}$ is the element in row $j$ and column $h$ of $A$ (and similarly for $b_{ig}$).
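This convention can be checked numerically: if the row pair $(i,j)$ is identified with the vec-position $i+(j-1)m$ and the column pair $(g,h)$ with $g+(h-1)m$, the formula matches NumPy's `np.kron`. A sketch with 0-based indices (the index mapping is my reading of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3))
K = np.kron(A, B)     # 6x6
m = B.shape[0]        # block size

# 0-based check of [A (x) B]_{ij,gh} = a_{jh} * b_{ig}, reading the pair
# (i, j) as row position j*m + i and (g, h) as column position h*m + g.
for i in range(3):
    for j in range(2):
        for g in range(3):
            for h in range(2):
                assert np.isclose(K[j * m + i, h * m + g], A[j, h] * B[i, g])
```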
Third, in this paper it is written, using the notation of Gupta & Nagar, that the following relations hold for $B_p$: \begin{equation}\begin{aligned}(B_p)_{ii,ii}&=1\\ (B_p)_{ij,ij}&=1/2 \qquad i\neq j\\ (B_p)_{ij,ji}&=1/2 \qquad i\neq j\\ (B_p)_{ij,gh}&=0 \qquad ij\neq gh \text{ and } ij\neq hg\\ \end{aligned}\end{equation}
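Putting this together, $B_p$ can be built by mapping the row pair $(i,j)$ to its vec-position and the column pair $(g,h)$, $g\leq h$, to its vecp-position; both equations of the theorem then check out numerically. A sketch (the index mappings are my reading of the notation, with 1-based formulas translated to 0-based code):

```python
import numpy as np

def transition_matrix(p):
    """B_p with (B_p)_{ij,gh} = (delta_ig*delta_jh + delta_ih*delta_jg)/2,
    rows indexed by the vec-position of (i, j), columns by the
    vecp-position of (g, h), g <= h (upper triangle, columnwise)."""
    cols = [(g, h) for h in range(p) for g in range(h + 1)]  # vecp order
    B = np.zeros((p * p, p * (p + 1) // 2))
    for j in range(p):            # vec stacks columns: position j*p + i
        for i in range(p):
            for k, (g, h) in enumerate(cols):
                B[j * p + i, k] = ((i == g) * (j == h)
                                   + (i == h) * (j == g)) / 2
    return B

p = 3
B = transition_matrix(p)
B_plus = np.linalg.inv(B.T @ B) @ B.T   # B_p+ from the theorem

# Random symmetric X; vec and vecp as defined above.
rng = np.random.default_rng(2)
A = rng.standard_normal((p, p))
X = A + A.T
vec_X = X.flatten(order="F")
vecp_X = np.concatenate([X[: j + 1, j] for j in range(p)])

assert np.allclose(B.T @ vec_X, vecp_X)        # vecp[X] = B_p' vec[X]
assert np.allclose(B_plus.T @ vecp_X, vec_X)   # vec[X] = (B_p+)' vecp[X]
```

Note that for $p=2$ this gives $B_2$ with entries $1/2$ in the off-diagonal column, not the 0/1 matrix I expected from the reverse-engineering, which is consistent with the non-uniqueness discussed in the answer below.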
The transition matrix (a.k.a. the elimination matrix) is not unique. Consider $X$ a symmetric $2 \times 2$ matrix, i.e. $x_{2,1} = x_{1,2}$; then
$$ \begin{bmatrix} x_{1,1} \\ x_{2,1} \\ x_{2,2} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1-c & c & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} x_{1,1} \\ x_{2,1} \\ x_{1,2} \\ x_{2,2} \end{bmatrix}. $$ Intuition: for every row of the elimination matrix that is supposed to pick out an off-diagonal element of $X$, you can take $1-c$ times the entry in the lower triangular part and $c$ times the entry in the upper triangular part.
For $c = 0$ you get the canonical elimination matrix. For $c=\frac{1}{2}$ you get the Moore-Penrose inverse of the duplication matrix.
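A numerical check of this family (the helper name `elim` is mine): every member maps $\operatorname{vec}[X]$ of a symmetric $X$ to the same half-vectorization, and $c=\frac{1}{2}$ coincides with the Moore-Penrose inverse of the duplication matrix $D_2$:

```python
import numpy as np

def elim(c):
    """One member of the family of elimination matrices for p = 2."""
    return np.array([[1.0, 0.0,     0.0, 0.0],
                     [0.0, 1.0 - c, c,   0.0],
                     [0.0, 0.0,     0.0, 1.0]])

X = np.array([[1.0, 2.0],
              [2.0, 5.0]])                 # symmetric
vec_X = X.flatten(order="F")               # [x11, x21, x12, x22]

# Every member of the family eliminates the duplicated entry.
for c in (0.0, 0.25, 0.5, 1.0):
    assert np.allclose(elim(c) @ vec_X, [1.0, 2.0, 5.0])

# c = 1/2 is the Moore-Penrose inverse of the duplication matrix D_2,
# which maps [x11, x21, x22]' back to vec[X].
D = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(elim(0.5), np.linalg.pinv(D))
```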