A paper I'm reading (https://www.mdpi.com/1099-4300/22/9/999/pdf , page 3) uses the notation $Z \leftarrow X \leftrightarrow Y$. I can't find much about Markov chain notation anywhere, so I'm trying to understand it. In the context of the paper, $X$, $Y$, and $Z$ are random variables representing the input to a neural model, the ground-truth output that the model needs to predict, and the value of an intermediate representation, respectively.
I understand the concept of a Markov chain: given the current value of a random variable, we have all the information we need to predict the next value, and knowing any earlier values gives us no extra information.
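Just to pin down the property I mean, the Markov property as I understand it is:

$$ P(X_{n+1} \mid X_n, X_{n-1}, \dots, X_1) = P(X_{n+1} \mid X_n) $$

i.e. conditioning on the full history is the same as conditioning on the current value alone.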
So I can imagine that the notation means that if we know $X$, we have enough information to determine $Z$. And perhaps the unidirectional arrow means that knowing $Z$ is not enough to determine $X$, maybe because multiple values of $X$ map to the same value of $Z$? But then what about $X \leftrightarrow Y$? In the paper, $Y$ is often a multiclass classification label, $Y \in \{1, \dots, K\}$, where $K$ is a small-ish finite integer, e.g. $K = 10$. In that case multiple values of $X$ will certainly map to the same value of $Y$, so knowing $Y$ is not enough to determine $X$ either, which seems inconsistent with that arrow being bidirectional?
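To check my "multiple values of $X$ map to the same $Z$" intuition, here's a tiny sketch I wrote (the map `f` is made up purely for illustration, it's not from the paper):

```python
# Toy sanity check: if Z = f(X) is many-to-one, then X determines Z,
# but Z does not determine X (several x's share each z).
from collections import defaultdict

def f(x):
    # hypothetical many-to-one "representation" map
    return x % 3

preimages = defaultdict(list)
for x in range(9):
    preimages[f(x)].append(x)

for z, pre in sorted(preimages.items()):
    print(z, pre)
# each z has several x's mapping to it, so knowing z leaves x ambiguous
```

This prints `0 [0, 3, 6]`, `1 [1, 4, 7]`, `2 [2, 5, 8]`, which matches my intuition that the map is invertible in one direction only, so I'd expect a one-directional arrow there, not a bidirectional one.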
I've tried googling around and searching on Math Stack Exchange, but I can't seem to find any resources on what this notation means. I've found some related questions, such as markov chain notation $X - Y - Z$ , but none that explain this directional and bidirectional arrow notation, or how it relates to the meanings of the variables used in this paper.