Two events $A$, $B$ are conditionally independent given an event $C$ when: $P(A,B|C) = P(A|C)\cdot P(B|C)$
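An equivalent formulation (assuming $P(B, C) > 0$; this restatement is standard but not part of the definition above) is often easier to work with:

$$P(A \mid B, C) = P(A \mid C),$$

i.e. once $C$ is known, additionally learning $B$ does not change the probability of $A$.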
Bayes' theorem for two events $E, F$: $P(E \mid F)= \frac{P(F|E) \cdot P(E)}{P(F)}$
I deduced that for events $X, Y, Z$ whose joint probability factors as $$P(X,Y,Z) = P(X)\cdot P(Y|Z)\cdot P(Z|X)$$ (the corresponding Bayesian network would be the chain $X \longrightarrow Z \longrightarrow Y$), $X$ and $Y$ are conditionally independent given $Z$: $$P(X,Y|Z)= \frac{P(X,Y,Z)}{P(Z)} = \frac{P(X)\cdot P(Z|X)}{P(Z)}\cdot P(Y|Z) = P(X|Z)\cdot P(Y|Z)$$ where the last step uses $P(X)\cdot P(Z|X) = P(X,Z)$, so $\frac{P(X,Z)}{P(Z)} = P(X|Z)$.
Meaning: if we can observe the event $Z$, both $X$ and $Y$ depend only on $Z$ and are therefore conditionally independent of each other. But when I examined the case where $Z$ is not given ($Z$ may occur, with probability $P(Z)$, or not, with probability $P(\overline{Z})$), I don't know how to proceed to show the dependence between $X$ and $Y$:
$$P(X,Y) = P(X,Y,Z)+P(X,Y,\overline{Z}) = P(X)\cdot P(Y|Z)\cdot P(Z|X) + P(X)\cdot P(Y|\overline{Z})\cdot P(\overline{Z}|X) = \;?$$
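As a numerical sanity check (the concrete numbers below are my own arbitrary choice, not part of the question), one can build a joint distribution of exactly this form and compare $P(X,Y)$ with $P(X)\cdot P(Y)$:

```python
# Joint distribution for the chain X -> Z -> Y:
# P(X,Y,Z) = P(X) * P(Z|X) * P(Y|Z), with arbitrarily chosen numbers.
p_x = 0.5
p_z_given_x = {True: 0.9, False: 0.1}  # P(Z | X), P(Z | not X)
p_y_given_z = {True: 0.8, False: 0.2}  # P(Y | Z), P(Y | not Z)

def p_xyz(x, y, z):
    """Joint probability of the outcome (X=x, Y=y, Z=z)."""
    px = p_x if x else 1 - p_x
    pz = p_z_given_x[x] if z else 1 - p_z_given_x[x]
    py = p_y_given_z[z] if y else 1 - p_y_given_z[z]
    return px * pz * py

bools = (True, False)
# Marginals obtained by summing the joint over the remaining events.
p_X = sum(p_xyz(True, y, z) for y in bools for z in bools)
p_Y = sum(p_xyz(x, True, z) for x in bools for z in bools)
p_XY = sum(p_xyz(True, True, z) for z in bools)

print(p_XY)        # ~0.37
print(p_X * p_Y)   # ~0.25
```

For these numbers $P(X,Y) = 0.37$ while $P(X)\cdot P(Y) = 0.25$, so $X$ and $Y$ are not independent here, though a single example of course does not settle the question for all parameter choices.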
After reading Did's comment, I made some changes to the question:
I changed the notation from random variables to events, since I seem to have abused it and created confusion; I believe it also makes things simpler. So, for an event $A$, $P(A)$ is the probability that $A$ occurs, $P(A, B) = P(A \cap B)$ is the probability that both $A$ and $B$ occur, etc.
I watched a video explaining the different dependency relationships (video, in German). It says that for the structure shown above, if $Z$ is not given, we can deduce from $P(X,Y) = P(X) \cdot P(Y|X)$ that $Y$ is dependent on $X$. That reasoning is wrong: $P(X,Y) = P(X) \cdot P(Y|X)$ is just the chain rule and holds for any two events, so it cannot establish dependence.
My question in the first place was: how can I prove that $X$ and $Y$ are dependent if we are not able to observe $Z$?
Hint: $\Pr(Y|Z)=\Pr(Y|XZ)$ (this is exactly the conditional independence derived above). Consequently, $\Pr(Y|Z)\Pr(Z|X)=\Pr(Y|XZ)\Pr(Z|X)=\Pr(YZ|X)$.
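In my reading, applying the hint (together with its analogue for $\overline{Z}$, i.e. $P(Y|\overline{Z})\cdot P(\overline{Z}|X)=P(Y\overline{Z}|X)$, which the decomposition in the question likewise assumes) completes the stalled computation:

$$P(X,Y) = P(X)\big(P(Y|Z)\cdot P(Z|X) + P(Y|\overline{Z})\cdot P(\overline{Z}|X)\big) = P(X)\big(P(YZ|X) + P(Y\overline{Z}|X)\big) = P(X)\cdot P(Y|X)$$

So the sum collapses to the chain rule, which holds for any two events: the network structure alone cannot prove dependence. Generically $X$ and $Y$ are dependent, but they become independent in special cases, for instance when $P(Y|Z) = P(Y|\overline{Z})$, because then $P(Y|X) = P(Y|Z)\big(P(Z|X)+P(\overline{Z}|X)\big) = P(Y)$.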