Consider a trivariate probability distribution $P$ on $\mathbb{R}^3$. I have the following questions:
(1) Are there necessary conditions on the cumulative distribution function (CDF) associated with $P$ ensuring that $$ \exists \text{ a random vector $(X_1,X_2)$ such that $(X_1, X_2, X_1-X_2)$ has probability distribution $P$} $$
(2) Are there necessary and sufficient conditions on the CDF associated with $P$ ensuring that $$ \exists \text{ a random vector $(X_1,X_2)$ such that $(X_1, X_2, X_1-X_2)$ has probability distribution $P$} $$
(3) Can the conditions you propose be "approximated" as linear constraints on the CDF?
I'm adding more details to my question, partly inspired by the answers below. Those answers help, but I'm still not satisfied. Please help if you can.
If there exists a random vector $(X_1,X_2)$ such that $(X_1, X_2, X_1-X_2)$ has probability distribution $P$, then $P$ should satisfy: for every $\begin{pmatrix} a_1\\ b_1\\ c_1 \end{pmatrix}\leq \begin{pmatrix} a_2\\ b_2\\ c_2 \end{pmatrix}$
If $a_2\geq b_2+c_2$ $$ \begin{cases} P([a_1,a_2], [b_1, b_2], [c_1, c_2])= P([a_1, b_2+c_2], [b_1, b_2], [c_1, c_2])\\ P([a_2, a_3], [b_1, b_2], [c_1, c_2])= 0 & \forall a_3\geq a_2\\ \end{cases} $$
If $b_1\leq a_1-c_2$ $$ \begin{cases} P([a_1,a_2], [b_1, b_2], [c_1, c_2])= P([a_1,a_2], [a_1-c_2, b_2], [c_1, c_2])\\ P([a_1,a_2], [b_3, b_1], [c_1, c_2])=0 & \forall b_3\leq b_1\\ \end{cases} $$
If $a_1 \leq b_1+c_1$ $$ \begin{cases} P([a_1,a_2], [b_1, b_2], [c_1, c_2])= P([b_1+c_1,a_2],[b_1,b_2],[c_1,c_2])\\ P([a_3,a_1], [b_1, b_2], [c_1, c_2])=0 & \forall a_3 \leq a_1 \end{cases} $$
If $b_2\geq a_2-c_1$ $$ \begin{cases} P([a_1,a_2], [b_1, b_2], [c_1, c_2])= P([a_1,a_2], [b_1, a_2-c_1], [c_1, c_2])\\ P([a_1,a_2], [b_2, b_3], [c_1, c_2])=0 & \forall b_3\geq b_2 \end{cases} $$
If $c_2 \geq a_2-b_1$ $$ \begin{cases} P([a_1,a_2], [b_1, b_2], [c_1, c_2])= P([a_1,a_2], [b_1, b_2], [c_1, a_2-b_1])\\ P([a_1,a_2], [b_1, b_2], [c_2, c_3])=0 & \forall c_3\geq c_2 \end{cases} $$
If $c_1\leq a_1-b_2$ $$ \begin{cases} P([a_1,a_2], [b_1, b_2], [c_1, c_2])= P([a_1,a_2], [b_1, b_2], [a_1-b_2, c_2])\\ P([a_1,a_2], [b_1, b_2], [c_3, c_1])=0 & \forall c_3\leq c_1 \end{cases} $$
All the implications above can be rewritten as linear constraints on the CDF associated with $P$.
However: are these implications also sufficient? If yes, I don't know how to prove it; if not, I don't know how to find a counterexample.
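As a quick numerical sanity check of the first implication, here is a Monte Carlo sketch (my own illustration, not part of the question: the Gaussian choice for $(X_1, X_2)$ is arbitrary and `box_prob` is a helper name I made up). It estimates box probabilities from samples of $(X_1, X_2, X_1-X_2)$ and checks both halves of the first implication for a box with $a_2 \geq b_2 + c_2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: sample (X1, X2), then form (X1, X2, X1 - X2).
n = 200_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
sample = np.column_stack([x1, x2, x1 - x2])  # rows are (X1, X2, X1 - X2)

def box_prob(s, a, b, c):
    """Empirical probability of the box [a1,a2] x [b1,b2] x [c1,c2]."""
    inside = ((a[0] <= s[:, 0]) & (s[:, 0] <= a[1])
              & (b[0] <= s[:, 1]) & (s[:, 1] <= b[1])
              & (c[0] <= s[:, 2]) & (s[:, 2] <= c[1]))
    return inside.mean()

# First implication: with a2 >= b2 + c2, shrinking [a1, a2] to [a1, b2 + c2]
# should leave the box probability unchanged ...
a1, a2, b1, b2, c1, c2 = -1.0, 5.0, -1.0, 1.0, -1.0, 1.0
p_full = box_prob(sample, (a1, a2), (b1, b2), (c1, c2))
p_shrunk = box_prob(sample, (a1, b2 + c2), (b1, b2), (c1, c2))

# ... and the slab [a2, a3] x [b1, b2] x [c1, c2] should have probability 0.
p_above = box_prob(sample, (a2, a2 + 10.0), (b1, b2), (c1, c2))

print(p_full - p_shrunk, p_above)  # both should be (numerically) zero
```

Of course this only probes necessity on one box; it says nothing about sufficiency.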
Edited to fix a bug, and expand. Still only a rough outline...
If you can work with PDFs, then I think the condition in the answer by @AlejandroNasifSalum is necessary and sufficient -- with possible exceptions having zero probability; not sure if you care.
For the rest, I assume you do not care about zero-probability events. If you do care about zero-probability violations, my guess is it will be very hard (impossible?) to exclude such violations using CDFs.
Anyway, let $F(a,b,c) = \Pr(X \le a, Y \le b, Z \le c)$ be the CDF. Using the CDF and varying the $3$ inputs you can carve out "boxes" aligned with the $3$ axes, and any such box not intersecting $S=\{(a,b,c)\in\mathbb R^3 \colon a=b+c\}$ must have probability $0$. So you can easily come up with one Necessary Condition:
[NC1] $\forall b, c: F(\infty,b,c) = F(b+c,b,c)$, because if $X = Y + Z$, then $Y\le b$ and $Z \le c$ force $X \le b+c$, so extending the range of $X$ beyond $b+c$ does not increase the probability. [I will use $\infty$ to denote positive infinity.]
Geometrically, [NC1] basically uses a box $B(b,c)$ defined by $(X>b+c, Y\le b, Z \le c)$ and is saying $P(B(b,c)) = F(\infty,b,c) - F(b+c,b,c)= 0$.
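To make [NC1] concrete, here is a small numerical sketch (my own; the choice of $Y$ and $Z$ is arbitrary and deliberately dependent). It builds an empirical CDF from samples of $(Y+Z, Y, Z)$ and measures the largest gap between $F(\infty,b,c)$ and $F(b+c,b,c)$ over a grid:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: Y and Z arbitrary (deliberately dependent), X = Y + Z.
n = 100_000
y = rng.normal(size=n)
z = 0.5 * y + rng.normal(size=n)
x = y + z

def F(a, b, c):
    """Empirical CDF of (X, Y, Z) at the point (a, b, c)."""
    return np.mean((x <= a) & (y <= b) & (z <= c))

# [NC1]: F(inf, b, c) == F(b+c, b, c). Given Y <= b and Z <= c, X = Y + Z
# can be at most b + c, so both sides count exactly the same sample points.
max_gap = max(abs(F(np.inf, b, c) - F(b + c, b, c))
              for b in np.linspace(-2, 2, 9)
              for c in np.linspace(-2, 2, 9))
print(max_gap)
```

If instead `x` were sampled independently of `y + z`, `max_gap` should come out strictly positive, which is how [NC1] detects the violation.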
The question is whether by using enough boxes of this and similar forms, one can rigorously prove what you want, i.e., $X = Y + Z$ (with prob $1$).
The rest of this answer is speculative / a rough outline. I don't actually have enough rigorous probability/measure theory background for a rigorous proof. Where I have doubt I will write (?) to denote my doubt. Anyway, here are some geometric-inspired arguments.
First of all, observe that [NC1] stipulates $P(B(b,c))=0$ for all $b, c$. Therefore the union of all such boxes also has probability $0$: any point with first coordinate strictly greater than the sum of the other two lies in $B(b,c)$ for some *rational* $b, c$ (pick rationals $b, c$ at or above its $Y$- and $Z$-coordinates with $b+c$ still below its $X$-coordinate), so the union can be taken over countably many $(b,c)$, and countable subadditivity gives $P(\bigcup_{b,c} B(b,c)) = 0$. The union $\bigcup_{b,c} B(b,c)$ is in fact $\{(a,b,c)\in\mathbb R^3 \colon a > b+c\}$, i.e. the open region above the plane $S$.
So my idea is to similarly exclude the open region below the plane $S$. To do this, we will use a box $Q(b,c)$ of the form $(X\le b+c, Y > b, Z > c)$, i.e. the exact "opposite" of the format of $B$. By inclusion-exclusion, we can write its probability in terms of the CDF: $$ P(Q(b,c)) = F(b+c,\infty,\infty) - F(b+c,b,\infty) - F(b+c,\infty,c) + F(b+c,b,c). $$
This directly gives us another Necessary Condition:
[NC2] $\forall b, c: P(Q(b,c)) = 0$.
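Since $P(Q(b,c)) = P(X \le b+c,\, Y > b,\, Z > c)$ can be written via inclusion-exclusion as $F(b+c,\infty,\infty) - F(b+c,b,\infty) - F(b+c,\infty,c) + F(b+c,b,c)$, [NC2] is also checkable numerically. A sketch (mine; the exponential/normal choice for $Y, Z$ is arbitrary) evaluating that combination on a grid:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical example: again X = Y + Z by construction.
n = 100_000
y = rng.exponential(size=n)
z = rng.normal(size=n)
x = y + z

def F(a, b, c):
    """Empirical CDF of (X, Y, Z)."""
    return np.mean((x <= a) & (y <= b) & (z <= c))

def P_Q(b, c):
    """P(X <= b+c, Y > b, Z > c), written via inclusion-exclusion on F."""
    s = b + c
    return (F(s, np.inf, np.inf) - F(s, b, np.inf)
            - F(s, np.inf, c) + F(s, b, c))

# [NC2] says this vanishes for every (b, c); check on a small grid.
worst = max(abs(P_Q(b, c))
            for b in np.linspace(0, 3, 7)
            for c in np.linspace(-2, 2, 7))
print(worst)
```

The four-term combination is exactly the CDF expression above, so `worst` should sit at floating-point-rounding level when $X = Y + Z$ holds.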
I think it is non-controversial that both [NC1] and [NC2] are necessary. Further, I think (?) together they are sufficient, but I am less sure about that, partly because I am less sure that $\bigcup_{b,c} Q(b,c)$ is (?) the open region below the plane $S$. (The same rational-parameter argument as above seems to give $\bigcup_{b,c} Q(b,c) = \{(a,b,c) \colon a < b+c\}$, and the plane $S$ itself is excluded because any point of $Q(b,c)$ satisfies $X \le b+c < Y+Z$.) There may be some subtleties I am missing here with the equal signs...?
Anyway, this is the best I can do. :) If all the (?) above turn out to be OK, then the two conditions together would imply $P(S) = 1$, i.e. $X = Y+Z$ with probability $1$. But it will take a better-trained theoretician than me to verify all of the above... :)