Conditional probability for discrete random variable


I was looking at the following paper. For $Z\in\{-1,+1\}^n$ they define the distribution

$$P[Z=z] = c \exp(\sum_{1\le i < j\le n}A_{ij}z_iz_j + \sum_{i=1}^n\theta_i z_i) $$

for a symmetric $A\in\mathbb{R}^{n\times n}$ and a normalization constant $c$. Now on the next page, page 4, they state Fact 1: for $Z_i\in\{-1,1\}$ and $Z_{-i}\in\{-1,1\}^{n-1}$, the conditional probability is

$$P[Z_i=1\mid Z_{-i}=x] =\frac{\exp\left(\sum_{j\neq i}A_{ij}x_j+\theta_i\right)}{\exp\left(\sum_{j\neq i}A_{ij}x_j+\theta_i\right) + \exp\left(-\sum_{j\neq i}A_{ij}x_j-\theta_i\right)}$$

I have two questions about this. The first is a technical one: by definition we have

$$P[Z_i=1|Z_{-i}=x]=\frac{P[Z_i=1,Z_{-i}=x]}{P[Z_{-i}=x]}$$

How should the probability $P[Z_{-i}=x]$ be understood? The probability mass function is defined on $\{-1,+1\}^n$, but the input vector here has one dimension less.

Second question: how do they derive the conditional distribution above?
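For what it's worth, here is a brute-force numerical check I tried for small $n$ (all names and the test values of $A$, $\theta$ are my own, not from the paper): it computes $P[Z_{-i}=x]$ by summing the joint mass over both values of $Z_i$, forms the conditional from the definition, and compares against the closed form in Fact 1.

```python
import itertools
import math

import numpy as np

rng = np.random.default_rng(0)
n = 4
# hypothetical test values: random symmetric A with zero diagonal, random theta
A = rng.normal(size=(n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)
theta = rng.normal(size=n)

def unnormalized(z):
    # exp( sum_{i<j} A_ij z_i z_j + sum_i theta_i z_i ), i.e. P[Z=z] / c
    s = sum(A[i, j] * z[i] * z[j] for i in range(n) for j in range(i + 1, n))
    return math.exp(s + sum(theta[i] * z[i] for i in range(n)))

# normalization constant c by enumerating all of {-1,+1}^n
states = list(itertools.product([-1, 1], repeat=n))
c = 1.0 / sum(unnormalized(z) for z in states)

i = 2
x = (1, -1, 1)  # a value of Z_{-i} in {-1,+1}^{n-1}

def full(zi):
    # embed x into an n-vector, placing zi at position i
    return x[:i] + (zi,) + x[i:]

# P[Z_{-i}=x] as a marginal: sum the joint over both values of Z_i
p_marg = c * (unnormalized(full(1)) + unnormalized(full(-1)))
# conditional from the definition P[Z_i=1, Z_{-i}=x] / P[Z_{-i}=x]
p_cond = c * unnormalized(full(1)) / p_marg

# closed form from Fact 1: a = sum_{j != i} A_ij x_j + theta_i
a = sum(A[i, j] * full(0)[j] for j in range(n) if j != i) + theta[i]
fact1 = math.exp(a) / (math.exp(a) + math.exp(-a))

print(abs(p_cond - fact1) < 1e-12)
```

The two agree to floating-point precision, which also suggests why the formula holds: in the ratio, every term of the exponent not involving $z_i$ (and the constant $c$) cancels between numerator and denominator.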