This is the definition of a state-space model given in *An Introduction to Sequential Monte Carlo* by Chopin and Papaspiliopoulos.
A state-space model is a time series model consisting of two discrete-time processes $\{X_t\}:=(X_t)_{t\ge 0}$ and $\{Y_t\}:=(Y_t)_{t\ge 0}$, taking values in spaces $\mathcal{X}$ and $\mathcal{Y}$ respectively. A simplified specification of the model is given by a parameter vector $\theta \in \Theta$ and a set of densities that define the joint density of the processes via the factorisation $$p_0^\theta (x_0)\prod_{t=0}^T f_t^\theta (y_t|x_t) \prod_{t=1}^T p_t^\theta(x_t|x_{t-1}).$$
This describes a generative probabilistic model: $X_0$ is drawn from the initial density $p_0^\theta(x_0)$; each subsequent $X_t$ is drawn conditionally on the previously drawn $X_{t-1}=x_{t-1}$ according to the transition kernel $p_t^\theta(x_t|x_{t-1})$; and each $Y_t$ is drawn conditionally on the most recent $X_t=x_t$ according to the density $f_t^\theta(y_t|x_t)$.
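To make the generative mechanism concrete, here is a minimal simulation sketch. The specific densities (a linear-Gaussian instance, with $\theta = (\rho, \sigma_x, \sigma_y)$) are my own illustrative choice, not from the book; the sampling order mirrors the description above.

```python
import numpy as np

def simulate_ssm(T, theta, rng=None):
    """Simulate (x_{0:T}, y_{0:T}) from a hypothetical linear-Gaussian
    state-space model:
        p_0(x_0)         = N(0, sigma_x^2)
        p_t(x_t|x_{t-1}) = N(rho * x_{t-1}, sigma_x^2)
        f_t(y_t|x_t)     = N(x_t, sigma_y^2)
    """
    rho, sigma_x, sigma_y = theta
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(T + 1)
    y = np.empty(T + 1)
    x[0] = rng.normal(0.0, sigma_x)            # X_0 ~ p_0
    y[0] = rng.normal(x[0], sigma_y)           # Y_0 | X_0 = x_0 ~ f_0
    for t in range(1, T + 1):
        x[t] = rng.normal(rho * x[t - 1], sigma_x)  # X_t | X_{t-1} ~ p_t
        y[t] = rng.normal(x[t], sigma_y)            # Y_t | X_t ~ f_t
    return x, y
```

Each draw in the loop corresponds to exactly one factor of the product above, which is what the factorisation is meant to encode.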
I can see informally that the factorisation corresponds to the generative model stated above, but I cannot prove rigorously that expanding the factorisation yields the joint density $p^\theta(x_{0:T},y_{0:T})$ of the two processes.
How can we show this? I would greatly appreciate some help.