I have a simple recurrence relation that relates a sequence of random variables $X_i$ to another sequence $Y_i$ using a constant $k$:
\begin{equation} Y_i\left(1+k\right) = kY_{i-1} + X_i \quad \mbox{or} \quad Y_i = \left(\frac{k}{k+1}\right)Y_{i-1} + \left(\frac{1}{k+1}\right)X_i \end{equation}
Assuming that the $X_i$ are independent and identically distributed, and making minimal assumptions about the distribution of the random variable $X$, I want to know how to relate $\mu_X$ to $\mu_Y$, $\sigma_X^2$ to $\sigma_Y^2$, $\mathrm{skew}\left(X\right)$ to $\mathrm{skew}\left(Y\right)$, and $\mathrm{Kurt}\left(X\right)$ to $\mathrm{Kurt}\left(Y\right)$.
I have found a reference that suggests taking simple expectations of the recurrence relationship will allow such relationships to be found, and then proceeds to give the relationship between the means, variances and skewnesses. It is relatively straightforward to show that:
\begin{equation} \left(1+k\right)\mathbf{E}\left[Y\right] = k\mathbf{E}\left[Y\right] + \mathbf{E}\left[X\right] \quad \mbox{so} \quad \mathbf{E}\left[Y\right] = \mathbf{E}\left[X\right], \quad \mu_Y = \mu_X \end{equation}
and similarly for the variance: \begin{equation} \left(1+k\right)^2\mathbf{Var}\left[Y\right] = k^2\mathbf{Var}\left[Y\right] + \mathbf{Var}\left[X\right] \quad \mbox{so} \quad \left(1+2k\right)\mathbf{Var}\left[Y\right] = \mathbf{Var}\left[X\right], \quad \sigma_Y^2 = \left(\frac{1}{1+2k}\right)\sigma_X^2 \end{equation}
But I am not able to see how to derive the expression for the relationship between $\mathrm{skew}\left(X\right)$ and $\mathrm{skew}\left(Y\right)$. The reference (a published paper) suggests this is:
\begin{equation} \mathbf{skew}\left[Y\right] = \left(\frac{\left(1+2k\right)^{3/2}}{3k^2+3k+1}\right)\mathbf{skew}\left[X\right] \end{equation}
but how is this derived? I want to know this because I also want to be able to derive an expression for the kurtosis.
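As a sanity check on the claimed relations, here is a quick Monte Carlo sketch (not from the paper; the exponential innovations are chosen only because they are skewed, with $\mu_X = \sigma_X^2 = 1$ and $\mathrm{skew}(X) = 2$, and a burn-in is discarded so that $Y$ is near its stationary distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 2.0
a, b = k / (k + 1), 1 / (k + 1)

# Simulate Y_i = a*Y_{i-1} + b*X_i with skewed (exponential) innovations,
# discarding a burn-in so Y is close to stationary.
n, burn = 1_000_000, 10_000
x = rng.exponential(scale=1.0, size=n + burn)  # mean 1, var 1, skew 2
y = np.empty(n + burn)
y[0] = x[0]
for i in range(1, n + burn):
    y[i] = a * y[i - 1] + b * x[i]
y = y[burn:]

def skew(z):
    z = z - z.mean()
    return (z**3).mean() / z.std()**3

print(y.mean())                                        # ≈ mu_X = 1
print(y.var() * (1 + 2 * k))                           # ≈ sigma_X^2 = 1
print(skew(y) * (3 * k**2 + 3 * k + 1) / (1 + 2 * k)**1.5)  # ≈ skew(X) = 2
```

The three printed quantities invert the claimed mean, variance, and skewness relations, so each should recover the corresponding moment of $X$ up to Monte Carlo error.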
If $Y_i$ is a sequence, you cannot assume that all the $Y_i$ have the same distribution, even if you assume that all the $X_i$ do. In particular, you cannot simply write ${\bf E}[Y_i]={\bf E}[Y]$ as you do.
I assume the recurrence starts with some $Y_0$ that is not given by the recursion (because we don't have $Y_{-1}$). Then we have $$ Y_1 = \frac{k}{k+1} Y_0 + \frac{1}{k+1} X_1 $$ $$ Y_2 = \frac{k}{k+1} Y_1 + \frac{1}{k+1} X_2 = \frac{k^2}{(k+1)^2} Y_0 + \frac{k}{(k+1)^2} X_1 + \frac{1}{k+1} X_2 $$ $$ \dots $$ $$ Y_n = \frac{k^n}{(k+1)^n} Y_0 + \sum_{i=0}^{n-1} \frac{k^{i}}{(k+1)^{i+1}} X_{n-i} $$ Assuming that all $X_i$ have the same distribution, we then have \begin{align} {\bf E}[Y_n] &= \frac{k^{n}}{(k+1)^{n}} {\bf E}[Y_0] +\sum_{i=0}^{n-1} \frac{k^{i}}{(k+1)^{i+1}} {\bf E}[X] = \\ &= \frac{k^n}{(k+1)^n} {\bf E}[Y_0] +\sum_{i=0}^{n-1} \left(\frac{k}{k+1}\right)^i \frac{{\bf E}[X]}{k+1} = \\ &= \frac{k^n}{(k+1)^n} {\bf E}[Y_0] +\frac{1-\left(\frac{k}{k+1}\right)^{n}}{1-\frac{k}{k+1}} \frac{{\bf E}[X]}{k+1} = \\ &= \frac{k^n}{(k+1)^n} {\bf E}[Y_0] +\left(1-\left(\frac{k}{k+1}\right)^{n}\right) {\bf E}[X] \end{align} If you assume $ {\bf E}[Y_0] = {\bf E}[X]$ then you indeed obtain ${\bf E}[Y_n] ={\bf E}[X] $, but otherwise it may vary. We can also notice that $$ \lim_{n\to\infty} {\bf E}[Y_n] = {\bf E}[X] $$ regardless of the starting condition.
Similarly with the variance: assuming that the $X_i$ are independent of each other and of $Y_0$, it is possible to show that $X_i$ and $Y_{i-1}$ are independent, and we have $$ {\bf Var}[Y_n] = \frac{k^2}{(k+1)^2}{\bf Var}[Y_{n-1}] + \frac{1}{(k+1)^2}{\bf Var}[X]$$ or, after solving the recurrence, \begin{align} {\bf Var}[Y_n] &= \frac{k^{2n}}{(k+1)^{2n}} {\bf Var}[Y_0] +\sum_{i=0}^{n-1} \frac{k^{2i}}{(k+1)^{2i+2}} {\bf Var}[X] = \dots = \\ &= \frac{k^{2n}}{(k+1)^{2n}} {\bf Var}[Y_0] +\left(1-\left(\frac{k}{k+1}\right)^{2n}\right) \frac{{\bf Var}[X]}{1+2k} \end{align} And again, if $ {\bf Var}[Y_0] = \frac{{\bf Var}[X]}{1+2k} $ (the stationary value), or in the limit $n\to\infty$, we get $$ \sigma_{Y_n}^2 = \frac{\sigma_X^2}{1+2k}$$
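These closed forms can be verified exactly by iterating the one-step moment recurrences in rational arithmetic. A minimal sketch (the moment values for $X$ and $Y_0$ are arbitrary illustrative choices, not from the original problem):

```python
from fractions import Fraction

# Iterate E[Y_i] = a E[Y_{i-1}] + b E[X] and
# Var[Y_i] = a^2 Var[Y_{i-1}] + b^2 Var[X],
# then compare with the closed forms. Values are illustrative.
k = Fraction(2)
a, b = k / (k + 1), Fraction(1) / (k + 1)
mu_X, var_X = Fraction(3), Fraction(5)
mu0, var0 = Fraction(7), Fraction(11)   # moments of Y_0
mu, var = mu0, var0
n = 20
for _ in range(n):
    mu = a * mu + b * mu_X
    var = a**2 * var + b**2 * var_X

r = k / (k + 1)
mu_closed = r**n * mu0 + (1 - r**n) * mu_X
var_closed = r**(2 * n) * var0 + (1 - r**(2 * n)) * var_X / (1 + 2 * k)
print(mu == mu_closed, var == var_closed)   # True True
```

Because `Fraction` arithmetic is exact, agreement here is an identity check rather than a numerical approximation.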
The skewness (not yet the kurtosis; the kurtosis would be the next step) is defined as $$ {\bf skew}[Y] = \frac{{\bf E}[(Y-\mu_Y)^3]}{\sigma_Y^3}$$ Using the fact that \begin{align} {\bf E}[(Y_n-\mu_{Y_n})^3] &= {\bf E}\left[\left(\frac{k}{k+1}Y_{n-1}+\frac{1}{k+1}X_n-\frac{k}{k+1}\mu_{Y_{n-1}}-\frac{1}{k+1}\mu_X\right)^3\right] = \\ &= {\bf E}\left[\frac{k^3}{(k+1)^3}(Y_{n-1}-\mu_{Y_{n-1}})^3+\frac{3k^2}{(k+1)^3}(Y_{n-1}-\mu_{Y_{n-1}})^2(X_n-\mu_X)+\frac{3k}{(k+1)^3}(Y_{n-1}-\mu_{Y_{n-1}})(X_n-\mu_X)^2+\frac{1}{(k+1)^3}(X_n-\mu_X)^3\right] = \\ &= \frac{k^3}{(k+1)^3}{\bf E}[(Y_{n-1}-\mu_{Y_{n-1}})^3]+\frac{3k^2}{(k+1)^3}{\bf E}[(Y_{n-1}-\mu_{Y_{n-1}})^2]{\bf E}[X_n-\mu_X]+\frac{3k}{(k+1)^3}{\bf E}[Y_{n-1}-\mu_{Y_{n-1}}]{\bf E}[(X_n-\mu_X)^2]+\frac{1}{(k+1)^3}{\bf E}[(X_n-\mu_X)^3] = \\ &= \frac{k^3}{(k+1)^3}{\bf E}[(Y_{n-1}-\mu_{Y_{n-1}})^3]+\frac{1}{(k+1)^3}{\bf E}[(X_n-\mu_X)^3] \end{align} (the two cross terms vanish because ${\bf E}[X_n-\mu_X] = {\bf E}[Y_{n-1}-\mu_{Y_{n-1}}] = 0$), and using the same method as before, we get \begin{align} {\bf E}[(Y_n-\mu_{Y_n})^3] &= \frac{k^{3n}}{(k+1)^{3n}} {\bf E}[(Y_0-\mu_{Y_0})^3] +\left(1-\left(\frac{k}{k+1}\right)^{3n}\right) \frac{{\bf E}[(X-\mu_{X})^3]}{1+3k+3k^2} \end{align} If we assume $ {\bf E}[(Y_0-\mu_{Y_0})^3] = \frac{{\bf E}[(X-\mu_{X})^3]}{1+3k+3k^2}$ (the stationary value), or in the limit $n\to\infty$, we get $$ {\bf E}[(Y_n-\mu_{Y_n})^3] = \frac{{\bf E}[(X-\mu_{X})^3]}{1+3k+3k^2} $$ so $$ {\bf skew}[Y_n] = \frac{1}{\sigma_{Y_n}^3}\frac{{\bf skew}[X]\,\sigma_X^3}{1+3k+3k^2}$$ Using the previous result on $\sigma_{Y_n}$, we get $$ {\bf skew}[Y_n] = (1+2k)^{\frac32}\frac{{\bf skew}[X]}{1+3k+3k^2}$$
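The same geometric-series argument for the third central moment can be checked exactly with rational arithmetic. A minimal sketch (the moment values are arbitrary illustrative choices):

```python
from fractions import Fraction

# Iterate mu3[Y_n] = (k/(k+1))^3 mu3[Y_{n-1}] + (1/(k+1))^3 mu3[X]
# and compare with the closed form; values are illustrative.
k = Fraction(2)
a3, b3 = (k / (k + 1))**3, (Fraction(1) / (k + 1))**3
m3_X = Fraction(9)   # third central moment of X
m3_0 = Fraction(4)   # third central moment of Y_0
m3 = m3_0
n = 15
for _ in range(n):
    m3 = a3 * m3 + b3 * m3_X

r = k / (k + 1)
m3_closed = r**(3 * n) * m3_0 + (1 - r**(3 * n)) * m3_X / (3 * k**2 + 3 * k + 1)
print(m3 == m3_closed)   # True

# Starting at the stationary value keeps the third moment fixed.
m3_star = m3_X / (3 * k**2 + 3 * k + 1)
print(a3 * m3_star + b3 * m3_X == m3_star)   # True
```

The second check confirms that ${\bf E}[(X-\mu_X)^3]/(1+3k+3k^2)$ is the fixed point of the recurrence, i.e. the correct stationary starting condition.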
Note that these results hold only with the right (stationary) starting conditions for the recurrence, or in the limit $n\to\infty$.