Consider the following problem:
There are $k$ socks and $n$ drawers, each of which can hold only one sock (a very small drawer indeed). The drawers are arranged in a row. You place the socks in the drawers uniformly at random, and you ask for the probability law of the number of drawers that contain a sock and whose right neighbor also contains a sock.
More formally, write $[1:n]=[1;n]\cap\mathbb{N}$, and let $\eta$ be a configuration of socks and drawers, meaning $\eta = (\eta_{i})_{i\in [1:n]}\in \Omega=\{0,1\}^{[1:n]}$ where $\eta_{i}=1$ means that there is a sock in drawer $i$. Furthermore, denote by $\mathbb{P}$ the probability law on $\Omega$ such that $\mathbb{P}(\eta)=\mathbb{P}(\sigma)$ for all $\eta,\sigma \in \Omega$.
We are looking for the law of the random variable $$ \sum_{i=1}^{n-1} \eta_i \eta_{i+1} $$ conditioned on $\sum_i \eta_i=k$.
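Before turning to generating functions, the target distribution can be tabulated by brute force. The following Python sketch is my addition (the original verification, given near the end, uses Maple): it enumerates all placements of $k$ socks in $n$ drawers and counts adjacent occupied pairs.

```python
# Brute-force sketch (added for verification, not part of the derivation):
# enumerate all placements of k socks in n drawers and tabulate
# X = sum_{i=1}^{n-1} eta_i eta_{i+1}.
from collections import Counter
from itertools import combinations

def adjacent_pair_distribution(n, k):
    """Counter mapping m to the number of k-subsets of n drawers
    with exactly m adjacent occupied pairs."""
    dist = Counter()
    for occupied in combinations(range(n), k):
        s = set(occupied)
        dist[sum(1 for i in occupied if i + 1 in s)] += 1
    return dist

# Example: n = 5 drawers, k = 3 socks; C(5,3) = 10 configurations in all.
print(dict(sorted(adjacent_pair_distribution(5, 3).items())))
# → {0: 1, 1: 6, 2: 3}
```

Dividing these counts by $\binom{n}{k}$ gives the conditional law we are after.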
Using $w$ to mark empty drawers, $z$ non-empty ones, and $u$ non-empty drawers with a non-empty drawer to the right, we obtain the generating function
$$(1+w+w^2+\cdots) \\ \times \sum_{q\ge 0} (z+z^2u+z^3u^2+\cdots)^q (w+w^2+w^3+\cdots)^q \\ \times (1+z+z^2u+z^3u^2+\cdots).$$
This is
$$\frac{1}{1-w} \sum_{q\ge 0} z^q \frac{1}{(1-uz)^q} w^q \frac{1}{(1-w)^q} \\ \times \left(1+ z\frac{1}{1-uz}\right) \\ = \frac{1}{1-w} \frac{1}{1-wz/(1-w)/(1-uz)} \frac{1+z-uz}{1-uz} \\ = \frac{1+z-uz}{(1-w)(1-uz)-wz}.$$
Observe that when we put $u=1$ and $w=z$ we obtain
$$\frac{1}{(1-z)^2-z^2} = \frac{1}{1-2z}$$
which is a useful sanity check and shows that we have accounted for all strings. Differentiate with respect to $u$ and set $u=1$ to count non-empty neighbors:
$$\left.\left(\frac{1+z-uz}{1-w-uz+uwz-wz}\right)'\right|_{u=1} \\ = \left.-\frac{z}{1-w-uz+uwz-wz} - \frac{1+z-uz}{(1-w-uz+uwz-wz)^2} (wz-z)\right|_{u=1} \\ = -\frac{z}{1-w-z} - \frac{1}{(1-w-z)^2}(wz-z) = \frac{z^2}{(1-w-z)^2}.$$
We thus require
$$[z^k] [w^{n-k}] \frac{z^2}{(1-w-z)^2}$$
where we may assume that $k\ge 2$ since with $k=1$ or $k=0$ the expectation is zero. We get
$$[z^{k-2}] [w^{n-k}] \frac{1}{(1-w-z)^2} \\ = [z^{k-2}] \frac{1}{(1-z)^2} [w^{n-k}] \frac{1}{(1-w/(1-z))^2} \\ = [z^{k-2}] \frac{1}{(1-z)^2} (n-k+1) \frac{1}{(1-z)^{n-k}} \\ = (n-k+1) [z^{k-2}] \frac{1}{(1-z)^{n-k+2}} \\ = (n-k+1) {k-2+n-k+1\choose n-k+1} \\ = (n-k+1) {n-1\choose n-k+1} = (n-1) {n-2\choose n-k}.$$
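As a quick cross-check of this coefficient extraction (my addition, in Python rather than the Maple used below), the total number of adjacent occupied pairs summed over all $\binom{n}{k}$ configurations should equal $(n-1)\binom{n-2}{n-k}$:

```python
# Cross-check (added): the total adjacency count over all C(n,k)
# configurations should equal (n-1) * C(n-2, n-k).
from itertools import combinations
from math import comb

def total_adjacent_pairs(n, k):
    # Sum the adjacency count over every placement of k socks in n drawers.
    return sum(sum(1 for i in c if i + 1 in set(c))
               for c in combinations(range(n), k))

for n in range(2, 9):
    for k in range(2, n + 1):
        assert total_adjacent_pairs(n, k) == (n - 1) * comb(n - 2, n - k)
print("coefficient check passed")
```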
We get for the expectation
$$(n-1) {n\choose k}^{-1} {n-2\choose n-k} = (n-1) \frac{k! (n-k)!}{n!} \frac{(n-2)!}{(k-2)! (n-k)!}$$
which is
$$\bbox[5px,border:2px solid #00A000]{ \frac{1}{n} k(k-1).}$$
Given the simplicity of this result now that it has been computed, an elementary immediate proof suggests itself: by linearity of expectation, each of the $n-1$ adjacent pairs of drawers holds two socks with probability $\binom{n-2}{k-2}\binom{n}{k}^{-1} = k(k-1)/(n(n-1))$.
The interpretation of the question that was used here is documented in the following Maple code.
    with(combinat);
    EN := proc(n, k)
        local choice, res, nb, pos;
        res := 0;
        for choice in choose(n, k) do
            nb := 0;
            for pos to k-1 do
                if choice[pos] + 1 = choice[pos+1] then
                    nb := nb + 1;
                fi;
            od;
            res := res + nb;
        od;
        res;
    end;

    F := (n,k) -> coeftayl(coeftayl(z^2/(1-w-z)^2, w=0, n-k), z=0, k);

    ENX := (n,k) -> EN(n,k)/binomial(n,k);
    FX := (n,k) -> 1/n*k*(k-1);

Addendum. As an additional sanity check observe that
$$[z^k] [w^{n-k}] \left.\frac{1+z-uz}{(1-w)(1-uz)-wz}\right|_{u=1} \\ = [z^k] [w^{n-k}] \frac{1}{(1-w)(1-z)-wz} = [z^k] [w^{n-k}] \frac{1}{1-w-z} \\ = [z^k] \frac{1}{1-z} [w^{n-k}] \frac{1}{1-w/(1-z)} \\ = [z^k] \frac{1}{1-z} \frac{1}{(1-z)^{n-k}} = [z^k] \frac{1}{(1-z)^{n-k+1}} \\ = {k+n-k\choose n-k} = {n\choose n-k} = {n\choose k}$$
which is the correct result.
Remark Nov 9 2016. Brian Scott asks about the value of
$$[u^m] [z^k] [w^{n-k}] \frac{1+z-uz}{(1-w)(1-uz)-wz}.$$
This is
$$[u^m] [z^k] [w^{n-k}] \frac{1+z-uz}{1-uz-w(1+z-uz)} \\ = [u^m] [z^k] \frac{1}{1-uz} [w^{n-k}] \frac{1+z-uz}{1-w(1+z-uz)/(1-uz)} \\ = [u^m] [z^k] \frac{(1+z-uz)^{n-k+1}}{(1-uz)^{n-k+1}} \\ = [u^m] [z^k] \left(1+\frac{z}{1-uz}\right)^{n-k+1}.$$
Extracting coefficients we find
$$\sum_{p=0}^{n-k+1} {n-k+1\choose p} [u^m] [z^k] \frac{z^p}{(1-uz)^p} \\ = \sum_{p=0}^{k} {n-k+1\choose p} [u^m] [z^{k-p}] \frac{1}{(1-uz)^p}.$$
We must have $m=k-p$ or $p=k-m$ and get
$${n-k+1\choose k-m} {m+k-m-1\choose k-m-1} \\ = {n-k+1\choose k-m} {k-1\choose m}.$$
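This closed form for the number of configurations with exactly $m$ adjacent occupied pairs can be verified by direct enumeration; the following Python check is my addition:

```python
# Brute-force check (added) of the closed form
# C(n-k+1, k-m) * C(k-1, m) for the number of configurations
# with exactly m adjacent occupied pairs.
from itertools import combinations
from math import comb

def exact_count(n, k, m):
    # Count k-subsets of {0,...,n-1} having exactly m adjacent pairs.
    return sum(1 for c in combinations(range(n), k)
               if sum(1 for i in c if i + 1 in set(c)) == m)

for n in range(2, 9):
    for k in range(1, n + 1):
        for m in range(k):
            assert exact_count(n, k, m) == comb(n - k + 1, k - m) * comb(k - 1, m)
print("distribution check passed")
```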
We can compute higher moments with these data. We already have $\mathrm{E}[X]$ and obtain for $\mathrm{E}[X(X-1)]$
$$\sum_{m=2}^{k-1} m(m-1) {n-k+1\choose k-m} {k-1\choose m}.$$
This is zero when $k=1$ or $k=2$ so we may assume that $k\ge 3$ in calculating the sum.
$$\sum_{m=2}^{k-1} m(m-1) {n-k+1\choose k-m} \frac{(k-1)(k-2)}{m(m-1)} {k-3\choose m-2} \\ = (k-1)(k-2) \sum_{m=2}^{k-1} {n-k+1\choose k-m} {k-3\choose m-2} \\ = (k-1)(k-2) \sum_{m=1}^{k-2} {n-k+1\choose m} {k-3\choose k-m-2}.$$
We evaluate this using the Egorychev method and put
$${k-3\choose k-m-2} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{k-m-1}} (1+z)^{k-3} \; dz.$$
Observe that this vanishes when $m\ge k-1$, and also when $m=0$, where we get $[z^{k-2}] (1+z)^{k-3}=0$; hence we may extend the range of $m$ from zero to infinity, getting for the sum
$$(k-1)(k-2) \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{k-1}} (1+z)^{k-3} \sum_{m\ge 0} {n-k+1\choose m} z^m \; dz \\ = (k-1)(k-2) \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{k-1}} (1+z)^{n-2} \; dz \\ = (k-1)(k-2) \times {n-2\choose k-2}.$$
We get for the second factorial moment
$$(k-1)(k-2) {n\choose k}^{-1} {n-2\choose k-2} \\ = (k-1)(k-2) \frac{k! (n-k)!}{n!} \frac{(n-2)!}{(n-k)!(k-2)!}.$$
This is
$$\frac{1}{n}\frac{1}{n-1} k(k-1)^2(k-2).$$
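The value of the second factorial moment can again be checked by enumeration (my addition, using exact rational arithmetic to avoid floating-point noise):

```python
# Check (added) of the second factorial moment
# E[X(X-1)] = k (k-1)^2 (k-2) / (n (n-1)) by direct enumeration.
from fractions import Fraction
from itertools import combinations
from math import comb

def factorial_moment2(n, k):
    total = sum(m * (m - 1)
                for c in combinations(range(n), k)
                for m in [sum(1 for i in c if i + 1 in set(c))])
    return Fraction(total, comb(n, k))

for n in range(2, 9):
    for k in range(n + 1):
        assert factorial_moment2(n, k) == Fraction(k * (k-1)**2 * (k-2),
                                                   n * (n - 1))
print("second factorial moment check passed")
```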
We have for the variance
$$\mathrm{Var}[X] = \mathrm{E}[X^2]-\mathrm{E}[X]^2 = \mathrm{E}[X(X-1)] + \mathrm{E}[X] -\mathrm{E}[X]^2$$
so that in the present case we obtain
$$\bbox[5px,border:2px solid #00A000]{ \frac{1}{n}\frac{1}{n-1} k(k-1)^2(k-2) + \frac{1}{n} k(k-1) - \frac{1}{n^2} k^2(k-1)^2.}$$
This simplifies to
$$\frac{k(k-1)(n-k)(n+1-k)}{n^2(n-1)}.$$
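As a final sanity check (my addition), the simplified variance formula agrees with direct enumeration over all configurations:

```python
# Final check (added): the variance formula
# k (k-1) (n-k) (n+1-k) / (n^2 (n-1)) against direct enumeration.
from fractions import Fraction
from itertools import combinations
from math import comb

def variance(n, k):
    # Exact Var[X] over all C(n,k) placements of k socks in n drawers.
    ms = [sum(1 for i in c if i + 1 in set(c))
          for c in combinations(range(n), k)]
    N = comb(n, k)
    ex = Fraction(sum(ms), N)
    ex2 = Fraction(sum(m * m for m in ms), N)
    return ex2 - ex * ex

for n in range(2, 9):
    for k in range(n + 1):
        assert variance(n, k) == Fraction(k * (k-1) * (n-k) * (n+1-k),
                                          n * n * (n - 1))
print("variance check passed")
```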