Let $a_1=a_2=\cdots=a_t= 1$ and $a_k=x_{k1}a_{k-1}+x_{k2}a_{k-2}+\cdots+x_{kt}a_{k-t}$ for $k>t$, where the $x_{ki}\sim U(0,b)$ $(k>t,\ i=1,2,\cdots,t)$ are mutually independent.
Prove that there exists $c\in \mathbb R$ such that $\dfrac{\log{a_k}}{k}$ converges almost surely to $c$, written $\dfrac{\log{a_k}}{k} \xrightarrow{\text{a.s.}} c.\tag{1}$
When $t=1$, $\dfrac{a_k}{a_{k-1}}\sim U(0,b)$, and $$\dfrac{a_k}{a_{k-1}}\cdots\dfrac{a_2}{a_1}=\dfrac{a_k}{a_1}=x_k\cdots x_2,\qquad\log{\dfrac{a_k}{a_1}}=\log{x_k}+\cdots+\log{x_2},$$ so by the strong law of large numbers, $$\lim_{k\to \infty}{\dfrac{1}{k-1}\log{\dfrac{a_k}{a_1}}}= E[\log{X}]=\dfrac{1}{b}\int_{0}^b\log{x}~dx=\log{b}-1=c\quad\text{almost surely.}$$
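The value $E[\log X]=\log b-1$ is easy to verify numerically. The sketch below (variable names and the choice $b=2$ are mine, for illustration) averages $\log$ of uniform draws:

```python
import math
import random

# Monte Carlo check that E[log X] = log b - 1 for X ~ U(0, b).
rng = random.Random(42)
b = 2.0
n = 200_000
estimate = sum(math.log(rng.uniform(0.0, b)) for _ in range(n)) / n
print(estimate, math.log(b) - 1.0)  # the two values should be close
```

Since $\operatorname{Var}(\log X)=1$, the standard error of the average is about $1/\sqrt{n}\approx 0.002$ here.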
We can see that $c$ does not depend on the value of $a_1$ as long as $a_1>0$. I believe this remains true for $t>1$.
How can one prove $(1)$ when $t>1$, and how can one find $c$? Thanks in advance!
For every $k\geqslant0$, define the vector $Y_k$ of size $t\times1$ by $(Y_k)_i=a_{i+k}$ for every $1\leqslant i\leqslant t$, and the matrix $A_k$ of size $t\times t$ by $(A_k)_{i,i+1}=1$ for every $1\leqslant i\leqslant t-1$, $(A_k)_{t,j}=x_{k+t+1,\,t+1-j}$ for every $1\leqslant j\leqslant t$ (the random coefficients sit, in reversed order, in the last row, so that the last coordinate of $A_kY_k$ reproduces the recursion for $a_{k+t+1}$), and $(A_k)_{i,j}=0$ for every other $1\leqslant i,j\leqslant t$.
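As a sanity check on this companion rewriting, one can verify $Y_{k+1}=A_kY_k$ numerically. In the sketch below the helper `companion` and all names are mine; it places the random coefficients, in reversed order, in the last row of $A_k$, with ones on the superdiagonal:

```python
import numpy as np

rng = np.random.default_rng(0)
t, b, n = 3, 2.0, 40

# Simulate the recursion: a_1 = ... = a_t = 1, a_k = sum_i x_{ki} a_{k-i}.
a = [1.0] * t                      # a[j-1] stores a_j
x = {}                             # x[k] = (x_{k1}, ..., x_{kt}) for k > t
for k in range(t + 1, n + 1):
    x[k] = rng.uniform(0.0, b, size=t)
    a.append(float(x[k] @ a[-1:-t-1:-1]))

def companion(xrow):
    """Companion matrix: ones on the superdiagonal, reversed coefficients in the last row."""
    A = np.zeros((t, t))
    A[:-1, 1:] = np.eye(t - 1)
    A[-1, :] = xrow[::-1]
    return A

# (Y_k)_i = a_{i+k}; check Y_{k+1} = A_k Y_k, where A_k carries x_{k+t+1, .}.
k = 7
Y_k  = np.array(a[k : k + t])
Y_k1 = np.array(a[k + 1 : k + 1 + t])
assert np.allclose(companion(x[k + t + 1]) @ Y_k, Y_k1)
```

The same check passes for every admissible $k$, since the identity holds coordinatewise by construction.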
Then $Y_0$ is the vector whose every coordinate is $1$ and $Y_{k+1}=A_kY_k$ for every $k\geqslant0$, where the matrices $(A_k)_{k\geqslant0}$ are i.i.d. Hence it is known (this is the Furstenberg–Kesten theorem) that, for every norm on $\mathbb R^t$, for example the Euclidean norm, $$ \log\|Y_k\|=\gamma k+o(k), $$ almost surely, where $\gamma$ is the Lyapunov exponent of the sequence $(A_k)$. In particular, $$ \frac{\log a_k}k\to\gamma, $$ almost surely. It happens that, except in dimension $t=1$ and in some specific cases, no explicit formula for $\gamma$ is known. However, considering the matrix norm induced by the Euclidean norm on $\mathbb R^t$, $$ \gamma=\lim_{k\to\infty}k^{-1}E[\log\|A_kA_{k-1}\cdots A_1\|]=\iint\log(\|Ay\|/\|y\|)\mathrm d\mu(A)\mathrm d\nu(y), $$ where $\mu$ is the common distribution of the matrices $(A_k)$ (hence $\mu$ is known) and $\nu$ is the stationary distribution of a Markov chain on the projective space of dimension $t-1$ (hence $\nu$ is often unknown).
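Even without a closed form, $\gamma$ can be estimated numerically. The sketch below (the function name, the rescaling trick, and the tolerances are mine, not part of the answer) iterates the scalar recursion directly, rescaling the state each step so that $a_k$ neither overflows nor underflows; for $t=1$ the estimate should approach $\log b-1$:

```python
import math
import random

def lyapunov_estimate(b, t, steps, seed=0):
    """Estimate gamma = lim (log a_k)/k for the recursion
    a_k = x_{k1} a_{k-1} + ... + x_{kt} a_{k-t}, x_{ki} ~ U(0, b),
    rescaling the state window each step to avoid overflow/underflow."""
    rng = random.Random(seed)
    y = [1.0] * t          # current window (a_{k-t+1}, ..., a_k), up to a known scale
    log_scale = 0.0        # accumulated log of the discarded scale factors
    for _ in range(steps):
        new = sum(rng.uniform(0.0, b) * v for v in reversed(y))
        y = y[1:] + [new]
        m = max(y)         # all terms are positive, so the max is a safe scale
        log_scale += math.log(m)
        y = [v / m for v in y]
    # log a_k = log_scale + log y[-1], so (log a_k)/k estimates gamma
    return (log_scale + math.log(y[-1])) / steps
```

For example, `lyapunov_estimate(b=1.0, t=1, steps=200_000)` lands near $\log 1-1=-1$; for $t>1$ the same routine gives a Monte Carlo estimate of the (analytically unknown) $\gamma$. The rescaling is legitimate because the recursion is linear, so dividing the whole window by $m$ only shifts $\log a_k$ by $\log m$, which is recorded in `log_scale`.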
For more details, see this introduction. For examples of explicit computations in some specific cases, see this paper.