Let $K \subseteq (0,1) \times \mathbb{R}^m$ be a subbundle of rank $k$ of the trivial bundle $(0,1) \times \mathbb{R}^m$ over $(0,1)$, that is, $K$ is a differentiable submanifold of $(0,1) \times \mathbb{R}^m$ and $K_{t}:= \left\{ v \in \mathbb{R}^m: (t,v) \in K \right\} $ is a vector subspace of $\mathbb{R}^m$ of dim $k$ for any $t \in (0,1)$. I am trying to prove that if for any section $\sigma \in \Gamma(K)$ we have that $\frac{d \sigma} {d t} \in K$, then $K = (0,1) \times \mathbb{R}^k$.
Firstly, to make sense of $\frac{d \sigma} {d t}$, I am seeing it as a shorthand for $d_{t_0} \sigma \left( \left. \frac{d}{dt} \right\vert_{t=t_0} \right) \in T_{\sigma(t_0)} K$; thus the condition in the hypothesis would read: $d_{t_0} \sigma \left( \left. \frac{d}{dt} \right\vert_{t=t_0} \right) \in K_{t_0}$. Is this correct? If so, can one see that $T_{\sigma(t_0)}K \simeq K_{t_0}$ (this seems like it could be useful, if true)?
And of course, I am lost about how to prove the result. I have the hint that the composition of the maps $$ K \hookrightarrow (0,1) \times \mathbb{R}^m \xrightarrow{pr_2} \mathbb{R}^m $$ is supposed to be of rank $k$, but I don't know how to show this, i.e. that the composition $f = pr_2 \circ i$ has $\dim (d_{(t,v)}f(T_{(t,v)}K)) = k$. I am also confused as to why that would imply that $K_t$ doesn't depend on $t$: assuming it's true that $T_{(t,v)}K \simeq K_t$ for any $(t,v) \in K$, we'd have that $K_t$ is a $k$-dimensional subspace of $T_v \mathbb{R}^m \simeq \mathbb{R}^m$, but it's still unclear why $K_t$ would have to be the same subspace for all $t$.
$T_{\sigma(t_0)}K$ has one more dimension than $K_{t_0}$ -- the latter captures all the fiber directions, but there is also the base direction.
The vector $d_{t_0} \sigma ( \left. \frac{d}{dt} \right\vert_{t=t_0})$ will push forward by the projection to $\frac{d}{dt}$, but we want the "other part". Luckily $T[(0,1) \times \mathbb{R}^m]=T[(0,1)]\times T[\mathbb{R}^m]$, so we can just throw away the first part -- i.e. project to $T[\mathbb{R}^m]$. Another way to get the same answer would be just to take $\lim_{t\to t_0} \frac{pr_2(\sigma(t))-pr_2(\sigma(t_0))}{t-t_0}\in \mathbb{R}^m$ and think of it as a vector in the fiber $\{t_0\}\times \mathbb{R}^m$.
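For concreteness, here is a small numerical sketch of that second description (the section $\sigma(t)=(t,(\cos t,\sin t,0))$ and all numbers are toy choices of mine, not part of the problem): the difference quotient of $pr_2\circ\sigma$ recovers the fiber part of the derivative.

```python
# A toy numerical check (my own example): take the section
# sigma(t) = (t, v(t)) of (0,1) x R^3 with v(t) = (cos t, sin t, 0) and
# approximate the fiber part of d(sigma)/dt by the difference quotient
# (pr2(sigma(t0+h)) - pr2(sigma(t0))) / h.

import math

def sigma(t):
    """A sample section of the trivial bundle (0,1) x R^3."""
    return (t, (math.cos(t), math.sin(t), 0.0))

def pr2(point):
    """Projection (0,1) x R^3 -> R^3 onto the fiber factor."""
    return point[1]

def fiber_derivative(t0, h=1e-6):
    """Difference quotient of pr2 composed with sigma at t0."""
    a, b = pr2(sigma(t0 + h)), pr2(sigma(t0))
    return tuple((x - y) / h for x, y in zip(a, b))

d = fiber_derivative(0.5)
exact = (-math.sin(0.5), math.cos(0.5), 0.0)  # honest derivative of v at 0.5
print(d, exact)  # the two agree up to roughly the step size h
```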
Well, it's not entirely clear what $K=(0,1)\times \mathbb{R}^k$ means; unless we know one of the fibers is $\mathbb{R}^k\subset \mathbb{R}^m$ (this is an abuse of notation: $k$-tuples of real numbers are not $m$-tuples of real numbers, but you know what I mean -- we extend by zeroes), it is in fact not true that $K=(0,1) \times \mathbb{R}^k$ as a set (and even then -- see above about the abuse of notation).
But you are right that we can conclude that there is a $k$-dimensional subspace $V$ of $\mathbb{R}^m$ such that $K=(0,1)\times V$ (as a set).
The idea is that in order for $K_t$ to "move" with $t$ some point of it would have to escape $V$, and we should be able to detect it by taking the path of this point and seeing that at some point its velocity was not in $V$. Making this into a proof requires some technical manipulations. Perhaps you can come up with better ones, but here is an option:
I will habitually identify points in a fiber $K_t$ and their projections to $\mathbb{R}^m$.
Take $T_0\in (0,1)$ and let $V=K_{T_0}$, thought of as a subspace of $\mathbb{R}^m$. Let $V$ be given by $m-k$ (linear, but smooth would do just as well) equations $f_1(x)=0, \ldots, f_{m-k}(x)=0$ (here $x\in \mathbb{R}^m$ and $f_j:\mathbb{R}^m\to \mathbb{R}$).
Now we want to show that for every $t$ and any $x_t\in K_t\subset \mathbb{R}^m$ we also have $f_j(x_t)=0$ -- so that $K_t=V$. The plan is to show that this is true "locally" using the derivative condition, and then deduce that it is true globally. Technically, we can set it up using connectedness of $(0,1)$.
For this, consider the set of all $t$ for which the above is true. It is non-empty (since it includes $T_0$) and we want to show it is open and closed. Then it would be all of $(0,1)$.
It is closed for general reasons: suppose not; then there is a $t$ outside the set and a sequence $t_i\to t$ inside it, i.e. $K_t$ is not contained in the zero set of some $f_j$ but all the $K_{t_i}$s are. Pick $x\in K_t$ with $f_j(x)\neq 0$; by local triviality of $K$ there is a sequence $x_i\in K_{t_i}$ converging to $x$, with $f_j(x_i)=0$ and $f_j(x)\neq 0$, which is impossible since $f_j$ is continuous.
Let's show that it is open.
Suppose $t_0$ is in this set. We know that $K$ is locally trivial, so there is $(a,b)\subset(0,1)$ containing $t_0$ such that over $(a,b)$ we have a trivialization. We use this to get a frame over $(a,b)$ - a $k$-tuple of vectors $v_1(t), \ldots, v_k(t)$ in $\mathbb{R}^m$ smoothly depending on $t$ and forming (for each $t\in (a,b)$) a basis for $K_t$.
First, let's treat the case when $k=1$. Then we have $v'(t)=\lambda(t)v(t)$ for some smooth function $\lambda:(a,b)\to \mathbb{R}$. Now there are at least two ways to go:
We can observe that each of the $m$ components of $v$ satisfies the same linear ODE $f'(t)=\lambda(t)f(t)$ (with possibly different initial conditions). By uniqueness of solutions and linearity, all solutions are proportional (as functions of $t$) -- which means precisely that $v(t)$ is a rescaling of $v(t_0)$.
Let $f(t)$ be the solution of the ODE $f'(t)=\lambda(t)f(t)$ with initial value $f(t_0)=1$. Observe that since the ODE has the constant solution $f\equiv 0$, and by uniqueness of solutions two distinct solutions never cross, we get $f(t)>0$ for all $t\in(a,b)$. Now let $u(t)=v(t)/f(t)$. We compute:$$u'=\frac{v'f-f'v}{f^2}=\frac{\lambda v f-\lambda f v}{f^2}=0$$Hence $u$ is constant, $u(t)=u(t_0)$, and $v$ is always a multiple of $v(t_0)$.
This completes the proof for the $k=1$ case.
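As a numerical sanity check of the $k=1$ argument (with a toy $\lambda$ and initial vector of my own choosing, so purely illustrative): Euler-integrating $v'(t)=\lambda(t)v(t)$ componentwise, the resulting $v(t_1)$ is a rescaling of $v(t_0)$.

```python
# Euler-integrate v'(t) = lambda(t) v(t) componentwise and check that
# v(t1) is a scalar multiple of v(t0), i.e. the spanned line never moves.
# lambda and v0 below are arbitrary toy choices.

def integrate(v0, lam, t0, t1, steps=100_000):
    """Crude Euler scheme for v' = lam(t) v on [t0, t1]."""
    v = list(v0)
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        v = [x + dt * lam(t) * x for x in v]
        t += dt
    return v

lam = lambda t: 3.0 * t          # some smooth lambda
v0 = [1.0, 2.0, -0.5]            # v(t0), a vector in R^3
v1 = integrate(v0, lam, 0.2, 0.8)

ratios = [b / a for a, b in zip(v0, v1)]
print(ratios)  # all three component ratios agree: v(t1) rescales v(t0)
```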
For the general $k$ we again have at least two ways to go:
Consider the $k$-vector $\omega(t)=v_1(t)\wedge v_2(t)\wedge \ldots \wedge v_k(t)$. It is a vector in the space $\Lambda^k\mathbb{R}^m$ of alternating $k$-vectors, and since each $\frac{d v_i(t)}{dt}$ is a linear combination $\sum_j a_{ji}(t) v_j(t)$, we get $$\frac{d \omega(t)}{dt}=\sum_{i=1}^k v_1\wedge \ldots \wedge \frac{d v_i}{dt}\wedge \ldots \wedge v_k=\Big(\sum_i a_{ii}(t)\Big)\omega(t)=:\lambda(t)\omega(t),$$ since any term containing a repeated $v_j$ vanishes. Hence by the same arguments as in the $k=1$ case $\omega$ is constant up to rescaling, which exactly means that the vector space $K_t$ spanned by $(v_1, \ldots, v_k)$ is constant.
To do what is essentially the same thing, but in simpler language, we form an $m\times k$ matrix $V(t)$ with columns $v_i(t)$, and the condition that all $v'_i$ are in the span of the $v_i$ becomes $V'(t)= V(t)\Lambda(t)$, where now $\Lambda(t)$ is a $k\times k$ matrix which tells us how to mix the $v_i$s to get the $v'_j$s. We can let $F(t)$ be the $k\times k$ matrix solution to the matrix ODE $F'=F\Lambda$ with $F(t_0)=Id_k$. Then $f(t)=\det F(t)$ satisfies $f'=tr(\Lambda)f$ and so is non-zero, meaning $F(t)$ is invertible, and from $0=(FF^{-1})'=F'F^{-1}+F(F^{-1})'$ we have $(F^{-1})'=-F^{-1}F'F^{-1}=-\Lambda F^{-1}$, so that $$(VF^{-1})'=V'F^{-1}+V(F^{-1})'= V\Lambda F^{-1}-V\Lambda F^{-1}=0$$ So $V(t)F^{-1}(t)$ is constant, equal to $V(t_0)$, and $V(t)=V(t_0)F(t)$, meaning that $K_t$, the span of the $v_i(t)$s, is $K_{t_0}$.
This completes the proof.
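The matrix computation can also be checked numerically. In this sketch (toy $\Lambda$ and frame $V(t_0)$ of my own choosing) both $V'=V\Lambda$ and $F'=F\Lambda$, $F(t_0)=Id_k$, are Euler-integrated with the same steps, and $V(t_1)$ indeed comes out as $V(t_0)F(t_1)$, so the column span never moves.

```python
# Numerically verify V(t) = V(t0) F(t) for V' = V Lambda, F' = F Lambda,
# F(t0) = Id. Lambda and V0 are arbitrary toy choices; plain-list matrices.

def matmul(A, B):
    return [[sum(A[i][s] * B[s][j] for s in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def euler(M, Lam, t0, t1, steps=20_000):
    """Euler steps M <- M + dt * (M @ Lam(t))."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        ML = matmul(M, Lam(t))
        M = [[m + dt * d for m, d in zip(rm, rd)] for rm, rd in zip(M, ML)]
        t += dt
    return M

Lam = lambda t: [[0.0, t], [-t, 1.0]]        # a smooth 2x2 mixing matrix
V0 = [[1.0, 0.0], [0.0, 1.0], [2.0, -1.0]]   # 3x2 frame: a 2-plane in R^3
I2 = [[1.0, 0.0], [0.0, 1.0]]

V1 = euler(V0, Lam, 0.2, 0.8)
F1 = euler(I2, Lam, 0.2, 0.8)
pred = matmul(V0, F1)

err = max(abs(a - b) for ra, rb in zip(V1, pred) for a, b in zip(ra, rb))
print(err)  # essentially zero: V(t1) = V(t0) F(t1), same column span
```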