I've been having a lot of trouble understanding the proof of this theorem:

So I've tried to dissect everything carefully and wanted to make sure that I've been seeing things correctly. Could someone verify that my steps are valid, or point out where I need to reread the definitions because my work has gone wrong?
Regarding the implication $m\in I \implies m(\alpha)=I$, I approached it by noting that any polynomial can be written $f(t)=\sum a_it^i$; writing $m(t)=\sum m_it^i$, we have
$\begin{align*} m(\alpha) = m(t+I) &= \sum (m_i+I)(t+I)^i \\ &= \sum (m_i+I)(\underbrace{t \cdots t}_\text{$i$-many}+I) \\ &= \sum (m_i+I)(t^i+I) \\ &= \sum (m_it^i+I) \\ &= (m_0t^0+I)+(m_1t^1+I)+ \cdots +(m_{\partial m}t^{\partial m} +I) \\ &= \left( \sum m_it^i \right) + I = m(t)+I=I. \end{align*}$
(Note, $\partial m$ just denotes the degree of $m(t)$, and all sums above run over $0 \le i \le \partial m$.)
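To see this computation in a concrete case, here is a minimal sketch in Python. The choices $K=\mathbb{F}_5$ and $m(t)=t^2+2$ are my own for illustration (not from the proof); $m$ is irreducible over $\mathbb{F}_5$ since $3$ is not a square mod $5$. Evaluating $m$ at $\alpha=t+I$ amounts to reducing $m(t)$ modulo $m$, and the result is the zero coset:

```python
# Polynomials over K = F_5, written as coefficient lists, constant term first.
# m(t) = t^2 + 2 is irreducible over F_5 (hypothetical example choice).
P = 5
m = [2, 0, 1]

def polymod(f, m):
    """Reduce f modulo the monic polynomial m: the canonical coset representative."""
    f = f[:]
    while len(f) >= len(m):
        d, c = len(f) - len(m), f[-1]
        for i, mi in enumerate(m):
            f[d + i] = (f[d + i] - c * mi) % P
        while f and f[-1] == 0:
            f.pop()
    return f

# m(alpha) is the coset of m(t), i.e. m(t) mod m -- the zero coset:
print(polymod(m, m))          # []
# and, e.g., the class of t^2 equals the class of -2 = 3:
print(polymod([0, 0, 1], m))  # [3]
```

The empty list is the zero polynomial, i.e. the coset $I$ itself, which is exactly the statement $m(\alpha)=I$.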
Another thing that wasn't clear at first glance was "$S=K(\alpha)$", so I tried to work out these details as well.
$K[t]/I \subseteq K(t+I):$ Let $x \in K[t]/I$. Then $x$ is of the form $f+I$ where $f \in K[t]$ may be written as $$f(t)=f_0+f_1t+f_2t^2+\cdots+f_nt^n$$ for $f_i\in K$ and some integer $n$. Each coset $f_i+I$ is the image of $f_i$ under the embedding $K\hookrightarrow K[t]/I$, so (identifying $K$ with its image) it lies in $K(t+I)$, and $$\begin{align*} (f_0+I)+(f_1+I)(t+I)+(f_2+I)(t+I)^2+\cdots+(f_n+I)(t+I)^n &= (f_0+I)+(f_1t+I) + \cdots + (f_nt^n+I) \\ &= f(t)+I \\ \end{align*}$$ as needed.
$K[t]/I \supseteq K(t+I):$ Let $x\in K(t+I)$. Then $x$ is an element that can be obtained from the elements of $K \cup \{t+I\}$ by a finite sequence of field operations. Noting that we can identify $K$ with $v(i(K)) \subseteq K[t]/I$, and that $t+I\in K[t]/I$ as well, any element $x\in K(t+I)$ obtained through a finite sequence of field operations on $K \cup \{t+I\}$ is also an element of $K[t]/I$, obtained through the same finite sequence of field operations on $v(i(K)) \cup \{ t+I \}$.
The rest of the proof in the image seems clear; it's just those two steps above that I stumbled over and have had trouble fleshing out the details of...
There's a different way of looking at ideals that, in my opinion, makes a lot of commutative ring calculations make a lot more sense.
Given an ideal $I$ of a commutative ring $R$ we can define an equivalence relation $\sim_I$ on $R$ via $r\sim_I s \iff r-s\in I$. A key property of this equivalence relation is that it is a congruence with respect to the ring operations. That is, if $r \sim_I r'$ and $s \sim_I s'$ then $rs\sim_I r's'$ and similarly for $+$. (In fact, we can turn this around and derive the notion of ideal by considering what the equivalence class of $0$ looks like with respect to a congruence.) The congruence property means that the ring operations on $R$ are well-defined as operations on the quotient set, $R/{\sim_I}$.
Now let's look at $R/{\sim_I}$ in a way that is likely different from what you're used to. The usual approach is to consider the equivalence classes, and with a bit of work you can show that they are of the form $r+I\equiv\{r+s\mid s\in I\}$, but let's look at the quotient set in a different way. Let's consider the elements of $R/{\sim_I}$ to be the elements of $R$ itself, but change what it means for two elements to be equal. For $r,s\in R$, $r = s$ if and only if they are the same element. For $r,s\in R/{\sim_I}$, we'll say $r = s$ if and only if $r \sim_I s$. Now, if $s\in I$, then $r+s = r$ in $R/{\sim_I}$ because $s = 0$ in $R/{\sim_I}$. Again, the congruence property is crucial for this to make sense.
From this perspective, the first result you mention is completely trivial. First, note that in this perspective $\alpha = t$ in $R/{\sim_I}$. The result says that if $m$ is in the equivalence class of $0$, then $m = 0$ in $R/{\sim_I}$, but these are just two ways of saying the same thing. The work you did amounts to explicitly applying the congruence property step by step. However, it readily follows from congruence that if $p$ is a polynomial, then $p(x)\sim_I p(y)$ whenever $x \sim_I y$. You've almost certainly proven that $(r+I)+(s+I) = (r+s)+I$ and $(r+I)(s+I)=rs +I$, from which $p(r+I) = p(r) +I$ follows just as readily. You want to get to a point where this fact is second nature.
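A quick numeric sanity check of that congruence fact, with the familiar case $R=\mathbb{Z}$ and $I=12\mathbb{Z}$ (my own choice of example, not from the text): if $x \sim_I y$ then $p(x) \sim_I p(y)$ for any polynomial $p$ with integer coefficients.

```python
# Congruence with R = Z and I = 12Z: if x ~ y (mod 12), then p(x) ~ p(y) (mod 12).
n = 12

def p(x):
    # an arbitrary polynomial with integer coefficients
    return 3 * x**3 - 7 * x + 5

x, y = 5, 5 + 2 * n        # x ~ y since x - y = -24 lies in 12Z
assert (x - y) % n == 0
print(p(x) % n, p(y) % n)  # the two classes agree: 9 9
```

No step-by-step expansion is needed: the congruence property for $+$ and $\cdot$ does all the work at once.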
It's not completely clear what your definition of $K(\alpha)$ is. It seems to be something like $K\cup\{\alpha\}$ plus all elements that can be created by applying the "field operations" to elements of that set a finite number of times. Unless you embed $K\cup\{\alpha\}$ into some suitably large field, this is a somewhat vague definition: what should $\alpha+3$ be? The simplest answer is that we take formal expressions, i.e. syntax itself, as the elements and then quotient the result to identify formal expressions that should be equal, e.g. $r+s = s+r$. This is one perspective on the polynomial ring with respect to the commutative ring operations. Similarly, for the field operations the analogue is the field of rational functions.

The interesting part of this theorem is really Lemma 3.4. The only field operation that isn't a commutative ring operation is inverting an element. What Lemma 3.4 implies is that the inverse of a nonzero polynomial $p\in K[t]/I$ is expressible as another polynomial in $K[t]/I$. So all field operations for $K(\alpha)$ can be expressed as a finite number of commutative ring operations in $K[t]/I$. We can embed $K[t]/I$ into $K(\alpha)$ via $p(t)\mapsto \frac{p(\alpha)}{1}$. The inverse is $\frac{p(\alpha)}{q(\alpha)}\mapsto p(t)q(t)^{-1}$ where $q(t)^{-1}$ is the polynomial satisfying $q(t)q(t)^{-1} \sim_I 1$ that Lemma 3.4 implies exists.
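That inverse can be computed concretely with the extended Euclidean algorithm: since $m$ is irreducible, $\gcd(q,m)$ is a nonzero constant for any $q\not\sim_I 0$, and Bézout's identity $sq+tm=\gcd(q,m)$ exhibits $q^{-1}$ as a polynomial. A self-contained sketch, again with my own hypothetical choices $K=\mathbb{F}_5$ and $m(t)=t^2+2$ (the proof's Lemma 3.4 may argue differently):

```python
# Inverting a polynomial in K[t]/<m>, with K = F_5 and m(t) = t^2 + 2 irreducible.
# Polynomials are coefficient lists over F_5, constant term first.
P = 5
m = [2, 0, 1]

def trim(f):
    while f and f[-1] == 0:
        f.pop()
    return f

def polymul(f, g):
    out = [0] * (len(f) + len(g) - 1) if f and g else []
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return trim(out)

def polysub(f, g):
    out = [((f[i] if i < len(f) else 0) - (g[i] if i < len(g) else 0)) % P
           for i in range(max(len(f), len(g)))]
    return trim(out)

def polydivmod(f, g):
    f, q = f[:], [0] * max(len(f) - len(g) + 1, 1)
    inv = pow(g[-1], P - 2, P)       # leading-coefficient inverse (Fermat)
    while len(f) >= len(g):
        d = len(f) - len(g)
        c = (f[-1] * inv) % P
        q[d] = c
        for i, gi in enumerate(g):
            f[d + i] = (f[d + i] - c * gi) % P
        trim(f)
    return trim(q), f

def ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b = g."""
    if not b:
        return a, [1], []
    q, r = polydivmod(a, b)
    g, s, t = ext_gcd(b, r)
    return g, t, polysub(s, polymul(q, t))

def inverse_mod(q, m):
    g, s, _ = ext_gcd(q, m)          # g is a nonzero constant since m is irreducible
    c = pow(g[0], P - 2, P)
    return trim([(c * a) % P for a in s])

alpha_inv = inverse_mod([0, 1], m)   # invert alpha = t + I
print(alpha_inv)                                      # [0, 2], i.e. 2t
print(polydivmod(polymul([0, 1], alpha_inv), m)[1])   # [1]: t * 2t = 2t^2 = -4 = 1
```

So the field inverse of $\alpha$ is literally another polynomial in $\alpha$, which is the whole reason $K(\alpha)$ needs nothing beyond the commutative ring operations of $K[t]/I$.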
Returning to the equivalence relation perspective, it should be a bit clearer that $K[t]/\langle m\rangle$ is a rather natural choice for building a ring where you have an extra element, $t$, with $m(t)=0$. This is what $K[t]/\langle m\rangle$ says! It's the ring with one indeterminate where $m$ behaves like $0$. The significance of irreducibility is to avoid zero divisors (monicness is essentially a normalization). The surprising thing is that the resulting ring turns out to be a field.
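The standard example (my addition, not from the text) makes this vivid: take $K=\mathbb{R}$ and $m(t)=t^2+1$. Then
$$\mathbb{R}[t]/\langle t^2+1\rangle \cong \mathbb{C}, \qquad t+\langle t^2+1\rangle \longmapsto i,$$
since in the quotient $t^2+1 \sim 0$, so the class of $t$ is precisely an element whose square is $-1$.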