Problem: Show that $A$ and $B$ are similar without using eigenvalues or eigenvectors:
$$ A = \left[ {\begin{array}{ccc} 1 & 1 & 1 \\ 0 & 2 & 2 \\ 0 & 0 & 3 \\ \end{array} } \right] $$
$$ B = \left[ {\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \\ \end{array} } \right] $$
I see that they are row-equivalent. But how do I determine a transition matrix $S$ such that $S^{-1}AS = B$? I understand diagonalizability (and both matrices are diagonalizable), but I am instructed not to use that concept.
I also know that I could set up a system of equations for the entries of $S$ from $AS = SB$, but that strikes me as crude.
Here is another attempt at an answer. It will not mention eigentheory, relying instead on the observation that $A$ and $B$ are upper triangular matrices with identical diagonals.
Warning: This approach does not work in general. The warning is arguably more important than the heuristic trial-and-error approach itself, which is why this disclaimer precedes the actual answer.
Look at the diagonal matrices $D_1=\text{diag}(1,2,3)$ and $D_2=\text{diag}(3,2,1)$. As diagonal matrices they are upper triangular, and their diagonals are permutations of each other, just short of being identical. When we look for a transition matrix $T$ with $D_1T=TD_2$, however, it cannot be chosen upper triangular: you need to permute the diagonal entries, e.g.
$$T=\begin{pmatrix}0&0&1\\0&1&0\\1&0&0\end{pmatrix}.$$
Another candidate would be
$$T=\begin{pmatrix}0&0&\frac{1}{2}\\0&1&0\\2&0&0\end{pmatrix}.$$
Transition matrices are usually not unique, as witnessed by the two examples just given, but no upper triangular matrix turns out to be a transition matrix here. To see this, set $U=\begin{pmatrix}a&b&c\\0&d&e\\0&0&f\end{pmatrix}$, compute
$$U^{-1}=\begin{pmatrix}\frac{1}{a}&-\frac{b}{ad}&\frac{be-cd}{adf}\\0&\frac{1}{d}&-\frac{e}{df}\\0&0&\frac{1}{f}\end{pmatrix}$$
and
$$UD_2U^{-1}=\begin{pmatrix}3&-\frac{b}{d}&\frac{be-2cd}{df}\\0&2&-\frac{e}{f}\\0&0&1\end{pmatrix},$$
which can never equal $D_1$: the diagonal does not fit, and it is independent of the variable entries of $U$.
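The two candidate transition matrices can be checked mechanically. A minimal sketch in Python (the helper `matmul` and the variable names are my own, not part of the answer), using exact fractions so the $\frac{1}{2}$ entry stays exact:

```python
# Verify that both candidate T satisfy D1*T = T*D2 exactly.
from fractions import Fraction

def matmul(X, Y):
    """Plain n x n matrix multiplication over exact numbers."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

D1 = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
D2 = [[3, 0, 0], [0, 2, 0], [0, 0, 1]]

T_perm = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]                       # pure permutation
T_scaled = [[0, 0, Fraction(1, 2)], [0, 1, 0], [2, 0, 0]]        # scaled variant

for T in (T_perm, T_scaled):
    assert matmul(D1, T) == matmul(T, D2)  # D1 T = T D2, i.e. T^{-1} D1 T = D2
print("both candidates work")
```

Both checks pass, confirming the non-uniqueness claim; note that neither $T$ is upper triangular.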
An even worse example is given by $\tilde{A}=\begin{pmatrix}1&0&0\\0&1&1\\0&0&1\end{pmatrix}$ and $\tilde{B}=\begin{pmatrix}1&1&0\\0&1&0\\0&0&1\end{pmatrix}$. These are both upper triangular with identical diagonals, whose entries are moreover all equal. Possible transition matrices are $T=\begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix}$ or $T=\begin{pmatrix}0&0&\frac{5}{2}\\-3&0&0\\0&-3&0\end{pmatrix}$, which are easily checked to satisfy $\tilde{A}T=T\tilde{B}$; neither is upper triangular. If we try the approach with the "universal" upper triangular transition matrix $U$, we get $U\tilde{B}U^{-1}=\begin{pmatrix}1&\frac{a}{d}&-\frac{ae}{df}\\0&1&0\\0&0&1\end{pmatrix}$, which can never coincide with $\tilde{A}$, since at the position of the off-diagonal $1$ in $\tilde{A}$ we cannot get anything other than $0$.
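Again, the "easily checked" claim can be sketched in a few lines of Python (variable names `A_tilde`, `B_tilde`, etc. are my own), with exact fractions for the $\frac{5}{2}$ entry:

```python
# Verify A~ T = T B~ for both non-upper-triangular candidates.
from fractions import Fraction

def matmul(X, Y):
    """Plain n x n matrix multiplication over exact numbers."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A_tilde = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]   # I + E_23
B_tilde = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]   # I + E_12

T1 = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
T2 = [[0, 0, Fraction(5, 2)], [-3, 0, 0], [0, -3, 0]]

for T in (T1, T2):
    assert matmul(A_tilde, T) == matmul(T, B_tilde)
print("both satisfy A~ T = T B~")
```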
These examples should make it clear that the approach I will present now is not guaranteed to succeed; it is only a heuristic. The deeper reason for this failure is Jordan normal form theory: suitable transition matrices for the two examples above need a certain permutation part that is not achievable with upper triangular matrices alone. To show the impossibility concretely you don't need Jordan normal form theory, though; we computed it explicitly above.
Actual answer: We need to find a $T$ such that $AT=TB$ for the $A,B$ given in the question. Since $A$ and $B$ are both upper triangular, and the upper triangular matrices form a subalgebra of all matrices, we can heuristically hope to find an upper triangular $T=\begin{pmatrix}a&b&c\\0&d&e\\0&0&f\end{pmatrix}$, although the examples above show that this hope might be vacuous.
Additionally, since the diagonals of $A$ and $B$ are identical, we may hope to get away with a diagonal of $T$ consisting only of $1$s, i.e. $a=d=f=1$, so $T=\begin{pmatrix}1&b&c\\0&1&e\\0&0&1\end{pmatrix}$. This cuts down on the variables and makes the system of equations that you already suggested in your question, which here comes from
$$\begin{pmatrix}1&b+1&c+e+1\\0&2&2e+2\\0&0&3\end{pmatrix}=AT=TB=\begin{pmatrix}1&2b&3c\\0&2&3e\\0&0&3\end{pmatrix},$$
less "crude" or dull.
We can now solve for $b,c,e$: first $b+1=2b$ and $2e+2=3e$ give $b=1$ and $e=2$, and then $c+e+1=3c$ gives $c=\frac{3}{2}$. This yields $T=\begin{pmatrix}1&1&\frac{3}{2}\\0&1&2\\0&0&1\end{pmatrix}$, an upper triangular transition matrix with only $1$s in the diagonal (actually the unique such transition matrix) for $A$ and $B$.
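The solved values can be sanity-checked in the same style as before (a sketch; the helper and names are my own, exact fractions keep $c=\frac{3}{2}$ exact):

```python
# Verify A T = T B for the solved T, using the matrices from the question.
from fractions import Fraction

def matmul(X, Y):
    """Plain n x n matrix multiplication over exact numbers."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1, 1], [0, 2, 2], [0, 0, 3]]
B = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]

# From b + 1 = 2b, 2e + 2 = 3e, and c + e + 1 = 3c:
b, e = 1, 2
c = Fraction(3, 2)
T = [[1, b, c], [0, 1, e], [0, 0, 1]]

assert matmul(A, T) == matmul(T, B)  # hence T^{-1} A T = B
print("T works")
```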
Summary: Solve the equation $AT=TB$ for $T$ in an informed way, making heuristic assumptions along the way that cut down on the indeterminates. Get a rather simple linear system in just 3 unknowns. Be happy that we were lucky enough to succeed.
Pro: No mention of eigenstuff, not even hidden away from the proof/computation checker.
Con: As the examples show, this may fail terribly in general. You could still fall back to the "crude" system of equations in 9 unknowns.
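For completeness, that "crude" nine-unknown fallback is entirely mechanical with a computer algebra system. A sketch using sympy (my suggestion, not part of the original answer; `s1`, …, `s9` are the row-wise entries of $S$):

```python
# Solve the linear system A S = S B in all nine entries of S at once.
import sympy as sp

A = sp.Matrix([[1, 1, 1], [0, 2, 2], [0, 0, 3]])
B = sp.Matrix([[1, 0, 0], [0, 2, 0], [0, 0, 3]])

entries = sp.symbols("s1:10")        # s1 .. s9, read row by row
S = sp.Matrix(3, 3, entries)

# solve the 9 linear equations coming from A S - S B = 0
sols = sp.solve(list(A * S - S * B), entries, dict=True)
S_general = S.subs(sols[0])          # general solution, with free parameters
print(S_general)
```

The free parameters on the diagonal reflect the non-uniqueness of transition matrices seen earlier; any choice making $S$ invertible gives $S^{-1}AS = B$.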
Upshot: There is a reason mathematicians developed eigenvalues and (generalized) eigenvectors: the theory is guaranteed to work in general and, compared to huge systems of equations, has reasonable computational cost. Eigencomputations might be tedious, but compared to the alternatives they are relatively efficient.