Given an $n\times n$ matrix $A=\{a_{ij}\}$, it is well known that the following conditions are equivalent for $A$:
(1) $ \sum\limits_k a_{ik}a_{jk}=\delta_{ij} $;
(2) $ \sum\limits_k a_{ki}a_{kj}=\delta_{ij} $,
where $\delta_{ij}$ is the Kronecker delta.
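A quick numerical illustration of the equivalence (a sketch in numpy; the matrix and variable names are my own, chosen for the example):

```python
import numpy as np

# Build a random orthogonal matrix via the QR decomposition:
# for a full-rank square input, the Q factor is orthogonal.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))

I = np.eye(5)
# Condition (1): the rows of Q are orthonormal, i.e. Q Q^T = I.
print(np.allclose(Q @ Q.T, I))  # True
# Condition (2): the columns of Q are orthonormal, i.e. Q^T Q = I.
print(np.allclose(Q.T @ Q, I))  # True
```

Of course this only checks the claim for a sample matrix; it is not a proof.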
With the help of $(A^{T}A)^T=AA^T=E$, I can prove $(1)\Longrightarrow(2)$ by doing matrix multiplication.
But how can it be proved without matrix multiplication?
Thanks!
What I write here is at best very fuzzy, reasoning about poorly defined concepts. I added the first-order-logic tag to the question, in case one of those guys can give a better formulation.
I believe the answer is that it's impossible. One can't give a precise proof of that claim without first giving a precise definition of "without matrix multiplication", which is a bit problematic (see the first paragraph below). But:
Anything you can prove with matrix multiplication you can also prove without matrix multiplication, just by manipulating the sums that give the entries in the matrix product directly, without referring to the product explicitly.
The standard proof of the equivalence of (1) and (2) uses not just matrix multiplication but also a non-trivial fact (NTF) about matrix multiplication: if $AB=E$ then $BA=E$.
The NTF is false for linear operators in general, which shows that it can't be proved just by looking at the formulas for the entries of $AB$ and $BA$ and playing with indices. That's not a very precise description of the sort of proof that's impossible; the point is that a proof so simple that it would also work for general linear operators can't be right: the NTF depends on finite dimensionality, which is a higher-order property, not expressible just in terms of $a_{jk}$ and $b_{jk}$.
(The argument you sketch for the equivalence is simple enough that it could be rewritten directly in terms of the $a_{jk}$, and it doesn't appear to use the NTF. But it's also wrong: in fact $(A^TA)^T=A^TA$, not $AA^T$.)
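For reference, here is a sketch of the corrected matrix-level argument, which does invoke the NTF (taking $B=A^T$):
$$(1)\iff AA^{T}=E\ \overset{\text{NTF}}{\Longrightarrow}\ A^{T}A=E\iff(2).$$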
So one suspects that the equivalence of (1) and (2) is also non-trivial in the sense we've been waving our hands at. For general linear operators it's perhaps not so clear what (1) and (2) even mean. But consider the following "infinite matrix" (which, by the way, defines an operator on $\ell_2$ with no problem): $$A=\begin{bmatrix}\frac1{\sqrt 2}&\frac1{\sqrt 2}&0&0&0\dots \\0&0&1&0&0\dots \\0&0&0&1&0\dots \\0&0&0&0&1\dots \\\vdots\end{bmatrix}.$$
Here the rows are orthonormal but the columns are not. Hence the equivalence of (1) and (2) cannot be proved by the sort of argument I suspect you mean by "without matrix multiplication" (roughly, simple "first-order" manipulation of the equations in (1) to yield the equations in (2)).
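A square finite matrix can't exhibit this asymmetry, but a rectangular truncation of the infinite matrix above shows the same row/column behavior. A quick numerical check (numpy; the $4\times 5$ truncation is my own choice for illustration):

```python
import numpy as np

# A finite 4x5 truncation of the infinite matrix A above.
s = 1 / np.sqrt(2)
A = np.array([
    [s, s, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
])

# The rows are orthonormal: A A^T = I_4 ...
print(np.allclose(A @ A.T, np.eye(4)))  # True
# ... but the columns are not: the first two columns have norm 1/sqrt(2).
print(np.allclose(A.T @ A, np.eye(5)))  # False
```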
Note For a specific fixed value of $n$ I think it's clear that there must be a simple "first-order" proof that (1) and (2) are equivalent; what I'm claiming is impossible is such a proof of the fact that (1) and (2) are equivalent for all $n$.