Essentially just the title. This is supposed to be the introductory Upper Division Linear Algebra class, so we haven't covered the determinant yet. I'm mentioning this because I am aware of the theorem stating that the conclusion would follow if the determinant were either 1 or -1. We haven't proven that in class yet, so we can't use it. Anyway, if any of you have any ideas or hints, I would greatly appreciate it.
Thank you!
EDIT: Actually, that theorem does not apply in the way I stated. Either way, it cannot be used to solve this problem as it wasn't covered in class.
EDIT 2: This question is different from the other question asking why the elements of the inverse of the Hilbert matrix are integers, because my solution cannot use much outside of row reduction (elementary matrices), the definition of matrix multiplication, and the basic properties of matrix inverses.
In short: a determinant-free proof that the inverse of the Hilbert matrix has integer entries.
Consider the inner product $\langle f,g\rangle =\int_0^1 fg$ on nice enough functions. The $n\times n$ Hilbert matrix $H$ has $ij$ entry (running the labels from zero to $n-1$) $H_{ij}=\langle x^i,x^j\rangle = \frac{1}{i+j+1}$. This makes it a Gramian matrix: the vectors used are the standard monomial basis for the polynomials of degree $\le n-1$, so for any two such polynomials, written as coordinate vectors in that basis, $\langle f,g\rangle = f^T H g$.
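To make the Gramian description concrete, here is a small exact-arithmetic sketch in Python (the helper names `hilbert`, `inner`, and `quad_form` are my own) checking that the quadratic form $f^T H g$ reproduces the integral $\int_0^1 fg$:

```python
from fractions import Fraction

def hilbert(n):
    # H[i][j] = <x^i, x^j> = integral of x^(i+j) over [0,1] = 1/(i+j+1)
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def inner(f, g):
    # <f, g> = integral of f*g over [0,1], for polynomials given as
    # ascending-degree coefficient lists (f[i] is the coefficient of x^i)
    prod = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += Fraction(a) * Fraction(b)
    return sum(c / Fraction(m + 1) for m, c in enumerate(prod))

def quad_form(f, H, g):
    # f^T H g, with f and g as coordinate vectors in the monomial basis
    return sum(f[i] * H[i][j] * g[j] for i in range(len(f)) for j in range(len(g)))

# Example: f = 1 + 2x, g = x; both ways of computing <f, g> agree
f, g = [1, 2], [0, 1]
assert quad_form(f, hilbert(2), g) == inner(f, g) == Fraction(7, 6)
```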
Let $P$ be the upper triangular matrix whose columns are orthonormal with respect to this inner product - the columns are exactly what we get by applying Gram-Schmidt to the standard basis. Then $P^T H P = I$. Rearranging, $H=(P^T)^{-1} P^{-1}$, so $H^{-1}=P\cdot P^T$.
That's all very general stuff - but we started with something special. We already know a lot about polynomials - in particular, we have an explicit family of orthogonal polynomials with respect to this inner product, the Legendre polynomials. The usual inner product in their definition uses the interval $[-1,1]$, but going to $[0,1]$ is just an affine substitution. Legendre polynomials in the standard scaling are normalized to values of $\pm 1$ at the endpoints - here, that gives the Rodrigues-type formula $q_k(x) = \frac{1}{k!}\frac{d^k}{dx^k}\left((x-x^2)^k\right)$. We want orthonormal scaling, so we note that $$\langle q_k,q_k\rangle =\int_0^1 \frac{1}{(k!)^2}\frac{d^k}{dx^k}\left((x-x^2)^k\right)\cdot \frac{d^k}{dx^k}\left((x-x^2)^k\right)\,dx$$ Integrating by parts $k$ times, and noting that $\frac{d^{2k}}{dx^{2k}}\left((x-x^2)^k\right)$ is the constant $(-1)^k(2k)!$, we get $$\langle q_k,q_k\rangle =\frac{(-1)^k}{(k!)^2}\int_0^1 \left((x-x^2)^k\right)\cdot \frac{d^{2k}}{dx^{2k}}\left((x-x^2)^k\right)\,dx = \binom{2k}{k}\int_0^1 x^k(1-x)^k\,dx$$ Integrate that by parts another $k$ times, and it becomes $$\langle q_k,q_k\rangle = \binom{2k}{k}\int_0^1 \frac{(k!)^2\,x^{2k}}{(2k)!}\,dx = \int_0^1 x^{2k}\,dx=\frac1{2k+1}$$ To get our orthonormal scaling, we then divide $q_k$ by the square root of this number and get $p_k = \sqrt{2k+1}\,q_k$.
Now, we also note that the $q_k$ are all polynomials with integer coefficients; the operator $\frac{1}{k!}\frac{d^k}{dx^k}$ preserves integer polynomials, since it sends $x^m$ to $\binom{m}{k}x^{m-k}$. So let $Q=\begin{pmatrix}q_0 & q_1 &\cdots & q_{n-1}\end{pmatrix}$ be the matrix whose $k$th column holds the coefficients of $q_k$; it is upper triangular (since $\deg q_k = k$) with integer entries. Then we can write $$P=Q \begin{pmatrix}1&0&\cdots&0\\0&\sqrt{3}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots & \sqrt{2n-1}\end{pmatrix}$$ $$P\cdot P^T = Q \begin{pmatrix}1&0&\cdots&0\\0&\sqrt{3}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots & \sqrt{2n-1}\end{pmatrix}^2 Q^T$$ $$H^{-1} = P\cdot P^T = Q\begin{pmatrix}1&0&\cdots&0\\0&3&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&2n-1\end{pmatrix}Q^T$$ All three factors in that last product are matrices with integer entries, so $H^{-1}$ must be an integer matrix. Of course, now that we have this explicit factorization of the inverse, we can calculate all sorts of things about it easily, such as the sum referenced in the linked thread.
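The whole factorization $H^{-1}=Q\,\mathrm{diag}(1,3,\ldots,2n-1)\,Q^T$ can be checked end to end in exact arithmetic. A self-contained Python sketch (the polynomial helpers are repeated so it stands alone; the function names are my own):

```python
from fractions import Fraction
from math import factorial

def polymul(f, g):
    # product of polynomials as ascending-degree coefficient lists
    out = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def q(k):
    # q_k = (1/k!) d^k/dx^k (x - x^2)^k
    p = [Fraction(1)]
    for _ in range(k):
        p = polymul(p, [Fraction(0), Fraction(1), Fraction(-1)])
    for _ in range(k):
        p = [i * c for i, c in enumerate(p)][1:]
    return [c / factorial(k) for c in p]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def hilbert_inverse(n):
    # Q[i][k] = coefficient of x^i in q_k; upper triangular, integer entries
    cols = [q(k) for k in range(n)]
    Q = [[cols[k][i] if i < len(cols[k]) else Fraction(0) for k in range(n)]
         for i in range(n)]
    D2 = [[Fraction(2 * k + 1) if i == k else Fraction(0) for k in range(n)]
          for i in range(n)]
    Qt = [list(row) for row in zip(*Q)]
    return matmul(matmul(Q, D2), Qt)

n = 4
H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
M = hilbert_inverse(n)
assert all(e.denominator == 1 for row in M for e in row)  # integer entries
assert matmul(M, H) == [[int(i == j) for j in range(n)] for i in range(n)]  # M = H^{-1}
```

For $n=2$ this produces $H^{-1}=\begin{pmatrix}4&-6\\-6&12\end{pmatrix}$, matching the inverse one gets by row-reducing the $2\times 2$ Hilbert matrix by hand.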