$ \newcommand{\m}[1]{\left( \begin{matrix} #1 \end{matrix} \right)} $ The Problem: Consider the matrices $A_n := \left(a^{(n)}_{i,j}\right)_{1 \le i,j \le n},B_n := \left(b^{(n)}_{i,j}\right)_{1 \le i,j \le n} \in M_{n \times n}(\Bbb R)$ with entries defined as follows:
$$a^{(n)}_{i,j} := \min \{i,j\} \qquad b^{(n)}_{i,j} := \max \{i,j\}$$
(Examples of these matrices follow momentarily, to show how the structure of each "evolves" and emerges as $n$ increases.)
What are $\det(A_n)$ and $\det(B_n)$ in the general $n \times n$ case, for each $n \in \Bbb Z^+$?
Background/Context: I happened to have a question similar to this on my linear algebra final a few weeks ago. (In full disclosure: final grades have already been determined and all that, so this isn't any longer an "active" test question.) However, that problem used a matrix similar to $A_n$, without specific numbers, and only the $4 \times 4$ case.
I then recently found a video by Dr. Peyam which considered the $5 \times 5$ cases for both $A_n$ and $B_n$, and hinted that these possibly came from a Putnam exam. I'm not sure if it was, though. I looked back through all of the Putnam problems back to $1985$ and didn't see this exact one, though one was similar: $A2$ from $2014$, where $A_n$ is instead defined by $a^{(n)}_{i,j} = 1/\min\{i,j\}$. (Maybe I'll look at that some other time.)
I imagine the definition for $B_n$ is just considered a natural generalization of this problem. In any event, I couldn't find this problem handled in its generality on MSE, so I figured I'd try to figure it out, and post my own solution.
Of course, your own solutions are greatly welcomed! Each one can provide its own insights.
Small Cases for the Matrices: To illustrate their structure, $A_n$ and $B_n$ look as follows in the cases $n=1,2,\cdots,6$. Determinants are included, but calculated by Wolfram.
\begin{alignat*}{3} &\boxed{n=1} &&\qquad A_1 = \m{ 1 } &&\qquad \det(A_1) = 1\\ & &&\qquad B_1 = \m{ 1 } &&\qquad \det(B_1) = 1\\ &\boxed{n=2} &&\qquad A_2 = \m{ 1 & 1 \\ 1 & 2 } &&\qquad \det(A_2) = 1\\ & &&\qquad B_2 = \m{ 1 & 2 \\ 2 & 2 } &&\qquad \det(B_2) = -2\\ &\boxed{n=3} &&\qquad A_3 = \m{ 1 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 2 & 3 } &&\qquad \det(A_3) = 1\\ & &&\qquad B_3 = \m{ 1 & 2 & 3 \\ 2 & 2 & 3 \\ 3 & 3 & 3 } &&\qquad \det(B_3) = 3\\ &\boxed{n=4} &&\qquad A_4 = \m{ 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 2 \\ 1 & 2 & 3 & 3 \\ 1 & 2 & 3 & 4 } &&\qquad \det(A_4) = 1\\ & &&\qquad B_4 = \m{ 1 & 2 & 3 & 4 \\ 2 & 2 & 3 & 4 \\ 3 & 3 & 3 & 4 \\ 4 & 4 & 4 & 4 } &&\qquad \det(B_4) = -4\\ &\boxed{n=5} &&\qquad A_5 = \m{ 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 2 & 2 \\ 1 & 2 & 3 & 3 & 3 \\ 1 & 2 & 3 & 4 & 4 \\ 1 & 2 & 3 & 4 & 5 } &&\qquad \det(A_5) = 1\\ & &&\qquad B_5 = \m{ 1 & 2 & 3 & 4 & 5 \\ 2 & 2 & 3 & 4 & 5 \\ 3 & 3 & 3 & 4 & 5 \\ 4 & 4 & 4 & 4 & 5 \\ 5 & 5 & 5 & 5 & 5 } &&\qquad \det(B_5) = 5\\ &\boxed{n=6} &&\qquad A_6 = \m{ 1 & 1 & 1 & 1 & 1 & 1\\ 1 & 2 & 2 & 2 & 2 & 2\\ 1 & 2 & 3 & 3 & 3 & 3\\ 1 & 2 & 3 & 4 & 4 & 4\\ 1 & 2 & 3 & 4 & 5 & 5\\ 1 & 2 & 3 & 4 & 5 & 6} &&\qquad \det(A_6) = 1\\ & &&\qquad B_6 = \m{ 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 2 & 3 & 4 & 5 & 6\\ 3 & 3 & 3 & 4 & 5 & 6 \\ 4 & 4 & 4 & 4 & 5 & 6 \\ 5 & 5 & 5 & 5 & 5 & 6 \\ 6 & 6 & 6 & 6 & 6 & 6} &&\qquad \det(B_6) = -6 \end{alignat*}
One reasonably conjectures that $\det(A_n) = 1$ for all $n$ and $\det(B_n) = (-1)^{n-1} n$ for all $n$.
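These conjectures are easy to check numerically beyond the cases above. Here is a quick sanity check (a sketch using NumPy; the `dets` helper is my own naming, not part of the problem):

```python
import numpy as np

def dets(n):
    """Return (det(A_n), det(B_n)) rounded to the nearest integer."""
    idx = np.arange(1, n + 1)
    A = np.minimum.outer(idx, idx)  # a_ij = min(i, j)
    B = np.maximum.outer(idx, idx)  # b_ij = max(i, j)
    return round(np.linalg.det(A)), round(np.linalg.det(B))

for n in range(1, 11):
    dA, dB = dets(n)
    assert dA == 1                    # conjecture: det(A_n) = 1
    assert dB == (-1) ** (n - 1) * n  # conjecture: det(B_n) = (-1)^(n-1) n
```

The rounding is harmless here because both determinants are integers and $n$ is small.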
Here is another approach that is slightly more involved, but it comes at the problem from a different angle and treats both $A_n$ and $B_n$ as special cases of a more general formula. We will actually compute the inverse matrices and relate them to graph Laplacians, then use Kirchhoff's theorem to evaluate the determinants.
Let $x$ be a variable, and define the $n\times n$ matrix $$L_n(x)=\left(\begin{array}{ccccc} -1 & 1 \\ 1 & -2 & 1\\ &\ddots& \ddots &\ddots\\ && 1 &-2 & 1\\ &&& 1 &-x-1 \end{array}\right)$$
The connection with graph theory is the following: consider the linear graph on $n+1$ nodes, with the edge weight between the last two nodes equal to $x$ and all other edge weights equal to $1$. Then $L_n(x)$ is the corresponding graph Laplacian with the last row and column removed (and with the opposite of the usual sign convention). As a consequence, we get the following:
Lemma: $\det(L_n(x))=(-1)^n x$
Proof: This follows from Kirchhoff's theorem on weighted spanning trees (the weighted matrix-tree theorem). Since the graph is linear, it has only a single spanning tree, namely the whole graph, whose weight is $1^{n-1}\cdot x = x$. The factor of $(-1)^n$ arises because we are using the opposite of the usual sign convention. $\square$
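The lemma can also be verified numerically for small $n$. Below is a sketch; the helper `L` (my own naming) builds the tridiagonal matrix defined above:

```python
import numpy as np

def L(n, x):
    """The n-by-n matrix L_n(x): diagonal (-1, -2, ..., -2, -x-1), 1's off the diagonal."""
    M = np.diag([-1.0] + [-2.0] * (n - 2) + [-x - 1.0])
    M += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return M

# Check det(L_n(x)) = (-1)^n x for several n and several values of x.
for n in range(2, 9):
    for x in (1.0, 2.0, -0.5):
        assert abs(np.linalg.det(L(n, x)) - (-1) ** n * x) < 1e-9
```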
Now let us compute $L_n(x)^{-1}$. We use the block decomposition $$L_n(x)=\left(\begin{array}{cc} L_{n-1}(1) & e_{n-1} \\ e_{n-1}^t & -x-1\end{array}\right)=:\left(\begin{array}{cc} A & B \\ C & D\end{array}\right)$$ where $e_{n-1}$ denotes the vector of length $n-1$ with a $1$ in the last entry and $0$ elsewhere.
Now apply the formula for the inverse of a block matrix. The main observation is that $A\textbf{1}=-e_{n-1}$, so that $A^{-1}e_{n-1}=-\textbf{1}$. The remaining calculations are routine, with the final result being
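For the record, the routine calculations amount to computing the Schur complement of $A$,
$$S := D - CA^{-1}B = -x-1-e_{n-1}^t A^{-1}e_{n-1} = -x-1-e_{n-1}^t(-\textbf{1}) = -x,$$
and plugging into the standard block-inverse formula:
$$A^{-1}+A^{-1}B\,S^{-1}CA^{-1}=A^{-1}+(-\textbf{1})(-1/x)(-\textbf{1}^t)=A^{-1}-\textbf{1}\textbf{1}^t/x$$
for the top-left block, $-A^{-1}B\,S^{-1}=-(-\textbf{1})(-1/x)=-\textbf{1}/x$ for the off-diagonal blocks, and $S^{-1}=-1/x$ for the bottom-right entry.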
$$L_n^{-1}(x)=\left(\begin{array}{cc} L_{n-1}^{-1}(1) -\textbf{1}\textbf{1}^t/x& -\textbf{1}/x \\ -\textbf{1}^t/x & -1/x\end{array}\right)=\left(\begin{array}{cc} L_{n-1}^{-1}(1) & 0 \\ 0 & 0\end{array}\right)-\textbf{1}\textbf{1}^t/x$$
If we set $x=1$, then we get a recurrence for $L_{n}^{-1}(1)$: it subtracts $1$ from every entry of $L_{n-1}^{-1}(1)$ and borders the result with a row and column of $-1$'s. It easily follows by induction that the solution is $(L_{n}^{-1}(1))_{ij}=-\min(n-i+1,n-j+1)$.
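This closed form for the inverse can be spot-checked numerically (a sketch; the helper `L`, my own naming, builds the tridiagonal matrix $L_n(x)$ defined above):

```python
import numpy as np

def L(n, x):
    # Tridiagonal L_n(x) from above: diagonal (-1, -2, ..., -2, -x-1), 1's off-diagonal.
    M = np.diag([-1.0] + [-2.0] * (n - 2) + [-x - 1.0])
    M += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return M

# Check (L_n^{-1}(1))_ij = -min(n-i+1, n-j+1).
for n in range(2, 9):
    idx = np.arange(1, n + 1)
    claimed = -np.minimum.outer(n - idx + 1, n - idx + 1)
    assert np.allclose(np.linalg.inv(L(n, 1.0)), claimed)
```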
Computing $\det(A_n):$
By the previous discussion, we can write $A_n=-P^t\,L_n^{-1}(1)\,P$, where $P$ is the permutation matrix that reverses the order of the indices, so that $(P^t M P)_{ij}=M_{n-i+1,\,n-j+1}$. Conjugating by a permutation does not change the determinant, so applying the Lemma, $\det(A_n)=(-1)^n\det(L_n(1))^{-1}=(-1)^n(-1)^n=1$.
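As a sanity check, the identity $A_n=-P^t\,L_n^{-1}(1)\,P$ can be confirmed numerically (a sketch; `L` is my own helper for the tridiagonal matrix defined above):

```python
import numpy as np

def L(n, x):
    # Tridiagonal L_n(x) from above: diagonal (-1, -2, ..., -2, -x-1), 1's off-diagonal.
    M = np.diag([-1.0] + [-2.0] * (n - 2) + [-x - 1.0])
    M += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return M

for n in range(2, 9):
    idx = np.arange(1, n + 1)
    A = np.minimum.outer(idx, idx).astype(float)  # A_n, with a_ij = min(i, j)
    P = np.eye(n)[::-1]                           # index-reversing permutation matrix
    assert np.allclose(A, -P.T @ np.linalg.inv(L(n, 1.0)) @ P)
    assert round(np.linalg.det(A)) == 1
```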
Computing $\det(B_n):$
For $1\leq i,j\leq n$, observe that $\max(i,j)=n-\min(n-i,n-j)$. For $1\le i,j\le n-1$ we have $-\min(n-i,n-j)=(L_{n-1}^{-1}(1))_{ij}$, while for $i=n$ or $j=n$ the term $-\min(n-i,n-j)$ vanishes. Thus $$B_n=\left(\begin{array}{cc} L_{n-1}^{-1}(1) & 0 \\ 0 & 0\end{array}\right)+n\textbf{1}\textbf{1}^t$$
and therefore $B_n=L_n(-1/n)^{-1}$, by the formula for $L_n^{-1}(x)$ above with $x=-1/n$. By the Lemma, $\det(B_n)=1/\det(L_n(-1/n))=1/\big((-1)^n\cdot(-1/n)\big)=(-1)^{n+1}n$.
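The identity $B_n=L_n(-1/n)^{-1}$ and the resulting determinant can likewise be checked numerically (a sketch; `L` is my own helper for the tridiagonal matrix defined above):

```python
import numpy as np

def L(n, x):
    # Tridiagonal L_n(x) from above: diagonal (-1, -2, ..., -2, -x-1), 1's off-diagonal.
    M = np.diag([-1.0] + [-2.0] * (n - 2) + [-x - 1.0])
    M += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return M

for n in range(2, 9):
    idx = np.arange(1, n + 1)
    B = np.maximum.outer(idx, idx).astype(float)  # B_n, with b_ij = max(i, j)
    assert np.allclose(B, np.linalg.inv(L(n, -1.0 / n)))
    assert round(np.linalg.det(B)) == (-1) ** (n + 1) * n
```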