a) Assume that $(a_1, ..., a_9)$ are different positive numbers. Let us make a $3 \times 3$ matrix $A_s$ by placing them arbitrarily into $9$ positions available. Show that there is always a way to assemble them so that $\det(A_s) > 0$.
b) Assume now that some of $a_i, i =1,\dots,9$ are equal and the total number of different values taken by the $a_i$ is $N \in \{1,2,...,9\}$. What is the minimal $N$ which guarantees the existence of $A_s$ as above with $\det(A_s) >0$? Give a proof.
For part (a), I tried to show that, with row operations, $\det(A_s)$ can never equal $0$, so that the matrix is invertible, with determinant either negative or positive. Then if the determinant is positive, the proof is complete; if not, then simply interchange any two rows, which negates the determinant, giving a positive determinant as required. However, this proof doesn't work: the determinant can in fact be zero. Using just the numbers $1, \dots, 9$, the row $(7,8,9)$ lies in the span of $(1,2,3)$ and $(4,5,6)$, since $(7,8,9) = 2(4,5,6) - (1,2,3)$.
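Indeed, a quick check in plain Python (no external libraries) confirms this particular arrangement is singular:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of three rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# The natural arrangement of 1..9 is singular:
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(det3(A))  # 0, since (7,8,9) = 2*(4,5,6) - (1,2,3)
```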
Since the question is asking to show that there is always a way, i.e., there exists a way to achieve $\det(A_s)>0$, I feel that I should argue by contradiction, and assume first that there is no way to achieve it.
Perhaps I can use the fact that $\operatorname{trace} A_s$ must be positive, but I don't see it right now.
Any hints would be greatly appreciated.
Thanks.
For part $(a)$ you are almost done. If $\mathrm{det}(A)=0$ and
$$A=\begin{bmatrix} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \\ a_7 & a_8 & a_9 \end{bmatrix},$$ then $(a_1,a_2,a_3)=c_1(a_4, a_5, a_6)+c_2(a_7,a_8,a_9)$ for some $c_1,c_2 \in \mathbb{R}$. Note that $c_1, c_2 \neq 0$: if, say, $c_1 = 0$, then $(a_1,a_2,a_3)$ would be a scalar multiple of $(a_7,a_8,a_9)$, which we rule out for now (see the edit below). We claim that switching two suitable entries within a column makes the rank of the matrix $3$, and the determinant will therefore be non-zero.
E.g., it is given that $a_1=c_1a_4+c_2a_7$. But if we switch $a_1$ with $a_4$, we can show that there cannot exist $c_3$ and $c_4$ such that $a_4=c_3a_1+c_4a_7$, $a_2= c_3a_5+c_4a_8$, and $a_3=c_3a_6+c_4a_9$ unless either $c_1=0$, $c_2=0$, $a_1=a_2$, $a_4=a_5$, or $a_8=a_9$. I leave this to you to verify. (Edit: as the OP pointed out to me in the comments below, I have also assumed here that the column vectors of $A$ were not scalar multiples. Thanks Lebron!). We now know how to rectify the situation where $\mathrm{det}(A)=0$. You have already demonstrated what to do if the determinant is negative, so the construction is complete.
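If you want a concrete sanity check of part $(a)$'s conclusion (not of the argument itself), a brute-force search over all $9!$ placements of the sample values $1,\dots,9$ — a quick Python sketch — confirms that some arrangement has positive determinant:

```python
from itertools import permutations

def det3(m):
    """Determinant of a 3x3 matrix given as a list of three rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Search all 9! = 362880 placements of 1..9; any() stops at the
# first arrangement with positive determinant.
found = any(
    det3([p[0:3], p[3:6], p[6:9]]) > 0
    for p in permutations(range(1, 10))
)
print(found)  # True
```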
For part $(b)$, we work backwards from $N=1$. This is the degenerate case: there is but a single matrix we can construct, and its determinant is zero. Consider $N=2$. There are now only $4$ scenarios to check, according to how the nine entries split into two groups of equal values: group sizes $1$ and $8$, sizes $2$ and $7$, sizes $3$ and $6$, or sizes $4$ and $5$. We show that it is possible to end up in a situation where a matrix of non-zero determinant cannot be constructed.
Consider the instance when $a_i=a_j$ for all $1 \leq i,j \leq 8$ and $a_9$ is the distinct element (the $\{1,8\}$ split). Then in every arrangement, two rows are equal to one another, so the rank is at most two and the determinant vanishes.
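An exhaustive check of this case is easy, since the matrix is determined by where the lone distinct entry lands (a small Python sketch; the sample values $a=1$, $b=2$ are arbitrary):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of three rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Eight copies of a and one b: try all 9 positions for b.
a, b = 1, 2
dets = []
for pos in range(9):
    cells = [a] * 9
    cells[pos] = b
    rows = [cells[0:3], cells[3:6], cells[6:9]]
    dets.append(det3(rows))
print(all(d == 0 for d in dets))  # True: two all-a rows always coincide
```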
Consider now when $N=3$. I claim that this is the minimum. In this case, there are only $7$ ways to split the nine entries into three groups of equal values. The group sizes in each instance are $$\{1,1,7\}, \{1,2,6\}, \{1,3,5\}, \{1,4,4\}, \{2,2,5\}, \{2,3,4\}, \text{ and } \{3,3,3\}.$$
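The count of $7$ can be double-checked by enumerating the unordered ways to write $9$ as a sum of three positive parts, e.g. in Python:

```python
# Unordered ways to write 9 = n1 + n2 + n3 with 1 <= n1 <= n2 <= n3.
parts3 = [(i, j, 9 - i - j)
          for i in range(1, 10)
          for j in range(i, 10)
          if 9 - i - j >= j]
print(len(parts3), parts3)
# 7 [(1, 1, 7), (1, 2, 6), (1, 3, 5), (1, 4, 4), (2, 2, 5), (2, 3, 4), (3, 3, 3)]
```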
E.g., in the partition $\{1,3,5\}$, there is a number which occurs only once in the matrix, a number which occurs $3$ times, and a number which occurs $5$ times. Call the three distinct elements $a$, $b$, and $c$, and let the partition record the multiplicity of each: $\{\#a,\#b,\#c\}$. We will now show by construction that for each of the above partitions a matrix of rank $3$ exists; by part $(a)$, a row swap then makes the determinant positive if necessary.
For $\{1,1,7\}$:
$$\begin{bmatrix} a & c & c \\ c & b & c \\ c & c & c \end{bmatrix}, \qquad \det = c(a-c)(b-c) \neq 0.$$
For $\{1,2,6\}$:
$$\begin{bmatrix} a & b & c \\ c & b & c \\ c & c & c \end{bmatrix}, \qquad \det = c(b-c)(a-c) \neq 0.$$
For $\{1,3,5\}$:
$$\begin{bmatrix} a & b & c \\ c & b & c \\ c & c & b \end{bmatrix}, \qquad \det = (b-c)(b+c)(a-c) \neq 0.$$
For $\{1,4,4\}$:
$$\begin{bmatrix} a & b & c \\ b & c & b \\ c & b & c \end{bmatrix}, \qquad \det = (c-b)(c+b)(a-c) \neq 0.$$
For $\{2,2,5\}$:
$$\begin{bmatrix} a & a & b \\ c & b & c \\ c & c & c \end{bmatrix}, \qquad \det = c(b-c)(a-b) \neq 0.$$
For $\{2,3,4\}$:
$$\begin{bmatrix} a & b & b \\ c & a & c \\ c & b & c \end{bmatrix}, \qquad \det = c(a-b)^2 \neq 0.$$
For $\{3,3,3\}$:
$$\begin{bmatrix} a & b & a \\ c & c & b \\ b & a & c \end{bmatrix}, \qquad \det = (c-b)(a-b)(a+b+c) \neq 0.$$
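As a final sanity check, here is a small Python sketch confirming numerically that the $\{3,3,3\}$ arrangement above is non-singular for randomly chosen distinct positive values; its determinant factors as $(c-b)(a-b)(a+b+c)$, which cannot vanish when $a$, $b$, $c$ are distinct and positive:

```python
import random

def det3(m):
    """Determinant of a 3x3 matrix given as a list of three rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

random.seed(0)
ok = True
for _ in range(1000):
    a, b, c = random.sample(range(1, 1000), 3)  # three distinct positive ints
    M = [[a, b, a], [c, c, b], [b, a, c]]       # the {3,3,3} arrangement
    d = det3(M)
    # non-zero, and equal to the claimed factorization
    ok = ok and d != 0 and d == (c - b) * (a - b) * (a + b + c)
print(ok)  # True
```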
Phew! I enjoyed that; it was a very cool question, and I have a feeling this is not the best way to go about proving it! Please let me know if you think I've made any mistakes here.