When deriving determinants from first principles, we start with multi-linearity and the alternating property. Coupling this with the fact that the determinant of the identity matrix must be $1$, we get the Leibniz formula for determinants. All of this is spelled out in this video. It also motivates the two properties (multi-linearity and the alternating property) geometrically.
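(For concreteness, here is a minimal sketch of the Leibniz formula $\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_i a_{i,\sigma(i)}$ in plain Python — the permutation-sign helper is my own illustration, not from the video:)

```python
from itertools import permutations
from math import prod

def parity(perm):
    """Sign of a permutation given as a tuple of indices,
    computed by counting the swaps needed to sort it."""
    perm = list(perm)
    sign = 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            sign = -sign
    return sign

def det_leibniz(A):
    """Determinant via the Leibniz formula: a signed sum over all n! permutations."""
    n = len(A)
    return sum(parity(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# det [[1,2],[3,4]] = 1*4 - 2*3 = -2
print(det_leibniz([[1, 2], [3, 4]]))
```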
I'm wondering whether there is a purely algebraic motivation for why multi-linearity and the alternating property are indispensable. Does it have something to do with the fact that we're quantifying the effects of linear maps?
The determinant has many different properties that uniquely determine it, so it can be developed starting from several different combinations of those properties. Alternating + multilinear is a common one, as is the oriented-volume interpretation over $\mathbb{R}$.
However, I'd argue that neither of these is a good place to motivate the determinant, and neither of them is the historical motivation. Alternating + multilinear leads to elegant proofs but it leads to exactly your question: why study a function satisfying these properties? Oriented volumes are a good thing to know about, but they only motivate the determinant over $\mathbb{R}$, whereas the determinant in fact makes sense and is very useful over any field, and even over any commutative ring. In fact, hardly any treatments of the determinant really emphasize one of its most important features in practice: it is a polynomial, with integer coefficients.
The historical motivation of the determinant is the simplest possible one you could imagine, and the one students are first exposed to: solving systems of linear equations. It is just the function you are forced to write down when you solve systems of $n$ linear equations in $n$ variables. It is called the determinant because it determines when such systems have unique solutions.
So, along these lines, here is a purely algebraic definition of the determinant that does not mention the concept of an alternating or multilinear map at all, and defines the determinant as a polynomial (so, over every commutative ring simultaneously): start with an $n \times n$ matrix whose entries are indeterminates
$$X = \left[ \begin{array}{cccc} x_{11} & x_{12} & \dots & x_{1n} \\ x_{21} & x_{22} & \dots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \dots & x_{nn} \end{array} \right].$$
This is the universal matrix. Now suppose I asked you to try to invert this matrix. A systematic way to do this would be row reduction. In terms of solving systems of linear equations this corresponds to trying to eliminate variables. If you row reduce the above matrix while repeatedly assuming that every expression you need to be invertible is actually invertible (so, precisely, working over the fraction field of $\mathbb{Z}[x_{ij}]$), you will eventually write down the LU decomposition of the universal matrix, which you can use to write down the inverse.
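(To make the row-reduction step concrete: here is a sketch of Gaussian elimination without row swaps, in exact rational arithmetic via `fractions.Fraction` — a concrete integer matrix stands in for the universal one, and "every pivot is nonzero" stands in for "every expression you need to invert is invertible." The example matrix is my own choice:)

```python
from fractions import Fraction

def lu_no_pivot(A):
    """Plain Gaussian elimination (no row swaps), exactly over the rationals.
    Returns (L, U) with A = L * U, assuming every pivot is nonzero."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]          # requires pivot U[k][k] != 0
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

A = [[2, 1, 1], [4, 3, 3], [8, 7, 9]]
L, U = lu_no_pivot(A)
# With no row swaps, the determinant is the product of the pivots: 2 * 1 * 2 = 4
print(U[0][0] * U[1][1] * U[2][2])
```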
What you will find is that the inverse has the property that every entry is a rational function of the $x_{ij}$, and moreover all the denominators are the same polynomial. This polynomial is the determinant (this is Cramer's rule). So, with this approach to the determinant we make the decision that the real defining feature of the determinant is that it is a polynomial in the entries of a matrix $X$ that is invertible iff $X$ is invertible, and whose inverse can be used to write down $X^{-1}$ explicitly. (Over a field we could also say "that vanishes iff $X$ is non-invertible." Also, this only tells us the determinant up to a sign but we can fix the sign by requiring that $\det(I) = 1$.)
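(Here is a quick numerical sanity check of that shared-denominator claim, again in exact rational arithmetic — the Gauss–Jordan routine and the test matrix are my own illustration. For an integer matrix, every entry of the inverse is an integer cofactor divided by the determinant, so multiplying the inverse through by $\det A$ clears every denominator at once:)

```python
from fractions import Fraction

def inverse(A):
    """Gauss-Jordan inversion in exact rational arithmetic
    (assumes A is invertible and no row swaps are needed)."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = M[k][k]
        M[k] = [x / p for x in M[k]]
        for i in range(n):
            if i != k and M[i][k]:
                M[i] = [a - M[i][k] * b for a, b in zip(M[i], M[k])]
    return [row[n:] for row in M]

A = [[2, 1, 1], [4, 3, 3], [8, 7, 9]]   # det A = 4
Ainv = inverse(A)
# Cramer's rule: every entry of A^{-1} is (integer cofactor) / det(A),
# so det(A) * A^{-1} has integer entries (it is the adjugate, up to sign conventions).
print(all((4 * x).denominator == 1 for row in Ainv for x in row))  # prints True
```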
This definition is an awkward starting point because it's hard to prove things about (we need to prove Cramer's rule even to get started), which is probably why it's not a popular choice. But in my opinion this is the best-motivated definition from first principles by far. We get to alternating + multilinear from here by proving that $\det(XY) = \det(X) \det(Y)$, which you can see (up to a sign) from the fact that if you take $X$ and $Y$ to both have indeterminate entries $x_{ij}$ and $y_{ij}$ then $(XY)^{-1} = Y^{-1} X^{-1}$ has entries consisting of a bunch of rational functions with denominator $\det(Y) \det(X)$. From multiplicativity it follows that the determinant has the usual behavior you expect with respect to row and column operations, and it's not hard to prove that it's alternating + multilinear from here (over, and I want to keep emphasizing this, every commutative ring simultaneously).
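(Multiplicativity is a polynomial identity, so it holds over every commutative ring at once; here is a modest exact check over $\mathbb{Z}$, with a cofactor-expansion determinant and matrix product written out by hand as my own illustration:)

```python
import random

def det(A):
    """Determinant by cofactor expansion along the first row (exact over the integers)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def matmul(A, B):
    """Product of two square integer matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(0)
for _ in range(5):
    X = [[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]
    Y = [[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]
    # det(XY) = det(X) det(Y), exactly, in integer arithmetic
    assert det(matmul(X, Y)) == det(X) * det(Y)
print("ok")
```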