$$\begin{cases}ax + by = k_1\\cx + dy = k_2\end{cases}$$
If I want to solve for $y$ in the first equation:
$$by = k_1 - ax\implies y = \frac{k_1-ax}{b}$$
Then substitute $y$ in the second equation:
$$cx + d\,\frac{k_1-ax}{b} = k_2\implies \frac{bcx}{b}+\frac{dk_1-dax}{b} = k_2\implies bcx + dk_1 - dax = bk_2$$
$$\implies x(bc-da) = bk_2-dk_1\implies x = \frac{bk_2 - dk_1}{bc-da}$$
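Back-substituting this $x$ into $y = \frac{k_1-ax}{b}$ gives $y$ with the same denominator (sketching the algebra):

$$y = \frac{k_1 - a\,\frac{bk_2-dk_1}{bc-da}}{b} = \frac{k_1(bc-da) - a(bk_2-dk_1)}{b(bc-da)} = \frac{bck_1 - abk_2}{b(bc-da)} = \frac{ck_1-ak_2}{bc-da}$$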
The denominator of this fraction should match the first historical definition of the determinant of the matrix, since it determines whether or not the system has a solution. The determinant of
$$\begin{bmatrix}a & b\\c & d\end{bmatrix}$$
should be $ad-bc$, but my solution to the system gives the negative of that. I know that to check whether the system has a solution we only need to test whether the determinant is $0$, so the sign causes no problem there. Still, I would like to know why I get a different determinant.
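In other words, my result differs only by an overall sign: multiplying the numerator and denominator by $-1$ already puts it in the standard form,

$$x = \frac{bk_2 - dk_1}{bc - da} = \frac{dk_1 - bk_2}{ad - bc} = \frac{\begin{vmatrix}k_1 & b\\k_2 & d\end{vmatrix}}{\begin{vmatrix}a & b\\c & d\end{vmatrix}}$$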
I need my solution to match the original definition of the determinant because I want to express the solutions for $x$ and $y$ as a ratio of determinants, as in Cramer's rule for solving systems.
Also, does anybody know how to empirically obtain the general rule for the determinant of an $n\times n$ system? It seems pretty obscure to me, because I can't find anything on Google. All proofs of Cramer's rule already assume a definition of the determinant, but I see the determinant as a consequence of the rule, because it determines the existence of a solution. This is pretty important to me, and I would like to know if there's a book that discusses it, because I've never seen one...
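To illustrate what I mean by "empirically": one can repeat my $2\times 2$ experiment on a generic $3\times 3$ system with a computer algebra system. A minimal sketch (assuming sympy is available; the symbol names `a0`...`a8`, `k1`...`k3` are just for illustration) that solves the system symbolically and checks that the common denominator of every solution is $\pm\det$ of the coefficient matrix:

```python
# Empirical check: solve a fully generic 3x3 linear system symbolically and
# verify that the reduced denominator of each solution equals the
# determinant of the coefficient matrix, up to sign.
import sympy as sp

A = sp.Matrix(3, 3, sp.symbols('a0:9'))   # generic 3x3 coefficient matrix
k = sp.Matrix(sp.symbols('k1 k2 k3'))     # generic right-hand side
unknowns = sp.symbols('x y z')

# Solve the system A * (x, y, z)^T = k for the unknowns.
sol = sp.solve(list(A * sp.Matrix(unknowns) - k), list(unknowns), dict=True)[0]

det = A.det()
matches = []
for v in unknowns:
    den = sp.fraction(sp.cancel(sol[v]))[1]   # denominator in lowest terms
    matches.append(sp.expand(den - det) == 0 or sp.expand(den + det) == 0)
print(matches)  # each entry True: denominator is +/- det(A)
```

So the "determines whether a solution exists" polynomial shows up as the shared denominator, exactly as in the $2\times 2$ case.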