If the LU factorisation of a matrix is not unique, which result is MATLAB showing?
For example, I have the matrix $\begin{pmatrix} 1 & 4 & 3 \\ -2 & 2 & 3 \\ -1&4&2\end{pmatrix}$. The LU factorisation I got is $L = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 0.8 & 1 \end{pmatrix}$ and $U = \begin{pmatrix} 1 & 4 & 3 \\ 0 & 10 & 9 \\ 0 & 0 & -2.2 \end{pmatrix}$.
MATLAB's result is $L = \begin{pmatrix} -0.5000 & 1.0000 & 0 \\ 1.0000 & 0& 0 \\ 0.5000 & 0.6000 & 1.0000 \end{pmatrix}$, $U = \begin{pmatrix} -2.0000 & 2.0000 & 3.0000 \\ 0 & 5.0000& 4.5000\\ 0 & 0 & -2.2000\end{pmatrix}$.
Who is correct, and what algorithm is MATLAB using?
MATLAB's [L,U]=lu(A) doesn't necessarily give an LU decomposition of $A$ in the strict sense (with $L$ lower triangular), even when one exists. This is because MATLAB runs Gaussian elimination with partial pivoting, which performs row interchanges in addition to scaling and adding rows. It performs these interchanges even when they are not mathematically required, as here, because they improve numerical stability.
In Gaussian elimination with partial pivoting, you exchange rows (but not columns) so that the pivot chosen in the $i$th column is the entry of largest magnitude in or below the pivot position. In mathematical notation we usually say that Gaussian elimination with row interchanges provides a "$PA=LU$" decomposition; in this notation, the L that MATLAB returns is $P^T L$, which is usually not lower triangular, and indeed wasn't in your case. MATLAB can instead return $P$ explicitly, with $L$ genuinely lower triangular, via the three-output call [L,U,P]=lu(A).
In this particular example, Gaussian elimination with partial pivoting proceeds by first exchanging rows 1 and 2 (since $|-2| > |1|$), then clearing column 1. The best pivot in the second column is then already in position, so you clear column 2 without another interchange, and you're done.
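To make those steps concrete, here is a minimal NumPy sketch of Gaussian elimination with partial pivoting (not MATLAB's actual implementation, which calls LAPACK; the function name is mine). Run on your matrix, it reproduces MATLAB's $U$, and $P^T L$ reproduces MATLAB's non-triangular two-output L:

```python
import numpy as np

def lu_partial_pivoting(A):
    """Return P, L, U with P @ A = L @ U, choosing as pivot the
    largest-magnitude entry in or below the pivot position."""
    A = A.astype(float)                       # working copy (astype copies)
    n = A.shape[0]
    perm = np.arange(n)
    L = np.zeros((n, n))
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # row of the best pivot
        if p != k:                            # interchange rows
            A[[k, p]] = A[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]
            perm[[k, p]] = perm[[p, k]]
        for i in range(k + 1, n):             # clear column k
            L[i, k] = A[i, k] / A[k, k]
            A[i, k:] -= L[i, k] * A[k, k:]
    np.fill_diagonal(L, 1.0)
    return np.eye(n)[perm], L, np.triu(A)

A = np.array([[1, 4, 3], [-2, 2, 3], [-1, 4, 2]])
P, L, U = lu_partial_pivoting(A)
print(U)        # matches MATLAB's U
print(P.T @ L)  # matches MATLAB's two-output L
```

Note the single interchange recorded in `perm`: rows 1 and 2 are swapped before any elimination, exactly as described above.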
For some context as to why you would do this, consider the example of $Ax=b$ with $A=\begin{bmatrix} 1 & 1 \\ M & 1 \end{bmatrix}$, where $M$ is a representable number greater than $\varepsilon_{\text{mach}}^{-1}$. (The symbol $\varepsilon_{\text{mach}}$ denotes "machine epsilon", a term you can look up; in the usual double-precision arithmetic used on modern computers it is about $10^{-16}$, so $M$ must exceed roughly $10^{16}$.)
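As a quick check of these magnitudes (a NumPy aside; the choice $M = 10^{17}$ is just one representable number exceeding $\varepsilon_{\text{mach}}^{-1}$):

```python
import numpy as np

eps = np.finfo(float).eps    # machine epsilon, about 2.2e-16 in double precision
M = 1e17                     # any representable M > 1/eps will do
print(eps)
print(1.0 - M == -M)         # True: 1 - M rounds to exactly -M
```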
Now $A=\begin{bmatrix} 1 & 0 \\ M & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 0 & 1-M \end{bmatrix}$. In computer arithmetic, computing the decomposition instead yields $\begin{bmatrix} 1 & 0 \\ M & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 0 & -M \end{bmatrix}$, because $1-M$ rounds to $-M$ when $M > \varepsilon_{\text{mach}}^{-1}$. At first glance that doesn't look so bad: you commit only a small relative error in the lower-right entry.
But now consider $b=\begin{bmatrix} M \\ 1 \end{bmatrix}$. The true solution is $x=\begin{bmatrix} -1 \\ M+1 \end{bmatrix}$, so the best numerical solution we could realistically hope for is $\begin{bmatrix} -1 \\ M \end{bmatrix}$ (since $M+1$ rounds to $M$). But if you solve $\begin{bmatrix} 1 & 0 \\ M & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 0 & -M \end{bmatrix} x = \begin{bmatrix} M \\ 1 \end{bmatrix}$ in exact arithmetic, you instead get $x = \begin{bmatrix} 1/M \\ M-1/M \end{bmatrix}$. A huge relative error is committed in computing the first entry, and moreover the second equation of the original system is far from being satisfied. Try doing this over again with $\begin{bmatrix} M & 1 \\ 1 & 1 \end{bmatrix} x = \begin{bmatrix} 1 \\ M \end{bmatrix}$ (this system after swapping rows) and see what happens.
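The whole experiment fits in a few lines of NumPy (a sketch under the same setup, again with the illustrative choice $M = 10^{17}$). Elimination without pivoting destroys the first entry of the solution, while np.linalg.solve, which uses LAPACK's partial-pivoting factorisation, recovers it:

```python
import numpy as np

M = 1e17                                  # representable, M > 1/eps
A = np.array([[1.0, 1.0], [M, 1.0]])
b = np.array([M, 1.0])                    # true solution: x = (-1, M + 1)

# Factor without pivoting: L = [[1, 0], [M, 1]], U = [[1, 1], [0, 1 - M]].
l21 = A[1, 0] / A[0, 0]                   # multiplier M
u22 = A[1, 1] - l21 * A[0, 1]             # fl(1 - M) = -M
# Forward substitution (L y = b), then back substitution (U x = y):
y2 = b[1] - l21 * b[0]                    # fl(1 - M^2) = -M^2
x2 = y2 / u22
x1 = (b[0] - 1.0 * x2) / A[0, 0]          # comes out 0, not -1
x_naive = np.array([x1, x2])

# Library solve does partial pivoting and gets the first entry right.
x_pivot = np.linalg.solve(A, b)
print(x_naive, x_pivot)
```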