It is often quoted that the complexity of Gaussian Process algorithms is $\mathcal{O}(n^3)$ due to the need to invert an $n \times n$ matrix, where $n$ is the number of data points. But as far as I can find online, matrix multiplication is also $\mathcal{O}(n^3)$.
So would the complexity of GP algorithms still be $\mathcal{O}(n^3)$ if we no longer needed to invert matrices? Or have I missed something?
Matrix multiplication is indeed of complexity $\mathcal{O}(n^3)$ (well, for schoolbook implementations). It is cubic when we multiply two $n \times n$ matrices together. However, in my experience it is fairly rare to need a full matrix-matrix product in machine learning algorithms, which is perhaps why we seldom hear people blaming the complexity of algorithms on matrix multiplication.
A more common operation is to perform matrix-vector multiplication, which has complexity $\mathcal{O}(n^2)$. In contrast, computing $A^{-1}$ alone would cost us $\mathcal{O}(n^3)$.
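To make this concrete, here is a small sketch of the pattern used in GP regression: in practice one does not form $K^{-1}$ explicitly, but Cholesky-factorises $K$ once at $\mathcal{O}(n^3)$ cost, after which each solve or prediction is only a matrix-vector operation. The data, kernel, and variable names below are purely illustrative.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Illustrative toy GP regression setup (names and kernel are assumptions).
rng = np.random.default_rng(0)
n = 50
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(n)

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between two sets of points.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

K = rbf_kernel(X, X) + 1e-2 * np.eye(n)  # kernel matrix plus noise term

# One O(n^3) Cholesky factorisation, instead of an explicit inverse.
c, low = cho_factor(K)
alpha = cho_solve((c, low), y)  # solves K alpha = y in O(n^2) given the factor

# The GP posterior mean at m test points is then a matrix-vector
# product costing O(n m), not another cubic operation.
X_test = np.linspace(-3, 3, 20)[:, None]
mean = rbf_kernel(X_test, X) @ alpha
```

The same cached factor can be reused for the predictive variance and the log marginal likelihood, so the cubic cost is paid once per kernel matrix, not once per prediction.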
If you no longer need to invert the matrix, the final complexity depends on what you replace this step with. The complexity stays the same if you replace it with another $\mathcal{O}(n^3)$ procedure; if you replace it with a cheaper procedure, the overall complexity drops accordingly.
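One common cheaper replacement, sketched below, is an iterative solver such as conjugate gradients: each iteration costs one matrix-vector product at $\mathcal{O}(n^2)$, so if the system is well conditioned and only a few iterations are needed, the total cost stays well below cubic. The matrix and names here are illustrative, not a full GP implementation.

```python
import numpy as np
from scipy.sparse.linalg import cg

# Illustrative symmetric positive definite "kernel" system (assumed setup).
rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)  # SPD and well conditioned by construction
y = rng.standard_normal(n)

# Conjugate gradients: each iteration does one K @ v product, O(n^2).
x, info = cg(K, y)  # info == 0 signals successful convergence
```

This is the idea behind many scalable GP approximations: avoid the cubic direct solve entirely and touch $K$ only through matrix-vector products.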