The Wikipedia article on the computational complexity of matrix multiplication shows that the fastest known algorithms have gone from $O(n^{2.8074})$ in 1969 to $O(n^{2.37188})$ in 2022. I wonder if the following situation is possible: Could matrix multiplication be performed in $O(n^{2+\epsilon})$ operations for all $\epsilon>0$ (using possibly different algorithms for different values of $\epsilon$), but not solved in $O(n^2)$ operations by any algorithm?
I know little about computational complexity apart from the basic definitions, so I don't know whether there is any theorem or intuition that would show this to be impossible. It would be quite interesting if true!
Yes, it is possible. For a warm-up example from a different problem: it is known that any comparison-based sorting algorithm must perform at least $\lceil \log_2 n! \rceil \approx n \log_2 n$ comparisons in the worst case; this is the standard information-theoretic (decision-tree) lower bound.
Likewise, there are tasks for which the best known algorithm has complexity $c\, n^2 \log n$. Such a running time is not $O(n^2)$, yet it is $O(n^{2+\varepsilon})$ for every $\varepsilon > 0$ — and here a single algorithm works for all $\varepsilon$ simultaneously.
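Both containment claims about $c\, n^2 \log n$ can be checked directly from the definition of big-$O$; a short derivation:

```latex
% For every fixed \varepsilon > 0, \log n grows more slowly than n^{\varepsilon}:
\lim_{n \to \infty} \frac{\log n}{n^{\varepsilon}} = 0
\quad \Longrightarrow \quad
c\, n^{2} \log n = O\!\left(n^{2+\varepsilon}\right)
\text{ for every } \varepsilon > 0.

% But the ratio against n^2 is unbounded, so it is not O(n^2):
\frac{c\, n^{2} \log n}{n^{2}} = c \log n \;\xrightarrow[n \to \infty]{}\; \infty
\quad \Longrightarrow \quad
c\, n^{2} \log n \neq O\!\left(n^{2}\right).
```

So a running time of $c\, n^2 \log n$ exhibits exactly the situation the question asks about: inside $O(n^{2+\varepsilon})$ for every positive $\varepsilon$, but outside $O(n^2)$.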
But none of this applies directly to matrix multiplication, because the exponent of matrix multiplication is an open problem: it is not known whether it equals $2$, nor whether an $O(n^2)$ algorithm exists. Also, questions about what is "possible" should be phrased differently in mathematics — it deals with what is provable, not with subjective possibility.