A matrix $A\in \mathbb{R}^{m\times n}$ is given with the property
$$a_{i,j}\leq a_{i,j + 1}\quad\text{and}\quad a_{i, j}\leq a_{i + 1, j}$$ for all valid indices $i$ and $j$, i.e., every row and every column is nondecreasing. For example,
$$ A = \begin{bmatrix} -4 & -2 & -1 & 7\\ -3 & 2 & 3 & 7 \\ -1 & 4 & 9 & 10 \end{bmatrix}\text{.} $$
The submatrix of $A$ that contains rows $i$ with $i_0 \leq i \leq i_1$ and columns $j$ with $j_0\leq j\leq j_1$ is denoted by $A[i_0:i_1, j_0:j_1]$. Without loss of generality, we can assume $m\leq n$; otherwise, we swap the roles of rows and columns.
I have four algorithms for finding the number of negative elements $a_{i,j}$ in $A$:
- A brute-force one that checks every element of $A$: $\mathcal{O}(mn)$.
- An improved version of 1., where we scan the $i$-th row leftward, starting at the column of the last negative element in the previous row (in the first row, we start at column $n$): $\mathcal{O}(m + n)$.
- An improved version of 2., where binary search is used to find the last negative number in a given row.
- Recursive algorithm $F$, which is an improved version of 3.:
  - find the column index $J$ of the last negative number in row $i = m/2$
  - let $t_1 = F(A[1:i - 1, J + 1:n])$ and $t_2 = F(A[i + 1:m, 1:J])$
  - return $iJ + t_1 + t_2$ (rows $1$ through $i$ each contain at least the $J$ negatives lying in columns $1$ through $J$)
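Translating the steps above into code, here is a minimal Python sketch of $F$ (the naming is mine; I assume 0-based indices and use `bisect_left` for the binary search):

```python
from bisect import bisect_left

def count_negatives(A):
    """Recursive algorithm F: count negatives in a matrix whose rows
    and columns are both nondecreasing (0-based indices)."""
    def F(r0, r1, c0, c1):
        if r0 > r1 or c0 > c1:       # empty submatrix
            return 0
        mid = (r0 + r1) // 2
        # Column index of the first non-negative entry of row `mid`
        # within [c0, c1]; everything to its left is negative.
        split = bisect_left(A[mid], 0, c0, c1 + 1)
        J = split - c0               # negatives found in row `mid`
        # Rows r0..mid each contain at least these J negatives, because
        # columns are nondecreasing downward; count them in one product.
        return (J * (mid - r0 + 1)
                + F(r0, mid - 1, split, c1)       # unknown block above-right
                + F(mid + 1, r1, c0, split - 1))  # unknown block below-left
    return F(0, len(A) - 1, 0, len(A[0]) - 1) if A and A[0] else 0
```

On the example matrix above this returns $5$.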
So far, so good. The third algorithm is in $\mathcal{O}(m\log n)$, since each of the $m$ rows can take $\mathcal{O}(\log n)$ steps (for example, when all elements of $A$ are negative).
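For comparison, the third algorithm admits a one-line Python sketch (this simplified version binary-searches each full row, which matches the $\mathcal{O}(m\log n)$ worst case; the name is mine):

```python
from bisect import bisect_left

def count_negatives_rowwise(A):
    # Each row is sorted, so bisect_left(row, 0) is exactly the number
    # of entries strictly below 0 in that row: O(log n) per row.
    return sum(bisect_left(row, 0) for row in A)
```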
It seems that the fourth algorithm is considerably faster: in the aforementioned worst case it would take only $\mathcal{O}(\log m\log n)$ steps, since all the $t_1$'s would be computed in $\mathcal{O}(1)$. But I am not able to prove this in general.
What is the exact time complexity $T(m, n)$ of the fourth algorithm?
It satisfies the recurrence
$$T(m, n) = \mathcal{O}(\log n) + \max_J \bigl[T(m/2, n - J) + T(m/2, J)\bigr]\text{.}$$
If I try to plug $T(m, n)\leq c\log n\log m$ into the RHS, I find that
$$\text{RHS}\leq c_1\log n + c\max_J \bigl(\log (m/2) \log J + \log (m/2)\log (n - J)\bigr)\text{,}$$ where the maximum is achieved at $J = n/2$, since $\log J + \log(n-J) = \log\bigl(J(n-J)\bigr)$ is maximized there. But from here I cannot get further than
$$\text{RHS}\leq c_1\log n + c(\log m - 1)(2\log n - 2)\text{,}$$ which I cannot bring below $c\log n\log m$, even noting that $c_1 < 2$ and starting from the assumption $c \geq 2$.
Shorn of the preamble, you ask for the asymptotic growth rate of a function $T(m,n)$ satisfying $$ T(m,n)=O(\log n)+\max_{1\leq J\leq n}\bigl[T(m/2,n-J)+T(m/2,J)\bigr]\text{,} $$ and you believe it is $O(\log m\log n)$. Unfortunately, this belief is false: the function $T(m,n)=n$ satisfies the recurrence $$ T(m,n)=\max_{1\leq J\leq n}\bigl[T(m/2,n-J)+T(m/2,J)\bigr]\text{,} $$ since $(n-J)+J=n$ for every $J$.
This means you will need to work a little harder to show that your algorithm is fast: the recurrence you wrote down is not enough, on its own, to give a mathematical proof.
Note. The "problem" is that each time you cut $m$ in half, you can be left with two subproblems of comparable difficulty of size $n/2$. This won't give you logarithmic complexity; for that you would need there to be only one such subproblem.
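One way to make this quantitative, assuming each recursive call costs at least constant work: unrolling the recurrence with the adversarial choice $J = n/2$ at every level gives
$$ T(m,n)\;\geq\; 2\,T(m/2,\,n/2)\;\geq\;\cdots\;\geq\;2^{k}\,T\bigl(m/2^{k},\,n/2^{k}\bigr)\text{,} $$
so after $k=\log m$ halvings (recall $m\leq n$), $T(m,n)\geq m\,T(1,\,n/m)\geq m$, which is already more than polylogarithmic.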