So I'm supposing $F:\mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$ is differentiable, and I have MATLAB code that evaluates $F$ at an arbitrary $x$ in $p$ flops.
I know that, given an arbitrary fixed point $\bar{x}$, I can estimate the Jacobian $J(\bar{x})$ in $(n+1) \cdot p$ flops by using the finite difference $$J(x) \cdot d \approx \frac{F(x + \alpha d) - F(x)}{\alpha}$$ and letting $d$ range over the columns of the identity matrix; in this fashion I can build the Jacobian approximation one column at a time, at the cost of $n+1$ evaluations of $F$.
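For concreteness, here is a minimal sketch of that column-by-column scheme in Python/NumPy (standing in for my MATLAB routine; the function `F`, the point `x`, and the step `alpha` below are placeholders):

```python
import numpy as np

def jacobian_fd(F, x, alpha=1e-6):
    """Forward-difference Jacobian estimate, one column per
    coordinate direction: n+1 evaluations of F in total."""
    x = np.asarray(x, dtype=float)
    Fx = F(x)                      # base evaluation, reused for every column
    n = x.size
    J = np.empty((Fx.size, n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = 1.0                 # i-th column of the identity
        J[:, i] = (F(x + alpha * d) - Fx) / alpha
    return J
```

So the dominant cost is the $n+1$ calls to `F`, each at $p$ flops.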
But I've been stuck on a further structure in my data that I'd like to exploit. Suppose $n = m^{2}$; the technique above still works, but the cost grows to $(m^{2}+1) \cdot p$ flops. Suppose also that $J = (J_{1}, J_{2}, \dots, J_{m})$ where $J_{i} \in \mathbb{R}^{m \times m}$ and each $J_{i}$ has all entries equal to zero except those in its $i$-th row, for $i = 1:m$. I feel like I should be able to adapt the finite-differencing method above so that I can evaluate the Jacobian approximation in $(m+1) \cdot p$ flops... I just can't figure out how.
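To make the sparsity pattern I have in mind concrete, here is an illustrative construction (the value of `m` and the random entries are arbitrary; only the zero pattern matters):

```python
import numpy as np

m = 3
n = m * m
rng = np.random.default_rng(0)

# J = [J_1 ... J_m], where each m-by-m block J_i is zero
# everywhere except in its i-th row.
J = np.zeros((m, n))
for i in range(m):
    J[i, i * m:(i + 1) * m] = rng.standard_normal(m)  # row i of block J_i
```

Each block contributes $m$ nonzeros, so $J$ has only $m^{2}$ nonzero entries out of $m^{3}$.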