Suppose the problem in a neural network is to optimize over one set of parameters $\theta$, and the loss function is
$$E(\theta) = \mathrm{loss}_1(a, b) - \mathrm{loss}_2(b).$$
Can gradient descent minimize $E(\theta)$ in this problem, in the sense of minimizing $\mathrm{loss}_1$ while maximizing $\mathrm{loss}_2$? In other words, can we find such a $\theta$?
No. In the general case, you cannot simultaneously solve two different objectives by minimizing a single combined one. What you obtain instead is a solution that represents a compromise between the objectives, which is a different thing.
A trivial counterexample for your setup is $l_1(a,b) = a^2+b^2$ and $l_2(b) = b^2$. The cost $l_1(a,b)$ is minimized by $a=b=0$, while $l_2$ is unbounded above (its supremum is approached as $b \to \pm \infty$). But the combined cost $l_1(a,b)-l_2(b) = a^2$ is minimized by $a=0$ with $b$ arbitrary, so minimizing the difference neither minimizes $l_1$ nor maximizes $l_2$.
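You can see this numerically with a few lines of plain gradient descent on the counterexample above (a minimal sketch; the starting point and learning rate are arbitrary choices): the $b^2$ terms cancel, so the gradient with respect to $b$ is zero and $b$ never moves from its initial value.

```python
# Gradient descent on E(a, b) = l1(a, b) - l2(b)
# with l1(a, b) = a^2 + b^2 and l2(b) = b^2, so E(a, b) = a^2.
def grad_E(a, b):
    da = 2 * a              # dE/da = 2a
    db = 2 * b - 2 * b      # dE/db = 0: the b-terms cancel exactly
    return da, db

a, b = 3.0, 5.0             # arbitrary initialization
lr = 0.1
for _ in range(200):
    ga, gb = grad_E(a, b)
    a -= lr * ga
    b -= lr * gb

print(a, b)                 # a converges to ~0; b stays at 5.0
```

The run drives $a$ to zero but leaves $b$ untouched: $l_1$ is not minimized (it ends at $b^2 = 25$, not $0$) and $l_2$ is not maximized, even though $E$ itself is at its minimum.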