I wish to optimize $J=\frac{I_1}{I_2}$, where $I_2=\int_0^T f(t)\,\theta(f(t))\,dt$ and $\theta(\cdot)$ is the Heaviside step function. To avoid the step function (since it is not differentiable), I imposed an inequality constraint $f(t)\geq 0$ on the denominator only, giving $I_2=\int_0^T f(t)\,dt$. Likewise, $I_1=\int_0^T f(t)\,dt$, but without the constraint. The entire functional $J$ is subject to an equality constraint $h(t)=0$. Any help, including a pointer to a good resource, is greatly appreciated.
Edit: fixed the wording and defined $\theta(\cdot)$ after seeing an answer.
Optimizing a ratio of two functionals, especially under constraints, is typically approached by combining the calculus of variations with constrained-optimization techniques such as Lagrange multipliers.
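One concrete technique for ratio objectives, offered here as a supplement, is Dinkelbach's method from fractional programming: maximizing $I_1/I_2$ (with $I_2 > 0$) is equivalent to finding the value $q^*$ at which $\max_f \big(I_1(f) - q\, I_2(f)\big) = 0$, and the iteration $q_{k+1} = I_1(f_k)/I_2(f_k)$ converges to it. A minimal sketch on an illustrative linear fractional program (the vectors `c`, `d` and the simplex feasible set are stand-ins, not the question's functionals):

```python
import numpy as np

# Dinkelbach's method for max f(x)/g(x): iterate q_{k+1} = f(x_k)/g(x_k),
# where x_k solves the parametric subproblem max_x f(x) - q_k * g(x).
# Illustrative linear fractional program over the unit simplex:
#   maximize (c @ x) / (d @ x),  x >= 0, sum(x) = 1, with d > 0.
c = np.array([3.0, 5.0, 2.0])
d = np.array([2.0, 4.0, 1.0])

q = 0.0
for _ in range(50):
    # The linear subproblem over the simplex is solved at a vertex e_i
    # that maximizes the reduced coefficient (c - q*d)_i.
    i = np.argmax(c - q * d)
    x = np.zeros_like(c)
    x[i] = 1.0
    q_new = (c @ x) / (d @ x)
    if abs(q_new - q) < 1e-12:
        break
    q = q_new

print(q)  # the best achievable ratio max_i c_i / d_i
```

Here the inner maximization is trivial because the subproblem is linear; in a variational setting it would itself be a constrained calculus-of-variations problem.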
Given the functionals:
\begin{equation} I_2 = \int_{0}^{T} f(t)\theta(f(t))\,dt \end{equation} \begin{equation} I_1 = \int_{0}^{T} f(t)\,dt \end{equation}
and the constraint $h(t) = 0$, if we wish to optimize the functional:
\begin{equation} J = \frac{I_1}{I_2} \end{equation}
subject to $f(t) \geq 0$, we can reformulate the problem using a Lagrange multiplier $\lambda(t)$ to enforce the inequality constraint and a multiplier $\mu$ for the equality constraint.
We define a Lagrangian $\mathcal{L}$ as:
\begin{equation} \mathcal{L}(f(t), \lambda(t), \mu) = I_1 - \mu I_2 - \int_{0}^{T} \lambda(t) f(t) \,dt \end{equation}
where $\lambda(t) \geq 0$ is the Lagrange multiplier corresponding to the inequality constraint. The term $-\int_{0}^{T} \lambda(t) f(t)\,dt$ enforces $f(t) \geq 0$; by complementary slackness, $\lambda(t) f(t) = 0$, so the multiplier can be nonzero only where the constraint is active, i.e., where $f(t) = 0$.
The condition for optimality is that the functional derivative of $\mathcal{L}$ with respect to $f(t)$ vanishes, i.e., $\frac{\delta \mathcal{L}}{\delta f(t)} = 0$, which gives:
\begin{equation} \frac{\delta I_1}{\delta f(t)} - \mu \frac{\delta I_2}{\delta f(t)} - \lambda(t) = 0 \end{equation}
Moreover, we must satisfy the Karush-Kuhn-Tucker (KKT) conditions due to the inequality constraint:
\begin{equation} \lambda(t) \geq 0, \quad f(t) \geq 0, \quad \lambda(t) f(t) = 0 \end{equation}
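The KKT conditions above can be checked numerically on a toy discretization. The sketch below uses an illustrative stand-in objective, $\min_f \sum_i (f_i - g_i)^2$ subject to $f_i \geq 0$ (not the question's ratio $J$), whose optimum is the projection $f = \max(g, 0)$:

```python
import numpy as np

# Illustrative KKT check for a bound-constrained discretized functional:
#   minimize sum((f - g)^2)  subject to  f >= 0.
# The closed-form optimizer is the projection of g onto the feasible set.
g = np.array([-1.0, 0.5, -0.3, 2.0])

f = np.maximum(g, 0.0)           # optimizer: projection of g onto f >= 0
lam = 2.0 * (f - g)              # stationarity: dL/df = 2(f - g) - lam = 0

assert np.all(lam >= 0)          # dual feasibility
assert np.all(f >= 0)            # primal feasibility
assert np.allclose(lam * f, 0)   # complementary slackness
print("KKT conditions satisfied")
```

Note how the multiplier is strictly positive exactly at the components where the constraint is active ($g_i < 0$, so $f_i = 0$), mirroring the complementary slackness condition above.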
To solve this optimization problem, one must typically assume specific forms for $f(t)$ and $\theta(f(t))$, or at least their variational derivatives, to obtain explicit solutions or numerical approximations.
Note that optimizing a ratio is not a standard problem in the calculus of variations and may not admit a closed-form solution. In practice, one may need numerical methods to find the $f(t)$ that optimizes $J$ while satisfying the given constraints.
This is a broad outline of the approach. The specifics depend on the forms of $f(t)$, $\theta(f(t))$, and $h(t)$, which were not provided in the question, so the exact solution procedure can vary significantly. For a comprehensive treatment of such problems, consult resources on constrained optimization and the calculus of variations.
############### EDIT ########################
Optimizing a functional that is a ratio of two integrals, particularly when one of the integrals involves the Heaviside step function, requires careful handling of non-differentiability. We are given:
\begin{equation} I_1 = \int_{0}^{T} f(t)\, dt \end{equation} \begin{equation} I_2 = \int_{0}^{T} f(t) \theta(f(t))\, dt \end{equation}
where $\theta(f(t))$ is the Heaviside step function, equal to 1 where $f(t) > 0$ and 0 otherwise (its value at $f(t) = 0$ is immaterial here, since it multiplies $f(t)$). The step function ensures that $I_2$ accumulates only the positive values of $f(t)$. We wish to optimize the functional:
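One practical workaround, not mentioned in the question and offered here as an assumption, is to replace the discontinuous $\theta$ with a smooth sigmoid surrogate $\theta_k(x) = 1/(1 + e^{-kx})$, which approaches the Heaviside step as $k \to \infty$ while keeping $I_2$ differentiable in $f$:

```python
import numpy as np

def theta_smooth(x, k=50.0):
    """Sigmoid surrogate for the Heaviside step; tends to theta(x) as k -> inf."""
    return 1.0 / (1.0 + np.exp(-k * x))

# Compare I2 with the exact step vs. the smooth surrogate on a sample f.
# f = sin(2*pi*t) on [0, 1] is an illustrative choice; exactly, I2 = 1/pi.
t, dt = np.linspace(0.0, 1.0, 100001, retstep=True)
f = np.sin(2.0 * np.pi * t)
I2_exact = np.sum(f * (f > 0.0)) * dt          # f(t) * theta(f(t))
I2_smooth = np.sum(f * theta_smooth(f)) * dt   # differentiable surrogate
print(I2_exact, I2_smooth)
```

The surrogate introduces a small bias near the zero crossings of $f$, traded off against differentiability; larger $k$ shrinks the bias but makes the gradient stiffer.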
\begin{equation} J = \frac{I_1}{I_2} \end{equation}
subject to the constraint $f(t) \geq 0$. To handle the non-differentiability of $\theta(f(t))$ and to include the inequality constraint, we can introduce a Lagrange multiplier $\lambda(t)$ for the constraint. The resulting Lagrangian $\mathcal{L}$ would be:
\begin{equation} \mathcal{L}(f(t), \lambda(t)) = I_1 - \int_{0}^{T} \lambda(t)\, f(t) \, dt \end{equation}
where $\lambda(t) \geq 0$ is the Lagrange multiplier associated with the inequality constraint $f(t) \geq 0$.
For $I_2$, since the Heaviside function is not differentiable, we cannot directly use the standard method of Lagrange multipliers, which requires differentiability. Instead, we need a method capable of handling non-smooth optimization problems.
One approach is to use the concept of subdifferentials from convex analysis. The subdifferential of $\theta(f(t))$, denoted $\partial \theta(f(t))$, contains all subgradients at $f(t)$: the allowable slopes that can serve as "derivatives" for optimization purposes.
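It is worth noting (an addition beyond the answer above) that the integrand $f(t)\,\theta(f(t))$ equals $\max(f(t), 0)$, which, unlike $\theta$ itself, is convex and has a standard subdifferential: $\{0\}$ for $f < 0$, $\{1\}$ for $f > 0$, and the whole interval $[0, 1]$ at the kink $f = 0$. A small sketch returning this set pointwise:

```python
def subdiff_relu(x, tol=1e-12):
    """Subdifferential of max(x, 0) at x, returned as the interval (lo, hi)."""
    if x > tol:
        return (1.0, 1.0)   # smooth region: unique gradient 1
    if x < -tol:
        return (0.0, 0.0)   # smooth region: unique gradient 0
    return (0.0, 1.0)       # kink at 0: any slope in [0, 1] is a subgradient

print(subdiff_relu(2.0), subdiff_relu(-1.0), subdiff_relu(0.0))
```

Working with $\max(f, 0)$ rather than $\theta$ directly keeps the problem within standard convex non-smooth analysis.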
The condition for $f(t)$ to be optimal is that the subgradient of $\mathcal{L}$ with respect to $f(t)$ must contain 0, which can be formulated as:
\begin{equation} 0 \in \frac{\delta I_1}{\delta f(t)} - \lambda(t) + \partial \theta(f(t)) \end{equation}
Additionally, the complementary slackness condition from the Karush-Kuhn-Tucker (KKT) conditions must be satisfied:
\begin{equation} \lambda(t) f(t) = 0 \end{equation}
Because $\theta(f(t))$ is discontinuous, the analysis becomes significantly more complex and requires specialized techniques.
The overall problem of optimizing $J$ with the constraint $f(t) \geq 0$ does not yield to standard variational-calculus methods due to the presence of the non-differentiable Heaviside function. Instead, one must employ methods from non-smooth optimization, which may lead to a numerical rather than analytical solution.
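As a concrete instance of such a non-smooth numerical method, the sketch below runs projected subgradient descent on an illustrative discretized objective, $\min_f \sum_i |f_i - g_i|$ subject to $f_i \geq 0$ (a stand-in with known optimum $f = \max(g, 0)$, not the question's $J$):

```python
import numpy as np

# Projected subgradient descent for a non-smooth discretized functional:
#   minimize sum(|f - g|)  subject to  f >= 0.
# Illustrative objective; the closed-form optimum is f = max(g, 0).
rng = np.random.default_rng(0)
g = rng.uniform(-1.0, 1.0, size=20)

f = np.zeros_like(g)
for k in range(1, 2001):
    sub = np.sign(f - g)        # a subgradient of |f - g| (0 at the kinks)
    f = f - (1.0 / k) * sub     # diminishing step size 1/k
    f = np.maximum(f, 0.0)      # project back onto the feasible set f >= 0

print(np.max(np.abs(f - np.maximum(g, 0.0))))  # distance to the known optimum
```

The diminishing step size $1/k$ is what makes the iteration converge despite the non-smoothness; a fixed step would only reach a neighborhood of the optimum.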
For further resources on such optimization problems, one can refer to texts on convex analysis and non-smooth optimization, such as "Convex Analysis" by R. Tyrrell Rockafellar or "Nonsmooth Analysis and Control Theory" by F.H. Clarke.