I am currently trying to maximize an objective function $f(a,b,c,d,e)$ over the variable $b$ only.
By taking the derivative of $f$ with respect to $b$ and setting it to zero, I can solve for $b$ in terms of the other four variables: $b=g(a,c,d,e)$.
Graphically, if I substitute some values of $a,c,d,e$ into $f$ and use $g(a,c,d,e)$ from above to get $b$, I can see that $f$ reaches its maximum value at that specific $b$. If I change the value of $b$, I can see that $f$ decreases.
But then the question is, how do I prove that $b=g(a,c,d,e)$ is the point that maximizes $f$? More specifically, I am looking for the global maximum inside the interval $[0,1]$.
PS: the constraint is that $b$ lies inside the interval $[0,1]$.
Update:
I find that there is exactly one critical point of $f$ with respect to $b$ inside $[0,1]$.
I am not sure whether single-variable calculus applies here. But if it does: since $b$ lies in the interval $[0,1]$, and there is only one critical point (where the partial derivative of $f$ with respect to $b$ is zero) inside this interval, then if $f$ doesn't diverge, the global maximum is either on the boundary or at the critical point.
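This reasoning can be sanity-checked numerically. A minimal sketch, assuming a *hypothetical* objective $f_{\bf p}(b)=b-b^2$ (not the asker's $f$, which is unspecified), whose unique critical point is at $b=1/2$: scan $b$ over a fine grid in $[0,1]$ and confirm the grid maximum sits at the critical point rather than anywhere else.

```python
# Hedged numerical check with a HYPOTHETICAL objective (the asker's f is unknown).
# f_p(b) = b - b**2 has derivative 1 - 2b, so its only critical point is b = 0.5.

def f_p(b):
    return b - b**2

n = 10_000
grid = [i / n for i in range(n + 1)]   # fine grid over [0, 1]
b_best = max(grid, key=f_p)            # location of the grid maximum

# The grid maximum should coincide with the unique critical point,
# since here f_p(0) = f_p(1) = 0 < f_p(0.5) = 0.25.
assert abs(b_best - 0.5) < 1e-3
```

This does not prove anything, but it catches the common failure mode where the true maximum is on the boundary rather than at the critical point.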
Do you guys think this is right?
Update 2:
Some suggested using the second-derivative test. But the problem is, I don't have values for the other four variables, so how do I know whether the second derivative is positive or negative?
Collect the accessory parameters $a$, $c$, $d$, $e$ into a parameter point ${\bf p}:=(a,c,d,e)$. For given ${\bf p}$ we then have to study the function $$f_{{\bf p}}:\quad [0,1]\to{\mathbb R},\qquad b\mapsto f(a,b,c,d,e)$$of the single variable $b$. If this function is continuous on $[0,1]$ and differentiable in the interior of this interval it assumes a global maximum on $[0,1]$. This maximum is found as follows: Compute the zeros of the derivative $f_{{\bf p}}'$ in $\ ]0,1[\ $. In most cases you will obtain a finite (maybe empty) set $\{x_1,x_2,\ldots, x_r\}\subset \ ]0,1[\ $. Then build the candidate list $$C_{\bf p}:=\{0,1,x_1,\ldots, x_r\}\ ,$$ and you can be sure that $$M({\bf p}):=\max_{x\in [0,1]} f_{{\bf p}}(x)=\max_{x\in C_{\bf p}}f_{{\bf p}}(x)\ ,$$ where on the right hand side you have to compare only finitely many values.
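The candidate-list procedure can be sketched in code. Since the actual $f$ is not given, the sketch below uses a *hypothetical* objective $f(a,b,c,d,e)=ab-cb^2+de$, which is quadratic in $b$ so its derivative $a-2cb$ has an explicit zero; the function name `maximize_over_b` is likewise an invention for illustration.

```python
# Candidate-list method for maximizing over b in [0,1], with a
# HYPOTHETICAL objective f(a,b,c,d,e) = a*b - c*b**2 + d*e.

def f(a, b, c, d, e):
    return a * b - c * b**2 + d * e

def maximize_over_b(a, c, d, e):
    """For fixed p = (a,c,d,e), return (b*, M(p)) with M(p) = max over [0,1]."""
    candidates = [0.0, 1.0]          # the endpoints are always candidates
    if c != 0:
        b_crit = a / (2 * c)         # zero of df/db = a - 2*c*b
        if 0 < b_crit < 1:           # keep it only if it lies in ]0,1[
            candidates.append(b_crit)
    # Compare finitely many values, exactly as in the answer above.
    return max(((b, f(a, b, c, d, e)) for b in candidates),
               key=lambda t: t[1])
```

For ${\bf p}=(1,1,0,0)$ the maximum is attained at the interior critical point $b=1/2$, while for ${\bf p}=(3,1,0,0)$ the critical point $3/2$ falls outside $[0,1]$ and the maximum moves to the endpoint $b=1$, illustrating how the winning candidate depends on ${\bf p}$.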
During this analysis the parameter point ${\bf p}$ was fixed. Now the list $C_{\bf p}$ will depend on ${\bf p}$, and so will the individual member of the list giving rise to the maximal value of $f$. It may very well happen that for some ${\bf p}$ the $\max$ is taken at the left endpoint of $[0,1]$, for other ${\bf p}$'s in the interior, and for still others at the right endpoint.
This phenomenon is already present in the following simple example: Let $\sigma$ be the segment connecting the points $(-1,0)$ and $(1,0)$ in the plane. For a given point ${\bf p}:=(u,v)$ let $d({\bf p})$ be the distance from ${\bf p}$ to the nearest point of $\sigma$. (Draw a figure!) When $|u|\le 1$ the nearest point is the interior point $(u,0)$; when $|u|>1$ it is one of the two endpoints.
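This example is a one-line computation: the nearest point of $\sigma$ to $(u,v)$ is obtained by clamping $u$ to $[-1,1]$. A small sketch (the function names are my own):

```python
import math

def nearest_point_on_segment(u, v):
    """Nearest point of the segment from (-1,0) to (1,0) to p = (u, v)."""
    x = max(-1.0, min(1.0, u))   # clamp: interior point if |u| <= 1, else an endpoint
    return (x, 0.0)

def d(u, v):
    """Distance from p = (u, v) to the segment."""
    x, y = nearest_point_on_segment(u, v)
    return math.hypot(u - x, v - y)
```

Here $(0.5,1)$ is closest to the interior point $(0.5,0)$, while $(3,4)$ is closest to the endpoint $(1,0)$: the location of the extremum switches between interior and boundary as ${\bf p}$ varies, just as in the general discussion.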
A concluding remark: No second derivatives had to be calculated.