In Regiomontanus' angle-maximization problem, one can maximize the angle by maximizing its tangent, since the tangent function is increasing on the relevant interval. This makes the differentiation simpler. One can argue that working directly with the angle, rather than with its tangent, adds an extra complication with no compensating advantage, and so makes the problem appear more complicated than it is.
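As a quick numerical sanity check of the claim (a sketch only, with illustrative heights $a$, $b$ for the bottom and top of the picture above eye level; the closed-form optimum $x=\sqrt{ab}$ is the classical result):

```python
import math

# Illustrative heights of the bottom and top of the picture above eye level.
a, b = 2.0, 5.0

def angle(x):
    """Angle subtended by the picture at horizontal distance x."""
    return math.atan(b / x) - math.atan(a / x)

def tan_angle(x):
    """Tangent of that angle, via the tangent subtraction formula."""
    return (b - a) * x / (x * x + a * b)

# Maximize both on a fine grid; since tan is increasing, the argmaxes
# coincide, and both land near the classical optimum x = sqrt(a*b).
xs = [i / 1000 for i in range(1, 20001)]
x_angle = max(xs, key=angle)
x_tan = max(xs, key=tan_angle)
print(x_angle, x_tan, math.sqrt(a * b))
```

Because $\tan$ is increasing, the two objectives have the same maximizer, but differentiating the rational function `tan_angle` is much less work than differentiating the difference of arctangents.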
Similarly, first-semester calculus includes problems in which one minimizes a distance by minimizing the square of the distance. This makes the differentiation simpler, and the complications involved in working with the distance rather than with its square shed no light.
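Here is a sketch of that pattern on a toy instance (the parabola and the point are illustrative choices, not from the original problem set): find the point on $y=x^2$ nearest to $(3,0)$.

```python
import math

def dist(x):
    """Distance from (x, x^2) on the parabola y = x^2 to the point (3, 0)."""
    return math.sqrt((x - 3) ** 2 + x ** 4)

def dist_sq(x):
    """Squared distance: same minimizer, simpler derivative."""
    return (x - 3) ** 2 + x ** 4

# d'(x) = s'(x) / (2*sqrt(s(x))), so both derivatives vanish together
# wherever s > 0; here s'(x) = 2(x - 3) + 4x^3 = 0 has the root x = 1.
xs = [i / 1000 for i in range(-3000, 3001)]
x_d = min(xs, key=dist)
x_s = min(xs, key=dist_sq)
print(x_d, x_s)  # both 1.0
```

Minimizing `dist_sq` replaces a square root with a polynomial; the minimizers agree because squaring is increasing on nonnegative values.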
Are there also naturally occurring optimization problems in single-variable calculus in which the simple form of the problem is attained by replacing the independent variable with some function of the independent variable?
Later comment inspired by user7530's answer:
The problem is to find $\displaystyle\underset{u}{\operatorname{argmin}} f(u)$ and $\displaystyle\min_u f(u)$. You could solve $f'(u)=0$ for $u$.
You could do a change of variables: $u=j(v)$, so you've got $\displaystyle\min_v f(j(v)) = \min_v h(v)$ (where of course $h(v)=f(j(v))$).
The point would be that $h'(v)$ might be a much simpler expression than $f'(u)=f'(j(v))$. To do the substitution only to differentiate the same function misses the point. It seems that that's what user7530's answer does.
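A minimal sketch of the intended payoff, with a toy objective $f(u)=u-2\sqrt{u}$ chosen for the example (not from the post):

```python
import math

def f(u):
    """Toy objective: f(u) = u - 2*sqrt(u), minimized over u > 0."""
    return u - 2 * math.sqrt(u)

# Substitute u = j(v) = v^2 (v > 0): h(v) = f(v^2) = v^2 - 2v, so
# h'(v) = 2v - 2 is a polynomial, while f'(u) = 1 - 1/sqrt(u) is not.
# Solving h'(v) = 0 gives v = 1, hence u = j(1) = 1.
us = [i / 1000 for i in range(1, 5001)]
u_star = min(us, key=f)
print(u_star)  # 1.0
```

The substitution genuinely changes the expression being differentiated: $h'(v)$ is simpler than $f'$ evaluated at $j(v)$, which is the asker's point.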
Not really. There's not a large payoff for doing so, at least relative to changing your objective function. For instance, suppose you have an optimization problem
$$\min_u f(u)$$
and can benefit from the change of independent variables $v = g(u)$. Your new optimization problem becomes $$\min_v f(g^{-1}(v))$$ which has optimality condition $$f'(g^{-1}(v)) = 0$$ (by the chain rule, at any point where $(g^{-1})'(v) \neq 0$). Writing $f' = r\circ g$, the above is "easy" to solve if $r$ is easy to invert (to get the optimal $v$) and then $g$ is easy to invert (to get the optimal $u$). But then there was no need to change variables, as you could have used the same insights to solve the original problem $f'(u)=0$.
In order to really take advantage of changing variables, you need to "cheat" -- change the rules of the game completely, while staying within the boundaries of one-dimensional calculus techniques.
Simple problems from the calculus of variations come to mind: for instance, consider a curve $y(x)$ with $y(0) = y(1) = 0$. Suppose you want to solve the following optimization problem over the space of smooth functions $y$ satisfying the above conditions: $$\min_{y}\ \int_0^1 \left(1+y'^2 + y\right)\,dx \qquad \text{s.t.} \qquad y(0)=y(1)=0.$$ Obviously a direct approach will not work here, but you can solve the problem by perturbing $y$ to $y+\varepsilon\,\delta y$ and requiring stationarity in the ordinary single-variable sense: $$\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0} \int_0^1 \left(1+\big((y+\varepsilon\,\delta y)'\big)^2+(y+\varepsilon\,\delta y)\right)\,dx = 0 \quad \text{for all admissible } \delta y,$$ giving you the Euler-Lagrange equation $2y'' = 1$, and resulting solution $y = \frac{1}{4}x^2 -\frac{1}{4}x$.
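One can check the Euler-Lagrange solution numerically (a sketch; the grid size and the particular perturbation $\delta y = \sin(\pi x)$ are arbitrary choices): the functional should increase under any admissible perturbation of $y = \frac14 x^2 - \frac14 x$.

```python
import math

N = 10000  # grid resolution for midpoint-rule quadrature

def J(y, yp):
    """Midpoint-rule approximation of J[y] = integral of 1 + y'^2 + y."""
    total = 0.0
    for i in range(N):
        x = (i + 0.5) / N
        total += 1 + yp(x) ** 2 + y(x)
    return total / N

# Candidate minimizer from the Euler-Lagrange equation 2y'' = 1.
y = lambda x: x * x / 4 - x / 4
yp = lambda x: x / 2 - 1 / 4

base = J(y, yp)  # exact value is 47/48
# Perturb by eps*sin(pi*x), which vanishes at both endpoints.
for eps in (0.1, -0.1, 0.01):
    pert = J(lambda x: y(x) + eps * math.sin(math.pi * x),
             lambda x: yp(x) + eps * math.pi * math.cos(math.pi * x))
    assert pert > base  # the candidate is indeed a minimizer
print(round(base, 5))
```

The first variation vanishes at the candidate, so each perturbed value exceeds the base value by roughly $\varepsilon^2 \int_0^1 (\delta y')^2\,dx > 0$.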