I understand that for multivariate functions it can take a long time to find the zeros of the gradient and then classify the critical points as minima, maxima, etc. However, some optimization algorithms are not tractable either, and numerical methods never produce an analytic solution. So what is the purpose of using possibly incorrect, at best approximate, optimization algorithms instead of the analytical methods we have from calculus, e.g. solving for the zeros of the gradient?
Soft question: Why use optimization algorithms instead of calculus methods?
373 views. Asked 2026-04-01 by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail).
There is 1 answer below.
The reason to use any numerical method is that you might not have an explicit analytical solution to the problem you're trying to solve. In fact, you might be able to prove (as with the three-body problem) that no analytical solution in terms of elementary functions exists. Thus approximate methods (numerical or perturbation-based) are the best we can do, and when applied correctly (this is important), they usually provide answers with a high degree of accuracy.
An elementary example of this issue (as mentioned in several comments) is finding roots of polynomials of high degree. As was proved in the early 19th century (the Abel–Ruffini theorem), there is no general formula in radicals for the roots of a polynomial of degree 5 or higher. Thus if your derivative involves such a polynomial, solving $f^\prime(x) = 0$ is only possible using a numerical technique.
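As a small illustration of this point, here is Newton's method applied to $p(x) = x^5 - x - 1$, a standard example of a quintic whose roots cannot be written in radicals. The function names `newton`, `p`, and `dp` are just illustrative, not from the original answer:

```python
# Newton's method for a root of p(x) = x^5 - x - 1, a quintic whose
# roots have no expression in radicals.
def newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)   # Newton update: x_{k+1} = x_k - f(x_k)/f'(x_k)
        x -= step
        if abs(step) < tol:
            break
    return x

p  = lambda x: x**5 - x - 1
dp = lambda x: 5 * x**4 - 1

root = newton(p, dp, x0=1.0)
print(root)  # ~1.1673, the unique real root
```

Even though no closed form exists, a few iterations pin the root down to machine precision.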
For a more complicated example, often the function you are trying to optimize does not even have an explicit functional form. In calculus, you learn how to optimize functions like $f(x) = x\exp(-x)$: here you have an explicit formula, so you can take derivatives and solve $f^\prime = 0$. But often you don't have this. Instead, what you have is something like: let $\boldsymbol{x}_0$ be the initial condition for a nonlinear system of differential equations
$$ \dot{\boldsymbol{x}} = f(\boldsymbol{x}), \qquad \boldsymbol{x}(0) = \boldsymbol{x}_0, $$
and define $F(\boldsymbol{x}_0) = g(\boldsymbol{x}(1))$, i.e. run the system until $t=1$, then evaluate some function $g$ there. This objective will almost never have an explicit functional form. Thus if you wanted to compute
$$ \min_{\boldsymbol{x}_0} F(\boldsymbol{x}_0), $$
you would almost certainly be forced to use a numerical method.
I will say that there are methods that are a hybrid of numerical and exact/algebraic. For instance, you can use something like Newton's method (which would usually require Hessian matrices) together with automatic differentiation. Automatic differentiation treats a computational algorithm as if it were an explicit formula and takes exact derivatives of it via a (highly sophisticated) mechanical application of the chain rule.
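To make the automatic-differentiation idea concrete, here is a minimal forward-mode sketch using dual numbers: each value carries its derivative along, and the arithmetic applies the product and chain rules mechanically. The `Dual` class and `d_exp` helper are illustrative toys, not a real AD library:

```python
import math

# Forward-mode automatic differentiation via dual numbers:
# a Dual carries (val, dot), where dot is the derivative w.r.t. the input.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule, applied mechanically by the arithmetic itself
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

    def __neg__(self):
        return Dual(-self.val, -self.dot)

def d_exp(a):
    e = math.exp(a.val)
    return Dual(e, e * a.dot)       # chain rule for exp

def derivative(f, x):
    return f(Dual(x, 1.0)).dot      # seed dx/dx = 1

f = lambda x: x * d_exp(-x)         # f(x) = x e^{-x}
print(derivative(f, 1.0))           # ≈ 0: x = 1 is the critical point
```

The derivatives produced are exact up to floating point (not finite-difference approximations), which is what makes AD a useful bridge between numerical algorithms and calculus.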