How to determine the optimal step size in a quadratic function optimization


I have the following optimization problem:

$$\underset{\alpha\in\mathbb{R}}{\text{min}}:\;\;f(\textbf{x}+\alpha\textbf{d})$$ $$\text{subject to}:\;\;0\leq\alpha\leq \alpha_{max},$$

where $f(\textbf{x})=\frac{1}{2}\textbf{x}^T\textbf{Q}\textbf{x}+\textbf{c}^T\textbf{x}, \;\textbf{x}\in\mathbb{R}^n, \;\textbf{d}\in\mathbb{R}^n, \;\textbf{c}\in\mathbb{R}^n,\;$ $\alpha_{max}\in \mathbb{R}$, and $\textbf{Q}$ is an $n\times n$ symmetric positive semi-definite real matrix. I'm trying to solve for the parameters of a general support vector machine classifier using an active-set method with gradient projection. My formulation is almost complete; the last remaining task is the optimization problem above. How should I solve for the optimal $\alpha$?

Best answer:

In reduced-gradient active-set methods (RG-ASM), the step length $\alpha^* \in [0, \alpha_{max}]$ should satisfy the Armijo–Wolfe conditions (a sufficient-decrease condition and a curvature condition):

$f(x^k + \alpha d^k) \le f(x^k) + c_1\alpha\nabla f(x^k)^T d^k\;$ where $c_1 \in (0, 1)$

$\nabla f(x^k + \alpha d^k)^T d^k \ge c_2\nabla f(x^k)^T d^k\;$ where $0 < c_1 < c_2 < 1$
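Since $f$ is quadratic, $\phi(\alpha)=f(\mathbf{x}+\alpha\mathbf{d})=f(\mathbf{x})+\alpha\,\nabla f(\mathbf{x})^T\mathbf{d}+\tfrac{1}{2}\alpha^2\,\mathbf{d}^T\mathbf{Q}\mathbf{d}$ is itself a quadratic in $\alpha$, so an exact line search has a closed form that can simply be clipped to $[0,\alpha_{max}]$; this exact step also satisfies the conditions above for the usual choice $c_1\le\tfrac{1}{2}$. A minimal sketch of this idea in Python, where the particular $\mathbf{Q}$, $\mathbf{c}$, $\mathbf{x}$, and $\alpha_{max}$ are illustrative assumptions, not taken from the question:

```python
import numpy as np

# Illustrative data (assumed for the example): f(x) = 1/2 x^T Q x + c^T x,
# with Q symmetric positive semi-definite.
Q = np.array([[2.0, 0.0],
              [0.0, 1.0]])
c = np.array([-1.0, -1.0])
x = np.array([0.0, 0.0])
alpha_max = 10.0

def grad(x):
    # Gradient of the quadratic: ∇f(x) = Qx + c
    return Q @ x + c

d = -grad(x)  # e.g. a (projected) steepest-descent direction

# phi(alpha) = f(x) + alpha * g^T d + 0.5 * alpha^2 * d^T Q d,
# so the unconstrained minimizer is alpha = -(g^T d) / (d^T Q d)
# whenever d^T Q d > 0; then clip to the feasible interval.
g_dot_d = grad(x) @ d
dQd = d @ Q @ d
if dQd > 0:
    alpha_star = float(np.clip(-g_dot_d / dQd, 0.0, alpha_max))
else:
    # phi is linear (or concave) along d: the minimum over
    # [0, alpha_max] lies at an endpoint, chosen by the slope's sign.
    alpha_star = alpha_max if g_dot_d < 0 else 0.0

print(alpha_star)
```

With these numbers, $\nabla f(\mathbf{x})=(-1,-1)$, $\mathbf{d}=(1,1)$, and $\alpha^*=2/3$, which lies inside $[0,\alpha_{max}]$, so no clipping occurs. If $\mathbf{Q}$ is only semi-definite and $\mathbf{d}^T\mathbf{Q}\mathbf{d}=0$, the endpoint case handles the degenerate direction.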