How to solve an equation with the sum of minima numerically?


The equation is given by

$$ \sum_{n=1}^N \min(\gamma, \beta a_n)=N$$ where $\beta$ is the variable with $\beta\in[0,\sqrt\gamma/\min(a_n\mid a_n>0)]$, $\gamma$ is a constant with $1\le\gamma\le N$, and the $a_n$ are nonnegative constants.

I know that the function $f(\beta)=\sum_{n=1}^N \min(\gamma, \beta a_n)-N$ is strictly increasing. Also, $f(0)<0$ and $f(\sqrt\gamma/\min(a_n\mid a_n > 0)) > 0$. Thus, there must exist a unique solution to the above equation. How can I find the solution numerically, e.g., using MATLAB? Slowly increasing $\beta$ works, but it is not accurate.


There are 2 best solutions below


The fastest generic algorithm (read: linear convergence) for a continuous function on an interval $[a,b]$ with $f(a) < 0$ and $f(b) > 0$ is bisection (sometimes called binary search):

  1. Look at the sign $s$ of $f(\frac{a+b}2)$
    • If $s = 0$, we are done with the exact solution $\frac{a+b}2$.
    • If $s = 1$, we know the root is in $[a, \frac{a+b}2]$, so set $b = \frac{a+b}2$ and repeat until the interval length $b-a$ is less than $2\epsilon$
    • If $s = -1$, we know the root is in $[\frac{a+b}2, b]$, so set $a=\frac{a+b}2$ and repeat until the interval length $b-a$ is less than $2\epsilon$
  2. If $b-a<2\epsilon$, return $\frac{a+b}2$. We know that the actual root is within $\epsilon$ of this value.

The guaranteed accuracy ($\frac{b-a}2$) is halved in each iteration, so we converge linearly to the solution. In other words, the number of iterations is linear in the number of digits of accuracy required. This algorithm is optimal in this generic setting and always converges as long as $f$ is continuous.

A slight modification prevents numerical problems when a root lies exactly on an interval boundary (i.e. is found exactly) but the computed value isn't exactly $0$: also return if $|f(\frac{a+b}2)| < \delta$ for some tolerance $\delta > 0$.
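The loop above can be sketched as follows (pure Python for illustration; the sample data, tolerances, and names are my own, and the same logic ports directly to a MATLAB `while` loop):

```python
# Bisection sketch for f(beta) = sum_n min(gamma, beta*a_n) - N.
# The sample data and tolerance values are illustrative.

def bisect_root(f, a, b, eps=1e-10, delta=1e-12, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) < 0 < f(b)."""
    for _ in range(max_iter):
        mid = 0.5 * (a + b)
        fmid = f(mid)
        # Stop when f is numerically zero or the bracket is short enough.
        if abs(fmid) < delta or (b - a) < 2 * eps:
            return mid
        if fmid > 0:
            b = mid   # sign change is in [a, mid]
        else:
            a = mid   # sign change is in [mid, b]
    return 0.5 * (a + b)

# Small instance of the question's equation: sum_n min(gamma, beta*a_n) = N.
gamma = 3.0
a_n = [0.5, 1.0, 2.0, 2.5]   # nonnegative constants, all positive here
N = len(a_n)
f = lambda beta: sum(min(gamma, beta * a) for a in a_n) - N

beta_star = bisect_root(f, 0.0, gamma ** 0.5 / min(a_n))
print(beta_star)   # ≈ 0.6667 for this data
```

Each iteration halves the bracket, so `eps` digits of accuracy cost a number of iterations proportional to the digit count, as described above.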


Your function is piecewise linear. I suggest finding all the points where its slope changes, namely $\beta_n = \frac{\gamma}{a_n}$ (if some $a_n$ are zero, simply drop them from the sum), sorting them by value, and finding the interval $[\beta_k, \beta_{k+1}]$ that contains the root (binary search, $O(\log N)$ comparisons after the sort). On that interval the function is linear, so the root can be found by solving a linear equation.

Here's MATLAB code for that

N = 20;
gamma = 3;
a = .1 + 3 * rand(N, 1);          % nonnegative constants a_n

mina = min(a);

x = linspace(0, gamma / mina, 1000);   % grid for plotting only

% f(beta) = sum_n min(gamma, beta * a_n) - N; the equation is f(beta) = 0.
f = @(beta) sum(min(gamma, a * beta(:)')) - N;

% Corners where the slope of f changes, plus the left endpoint 0.
% (The right endpoint gamma / mina is already among gamma ./ a.)
corners = sort([gamma ./ a; 0]);
values = f(corners);              % increasing, since f is increasing

low = 1;
upp = length(corners);

% Binary search for the segment on which f changes sign.
iters = 0;
while upp - low > 1
    iters = iters + 1;
    mid = floor((upp + low) / 2);
    % low < mid < upp
    if values(mid) > 0
        upp = mid;
    else
        low = mid;
    end
end

fprintf('Binary search in %d iters, root in [%f, %f]\n', ...
    iters, corners(low), corners(upp));

% f is linear on [corners(low), corners(upp)]; interpolate to the root.
root = corners(low) + ...
    (corners(upp) - corners(low)) / (values(upp) - values(low)) * ...
    (0 - values(low));

fprintf('The root is %f\n', root);

plot(x, f(x), '-', corners, values, 'r.', ...
    x, zeros(size(x)), 'k-', root, 0, 'g*');
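For comparison, here is the same corner-based method as a pure-Python sketch (the sample data is my own; the standard library's `bisect_left` plays the role of the binary search loop above):

```python
# Corner method sketch: f is piecewise linear, so sort its breakpoints,
# binary-search the segment where f changes sign, then interpolate.
# The sample data is illustrative.
from bisect import bisect_left

gamma = 3.0
a_n = [0.5, 1.0, 2.0, 2.5]   # nonnegative constants (zeros would be dropped)
N = len(a_n)
f = lambda beta: sum(min(gamma, beta * a) for a in a_n) - N

# Breakpoints beta_n = gamma / a_n where the slope of f changes,
# plus the left endpoint 0.
corners = sorted({0.0} | {gamma / a for a in a_n if a > 0})
values = [f(c) for c in corners]   # increasing, since f is increasing

# First corner with f >= 0; the root lies on the segment just before it.
k = bisect_left(values, 0.0)
lo, hi = corners[k - 1], corners[k]
flo, fhi = values[k - 1], values[k]

# f is linear on [lo, hi], so solve the linear equation exactly.
root = lo + (hi - lo) * (0.0 - flo) / (fhi - flo)
print(root)   # ≈ 0.6667 for this data
```

Unlike bisection, this finds the root exactly (up to floating-point rounding) in one interpolation once the correct segment is located.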
