How to quickly compute the initial average half-life of multiple substances?


For example, imagine an object that consists of equal parts of two substances with the following half-lives:

  • $t_{1, 1/2} = 1d$
  • $t_{2, 1/2} = 1000d$

I would now like to find the time after which the object has decayed to half of its initial amount. For this I have the formula: $$ N(t) = \frac{2^{-\frac{t}{t_{1,1/2}}} + 2^{-\frac{t}{t_{2,1/2}}}}{2} = 1/2 $$

Unfortunately, solving for $t$ algebraically is not possible, because $\log_2(a+b) \neq \log_2(a) + \log_2(b)$.

Therefore, the only approach that has occurred to me so far is to find the time numerically (code in Python):

def average(halfLifes, t):
    sum = 0.0
    for halfLife in halfLifes:
        sum += pow(2, -t / halfLife)

    return sum / len(halfLifes)

t = 0
halfLifes = [1, 1000]
stepSize = 4
error = 1

while error > 0.0001:
    y = average(halfLifes, t)
    if y < 0.5:
        # overshot the target: step back and retry with half the step size
        t -= stepSize
        stepSize /= 2
        t += stepSize
    else:
        # still above 1/2: keep stepping forward
        t += stepSize
    error = abs(y - 0.5)

print("t: " + str(t))
print("y: " + str(average(halfLifes, t)))

This gives the result $t \approx 7.625d$. Unfortunately, the calculation is very slow: with 20000 half-lives, for example, it already takes ~9 seconds.

Is there a way to make this faster, or a mathematically better way to solve it? High accuracy is not important.

3 Answers

---

Let $x=2^{-t/t_{1,1/2}}$ to turn the exponential equation into a polynomial-like equation in $x$ (an actual polynomial whenever the ratio of the half-lives is rational). This will be much nicer to solve and probably much more stable. You can also guarantee fast convergence by bracketing the root over $x\in(0,1)$ and using methods such as Brent's method or a safeguarded Newton method.
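
A dependency-free sketch of this substitution for the example half-lives $t_{1,1/2}=1d$, $t_{2,1/2}=1000d$ (plain bisection stands in for Brent's method here; the names are mine, not part of the question's code):

```python
import math

# With x = 2^(-t/1) = 2^(-t), the second term becomes 2^(-t/1000) = x^(1/1000),
# so the half-decay condition (x + x^(1/1000)) / 2 = 1/2 is a root-finding
# problem for x on the bracket (0, 1).

def f(x):
    return (x + x ** (1 / 1000)) / 2 - 0.5

lo, hi = 1e-12, 1.0 - 1e-12   # bracket: f(lo) < 0 < f(hi)
for _ in range(60):           # 60 halvings shrink the bracket below 1e-18
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

x = (lo + hi) / 2
t = -math.log2(x)             # undo the substitution
print(t)                      # ~ 7.5768 days
```

The cost no longer depends on how finely you step in $t$: each bisection step halves the bracket, so ~60 evaluations give essentially machine precision.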

---

Using your equation $$N(t) = \frac{2^{-\frac{t}{t_{1,1/2}}} + 2^{-\frac{t}{t_{2,1/2}}}}{2} = \frac 12$$ Newton's method converges quite fast even when starting from $t_0=1$. Halley's and Householder's methods are much faster still with the same (poor) initial guess. Trying with your data:

Here are the iterates $$\left( \begin{array}{cccc} n & \text{Newton} & \text{Halley} & \text{Householder} \\ 0 & 1.00000 & 1.00000 & 1.00000 \\ 1 & 2.43782 & 3.86030 & 5.23721 \\ 2 & 3.85963 & 6.42474 & 7.56234 \\ 3 & 5.22658 & 7.53871 & 7.57676 \\ 4 & 6.42908 & 7.57676 & \\ 5 & 7.24871 & & \\ 6 & 7.54668 & & \\ 7 & 7.57649 & & \\ 8 & 7.57676 & & \end{array} \right)$$

Now, the question could be: can we generate a better $t_0$ to save iterations? I shall see if I can find one (if I do, I shall be back).

Edit

We can get almost the exact solution by writing $$N(t)=2^{-1-\frac{t}{1000}} \left(1+2^{-\frac{999}{1000}t}\right)$$

Take logarithms $$\log(N(t))=-\left(1+\frac{t}{1000}\right) \log (2)+\log\Big[1+2^{-\frac{999}{1000}t }\Big]$$ Approximate the logarithm $$\log(N(t))\sim-\left(1+\frac{t}{1000}\right) \log (2)+2^{-\frac{999}{1000}t }$$ making the approximate equation $$-\left(1+\frac{t}{1000}\right) \log (2)+2^{-\frac{999}{1000}t }=-\log(2)$$ and the approximate solution is given in terms of Lambert function $$t \sim \frac{1000 }{999 \log (2)}W(999)=7.57994$$ Now, for the fun, Newton method with a ridiculous number of figures $$\left( \begin{array}{cc} n & t_n \\ 0 & 7.5799435053377050733 \\ 1 & 7.5767539560898630995 \\ 2 & 7.5767569153441022456 \\ 3 & 7.5767569153466533521 \end{array} \right)$$

Update

Making the problem more general, for $b=t_{2,1/2} \gg t_{1,1/2}=a$ the same approximation procedure leads to the equation $$-\frac t{b}\log(2)+\log \left(1+2^{ \left(\frac{a-b}{ab}\right)t}\right)=0$$ So the approximate equation is $$-\frac t{b}\log(2)+2^{ \left(\frac{a-b}{ab}\right)t} \simeq 0 \implies t \sim \frac{a b }{\log (2) (b-a)}W\left(\frac{b-a}{a}\right)$$ Now, admitting that $\frac{b-a}{a}$ could be large and that you do not want to use the Lambert function, another approximation is $$ t \sim \frac{a b }{\log (2) (b-a)}\Bigg[L_1-L_2+\frac{L_2}{L_1}\Bigg]$$ with $L_1= \log\left(\frac{b-a}{a}\right)$ and $L_2=\log(L_1)$.

Applied to your case $a=1$, $b=1000$, this would give as an estimate $t_0=7.58758$ and one Newton iteration will give $t_1=7.57672$.
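
A minimal sketch of this estimate plus a single Newton polish, in Python using only `math` (the variable names are mine):

```python
import math

# Example from the question: a = t_{1,1/2} = 1 d, b = t_{2,1/2} = 1000 d.
a, b = 1.0, 1000.0
ln2 = math.log(2)

# Lambert-W-free estimate: W((b-a)/a) ~ L1 - L2 + L2/L1
L1 = math.log((b - a) / a)
L2 = math.log(L1)
t0 = a * b / (ln2 * (b - a)) * (L1 - L2 + L2 / L1)

# One Newton iteration on f(t) = (2^(-t/a) + 2^(-t/b)) / 2 - 1/2
def f(t):
    return (2 ** (-t / a) + 2 ** (-t / b)) / 2 - 0.5

def fprime(t):
    return -ln2 / 2 * (2 ** (-t / a) / a + 2 ** (-t / b) / b)

t1 = t0 - f(t0) / fprime(t0)
print(t0, t1)   # estimate ~7.588, polished ~7.577
```

The initial guess alone is already within about 0.15% of the true root, so one iteration suffices for the question's accuracy requirement.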

---

Okay, I think I figured out what you meant by your program taking 9 seconds to run: when you load the list halfLifes up with 20000 values, your code takes about that long to run.

Okay, so, yes, there are quite a few things you could do to run faster. Here are two possible methods:

  1. Bisection method

    The idea with this is to have a guess that's too high for t and one that's too low, take the average, see if that's too high or too low, and repeat. Modifying your code, that's:

    def average(halfLifes, t):
        sum = 0.0
        for halfLife in halfLifes:
            sum += pow(2, -t / halfLife)
    
        return sum / len(halfLifes)
    
    # halfLifes = [1, 1000]       # the original two-substance example
    halfLifes = range(1, 20000)   # the 20000-value case from the question
    tmin = min(halfLifes)
    tmax = max(halfLifes)
    y_atmin = average(halfLifes, tmin)
    y_atmax = average(halfLifes, tmax)
    assert y_atmin > 0.5, "This code assumes the low t val makes y > 0.5"
    assert y_atmax < 0.5, "This code assumes the high t val makes y < 0.5"
    error = 1
    
    while error > 0.0001:
        t = (tmin + tmax) / 2
        y = average(halfLifes, t)
        serror = y - 0.5
        if serror > 0: # t too low
            tmin = t
            y_atmin = y
        else:
            tmax = t
            y_atmax = y
        error = abs(serror)
        # print((t, error))
    
    print("t: " + str(t))
    print("y: " + str(average(halfLifes, t)))
    
  2. Newton's method

    Then there's the option of going with the method you said you were using at first. Note that this might converge slightly slower than bisection to the tolerance specified here (0.0001), but once Newton's method gets close to the answer, each step roughly doubles the number of correct digits, whereas bisection only roughly halves the error each time (so you could get much better accuracy from Newton's method for only one or two more iterations).

    To apply Newton's method you need the derivative of the average function; fortunately, that's relatively easy to compute in this instance.
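
    Explicitly, writing the half-lives as $\tau_i$ and $n$ for their count, $$N'(t) = -\frac{\log (2)}{n}\sum_{i=1}^{n} \frac{2^{-t/\tau_i}}{\tau_i}$$ which is exactly what the averagederiv function in the code below computes term by term.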

    import math
    
    def average(halfLifes, t):
        sum = 0.0
        for halfLife in halfLifes:
            sum += pow(2, -t / halfLife)
    
        return sum / len(halfLifes)
    
    def averagederiv(halfLifes, t):
        sum = 0.0
        for halfLife in halfLifes:
            sum += -math.log(2) * pow(2, -t/halfLife)/halfLife
    
        return sum / len(halfLifes)
    
    t = 0
    halfLifes = list(range(1,20000))
    error = 1
    y = average(halfLifes, t)
    
    while error > 0.0001:
        yd = averagederiv(halfLifes, t)
        t -= (y - 0.5)/yd
        y = average(halfLifes, t)
        error = abs(y - 0.5)
        # print((t, error))
    
    print("t: " + str(t))
    print("y: " + str(average(halfLifes, t)))