This equation showed up while I was studying an algorithm and even though it's simple-looking, it doesn't seem to be in the simple-to-solve category.
I'm not a mathematician, so I did what every self-respecting engineer would do:
(1) I went to Wolfram Alpha: no success
(2) Used pen and paper to see if I would get an aha! moment: no success
(3) Went to my whiteboard thinking the outcome would be different: no comment
(4) Explored the equation graphically to figure out if and when it is solvable: success
(5) Used Maple to try to do what I wanted Wolfram Alpha to do: no success
Then I tried more serious things (please don't judge):
(6) I tried to approximate the LHS of the equation with an exponential term, hoping it would let me take one further step with the RHS: no success
(7) I sampled the constants randomly (1E3 samples) and generated an array of numerical solutions, to which I tried to fit different hyperplanes of some sort: no success
(8) I sampled 1E6 solutions randomly and tried to see if I could make some statistical statement about them. The only thing that came out -- and this could simply be a result of the bounds I set on the variables -- is that the solution is often between 0 and 1, which isn't really surprising given the nature of the functions: no success
Things I thought about but didn't do, because I doubted they would provide insight:
(9) Take the datasets generated in (7) and (8) to train a neural net and see how good it could get while keeping a reasonably compact structure
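For reference, the sampling experiment in (7) can be sketched as follows. This is a minimal sketch, not the original code: the sampling bounds on the constants are hypothetical (the original bounds weren't stated), and `scipy.optimize.brentq` stands in for whatever numerical solver was actually used.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

def f(t, A, alpha, r, gamma):
    # LHS - RHS of  A * alpha**t = r * (1 - exp(-gamma * t))
    return A * alpha**t - r * (1.0 - np.exp(-gamma * t))

solutions = []
for _ in range(1000):
    # hypothetical sampling bounds (positive constants, alpha in (0, 1))
    A     = rng.uniform(0.1, 5.0)
    alpha = rng.uniform(0.01, 0.99)
    r     = rng.uniform(0.1, 5.0)
    gamma = rng.uniform(0.1, 2.0)
    # f(0) = A > 0, so scan a coarse grid for a sign change, then refine
    ts = np.linspace(0.0, 50.0, 2001)
    vals = f(ts, A, alpha, r, gamma)
    idx = np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))
    if idx.size:
        i = idx[0]
        solutions.append(brentq(f, ts[i], ts[i + 1],
                                args=(A, alpha, r, gamma)))

solutions = np.array(solutions)
print(len(solutions), "roots found;",
      np.mean((solutions >= 0) & (solutions <= 1)), "fraction in [0, 1]")
```

With bounds like these, the fraction of roots landing in $[0,1]$ is exactly the kind of statistic step (8) describes.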
---
There you have it. The reason why this is of interest to me is that I'm trying to make a statistical statement about the workings of a stochastic optimization algorithm and it would help to know more about the nature of the solution to this equation.
I would appreciate any sort of insights you might have, references you can point me to, or magic tricks.
PS. I investigated the Lambert W function, because it "feels" like it could be what I'm looking for, but I'm still working on that.
Thanks !
EDIT:
(1) As mentioned in the comments, the constants have to be positive, with $t\in[0,\infty)$ and $\alpha\in(0,1)$
(2) Claude Leibovici has already done all of the legwork so far. The only thing still left to figure out is the case $0<\alpha<1$. Here is graphical proof that a solution exists in that case:
Example of a graphical solution when $\alpha=0.9,\,A_{0},\,r=2,\,\gamma=0.5$
You are looking for the zeros of the function $$f(t)=A \alpha ^t-r \left(1-e^{-\gamma t}\right)$$ which, I am afraid, will not have an analytical solution even using special functions. More than likely, you will need numerical methods.
The simplest form we could have is probably obtained by letting $x=\alpha ^t$ and dividing through by $A$: $$\color{blue}{g(x)=x-a \left(1-x^{b}\right)}\qquad\text{where}\qquad \color{blue}{a=\frac {r} {A}}\qquad\text{and}\qquad \color{blue}{b=-\frac{\gamma }{\log (\alpha )}}$$ which, in some very particular cases, could reduce to a polynomial in $x$ (a case we shall forget). Notice that we went from three parameters to two.
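A quick numerical sanity check of this reduction, using $x=\alpha^t$ so that $f(t)=A\,g(x)$ (the parameter values below are arbitrary positives chosen for illustration):

```python
import numpy as np

A, alpha, r, gamma = 3.0, 0.7, 2.0, 0.5   # arbitrary positive test values
a = r / A
b = -gamma / np.log(alpha)

t = 1.3                       # arbitrary test point
x = alpha**t                  # change of variable x = alpha**t

f = A * alpha**t - r * (1.0 - np.exp(-gamma * t))
g = x - a * (1.0 - x**b)      # reduced two-parameter form

# f(t) = A * g(x), so both vanish at exactly the same points
print(abs(f - A * g))         # ~ 0 up to rounding
```

The key identity is $x^b = e^{t\,b\log\alpha} = e^{-\gamma t}$, which is what makes the two-parameter form exact rather than approximate.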
My feeling is that, in this form, Newton's method would work like a charm.
You must take care that the equation can have $0$, $1$ or $2$ solutions.
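For what it's worth, a minimal Newton iteration on $g$ might look like this. It is only a sketch with arbitrary parameter values: a careful implementation would first bracket the $0$, $1$ or $2$ roots, since an unguarded Newton step can overshoot into $x<0$, where $x^b$ is undefined for non-integer $b$.

```python
import numpy as np

def newton_g(a, b, x0, tol=1e-12, max_iter=100):
    """Newton's method on g(x) = x - a*(1 - x**b), starting from x0."""
    x = x0
    for _ in range(max_iter):
        g  = x - a * (1.0 - x**b)
        dg = 1.0 + a * b * x**(b - 1.0)   # g'(x)
        step = g / dg
        x -= step
        if abs(step) < tol:
            break
    return x

# arbitrary test values: a = r/A, b = -gamma/log(alpha) with alpha in (0, 1)
a, b = 2.0 / 3.0, -0.5 / np.log(0.7)
x = newton_g(a, b, x0=0.5)
print(x, x - a * (1.0 - x**b))   # root and residual (~0)
```

Note that for positive parameters and $0<\alpha<1$ we have $a,b>0$, so $g$ is strictly increasing on $x>0$ with $g(0)=-a<0$ and $g(1)=1>0$: there is then exactly one root, and it lies in $(0,1)$, consistent with the observation in the question.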
This analysis could go further if you at least specify whether the parameters $(A,\alpha,\gamma)$ are positive or negative.
Edit
Concerning the number of solutions, back to $f(t)$, we have $$f'(t)=A \log (\alpha ) \alpha ^t-\gamma r e^{-\gamma t}$$ $$f''(t)=A \log ^2(\alpha ) \alpha ^t+\gamma ^2 r e^{-\gamma t}$$ Assuming that the three parameters are positive (as said in comments), then $\forall t$, $f''(t) >0$.
The first derivative cancels at $$t_*=-\frac{\log \left(\frac{A \log (\alpha )}{\gamma\, r}\right)}{\log (\alpha )+\gamma }$$ which, in the real domain, will exist only if $\alpha >1$. If $t_*$ exists and $t_*>0$, the point corresponds to a minimum by the second-derivative test.
So,
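The expressions for $t_*$, $f'$ and $f''$ can be checked numerically for a case with $\alpha>1$ (a sketch; the parameter values are arbitrary positives):

```python
import numpy as np

A, alpha, r, gamma = 1.0, 1.5, 2.0, 0.5   # arbitrary values with alpha > 1

# stationary point t* = -log(A*log(alpha)/(gamma*r)) / (log(alpha) + gamma)
t_star = -np.log(A * np.log(alpha) / (gamma * r)) / (np.log(alpha) + gamma)

def fprime(t):
    # f'(t) = A*log(alpha)*alpha**t - gamma*r*exp(-gamma*t)
    return A * np.log(alpha) * alpha**t - gamma * r * np.exp(-gamma * t)

def fsecond(t):
    # f''(t) = A*log(alpha)**2*alpha**t + gamma**2*r*exp(-gamma*t) > 0
    return A * np.log(alpha)**2 * alpha**t + gamma**2 * r * np.exp(-gamma * t)

print(t_star, fprime(t_star), fsecond(t_star))  # f'(t*) ~ 0, f''(t*) > 0
```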