Having discovered that the value at which numbers turn into $\infty$ can vary, it makes sense that very big numbers would eventually be capped at $\infty$.
I used my phone's calculator to experiment with this. Once I reached $1e309$, the result became $\infty$.
I then checked whether my computer has the same limit as my phone. It turns out its limit is $1e10000$, compared to my phone's $1e309$.
If you're confused about what limit I'm talking about, it is the first quantity that turns into $\infty$. Also, I used a Windows computer for this experiment.
So why do these limits vary? Is it the same with regular calculators (actual physical ones, not a digital app)?
Short version: The max for your phone is (slightly less than) $2^{2^{10}}$ because your phone is using double precision floating point numbers to represent values.
Computers typically represent numbers as strings of bits of some fixed length - so one might say, "a number is represented by a string of $32$ bits," giving rise to (at most) $2^{32}$ possible values - of which there is necessarily some largest and smallest value that can be represented, and the exact behavior when you exceed that range varies. Broadly, numbers may be represented in two forms: integral types (representing integers) and floating point types (representing real numbers to some accuracy).
The simpler case is integral types, which do the obvious thing and say, "I represent an integer in binary" - so, if we had $8$ bits to work with, $01001111$ would represent $79$, and we could represent all integers from $0$ to $255$ this way - and, if we tried to exceed this range, we'd wrap around as in modular arithmetic (mod $256$). To represent negative numbers, we often use something called two's complement, which essentially says that the most significant place value is negated - so $10000000$ represents $-128$, $10010100$ represents $-128+16+4=-108$, and $11111111$ represents $-1$. One can do this with any number of bits (or even a flexible number of bits), although most commonly these representations use $8$, $16$, $32$ or $64$ bits.
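As a minimal sketch of this behavior, here is some Python that simulates a fixed $8$-bit register by masking (the helper names are my own, purely for illustration):

```python
def to_uint8(x):
    """Keep only the low 8 bits: unsigned arithmetic mod 256."""
    return x & 0xFF

def to_int8(x):
    """Reinterpret the low 8 bits as two's complement:
    the top bit carries place value -128 instead of +128."""
    x &= 0xFF
    return x - 256 if x & 0x80 else x

print(to_uint8(255 + 1))      # 0: wraps around, as in arithmetic mod 256
print(to_int8(0b01001111))    # 79
print(to_int8(0b10010100))    # -108, i.e. -128 + 16 + 4
print(to_int8(0b11111111))    # -1
```

Languages like C get this behavior for free from fixed-width hardware registers; Python's integers are unbounded, which is why the masking is needed to imitate it.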
General purpose calculators tend to use a floating point representation of a number - essentially, the bit string is broken up into pieces. An extremely common format for doing this with $32$ bits is as follows:
We use the first bit to represent the sign of the number.
The next $8$ bits represent an exponent $e$ ranging from $1$ to $254$.*
The next $23$ bits represent some number $n$ between $0$ and $2^{23}-1$.
Note that this representation contains two integral representations - one for an exponent and one for a "significand". The number represented by this format is $2^{e - 127}\cdot (1 + 2^{-23}\cdot n)$. This represents an odd set of numbers - note that there are about as many representable numbers between $0$ and $1$ as there are exceeding $1$ - essentially, it corresponds to writing down binary numbers to some fixed precision, assuming the most significant place value is neither too large nor too small. The largest representable number is about $2^{127}\cdot 2 = 2^{2^7}\approx 3.4028\times 10^{38}$. Floating point numbers that would exceed this range usually become some sort of representation of infinity - i.e. the computer no longer tracks the number in any meaningful way.
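To make the layout concrete, here is a small Python sketch (using the standard `struct` module; the function name is mine) that encodes a number in this $32$-bit format and splits the bits into the three fields described above:

```python
import struct

def float32_parts(x):
    """Encode x as an IEEE-754 single-precision float and split the
    32 bits into sign (1 bit), exponent e (8 bits), significand n (23 bits)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    e = (bits >> 23) & 0xFF
    n = bits & ((1 << 23) - 1)
    return sign, e, n

sign, e, n = float32_parts(1.0)
print(sign, e, n)   # 0 127 0, since 1.0 = 2**(127 - 127) * (1 + 0 / 2**23)

# Reconstruct a (normal) number from its fields using the formula above:
print((-1) ** sign * 2.0 ** (e - 127) * (1 + n / 2 ** 23))   # 1.0
```

Note how the exponent is stored with a bias of $127$: the stored value $e=127$ means an actual exponent of $0$.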
There are also longer floating point formats in common use - notably, a double precision floating point number uses $64$ bits, with $11$ bits for the exponent, and can represent numbers up to just below $2^{2^{10}}\approx 1.798\times 10^{308}$. This is almost certainly what is going on with your phone's calculator.
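You can see this limit directly in Python, whose `float` type is a double precision floating point number:

```python
import math
import sys

# Python floats are IEEE-754 doubles, so the double-precision limit is visible:
print(sys.float_info.max)        # 1.7976931348623157e+308, just under 2**1024
print(math.isinf(1e309))         # True: this literal already exceeds the max
print(sys.float_info.max * 2)    # inf: arithmetic past the max overflows
```

This matches the behavior you observed on your phone: $1e309$ is the first power of ten past the double precision maximum, so it displays as $\infty$.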
There are also larger formats for floating point numbers, which can represent an even greater range - your computer is likely using some larger representation, and it may be the case that it is programmed not to expose the full range of its representation to the user, but rather to softly max out at some round number (i.e. to fail intentionally because of software rather than because of the limits of the floating point format). There are also some systems that use decimal representations of numbers, which would lead to round bounds, although I believe that's fairly rare these days. It's also possible to program systems that compute to arbitrary precision - though with the performance and memory costs that this entails. This is very common in computer algebra systems and other more heavy duty mathematical applications.
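As a quick illustration of arbitrary precision, Python's built-in integers grow to whatever size memory allows and never turn into infinity, unlike its fixed-size floats:

```python
import math

# Integers in Python are arbitrary-precision: 10**10000 is computed exactly.
big = 10 ** 10000
print(len(str(big)))            # 10001 digits: a 1 followed by 10000 zeros

# Floats are fixed-size doubles, so a comparable magnitude overflows:
print(math.isinf(float(10 ** 308) * 10))   # True
```

A round software limit like your computer's $1e10000$ could plausibly come from an arbitrary- or extended-precision engine with an intentional cutoff, rather than from a hardware format boundary.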
(*One might note that $8$ bits can represent $256$ values and I've only prescribed $254$ exponents - the remaining two values are reserved for special floating point values such as infinity.)