I was reading a calculus book, and at some point the following standard limit was mentioned: $$\lim_{n\to\infty}\left(1+\frac1n\right)^n=e$$ Afterwards, the author of this book invited the reader to try and compute, using a calculator, the value of the expression $$\left(1+\frac1n\right)^n$$ for various inputs. He then stated that the outputs would look something like this: $$\begin{array}{c|c} n&(1+1/n)^n\\ \hline 10^3&2.7169239\\ 10^4&2.7181459\\ \dots&\dots\\ 10^{11}&1\\ 10^{12}&1 \end{array}$$ The author then invites the reader to try and work out why this happens.
I thought this was an interesting question, and one that actually shows how calculus can deal with concepts that wouldn't normally be quantifiable or measurable in the physical world.
My theory is that, since any calculator can only store values up to a certain precision, after a certain point the expression $1/n$ simply gets rounded down to $0$, there not being enough bits to store the floating-point value.
The whole expression is then evaluated as $1^n$, which is constantly equal to $1$. It doesn't matter if the calculator eventually can't keep up with the value of the exponent either and rounds it to something else, because a power with base $1$ equals $1$ regardless of what the exponent ends up being.
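For what it's worth, the same collapse shows up on a computer, not just on a calculator. Here is a quick sketch in Python; it assumes IEEE 754 double precision, so the threshold differs from that of a 10-digit calculator:

```python
# Sketch assuming IEEE 754 double precision (Python floats).
# A 10-digit calculator collapses earlier, around n = 10^11;
# doubles hold out until roughly n = 10^16.
for k in (4, 8, 12, 16, 20):
    n = 10 ** k
    print(f"n = 10^{k:<2}  (1 + 1/n)^n = {(1 + 1/n) ** n}")
```

With doubles, the printed value first drifts away from $e$ and then becomes exactly $1.0$ once $1 + 1/n$ rounds to $1$.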
Any other theories?


Not exactly.
$\dfrac1n$ can be represented in floating-point for huge values of $n$; $\dfrac1n=10^{-12}$ is not at all a problem. This is because the float uses an exponent, so that its order of magnitude can vary wildly.
The trouble comes from the addition of $1$, which is comparatively large, and the sum is one followed by many zeroes (like $\color{green}{1.0000000000}\color{red}{01}$). As the number of significant digits in the representation is limited, sooner or later only the zeroes remain. Even before that, the accuracy that you get is poor because many significant figures are dropped in the binary representation.
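To see the digit loss concretely, here is a small experiment in Python (assuming IEEE 754 double precision):

```python
# Assuming IEEE 754 double precision (Python floats).
x = 1e-12
y = (1 + x) - 1         # the 1 absorbs most of x's digits, then is subtracted out
print(y)                # not exactly 1e-12: significant figures were dropped
print(abs(y - x) / x)   # relative error around 1e-4 -- huge for doubles
print(1 + 1e-17 == 1.0) # True: a sufficiently small term vanishes entirely
```

A relative error of about $10^{-4}$ in $1+1/n$ is then amplified by the exponent $n$, which explains the degraded digits in the table of the question.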
Note that several programming languages have a special function to deal with this particular case of loss of accuracy, usually denoted $\text{log1p}$. It computes the value of
$$\log(x+1)$$ accurately for small $x$, i.e. without suffering from the rounding problem. You would use it as follows:
$$\left(1+\frac1n\right)^n=e^{n\log(1+1/n)}=e^{n\text{ log1p}\left(1/n\right)}.$$
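In Python, for instance, the rewriting above can be sketched as follows (`math.log1p` is the standard-library version of $\text{log1p}$):

```python
import math

def via_log(n):
    # e^(n * log(1 + 1/n)) -- suffers from the rounding in 1 + 1/n
    return math.exp(n * math.log(1 + 1 / n))

def via_log1p(n):
    # e^(n * log1p(1/n)) -- never forms 1 + 1/n explicitly
    return math.exp(n * math.log1p(1 / n))

n = 10 ** 16
print(via_log(n))    # collapses to 1.0: 1 + 1/n rounds to exactly 1.0
print(via_log1p(n))  # stays close to e = 2.71828...
```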
Note that, by the Taylor expansion of $\log(1+x)$,
$$n\,\text{log1p}\left(\frac1n\right)=1-\frac1{2n}+\frac1{3n^2}-\cdots$$
so that for large $n$ we obtain $e^1$ (even when there is rounding in the computation of $1-\dfrac1{2n}$). Compare the values obtained with $\log$ and with $\text{log1p}$:
$$\begin{array}{c|c|c}
n & \text{using }\log & \text{using log1p}\\
\hline
10^{1} & 2.5937424601000023 & 2.5937424601\\
10^{2} & 2.7048138294215285 & 2.7048138294215263\\
10^{3} & 2.7169239322355936 & 2.7169239322358925\\
10^{4} & 2.7181459268249255 & 2.718145926825225\\
10^{5} & 2.7182682371922975 & 2.7182682371744895\\
10^{6} & 2.7182804690957534 & 2.7182804693193767\\
10^{7} & 2.7182816941320818 & 2.718281692544966\\
10^{8} & 2.7182817983473577 & 2.7182818148676366\\
10^{9} & 2.7182820520115603 & 2.7182818270999043\\
10^{10} & 2.7182820532347876 & 2.7182818283231316\\
10^{11} & 2.7182820533571100 & 2.7182818284454537\\
10^{12} & 2.7185234960372378 & 2.718281828457686\\
10^{13} & 2.7161100340869010 & 2.718281828458909\\
10^{14} & 2.7161100340870230 & 2.7182818284590318\\
10^{15} & 3.0350352065492620 & 2.718281828459044\\
10^{16} & 1.0 & 2.718281828459045\\
10^{17} & 1.0 & 2.718281828459045\\
10^{18} & 1.0 & 2.718281828459045\\
10^{19} & 1.0 & 2.718281828459045\\
10^{20} & 1.0 & 2.718281828459045
\end{array}$$
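Such a comparison can be reproduced with a few lines of Python; the last digits may vary slightly with the platform's math library:

```python
import math

# Compare e^(n*log(1 + 1/n)) with e^(n*log1p(1/n)) for n = 10^1 .. 10^20.
# Assumes IEEE 754 doubles; exact trailing digits depend on the libm used.
for k in range(1, 21):
    n = 10 ** k
    via_log = math.exp(n * math.log(1 + 1 / n))
    via_log1p = math.exp(n * math.log1p(1 / n))
    print(f"10^{k:<2}  {via_log:<22.17g}  {via_log1p:.17g}")
```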
Final comment:
The numerical computation of limits (which lies virtually at the core of all numerical computation) suffers from two limitations:

- you cannot take $n$ to be as large as you want, because of the workload involved; this is called the truncation error;
- you have to limit the computation accuracy for practical reasons; this is called the roundoff error.
Usually, the design of a numerical solution is a tradeoff between both.