Chaos theory is about sensitive dependence on initial conditions: in a simple deterministic system, a tiny change in the initial conditions leads to an enormous change in the output. I was wondering whether the same kind of effect applies to numerical methods, where we discard errors such as round-off or chopping (truncation) errors. The difference here, however, is that we don't change the initial conditions slightly; instead, we alter the value at each step by rounding or chopping.
Is there any resource where I can find a connection between these two?
I think you are too fixated on the typical setup of chaos theory. A numerical error is not really different from a change in the initial conditions: suppose the state of your system at time $t$ is $y(t)$. If you evolve the system from $t_0$ to $t_1$ and then continue to evolve it to $t_2$, you may just as well regard $y(t_1)$, instead of $y(t_0)$, as the initial condition for the integration to $t_2$. A rounding error committed at $t_1$ is then nothing but a perturbation of that initial condition.
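A minimal sketch of this equivalence, using the logistic map $x \mapsto 4x(1-x)$ as a stand-in chaotic system (my choice of map, perturbation size, and step counts are illustrative assumptions, not anything specific from the question):

```python
def logistic(x, r=4.0):
    """One step of the logistic map, chaotic for r = 4."""
    return r * x * (1.0 - x)

def evolve(x, n, r=4.0):
    """Iterate the map n times from state x."""
    for _ in range(n):
        x = logistic(x, r)
    return x

x0 = 0.3
# (a) perturb the initial condition at t0
a = evolve(x0 + 1e-12, 100)
# (b) evolve exactly to an intermediate time t1, then apply a one-time
#     "rounding error" there and continue -- a perturbed initial
#     condition for the second leg of the integration
mid = evolve(x0, 50)
b = evolve(mid + 1e-12, 50)
# (c) unperturbed reference trajectory
c = evolve(x0, 100)
print(abs(a - c), abs(b - c))
```

In both cases the tiny perturbation is amplified at the same exponential (Lyapunov) rate, so after enough steps both deviations reach the size of the attractor; where exactly the error enters the trajectory makes no qualitative difference.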
From yet another point of view, you do not consider initial conditions at all, but rather trajectories at a certain distance from each other. Here, a separation caused by numerical errors is equivalent to a separation of initial conditions.
Of course, in reality you have more than one isolated numerical error, but this only makes things tediously complicated without any fundamental qualitative change. This is why chaos theory typically considers perturbed initial conditions rather than permanent noise.
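To see that permanent noise behaves like a single initial perturbation, here is a sketch comparing the two in the same logistic map (again, the perturbation size, the choice of rounding to 10 digits per step, and the 0.1 separation threshold are illustrative assumptions):

```python
def step(x):
    """One step of the logistic map with r = 4."""
    return 4.0 * x * (1.0 - x)

x0 = 0.3
exact = pert = noisy = x0
pert += 1e-10  # one-time perturbation of the initial condition

first_pert = first_noise = None  # step at which each trajectory separates
for n in range(1, 500):
    exact = step(exact)
    pert = step(pert)
    noisy = round(step(noisy), 10)  # permanent noise: round every step
    if first_pert is None and abs(pert - exact) > 0.1:
        first_pert = n
    if first_noise is None and abs(noisy - exact) > 0.1:
        first_noise = n
print(first_pert, first_noise)
```

Both trajectories separate from the exact one after a comparable number of steps, since the dominant effect is the exponential amplification of whichever small error entered first, not the accumulation of the later ones.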
Finally, note the existence of the shadowing lemma, which states that (under suitable hyperbolicity assumptions) you can find a true trajectory of the system, i.e., one free of numerical errors, close to any numerically obtained trajectory.