Wikipedia introduces deterministic chaos as follows:
[Small differences in initial conditions yielding widely diverging outcomes] can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable.
I understand this as a physical inability to reproduce identical initial conditions, or as imprecision in computation, where different codes or computers produce different results.
This is therefore related to imperfections in the real world (either physical or computational), but I fail to understand how it can have an effect in simulations.
If my code states that $\frac{1}{3}=0.34$, that will always be the case, and subsequent simulations will lead to the same results. It is not as if $\frac{1}{3}=0.34$ on Monday and $0.3333$ on Tuesday.
My question: does the chaotic behaviour of equations arise in simulations even without introducing random rounding errors or changes in initial conditions (i.e. within one code on one computer)? If so, what is the fundamental reason for this?
In other words, is chaos an expression of real-world limitations rather than a fundamental, formal inability to predict the future?
I think mathematicians tend to think of these things on a theoretical level. A course on deterministic models or dynamical systems would cover concepts like the Lyapunov exponent (pointed out in a comment by M. Nestor): https://en.wikipedia.org/wiki/Lyapunov_exponent
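To make that concept concrete, here is a minimal sketch (my own illustration, not part of the original discussion) that estimates the Lyapunov exponent of the logistic map $x_{n+1} = r x_n (1 - x_n)$ by averaging $\log|f'(x_n)|$ along an orbit. A positive exponent signals chaos; a negative one signals a stable orbit. The parameter choices are illustrative.

```python
import math

def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_iter=100_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the orbit average of log|f'(x)|, where f'(x) = r*(1 - 2x)."""
    x = x0
    for _ in range(n_transient):       # let the orbit settle onto its attractor
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n_iter

print(lyapunov_logistic(4.0))  # positive (chaotic); theory gives ln 2
print(lyapunov_logistic(2.5))  # negative (orbit settles onto a fixed point)
```

Note that the estimate is itself a deterministic computation: running it twice with the same arguments returns the same value.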
I want to start by saying that the inability to perfectly replicate experimental conditions, or numbers on a computer, each time is not what is meant by "chaos". These real-world limitations exist but often go unstudied, because such differences in initial conditions occur on a microscopic level. Scientists who cannot repeat experiments identically each time must take averages and report error bars. Only those studying highly controlled systems (e.g. simulations) can start talking about, or quantifying, chaos.
Your question of "when/where does chaotic behaviour arise" is more of a question of semantics, at least in my day-to-day experience.
In a pragmatic sense, yes: rounding errors or changes in initial conditions are how we see chaotic behaviour. When doing deterministic simulations, at least in what my colleagues and I do, reproducibility is important and expected. When we compare two simulations with the same initial conditions and physics, we expect them to behave the same up until some point (after which we say some small rounding error occurred). We also both understand that if the initial conditions were changed slightly, the two simulations would behave similarly at first and then diverge wildly later.
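This pragmatic picture is easy to demonstrate. The sketch below (again illustrative, not from the original answer) iterates the logistic map $x_{n+1} = r x_n (1 - x_n)$: two runs with identical initial conditions agree bit for bit, while a $10^{-12}$ perturbation of the initial condition eventually produces a completely different orbit.

```python
def trajectory(x0, r=3.9, n=80):
    """Iterate the logistic map, a standard chaotic toy model, for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Same code, same machine, same initial condition: bitwise-identical runs.
run_a = trajectory(0.2)
run_b = trajectory(0.2)
print(run_a == run_b)  # True

# A perturbation of 1e-12 grows roughly exponentially until it saturates.
run_c = trajectory(0.2 + 1e-12)
print(abs(run_a[5] - run_c[5]))                                 # still tiny
print(max(abs(a - c) for a, c in zip(run_a[60:], run_c[60:])))  # order one
```

The exponential growth rate of the perturbation is exactly what the Lyapunov exponent quantifies.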
This is the point of view that can most easily be shared with non-maths colleagues. I don't think the second question, "what is the fundamental reason", is normally answered, especially for complex systems. Instead, statistical properties are often studied (e.g. thermodynamics).
In a theoretical sense, no, you do not need to test different initial conditions to see that there is a difference: a system inherently has a degree of chaos, quantified for instance by its Lyapunov exponents. There will always be the question of how big a change in initial conditions, over how long a time, will be detectable, and looking at a finite number of simulations will never give you the full picture. The most obvious contrast is between a single and a double pendulum: https://en.wikipedia.org/wiki/Double_pendulum
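To illustrate that contrast, here is a rough sketch (my own, under the assumption of equal unit masses and equal rod lengths) that integrates the standard double-pendulum equations of motion with classical RK4. Two identical runs match exactly, while a $10^{-8}$ rad perturbation of the first angle leads to a completely different state after 20 seconds.

```python
import math

G, L = 9.81, 1.0  # gravity (m/s^2) and rod length (m); equal unit masses

def deriv(y):
    """Equations of motion for a double pendulum with equal masses and
    lengths (standard form; state is [theta1, theta2, omega1, omega2])."""
    t1, t2, w1, w2 = y
    d = t1 - t2
    den = L * (3.0 - math.cos(2.0 * d))
    a1 = (-3.0 * G * math.sin(t1) - G * math.sin(t1 - 2.0 * t2)
          - 2.0 * math.sin(d) * (w2 * w2 * L + w1 * w1 * L * math.cos(d))) / den
    a2 = (2.0 * math.sin(d) * (2.0 * w1 * w1 * L + 2.0 * G * math.cos(t1)
          + w2 * w2 * L * math.cos(d))) / den
    return [w1, w2, a1, a2]

def rk4_step(y, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(y)
    k2 = deriv([yi + 0.5 * dt * ki for yi, ki in zip(y, k1)])
    k3 = deriv([yi + 0.5 * dt * ki for yi, ki in zip(y, k2)])
    k4 = deriv([yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6.0 * (p + 2 * q + 2 * r + s)
            for yi, p, q, r, s in zip(y, k1, k2, k3, k4)]

def simulate(theta1, theta2, t_end=20.0, dt=0.001):
    y = [theta1, theta2, 0.0, 0.0]   # released from rest
    for _ in range(int(t_end / dt)):
        y = rk4_step(y, dt)
    return y

a = simulate(math.pi / 2, math.pi / 2)
b = simulate(math.pi / 2, math.pi / 2)
c = simulate(math.pi / 2 + 1e-8, math.pi / 2)
print(a == b)                                 # True: fully reproducible
print(max(abs(x - z) for x, z in zip(a, c)))  # far larger than 1e-8
```

A single pendulum integrated the same way would show the perturbation staying small; the divergence here is a property of the double pendulum's dynamics, not of the integrator.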
As I understand it, rigorously exploring the "fundamental reason" a system behaves a certain way requires quite a bit of work; see, for example: How to study this highly nonlinear and coupled suit of ODE in network application. I am no expert in this, though.