I have two questions regarding how two concepts that involve complex-valued functions may come up in a natural way. (Non-natural ways are: these concepts come up in order to present a unified theory, for the sake of generalization, etc.; natural ways are: there is a specific problem that can only be solved if we use complex-valued functions, complex-valued functions help us make arguments shorter, etc.)
The first is the Lebesgue integral for complex-valued functions. The second is a complex Hilbert space (an example of which, to make a connection, is the $L_2$ space of complex-valued functions).
Remarks (except for the first one, the EDIT, these are not essential to the question above, in case you don't have the time to read them):
EDIT: Since a lot of answers essentially said "when building up our theory you just generalize [to complex-valued functions] because you can, and much later it will turn out that the generalization has benefits", consider the following way of approaching how to learn a new mathematical theory.
You take the perspective of inventing the theory yourself, with the textbook as an oracle supplying a constant stream of sudden insight. Since developing/inventing a theory takes hard work, you need a strong incentive to do so. Merely generalizing for the sake of it won't do! So suppose, for example, that you had already developed complex analysis. (This itself is easy to motivate: after rigorously arriving at these mysterious complex numbers, which turned out to be useful for solving equations, you can ask yourself whether you can also do analysis with these new numbers; there you have the motivation for developing complex analysis.) But why go further? At this point intrinsic mathematical interest is exhausted, and unless a compelling reason is given, you wouldn't go further, e.g., generalize this again and again up to the holomorphic functional calculus (which is mentioned in one answer below).
Now I'm not saying you shouldn't generalize; I'm just saying that at this point I don't know the applications/reasons that make that effort worthwhile, and finding these applications/reasons is precisely what I'm after. The easier it is to describe these motivations, the better. They also don't need to be historically accurate.
To give an example for the statement in the last sentence: today no one would use, as a first motivation for the concept of a group, the fact that groups arose as permutations of the roots of fifth-degree equations, as was historically the case. Instead, one simply notices some of the many examples of groups that come up in the most basic mathematical constructions, such as $(\mathbb{R},+)$, and uses the prevalence of these structures as the motivation for distilling their common properties into the group axioms.

For the (unsigned) Lebesgue integral of $[0,+\infty)$-valued functions there is an easy, natural way in which it comes up: the geometric question of what kind of integral one gets if, in the description of the integral as the content of the "area under the graph" of the function, one replaces the Jordan content (which gives the Riemann integral) with the Lebesgue measure. Since the Lebesgue measure, and more generally the question of what it means to measure something, itself enjoys nice geometric descriptions, this entire approach is a natural line of questions.
(With a bit of a stretch one may also say that the Lebesgue integral for $\overline{\mathbb{R}}$-valued functions (which have to be absolutely integrable) comes up naturally. But I can't see any natural motivation that allows the further generalization of the Lebesgue integral to $\mathbb{C}$-valued functions.)

For Hilbert spaces, I beg you not to mention the complex Hilbert spaces that come up in quantum mechanics: I understand nothing of theoretical physics, so using this as a motivation won't help me (and if this were the only motivation, I'd be disappointed by the concession mathematics makes to physics by inventing a whole class of spaces just for them). Other than that, I don't know of any complex Hilbert space that arises naturally. If one wants to solve PDEs, apparently the main application of Hilbert spaces, it seems that all the relevant Hilbert spaces are real. The properties of complex Hilbert spaces can readily be abstracted from concrete spaces (like the $L_2$ spaces of complex-valued functions), but a natural way in which they come up is unknown to me.
In addition to, and reiterating some points of, @TrialAndError's good answer and comments:
First, agreed, textbooks very often do a lousy job of explaining why we'd care about some generalization-or-other. Apart from the usual corruptions of textbook writing, this is substantially due to the style of inverting examples and "definitions/theorems": instead of giving examples which compel and/or predict theorems (and essentially determine definitions, up to meaningless differences), the style is to first give formalizations which, in fact, were the result of long experimentation with many examples that arose in their own right.
Another bad possibility is invention of fake history, or fake dialectics, giving supposedly-easily-understood reasons for why we do things the way we do. Again, the genuine history is often much more complicated than textbooks pretend, and tangled up with many other issues. But those complicated, real-life-math issues mandated innovation. Not idle generalization or axiomatization. E.g., Cantor invented set theory to try to understand pointwise convergence of Fourier series, and much of early 20th-century set theory and point-set topology continued in that vein. Instead, as in calculus taught to students who don't know any science, it is common to make up fake applications, which are mostly unconvincing and visibly contrived. Leaky conical water tanks, forsooth!
Complex numbers arise inevitably in looking at the solution by radicals of the cubic, even if all the roots are real.
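To make this concrete (Bombelli's classical example, added here as an illustration): the cubic $x^3 = 15x + 4$ has three real roots, among them $x = 4$, yet Cardano's formula produces

$$x=\sqrt[3]{2+11i}+\sqrt[3]{2-11i},$$

since $q^2/4 - p^3/27 = 4 - 125 = -121$. Only by computing with complex numbers, via $(2\pm i)^3 = 2\pm 11i$, does one recover $x=(2+i)+(2-i)=4$. The radicals cannot be kept real even though every root is.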
Discovery of multiply periodic functions (= elliptic functions) by inverting elliptic integrals like $\int_0^x dt/\sqrt{t^3+1}$, and of functions with even more periods (= "abelian" functions), in the very early 19th century made the complex numbers inescapable, because the "periods" were not usually real.
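To sketch the analogy (my illustration, not part of the historical record): inverting $\int_0^x dt/\sqrt{1-t^2} = z$ gives $x = \sin z$, periodic with the single real period $2\pi$. Inverting the elliptic integral $\int^x dt/\sqrt{4t^3 - g_2 t - g_3} = z$ gives the Weierstrass function $x = \wp(z)$, which satisfies

$$\wp(z+\omega_1)=\wp(z+\omega_2)=\wp(z)$$

for two periods $\omega_1,\omega_2$ whose ratio $\omega_2/\omega_1$ is not real. So the natural domain of $\wp$ is $\mathbb{C}$, not $\mathbb{R}$: there is no way to fit two independent periods onto the real line.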
Riemann's celebrated 1859 paper on the zeta function (and the distribution of prime numbers) showed that there are infinitely many complex roots of the analytically-continued zeta, and (in case one thought the previous point was just a fantasy) that the location of these roots tightly controls the distribution of prime numbers (arguably a tangible, not-made-up thing!). (No, we do not know exactly why Riemann did this. Probably experiments to try to make progress toward proving the Prime Number Theorem.)
That is, in the tumult and chaos that is genuine mathematics, one is eager to pursue any experimental avenue that has some hope of breaking log-jams... It is not so common to say that "pure mathematicians" experiment, but if we look at what they're literally doing, they are indeed experimenting with different possibilities, noting what happens, and proceeding to the next. Thus, in various situations, complex numbers, complex vector spaces, complex function theory, etc., all were discovered to have remarkable utility. So these ideas are "keepers". The "losers" are not so often reported, so we don't so easily know ...
Edit: Why complex rather than real vector spaces and Hilbert spaces? Well, @TrialAndError already really explained, but perhaps some reiteration and addition is worthwhile. Indeed, in the Sturm-Liouville spectral theory starting c. 1833, up through Bôcher's and Steklov's (and others') completion of many details c. 1896, real vector spaces seemed fine, and reflective of much physical reality besides. c. 1900, Hilbert, Schmidt, Fredholm, Volterra, and many others succeeded in giving rigor to PDEs (as opposed to ODEs, whose basics had been dealt with well by Picard a few years earlier) by converting them to "integral equations". In happy circumstances (compact self-adjoint operators...), real-vector-space ideas similar to the Rayleigh-Ritz method for finding eigenvalues still succeeded. No imperative to look at complex numbers, perhaps.

The applecart was upset a good bit by the advent of the (very successful) "mathematics" of quantum mechanics in the 1920s, in the hands of Dirac and others. Then as now, "observables" were (typically) unbounded "self-adjoint operators" on (sure, real, for a moment) Hilbert spaces. For example, in the simplest case, multiplication by $x^2$ and $d^2/dx^2$ on square-integrable functions on the real line, and the "Schrödinger operator" for the "quantum harmonic oscillator", $-d^2/dx^2+x^2$. The problem is that this isn't really defined on all of $L^2(\mathbb R)$, real-valued or not. But physicists did not care about this too much. Apparently some mathematicians were unhappy with the state of affairs, e.g., J. von Neumann, who in 1929 gave a rigorous criterion for extendability of "(obviously) symmetric" operators to (genuinely) self-adjoint ones... in terms of complex eigenspaces for the adjoint of the given operator. I do not know of any simpler way to formulate this. (Hilariously, supposedly Dirac, when told of the great accomplishment of distinguishing "symmetric" from "self-adjoint", expressed inability/disinterest in appreciating the distinction.)
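For reference, von Neumann's criterion can be stated in one line (the standard formulation, stated here from memory): for a densely defined symmetric operator $T$, define the deficiency indices

$$n_\pm = \dim\ker(T^* \mp i).$$

Then $T$ admits self-adjoint extensions if and only if $n_+ = n_-$, and is essentially self-adjoint if and only if $n_+ = n_- = 0$. The eigenvalue equations $T^*f = \pm i f$ simply make no sense over a real scalar field, which is one precise way the complex numbers become unavoidable here.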
Indeed, from what I have read, Banach formulated much of his work in terms of real vector spaces... And, for example, the Hahn-Banach theorem is really a theorem about real vector spaces, with the complex case a mere corollary.