I've gone through classes on differential and integral calculus, but I feel we've mostly just learned different methods for computing integrals, and different questions regarding the computation of derivatives and integrals.
I'd like to know: why exactly do these computations work so well for elementary functions like $f(x) = x^k$ or $g(x) = ax + b$, but not for other functions? For instance, combinations of $f(x)$ and $g(x)$ already begin to cause trouble: even if they eventually turn out to be integrable, at first glance this is not obvious for $f(x)^{g(x)}$, or if it is, perhaps not for $f(x)^{{g(x)}^{f(x)}}$ or some more complicated function.
In an attempt to better understand the integrals of simple functions, I want to understand: if the integral is nothing other than the area under a function, and we have a perfectly good way of finding that area (Riemann sums), why is there any problem when we're given a function we can evaluate, even if at first we don't know the corresponding area? We have the value of the function, i.e. of $y$, at every $x$, so the area should come easily enough, no?
I can look at a graph, and instantly I can say: well, the following other curve could be an integral of the above one.
Ultimately, I'd like to deepen my understanding of calculus, so if someone has recommendations for books or other resources I might look at, I would highly appreciate it. I know this is a vague question, but at the moment I look at a graph of a function and its derivative and fail to see a relationship between them that could be generalized, or I don't see the bigger picture, so I'm not sure exactly how to frame this properly.
The primary method you probably have for computing integrals is applying the fundamental theorem of calculus, which turns the problem of calculating an integral into the problem of finding an anti-derivative and calculating its values at two points.
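To make that reduction concrete with a standard example (the particular integrand here is just illustrative): if $F' = f$, then $\int_a^b f(x)\,dx = F(b) - F(a)$, so

$$\int_0^2 x^2\,dx = \left[\frac{x^3}{3}\right]_0^2 = \frac{8}{3} - 0 = \frac{8}{3}.$$

The whole computation rests on knowing an anti-derivative $F(x) = x^3/3$ in closed form.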
The problem you're addressing is two-fold:
1) We don't really like most numbers.
What I mean here is that the "typical" real number is some transcendental thing whose exact value is unlikely to make you happy. In this case, you probably only really care about some finite part of the decimal expansion of the given number, in which case you would calculate the integral exactly the way you propose: Have some approximation scheme for the values of the function and simply compute a suitably large Riemann sum. This is the fundamental idea of numeric integration.
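As a sketch of what such a scheme looks like in practice (the function, interval, and step count here are purely illustrative), this computes a left Riemann sum directly from evaluations of the function:

```python
import math

def riemann_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] with a left Riemann sum of n rectangles."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

# Example: integrate sin on [0, pi]; the exact value is 2.
approx = riemann_sum(math.sin, 0.0, math.pi, 100_000)
print(approx)  # close to 2, but only to finitely many digits
```

The point is exactly the one above: you never learn the exact value this way, only as many decimal digits as you are willing to pay for in function evaluations.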
In this vein, it is worth noting that polynomials have nice algebraic properties: If you know the input, you know the universe of the output. Say my polynomial has rational coefficients and I compute it at some $x$. Then I know the result must have the form $\sum_{k=0}^{n} a_k x^k$ for $a_k\in \mathbb{Q}$, which means that if I know $x$, I can actually say something about this number fairly accurately. This is one reason integration of polynomials works out so nicely.
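This closure property can even be seen computationally: since $\mathbb{Q}$ is closed under addition and multiplication, a polynomial with rational coefficients evaluated at a rational point gives an exactly representable rational number, no approximation needed. A minimal sketch (the polynomial and evaluation point are made up for illustration):

```python
from fractions import Fraction

def poly_eval(coeffs, x):
    """Evaluate sum(coeffs[k] * x**k) exactly via Horner's scheme."""
    result = Fraction(0)
    for c in reversed(coeffs):
        result = result * x + c
    return result

# Example: p(x) = 1/2 + 3x + x^2 evaluated at x = 2/3.
p = [Fraction(1, 2), Fraction(3), Fraction(1)]
print(poly_eval(p, Fraction(2, 3)))  # exact: 1/2 + 2 + 4/9 = 53/18
```

Contrast this with something like $e^x$ at a rational point, where the output is transcendental and all you can ever report is an approximation.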
2) What does an anti-derivative even look like for a general function?
The other thing going on here is that, when applying the fundamental theorem of calculus, you need some idea of what an anti-derivative of your function is without actually calculating the anti-derivative as an integral; in that case, you'd have already calculated the integral, which would render the exercise pointless. Elementary functions have the nice property that their anti-derivatives are also elementary functions, and this helps a lot.
You'll notice that elementary functions tend to be solutions to nice differential equations, i.e. $y^{(n)}\equiv 0$ for polynomials of degree less than $n$, $y'=y$ in the case of $\exp(x)$ and $y''=-y$ in the case of $\sin$ and $\cos$, and you can deduce their antiderivatives simply by looking at the equations. Then, you can begin studying those equations separately and learn more about them.
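To spell out one instance of reading an antiderivative off such an equation: if $y'' = -y$, then $y = -(y')'$, so $-y'$ is an antiderivative of $y$:

$$\int y\,dx = -y' + C \qquad \text{whenever } y'' = -y.$$

Taking $y = \sin$ (so $y' = \cos$) immediately gives $\int \sin x\,dx = -\cos x + C$, with no integration technique required beyond the differential equation itself.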
$\pi$ and $e$ are not natural constants in any other sense than this: They popped up in problems we wanted to solve, and so we decided that they were important. In a completely parallel fashion, elementary functions become elementary because they arise as solutions to nice problems (or at least pop up in nice problems) and then, we start to work with them.