When we try to evaluate an integral such as, say
$$\int_a^b{f(x)dx}$$
it is often the case that we can analytically find
$$\int{f(x)dx}$$
a little faster (if you lack an example, imagine skipping the evaluation of specific terms during integration by parts: integrating by parts suddenly forces you to evaluate two terms instead of one).
Therefore, I feel tempted to omit the limits of integration in general and only apply them at the very end, to save some time.
Unfortunately, I'm not sure whether this works in every scenario. Think of integration by substitution, where the limits change along with the substitution.
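For concreteness, here is a small substitution example of my own (not from the original discussion): with $u = x^2$, the limits change from $x$-values to $u$-values,
$$\int_0^2 2x\,e^{x^2}\,dx = \int_0^4 e^u\,du = e^4 - 1.$$
If we instead drop the limits, find $\int 2x\,e^{x^2}\,dx = e^{x^2} + C$, substitute back to $x$, and only then apply the original limits $0$ and $2$, we get the same value $e^4 - 1$. The limits only need adjusting while we work in the new variable.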
And so, I want to ask: when exactly can I ignore the limits of integration and instead apply them to an indefinite integral at the end?
If you're having trouble understanding the difference, just give $\int_0^{\pi}{e^x\cos(x)\,dx}$ a try. Finding $\int{e^x\cos(x)\,dx}$ is easy, but trying to find $\int_0^{\pi}{e^x\cos(x)\,dx}$ directly (integrating by parts and evaluating all the terms mid-way) is somewhat cumbersome.
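To illustrate, here is the indefinite computation via the standard integration-by-parts argument. Writing $I = \int e^x\cos(x)\,dx$ and integrating by parts twice,
$$I = e^x\cos(x) + \int e^x\sin(x)\,dx = e^x\cos(x) + e^x\sin(x) - I,$$
so $I = \frac{e^x(\sin(x)+\cos(x))}{2} + C$. Only now do we evaluate at the limits:
$$\int_0^{\pi} e^x\cos(x)\,dx = \left[\frac{e^x(\sin(x)+\cos(x))}{2}\right]_0^{\pi} = -\frac{e^{\pi}+1}{2}.$$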
Alright, I'll try to answer it myself, now that the question has been discussed in the comments. Thanks a lot to Travis and André Nicolas!
In general, we may always ignore the limits of integration and restore them at the very end. This is easily justified by the Fundamental Theorem of Calculus:
$$\int_a^bf(x)\,dx = F(b) - F(a) = \left[ F(x) \right]_a^b = \left[ \int f(x)\,dx \right]_a^b $$
given that $f$ is well-behaved on $[a, b]$.
Note, though, that there are some definite integrals that we can evaluate exactly even though their antiderivatives cannot be expressed in terms of elementary functions (thanks, Travis).
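A classic example of this last point is the Gaussian integral:
$$\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi},$$
which has a well-known exact value even though $e^{-x^2}$ has no elementary antiderivative. The usual trick (squaring the integral and switching to polar coordinates) works only for these particular limits, so here the indefinite-first strategy is unavailable in principle.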