How is Maclaurin Series different from Taylor Series?
With a little bit of surfing, I figured out that Maclaurin series is an approximation about the point $0$. Does that mean that Maclaurin series would give correct answers only about $x=0$ and, for example, if we need to calculate bigger values we will need to use Taylor theorem and approximate about that big value to get our result?
For an analytic function $f(x)$ we define its Taylor series centered at $a$ by
$$T(x) = \sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n.$$
The Maclaurin series for $f(x)$ is simply the Taylor series centered at $0$ so:
$$M(x) = \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n.$$
To determine the Maclaurin series we need to find the values of $f(x)$ and its derivatives at $x=0$, while to find the Taylor series centered at $a$ we do the same at $x = a$. Depending on the function, this can be very hard for certain values of $a$. (For example, try computing $e^{1.2}$ versus $e^0$.)
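To make this concrete, here is a small sketch in Python. For $e^x$ every derivative at $0$ equals $1$, so the Maclaurin coefficients are simply $1/n!$, and the hard-looking value $e^{1.2}$ falls out of easy derivative evaluations at $0$. (The helper name `exp_maclaurin` and the choice of 20 terms are mine, not from the answer.)

```python
import math

def exp_maclaurin(x, terms=20):
    """Approximate e^x with the first `terms` terms of its Maclaurin series.

    Every derivative of e^x at 0 is 1, so the nth coefficient is 1/n!.
    """
    return sum(x ** n / math.factorial(n) for n in range(terms))

# e^{1.2} via derivatives evaluated only at 0:
approx = exp_maclaurin(1.2)
```

With 20 terms the partial sum already agrees with `math.exp(1.2)` to within floating-point noise, because the tail $1.2^n/n!$ shrinks very fast.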
How close the center of a Taylor series is to a point $b$ dictates how "easy" it is to approximate $f(b)$: the closer the center, the fewer terms we need for a given accuracy.
So, for a particular function $f(x)$ ($\sin(x)$ for example) the Maclaurin series may be easier to find than the Taylor series centered at 998 (compare $\sin(998)$ and $\cos(998)$ versus $\sin(0)$ and $\cos(0)$) but the Maclaurin series may need many, many more terms to approximate $f(1000)$ than the Taylor series at 998.
Both are extremely useful; which one you use in a given situation is a matter of computational cost. (With $\sin(x)$, we pay either at the beginning, when finding the series, or at the end, when computing a lot of terms.)
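The trade-off above can be sketched numerically. Since every derivative of $\sin$ is $\pm\sin$ or $\pm\cos$ (all bounded by $1$), the Lagrange remainder after $n$ terms is at most $|x-a|^n/n!$, which gives a clean stopping rule. The function below (my own helper; the tolerance and the sample points $x=10$, $a=9$ are illustrative stand-ins for the answer's $1000$ and $998$, which would overflow naive floating-point term-by-term summation) counts how many terms each center needs:

```python
import math

def sin_series(x, a=0.0, tol=1e-12, max_terms=200):
    """Sum the Taylor series of sin centered at a.

    Stops once |x-a|^n / n! < tol; since every derivative of sin is
    bounded by 1, this also bounds the Lagrange remainder.
    Returns (approximation, number of terms used).
    """
    # Derivatives of sin at a cycle through sin, cos, -sin, -cos.
    derivs = (math.sin(a), math.cos(a), -math.sin(a), -math.cos(a))
    total = 0.0
    for n in range(max_terms):
        p = (x - a) ** n / math.factorial(n)
        if abs(p) < tol:
            return total, n
        total += derivs[n % 4] * p
    return total, max_terms

maclaurin_val, maclaurin_terms = sin_series(10.0)         # center 0
nearby_val, nearby_terms = sin_series(10.0, a=9.0)        # center 9
```

Both sums converge to $\sin(10)$, but the Maclaurin series needs roughly three times as many terms as the series centered at $9$, where $|x-a|=1$; the price of the nearby center is that its coefficients require $\sin(9)$ and $\cos(9)$ up front.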
As we've pointed out, the Maclaurin series is a special case of the more general Taylor series. If you want to know the history and why we sometimes give the Taylor series at $0$ this special name, this bit on Wikipedia is a good starting point!