The following is Taylor's theorem given in a numerical analysis textbook:
I couldn't interpret what $C^n[a,b]$ means and what the intuition behind it is. Could someone explain?
The notation $C^n[a,b]$ refers to a differentiability class:
Basically, $C^0$ denotes the class of functions that are continuous. Next, $C^1$ consists of functions that are differentiable with a continuous first derivative, and so on.
Essentially then, $C^n[a,b]$ denotes the set of functions that are $n$-times differentiable on the interval $[a,b]$ and whose $n$th derivative is continuous there. Thus $f', f'', \dots, f^{(n)}$ all exist on $[a,b]$ and are continuous.
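For a concrete illustration (my own example, not from the textbook), take $f(x) = x\,|x|$ on $[-1,1]$:

$$f(x) = \begin{cases} x^2 & x \ge 0 \\ -x^2 & x < 0 \end{cases}, \qquad f'(x) = 2|x|.$$

Here $f'$ exists everywhere and is continuous, so $f \in C^1[-1,1]$; but $f'(x) = 2|x|$ is not differentiable at $0$, so $f \notin C^2[-1,1]$. This shows the classes are strictly nested: $C^0 \supset C^1 \supset C^2 \supset \cdots$.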
The notation $f \in C^n[a,b]$ means that $f$ belongs to the set $C^n$ on $[a,b]$, and thus has $n$ continuous derivatives on the interval $[a,b]$. This is also basically explained by the next few words:
Note that the hypothesis only asks $f^{(n+1)}$ to exist, not to be continuous: since differentiability implies continuity, the existence of $f^{(n+1)}$ already guarantees that $f^{(n)}$ is continuous. The reason $f^{(n+1)}$ is required to exist at all is that it appears in the remainder term of Taylor's Theorem.
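Since the textbook's statement isn't reproduced in the question, here is the standard form it most likely takes (with the Lagrange form of the remainder): if $f \in C^n[a,b]$ and $f^{(n+1)}$ exists on $[a,b]$, then for $x_0, x \in [a,b]$ there is a $\xi$ between $x_0$ and $x$ such that

$$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k \;+\; \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0)^{n+1}.$$

The sum is the Taylor polynomial, which uses exactly the $n$ continuous derivatives promised by $f \in C^n[a,b]$; the last term is the remainder, and it is precisely where the extra derivative $f^{(n+1)}$ appears.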