In this video at 46:11, the Taylor series of a function is rewritten in a very different way, using the exponential of an operator. I'll describe the method used:
$$ f(x) = \sum_{i} \frac{(x-a)^i}{i!} \frac{d^i f(x)}{dx^i}|_a= \left[ \sum_i \frac{(x-a)^i}{i!} \frac{d^i}{dx^i} |_a \right]f(x) = e^{ (x-a) \frac{d}{dx}|_a } f(x)$$
Are there any interpretations for what exactly is going on in the above manipulation?
For all operators $A$ on a vector space $V$, the exponential $\exp(A)$ can be defined via the series $$ (*)\qquad\exp(A)v:=\sum_{n\geq 0}\frac{A^nv}{n!} $$ whenever it converges, for $v\in V$. On the RHS $A^n$ is the $n$-fold composition of $A$.
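As a minimal numerical sketch of the definition $(*)$: for a nilpotent operator the series terminates, so one can compute $\exp(A)v$ exactly by summing terms. (The matrix and vector below are my own arbitrary choices, not from the answer.)

```python
import numpy as np

# A is nilpotent (A @ A = 0), so the series (*) terminates:
# exp(A) = I + A exactly.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

def exp_apply(A, v, terms=10):
    """Apply exp(A) to v via the series (*), truncated at `terms`."""
    out = np.zeros_like(v, dtype=float)
    term = v.astype(float)
    for n in range(terms):
        out += term                  # add A^n v / n!
        term = A @ term / (n + 1)    # next term of the series
    return out

v = np.array([2.0, 3.0])
print(exp_apply(A, v))   # (I + A) v = [5, 3]
```

For bounded $A$ the truncated sum converges in norm as `terms` grows; here it is exact after two terms.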
For example, if $A$ is a bounded operator on a Banach space $V$, then it is possible to prove that $\exp(A)$ is again a bounded operator on $V$ (namely the series converges in the operator norm).
About your question proper, there are several formalizations of the formula $$ (\exp(a\partial_x)f)(x)=f(x+a). $$ For instance, it holds for all polynomials $f$, and more generally for all entire functions $f$; indeed, in those cases the formal definition $(*)$ of the exponential of the operator $a\partial_x$ makes sense, because, as the computation you have written down in the question shows, $(*)$ reduces to the Taylor series of $f$, which has infinite radius of convergence. (For polynomials, it actually has finitely many terms.)
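A quick symbolic check of the polynomial case (the cubic below is an arbitrary sample of mine):

```python
import sympy as sp

x, a = sp.symbols('x a')
f = x**3 - 2*x + 5           # an arbitrary sample polynomial

# exp(a d/dx) f = sum_n a^n f^{(n)}(x) / n!  -- a finite sum for polynomials
shifted = sum(a**n / sp.factorial(n) * sp.diff(f, x, n) for n in range(4))

# agrees with the translation f(x + a)
assert sp.expand(shifted - f.subs(x, x + a)) == 0
print(sp.expand(shifted))
```

Since $f$ has degree $3$, the series $(*)$ stops after the fourth term, and the identity is exact.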
This should convince you that the identity you wrote is not merely formal. However, if you are interested, I add a few more observations/examples below, to convince you that this type of argument is often useful.
The exponential is only one example of an operator defined through a series (though surely one of the most prominent). One can do several similar things; some examples that come to mind are the following.
Define the logarithm of an operator $A$ via $$ \log(1+A)=\sum_{n\geq 1}(-1)^{n+1}\frac{A^n}{n}, $$ again, when it converges. Take for example $A$ to be $\Delta=\exp(\partial_x)-1$, namely the forward difference operator $$ (\Delta f)(x)=f(x+1)-f(x), $$ acting on the space of polynomials. You get the formula $$ f'(x)=-\sum_{n\geq 1}\frac{(-\Delta)^nf}{n}(x), $$ where $(-\Delta)^n f=\sum_{i=0}^n(-1)^{i}{n\choose i}f(x+i)$; in a sense this formula "inverts" the Taylor series (Taylor gives the translation in terms of derivatives, this one gives the derivative in terms of translations). Try it on polynomials: it works! (And it is a finite sum.)
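The "derivative from differences" formula can be checked symbolically; here is a sketch on an arbitrary degree-$3$ polynomial (my choice), where $\Delta^4 f=0$ so three terms suffice:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 + x                 # an arbitrary polynomial of degree 3

def Delta(g):
    """Forward difference: (Delta g)(x) = g(x+1) - g(x)."""
    return sp.expand(g.subs(x, x + 1) - g)

# f'(x) = -sum_{n>=1} (-Delta)^n f / n ; Delta^4 f = 0, so 3 terms suffice
deriv, Dn = sp.Integer(0), f
for n in range(1, 4):
    Dn = Delta(Dn)                               # Dn = Delta^n f
    deriv += sp.Rational((-1)**(n + 1), n) * Dn  # -(-Delta)^n f / n
assert sp.expand(deriv - sp.diff(f, x)) == 0
print(sp.expand(deriv))      # 3*x**2 + 1
```

Each application of $\Delta$ lowers the degree by one, which is why the series is finite on polynomials.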
For another example, it is a nice exercise to show that if $B_k$ are the Bernoulli numbers, you can define the operator $$ \frac {\partial_x}{\exp(\partial_x)-1}=\sum_{n\geq 0}\frac{B_n}{n!}\partial_x^n, $$ again, whenever the series converges; for instance, it converges for all polynomials. You can also prove (using the properties of the Bernoulli numbers) that the operator composition $$ (\exp(\partial_x)-1)\circ\frac {\partial_x}{\exp(\partial_x)-1}\circ \int_0^x=id $$ is the identity on the space of polynomials in $x$. To be pedantic, here the first operator is defined as in $(*)$. An application of this argument is an explicit formula for the sums $$ \sum_{i=0}^{n-1}f(i)=F(n)-F(0) $$ when $f$ is a polynomial. Here $F=\frac {\partial_x}{\exp(\partial_x)-1}\int_0^xf(t)dt$ is a polynomial, defined using the series above (which is a sum of finitely many terms once applied to a polynomial); indeed, the operator identity says exactly that $F(x+1)-F(x)=f(x)$, so the sum telescopes. An example is the Faulhaber-Bernoulli formula when $f(x)=x^p$. I took this example from this nice note, if you wish to read more.
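A symbolic sketch of this summation trick, using the standard convention $B_1=-\tfrac12$ (as produced by `sympy.bernoulli`, matching the generating series of $x/(e^x-1)$); with this convention $F(x+1)-F(x)=f(x)$, so $F(n)-F(0)$ telescopes to $\sum_{i=0}^{n-1}f(i)$. The test polynomial $f(x)=x^2$ is my choice:

```python
import sympy as sp

x, n, i = sp.symbols('x n i')
f = x**2                     # sample polynomial; Faulhaber case p = 2

G = sp.integrate(f, (x, 0, x))           # G(x) = int_0^x f(t) dt
# F = d/dx / (exp(d/dx) - 1) applied to G, via the Bernoulli series;
# the sum is finite on polynomials (here deg G = 3)
F = sum(sp.bernoulli(k) / sp.factorial(k) * sp.diff(G, x, k)
        for k in range(4))

# The operator identity gives F(x+1) - F(x) = f(x) ...
assert sp.expand(F.subs(x, x + 1) - F - f) == 0
# ... so F(n) - F(0) telescopes to sum_{i=0}^{n-1} f(i)
lhs = sp.summation(f.subs(x, i), (i, 0, n - 1))
assert sp.simplify(lhs - (F.subs(x, n) - F.subs(x, 0))) == 0
print(sp.factor(F.subs(x, n) - F.subs(x, 0)))  # equals n(n-1)(2n-1)/6
```

With the other common convention $B_1=+\tfrac12$ one gets the sum $\sum_{i=1}^{n}f(i)$ instead; only the boundary terms differ.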
To conclude, another (less trivial) interpretation can be given in the Hilbert space $\mathcal H:=L^2(\mathbb R)$. The transformations $U_a$ (for all real $a$), defined on elements $f\in\mathcal H$ via $(U_af)(x):=f(x+a)$, form a one-parameter group of unitary transformations of $\mathcal H$. A very deep result, known as Stone's theorem, tells you (omitting many very important technical details, which you can look up on the Wikipedia page, for instance) that for any such group there is a (possibly unbounded) self-adjoint operator $P$ on $\mathcal H$ such that $\exp(i Pa)=U_a$. In the case where the $U_a$ are the translation operators above, $P$ is indeed the self-adjoint extension of the symmetric operator $-i\partial_x$.
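A small numerical illustration of this last point (my own setup, approximating $L^2$ by periodic samples): the Fourier transform diagonalizes $P=-i\partial_x$, so $U_a=\exp(iPa)$ becomes multiplication by $e^{i\xi a}$ in frequency space. The sine test function is band-limited, so the discrete check is exact.

```python
import numpy as np

N = 128
x = 2 * np.pi * np.arange(N) / N       # periodic grid on [0, 2*pi)
a = 0.7                                 # translation amount
f = np.sin(x)                           # band-limited, so the check is exact

# In Fourier space, exp(i P a) with P = -i d/dx acts as multiplication
# by exp(i xi a), where xi are the discrete (angular) frequencies.
xi = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi   # integer frequencies
shifted = np.fft.ifft(np.fft.fft(f) * np.exp(1j * xi * a)).real

assert np.allclose(shifted, np.sin(x + a))   # (U_a f)(x) = f(x + a)
```

Note this sidesteps all the unbounded-operator subtleties that Stone's theorem handles rigorously; in the discrete setting everything is a finite unitary matrix.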