A quadratic form $H:\mathbb{R}^n\to\mathbb{R}$ is a function whose value at a vector $v = (\alpha_1,\cdots,\alpha_n)$ is given by $$\sum_{i,j=1}^{n}h_{ij}\alpha_i\alpha_j$$ where $(h_{ij})$ is a symmetric $n\times n$ matrix. We write $H\cdot v^2$ for the value of the quadratic form $H$ at the vector $v$. So:
$$H\cdot v^2 = \sum_{i,j=1}^{n}h_{ij}\alpha_i\alpha_j$$
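For concreteness, here is a small numeric sketch of this definition; the matrix `H` and vector `v` below are arbitrary examples of my own, not taken from the book:

```python
# Evaluate a quadratic form H . v^2 = sum over i, j of h_ij * a_i * a_j.

def quadratic_form(h, v):
    """Value of the quadratic form with symmetric matrix h at vector v."""
    n = len(v)
    return sum(h[i][j] * v[i] * v[j] for i in range(n) for j in range(n))

H = [[2.0, 1.0],
     [1.0, 3.0]]   # symmetric: h_ij = h_ji
v = (1.0, 2.0)

# 2*1*1 + 1*1*2 + 1*2*1 + 3*2*2 = 2 + 2 + 2 + 12 = 18
print(quadratic_form(H, v))  # → 18.0
```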
The hessian form of a twice differentiable function $f:U\to\mathbb{R}$ at the point $x\in U$ is denoted by $H(x)$ or $Hf(x)$. We know that $H(x) = d^2f(x)$, so:
$$H(x)\cdot v^2 = \sum_{i,j=1}^n\frac{\partial^2 f}{\partial x_i\partial x_j}(x)\alpha_i\alpha_j$$
Schwarz's theorem guarantees that the matrix $\left(\frac{\partial^2 f}{\partial x_i\partial x_j}(x)\right)$, called the hessian matrix of $f$ at the point $x$, is symmetric.
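As a quick sanity check of the symmetry that Schwarz's theorem promises, one can approximate the two orders of mixed partials of a concrete smooth function by central differences; the function $f(x,y)=x^2y+\sin(xy)$ below is my own example, not from the book:

```python
import math

def f(x, y):
    # A smooth example function; any C^2 function would do.
    return x**2 * y + math.sin(x * y)

def d2_xy(f, x, y, h=1e-4):
    """Central-difference approximation of d/dy (d/dx f)."""
    fx = lambda yy: (f(x + h, yy) - f(x - h, yy)) / (2 * h)
    return (fx(y + h) - fx(y - h)) / (2 * h)

def d2_yx(f, x, y, h=1e-4):
    """Central-difference approximation of d/dx (d/dy f)."""
    fy = lambda xx: (f(xx, y + h) - f(xx, y - h)) / (2 * h)
    return (fy(x + h) - fy(x - h)) / (2 * h)

a, b = 0.7, 1.3
# Both orders approximate the same exact value 2x + cos(xy) - xy*sin(xy).
print(abs(d2_xy(f, a, b) - d2_yx(f, a, b)))  # ≈ 0
```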
I know that the hessian form appears, for example, in the Taylor expansion of a function $f:\mathbb{R}^n\to\mathbb{R}$, although I do not know how to derive such an expansion.
It makes sense to talk about the hessian form of a function at a point, since the Taylor expansion is taken at a point and it 'generates' the second-order terms, which can be collected into a matrix.
As I understood from the definition in my book, a quadratic form $H$ is just a generalization of this particular case, where the hessian appears naturally in the Taylor expansion:
$$f(a+v) = f(a) + df(a)\cdot v + \frac{1}{2}d^2f(a)\cdot v^2 + \cdots + \frac{1}{p!}d^pf(a)\cdot v^p + r_p(v)$$
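One way to see the role of the term $\frac{1}{2}d^2f(a)\cdot v^2$ is to test the expansion on a function where it is exact, namely a quadratic polynomial (the expansion then terminates at $p=2$ with zero remainder). The sketch below uses a polynomial, gradient, and hessian of my own choosing, not from the book:

```python
def f(x, y):
    # Quadratic polynomial: its 2nd-order Taylor expansion is exact.
    return 3*x**2 + 2*x*y + 5*y**2 + x - y + 4

def taylor2(a, v):
    """f(a) + df(a).v + (1/2) Hf(a).v^2 for the f above."""
    ax, ay = a
    grad = (6*ax + 2*ay + 1, 2*ax + 10*ay - 1)   # df(a)
    H = [[6, 2], [2, 10]]                         # hessian matrix (constant here)
    hv2 = sum(H[i][j] * v[i] * v[j] for i in range(2) for j in range(2))
    return f(ax, ay) + grad[0]*v[0] + grad[1]*v[1] + 0.5 * hv2

a, v = (1.0, -2.0), (0.5, 3.0)
print(f(a[0] + v[0], a[1] + v[1]) - taylor2(a, v))  # → 0.0
```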
I understand that the quadratic form of a twice differentiable function appears there (even though I do not know how to derive it), but it is completely unknown to me why someone would want to generalize this and study other quadratic forms. So this question is just to get a better understanding of all of this:
do you know an easier way to make the hessian appear naturally, or can you explain how it appears in the Taylor expansion?
and
what is the utility of studying other quadratic forms?
Also: why is it called a quadratic form?