I am reading the proof of Itō's formula and self-studying stochastic integration.
Essentially, the authors prove the formula in dimension $d = 1$ and leave the extension to higher dimensions to the reader as an exercise (which is fine with me).
In dimension $d = 1,$ they proceed as follows:
$$\begin{align*} f(X_t) - f(X_0) &= \sum_{k = 1}^m [f(X_{t_k}) - f(X_{t_{k - 1}})] \\ &= \sum_{k = 1}^m f'(X_{t_{k - 1}})(X_{t_k} - X_{t_{k - 1}}) + \dfrac{1}{2} \sum_{k = 1}^m f''(\eta_k)(X_{t_k} - X_{t_{k - 1}})^2, \end{align*}$$ where $\eta_k$ is defined through the Taylor identity $f(X_{t_k}) - f(X_{t_{k - 1}}) = f'(X_{t_{k - 1}})(X_{t_k} - X_{t_{k - 1}}) + \dfrac{1}{2} f''(\eta_k)(X_{t_k} - X_{t_{k - 1}})^2.$ This identity makes $f''(\eta_k)$ measurable on the set $\{X_{t_k} \neq X_{t_{k - 1}}\},$ and on $\{X_{t_k} = X_{t_{k-1}}\}$ we can choose $\eta_k = X_{t_{k - 1}},$ say.

Fine, I understand their proof in dimension $d = 1.$ The problem arises when I try to prove the same theorem in higher dimensions. I would like to use the following Taylor expansion (which, by the way, is what the authors suggest): $$\begin{align*} f(X_t) - f(X_0) &= \sum_{k = 1}^m [f(X_{t_k}) - f(X_{t_{k - 1}})] \\ &= \sum_{k = 1}^m \nabla f(X_{t_{k - 1}}) \cdot (X_{t_k} - X_{t_{k - 1}}) + \dfrac{1}{2} \sum_{k = 1}^m \mathbf{H}f (\eta_k) \cdot (X_{t_k} - X_{t_{k - 1}})^{(2)}, \end{align*}$$ where $\mathbf{H} f$ is the Hessian of $f$ and, for any vector $h,$ I write $h^{(2)} = (h, h),$ so that $\mathbf{H}f(\eta_k) \cdot h^{(2)} = h^\top \mathbf{H}f(\eta_k)\, h.$
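To spell out (as I understand it) why measurability is easy in $d = 1$: on the event $\{X_{t_k} \neq X_{t_{k-1}}\}$ one can simply solve the Taylor identity for the scalar $f''(\eta_k),$

$$f''(\eta_k) = \frac{2\bigl[f(X_{t_k}) - f(X_{t_{k-1}}) - f'(X_{t_{k-1}})(X_{t_k} - X_{t_{k-1}})\bigr]}{(X_{t_k} - X_{t_{k-1}})^2},$$

which is an explicit (Borel) measurable function of $X_{t_{k-1}}$ and $X_{t_k}.$ In $d > 1$ the analogous identity only pins down the single scalar $\mathbf{H}f(\eta_k) \cdot (X_{t_k} - X_{t_{k-1}})^{(2)},$ not the entries of the matrix $\mathbf{H}f(\eta_k)$ individually, which seems to be exactly where the one-dimensional argument breaks down.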
The crux is: how can I show that $\eta_k$ can be chosen so that $\mathbf{H} f(\eta_k)$ is measurable? Alternatively, is there a different approach that avoids this issue?