Let $u:\mathbb R^d\to\mathbb R$ be twice Fréchet differentiable.
What's the second Fréchet derivative ${\rm D}^2u$ of $u$?
It's clear that ${\rm D}u$ is a mapping$^1$ $\mathbb R^d\to\mathfrak L(\mathbb R^d,\mathbb R)$ and ${\rm D}^2u$ is a mapping $\mathbb R^d\to\mathfrak L(\mathbb R^d,\mathfrak L(\mathbb R^d,\mathbb R))$. From that, it's easy to see that $${\rm D}u(x)y=\nabla u(x)\cdot y=\sum_{i=1}^dy_i\frac{\partial u}{\partial x_i}(x)=(y\cdot\nabla)u(x)\;\;\;\text{for all }x,y\in\mathbb R^d\;.\tag 1$$ How can we obtain ${\rm D}^2u$? I'm sure ${\rm D}^2u$ can be expressed somehow in terms of the Hessian $\nabla^2u$, maybe we've got $${\rm D}^2u(x)yz=\nabla^2u(x)y\cdot z\;\;\;\text{for all }x,y,z\in\mathbb R^d\;,$$ but I'm unsure.
$^1$ Let $\mathfrak L(A,B)$ be the set of bounded, linear operators from $A$ to $B$.
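As a concrete sanity check of formula $(1)$, here is a short numerical sketch (purely illustrative, with a hypothetical sample function $u(x)=\sin(x_1)+x_1x_2^2$ and its hand-computed gradient) comparing $\nabla u(x)\cdot y$ against a centered finite difference of $t\mapsto u(x+ty)$:

```python
import numpy as np

# Hypothetical sample function u: R^2 -> R, chosen for illustration.
def u(x):
    return np.sin(x[0]) + x[0] * x[1]**2

# Analytic gradient of u: (cos(x1) + x2^2, 2*x1*x2).
def grad_u(x):
    return np.array([np.cos(x[0]) + x[1]**2, 2 * x[0] * x[1]])

x = np.array([0.3, -1.2])
y = np.array([0.7, 0.4])

# Du(x)y = grad u(x) . y, per formula (1).
Du_y = grad_u(x) @ y

# Centered finite-difference approximation of d/dt u(x + t*y) at t = 0.
t = 1e-6
fd = (u(x + t * y) - u(x - t * y)) / (2 * t)

print(abs(Du_y - fd))  # tiny: the two agree up to O(t^2) + roundoff
```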
Fix $x_0 \in \mathbb R^d$. As you have noticed, for every $x \in \mathbb R^d$ and every $k = (k_1,\dots,k_d) \in \mathbb R^d$, $$Du(x)\cdot k = \sum_{i=1}^d u_{x_i}(x)k_i.$$ Now, for each $i=1,\dots,d$ and $h=(h_1,\dots,h_d) \in \mathbb R^d$, let us write $$u_{x_i}(x_0+h)-u_{x_i}(x_0) = \nabla u_{x_i}(x_0)\cdot h + r_i(h),$$ where $r_i(h) = o(\|h\|)$ as $\|h\| \rightarrow 0$. For $k=(k_1,\dots,k_d) \in \mathbb R^d$, we have $$Du(x_0+h)\cdot k - Du(x_0) \cdot k = \sum_{i=1}^d [u_{x_i}(x_0+h) - u_{x_i}(x_0)] k_i = \sum_{i=1}^d [\nabla u_{x_i}(x_0)\cdot h]k_i + \sum_{i=1}^d r_i(h)k_i. $$ Thus, by the Cauchy–Schwarz inequality, $$\left|(Du(x_0+h) - Du(x_0) - T \cdot h ) \cdot k \right| \le \sum_{i=1}^d |r_i(h)k_i|\le \left( \sum_{i=1}^d r_i(h)^2 \right)^{1/2}\|k\|,$$ where $T: h \in \mathbb R^d \mapsto T\cdot h \in \mathfrak L(\mathbb R^d,\mathbb R)$ is defined, for each $h = (h_1,\dots,h_d) \in \mathbb R^d$ and each $k = (k_1,\dots,k_d) \in \mathbb R^d$, by $$(T \cdot h) \cdot k := \sum_{i=1}^d [\nabla u_{x_i}(x_0)\cdot h] k_i = \sum_{i=1}^d \sum_{j=1}^d u_{x_i x_j}(x_0) h_jk_i.$$ Hence $$ \left\| Du(x_0+h) - Du(x_0) - T\cdot h \right\|_{\rm op} \le \left( \sum_{i=1}^d r_i(h)^2 \right)^{1/2}.$$ Given $\epsilon > 0$, one can choose $\delta>0$ such that $\|h\|\le \delta$ implies $$|r_i(h)| \le \frac{\epsilon}{\sqrt{d}} \|h\| \qquad (i=1,\dots,d),$$ whence $$ \left\| Du(x_0+h) - Du(x_0) - T\cdot h \right\|_{\rm op} \le \epsilon\|h\| $$ for these $h$. It follows that $D^2u(x_0)=T$, so the description of $T$ above is precisely what $D^2u(x_0)$ is.
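The defining estimate $\| Du(x_0+h) - Du(x_0) - T\cdot h \| \le \epsilon\|h\|$ can be observed numerically. Below is a sketch under an assumed sample function $u(x)=\sin(x_1)+x_1x_2^2$ (not from the question), using the fact that the operator norm of a linear functional $k\mapsto v\cdot k$ on $\mathbb R^d$ is just the Euclidean norm of $v$:

```python
import numpy as np

# Hypothetical sample function u(x) = sin(x1) + x1*x2^2, for illustration.
def grad_u(x):  # analytic gradient of u
    return np.array([np.cos(x[0]) + x[1]**2, 2 * x[0] * x[1]])

def hess_u(x):  # analytic matrix of second partials u_{x_i x_j}
    return np.array([[-np.sin(x[0]), 2 * x[1]],
                     [2 * x[1],      2 * x[0]]])

x0 = np.array([0.3, -1.2])
T = hess_u(x0)  # (T.h).k = sum_{i,j} u_{x_i x_j}(x0) h_j k_i

direction = np.array([0.6, -0.8])  # a fixed unit direction
ratios = []
for t in [1e-1, 1e-2, 1e-3]:
    h = t * direction
    # Remainder Du(x0+h) - Du(x0) - T.h, represented as a vector whose
    # Euclidean norm equals the operator norm of that functional.
    rem = grad_u(x0 + h) - grad_u(x0) - T @ h
    ratios.append(np.linalg.norm(rem) / np.linalg.norm(h))

print(ratios)  # decreases toward 0, consistent with the o(||h||) remainder
```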
If one is willing to identify the space $\mathfrak L(\mathbb R^d,\mathfrak L(\mathbb R^d,\mathbb R))$ with the space $\mathfrak L(\mathbb R^d,\mathbb R^d;\mathbb R)$ consisting of all bilinear forms $b :\mathbb R^d\times \mathbb R^d \rightarrow \mathbb R$, the visualization of $D^2u(x_0)$ becomes a bit clearer. The $d \times d$ matrix of the bilinear form $$D^2u(x_0) :((h_1,\dots,h_d),(k_1,\dots,k_d)) \in \mathbb R^d \times \mathbb R^d \mapsto \sum_{i=1}^d \sum_{j=1}^d u_{x_i x_j}(x_0) h_j k_i \in \mathbb R$$ has $u_{x_i x_j}(x_0)$ as its entry in the $j$-th row and $i$-th column ($1 \le i,j \le d$): $$[D^2u(x_0)]= \left( \begin{array}{ccc} u_{x_1 x_1} (x_0) & \cdots & u_{x_d x_1} (x_0) \\ \vdots & \ddots & \vdots \\ u_{x_1 x_d}(x_0) & \cdots & u_{x_d x_d}(x_0) \end{array} \right). $$ In this way, for each $h=(h_1,\dots,h_d) \in \mathbb R^d$ and $k=(k_1,\dots,k_d) \in \mathbb R^d$, we recover $(D^2u(x_0) \cdot h)\cdot k$ by performing the product $$[(D^2u(x_0) \cdot h) \cdot k ] = \left( \begin{array}{ccc} h_1 & \cdots & h_d \end{array} \right) \left( \begin{array}{ccc} u_{x_1 x_1} (x_0) & \cdots & u_{x_d x_1} (x_0) \\ \vdots & \ddots & \vdots \\ u_{x_1 x_d}(x_0) & \cdots & u_{x_d x_d}(x_0) \end{array} \right) \left( \begin{array}{ccc} k_1 \\ \vdots \\ k_d \end{array} \right).$$ Since $u$ is twice differentiable, the mixed partials are symmetric, so this matrix is exactly the Hessian $\nabla^2u(x_0)$, and $D^2u(x_0)hk = \nabla^2u(x_0)h\cdot k$, confirming the formula conjectured in the question.
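The matrix identity above also admits a quick numerical check: $h^{\mathsf T}[D^2u(x_0)]k$ should agree with the mixed second directional difference of $u$ along $h$ and $k$. A sketch, again assuming the illustrative function $u(x)=\sin(x_1)+x_1x_2^2$ with its hand-computed second partials:

```python
import numpy as np

# Hypothetical sample function for illustration: u(x) = sin(x1) + x1*x2^2.
def u(x):
    return np.sin(x[0]) + x[0] * x[1]**2

def hess_u(x):  # matrix [u_{x_i x_j}(x)]; symmetric since u is smooth
    return np.array([[-np.sin(x[0]), 2 * x[1]],
                     [2 * x[1],      2 * x[0]]])

x0 = np.array([0.3, -1.2])
h = np.array([0.5, 1.0])
k = np.array([-0.2, 0.8])

# (D^2 u(x0) . h) . k as the matrix product h^T [D^2 u(x0)] k.
bilinear = h @ hess_u(x0) @ k

# Mixed centered difference: d^2/(dt ds) u(x0 + t*h + s*k) at t = s = 0.
t = 1e-4
fd = (u(x0 + t*h + t*k) - u(x0 + t*h - t*k)
      - u(x0 - t*h + t*k) + u(x0 - t*h - t*k)) / (4 * t**2)

print(abs(bilinear - fd))  # small: agreement up to O(t^2) + roundoff
```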