Given a sufficiently nice function $$f:\mathbb R^n\rightarrow\mathbb R$$ one can define a first-order tensor of all first-order partial derivatives in the standard way to obtain the gradient, and the same kind of idea allows one to construct a second-order tensor of all second-order partial derivatives to obtain the Hessian.
Is there a name for the analogously obtained third-order tensor of third-order partial derivatives? I'm writing a bit of code that uses that object to compute the Hessian of a matrix's eigenvalues with respect to some other voodoo that's probably irrelevant to this question, and if there is a standard nomenclature I'd prefer to use it.
Based on the relative silence here at MSE and my inability to find an answer online or at my local university, I'm going to go out on a limb and say that this tensor does not have a standard name.
In my own work, I've simply adopted the nomenclature of $d_0$ for the original function, $d_1$ for its gradient, $d_2$ for the Hessian, $d_3$ for this nonsense, and so on. Strictly speaking, I'm working with maps to $\mathbb R^m$, so each of these tensors is one order higher (e.g., the Jacobian rather than the gradient).
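For concreteness, here is a quick NumPy sketch of that naming for a map to $\mathbb R^m$; the particular map, step size, and function names are made up for illustration:

```python
import numpy as np

# Hypothetical illustration of the d_k naming for a map f: R^n -> R^m.
# d0 is the map itself, d1 its Jacobian (shape (m, n)); d2 would stack the
# component Hessians into shape (m, n, n), and d3 would have shape (m, n, n, n).

n, m = 3, 2

def d0(x):
    # A simple cubic map, chosen so higher-order tensors are nonzero.
    return np.array([x[0] ** 3 + x[1] * x[2], x[2] ** 3])

def d1(x, h=1e-6):
    # Forward-difference Jacobian: one column per input coordinate.
    cols = [(d0(x + h * e) - d0(x)) / h for e in np.eye(n)]
    return np.stack(cols, axis=-1)  # shape (m, n)

x = np.random.default_rng(0).standard_normal(n)
print(d1(x).shape)  # (2, 3)
```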
As somewhat of an aside, if anyone else stumbles across this answer looking for information about what I'm calling $d_3$, as a practical matter one can often get away with an approximation to the product $d_3T$ for an arbitrary tensor $T$, or even just $d_3v$ for an arbitrary vector $v$. There are more complicated schemes that control rounding error better, but one can use the following identity to save on the computation of $d_3$: $$d_3(x)v\approx\frac{d_2(x+rv)-d_2(x)}r.$$ Simply choose $r$ small enough for the approximation to hold and large enough to control rounding error. For a fixed $r$ the two aren't quite the same, but in practice the following centered scheme is nearly as simple to implement and controls rounding error better: $$d_3(x)v\approx\frac{d_2(x+rv)-d_2(x-rv)}{2r}.$$ Since $d_3$ typically dwarfs $d_2$ in total number of elements, this represents a significant savings in time at the cost of a hopefully small amount of approximation error.
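A minimal NumPy sketch of the centered scheme above, for a scalar function; the example function, the step sizes, and the finite-difference Hessian `d2` are illustrative choices rather than part of any particular library:

```python
import numpy as np

def f(x):
    # Example scalar function with a nonzero third-derivative tensor.
    return x[0] ** 2 * x[1] + x[1] ** 3

def d2(x, h=1e-4):
    # Central-difference Hessian of f (a standard numerical sketch).
    n = len(x)
    H = np.zeros((n, n))
    I = np.eye(n)
    for i in range(n):
        for j in range(n):
            H[i, j] = (
                f(x + h * I[i] + h * I[j])
                - f(x + h * I[i] - h * I[j])
                - f(x - h * I[i] + h * I[j])
                + f(x - h * I[i] - h * I[j])
            ) / (4 * h * h)
    return H

def d3_times_v(x, v, r=1e-4):
    # Centered difference of the Hessian along v: one n-by-n slice of d3
    # from only two Hessian evaluations, instead of the full tensor.
    return (d2(x + r * v) - d2(x - r * v)) / (2 * r)

x = np.array([1.0, 2.0])
v = np.array([0.0, 1.0])
# Analytically d3(x)v = [[2, 0], [0, 6]] for this f and v.
print(d3_times_v(x, v))
```

The point of the sketch is the cost argument from the text: `d3_times_v` touches only two Hessians ($2n^2$ entries each) rather than all $n^3$ entries of $d_3$.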