Suppose one is given a function
\begin{equation} f(x_1,\dots,x_n) = g\bigg(x_1,\bigg(\sum_{i=2}^n x_i^2\bigg)^{1/2}\bigg), \end{equation}
and denote \begin{equation} t:=x_1 \quad \text{and}\quad r:= \bigg(\sum_{i=2}^n x_i^2\bigg)^{1/2}. \end{equation}
I am told that the determinant of the Hessian of $f$ is given by \begin{equation} \det D^2f = (g_{tt}g_{rr}-g_{tr}^2)\bigg(\frac{g_r}{r}\bigg)^{n-2}, \end{equation} and it seems there must be an easy way to see this, but I cannot work it out. I have tried to derive it by computing the Hessian: the first partial derivatives are \begin{align} \frac{\partial f}{\partial x_i} & = \frac{\partial g}{\partial t}\frac{\partial t}{\partial x_i} + \frac{\partial g}{\partial r}\frac{\partial r}{\partial x_i} = \begin{cases}g_t & \text{if } i=1 \\ g_r\frac{x_i}{r} & \text{if } i\not=1 \end{cases} \end{align} and the second partial derivatives are \begin{equation} \frac{\partial^2 f}{\partial x_i\partial x_j} = \begin{cases}g_{tt} & \text{if } i=j=1 \\ g_{tr}\frac{x_j}{r} & \text{if }i=1, j\not=1 \\ g_{tr}\frac{x_i}{r} & \text{if }i\not=1, j=1 \\ g_{rr}\frac{x_i x_j}{r^2} + g_r\frac{\delta_{ij}}{r} - g_r\frac{x_ix_j}{r^3} & \text{if }i\not=1, j\not=1. \end{cases} \end{equation} I was hoping the Hessian would have a nice form (block diagonal or something) so that the determinant would be easy to compute, but this doesn't seem to be the case, unless I've calculated something wrong. Any help would be much appreciated! Thanks
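As a sanity check, the claimed identity can be verified symbolically for $n=3$; here is a minimal SymPy sketch, where the particular test function $g$ is an arbitrary choice made purely for illustration:

```python
import sympy as sp

# coordinates; positivity avoids branch issues with the square root
x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
t, r = sp.symbols('t r', positive=True)

g = t**3 + t*r**2 + r**4          # an arbitrary smooth test function (assumption)
r_expr = sp.sqrt(x2**2 + x3**2)
f = g.subs({t: x1, r: r_expr})    # f(x) = g(x1, sqrt(x2^2 + x3^2))

# left-hand side: determinant of the full Hessian of f
lhs = sp.hessian(f, (x1, x2, x3)).det()

# right-hand side: (g_tt g_rr - g_tr^2) (g_r / r)^(n-2) with n = 3
g_r = sp.diff(g, r)
g_tt, g_rr, g_tr = sp.diff(g, t, 2), sp.diff(g, r, 2), sp.diff(g, t, r)
rhs = ((g_tt*g_rr - g_tr**2) * (g_r/r)).subs({t: x1, r: r_expr})

assert sp.simplify(lhs - rhs) == 0  # the two sides agree identically
```

Any other smooth choice of $g$ should pass the same check.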
Here is my take on it. Let $f:\mathbb R^n\to\mathbb R$ be given by $f(\vec x) = g(t(\vec x), r(\vec x))$, where $g:\mathbb R^2\to\mathbb R$ is some twice differentiable function, $t:\mathbb R^n\to\mathbb R$ is given by $t(\vec x) = x_1$, and $r:\mathbb R^n\to\mathbb R$ is given by $r(\vec x) = \sqrt{\sum_{i=2}^n x_i^2}$. For ease of notation, we also define $h:\mathbb R^n\to\mathbb R^2$ by $h(\vec x) = (t(\vec x), r(\vec x))$ and, for $i\geq 2$, $\pi_i : \mathbb R^n\to\mathbb R$ by $\pi_i(\vec x) = \frac{x_i}{r(\vec x)}$ (away from the set where $r$ vanishes). We have $f = g\circ h$. By the chain rule, $$(Df)_{\vec x} = (D(g\circ h))_{\vec x} = (Dg)_{h(\vec x)}\circ(Dh)_{\vec x}.$$ This is easily computed (you already did it correctly): the first component $Df_1$ is $(g_t\circ h)(\vec x)$, whereas the $i^{\text{th}}$ component $Df_i$ for $i\geq 2$ is $((g_r\circ h)\cdot\pi_i)(\vec x)$, where $\cdot$ is the usual multiplication in $\mathbb R$. The matrix representation of $D^2f$ may now be obtained by differentiating each component as a function $\mathbb R^n\to\mathbb R$. Differentiating the first component gives $$(D(Df_1))_{\vec x} = (Dg_t)_{h(\vec x)}\circ (Dh)_{\vec x},$$ whereas for the $i^{\text{th}}$ component ($i\geq 2$), the chain rule together with the product rule gives $$(D(Df_i))_{\vec x} = \underbrace{\pi_i(\vec x)\cdot((Dg_r)_{h(\vec x)}\circ (Dh)_{\vec x})}_{C_{i, 1}(\vec x)} + \underbrace{((g_r\circ h)(\vec x))\cdot(D\pi_i)_{\vec x}}_{C_{i, 2}(\vec x)}.$$ Just to make sure we are on the same page, note that this is a vector with $n$ components; in particular, $\pi_i(\vec x)$ and $(g_r\circ h)(\vec x)$ are scalars.
To summarize what we have so far, we will write $$(D^2f)_{\vec x} = \left(\begin{matrix} (D^2f_1)_{\vec x}\\ C_{2, 1}(\vec x) + C_{2, 2}(\vec x) \\ \vdots \\ C_{n, 1}(\vec x) + C_{n, 2}(\vec x) \end{matrix} \right).$$
Now, we need to go through some tedious computations. Let's start off with computing $(Dg_r)_{h(\vec x)}\circ (Dh)_{\vec x}$. $$(Dg_r)_{h(\vec x)}\circ (Dh)_{\vec x} = (g_{rt}(h(\vec x)), g_{rr}(h(\vec x)))\circ\left(\begin{matrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & r_{x_2}(\vec x) & r_{x_3}(\vec x) & \cdots & r_{x_n}(\vec x)\end{matrix}\right) =$$ $$= (g_{rt}(h(\vec x)), r_{x_2}(\vec x)g_{rr}(h(\vec x)), r_{x_3}(\vec x)g_{rr}(h(\vec x)), \ldots, r_{x_n}(\vec x)g_{rr}(h(\vec x))).$$
Similarly, we get $$(Dg_t)_{h(\vec x)}\circ (Dh)_{\vec x} = (g_{tt}(h(\vec x)), r_{x_2}(\vec x)g_{tr}(h(\vec x)), r_{x_3}(\vec x)g_{tr}(h(\vec x)), \ldots, r_{x_n}(\vec x)g_{tr}(h(\vec x))).$$
Computing $(D\pi_i)_{\vec x}$ is easier done than said. We will just provide the result below. $$(D\pi_i)_{\vec x} = \left(0, -\frac{x_2x_i}{r^3(\vec x)}, -\frac{x_3x_i}{r^3(\vec x)}, \ldots, \frac{1}{r(\vec x)} - \frac{x_i^2}{r^3(\vec x)}, \ldots, -\frac{x_nx_i}{r^3(\vec x)} \right), $$ where only the $i^{\text{th}}$ component differs.
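The formula for $(D\pi_i)_{\vec x}$ can be checked mechanically; here is a quick SymPy sketch, where the dimension (three radial variables $x_2, x_3, x_4$) is an arbitrary choice for illustration:

```python
import sympy as sp

# radial variables x2, x3, x4; x1 does not enter pi_i, so that derivative is 0
xs = sp.symbols('x2:5', positive=True)
r = sp.sqrt(sum(xi**2 for xi in xs))

# verify d(x_i/r)/dx_j = delta_ij / r - x_i x_j / r^3 for all pairs (i, j)
for i, xi in enumerate(xs):
    pi = xi / r
    for j, xj in enumerate(xs):
        expected = (1 if i == j else 0)/r - xi*xj/r**3
        assert sp.simplify(sp.diff(pi, xj) - expected) == 0
```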
We now have everything we need. Note that $\det D^2f$ can be split into $2^{n-1}$ determinants by splitting each sum $C_{i, 1} + C_{i,2}$ via multilinearity of the determinant in the rows. Many of these determinants vanish: up to a scalar factor, $C_{i, 1}$ does not depend on $i$, so any determinant containing both $C_{i, 1}$ and $C_{j, 1}$ as rows for $i\neq j$ has two linearly dependent rows and is therefore zero. What survives are $n$ determinants: for each $2\leq i\leq n$, the determinant with $D^2f_1$ as the first row, $C_{i, 1}$ as the $i^{\text{th}}$ row and $C_{j, 2}$ as the $j^{\text{th}}$ row for $j\neq i$, together with the determinant whose first row is $D^2f_1$ and whose remaining rows are all $C_{j, 2}$. That is, $$\det D^2f = \sum_{i=2}^{n+1} \left|\begin{matrix} (D^2f_1)_{\vec x} \\ C_{2, 2}(\vec x) \\ \vdots \\ C_{i, 1}(\vec x) \\ \vdots \\ C_{n, 2}(\vec x)\end{matrix}\right|,$$ where the last summand ($i = n+1$) is interpreted as having all rows below the first equal to $C_{j, 2}$. We will now assume that $n=3$; the general case should be easy (but tedious) to derive using basic properties of determinants. Throughout the remainder of this proof, we denote $(g_r\circ h)(\vec x)$ by $g_r$, $r_{x_i}$ by $r_i$, and we do not write the arguments of functions. We have: $$\det D^2f = \frac{x_2 g_r}{r}\left| \begin{matrix}g_{tt} & r_2g_{tr} & r_3g_{tr} \\ g_{rt} & r_2g_{rr} & r_3g_{rr} \\ 0 & -\frac{x_2x_3}{r^3} & \frac{1}{r} - \frac{x_3^2}{r^3} \end{matrix}\right| + \frac{x_3 g_r}{r}\left| \begin{matrix}g_{tt} & r_2g_{tr} & r_3g_{tr} \\ 0 & \frac{1}{r} -\frac{x_2^2}{r^3} & - \frac{x_2x_3}{r^3}\\ g_{rt} & r_2g_{rr} & r_3g_{rr} \end{matrix}\right| + \frac{g_r^2}{r^2}\left| \begin{matrix}g_{tt} & r_2g_{tr} & r_3g_{tr} \\ 0 & 1 -\frac{x_2^2}{r^2} & - \frac{x_2x_3}{r^2} \\ 0 & -\frac{x_2x_3}{r^2} & 1 - \frac{x_3^2}{r^2}\end{matrix}\right|.$$ The third determinant is zero, since its lower-right $2\times 2$ block has determinant $1 - \frac{x_2^2+x_3^2}{r^2} = 0$.
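The vanishing of the third determinant is no accident: the block with entries $\delta_{ij} - x_ix_j/r^2$ is the matrix of the orthogonal projection away from $(x_2, x_3)$, hence singular. A quick NumPy check at an arbitrary point (the random point is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=2)                      # an arbitrary point (x2, x3)
P = np.eye(2) - np.outer(v, v) / (v @ v)    # block with entries delta_ij - x_i x_j / r^2
singular = abs(np.linalg.det(P)) < 1e-12    # projection onto v's complement has rank 1
```

Since $P$ has eigenvalues $0$ and $1$, its determinant is zero at every point, so the check passes regardless of the chosen $v$.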
Expanding the first determinant by the last row and the second determinant by the second row, we obtain $$\det D^2f = \frac{x_2g_r}{r}(g_{tt}g_{rr}-g_{tr}^2)\left(\frac{x_2x_3r_3}{r^3}+\frac{r_2}{r}-\frac{x_3^2r_2}{r^3}\right) + \frac{x_3g_r}{r}(g_{tt}g_{rr} - g_{tr}^2)\left(\frac{r_3}{r} - \frac{r_3x_2^2}{r^3} + \frac{x_2x_3r_2}{r^3} \right),$$ which by using $r_i = \frac{x_i}{r}$ and $r^2 = x_2^2 + x_3^2$, simplifies to $$(g_{tt}g_{rr}-g_{tr}^2)\frac{g_r}{r}\left(\frac{x_2^2}{r^2} + \frac{x_3^2}{r^2}\right) = (g_{tt}g_{rr}-g_{tr}^2)\frac{g_r}{r},$$ which is what we wanted to show. $\blacksquare$
I hope you will find this helpful!
EDIT: The general case is somewhat more computationally involved, but the ideas are the same. If we let $D_i$ be the determinant containing $C_{i, 1}$ in the $i^{\text{th}}$ row (again, for $i=n+1$, we interpret this as the determinant with all rows below the first being of the form $C_{j, 2}$), we claim that $\det D_i = \frac{g_r^{n-2}x_i^2}{r^n}(g_{tt}g_{rr} - g_{tr}^2)$ for $2\leq i\leq n$ and $\det D_{n+1} = 0$. This then gives $$\det D^2f = \sum_{i=2}^n \frac{g_r^{n-2}x_i^2}{r^n}(g_{tt}g_{rr} - g_{tr}^2) = (g_r)^{n-2}(g_{tt}g_{rr}-g_{tr}^2)\frac{\sum_{i=2}^n x_i^2}{r^n} = $$ $$ = \left(\frac{g_r}{r}\right)^{n-2}(g_{tt}g_{rr}-g_{tr}^2),$$ since $\sum_{i=2}^n x_i^2 = r^2$, which is what we want to show. To prove the claim, by symmetry it is enough to consider $D_2$. After standard computations, we obtain $$\det D_2 = \frac{g_r^{n-2}x_2^2}{r^n}\left|\begin{matrix}g_{tt}r & g_{tr} & x_3g_{tr} & \cdots & x_ng_{tr} \\ g_{tr} & \frac{1}{r}g_{rr} & \frac{x_3}{r}g_{rr} & \cdots & \frac{x_n}{r}g_{rr} \\ 0 & -\frac{x_3}{r^2} & 1 - \frac{x_3^2}{r^2} & \cdots & -\frac{x_3x_n}{r^2} \\ 0 & -\frac{x_4}{r^2} & -\frac{x_4x_3}{r^2} & \cdots & -\frac{x_4x_n}{r^2} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & -\frac{x_n}{r^2} & -\frac{x_nx_3}{r^2} & \cdots & 1 - \frac{x_n^2}{r^2} \end{matrix} \right|.$$ To compute this determinant, write row $j\geq 3$ as $$R_j = R_{j, 1} + R_{j, 2},$$ where $$R_{j, 1} = (0, 0, \ldots, \underbrace{1}_{j^{\text{th}} \text{position}}, 0, \ldots, 0), \qquad R_{j, 2} = \left(0, -\frac{x_j}{r^2}, -\frac{x_jx_3}{r^2}, \ldots, -\frac{x_j^2}{r^2}, \ldots, -\frac{x_jx_n}{r^2}\right).$$ Note that $R_{j, 2}$ and $R_{k, 2}$ for $j\neq k$ are linearly dependent (each is a multiple of $(0, 1, x_3, \ldots, x_n)$), hence when we expand the determinant as before, only the determinants containing at most one row of the form $R_{j, 2}$ are nonzero.
The remaining determinants are easy (but tedious) to compute using at most two steps of Gaussian elimination (each has at most two nonzero entries below the diagonal), which gives the final desired result.
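Finally, the general formula can be spot-checked for $n > 3$; here is a SymPy sketch for $n = 4$ at an arbitrary rational point (the test function $g$ and the evaluation point are assumptions made purely for illustration):

```python
import sympy as sp

n = 4
xs = sp.symbols(f'x1:{n+1}')
t, r = sp.symbols('t r', positive=True)

g = sp.sin(t)*r**2 + r**3                   # an arbitrary smooth test function (assumption)
r_expr = sp.sqrt(sum(xi**2 for xi in xs[1:]))
f = g.subs({t: xs[0], r: r_expr})

# left-hand side: determinant of the full 4x4 Hessian
lhs = sp.hessian(f, xs).det()

# right-hand side: (g_tt g_rr - g_tr^2) (g_r / r)^(n-2)
g_r = sp.diff(g, r)
g_tt, g_rr, g_tr = sp.diff(g, t, 2), sp.diff(g, r, 2), sp.diff(g, t, r)
rhs = ((g_tt*g_rr - g_tr**2)*(g_r/r)**(n - 2)).subs({t: xs[0], r: r_expr})

# evaluate the difference at an exact rational point, then numerically
point = dict(zip(xs, [sp.Rational(3, 10), sp.Rational(7, 10),
                      sp.Rational(-11, 10), sp.Rational(1, 2)]))
diff = (lhs - rhs).subs(point).evalf()
assert abs(diff) < 1e-9
```

Evaluating at a point (rather than calling `simplify` on the full $4\times 4$ difference) keeps the check fast; the same pattern works for any $n$ and any smooth $g$.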