Differentiating with respect to unit vectors on a hypersphere


I have this 'potential energy' function $V$ defined on $(\mathbb{S}^{d-1})^n \subset \mathbb{R}^{nd}$.

$$V(\{ \hat{\underline{s}}_i \}) = -\frac{K}{2} \sum_{\substack{i,j=1 \\ i \neq j}}^n J_{ij}\, \hat{\underline{s}}_i \cdot \hat{\underline{s}}_j - K_s \sum_{i=1}^n \big(\hat{\underline{s}}_i \cdot \hat{\underline{z}} \big)^2$$

where

$$ \hat{\underline{s}}_i \in \mathbb{S}^{d-1}.$$

It determines the dynamics of $n$ unit spins of dimension $d$ via the following partial derivative:

$$\dot{\hat{\underline{s}}}_i := -\frac{\partial V}{\partial \hat{\underline{s}}_i}$$

In the $d=2$ case, the phase parameterisation of these spins makes taking the partial derivative trivial, but for $d>2$ the parameterisation gets messy, so I assumed I'd have an easier time with a geometric description. I'm now struggling to incorporate the $|\underline{s}_i| = 1$ constraints cleanly into the partial derivatives.

Am I missing something obvious?

Best answer

What you want, I think, is the covariant derivative on the sphere, but because $V$ is a scalar this is the same as the tangential derivative, which makes this easy.

If $P_x(v)$ is the projection of a vector $v$ onto the tangent space of the sphere at some point $x$, then the tangential derivative is simply $\partial_x = P_x(\nabla_x)$. Here, $x$ simply denotes the vector variable we're differentiating with respect to, but crucially $\nabla_x$ is not differentiating any $x$-dependence of $P$. In terms of the normal vector $n(x)$ this looks like $$ \partial_x = \nabla_x - n(x)(n(x)\cdot\nabla_x). $$

The full derivative $\nabla$ has some basic properties, one of which is $$ \nabla_x(x\cdot v) = v $$ for any $x$-independent vector $v$. But $\partial$ also enjoys this property with a twist: $$ \partial_x(x\cdot v) = P_x(\nabla_x)(x\cdot v) = P_x(\nabla_x(x\cdot v)) = P_x(v). $$

Now we can proceed using the product rule and assuming the $\hat s_i$ are mutually independent: $$\begin{aligned} \dot{\hat s}_k &= -\partial_{\hat s_k}V = \frac K2\sum_{i\ne k}(J_{ik}+J_{ki})P_{\hat s_k}(\hat s_i) + 2K_s(\hat s_k\cdot\hat z)P_{\hat s_k}(\hat z) \\ &= \frac K2\sum_{i\ne k}(J_{ik}+J_{ki})[\hat s_i - (\hat s_i\cdot\hat s_k)\hat s_k] + 2K_s(\hat s_k\cdot\hat z)[\hat z - (\hat z\cdot\hat s_k)\hat s_k]. \end{aligned}$$


Suppose now we want to compute the "Jacobian" $\partial\dot{\hat s}_i/\partial\hat s_j$. The standard Jacobian $J_F$ of a vector function $F$ in flat space is the matrix representing the total differential $\mathrm dF(x; v) = J_Fv$. However, the total differential evaluated on $v$ is (essentially by definition) the $v$-directional derivative of $F$; in terms of $\nabla$ this directional derivative operator is $v\cdot\nabla$, so we have the very convenient form $$ \mathrm dF(x; v) = (v\cdot\nabla)F(x). $$ Once we compute this, we can recover the matrix $J_F$ by evaluating on the standard basis $e_i$: $$ (J_F)_{ij} = e_i\cdot[\mathrm dF(x; e_j)]. $$

So the quantities we want to compute are $(v\cdot\partial_{\hat s_i})\dot{\hat s}_j$ for arbitrary $v$. The key identity here is $$ (v\cdot\nabla_x)L(x) = L(v),\quad (v\cdot\partial_x)L(x) = L(P_x(v)) $$ where $L$ can be any linear function (this follows from the fact that a linear function is its own differential). For brevity write $v' = P_x(v)$. By also using the product rule, we now see $$ (v\cdot\partial_{\hat s_k})\dot{\hat s}_k = -\frac K2\sum_{i\ne k}(J_{ik}+J_{ki})[(\hat s_i\cdot v')\hat s_k + (\hat s_i\cdot\hat s_k)v'] + 2K_s(v'\cdot\hat z)(\hat z - (\hat z\cdot\hat s_k)\hat s_k) - 2K_s(\hat s_k\cdot\hat z)(\hat z\cdot v')\hat s_k - 2K_s(\hat s_k\cdot\hat z)^2v' $$ and when $j \ne k$ $$ (v\cdot\partial_{\hat s_j})\dot{\hat s}_k = \frac K2(J_{jk} + J_{kj})[v' - (v'\cdot\hat s_k)\hat s_k]. $$

To get the matrix components, what we can do is write dot products with $v$ in the matrix form $w^Tv$ and extract the overall form $Jv$ with $J$ a matrix. First note that $$ v' = v - (v\cdot\hat s_j)\hat s_j = (1 - \hat s_j\hat s_j^T)v. $$ So when $j = k$ $$ -\frac K2\sum_{i\ne k}(J_{ik} + J_{ki})[\hat s_k\hat s_i^T + \hat s_i\cdot\hat s_k]v' + 2K_s\Bigl[(1 - \hat s_k\hat s_k^T)\hat z\hat z^T - \hat s_k\hat s_k^T\hat z\hat z^T - (\hat s_k\cdot\hat z)^2\Bigr]v' $$ and when $j \ne k$ $$ \frac K2(J_{jk} + J_{kj})[1 - \hat s_k\hat s_k^T]v'. $$

Finally, we get the matrix expressions $$ \frac{\partial\dot{\hat s}_k}{\partial\hat s_k} = M[1 - \hat s_k\hat s_k^T],\quad \frac{\partial\dot{\hat s}_k}{\partial\hat s_j} = \frac K2(J_{jk} + J_{kj})[1-\hat s_k\hat s_k^T][1-\hat s_j\hat s_j^T], $$$$ M = -\frac K2(\hat s_kS_k^T + \hat s_k\cdot S_k) + 2K_s\Bigl[(1-2\hat s_k\hat s_k^T)\hat z\hat z^T - (\hat s_k\cdot\hat z)^2\Bigr], $$$$ S_k = \sum_{i\ne k}(J_{ik} + J_{ki})\hat s_i. $$ The terms $\hat s_k\cdot S_k$ and $(\hat s_k\cdot\hat z)^2$ should be interpreted as multiplying an identity matrix.

Edit: It occurred to me that maybe we want the covariant directional derivatives here rather than the tangential ones; we were in fact using the tangential derivative as a special case of the covariant derivative. This is easy though; following Doran and Lasenby (as I mention in the comments), the covariant directional derivative is just the projection of the tangential one. This means all we need to do is multiply the above Jacobians on the left by $[1 - \hat s_j\hat s_j^T]$.

Therefore the final results would be:

$$ \frac{\partial\dot{\hat s}_k}{\partial\hat s_k} = [1 - \hat s_k\hat s_k^T]M[1 - \hat s_k\hat s_k^T] $$

$$\frac{\partial\dot{\hat s}_k}{\partial\hat s_j} = [1 - \hat s_j\hat s_j^T]\frac K2(J_{jk} + J_{kj})[1-\hat s_k\hat s_k^T][1-\hat s_j\hat s_j^T]$$