Show that $\nabla \times (\vec c \psi)= \nabla \psi \times \vec c=-\vec c \times \nabla \psi$, for constant vector $\vec c$ and scalar field $\psi$


I want to do this from first principles, so I will be writing out the cofactors of the determinant each time.

Writing out the LHS explicitly as a determinant and using the first row to expand out the matrix
$$ \nabla \times (\vec c \psi) = \begin{vmatrix} \hat i & \hat j & \hat k \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ c_x\psi & c_y\psi & c_z\psi \\ \end{vmatrix}\tag{1} $$

$$=\hat i \left[\frac{\partial}{\partial y}c_z\psi-\frac{\partial}{\partial z}c_y\psi\right]-\hat j\left[\frac{\partial}{\partial x}c_z \psi-\frac{\partial}{\partial z}c_x \psi\right]+\hat k\left[\frac{\partial}{\partial x}c_y \psi-\frac{\partial}{\partial y}c_x \psi\right] $$ $$ = \frac{\partial}{\partial x}\left[c_y\psi\hat k-c_z\psi\hat j\right]+\frac{\partial}{\partial y}\left[c_z\psi\hat i-c_x\psi\hat k\right]+\frac{\partial}{\partial z}\left[c_x\psi\hat j-c_y\psi\hat i\right]$$ $$=\color{blue}{\frac{\partial\psi}{\partial x}\left[c_y\hat k-c_z\hat j\right]+\frac{\partial\psi}{\partial y}\left[c_z\hat i-c_x\hat k\right]+\frac{\partial\psi}{\partial z}\left[c_x\hat j-c_y\hat i\right]} $$ The expression in blue was obtained by expanding the determinant along the first row, but I could equally have used any row or column and should still get the same result. So, as a quick check, expanding along the second row of the matrix in $(1)$, I find that

$$ \begin{vmatrix} \hat i & \hat j & \hat k \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ c_x\psi & c_y\psi & c_z\psi \\ \end{vmatrix} $$ $$=\frac{\partial\psi}{\partial x}\left[c_z \hat j-c_y\hat k\right] -\frac{\partial\psi}{\partial y}\left[c_z \hat i-c_x \hat k\right]+\frac{\partial\psi}{\partial z}\left[c_y \hat i - c_x \hat j\right] $$ $$=-\left(\color{blue}{\frac{\partial\psi}{\partial x}\left[c_y\hat k-c_z\hat j\right]+\frac{\partial\psi}{\partial y}\left[c_z\hat i-c_x\hat k\right]+\frac{\partial\psi}{\partial z}\left[c_x\hat j-c_y\hat i\right]}\right) $$ So I have expanded the determinant along the 1st row and then again along the 2nd row, and the results are not equal; instead one is exactly the negative of the other.

At this point I am completely stuck and unable to prove the statement given in the title. It was my understanding that we may expand a determinant along any row or column and get the same result every time.

Could someone please explain what I'm doing wrong?


Edit:

Thanks to the answer given by @md2perpe I now understand the origin of the minus sign.

With everything now verified, part of the statement in the title has been proved:

$$\frac{\partial\psi}{\partial x}\left[c_y\hat k-c_z\hat j\right]+\frac{\partial\psi}{\partial y}\left[c_z\hat i-c_x\hat k\right]+\frac{\partial\psi}{\partial z}\left[c_x\hat j-c_y\hat i\right]$$ $$=\begin{vmatrix} \hat i & \hat j & \hat k \\ \frac{\partial\psi}{\partial x} & \frac{\partial\psi}{\partial y} & \frac{\partial\psi}{\partial z} \\ c_x & c_y & c_z \\ \end{vmatrix}=\nabla \psi \times \vec c\tag{2}$$
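As a numerical sanity check of $(2)$ (done with a quick Python script; the particular $\psi$, $\vec c$, and test point below are arbitrary choices, and the derivatives are approximated by centered finite differences):

```python
import math

c = (1.5, -2.0, 0.5)                              # the constant vector c
psi = lambda x, y, z: math.exp(0.3 * x) * math.sin(y) + x * z**2

h = 1e-5                                          # finite-difference step

def partial(f, i, p):
    """Centered difference of f in coordinate i at point p."""
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (f(*q1) - f(*q2)) / (2 * h)

def curl(F, p):
    """Curl of a vector field F (point -> 3-tuple) at p, by finite differences."""
    comp = lambda i: (lambda x, y, z: F(x, y, z)[i])
    return (partial(comp(2), 1, p) - partial(comp(1), 2, p),
            partial(comp(0), 2, p) - partial(comp(2), 0, p),
            partial(comp(1), 0, p) - partial(comp(0), 1, p))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

F = lambda x, y, z: tuple(ci * psi(x, y, z) for ci in c)  # the field c*psi
p = (0.7, -0.4, 1.2)                                      # arbitrary test point

grad_psi = tuple(partial(psi, i, p) for i in range(3))
lhs = curl(F, p)                     # curl(c psi)
rhs = cross(grad_psi, c)             # grad(psi) x c
print(max(abs(a - b) for a, b in zip(lhs, rhs)) < 1e-6)  # True
```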

But it remains to show that the same expression $(2)$ is also equivalent to

$$\frac{\partial\psi}{\partial x}\left[c_y\hat k-c_z\hat j\right]+\frac{\partial\psi}{\partial y}\left[c_z\hat i-c_x\hat k\right]+\frac{\partial\psi}{\partial z}\left[c_x\hat j-c_y\hat i\right]$$ $$=-\vec c \times \nabla \psi$$


I know that if I swap the second and third rows of $(2)$ it will introduce a minus sign and the statement will then be proven, but this relies on a property of the determinant, and I want to prove the statement in the title more rigorously by considering the cofactors of the determinant.

By rewriting the cofactors I find that

$$\frac{\partial\psi}{\partial x}\left[c_y\hat k-c_z\hat j\right]+\frac{\partial\psi}{\partial y}\left[c_z\hat i-c_x\hat k\right]+\frac{\partial\psi}{\partial z}\left[c_x\hat j-c_y\hat i\right]$$ $$=c_x\left[\frac{\partial\psi}{\partial z}\hat j-\frac{\partial\psi}{\partial y}\hat k\right]+c_y\left[\frac{\partial\psi}{\partial x}\hat k-\frac{\partial\psi}{\partial z}\hat i\right]+ c_z\left[\frac{\partial\psi}{\partial y}\hat i-\frac{\partial\psi}{\partial x}\hat j\right]\tag{3}$$ But how do I proceed to show that $(3)$ is equal to $$-\begin{vmatrix} \hat i & \hat j & \hat k \\ c_x & c_y & c_z \\ \frac{\partial\psi}{\partial x} & \frac{\partial\psi}{\partial y} & \frac{\partial\psi}{\partial z}\\ \end{vmatrix}={-\vec c \times \nabla \psi}?$$
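(As a sanity check, not a proof: the claimed equality is componentwise algebra in the six numbers $c_x, c_y, c_z, \psi_x, \psi_y, \psi_z$, so it can be tested in Python with arbitrary stand-in values.)

```python
# Componentwise check that the regrouped expression (3) equals -c x grad(psi).
# The numeric values are arbitrary stand-ins for the constant vector and the
# partial derivatives of psi at some point.
cx, cy, cz = 1.0, -3.0, 2.0          # components of the constant vector c
gx, gy, gz = 0.7, 0.25, -1.4         # stand-ins for psi_x, psi_y, psi_z

# expression (3): c_x[g_z j - g_y k] + c_y[g_x k - g_z i] + c_z[g_y i - g_x j]
expr3 = (cz * gy - cy * gz,          # i-component
         cx * gz - cz * gx,          # j-component
         cy * gx - cx * gy)          # k-component

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

rhs = tuple(-t for t in cross((cx, cy, cz), (gx, gy, gz)))   # -c x grad(psi)
print(all(abs(a - b) < 1e-12 for a, b in zip(expr3, rhs)))   # True
```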


Accepted answer:

Recall the determinant formula for an $n\times n$ matrix $A$, $$\det A = \sum_{\sigma\in S_n} \operatorname{sgn} \sigma \prod_{i=1}^n a_{i,\sigma(i)}$$
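This permutation-sum formula can be checked directly in code against the usual first-row cofactor expansion of a $3\times 3$ determinant (the matrix below is arbitrary):

```python
import math
from itertools import permutations

def sgn(sigma):
    """Sign of a permutation given as a tuple of 0-based indices."""
    s = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:
                s = -s
    return s

def det_leibniz(A):
    """det A = sum over permutations sigma of sgn(sigma) * prod_i A[i][sigma[i]]."""
    n = len(A)
    return sum(sgn(sigma) * math.prod(A[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

def det3_cofactor(A):
    """First-row cofactor expansion of a 3x3 determinant."""
    return (A[0][0] * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1] * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2] * (A[1][0]*A[2][1] - A[1][1]*A[2][0]))

A = [[2, -1, 3], [0, 4, 1], [5, 2, -2]]
print(det_leibniz(A) == det3_cofactor(A))  # True
```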

Write $$ \hat e \in \{ \hat \imath, \hat\jmath,\hat k\}, \qquad \partial_\star \in \{ \partial_x , \partial_y, \partial_z \}, \qquad c_*\in\{c_x,c_y,c_z\} $$ for generic entries of the three rows.

The determinant is then a signed sum of terms, each a product of one entry from each row, i.e. of the form $$ \hat e\, \partial_\star (c_* \psi), $$ and since $\vec c$ is constant this is plainly $$ \hat e\, \partial_\star (c_* \psi) = \hat e\, c_* (\partial_\star \psi) = \hat e\, (\partial_\star\psi)\, c_* $$ for any choices of $\hat e$, $\partial_\star$, and $c_*$.


You also want to know why $$ a \times b = -b\times a$$ for any vectors $a$ and $b$; this is because switching two rows of a matrix multiplies its determinant by $-1$.

Concretely, if $$ A = \begin{bmatrix} \hat \imath & \hat \jmath & \hat k \\ \frac{\partial\psi}{\partial x} & \frac{\partial\psi}{\partial y} & \frac{\partial\psi}{\partial z} \\ c_x & c_y & c_z \\ \end{bmatrix}, B = \begin{bmatrix} \hat \imath & \hat \jmath & \hat k \\ c_x & c_y & c_z \\ \frac{\partial\psi}{\partial x} & \frac{\partial\psi}{\partial y} & \frac{\partial\psi}{\partial z} \\ \end{bmatrix}, E = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$$ then $B = EA$, so that $$ \vec c \times \nabla \psi = \det B = \det(EA) = \det E \det A = - \det A = - \nabla \psi \times \vec c,$$ since the determinant of a product is the product of determinants, e.g. see Prove $|A.B|=|A|.|B|$ using matrix algebra.

But you don't want this proof, so you can proceed via direct computation. I'll choose expansion along the first row. $$ \begin{vmatrix} \hat \imath & \hat \jmath & \hat k \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \\ \end{vmatrix} \\= \hat \imath (b_2c_3 - b_3c_2) - \hat \jmath(b_1c_3 - b_3c_1) + \hat k (b_1c_2 - b_2c_1) \\= -\hat \imath (c_2b_3 - c_3b_2) + \hat \jmath(c_1b_3 - c_3b_1) - \hat k (c_1b_2 - c_2b_1) \\= -\Big(\hat \imath (c_2b_3 - c_3b_2) - \hat \jmath(c_1b_3 - c_3b_1) + \hat k (c_1b_2 - c_2b_1) \Big) \\ = -\begin{vmatrix} \hat \imath & \hat \jmath & \hat k \\ c_1 & c_2 & c_3 \\ b_1 & b_2 & b_3 \\ \end{vmatrix}$$
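Both facts used above, that swapping two rows flips the sign of the determinant and that the cross product is antisymmetric, can be confirmed numerically (the entries and vectors below are arbitrary):

```python
def det3(A):
    """First-row cofactor expansion of a 3x3 determinant."""
    return (A[0][0] * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1] * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2] * (A[1][0]*A[2][1] - A[1][1]*A[2][0]))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
B = [A[0], A[2], A[1]]                 # swap the last two rows
print(det3(B) == -det3(A))             # True

def cross(b, c):
    return (b[1]*c[2] - b[2]*c[1],
            b[2]*c[0] - b[0]*c[2],
            b[0]*c[1] - b[1]*c[0])

b, c = (1.0, 2.0, 3.0), (-2.0, 0.5, 4.0)
print(cross(b, c) == tuple(-t for t in cross(c, b)))  # True
```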

Another answer:

When you expand along the second row you must include the cofactor signs. Have you seen the following sign pattern? $$\begin{vmatrix} + & - & + \\ - & + & - \\ + & - & + \\ \end{vmatrix}$$

This means that when expanding over the second row you get $$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{vmatrix} = -a_{21} \begin{vmatrix}a_{12} & a_{13} \\ a_{32} & a_{33}\end{vmatrix} +a_{22} \begin{vmatrix}a_{11} & a_{13} \\ a_{31} & a_{33}\end{vmatrix} -a_{23} \begin{vmatrix}a_{11} & a_{12} \\ a_{31} & a_{32}\end{vmatrix} $$
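This checkerboard sign rule is easy to verify in code: with the signs $(-1)^{i+j}$ included, expansion along the second row agrees with expansion along the first row (the matrix below is arbitrary):

```python
def minor(A, i, j):
    """2x2 minor of a 3x3 matrix with row i and column j deleted."""
    return [[A[r][c] for c in range(3) if c != j]
            for r in range(3) if r != i]

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def expand(A, row):
    """Cofactor expansion along the given row, with checkerboard signs (-1)^(i+j)."""
    return sum((-1) ** (row + j) * A[row][j] * det2(minor(A, row, j))
               for j in range(3))

A = [[3, 1, -2], [0, 4, 5], [2, -1, 6]]
print(expand(A, 0) == expand(A, 1))  # True
```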