Does this vector / partial derivative / Gaussian equation have any solutions?


I am trying to find a vector $\mathbf{m}$ such that $$(\nabla \times \mathbf{m}v(\mathbf{r}))= \mathbf{k} v(\mathbf{r})$$ $$v(\mathbf r)=e^{i\mathbf{q \cdot r}-\frac{(\mathbf{r-a})^2}{\mathbf{b}}}$$ where $\mathbf{k,q} \in \mathbb{C}^3$ and $\mathbf{r,a,b} \in \mathbb{R}^3$; the division by $\mathbf{b}$ in the exponent is componentwise. $\mathbf{k}$ can be any constant vector.

Expanding that out, I want to find $m_1, m_2, m_3$ such that $$\left(\begin{array}{c} \frac{\partial}{\partial y}(m_{3}v(r))-\frac{\partial}{\partial z}(m_{2}v(r))\\ \frac{\partial}{\partial z}(m_{1}v(r))-\frac{\partial}{\partial x}(m_{3}v(r))\\ \frac{\partial}{\partial x}(m_{2}v(r))-\frac{\partial}{\partial y}(m_{1}v(r)) \end{array}\right)=\left(\begin{array}{c} k_{1}\\ k_{2}\\ k_{3} \end{array}\right)v(r)$$ Edit: Progress after reformulating the problem correctly (I previously thought I needed a more complicated matrix equation)...
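As a sanity check on this expansion (a symbolic sketch using sympy, with $v$ left as a generic scalar field and $m_1,m_2,m_3$ constant), the left-hand side for constant $\mathbf m$ reduces to $(\nabla v)\times\mathbf m$ component by component:

```python
# Symbolic check: for constant m, the expanded curl components of m*v
# agree with (grad v) x m, component by component.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
m1, m2, m3 = sp.symbols('m1 m2 m3')       # constant components of m
v = sp.Function('v')(x, y, z)             # generic scalar field

mv = [m1 * v, m2 * v, m3 * v]
curl = [sp.diff(mv[2], y) - sp.diff(mv[1], z),
        sp.diff(mv[0], z) - sp.diff(mv[2], x),
        sp.diff(mv[1], x) - sp.diff(mv[0], y)]

grad = [sp.diff(v, s) for s in (x, y, z)]
cross = [grad[1] * m3 - grad[2] * m2,     # (grad v) x m
         grad[2] * m1 - grad[0] * m3,
         grad[0] * m2 - grad[1] * m1]

assert all(sp.simplify(ci - di) == 0 for ci, di in zip(curl, cross))
```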

Another edit: Based on MasterYoda's answer below, I tried the following (using MasterYoda's notation):


Firstly, the factor of $k$ in that answer was acceptable to me, but not desirable. I don't want it, so suppose I use

$$\boldsymbol m=\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times^+ v(\boldsymbol r)$$

hoping that

$$\boldsymbol\nabla\times\boldsymbol mv(\boldsymbol r)=\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times^+v(\boldsymbol r)=v(\boldsymbol r)$$

then from MasterYoda's reply:

$$\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times^+=-\frac{1}{|\partial_1v|+|\partial_2v|+|\partial_3v|}\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times^\dagger$$

Since $\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times$ is skew-symmetric, transposing it is the same as multiplying by minus one, so $-\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times^\dagger=\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times^*$ and the product $\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times^+$ becomes

$$\frac{1}{|\partial_1v|+|\partial_2v|+|\partial_3v|}\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times^*$$

Edit: I first thought this would result in the identity matrix, but no; that is not a property of the pseudoinverse. From Wikipedia:

($AA^+$ need not be the identity matrix, but it maps all column vectors of $A$ to themselves.)
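That property is easy to confirm numerically for a matrix of this skew shape (a small numpy sketch; the complex sample values are arbitrary):

```python
# Check the Wikipedia property A A+ A = A for a skew cross-product
# matrix built from arbitrary complex entries, and that A A+ != I.
import numpy as np

w = np.array([1.0 + 0.5j, -2.0 + 1.0j, 0.3 - 0.7j])   # arbitrary samples
A = np.array([[0, -w[2], w[1]],
              [w[2], 0, -w[0]],
              [-w[1], w[0], 0]])
Ap = np.linalg.pinv(A)

assert np.allclose(A @ Ap @ A, A)          # maps columns of A to themselves
assert not np.allclose(A @ Ap, np.eye(3))  # but is not the identity
```

The second assertion holds because a $3\times3$ skew matrix is singular, so $AA^+$ is a rank-2 projector rather than the identity.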

Labelling the elements of $\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times$ as $a_{ij}$, where $i$ is the row index and $j$ the column index, the product of the two matrices should be

$$\left( \begin{array}{ccc} a_{12} a_{21}^*+a_{13} a_{31}^* & a_{13} a_{32}^* & a_{12} a_{23}^* \\ a_{23} a_{31}^* & a_{21} a_{12}^*+a_{23} a_{32}^* & a_{21} a_{13}^* \\ a_{32} a_{21}^* & a_{31} a_{12}^* & a_{31} a_{13}^*+a_{32} a_{23}^* \\ \end{array} \right)$$

i.e. $$\begin{bmatrix}-(\partial_{3}v(\partial_{3}v)^{*}+\partial_{2}v(\partial_{2}v)^{*}) & \partial_{2}v(\partial_{1}v)^{*} & \partial_{3}v(\partial_{1}v)^{*}\\ \partial_{1}v(\partial_{2}v)^{*} & -(\partial_{3}v(\partial_{3}v)^{*}+\partial_{1}v(\partial_{1}v)^{*}) & \partial_{3}v(\partial_{2}v)^{*}\\ \partial_{1}v(\partial_{3}v)^{*} & \partial_{2}v(\partial_{3}v)^{*} & -(\partial_{2}v(\partial_{2}v)^{*}+\partial_{1}v(\partial_{1}v)^{*}) \end{bmatrix}$$

So all of those off-diagonal terms need to cancel; do they? Looking at my function $$v(\mathbf r)=e^{i\mathbf{q \cdot r}-\frac{(\mathbf{r-a})^2}{\mathbf{b}}}$$ you can see that each partial derivative equals the original function multiplied by a complex factor:

$$\frac{\partial}{\partial r_i}v(r)=\left(iq_i - \frac{2(r_i-a_i)}{b_i}\right)e^{i\mathbf{q \cdot r}-\frac{(\mathbf{r-a})^2}{\mathbf{b}}}=c_iv(r)$$ You can also see the effect of conjugation (taking $\mathbf q$ real here; for complex $\mathbf q$ each $q_i$ below would become $q_i^*$): $$\left(\frac{\partial}{\partial r_i}v(r)\right)^*=\left(-iq_i - \frac{2(r_i-a_i)}{b_i}\right)e^{-i\mathbf{q \cdot r}-\frac{(\mathbf{r-a})^2}{\mathbf{b}}}$$ so any product of a partial derivative and a conjugated partial derivative is of the form $$\left(\frac{\partial}{\partial r_i}v(r)\right)\left(\frac{\partial}{\partial r_j}v(r)\right)^*=\left(iq_i - \frac{2(r_i-a_i)}{b_i}\right)\left(-iq_j - \frac{2(r_j-a_j)}{b_j}\right)e^{-2\frac{(\mathbf{r-a})^2}{\mathbf{b}}}$$ So define $f$ such that $$f=e^{-2\frac{(\mathbf{r-a})^2}{\mathbf{b}}}$$
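These derivative and conjugation rules can be spot-checked symbolically (a sympy sketch; $\mathbf q$ and $\mathbf a$ are taken real, $\mathbf b$ positive, and the division by $\mathbf b$ is componentwise, as above):

```python
# Symbolic spot-check of d/dr_i v = c_i v and the conjugation rule
# (d/dr_i v)* = c_i* v*, with q, a real and b positive.
import sympy as sp

x = sp.symbols('x1 x2 x3', real=True)
q = sp.symbols('q1 q2 q3', real=True)     # q taken real, as in the text
a = sp.symbols('a1 a2 a3', real=True)
b = sp.symbols('b1 b2 b3', positive=True)

v = sp.exp(sp.I * sum(qi * xi for qi, xi in zip(q, x))
           - sum((xi - ai)**2 / bi for xi, ai, bi in zip(x, a, b)))

for i in range(3):
    ci = sp.I * q[i] - 2 * (x[i] - a[i]) / b[i]
    # derivative rule: d/dx_i v = c_i v
    assert sp.simplify(sp.diff(v, x[i]) - ci * v) == 0
    # conjugation rule: (d/dx_i v)* = c_i* v*
    assert sp.simplify(sp.conjugate(sp.diff(v, x[i]))
                       - sp.conjugate(ci) * sp.conjugate(v)) == 0
```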

Each product is then of the form (perhaps the letter $c$ was a poor choice, as these are not constants)

$$c_ic_j^*f$$ and the matrix is therefore $$\begin{bmatrix}-(c_{3}c_{3}^{*}+c_{2}c_{2}^{*})f & c_{2}c_{1}^{*}f & c_{3}c_{1}^{*}f\\ c_{1}c_{2}^{*}f & -(c_{3}c_{3}^{*}+c_{1}c_{1}^{*})f & c_{3}c_{2}^{*}f\\ c_{1}c_{3}^{*}f & c_{2}c_{3}^{*}f & -(c_{2}c_{2}^{*}+c_{1}c_{1}^{*})f \end{bmatrix}$$ or $$\begin{bmatrix}-(c_{3}c_{3}^{*}+c_{2}c_{2}^{*}) & c_{2}c_{1}^{*} & c_{3}c_{1}^{*}\\ c_{1}c_{2}^{*} & -(c_{3}c_{3}^{*}+c_{1}c_{1}^{*}) & c_{3}c_{2}^{*}\\ c_{1}c_{3}^{*} & c_{2}c_{3}^{*} & -(c_{2}c_{2}^{*}+c_{1}c_{1}^{*}) \end{bmatrix}f$$

So what about that factor at the front, $$\frac{1}{|\partial_1v|+|\partial_2v|+|\partial_3v|}?$$ Using the convention $|\partial_iv|=\partial_iv\,(\partial_iv)^*$ from the answer below, it is clearly

$$\frac{1}{c_{1}c_{1}^{*}f+c_{2}c_{2}^{*}f+c_{3}c_{3}^{*}f} = \frac{1}{(c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*})f}$$

$$= \frac{e^{+2\frac{(\mathbf{r-a})^2}{\mathbf{b}}}}{(c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*})}$$

which cancels out that $f$, leaving

$$\begin{bmatrix}\frac{-(c_{3}c_{3}^{*}+c_{2}c_{2}^{*})}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} & \frac{c_{2}c_{1}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} & \frac{c_{3}c_{1}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}}\\ \frac{c_{1}c_{2}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} & \frac{-(c_{3}c_{3}^{*}+c_{1}c_{1}^{*})}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} & \frac{c_{3}c_{2}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}}\\ \frac{c_{1}c_{3}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} & \frac{c_{2}c_{3}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} & \frac{-(c_{2}c_{2}^{*}+c_{1}c_{1}^{*})}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} \end{bmatrix}$$
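A numeric spot-check of this matrix (a numpy sketch with arbitrary sample values for the $c_i$ and $v$): with $w_i=c_iv$ and $A=[\boldsymbol\nabla v]_\times$, the normalized product $\frac{1}{N}AA^*$ from above should reproduce exactly these entries.

```python
# Numeric spot-check: with w_i = c_i * v, the product (1/N) A conj(A),
# where A = [w]_x and N = sum of w_i w_i^*, reproduces the c_i c_j^* matrix.
import numpy as np

c = np.array([1.0 + 2.0j, -0.5 + 0.3j, 2.0 - 1.0j])   # arbitrary sample c_i
v = 0.7 * np.exp(0.4j)                                 # arbitrary sample v(r)
w = c * v                                              # w_i = partial_i v

A = np.array([[0, -w[2], w[1]],
              [w[2], 0, -w[0]],
              [-w[1], w[0], 0]])
N = np.sum(np.abs(w)**2)

product = A @ np.conj(A) / N

S = np.sum(np.abs(c)**2)
expected = np.array([
    [-(abs(c[2])**2 + abs(c[1])**2), c[1]*np.conj(c[0]), c[2]*np.conj(c[0])],
    [c[0]*np.conj(c[1]), -(abs(c[2])**2 + abs(c[0])**2), c[2]*np.conj(c[1])],
    [c[0]*np.conj(c[2]), c[1]*np.conj(c[2]), -(abs(c[1])**2 + abs(c[0])**2)],
]) / S

assert np.allclose(product, expected)
```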

... now what? I'm not sure how to proceed from here. Edit again: so it seems that $k$ was needed after all. No worries; let's bring it back and multiply this matrix by $k$:

$$\begin{bmatrix}\frac{-(c_{3}c_{3}^{*}+c_{2}c_{2}^{*})}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} & \frac{c_{2}c_{1}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} & \frac{c_{3}c_{1}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}}\\ \frac{c_{1}c_{2}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} & \frac{-(c_{3}c_{3}^{*}+c_{1}c_{1}^{*})}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} & \frac{c_{3}c_{2}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}}\\ \frac{c_{1}c_{3}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} & \frac{c_{2}c_{3}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} & \frac{-(c_{2}c_{2}^{*}+c_{1}c_{1}^{*})}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} \end{bmatrix}\left(\begin{array}{c} k_{1}\\ k_{2}\\ k_{3} \end{array}\right)$$ $$=\frac{1}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}}\begin{bmatrix}-k_{1}(c_{3}c_{3}^{*}+c_{2}c_{2}^{*})+k_{2}(c_{2}c_{1}^{*})+k_{3}(c_{3}c_{1}^{*})\\ k_{1}(c_{1}c_{2}^{*})-k_{2}(c_{3}c_{3}^{*}+c_{1}c_{1}^{*})+k_{3}(c_{3}c_{2}^{*})\\ k_{1}(c_{1}c_{3}^{*})+k_{2}(c_{2}c_{3}^{*})-k_{3}(c_{2}c_{2}^{*}+c_{1}c_{1}^{*}) \end{bmatrix}$$ I'm just going to decide I want $k=(1,1,1)$, so $$\begin{bmatrix}\frac{-c_{3}c_{3}^{*}-c_{2}c_{2}^{*}+c_{2}c_{1}^{*}+c_{3}c_{1}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}}\\ \frac{c_{1}c_{2}^{*}-c_{3}c_{3}^{*}-c_{1}c_{1}^{*}+c_{3}c_{2}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}}\\ \frac{c_{1}c_{3}^{*}+c_{2}c_{3}^{*}-c_{2}c_{2}^{*}-c_{1}c_{1}^{*}}{c_{1}c_{1}^{*}+c_{2}c_{2}^{*}+c_{3}c_{3}^{*}} \end{bmatrix}$$ ... and now I'm stuck again.


There are 2 answers below.

Answer 1 (from MasterYoda):

The new system is much more manageable. This is what I have come up with.

Assuming $\boldsymbol m$ is a vector of constants, we can rewrite the left-hand side $$\boldsymbol\nabla\times\boldsymbol mv(\boldsymbol r)=\begin{bmatrix}(m_3\partial_2-m_2\partial_3)v\\(m_1\partial_3-m_3\partial_1)v\\(m_2\partial_1-m_1\partial_2)v\end{bmatrix}=\begin{bmatrix}0&-\partial_3v&\partial_2v\\\partial_3v&0&-\partial_1v\\-\partial_2v&\partial_1v&0\end{bmatrix}\begin{bmatrix}m_1\\m_2\\m_3\end{bmatrix}=\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times\boldsymbol m$$ where $\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times$ denotes the skew-symmetric cross-product matrix of $\boldsymbol\nabla v$. This reduces the original problem to a linear one that can be solved by any method desired. The matrix $\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times$ is not invertible (like every $3\times3$ skew-symmetric matrix it is singular, and it annihilates $\boldsymbol\nabla v$ itself), but we can use the Moore–Penrose pseudoinverse. $$\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times\boldsymbol m=\boldsymbol kv(\boldsymbol r) \implies\boldsymbol m=\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times^+\boldsymbol kv(\boldsymbol r)$$ where $\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times^+$ is the pseudoinverse. Here we have successfully solved for $\boldsymbol m$.
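The key identity here, $\left[\boldsymbol\nabla v\right]_\times\boldsymbol m=(\boldsymbol\nabla v)\times\boldsymbol m$, is easy to verify numerically (a numpy sketch; the complex sample vector stands in for $\boldsymbol\nabla v$):

```python
# Check that the cross-product matrix [w]_x reproduces w x m,
# which is what curl(m v) = (grad v) x m (constant m) requires.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)  # stands in for grad v
m = rng.standard_normal(3)                                # constant m

W = np.array([[0, -w[2], w[1]],
              [w[2], 0, -w[0]],
              [-w[1], w[0], 0]])

assert np.allclose(W @ m, np.cross(w, m))
assert np.allclose(W @ w, 0)   # [w]_x annihilates w itself: w x w = 0
```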

The pseudoinverse of this particular matrix is simple to find: $$\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times^+=-\frac{1}{|\partial_1v|+|\partial_2v|+|\partial_3v|}\left[\boldsymbol\nabla v(\boldsymbol r)\right]_\times^\dagger$$ where $^\dagger$ is the conjugate transpose and $|\partial_xv|=\partial_xv\,\partial_xv^*$ (the squared modulus), with $^*$ indicating the complex conjugate.

All that is left to do is to compute the partial derivatives of $v$ and plug them into the pseudoinverse, then do the matrix operations. But I am too lazy to do all of this computation.

Of course, if you have numbers to plug in, then you can use any appropriate numerical method you desire to solve the system as opposed to doing this computation by hand.
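For instance, a minimal numerical sketch at a single sample point (all parameter values below are arbitrary, $\mathbf q$ is taken real, and the division by $\mathbf b$ is componentwise), using numpy's built-in pseudoinverse rather than the closed form:

```python
# Numerical sketch at one sample point r0: build grad v, form the skew
# matrix, and get m from numpy's built-in Moore-Penrose pseudoinverse.
import numpy as np

q = np.array([1.0, -0.5, 2.0])    # arbitrary sample parameters
a = np.array([0.3, 0.1, -0.2])
b = np.array([1.0, 2.0, 0.5])
k = np.array([1.0, 1.0, 1.0])
r0 = np.array([0.2, -0.1, 0.4])

v = np.exp(1j * q @ r0 - np.sum((r0 - a)**2 / b))
grad_v = (1j * q - 2 * (r0 - a) / b) * v          # c_i * v

A = np.array([[0, -grad_v[2], grad_v[1]],
              [grad_v[2], 0, -grad_v[0]],
              [-grad_v[1], grad_v[0], 0]])

m = np.linalg.pinv(A) @ (k * v)                   # least-squares m

# The fit is not exact: w x m is always orthogonal (in the unconjugated
# sense) to w = grad v, so A m = k v requires grad_v . k = 0, which
# generically fails.
residual = np.linalg.norm(A @ m - k * v)
```

This makes explicit why the analytic route above stalls: the pseudoinverse gives only the best least-squares $\boldsymbol m$ at each point, and the residual vanishes only where $(\boldsymbol\nabla v)\cdot\boldsymbol k=0$.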

Answer 2:

For generic values of the parameters, $\boldsymbol{m}$ can't be a nonzero constant. If it were, it would have to satisfy $$\boldsymbol{\nabla}(\ln v(\boldsymbol{r}))\times\boldsymbol{m}=\boldsymbol{k}\text{,}$$ i.e., $$2\mathsf{b}^{-1}\boldsymbol{r}\times\boldsymbol{m}=(\mathrm{i}\boldsymbol{q}+2\mathsf{b}^{-1}\boldsymbol{a})\times\boldsymbol{m}-\boldsymbol{k} $$ for all $\boldsymbol{r}$; the left side is a nonzero linear function of $\boldsymbol{r}$, but the right side is a constant.

Therefore $\boldsymbol{m}$ is a vector field. The equation given is of the form

$$\boldsymbol{\nabla\times H}=\boldsymbol{J}\text{.}$$

(It might be helpful to review the theory of Helmholtz decomposition.) For "good" boundary conditions, this equation determines $\boldsymbol{H}$ only up to addition of a quantity of the form $\boldsymbol{\nabla}\psi$. Therefore assume that $\boldsymbol{\nabla\cdot H}=0$. Then $\boldsymbol{H}=\boldsymbol{\nabla\times A}$ for some $\boldsymbol{A}$, which WLOG we can assume to satisfy $\boldsymbol{\nabla\cdot A}=0$. Then the equation reduces to $$-\nabla^2 \boldsymbol{A}=\boldsymbol{k}v\text{.}$$ Since $\boldsymbol{k}$ is a constant, for "good" boundary conditions the components of $\boldsymbol{A}$ orthogonal to $\boldsymbol{k}$ vanish; write $\boldsymbol{A}=A\boldsymbol{\hat{k}}$. Then

$$-\nabla^2 A=kv\text{,}\qquad\boldsymbol{H}=-\boldsymbol{\hat{k}\times\nabla}A\text{.}$$

According to Wikipedia, the solution to the inhomogeneous Laplace equation with Gaussian source $$-\nabla^2\phi=\mathrm{e}^{-\pi r^2}$$ and "good" boundary conditions is $$\phi(\boldsymbol{r})=\frac{1}{4\pi r}\mathrm{erf}\left(\sqrt{\pi}r\right)\text{.}$$ If $\mathsf{b}$ is a rotation-invariant tensor, then I expect this result to generalize well to your $v(\boldsymbol{r})$ by shifting, rescaling, and completing the square.
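That solution can be checked symbolically with the radial form of the Laplacian, $\nabla^2\phi=\frac1r\,\partial_r^2(r\phi)$ (a sympy sketch):

```python
# Verify -lap(phi) = exp(-pi r^2) for phi = erf(sqrt(pi) r) / (4 pi r),
# using the radial Laplacian (1/r) d^2/dr^2 (r phi).
import sympy as sp

r = sp.symbols('r', positive=True)
phi = sp.erf(sp.sqrt(sp.pi) * r) / (4 * sp.pi * r)
lap = sp.diff(r * phi, r, 2) / r          # radial Laplacian of phi

assert sp.simplify(-lap - sp.exp(-sp.pi * r**2)) == 0
```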