Obtaining positive eigenvalues of the matrix $A$?


Let us consider the matrix $A$, which has three parameters $R, C_1, C_3$. It comes from the Ikeda map in real form.

The map is defined by $$x \rightarrow R+(x \cos(\tau)-y \sin(\tau))$$ $$y \rightarrow x\sin(\tau)+y\cos(\tau)$$

The Jacobian matrix is given by:

\begin{equation*} A = \begin{bmatrix} \cos\tau + x \frac{\partial \cos \tau}{\partial x} - y\frac{\partial \sin\tau}{\partial x} & -\sin \tau + x\frac{\partial \cos \tau}{\partial y} - y \frac{\partial \sin \tau}{\partial y}\\ \sin\tau + x \frac{\partial \sin \tau}{\partial x} + y \frac{\partial \cos \tau}{\partial x} & \cos \tau + x \frac{\partial \sin\tau}{\partial y} + y\frac{\partial \cos \tau}{\partial y} \end{bmatrix} \end{equation*} where $$\tau = C_{1} - \frac{C_{3}} {1+x^2+y^2}$$

$x,y$ are solutions of the nonlinear system

\begin{equation} \begin{aligned} R+x \cos \tau - y \sin \tau &= x\\ x\sin \tau + y \cos \tau &= y \end{aligned} \end{equation}

After calculating the determinant of the matrix $A$, we get $\det(A)=1$ (so the product of the eigenvalues is $1$), using $$\frac{\partial \tau}{\partial x} = \frac{2C_{3}x}{(1+x^2+y^2)^2}, \qquad \frac{\partial \tau}{\partial y} = \frac{2C_{3}y}{(1+x^2 + y^2)^2}$$
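Since $\det(A)=1$ should hold at every $(x,y)$, not only at fixed points, it is easy to sanity-check numerically. The following sketch (mine, plain Python, central finite differences) assumes nothing beyond the map's definition above:

```python
import math

# Ikeda map in real form: (x, y) -> (R + x cos(tau) - y sin(tau), x sin(tau) + y cos(tau))
def ikeda(x, y, R, C1, C3):
    tau = C1 - C3 / (1.0 + x * x + y * y)
    return (R + x * math.cos(tau) - y * math.sin(tau),
            x * math.sin(tau) + y * math.cos(tau))

def jacobian_det(x, y, R, C1, C3, h=1e-6):
    """Determinant of the Jacobian of the map, by central finite differences."""
    fxp = ikeda(x + h, y, R, C1, C3)
    fxm = ikeda(x - h, y, R, C1, C3)
    fyp = ikeda(x, y + h, R, C1, C3)
    fym = ikeda(x, y - h, R, C1, C3)
    a = (fxp[0] - fxm[0]) / (2 * h)  # d(first)/dx
    b = (fyp[0] - fym[0]) / (2 * h)  # d(first)/dy
    c = (fxp[1] - fxm[1]) / (2 * h)  # d(second)/dx
    d = (fyp[1] - fym[1]) / (2 * h)  # d(second)/dy
    return a * d - b * c

# det(A) is 1 at an arbitrary point, up to finite-difference error
print(jacobian_det(0.7, -1.3, R=1.0, C1=0.4, C3=6.0))  # ~ 1.0
```

The test points and step size are arbitrary choices of mine; any smooth point works.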

I am wondering: for which values of $R$, $C_{1}$, $C_{3}$ can I obtain positive eigenvalues? After trying many values, I only ever get complex or negative eigenvalues.

I am starting to suspect that the above matrix cannot have any positive eigenvalues at all.

Any sharp, hawk-eyed observations on this?

EDIT -

Suppose $R=0$. Then $x=0,y=0$ satisfies the nonlinear system, and the trace of the matrix at $(0,0)$ is $2\cos \tau$. Since the determinant is $1$, for the eigenvalues to be real and positive we would need $\cos \tau > 1$, which is impossible, so the $R=0$ case is eliminated. Now I am wondering: for $R \neq 0$, can the Jacobian matrix have positive eigenvalues?
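The $R=0$ computation above can be checked numerically as well. This sketch (my own) confirms that at the origin the Jacobian is a pure rotation by $C_1-C_3$, so its trace is $2\cos(C_1-C_3)\le2$:

```python
import math

def ikeda(x, y, R, C1, C3):
    tau = C1 - C3 / (1.0 + x * x + y * y)
    return (R + x * math.cos(tau) - y * math.sin(tau),
            x * math.sin(tau) + y * math.cos(tau))

def jacobian(x, y, R, C1, C3, h=1e-6):
    """2x2 Jacobian ((a, b), (c, d)) by central finite differences."""
    fxp, fxm = ikeda(x + h, y, R, C1, C3), ikeda(x - h, y, R, C1, C3)
    fyp, fym = ikeda(x, y + h, R, C1, C3), ikeda(x, y - h, R, C1, C3)
    return (((fxp[0] - fxm[0]) / (2 * h), (fyp[0] - fym[0]) / (2 * h)),
            ((fxp[1] - fxm[1]) / (2 * h), (fyp[1] - fym[1]) / (2 * h)))

# R = 0: the origin is a fixed point, and the Jacobian there is a rotation by tau0 = C1 - C3
C1, C3 = 0.8, 0.3  # arbitrary example values
(a, b), (c, d) = jacobian(0.0, 0.0, 0.0, C1, C3)
tau0 = C1 - C3
print(a + d, 2 * math.cos(tau0))  # trace equals 2 cos(C1 - C3) <= 2
# the eigenvalues are e^{+i tau0} and e^{-i tau0}: complex unless sin(tau0) = 0
```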


tl;dr: There is a fixed point with both eigenvalues positive iff there are values of $u$ solving \begin{gather*} \cos{\!(C_3u-C_1)}=1-\frac{R^2}{2}\frac{u}{1-u} \tag{0} \\ C_3\sin{\!(C_3u-C_1)}>\frac{R^2}{2(1-u)^2} \tag{1} \\ 0<u<\min{\!\left(\left(1+\left(\frac{R}{2}\right)^2\right)^{-1},1-\frac{|R|}{\sqrt{2|C_3|}}\right)} \tag{2} \end{gather*} Depending on the values of $(R,C_3,C_1)$, this can have no solutions ($(1,0,0)$), one solution ($(1,2,-0.09)$), or multiple solutions ($(1,6\pi,\frac{\pi}{6})$).
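These conditions are easy to brute-force: locate roots $u$ of $\cos(C_3u-C_1)=1-\frac{R^2}{2}\frac{u}{1-u}$ inside the admissible window, and keep those where the trace condition $C_3\sin(C_3u-C_1)>\frac{R^2}{2(1-u)^2}$ holds. The grid size, bracketing scheme, and parameter triples below are my own choices, not part of the answer:

```python
import math

def count_solutions(R, C3, C1, n=200_000):
    """Count roots u of equation (0) inside the window (2) at which inequality (1) holds."""
    if C3 == 0.0:
        return 0  # (1) would read 0 > R^2/(2(1-u)^2), which is impossible
    u_max = min(1.0 / (1.0 + (R / 2.0) ** 2),
                1.0 - abs(R) / math.sqrt(2.0 * abs(C3)))
    if u_max <= 0.0:
        return 0  # window (2) is empty
    def F(u):  # residual of equation (0)
        return math.cos(C3 * u - C1) - (1.0 - 0.5 * R * R * u / (1.0 - u))
    us = [u_max * k / (n + 1) for k in range(1, n + 1)]  # strictly inside (0, u_max)
    count = 0
    for a, b in zip(us, us[1:]):
        if F(a) * F(b) < 0.0:        # a root of (0) is bracketed here
            for _ in range(60):      # refine it by bisection
                m = 0.5 * (a + b)
                if F(a) * F(m) <= 0.0:
                    b = m
                else:
                    a = m
            r = 0.5 * (a + b)
            if C3 * math.sin(C3 * r - C1) > R * R / (2.0 * (1.0 - r) ** 2):
                count += 1           # inequality (1) holds at this root
    return count

print(count_solutions(1.0, 0.0, 0.0))                  # expect 0
print(count_solutions(1.0, 2.0, -0.09))                # expect 1
print(count_solutions(1.0, 6 * math.pi, math.pi / 6))  # multiple
```

A uniform grid can in principle miss a pair of very close roots; for these parameter values the roots are well separated.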

Some special cases are:

  • $R=0$: no solutions.
  • $2|C_3|<R^2$: no solutions.
  • $|R|\gg1$ and $2|C_3|\in[R^2,R^2+2-O(R^{-2}))$: at most one solution.
  • $|C_3|\gtrsim|R|^2\gg1$: at least $\frac{2|C_3|}{\pi R^2}-10$ solutions. (I didn't write this up carefully; with a little more care, you might be able to subtract less.)

I leave the other parameter regimes to you.


As ancient mathematician suggested in the comments, it is easier to stay with the original complex formulation of the problem. That is: write $z_n=x_n+y_ni$; then $$z_{n+1}=R+z_ne^{C_1i-\frac{C_3i}{1+|z_n|^2}}$$ Call the right-hand side $T(z_n)$. $T$ is not holomorphic, but it is the composition of a holomorphic function with a real-differentiable one.

Any $a+bi\in\mathbb{C}$ defines a linear transformation $\theta(a+bi)$ on the $2$-dimensional real vector space $\mathbb{C}$ by multiplication. In the standard basis $\{1,i\}$, that transformation has matrix $$\begin{bmatrix}a&-b\\b&a\end{bmatrix}$$ (You already know this, I think, but it's worth spelling out.)
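In code, the correspondence between complex multiplication and these $2\times2$ matrices looks like this (a trivial sketch of my own):

```python
# Multiplication by a + bi, viewed as a linear map on R^2 in the basis {1, i}
def homothety(a, b):
    return ((a, -b), (b, a))

def apply(M, v):
    (p, q), (r, s) = M
    return (p * v[0] + q * v[1], r * v[0] + s * v[1])

# agrees with complex multiplication (a + bi)(x + yi)
a, b, x, y = 2.0, -3.0, 0.5, 1.5
w = complex(a, b) * complex(x, y)
print(apply(homothety(a, b), (x, y)))  # (5.5, 1.5)
print((w.real, w.imag))                # (5.5, 1.5)
```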

Now identify $a+bi$ with the corresponding transformation (homothety). By the chain rule, since $\frac{d}{ds}\left(C_1i-\frac{C_3i}{1+s}\right)=\frac{C_3i}{(1+s)^2}$, \begin{align*} \mathcal{D}(T)(z_n)&=e^{C_1i-\frac{C_3i}{1+|z_n|^2}}+z_ne^{C_1i-\frac{C_3i}{1+|z_n|^2}}\left(\frac{C_3i}{(1+|z_n|^2)^2}\right)\mathcal{D}(|z_n|^2) \\ &=e^{C_1i-\frac{C_3i}{1+|z_n|^2}}\left(1+\frac{C_3iz_n}{(1+|z_n|^2)^2}\begin{bmatrix}2x_n&2y_n\\0&0\end{bmatrix}\right) \end{align*}

It is easier to work with the transpose matrix, which has the same eigenvalues. For the homotheties, transposition corresponds to taking complex conjugates; for the other term, $$\begin{bmatrix}2x_n&2y_n\\0&0\end{bmatrix}^{\mathsf{T}}=\begin{bmatrix}2x_n&0\\2y_n&0\end{bmatrix}=2z_n\Re{}$$ where $\Re{}$ denotes the real-part operator. Thus $$\mathcal{D}(T)(z_n)^{\mathsf{T}}=e^{-C_1i+\frac{C_3i}{1+|z_n|^2}}\left(1-\frac{2C_3i|z_n|^2}{(1+|z_n|^2)^2}\Re{}\right)$$ If $u=\frac{1}{1+|z_n|^2}$, then we can simply write $$\mathcal{D}(T)(z_n)^{\mathsf{T}}=e^{(C_3u-C_1)i}(1-2C_3u(1-u)i\Re{})$$
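The sign of the $\sin$ term here is easy to get wrong, so it is worth checking the closed form against a finite-difference Jacobian of the real map. This sketch (my own; the test point is arbitrary) verifies the trace $2\cos\varphi+2C_3u(1-u)\sin\varphi$ with $\varphi=C_3u-C_1$, and the determinant $1$, both derived below:

```python
import math

def ikeda(x, y, R, C1, C3):
    tau = C1 - C3 / (1.0 + x * x + y * y)
    return (R + x * math.cos(tau) - y * math.sin(tau),
            x * math.sin(tau) + y * math.cos(tau))

def fd_trace_det(x, y, R, C1, C3, h=1e-6):
    """Trace and determinant of the Jacobian, by central finite differences."""
    fxp, fxm = ikeda(x + h, y, R, C1, C3), ikeda(x - h, y, R, C1, C3)
    fyp, fym = ikeda(x, y + h, R, C1, C3), ikeda(x, y - h, R, C1, C3)
    a = (fxp[0] - fxm[0]) / (2 * h)  # d(first)/dx
    b = (fyp[0] - fym[0]) / (2 * h)  # d(first)/dy
    c = (fxp[1] - fxm[1]) / (2 * h)  # d(second)/dx
    d = (fyp[1] - fym[1]) / (2 * h)  # d(second)/dy
    return a + d, a * d - b * c

# closed form: with u = 1/(1+|z|^2) and phi = C3 u - C1,
# trace = 2 cos(phi) + 2 C3 u (1-u) sin(phi) and det = 1
x, y, R, C1, C3 = 1.0, 0.0, 0.7, 0.0, 1.0
u = 1.0 / (1.0 + x * x + y * y)
phi = C3 * u - C1
tr, det = fd_trace_det(x, y, R, C1, C3)
print(tr - (2 * math.cos(phi) + 2 * C3 * u * (1 - u) * math.sin(phi)))  # ~ 0
print(det - 1.0)                                                        # ~ 0
```

(The formula below is stated for the transpose, but trace and determinant are the same for a matrix and its transpose.)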

The above computations already give a clean proof that $\mathcal{D}(T)(z_n)^{\mathsf{T}}$ has determinant $1$. For any complex $a$, $$\det{\!(a)}=|a|^2\text{;}$$ thus $\det{\!(e^{(C_3u-C_1)i})}=1$. And $1-2C_3u(1-u)i\Re{}$ has matrix $$\begin{bmatrix}1&0\\-2C_3u(1-u)&1\end{bmatrix}$$ for which the determinant can be computed by hand. To conclude, apply the homomorphism property of determinants.

The eigenvalues of a $2\times2$ matrix $M$ are both positive iff $\det{\!(M)}>0$ and $\mathrm{tr}{(M)}>2\sqrt{\det{\!(M)}}>0$ (when $\mathrm{tr}{(M)}=2\sqrt{\det{\!(M)}}$ the eigenvalues coincide; we work with the strict case). We already know $\det{\!(\mathcal{D}(T)(z_n)^{\mathsf{T}})}=1$. For the trace, first compute \begin{align*} \Im{(\mathcal{D}(T)(z_n)^{\mathsf{T}}i)}&=\Im{(e^{(C_3u-C_1)i}(1\cdot i+0))} \\ &=\cos{\!(C_3u-C_1)} \end{align*} and \begin{align*} 2\Re{(\mathcal{D}(T)(z_n)^{\mathsf{T}}1)}&=\mathcal{D}(T)(z_n)^{\mathsf{T}}1+\overline{\mathcal{D}(T)(z_n)^{\mathsf{T}}1} \\ &=e^{(C_3u-C_1)i}(1-2C_3u(1-u)i)+e^{-(C_3u-C_1)i}(1+2C_3u(1-u)i) \\ &=2\cos{\!(C_3u-C_1)}+4C_3u(1-u)\sin{\!(C_3u-C_1)} \end{align*} Thus \begin{align*} \mathrm{tr}(\mathcal{D}(T)(z_n)^{\mathsf{T}})&=\Re{(\mathcal{D}(T)(z_n)^{\mathsf{T}}1)}+\Im{(\mathcal{D}(T)(z_n)^{\mathsf{T}}i)} \\ &=2\cos{\!(C_3u-C_1)}+2C_3u(1-u)\sin{\!(C_3u-C_1)} \end{align*}

We are interested in the value of the trace at a fixed point, for which $z_{n+1}=z_n=z$ and so \begin{align*} R&=z(1-e^{-(C_3u-C_1)i}) \\ R^2&=|z|^2\cdot2(1-\cos{\!(C_3u-C_1)}) \\ &=\left(\frac{1-u}{u}\right)\cdot2(1-\cos{\!(C_3u-C_1)}) \end{align*} Conversely, given a value of $u\in(0,1)$ solving that last equation, one can always (uniquely!) choose a phase of $z$ to generate a fixed point. Solving for the cosine, $$\cos{\!(C_3u-C_1)}=1-\frac{R^2}{2}\frac{u}{1-u}$$
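The reconstruction can be exercised numerically: bisect equation (0) for $u$, build the fixed point $z=R/(1-e^{-(C_3u-C_1)i})$, and confirm that both eigenvalues there are positive. The parameter triple and the bracket for the bisection are my own choices:

```python
import cmath, math

# Example parameters (chosen by me, not from the answer); they admit one admissible root of (0)
R, C3, C1 = 1.0, 2.0, -0.09

def T(z):
    return R + z * cmath.exp(1j * (C1 - C3 / (1.0 + abs(z) ** 2)))

def F(u):  # residual of equation (0)
    return math.cos(C3 * u - C1) - (1.0 - 0.5 * R * R * u / (1.0 - u))

# bisect (0) on a bracket located by inspection
lo, hi = 0.2, 0.3
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if F(lo) * F(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
u = 0.5 * (lo + hi)

# reconstruct the fixed point from R = z(1 - e^{-(C3 u - C1)i})
phi = C3 * u - C1
z = R / (1.0 - cmath.exp(-1j * phi))
print(abs(T(z) - z))  # ~ 0: z really is a fixed point

# Jacobian at z via finite differences along the real and imaginary directions
h = 1e-6
dx = (T(z + h) - T(z - h)) / (2 * h)            # d/dx column, as a complex number
dy = (T(z + 1j * h) - T(z - 1j * h)) / (2 * h)  # d/dy column
tr = dx.real + dy.imag
det = dx.real * dy.imag - dy.real * dx.imag
disc = tr * tr - 4.0 * det
print(tr > 2.0 and disc > 0.0)  # True: both eigenvalues real and positive
```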

This gives an upper bound on $u$: since $1-\frac{R^2}{2}\frac{u}{1-u}$ is decreasing in $u$ and equals $-1$ when $u=\left(1+\left(\frac{R}{2}\right)^2\right)^{-1}$, we must have $0<u<\left(1+\left(\frac{R}{2}\right)^2\right)^{-1}$.

At the fixed point, the trace should exceed $2$. Halving that inequality and substituting the known cosine, \begin{gather*} C_3u(1-u)\sin{\!(C_3u-C_1)}>\frac{R^2}{2}\frac{u}{1-u} \\ C_3\sin{\!(C_3u-C_1)}>\frac{R^2}{2(1-u)^2} \end{gather*} This, too, gives an upper bound on $u$: $$1>|\sin{\!(C_3u-C_1)}|>\frac{R^2}{2|C_3|(1-u)^2}$$ Rearranging, $$u<1-\frac{|R|}{\sqrt{2|C_3|}}$$

Putting all the equations above together then gives the immediate claim.


Some special cases are tractable: obviously $R=0$ has no solutions, as does $2|C_3|<R^2$. For the more complicated cases below, let $f_R(u)$ be the right-hand side of (0), $|C_3|g_R(u)$ the right-hand side of (1), and $h(R,C_3)$ the minimum in (2).

First, I want to work with only one trig function. Since $g_R(u)>0$, a solution to (0) and (1) must satisfy \begin{gather*} f_R(u)=\cos{\!(C_3u-C_1)} \\ |\cos{\!(C_3u-C_1)}|<\sqrt{1-g_R(u)^2}\tag{3} \end{gather*} The converse is not true: each solution to (3) is (depending on the sign of $C_3$) preceded or followed by a spurious solution in which $C_3\sin{\!(C_3u-C_1)}<0<|C_3|g_R(u)$. But this gives a bijection between solutions to (0) and (1) and pairs of solutions to (3), up to possibly the first or last solution, which is enough for a counting argument with error $\leq1$.

So, when can (3) hold? For $u\ll1$, we have $f_R(u)>\sqrt{1-g_R(u)^2}$, so (3) fails there. This state of affairs persists as long as $$f_R(u)^2>1-g_R(u)^2$$ or until an endpoint of (2) (whichever comes first). For the former, substitute $$1+\left(\frac{R^2}{2}\frac{u}{1-u}\right)^2-2\cdot\frac{R^2}{2}\frac{u}{1-u}>1-\frac{1}{C_3^2}\left(\frac{R^2}{2(1-u)^2}\right)^2$$ and rearrange: $$\left(\frac{R^2}{2}\frac{u}{1-u}\right)^2+\frac{1}{C_3^2}\left(\frac{R^2}{2(1-u)^2}\right)^2>2\cdot\frac{R^2}{2}\frac{u}{1-u}$$ Multiply by $(1-u)^4\left(\frac{2}{R^2}\right)^2$: $$u^2(1-u)^2+C_3^{-2}>2\cdot\frac{2}{R^2}u(1-u)^3$$ Rearrange again: $$C_3^{-2}>u(1-u)^2\left(\left(\frac{2}{R}\right)^2(1-u)-u\right)$$ and lastly multiply by $\left(\frac{R}{2}\right)^2$ to obtain $$\left(1-\left(1+\left(\frac{R}{2}\right)^2\right)u\right)(1-u)^2u<\left(\frac{R}{2C_3}\right)^2\tag{4}$$ For large $R$, the maximum value of the left-hand side is $(4+R^2)^{-1}+O(R^{-2})$. Thus if $R$ is large and $\frac{1}{4+R^2}<\left(\frac{R}{2C_3}\right)^2$, the necessary ordering persists until an endpoint of (2).
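Every step in this chain multiplies by a strictly positive quantity, so the rearrangement is exactly reversible; that can be spot-checked on random inputs. This sketch is mine:

```python
import random

def margins(u, R, C3):
    """Margins of the two equivalent inequalities; each is > 0 iff its inequality holds."""
    f = 1.0 - 0.5 * R * R * u / (1.0 - u)
    g = R * R / (2.0 * abs(C3) * (1.0 - u) ** 2)
    m_direct = f * f - (1.0 - g * g)  # f_R(u)^2 > 1 - g_R(u)^2, written out directly
    m_rearr = (R / (2.0 * C3)) ** 2 \
        - (1.0 - (1.0 + (R / 2.0) ** 2) * u) * (1.0 - u) ** 2 * u  # form (4)
    return m_direct, m_rearr

random.seed(0)
agree = 0
for _ in range(10_000):
    u = random.uniform(0.01, 0.95)
    R = random.uniform(0.1, 5.0)
    C3 = random.choice([-1.0, 1.0]) * random.uniform(0.1, 10.0)
    m1, m2 = margins(u, R, C3)
    if abs(m1) > 1e-9 and abs(m2) > 1e-9:  # skip numerical near-ties
        assert (m1 > 0) == (m2 > 0)
        agree += 1
print(agree, "random samples agree")
```

In exact arithmetic the two margins are proportional by the positive factor $(1-u)^4/R^2$, which is why the signs must match.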

Suppose such is the case, that is, $R\gg1$ and $$\frac{R^2}{2}\leq|C_3|<|R|\sqrt{1+\left(\frac{R}{2}\right)^2}\tag{5}$$ Then solutions to (0) come in pairs, surrounded by a solution to the square of (1). By periodicity, there are $\left\lfloor\frac{|C_3|h(R,C_3)}{\pi}\right\rfloor$-many such solutions to (1), and thus $\left\lfloor\frac{|C_3|h(R,C_3)}{2\pi}\right\rfloor$-many such solutions to both (1) and (2). Now note that (5) is a very small range of values: we must have $|C_3|=\frac{R^2}{2}\left(1+\frac{a}{R^2}+O(R^{-3})\right)$ with $a\in(0,2)$. Taylor-expanding $h(R,C_3)$, our bound is $$\frac{1}{\pi}\min{\!\left(1+O(R^{-2}),\frac{a}{8}+O(R^{-2})\right)}$$ Since $a<2$, the right-hand term is smaller and rounds down to $0$. Since we may have dropped a solution in our pairing procedure, there is at most one solution.

Repeating the same sort of argument gives the remaining $|C_3|\gtrsim|R|^2$ case. Either solutions to (0) are bracketed between solutions to (1) or vice versa; we miscount whenever they flip. In particular, the counting argument can lose or gain at most $10$ solutions (probably fewer):

  • $2$ for each boundary;
  • $2$ for each transition between $f_R>\sqrt{1-g_R^2}$ and $f_R<\sqrt{1-g_R^2}$, or between $f_R>-\sqrt{1-g_R^2}$ and $f_R<-\sqrt{1-g_R^2}$; and
  • $2$ for the transition between $f_R>0$ and $f_R<0$.

Since $|C_3|\gtrsim R^2$, $$h(R,C_3)=\left(1+\left(\frac{R}{2}\right)^2\right)^{-1}=R^{-2}\left(4+O(R^{-2})\right)$$ which corresponds to about $\frac{2|C_3|}{\pi R^2}$ oscillations. We double that to count intersections, then halve it again by the pairing argument, so the same count applies to candidate solutions. Subtracting ten for safety gives the claim.