How to multiply certain analytic matrices so they behave like the identity near $0$?


I have the following parametrized family of matrices: $$ A(u,t) = \begin{bmatrix} \cos(t(u+s)) & \frac{\sin(t(u+s))}{u+s} \\ -(u+s)\sin(t(u+s)) & \cos(t(u+s))\end{bmatrix} \quad u \in \Bbb R\setminus \{0\} \text{ and } t > 0 $$ whose entries are entire in $s$. For any given $N$, my goal is to find, or at least prove the existence of, a sequence $\{(u_i,t_i)\}_{i=1}^M$ such that $$\prod_{i=1}^M A(u_i,t_i) = I + O(s^N)$$ I have been playing around with this problem and have found numerical solutions up to $N=8$. My hope is that there is some pattern I can exploit recursively to generate new solutions from old ones. It is easy to eliminate the odd powers of $s$ because of the parity of their coefficients, but the even powers are more troublesome. By Taylor expanding, I can tell that this problem is equivalent to solving a system of equations involving powers and trigonometric functions, but I do not know enough about such systems to find the required sequence of $(u_i,t_i)$. Searching online suggests a link to the membership problem for matrix semigroups, which is undecidable in general, but perhaps in this case, with such an explicit form for $A$, something more can be said. Are there any references that I might find useful?
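The condition $\prod_i A(u_i,t_i) = I + O(s^N)$ is easy to experiment with in a computer algebra system by Taylor expanding the product in $s$. A minimal sketch (the helper names `A` and `deviation_order` are mine, not from the question):

```python
import sympy as sp

s = sp.symbols('s')

def A(u, t):
    """The matrix A(u, t) from the question, as a function of s."""
    w = u + s
    return sp.Matrix([[sp.cos(t * w), sp.sin(t * w) / w],
                      [-w * sp.sin(t * w), sp.cos(t * w)]])

def deviation_order(pairs, max_order):
    """Smallest k < max_order such that the s^k coefficient of
    prod_i A(u_i, t_i) - I is nonzero, or None if the product
    is I + O(s^max_order)."""
    prod = sp.eye(2)
    for u, t in pairs:
        prod = prod * A(u, t)
    for k in range(max_order):
        # k-th Taylor coefficient at s = 0, entry by entry
        Ck = prod.applyfunc(
            lambda e: sp.simplify(e.diff(s, k).subs(s, 0) / sp.factorial(k)))
        if k == 0:
            Ck = Ck - sp.eye(2)
        if not Ck.is_zero_matrix:
            return k
    return None
```

For instance, `deviation_order([(1, 2*sp.pi)], 2)` returns `1`: the constant term of $A(1,2\pi)$ is $I$ (since $\sum_i t_i = 2\pi$), but the first-order term survives.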

Edit: For example, for $N=3$, note that, up to second order, $$A(-1,t_1)A(1,4\pi)A(-1,4\pi-t_1) = I + \begin{bmatrix}-8\pi\sin\left(2t_1\right)&-8\pi\cos\left(2t_1\right)\\-8\pi\cos\left(2t_1\right)&8\pi\sin\left(2t_1\right) \end{bmatrix}s^2=I+B(t_1)s^2$$ and $B(t_1+\pi/2)=-B(t_1)$, so by choosing $t_1=\pi$ for one triple and $t_1 = 3\pi/2$ for a second, we have $$A(-1,\pi)A(1,4\pi)A(-1,3\pi)\,A(-1,3\pi/2)A(1,4\pi)A(-1,5\pi/2)=I+O(s^3)$$ I believe that it should in general be possible to use $u_i$'s of fixed absolute value, but I cannot prove it. By choosing $|u_i|=1$ and looking at the zeroth-order term of a general product of $A(u_i,t_i)$'s, it is easy to see that $\sum_i t_i \in 2\pi \Bbb N$ is necessary. My guess is that similar constraints apply to the higher-order terms, but obtaining explicit expressions for those is harder.
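The six-factor identity above can be sanity-checked numerically: if the product really is $I + O(s^3)$, then $\lVert P(s) - I\rVert$ should shrink by roughly a factor of $1000$, not $100$, when $s$ shrinks by a factor of $10$. A quick sketch with NumPy:

```python
import numpy as np

def A(u, t, s):
    """A(u, t) from the question, evaluated at a numerical s."""
    w = u + s
    return np.array([[np.cos(t * w), np.sin(t * w) / w],
                     [-w * np.sin(t * w), np.cos(t * w)]])

# The sequence claimed to give I + O(s^3).
pi = np.pi
pairs = [(-1, pi), (1, 4 * pi), (-1, 3 * pi),
         (-1, 3 * pi / 2), (1, 4 * pi), (-1, 5 * pi / 2)]

def deviation(s):
    """Max-norm of (product of the A's) minus the identity."""
    P = np.eye(2)
    for u, t in pairs:
        P = P @ A(u, t, s)
    return np.abs(P - np.eye(2)).max()

for s in (1e-2, 1e-3, 1e-4):
    print(s, deviation(s))  # decays roughly like s^3
```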


There are 2 best solutions below


I'm not sure I understand the question perfectly, but it seems to me that for all $u,t$ we have $A(u,t)\,A(u,-t) = I$. Does that solve your problem (with $M=2$ and any $N\in\mathbb{N}$)?
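This inverse relation is quick to confirm symbolically, since each entry of the product reduces via $\cos^2 + \sin^2 = 1$. A minimal sketch:

```python
import sympy as sp

u, t, s = sp.symbols('u t s')
w = u + s

def A(tt):
    # A(u, t) from the question, with time parameter tt and w = u + s.
    return sp.Matrix([[sp.cos(tt * w), sp.sin(tt * w) / w],
                      [-w * sp.sin(tt * w), sp.cos(tt * w)]])

product = sp.simplify(A(t) * A(-t))
print(product)  # Matrix([[1, 0], [0, 1]])
```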

EDIT: A new attempt, with a better understanding of the constraints:

Let's denote $w = u+s$, so that $\bigl||u|-|s|\bigr| \le |w| \le |u|+|s|$. In particular, choosing $u=|s|$ ensures $w = O(s)$.

Then we have:

$$ A(u,t) = I + t\begin{pmatrix} 0 & 1\\ -w^2 & 0 \end{pmatrix} + O((tw)^2)$$

Choosing $t = |s|^N$ and $u = |s|$, we have $A(u,t) = I + O(s^N)$.
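This bound can be tracked numerically by comparing $\max|A(u,t) - I|$ against $|s|^N$ as $s \to 0^+$. (For $s < 0$ one has $w = |s| + s = 0$, where the entry $\sin(tw)/w$ must be read as its limit $t$; the sketch below, with $N = 4$ as an arbitrary choice of mine, sticks to $s > 0$.)

```python
import numpy as np

def A(u, t, s):
    """A(u, t) from the question, at a numerical s (with w != 0)."""
    w = u + s
    return np.array([[np.cos(t * w), np.sin(t * w) / w],
                     [-w * np.sin(t * w), np.cos(t * w)]])

N = 4
for s in (1e-1, 1e-2, 1e-3):
    u, t = s, s ** N          # u = |s|, t = |s|^N for s > 0
    dev = np.abs(A(u, t, s) - np.eye(2)).max()
    print(s, dev / s ** N)    # this ratio stays bounded
```

The dominant entry of $A - I$ is $\sin(tw)/w \approx t = |s|^N$, so the printed ratio hovers near $1$.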


Here is a very brief outline of the proof. It suffices to put $u_i = (-1)^{i+1}$ and to choose $t_i \in \frac \pi 2 \Bbb N$. Put $\vec t = (t_1, \dots, t_M)$ and $P(\vec t,s) = \prod_{i = 1}^M A((-1)^{i+1}, t_i)$. If we constrain ourselves to the planes $\sum_i t_i \in 2\pi \Bbb N$, then the zeroth-order term of $P$ is the identity, so $P(\vec t,s) = I + O(s)$. Furthermore, it is straightforward to prove by induction that the coefficients in the power series of $P$ have the form $$ P(\vec t, s) = I+\sum_{i \ge 1} C_i(\vec t)s^i = I + \sum_{i \ge 1}\begin{bmatrix} a_i & b_i \\ (-1)^i b_i & (-1)^i a_i\end{bmatrix} s^i $$ Moreover, using the expression for $A$, it is easy to see that $Q(\vec t,s) = \prod_{i=1}^M A((-1)^{i}, t_i) = I + \sum_{i \ge 1} (-1)^iC_i(\vec t)s^i$. Now let $j = \min\{i \ge 1 : C_i \neq 0\}$. If $j$ is odd, the $s^j$ terms of $P$ and $Q$ cancel in the product, so $P(\vec t, s) Q(\vec t, s) = I + o(s^j)$; hence it suffices to treat the case where $j$ is even.

In this case, define $\operatorname{flip}(\vec t)_i = t_{M+1-i}$ and $$ \operatorname{cycle}(\vec t) = \begin{cases} ( t_M, t_1,\dots, t_{M-1}) & \text{if $M$ is even} \\ ( t_M + t_1, t_2, \dots, t_{M-1}) & \text{if $M$ is odd} \end{cases} $$ These operations are useful when $t_M$ is an odd multiple of $\frac \pi 2$. Indeed, in that case, we have $$ \begin{align} P(\operatorname{cycle}(\vec t), s) &= A((-1)^{M+1},t_M)\,P(\vec t, s)\, A((-1)^{M+1}, t_M)^{-1} \\ &= I + \begin{bmatrix} a_j & -b_j \\ -b_j & a_j\end{bmatrix} s^j + o(s^j) \end{align} $$ and, remarking that $P(\vec t, s) = I + C_j s^j + o(s^j)$ implies $P(\vec t, s)^{-1} = I - C_j s^j + o(s^j)$, we also obtain $$ \begin{align} P(\operatorname{flip}(\vec t), (-1)^{M+1}s) &=\begin{bmatrix} -1 & 0 \\ 0 & 1\end{bmatrix} P(\vec t, s)^{-1}\begin{bmatrix} -1 & 0 \\ 0 & 1\end{bmatrix} \\ &= I - \begin{bmatrix} a_j & -b_j \\ -b_j & a_j\end{bmatrix} s^j + o(s^j) \end{align} $$ Since $P(\operatorname{flip}(\vec t), (-1)^{M+1}s)$ equals $P(\operatorname{flip}(\vec t), s)$ when $M$ is odd and $Q(\operatorname{flip}(\vec t), s)$ when $M$ is even, it follows that we can always flip the sign of $C_j(\operatorname{cycle}(\vec t))$, so long as $t_M \in \frac \pi 2(2\Bbb N + 1)$.

To finish, we define a concatenation of vectors that respects the products $P$ and $Q$. For $\vec t$ and $\vec r$ of lengths $K$ and $M$, let $$ \vec t * \vec r = \begin{cases} (t_1,\dots, t_K, r_1, \dots, r_M) & \text{if $K$ is odd} \\ ( t_1,\dots, t_{K-1}, t_K + r_1, r_2, \dots, r_M) & \text{if $K$ is even} \end{cases} $$ which ensures that $P(\vec t * \vec r, s) = P(\vec t, s)\, Q(\vec r, s)$.

Finally, we proceed by induction. Let $\vec t(1) = \frac \pi 2(1, 3)$ and $j_m = \min\{i \ge 1 : C_i(\vec t(m)) \neq 0\}$. If $j_m$ is odd, define $\vec t(m+1) = \vec t(m) * \vec t(m)$, so that, by the remarks made earlier, $j_{m+1} > j_m$, and the last element of $\vec t(m+1)$ equals the last element of $\vec t(m)$ and hence is an odd multiple of $\frac \pi 2$. If $j_m$ is even, let $\vec r = \operatorname{cycle}(\vec t(m)) * \vec t(m)$, so that $C_{j_m}(\vec r) = a_{j_m}I$, and set $\vec t(m+1) = \operatorname{flip}(\vec r) * \vec r$, so that the $s^{j_m}$ terms cancel and $P(\vec t(m+1), s) = I + o(s^{j_m})$. Consequently, $j_{m+1} > j_m$, and the last element of $\vec t(m+1)$ again belongs to $\frac \pi 2 (2\Bbb N + 1)$. This completes the proof: $P(\vec t(m), s) = I + O(s^{j_m})$ and $\{j_m\}$ is, by construction, strictly increasing, so for any $N$ we can take $m$ with $j_m \ge N$.
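The bookkeeping in this outline can be checked numerically. The sketch below (function names are mine) implements $\operatorname{flip}$, $\operatorname{cycle}$ and the concatenation $*$, and verifies the key identity $P(\vec t * \vec r, s) = P(\vec t, s)\,Q(\vec r, s)$ for both parities of the left factor's length, as well as the flip relation for odd $M$; the concatenation case $K$ even relies on the semigroup property $A(u, t + t') = A(u,t)A(u,t')$, which holds because $A(u,t) = \exp\bigl(t\begin{smallmatrix}0 & 1\\ -w^2 & 0\end{smallmatrix}\bigr)$.

```python
import numpy as np

def A(u, t, s):
    """A(u, t) from the question, at a numerical s (with w != 0)."""
    w = u + s
    return np.array([[np.cos(t * w), np.sin(t * w) / w],
                     [-w * np.sin(t * w), np.cos(t * w)]])

def P(ts, s):  # u_i = (-1)^(i+1): +1, -1, +1, ...
    out = np.eye(2)
    for i, t in enumerate(ts):
        out = out @ A((-1) ** i, t, s)
    return out

def Q(ts, s):  # u_i = (-1)^i: -1, +1, -1, ...
    out = np.eye(2)
    for i, t in enumerate(ts):
        out = out @ A(-((-1) ** i), t, s)
    return out

def flip(ts):
    return ts[::-1]

def cycle(ts):
    if len(ts) % 2 == 0:
        return [ts[-1]] + ts[:-1]
    return [ts[-1] + ts[0]] + ts[1:-1]   # merges, length drops by one

def concat(ts, rs):  # the operation written t * r in the answer
    if len(ts) % 2 == 1:
        return ts + rs
    return ts[:-1] + [ts[-1] + rs[0]] + rs[1:]

rng = np.random.default_rng(0)
s = 0.3
for K in (3, 4):  # odd and even left factor
    ts = list(rng.uniform(0, np.pi, K))
    rs = list(rng.uniform(0, np.pi, 5))
    assert np.allclose(P(concat(ts, rs), s), P(ts, s) @ Q(rs, s))

# Flip identity for M = 3 (odd, so (-1)^(M+1) s = s).
D = np.diag([-1.0, 1.0])
ts = list(rng.uniform(0, np.pi, 3))
assert np.allclose(P(flip(ts), s), D @ np.linalg.inv(P(ts, s)) @ D)
print("identities verified")
```

Note that the checks use generic real $t_i$: the concatenation and flip identities hold for any $\vec t$, while the $\frac\pi2$-multiple condition is only needed for the cycle conjugation step.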