I have a matrix $A(\operatorname{Im}(\lambda))$ whose entries $a_{ij}$ depend on the imaginary part of $\lambda$ through a relationship that is assessed experimentally (to give a bit more context, I'm talking about Scanlan's flutter derivatives for the aerodynamic effects on a bridge).
I need to find the (complex) eigenvalues and eigenvectors of this matrix, but since the dependency is not trivial, I struggle to find a method in the literature.
Most methods (source) indeed rely on approximating $A$ as a polynomial or rational sum of matrices. However, the coefficients of my matrix are functions of parameters that come from experimental data. The dependency of the matrix entries on these experimental coefficients is not straightforward, and computing derivatives would be complicated.
I'm therefore looking for a method/strategy/algorithm to solve this problem numerically. Does anything come to your mind?
Is there any package (I'm working with Python, but other languages could also work) that implements a numerical method suitable for this problem?
My solution
Also (as proof that I did my homework, and as a starting point), here is how I approached the problem:
- I take a trial $\operatorname{Im}(\bar{\lambda})$ and compute $A(\operatorname{Im}(\bar{\lambda}))$
- Compute the eigenvalues $\lambda_i$ of $A(\operatorname{Im}(\bar{\lambda}))$
- Check whether any $\lambda_i = \bar{\lambda}$. If so, I know that $\lambda_i$ is one of the eigenvalues of my problem.
- Change $\bar{\lambda}$ and repeat.
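A minimal sketch of this trial-and-check loop (the 2×2 matrix here is a toy placeholder for the experimentally fitted $A$, and a tolerance on the equality check is unavoidable in floating point):

```python
import numpy as np

def A(im_lam):
    """Toy stand-in for the experimentally assessed A(Im(lambda)).
    Replace with an interpolation of the measured flutter derivatives."""
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.1 * im_lam), -0.05]])

def check_trial(im_lam_bar, tol=1e-6):
    """Return the eigenvalues of A(Im(lambda_bar)) whose imaginary part
    matches the trial value Im(lambda_bar) to within tol."""
    eigvals = np.linalg.eigvals(A(im_lam_bar))
    return [lam for lam in eigvals if abs(lam.imag - im_lam_bar) < tol]
```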
In reality, step 3 is always impossible without setting a tolerance. As an alternative, I do the following:
- Compute all $\operatorname{Im}(\lambda_i)$ for a given $\operatorname{Im}(\bar{\lambda})$
- Change $\operatorname{Im}(\bar{\lambda})$ up to a maximum value
- Plot $\lambda_i$ vs $\bar{\lambda}$
- Find the intersections with the line $\lambda_i = \bar{\lambda}$
This approach is also not perfect, as sometimes the order of two (or more) eigenvalues changes. When this happens, I need to "chase" the mode shape to prevent my lines from making sudden jumps that could cause false intersections with the $\lambda_i = \bar{\lambda}$ line. Also, I need to compute the eigenvalues of $A$ for a lot of values of $\bar{\lambda}$, which makes the code inefficient.
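The sweep with mode "chasing" can be sketched as follows (again with a toy $A$ in place of the experimental one): eigenvalue branches are kept consistent between consecutive steps by pairing each new eigenvector with the previous one it overlaps most.

```python
import numpy as np

def A(omega):
    """Toy stand-in for A(Im(lambda)); replace with the experimental fit."""
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.1 * omega), -0.05 - 0.01 * omega]])

def sweep(omegas):
    """Sweep the trial Im(lambda) and track each eigenvalue branch by
    matching eigenvectors between consecutive steps (mode 'chasing')."""
    w, v = np.linalg.eig(A(omegas[0]))
    branches = [w]
    v_prev = v
    for om in omegas[1:]:
        w, v = np.linalg.eig(A(om))
        # pair new modes with old ones via the largest |<v_prev_i, v_j>|
        overlap = np.abs(v_prev.conj().T @ v)
        order = overlap.argmax(axis=1)
        w, v = w[order], v[:, order]
        branches.append(w)
        v_prev = v
    return np.array(branches)  # shape (len(omegas), n)
```

The intersections are then the sign changes of `sweep(omegas)[:, i].imag - omegas` along each branch $i$. Note that the greedy `argmax` pairing can in principle assign two old modes to the same new one; `scipy.optimize.linear_sum_assignment` on `-overlap` gives a proper one-to-one matching if that becomes an issue.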
I'm therefore looking for a better and less computationally intensive way to solve this problem.
This is an optimization problem, or at least it can be framed as one.
To solve this kind of problem numerically, you first have to decide on a target function, or "score" (call it $S$): a way to decide whether your proposed solution is better or worse than a previous one. You are quite free in choosing this function, but different optimization methods expect different technical conditions. The two main ones are typically:
1. the function should be minimal when the target is achieved, and
2. the function is smooth (at least twice differentiable) in the parameters.
There is no need to explicitly define the function, only to be able to compute it for every given parameter set ($= A$ matrix in your case). Naively, you could just set $$\mathrm{score}(A(\bar{\lambda}))= \min_i ||\operatorname{Im}\bar\lambda - \operatorname{Im}\lambda_i(A)||^2,$$ that is, the squared $L_2$ distance to the closest eigenvalue, but the minimum function is not smooth. As an alternative, I propose a trick inspired by thermodynamics: define $$\epsilon_i = ||\operatorname{Im}\bar\lambda - \operatorname{Im}\lambda_i(A)||^2$$ and consider the score function $$S=-\log\Big(\sum_i e^{-\beta\epsilon_i}\Big)$$ with $\beta$ some parameter. Notice that for $\beta \gg 1$ the sum is dominated by the smallest $\epsilon_i$, so $$S\approx \beta \min_i ||\operatorname{Im}\bar\lambda - \operatorname{Im}\lambda_i(A)||^2,$$ yet without the pesky break points of the $\min$ function.
Once you have this, you can plug it into a numerical optimizer and hope for the best. Note that the value of $\beta$ needs to be chosen by trial and error, and you will probably run into a number of technical problems along the way.
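As a minimal sketch of this idea (the 2×2 matrix is a toy placeholder for your experimental $A$, and $\beta = 50$ is an arbitrary starting guess), using `scipy.optimize.minimize_scalar` as the optimizer:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def A(omega):
    """Toy stand-in for A(Im(lambda)); replace with the experimental fit."""
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.1 * omega), -0.05]])

def score(omega, beta=50.0):
    """Smoothed soft-min score S = -log(sum_i exp(-beta * eps_i))."""
    eps = (omega - np.linalg.eigvals(A(omega)).imag) ** 2
    # subtract the smallest eps before exponentiating, for numerical stability
    e_min = eps.min()
    return beta * e_min - np.log(np.exp(-beta * (eps - e_min)).sum())

res = minimize_scalar(score, bounds=(0.0, 5.0), method="bounded")
print(res.x, score(res.x))
```

The log-sum-exp is evaluated after shifting by $\min_i \epsilon_i$, which is mathematically the identity $S = \beta\epsilon_{\min} - \log\sum_i e^{-\beta(\epsilon_i-\epsilon_{\min})}$ but avoids underflow when $\beta\epsilon_i$ is large. For a multi-parameter search you would swap in `scipy.optimize.minimize` with a derivative-free method such as Nelder-Mead.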