Given a positive objective function $f$ defined on real-valued matrices, I am interested in the following problem
$$\underset{A \in \mathbb{R}^{n \times n}}{\text{minimize}} \quad f(A) \quad \text{subject to} \quad \rho \left(A^{-1}\right) \leq 1-\epsilon, \quad \rho(A) \leq 1+\epsilon \ ,$$
where $\rho(\cdot)$ is the spectral radius. What optimization methods are available for approaching this task?
I know there are several techniques for bounding the eigenvalues of symmetric matrices, but I wonder what happens in the non-symmetric case (my $A$ is not necessarily symmetric).
You could bound the spectral radius with the 2-norm, $$ \rho(A)\leq \|A\|_2 \ , $$ so enforcing $\|A\|_2\leq1+\epsilon$ is a sufficient (and in general conservative) condition for $\rho(A)\leq1+\epsilon$. You can then write $\|A\|_2\leq1+\epsilon$ as the linear matrix inequality (LMI) $$A^T A \preceq (1+\epsilon)^2 I \Leftrightarrow \begin{bmatrix}I & A \\ A^T &(1+\epsilon)^2 I \end{bmatrix}\succeq 0,$$ where I used the Schur complement in the second step. Similarly, for $ \rho(A^{-1})\leq1-\epsilon$ you obtain $$\begin{bmatrix}I & A^{-1} \\ A^{-T} &(1-\epsilon)^2 I \end{bmatrix}\succeq 0 \Leftrightarrow \begin{bmatrix}A A^T & I \\ I &(1-\epsilon)^2 I \end{bmatrix}\succeq 0,$$ where I pre- and post-multiplied with $\operatorname{diag}(A, I)$ and its transpose, respectively, to eliminate the inverse.
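As a quick numerical sanity check of the first Schur-complement step (not part of the original derivation, just a numpy sketch): draw a random $A$ with $\|A\|_2 < 1+\epsilon$ and verify that the block matrix is positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1
n = 4

# Random A rescaled so that ||A||_2 = 1 + eps/2 < 1 + eps.
A = rng.standard_normal((n, n))
A *= (1 + eps / 2) / np.linalg.norm(A, 2)

# Block matrix [[I, A], [A^T, (1+eps)^2 I]] from the Schur-complement step.
I = np.eye(n)
M = np.block([[I, A], [A.T, (1 + eps) ** 2 * I]])

# Since ||A||_2 <= 1 + eps, M should be positive semidefinite.
min_eig = np.linalg.eigvalsh(M).min()
print(min_eig >= -1e-10)  # True
```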
If your objective function is convex, then the problem with only the first constraint is a convex optimization problem. The second constraint, however, is bilinear in $A$, so the overall problem is nonconvex. You can model the problem using e.g. YALMIP (free) and solve it using e.g. PENLAB (free), which handles such bilinear matrix inequalities.
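YALMIP and PENLAB are MATLAB toolboxes; for a language-agnostic illustration of the convex part, here is a numpy sketch for one hypothetical choice of objective, $f(A)=\|A-B\|_F$ for a given matrix $B$ (my choice, not from the question). With only the convex constraint $\|A\|_2\leq 1+\epsilon$, the minimizer is the projection of $B$ onto the spectral-norm ball, obtained in closed form by clipping singular values; the nonconvex second constraint is deliberately omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 5, 0.1

# Hypothetical problem data for the example objective f(A) = ||A - B||_F.
B = 2.0 * rng.standard_normal((n, n))

# Projection of B onto {A : ||A||_2 <= 1 + eps}: clip the singular values.
U, s, Vt = np.linalg.svd(B)
A = U @ np.diag(np.minimum(s, 1 + eps)) @ Vt

print(np.linalg.norm(A, 2) <= 1 + eps + 1e-10)  # feasible for the relaxation
```

For a general convex $f$ you would hand the LMI to an SDP solver instead; the closed form above only works because projection in the Frobenius norm respects the singular-value structure.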
Check *Linear Matrix Inequalities in System and Control Theory* by Boyd, El Ghaoui, Feron, and Balakrishnan.
Note that you might get a tighter bound if you use $ \rho(A)\leq \min_D\|D^{-1} A D\|_2$, where $D$ ranges over invertible matrices, but is usually restricted to diagonal matrices for tractability.
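A minimal sketch of this diagonal scaling (my own illustration, using scipy's general-purpose Nelder-Mead rather than a dedicated method): parameterize $D=\operatorname{diag}(e^{d_1},\dots,e^{d_n})$, which is always invertible, and minimize $\|D^{-1}AD\|_2$ over $d$.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

rho = np.abs(np.linalg.eigvals(A)).max()  # spectral radius rho(A)
plain = np.linalg.norm(A, 2)              # unscaled bound ||A||_2

# ||D^{-1} A D||_2 with D = diag(exp(d)); entrywise this scales
# A[i, j] by exp(d[j] - d[i]).
def scaled_norm(d):
    return np.linalg.norm(np.exp(-d)[:, None] * A * np.exp(d)[None, :], 2)

# Derivative-free local search starting from D = I (d = 0).
res = minimize(scaled_norm, np.zeros(n), method="Nelder-Mead")

# Sandwich: rho(A) <= min_D ||D^{-1} A D||_2 <= ||A||_2 (take D = I).
print(rho <= res.fun + 1e-8)    # True
print(res.fun <= plain + 1e-8)  # True
```

Since the starting point $d=0$ gives $D=I$, the optimized value can only improve on $\|A\|_2$, while it can never drop below $\rho(A)$, which is invariant under similarity transformations.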