Suppose we have a controllable (stabilizable) pair $(A, b)$, where $A \in \mathbb R^{n \times n}$ and $b \in \mathbb R^{n}$; recall that controllability is equivalent to the matrix $(b, Ab, A^2b, \dots, A^{n-1} b)$ having full rank. I am interested in determining whether the following set
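For concreteness, the rank condition above can be checked numerically; this is a minimal sketch (the helper name `is_controllable` is mine, not standard):

```python
import numpy as np

def is_controllable(A, b):
    """(A, b) is controllable iff [b, Ab, ..., A^{n-1} b] has full rank n."""
    n = A.shape[0]
    C = np.column_stack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    return np.linalg.matrix_rank(C) == n

# Example: controllable pair, since (b, Ab) = ([0,1]^T, [1,1]^T) has rank 2.
A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.0, 1.0])
```

Note that exact rank is fragile in floating point, so for nearly uncontrollable pairs a tolerance-based test would be more appropriate.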
$$\{ k \in \mathbb R^{n} : \rho(A-b k^T) < 1 \}$$
where $\rho(\cdot)$ denotes the spectral radius, is convex. In control theory, this set corresponds to the stabilizing state feedback controllers and has many practical applications.
In general, there is no reason to expect the set of stabilizing feedback controllers to be convex. As pointed out in Appendix B of this paper, if $A \in \mathbb R^{3 \times 3}$ and $B \in \mathbb R^{3 \times 3}$, one can easily construct an example showing that the corresponding set of stabilizing feedback controllers is not convex.
But here I am considering a more modest case, where the input is a single vector $b \in \mathbb R^{n}$. The reason I believe the set is probably convex is simulation: for all my randomly generated pairs $(A, b)$ with $n = 2, 3, 4$, the set of stabilizing $k$ appears to be convex (I grid the space $\mathbb R^{n}$ and check the eigenvalues of $A - bk^T$ at each grid point). For higher dimensions the simulation takes much longer, so either the set is indeed convex or I am not sampling the space finely enough.
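The grid experiment described above can be sketched as follows for $n = 2$; the particular pair $(A, b)$ and the grid range are illustrative choices of mine, not taken from the post:

```python
import numpy as np

def is_stabilizing(A, b, k):
    """Check whether rho(A - b k^T) < 1, i.e. discrete-time stability."""
    return np.max(np.abs(np.linalg.eigvals(A - np.outer(b, k)))) < 1.0

# Controllable pair for n = 2: (b, Ab) has full rank.
A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.0, 1.0])

# Grid the k-plane and record which points stabilize the closed loop.
grid = np.linspace(-4.0, 4.0, 201)
stable = [(k1, k2) for k1 in grid for k2 in grid
          if is_stabilizing(A, b, np.array([k1, k2]))]
```

Plotting the points in `stable` (e.g. with a scatter plot) then gives a visual impression of whether the region looks convex; of course, a finite grid can only suggest convexity, never prove it.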
As commented by @loup blanc, the question was initially formulated for the specific case $n=2$. Before it drew too much attention, I ran more simulations to see the shape of the set of stabilizing $k$ and then changed the formulation, since I was not sure it would be appropriate to start a new question. If this caused any inconvenience, I sincerely apologize.
I'm not aware of any convexity results specific to the single-input case $b \in \mathbb{R}^n$ you mention, but for the general multi-input case $B \in \mathbb{R}^{n \times m}$ there exists a very useful convexifying change of variables that is often used for controller design. This means that the nonconvexity of the set of stabilising $K$ is usually not an issue, and state feedback controllers can be designed using convex optimisation.
A discrete-time linear system is stabilised by a state feedback $K$ if and only if $A + BK$ admits a quadratic Lyapunov function, so that for some symmetric matrix $P$ we have $$ x^T P x > 0, \qquad x^T (A+BK)^T P (A+BK) x - x^T P x < 0 $$ for all $x \neq 0$. Collecting the second inequality into a single quadratic form gives $$ x^T P x > 0, \qquad x^T \left( (A+BK)^T P (A+BK) - P \right) x < 0 $$ for all $x \neq 0$, which is equivalent to the matrix inequalities $$ P \succ 0, \qquad (A+BK)^T P (A+BK) - P \prec 0. $$ The second inequality is jointly nonconvex in $P$ and $K$, but we can make it convex by pre- and post-multiplying by $Q = P^{-1}$ (noting that $Q$ is positive definite iff $P$ is), $$ Q \succ 0, \qquad Q (A+BK)^T Q^{-1} (A+BK) Q - Q \prec 0, $$ then applying the Schur Complement Lemma to give $$ \left[ \begin{array}{cc} Q & Q (A+BK)^T \\ (A+BK)Q & Q\end{array} \right] \succ 0, $$ and finally applying the change of variables $Y = K Q$ to yield $$ \left[ \begin{array}{cc} Q & Q A^T+Y^T B^T \\ AQ+BY & Q\end{array} \right] \succ 0, $$ which is a Linear Matrix Inequality (LMI) in the variables $Q$ and $Y$ and therefore defines a convex set (a spectrahedron). This parameterises the whole set of stabilising state feedbacks, in the sense that for any $Q$ and $Y$ satisfying the LMI, $K = Y Q^{-1}$ gives a stabilising state feedback, and if the LMI is infeasible then no such $K$ exists.
I suggest taking a look at the book by Boyd and co-workers on LMIs in control, which is available online at https://web.stanford.edu/~boyd/lmibook/. Chapter 7 in particular contains many results about state feedback synthesis in continuous-time for various system types (LTI, LDI, LFT), and some discrete-time results are available in Chapter 9 (for the more general case of stochastic systems).