In their book, Numerical Optimization, Nocedal & Wright present the following example (Example 12.8) to illustrate the second-order conditions in constrained optimization:
$\min\; -0.1(x_1-4)^2+x_2^2\quad \text{s.t.} \quad x_1^2+x_2^2\geq 1.$
They also provide the gradient of the Lagrangian and its Hessian:
$\nabla_x\mathcal{L}(x,\lambda) = \begin{pmatrix}-0.2(x_1-4)-2\lambda x_1 \\ 2x_2-2\lambda x_2\end{pmatrix}, \quad \nabla_{xx}\mathcal{L}(x,\lambda) = \begin{pmatrix}-0.2-2\lambda & 0 \\ 0 & 2-2\lambda\end{pmatrix}$.
They conclude that the point $(1,0)^T$ with $\lambda=0.3$ is a strict local solution, since it satisfies the second-order sufficient conditions.
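Their conclusion is easy to double-check numerically (a quick script of my own, not from the book): verify stationarity of the Lagrangian at $(1,0)^T$ and the sign of $w^T\nabla_{xx}\mathcal{L}\,w$ on a direction orthogonal to the constraint gradient:

```python
import numpy as np

# Gradient and Hessian of the Lagrangian for c(x) = x1^2 + x2^2 - 1 >= 0
def grad_L(x, lam):
    return np.array([-0.2 * (x[0] - 4) - 2 * lam * x[0],
                     2 * x[1] - 2 * lam * x[1]])

def hess_L(lam):
    return np.array([[-0.2 - 2 * lam, 0.0],
                     [0.0, 2.0 - 2 * lam]])

x, lam = np.array([1.0, 0.0]), 0.3
print(grad_L(x, lam))        # [0. 0.] -> KKT stationarity holds
# directions orthogonal to grad c(x) = (2, 0) are multiples of (0, 1)
w = np.array([0.0, 1.0])
print(w @ hess_L(lam) @ w)   # 2 - 2*0.3 = 1.4 > 0, as the book claims
```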
Working through the exercise myself, I found that the point $x^{\ast}=(\frac{4}{11},\frac{\sqrt{105}}{11})^T$ with $\lambda^{\ast}=1$ also satisfies the KKT conditions. For the second-order condition, I need the gradient of the constraint at $x^{\ast}$, which is $\nabla c_1(x^{\ast}) = \begin{pmatrix}\frac{8}{11} \\ \frac{2\sqrt{105}}{11}\end{pmatrix}$. The space $F_2(\lambda^{\ast})$ is then defined by
$F_2(\lambda^{\ast})=\{w\quad|\quad w^T\nabla c_1(x^{\ast})=0\} = \{(\frac{-\sqrt{105}}{11}w_2,\frac{4}{11}w_2)^T\quad |\quad w_2\in\mathbb{R}\}$.
For any $w\in F_2$ with $w\neq 0$, we have that
$w^T\nabla_{xx}\mathcal{L}(x^{\ast},\lambda^{\ast})w = \begin{pmatrix}\frac{-\sqrt{105}}{11}w_2\\ \frac{4}{11}w_2\end{pmatrix}^T\begin{pmatrix}-2.2 & 0 \\ 0 & 0\end{pmatrix}\begin{pmatrix}\frac{-\sqrt{105}}{11}w_2\\ \frac{4}{11}w_2\end{pmatrix}=-2.2\frac{105}{121}w_2^2<0$. Therefore, I could conclude that $x^{\ast}=(\frac{4}{11},\frac{\sqrt{105}}{11})^T$ is a strict local maximum.
However, this is not the case (I checked by plotting the graph; see also the explanation by Ian below).
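For anyone who wants to reproduce the check without plotting (a small script of my own): scaling $x^{\ast}$ radially outward keeps the point feasible, yet $f$ increases, so $x^{\ast}$ cannot be a local maximum of the constrained problem:

```python
import math

def f(x1, x2):
    return -0.1 * (x1 - 4) ** 2 + x2 ** 2

x1s, x2s = 4 / 11, math.sqrt(105) / 11   # the candidate point x*
f_star = f(x1s, x2s)                     # equals -5/11
for t in (1e-3, 1e-2, 1e-1):
    # (1 + t) * x* has norm 1 + t > 1, hence it is feasible
    print(f((1 + t) * x1s, (1 + t) * x2s) > f_star)   # True every time
```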
My question is: what am I doing wrong? Thank you for your help!
As a way to check your work, try to solve the problem in a rather naive way, as follows:
Define $g(r)=\min_{x,y \,:\, x^2+y^2=r^2} f(x,y)$ on $[1,\infty)$, where $f(x,y)=-0.1(x-4)^2+y^2$, and then minimize $g$ over $r$.
Then I find by Lagrange multipliers that
$$g(r)=\min \left \{ f \left ( \frac{4}{11},\sqrt{r^2-\frac{16}{121}} \right ),f \left ( \frac{4}{11},-\sqrt{r^2-\frac{16}{121}} \right ),f(r,0),f(-r,0) \right \}.$$
Just for completeness, I'll sketch how the algebra goes, though I'm pretty sure you know perfectly well how to do it. The ambiguity comes about because the equation $2y=2\lambda y$ has two solutions: either $\lambda=1$ or $y=0$. In the latter case, the constraint has only two solutions, $(\pm r,0)$. In the former case, the Lagrange multiplier equation for $x$ has the single solution $x=\frac{4}{11}$, and then the constraint gives two solutions for $y$. Thus you wind up with four candidates for local extrema on each circle.
One can see that the first element of this set is an increasing function of $r$ (indeed it equals $r^2-\frac{176}{121}$). Consequently the first point with $r=1$, which is to say the point you found, may be a local maximum on the circle $r=1$, but it cannot possibly be a local maximum of the problem itself, since one can find a larger value of $f$ by moving to $\left ( \frac{4}{11},\sqrt{r^2-\frac{16}{121}} \right )$ for any larger value of $r$. Your condition for the negative definiteness of the Hessian restricted to the directions perpendicular to the gradient of the constraint only tells you that this point is a local maximum on that circle. It doesn't tell you what happens to $f$ if you move to a different, nearby circle within the domain.
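The monotonicity claim is easy to confirm numerically (my own sketch, not part of the original argument), by evaluating the first candidate on circles of growing radius:

```python
import math

def f(x, y):
    return -0.1 * (x - 4) ** 2 + y ** 2

def first_candidate(r):
    # f at (4/11, sqrt(r^2 - 16/121)); algebraically this is r^2 - 176/121
    return f(4 / 11, math.sqrt(r ** 2 - 16 / 121))

vals = [first_candidate(r) for r in (1.0, 1.05, 1.1, 1.2)]
print(all(a < b for a, b in zip(vals, vals[1:])))   # True: increasing in r
```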
Looking at Nocedal/Wright, I conclude that your mistake is that in a maximization problem, their restriction of the constraints in the definition of $F_2$ to require $\lambda_i>0$ gets flipped, so that you restrict attention to constraints with $\lambda_i<0$ instead. Thus in this situation $F_2=F_1$. (Another alternative is to change the sign in the Lagrangian itself for a maximization problem, in which case you still want $\lambda_i>0$ but in the procedure you would have gotten a negative value for $\lambda$.)
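To see the flip in action (my own check, not something spelled out in the book): once $F_2=F_1$, the direction $w=(0,1)^T$ becomes admissible, since $w^T\nabla c_1(x^{\ast})=\frac{2\sqrt{105}}{11}>0$, and it yields

$$w^T\nabla_{xx}\mathcal{L}(x^{\ast},\lambda^{\ast})\,w=\begin{pmatrix}0\\1\end{pmatrix}^T\begin{pmatrix}-2.2 & 0\\ 0 & 0\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}=0,$$

which is not strictly negative. So the second-order sufficient condition for a maximum fails on $F_1$, consistent with $x^{\ast}$ not being a strict local maximum.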