Edit: When $0 < q < 1$, $\lVert {\bf x} \rVert_q$ fails the triangle inequality, so it's not a norm, and calling it one was a mistake. However, I think my problem with the geometric intuition still remains (as does the question of why the process works when $q = 1$ but not when $q < 1$).
Let ${\bf x} \in \mathbb R^n$, ${\bf a} \in \mathbb R^n$, and $q > 0$. For simplicity assume that each $x_j > 0$ and $a_j > 0$. I'm interested in the following constrained optimization problem, for $k>0$ considered fixed, \begin{align*} \text{minimize }\quad &-{\bf a}\cdot{\bf x}\,+ \,\frac{1}{2}\,{\bf x} \cdot {\bf x}\\ \text{subject to }\quad &\lVert{\bf x}\rVert^q_q = k^q \end{align*}
where $\lVert{\bf x}\rVert^q_q = \sum^n_{j = 1} |x_j|^q = \sum^n_{j = 1} x_j^q$ (since $x_j > 0$) is the $q$th power of the $\ell_q$ norm of ${\bf x}$. Looking at the geometry of the problem, our objective defines concentric ellipsoids in $\mathbb R^n$, and our goal is to find the point on the surface of the $\ell_q$ ball of "radius" $k$ that is closest to the center of those ellipsoids.
We might try to use the Lagrangian multiplier method of solving our problem by solving \begin{align*} {\bf 0} &= \frac{\partial}{\partial {\bf x}} \left(-{\bf a}\cdot{\bf x}\,+ \,\frac{1}{2}\,{\bf x} \cdot {\bf x} + \lambda \lVert {\bf x} \rVert^q_q \right) \\ &= \frac{\partial}{\partial {\bf x}} \left(-{\bf a}\cdot{\bf x}\,+ \,\frac{1}{2}\,{\bf x} \cdot {\bf x} + \lambda \sum^n_{j = 1} x_j^q\right) \\ &= -{\bf a} + {\bf x} + \lambda q \left[ x^{q-1}_j \right]^n_{j = 1} \end{align*}
Fortunately for us, each of our $n$ equations is independent of the others, and so it's sufficient to solve (for $x_j$) \begin{equation} 0 = -a_j + x_j + \lambda q x^{q-1}_j \end{equation}
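As a sanity check, the scalar stationarity equation can be solved numerically. This is a small sketch (the function name and the bisection approach are mine, not part of the problem); it relies on the left-hand side being increasing in $x_j$ when $q \ge 1$ and $\lambda > 0$:

```python
import math

def stationary_x(a, lam, q, lo=1e-12, hi=None, tol=1e-12):
    """Solve 0 = -a + x + lam*q*x**(q-1) for x > 0 by bisection.

    For q >= 1 and lam > 0 the left-hand side is increasing in x
    (its derivative is 1 + lam*q*(q-1)*x**(q-2) >= 1), so there is
    at most one root, and it lies in (0, a)."""
    g = lambda x: -a + x + lam * q * x ** (q - 1)
    if hi is None:
        hi = a  # g(a) = lam*q*a**(q-1) > 0, so the root is below a
    if g(lo) > 0 or g(hi) < 0:
        return None  # no sign change: no stationary point in (lo, hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# q = 2: the equation is linear, 0 = -a + x + 2*lam*x, so x = a/(1 + 2*lam)
x = stationary_x(a=3.0, lam=0.5, q=2.0)
print(x)  # close to 3/(1 + 2*0.5) = 1.5
```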
For $q \geq 1$ this makes sense. We can visualize $0 = -a_j + x_j + q x^{q-1}_j$ (taking $\lambda = 1$ for visualization purposes) by plotting the equation in the $(a, x)$ plane for a few values of $q \geq 1$:
[Figures: implicit plots of $0 = -a + x + q\,x^{q-1}$ in the $(a, x)$ plane for several values of $q \geq 1$.]
This provides us with the (unsurprising) result that as the $\ell_q$ ball shrinks (as $k \to 0$), the minimizing $x$ shrinks continuously towards $0$. However, for $q \in (0,1)$ we find the following figures (we take $q = \frac{1}{2}$, but analogous figures appear for any $q \in (0, 1)$).
This case is a bit more interesting. Firstly, we see that we have two solutions for the minimal $x$ for each $a$. We can see why by looking at our original Lagrangian $-{\bf a}\cdot{\bf x}\,+ \,\frac{1}{2}\,{\bf x} \cdot {\bf x} + \lambda \lVert {\bf x} \rVert^q_q$. Considering the one-dimensional case, when $q \geq 1$ (and $\lambda \geq 0$) we find \begin{equation*} \frac{\partial^2}{\partial x^2} \left(-ax + \frac{1}{2}x^2 + \lambda x^q\right) = 1 + \lambda q (q - 1) x^{q-2} \geq 0, \end{equation*}
which implies that our function is convex. However, when $q \in (0, 1)$, we are no longer guaranteed convexity. In fact, once again using visualization as an aid, plotting $f(x) = -ax + \frac{1}{2}x^2 + \lambda x^q$ we see that we have a single minimum when $q \geq 1$ but a (global) minimum and a (local) maximum when $q \in (0, 1)$.
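A quick grid scan makes the non-convexity concrete. This is an illustrative sketch; the parameter values $a = 2$, $\lambda = \frac12$, $q = \frac12$ are arbitrary choices of mine:

```python
# Grid-scan f(x) = -a*x + x**2/2 + lam*x**q for q = 1/2 to exhibit
# both a local maximum (near 0) and a global minimum (near a).
a, lam, q = 2.0, 0.5, 0.5
f = lambda x: -a * x + 0.5 * x * x + lam * x ** q
xs = [i * 1e-4 for i in range(1, 40000)]  # x in (0, 4)
vals = [f(x) for x in xs]
# interior sign changes of the finite differences mark critical points
crit = [xs[i] for i in range(1, len(xs) - 1)
        if (vals[i] - vals[i - 1]) * (vals[i + 1] - vals[i]) < 0]
print(crit)  # two critical points: a local max near 0.016, a min near 1.8
```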
So, this tells us why there are two solutions. This is not such a huge deal, since we can discard the maximum in favor of the minimum. However, another issue, and the reason I'm seeking some insight, is that there are also values of $a$ for which there are no solutions at all. We can see that the absence of solutions arises because the function $f(x) = -ax + \frac{1}{2}x^2 + \lambda x^q$ has no point where its derivative is zero (when $\lambda$ is large or $a$ is small).
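We can make the "no stationary point" regime explicit: minimizing the derivative $-a + x + \lambda q x^{q-1}$ over $x > 0$ in closed form shows exactly when it stays strictly positive. A small sketch (the helper name is mine), using the elementary-calculus fact that for $q \in (0,1)$ this minimum sits at $x^* = (\lambda q (1-q))^{1/(2-q)}$:

```python
# For q in (0, 1), g(x) = -a + x + lam*q*x**(q-1) has a unique interior
# minimum at x* = (lam*q*(1-q))**(1/(2-q)); if g(x*) > 0 there is no
# stationary point of f at all, matching the missing-solution regime.
def min_of_derivative(a, lam, q=0.5):
    xstar = (lam * q * (1 - q)) ** (1 / (2 - q))
    return -a + xstar + lam * q * xstar ** (q - 1)

print(min_of_derivative(a=2.0, lam=4.0))  # positive: no stationary point
print(min_of_derivative(a=4.0, lam=4.0))  # negative: two stationary points
```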
My question is: how does this make any sense? When framing the problem we had a geometric picture of finding a point on the $\ell_q$ ball of radius $k$ nearest to the center of our ellipsoids. How can there be no such point? Certainly, when the ball is large enough, ${\bf x}$ will just be the center of the ellipsoids, and when the ball gets small enough, ${\bf x}$ will shrink to zero, but it shouldn't simply fail to exist.
My intuition is that this is something regarding convexity, but I haven't been able to pin down exactly why this would generate no solutions (instead of simply multiple solutions).
This is just to explain to what extent Lagrange multipliers can be used here. I don't have a decent solution to this minimization problem yet.
I'll rescale a bit to make life easier. So we'll have $k=1$.
First of all, there are two cases to consider: $a\in B_{\ell_q}$ and $a\notin B_{\ell_q}$. In both cases you may try to employ $G_\lambda(x)=\frac 12\|x-a\|_2^2+\lambda\|x\|_q^q-\frac 12\|a\|_2^2$ and observe that for the functional $F(x)$ to be minimized we have $$ F(x)\ge-\lambda+\min_{x\in \mathbb R^n}G_\lambda(x) $$ on the boundary of the unit ball in $\ell^q$. Moreover, if the minimizer of $G_\lambda$ has $\ell_q$-norm $1$, the story is over: you have a point and you have a witness of its minimality.

Now the minimizer of $G_\lambda$ is at $a$ if $\lambda=0$, tends to $0$ as $\lambda\to+\infty$, and tends to $\infty$ as $\lambda\to-\infty$. So it is smart to choose the sign of $\lambda$ that guarantees the crossing of the unit sphere in $\ell_q$, which is $+$ when $\|a\|_q>1$ and $-$ otherwise.

Now you can indeed switch to one-dimensional problems. Each of them is $$ \frac 12(x-a)^2+\lambda x^q\to \min, \qquad x\ge 0. $$ The case $\lambda<0$ (inner approach) presents no problem because then we are interested in $x>a$ and the derivative $x-a+\frac{\lambda q}{x^{1-q}}$ is increasing on $[a,+\infty)$, so the crossing is unique, depends on $\lambda$ continuously, etc.
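The inner approach ($\lambda<0$) can be sketched numerically: bisect on $\lambda$ until the minimizer of $G_\lambda$ lands on the unit sphere of $\ell_q$. This is an illustrative sketch with made-up data $a = (0.2, 0.2)$ and $q = \frac12$, not part of the answer itself:

```python
q = 0.5

def coord_min(a_j, lam):
    """Minimizer over x >= a_j of (x - a_j)**2/2 + lam*x**q for lam <= 0.
    The derivative x - a_j + lam*q*x**(q-1) is increasing on [a_j, inf),
    so bisection on an expanding bracket finds the unique root."""
    g = lambda x: x - a_j + lam * q * x ** (q - 1)
    lo, hi = a_j, a_j + 1.0
    while g(hi) < 0:       # expand until the derivative turns positive
        hi *= 2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def qnorm_q(x):
    return sum(t ** q for t in x)

a = [0.2, 0.2]                 # ||a||_q^q = 2*sqrt(0.2) < 1: inside the ball
lam_lo, lam_hi = -10.0, 0.0    # the q-norm of the minimizer grows as lam -> -inf
for _ in range(100):           # bisect on lam until the minimizer hits the sphere
    lam = 0.5 * (lam_lo + lam_hi)
    x = [coord_min(aj, lam) for aj in a]
    if qnorm_q(x) > 1:
        lam_lo = lam
    else:
        lam_hi = lam
print(x, qnorm_q(x))  # minimizer sitting on the unit sphere of ell_q
```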
So, let's look at $\lambda>0$, when $x\in [0,a]$.
The equation for a critical point is $$a-x=\frac{q\lambda}{x^{1-q}}$$ and we can have two crossings, one touch, or no solutions at all. It is not hard to see that when there are two crossings, the left one is a local maximum and the right one is a local minimum, so when you start at $0$ and slowly bring $\lambda$ up, everything looks fine for a while.
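For concreteness, here is a sketch (the parameters $a = \lambda = 4$, $q = \frac12$ are my own choice) that brackets the two crossings on either side of the derivative's interior minimum and checks the sign of $f''$ at each:

```python
# Two crossings of a - x = q*lam/x**(1-q) for q = 1/2, a = lam = 4,
# bracketed on either side of x = 1, where the derivative g = f' attains
# its minimum g(1) = -1.
a, lam, q = 4.0, 4.0, 0.5
g = lambda x: -a + x + lam * q * x ** (q - 1)          # f'(x)
fpp = lambda x: 1 + lam * q * (q - 1) * x ** (q - 2)   # f''(x)

def bisect(lo, hi, tol=1e-12):
    # assumes g(lo) and g(hi) have opposite signs
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

left = bisect(1e-6, 1.0)    # g > 0 near 0, g(1) = -1 < 0
right = bisect(1.0, 10.0)   # g(1) < 0, g(10) > 0
print(left, fpp(left))      # f'' < 0: the left crossing is a local maximum
print(right, fpp(right))    # f'' > 0: the right crossing is a local minimum
```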
The critical moment, however, will happen sooner or later, and it will not be the loss of the roots but the value at the right root becoming equal to the value at $0$. This, above all, means losing the continuity of the minimizer (one coordinate will suddenly just annihilate), which may mean that you suddenly jump from the outside of the unit ball in $\ell^q$ to the inside and never reach anything on the boundary. So the technique seems to fail and, indeed, in this case there will be no witness of minimality of the form $-\lambda+\min_{x\in \mathbb R^n} G_\lambda(x)$. Whether this really happens or not is a matter of luck (perhaps the boundary crossing will occur in the continuous regime), but you want your minimizer in all cases, so we need some way out.
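The sudden annihilation can be observed directly: scanning the one-dimensional objective for increasing $\lambda$ shows the global minimizer jump from a point near $a$ to $0$. A sketch with $a = 1$, $q = \frac12$ (my own illustrative values):

```python
# Global minimizer over x >= 0 of h(x) = (x - a)**2/2 + lam*x**q, q = 1/2,
# found by a dense grid scan on [0, 2a]; the minimizer jumps to 0 once the
# value at the right critical point exceeds h(0) = a**2/2.
a, q = 1.0, 0.5
h = lambda x, lam: 0.5 * (x - a) ** 2 + lam * x ** q

def minimizer(lam, n=20000):
    best_x, best_v = 0.0, h(0.0, lam)
    for i in range(1, n + 1):
        x = 2.0 * a * i / n
        v = h(x, lam)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

for lam in [0.1, 0.3, 0.5, 0.7]:
    print(lam, minimizer(lam))  # shrinks gradually, then snaps to 0
```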