I have a linear system $Ax = b$ where $A$ is an $m \times n$ integral matrix and I know there exists at least one solution. I want an upper bound on the minimum $\ell_2$ norm $\|x\|$ over all solutions. To state the problem differently, I would like to find a solution $x$ for which $\|x\|$ has a nice upper bound.
Preferably, I would like the bound to be in terms of $m,n,b$ and properties of $A$ such as its singular values or determinants of square submatrices. However, I'd be interested in seeing bounds involving other values.
(I don't have enough reputation to comment, so here's one short example offered as an answer: suppose $A$ is a matrix that projects 3d space onto the x-axis, $\text{diag}\,(1,0,0)$, if you will. Say $b=(5,0,0)^T$. Then any $x$ of the form $(5,p,q)^T$ will satisfy your equation, no matter how huge $p$ and $q$ are, so the set of solutions is unbounded. You can probably formalize this with the rank-nullity theorem or something. If $A$'s rank equals the dimension of $x$, then there's precisely one solution, which is boring. Otherwise, if $A$ is 'lossy' in some sense, there are subspaces of the domain that $A$ collapses to zero, and $x$ can have as big a component as it wants in those subspaces.)
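For what it's worth, the unbounded family in that example is easy to check numerically. A quick NumPy sketch, using the matrix and $b$ from the example above:

```python
import numpy as np

A = np.diag([1.0, 0.0, 0.0])   # projects 3-space onto the x-axis
b = np.array([5.0, 0.0, 0.0])

# Any x = (5, p, q) solves Ax = b, no matter how large p and q are:
for p, q in [(0.0, 0.0), (1e6, -1e9)]:
    x = np.array([5.0, p, q])
    assert np.allclose(A @ x, b)
```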
Aight, ignore all that, I think I got it!
So basically, if we want $x$ to be as small as possible, let's make sure it has no component in $A$'s nullspace (any such component would add to $x$'s size without changing $Ax$). So $x$ should be orthogonal to the nullspace of $A$, i.e. in the row space of $A$, i.e. in the column space of $A^T$.
Now, consider any $y$ that satisfies $Ay=b$, and project it down to the column space of $A^T$. The part of $y$ we discard lies in the nullspace of $A$, so the projection still solves the system, and since it's the only solution lying in the row space, it's the one of minimum norm. Fortunately, we have a simple tool to project stuff onto column spaces of matrices - this is where the Moore-Penrose pseudoinverse comes in. The Wikipedia page on projection matrices has some nice intuition on this.
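As a sanity check, here's a small NumPy sketch (the matrix is a made-up full-row-rank example, not from the question) showing that $A^+A$ projects any solution onto the row space and always lands on the same minimum-norm point:

```python
import numpy as np

# Made-up example: a wide integral matrix with a nontrivial nullspace.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
b = np.array([5.0, 7.0])

pinv = np.linalg.pinv(A)
x = pinv @ b                     # minimum-norm solution

n = np.array([6.0, -3.0, 1.0])   # A @ n = 0, so n spans the nullspace
y = x + 4.2 * n                  # some other, larger solution
assert np.allclose(A @ y, b)

# Projecting y onto the row space (column space of A^T) recovers x:
P = pinv @ A                     # orthogonal projector onto rowspace(A)
assert np.allclose(P @ y, x)
```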
Okay, so we get that $x=A^T(AA^T)^{-1}b$ (assuming $A$ has full row rank, so that $AA^T$ is invertible). How to find an upper bound on its size? Since $AA^T$ is symmetric, so is $(AA^T)^{-1}$, and $$\|x\|^2=b^T(AA^T)^{-1}A\,A^T(AA^T)^{-1}b=b^T(AA^T)^{-1}b=\langle b, (AA^T)^{-1}b\rangle$$ So by Cauchy-Schwarz we get $$\|x\|^2\leq\|b\|\,\|(AA^T)^{-1}b\|=\|b\|^2\,\|(AA^T)^{-1}\hat{b}\|$$ where $\hat{b}=b/\|b\|$. Now notice that in the end we're basically looking at the operator norm of $(AA^T)^{-1}$, which equals its biggest singular value. Taking the inverse of a matrix inverts its singular values, and the singular values of $AA^T$ are the squares of those of $A$, so $$\|x\|^2\leq\frac{\|b\|^2}{\sigma_{\min}(AA^T)}=\frac{\|b\|^2}{\sigma_{\min}(A)^2}, \quad\text{i.e.}\quad \|x\|\leq\frac{\|b\|}{\sigma_{\min}(A)}$$ where $\sigma_{\min}$ denotes the smallest singular value.
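The minimum-norm formula and the final bound can be checked numerically; here's a sketch with a made-up full-row-rank matrix (so $AA^T$ is invertible):

```python
import numpy as np

# Made-up example: a 2x3 integral matrix with full row rank.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
b = np.array([5.0, 7.0])

# Minimum-norm solution x = A^T (A A^T)^{-1} b
x = A.T @ np.linalg.solve(A @ A.T, b)
assert np.allclose(A @ x, b)                   # it really solves the system
assert np.allclose(x, np.linalg.pinv(A) @ b)   # and matches the pseudoinverse

# The bound ||x|| <= ||b|| / sigma_min(A) holds:
sigma_min = np.linalg.svd(A, compute_uv=False).min()
assert np.linalg.norm(x) <= np.linalg.norm(b) / sigma_min
```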