I've got an integer linear program of the form $$ \begin{aligned} \text{Minimize}&& c &= x_1 + \cdots + x_m \\ \text{subject to}&& A\mathbf{x} &\geq \mathbf{b} \\ \text{where} && \mathbf{x} &\in \mathbb{Z}^m,\ \mathbf{x}\geq\mathbf{0},\ \\ && \mathbf{b} &\in \mathbb{R}^n,\ \mathbf{b}\geq\mathbf{0},\ \\ && A&\in \mathbb{R}^{n\times m}\ . \end{aligned} $$ So, most importantly, both my coefficient vector $\mathbf{x}$ and $\mathbf{b}$ are always nonnegative in every component, and there is no "$\leq$" limit on anything. $A$ is an arbitrary (but finite) real-valued matrix. This is solvable in a straightforward manner in any standard LP solver. But what I've been trying to figure out is whether there is a simpler way to just determine whether the given problem has any solution at all – not worrying about which solution is the optimal one.
In other words, the question is whether there exists an $\mathbf{x}\in\mathbb{Z}^m$, $\mathbf{x}\geq\mathbf{0}$, so that $A\mathbf{x}\geq\mathbf{b}$ (with $\mathbf{b}\geq\mathbf{0}$).
Mike's contributions below have been quite valuable, and it still seems as though there is no simpler way to determine this – but we're still not quite sure, so I'm leaving this question open a little longer.
I do think, however, that in my special case one thing holds: Given the conditions above, if a solution exists for the relaxed problem (i.e., $\mathbf{x}\in\mathbb{R}^m$, $\mathbf{x}\geq\mathbf{0}$), then an integral solution exists as well. I have placed this in a separate question. But assuming it holds, maybe the relaxed version does have a simpler way to check for solvability?
Do you happen to have a direction in which to point me, or an idea of how to go about this? Or, maybe, is Simplex the only way to find out?
Thanks in advance for any hints!
Unfortunately, in general there's no quick method. In some special cases (e.g., the origin is feasible) determining whether a linear program has a solution is quite easy, but absent something like that the problem is as computationally difficult as finding an optimal solution. So, yes, in general simplex is really the only way to find out.
For example, if the origin isn't feasible, the simplex method needs some procedure for finding an initial feasible solution. The most common approaches (e.g., the Big-M method, Phase I of a two-phase method, dual simplex) all entail solving linear programs themselves. So, in general, finding a feasible solution is equivalent to solving some linear program to optimality.
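To make this concrete, you can check feasibility directly by handing an LP solver the system with a zero objective, so the solver effectively only runs its Phase I. A minimal sketch using SciPy's `linprog` (the particular `A` and `b` below are made-up illustrative data, not from the question):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (not from the question): check whether
# A x >= b, x >= 0 has any solution at all.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])

# linprog expects constraints as A_ub @ x <= b_ub, so negate A x >= b.
# Its default variable bounds (0, None) already enforce x >= 0.
# A zero objective means we only care about feasibility.
res = linprog(c=np.zeros(A.shape[1]), A_ub=-A, b_ub=-b, method="highs")
print("feasible" if res.status == 0 else "infeasible")  # prints "feasible"
```

Status code 0 means the solver found an optimal (hence feasible) point; for this data, e.g. $\mathbf{x}=(2,1)$ satisfies both constraints, so the system is feasible.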
With more information on the structure of $A$, though, we might be able to say something. For instance, the dual of your problem (as an LP, without the integer restriction) is
\begin{align} \text{Maximize}&& {\bf b}^T {\bf y} \\ \text{subject to}&& A^T {\bf y} &\leq \mathbf{1} \\ && {\bf y} &\geq 0 . \end{align}
The origin is always feasible for this dual. So if the structure of the entries of $A$ lets you prove that the dual objective is bounded above, then the dual attains an optimal solution, and strong duality implies that your original problem (again, as an LP) has a feasible solution.
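If it helps, here is a small sketch of that dual check, again with made-up data. `linprog` minimizes, so the dual's maximization objective is negated:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (not from the question).
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])

# Dual: maximize b^T y subject to A^T y <= 1, y >= 0.
# linprog minimizes, so pass -b as the objective.
res = linprog(c=-b, A_ub=A.T, b_ub=np.ones(A.shape[1]), method="highs")
if res.status == 0:
    print("dual bounded -> primal LP feasible")
elif res.status == 3:
    print("dual unbounded -> primal LP infeasible")
```

For this data the dual feasible region is bounded, so the solver reports an optimum (status 0), which by strong duality certifies that the primal LP is feasible.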
Everything I've said so far is for the LP case. The integer program case is even worse, as integer programming is so much harder than linear programming. In fact, the general problem of finding whether an integer program has a feasible solution is an ongoing research area. There's a nice summary of the history of this problem in the introduction of this document "Towards Faster Integer Programming" (which appears to be someone's thesis proposal).
The OP asks in the comments for an example of a system $A {\bf x} \geq {\bf b}$, with ${\bf x} \geq {\bf 0}$, ${\bf b} \geq {\bf 0}$, and real entries in $A$, whose solution set is not unbounded. Such a system is
\begin{align} x - y & \geq 1 \\ y - x &\geq 1 \\ x, y &\geq 0. \end{align} Not only is the solution set not unbounded, it's empty.
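For what it's worth, a solver confirms this: feeding the system to SciPy's `linprog` with a zero objective reports infeasibility (status code 2 in the HiGHS backend):

```python
import numpy as np
from scipy.optimize import linprog

# The system x - y >= 1, y - x >= 1, x, y >= 0 from the answer,
# rewritten as -A x <= -b for linprog's A_ub form.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
b = np.array([1.0, 1.0])

res = linprog(c=np.zeros(2), A_ub=-A, b_ub=-b, method="highs")
print(res.status)  # prints 2: the problem is infeasible
```

Adding the two constraints gives $0 \geq 2$, so no real (let alone integer) solution exists, matching the solver's verdict.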