I have always been puzzled by conjectures that can be stated quite simply, yet for which finding a proper proof is a monumental task even for the most brilliant mathematicians. Consider the following examples:
Collatz. The conjecture is that no matter what positive integer you start with, repeatedly applying the function below will always eventually reach 1.
$f(x) = \begin{cases}x/2&\text{$x$ is even,}\\ 3x+1&\text{$x$ is odd.}\end{cases}$
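One quick way to get a feel for the conjecture is to iterate $f$ and count the steps until 1 is reached; a minimal sketch (the function name `collatz_steps` is my own):

```python
def collatz_steps(n):
    """Iterate f(x) = x/2 (x even) or 3x+1 (x odd) until reaching 1.

    Returns the number of iterations. Note: if the conjecture were false
    for some n, this loop would never terminate.
    """
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(6))   # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1: 8 steps
```

The point of the non-termination comment is exactly the difficulty of the problem: no amount of finite checking settles it.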
Goldbach. Every even integer greater than 2 can be expressed as the sum of two primes.
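Goldbach's conjecture is easy to check for any particular even number; here is a small brute-force sketch (the helper names are my own, and trial division is used only for simplicity):

```python
def is_prime(k):
    """Trial-division primality test; fine for small k."""
    if k < 2:
        return False
    for d in range(2, int(k ** 0.5) + 1):
        if k % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return some pair of primes (p, n - p) for even n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

print(goldbach_pair(28))  # (5, 23)
```

Again, such a search can verify any finite range of even numbers (and the conjecture has been verified very far computationally), but it can never prove the statement for all of them.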
Riemann Hypothesis.
The non-trivial zeros of the Riemann $\zeta$ function all have real part 1/2.
What I want to know, not necessarily regarding those particular problems, is whether there is some kind of mathematical or logical procedure to determine if a proof can be found for such conjectures. Is there a way for a mathematician to avoid dedicating his life to an unsolvable problem?
I am also no longer really clear on what unsolvable and undecidable actually mean, but I am not going to ask about that in this post.
The answer to the question is subtle. There are cases in which an algorithmic route to a proof exists. For example, determining small Ramsey numbers of graphs. It is known that $43\leq R(5,5)\leq 48$. There are only finitely many graphs on at most 48 vertices, so in principle one could generate all of them, check the relevant property of each, and determine the answer. The practical problem, though, is that this "algorithm" is far too expensive to carry out. So while an algorithmic approach exists, it is infeasible to apply.
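To see just how infeasible, one can count the labeled graphs that such a brute-force search would have to examine at the upper bound (a back-of-the-envelope sketch; the variable names are my own):

```python
from math import comb

n = 48                          # the known upper bound: R(5,5) <= 48
edges = comb(n, 2)              # possible edges on n labeled vertices
num_graphs = 2 ** edges         # each edge is present or absent
print(f"2^{edges} labeled graphs, i.e. about 10^{len(str(num_graphs)) - 1}")
```

Roughly $2^{1128} \approx 10^{339}$ graphs, which is astronomically beyond any conceivable computation (even after accounting for symmetry, the count of non-isomorphic graphs remains hopeless at this size).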
So perhaps a better question is: is there an algorithmic approach that can be "reasonably" applied to solve a problem? The answer is: if there is such an approach, then mathematicians are more interested in the approach itself than in the result (because the result is a "trivial consequence" of it). And if such an approach exists, its "derivable" consequences are almost certainly already known.
The really interesting problems are those for which nobody can find (or has yet found) a usable algorithmic approach. The way for a mathematician to avoid spending his life on an "unsolvable" problem is to not focus on a single problem. In Tim Gowers's blog [1] he writes: "and if it fails (as the majority of my individual research projects do)". Essentially, I take this to mean that he succeeds at solving so many problems because he considers many, many problems.
[1] https://gowers.wordpress.com/2009/01/27/is-massively-collaborative-mathematics-possible/ (see the comment to Terry).