This is only based on empirical testing, but it seems that if you select a random $a^2$ and a random $b$ in a comparable range, $a^2+b^2$ is significantly more likely (by a margin of 25-30%) to be prime than $a^2+b$. I assume there's a good number-theoretic reason for this, but it doesn't jump out at me, aside from karmic balance for the fact that $a^2-b^2$, which factors as $(a-b)(a+b)$, is almost never prime.
Came by to post this addendum:
More precisely: there are strictly more primes of the form $a^2+b^2$ than of the form $a^2+b$, where $a>b$ and the prime count is tabulated over all valid pairs $a,b\in\mathbb N$ with $a$ up through $17$ or any larger bound.
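The tally above is easy to reproduce by brute force. Here is a minimal sketch (the helper names `is_prime` and `count_prime_forms` are mine, not from the post) that counts primes of each form over all pairs with $a > b \ge 1$ up to a given bound:

```python
def is_prime(n):
    """Trial-division primality test; fine for the small values here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def count_prime_forms(a_max):
    """Count primes of the forms a^2+b^2 and a^2+b over all a > b >= 1, a <= a_max."""
    sum_sq = sum_lin = 0
    for a in range(2, a_max + 1):
        for b in range(1, a):
            if is_prime(a * a + b * b):
                sum_sq += 1
            if is_prime(a * a + b):
                sum_lin += 1
    return sum_sq, sum_lin
```

Running `count_prime_forms(17)` should show the $a^2+b^2$ count strictly ahead, matching the claim; for smaller bounds the two counts run nearly neck and neck.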
Taking random $a$ and $b$, the probability that $a^2+b^2$ is a multiple of $3$ is about $1/9$: squares are $\equiv 0$ or $1\pmod 3$, so $a^2+b^2\equiv 0\pmod 3$ only when $a$ and $b$ are both multiples of $3$. But the probability that $a^2+b$ is divisible by $3$ is about $1/3$: no matter what $a$ is, there is a roughly $1/3$ chance that $b\equiv -a^2\pmod 3$.
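These densities are exact over a full period of residues mod $3$, which a one-liner can confirm (the helper `zero_fraction` is just an illustrative name):

```python
def zero_fraction(form, p):
    """Count how many of the p*p residue pairs (a, b) mod p make form(a, b) ≡ 0 (mod p)."""
    hits = sum(1 for a in range(p) for b in range(p) if form(a, b) % p == 0)
    return hits, p * p

# Squares mod 3 are only 0 or 1, so a^2 + b^2 ≡ 0 forces a ≡ b ≡ 0.
print(zero_fraction(lambda a, b: a * a + b * b, 3))  # -> (1, 9)
# For a^2 + b, each residue of a pairs with exactly one bad residue of b.
print(zero_fraction(lambda a, b: a * a + b, 3))      # -> (3, 9)
```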
You get a similar effect for every prime $p\equiv 3\pmod 4$: for these primes $-1$ is not a quadratic residue, so $a^2\equiv -b^2\pmod p$ has no solution except $a\equiv b\equiv 0$, i.e. $a^2+b^2\equiv 0\pmod p$ only when both $a$ and $b$ are multiples of $p$.
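This too can be verified exhaustively for small primes; the function name `nontrivial_zero_pairs` is mine. For $p\equiv 3\pmod 4$ the list is empty, while for $p=5\equiv 1\pmod 4$ (where $-1$ *is* a quadratic residue, e.g. $2^2\equiv -1\pmod 5$) nontrivial solutions appear:

```python
def nontrivial_zero_pairs(p):
    """All pairs (a, b) mod p, not both zero, with a^2 + b^2 ≡ 0 (mod p)."""
    return [(a, b) for a in range(p) for b in range(p)
            if (a * a + b * b) % p == 0 and (a, b) != (0, 0)]

# Primes ≡ 3 (mod 4): only the trivial solution, so all lists are empty.
print([nontrivial_zero_pairs(p) for p in (3, 7, 11)])  # -> [[], [], []]
# Contrast with p = 5: e.g. 1^2 + 2^2 = 5 ≡ 0 (mod 5).
print(nontrivial_zero_pairs(5))
```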