It seems to my uneducated mind that if I have $\frac{n}{k}$ subproblems, each of size $\Omega(k \log k)$, then my overall problem must be of size at least $\Omega(n \log k)$, by the reasoning that if I have $\frac{n}{k}$ things, each of which is at least $k \log k$, I should be able to multiply the count by the per-item bound to get a lower bound on their sum.
However, my TA seems to disagree with me, stating that addition is not defined for expressions like $\Omega(f(x)) + \Omega(g(x))$, and that therefore I can't sum or multiply the subproblems in this fashion.
I was wondering if someone could explain the error in my reasoning.
Your original reasoning is indeed problematic, since you're adding together a variable number of things. Suppose you have $n$ subproblems, each of which takes constant time; can you conclude that the whole thing is $O(n)$, or $\Omega(n)$? No, since the constants may vary: the $i$th subproblem may take time $1/i$, or $i^3$; each is still a constant (independent of $n$), but their sum could be bounded by a constant, or grow with $n$; you can't tell. It depends on the distribution of the constants, and that's something the notations $O$ and $\Omega$ conceal (deliberately, of course).
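To make the pitfall concrete, here is a small sketch (in Python; the function names are mine, not from the question) summing $n$ "constant-time" costs for the two families above. Each individual term is a constant with respect to $n$, yet the totals behave completely differently:

```python
def total_cost(cost, n):
    """Sum the per-subproblem costs c_1, ..., c_n."""
    return sum(cost(i) for i in range(1, n + 1))

# Family 1: c_i = 1/i. Every term is O(1), but the sum is the
# harmonic number H_n ~ ln(n), which grows without bound.
harmonic_100 = total_cost(lambda i: 1 / i, 100)
harmonic_10000 = total_cost(lambda i: 1 / i, 10000)

# Family 2: c_i = i**3. Every term is still a constant for fixed i,
# but the sum is (n(n+1)/2)**2, i.e. it grows like n**4.
cubes_100 = total_cost(lambda i: i ** 3, 100)
```

So "each subproblem takes $O(1)$ time" alone pins down neither an upper nor a lower bound on the total once the number of terms depends on $n$.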
However, as Moron pointed out, adding together just two functions (or any constant number of them) is fine. Thus I'm not sure I understand the TA's claim.
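For nonnegative functions this is easy to check directly from the definition of $\Omega$ (a sketch, using constants and a threshold I introduce here): if $f_1(x) \ge c_1 g_1(x)$ and $f_2(x) \ge c_2 g_2(x)$ for all $x \ge x_0$, then with $c = \min(c_1, c_2)$,
$$ f_1(x) + f_2(x) \;\ge\; c_1 g_1(x) + c_2 g_2(x) \;\ge\; c \,\bigl(g_1(x) + g_2(x)\bigr), $$
so $f_1 + f_2 = \Omega(g_1 + g_2)$, and by induction the same holds for any fixed number of summands. The argument breaks down only when the number of summands grows with the input, which is exactly the situation in the original question.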