I get that $$\int 0 \, dx = C$$
but came across this argument:
$$\int 0\,dx = \int 0 \cdot 1 \,dx = 0 \int 1 \,dx = 0x = 0$$
from https://math.stackexchange.com/a/287079/955696
I didn't understand the linked explanation of why this is false:
> This gives two conflicting answers. The question is far more complicated than you would first think. When you say $\int f\,dx$ and the interval over which you're integrating isn't obvious or defined, what you really mean is "the class of functions that, when differentiated with respect to $x$, produce $f$". The rule stated only applies to definite integrals. That is: $$\int_a^b\alpha f\,dx = \alpha \int_a^bf \,dx$$
I've always been taught that a constant can be 'pulled' out of the integral regardless of whether the integral is definite or indefinite. Integrating $0$, though, seems to lead to a special case.
Can someone help explain what went wrong?
Let's be very clear. The notation
$$\int f(x) dx = F(x) + C$$
represents an entire class of functions, in other words it represents the set:
$$\{G(x) : G'(x) = f(x)\}$$
and, as it so happens, if $F(x)$ is one function that belongs to this set, then we can equivalently write the set as
$$\{F(x) + C : C \in \mathbb{R}\}$$
because we know that all of the functions that represent possible antiderivatives of $f$ will differ by some constant $C$.
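For a concrete instance (my own illustrative example): with $f(x) = 2x$ we may take $F(x) = x^2$, and the two descriptions of the set coincide:

$$\{G(x) : G'(x) = 2x\} = \{x^2 + C : C \in \mathbb{R}\}$$

For $f(x) = 0$ we may take $F(x) = 0$, so $\int 0\,dx$ is the set $\{0 + C : C \in \mathbb{R}\}$ of all constant functions.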
The sneaky thing is that when we get into expressions like $a \int f(x)\,dx$, we actually wind up matching the second definition - i.e. this is actually the set $\{a F(x) + C : C \in \mathbb{R}\}$, rather than $\{a(F(x) + C) : C \in \mathbb{R}\}$. And this only makes a difference when $a = 0$ - in other words, taking $F(x) = x$ as one antiderivative of $1$, the expression $0 \int 1\,dx$ represents the set $\{ 0 \cdot x + C : C \in \mathbb{R}\}$, which turns out to just be the set of all possible constants $C$.
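To see why this convention is harmless whenever $a \neq 0$: in that case the two sets agree, because as $C$ ranges over $\mathbb{R}$, so does $aC$:

$$\{a(F(x) + C) : C \in \mathbb{R}\} = \{a F(x) + aC : C \in \mathbb{R}\} = \{a F(x) + C' : C' \in \mathbb{R}\}$$

Only at $a = 0$ does the first set collapse to $\{0\}$ while the second remains all of $\mathbb{R}$ - which is exactly the discrepancy in the argument above.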
Why do we do this? Mostly because it lets us write the identity $\int a f(x) dx = a \int f(x) dx$ without needing to specify that $a$ cannot be zero.
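As an aside (not from the linked answer): this convention mirrors what computer algebra systems do, since they return a single representative antiderivative with the constant dropped. A minimal sketch in SymPy, assuming it is installed:

```python
# SymPy returns one representative antiderivative and drops the constant C,
# so the identity  integral(a*f) == a*integral(f)  holds even for a = 0.
from sympy import symbols, integrate

x = symbols('x')

lhs = integrate(0 * 1, x)   # integral of 0: the representative is 0
rhs = 0 * integrate(1, x)   # 0 * x, which simplifies to 0
print(lhs, rhs)             # both are 0

# For a nonzero constant the rule is the familiar one:
print(integrate(3 * x, x))  # 3*x**2/2
print(3 * integrate(x, x))  # 3*x**2/2
```

With the constant suppressed, both sides pick out the same representative, which is exactly the set-level convention described above.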