Take the convention of spherical coordinates for mathematics defined here. The convention is that $\phi$ ranges over $[0,\pi]$ and $\theta$ over $[0,2\pi]$. Suppose I wanted to integrate over the upper unit hemisphere. The conventional way of doing so would be to use the bounds $\theta \in [0, 2\pi], \phi \in [0, \pi/2]$, which for the integrand $\sin\phi$ yields
$$\int_{0}^{\frac{\pi}{2}}\int_{0}^{2\pi}\sin\phi\ d\theta d\phi = 2\pi$$
Suppose I am a rebel and want to integrate over the same region in a different way. My understanding of the range convention on $\phi$, namely $[0,\pi]$, is that it avoids accidentally counting the same points on the surface twice. I'll attempt to avoid that mistake by restricting $\theta$ to $[-\pi/2, \pi/2]$ while letting $\phi$ range over $[-\pi/2, \pi/2]$.
$$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\sin\phi\ d\theta d\phi = 0$$
Given that the integral over these different bounds (which I think should still describe the upper unit hemisphere) yields a different answer, I'm led to believe there must be a better reason for the constraint on $\phi$.
Why do these integrals yield different answers?
The area element on the unit sphere is $|\sin\phi|\, d\theta\, d\phi$, not $\sin\phi\, d\theta\, d\phi$. If you restrict $\phi$ to $[0,\pi]$, then $\sin\phi \ge 0$ there, so the absolute value signs can safely be dropped; that is the real reason for the convention. Your bounds do in fact cover the upper hemisphere exactly once, since the pair $(\phi, \theta)$ with $\phi < 0$ names the same point as $(-\phi, \theta + \pi)$. The problem is that $\sin\phi < 0$ on $[-\pi/2, 0)$, so the contributions from the two halves of your $\phi$ range cancel. Restoring the absolute value gives the expected answer:
$$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}|\sin\phi|\ d\theta\, d\phi = \pi \cdot 2\int_{0}^{\frac{\pi}{2}}\sin\phi\ d\phi = 2\pi.$$
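You can also check this numerically. Here is a minimal sketch in Python using a midpoint Riemann sum (the helper name `sphere_area` is my own; since the integrand does not depend on $\theta$, the $\theta$ integral just contributes a factor of the interval length):

```python
from math import sin, pi

def sphere_area(f, phi_lo, phi_hi, th_lo, th_hi, n=2000):
    """Midpoint-rule approximation of the integral of f(phi) d(theta) d(phi).

    The integrand is assumed independent of theta, so the theta
    integral factors out as the interval length (th_hi - th_lo).
    """
    dphi = (phi_hi - phi_lo) / n
    dth = th_hi - th_lo
    total = 0.0
    for i in range(n):
        phi = phi_lo + (i + 0.5) * dphi  # midpoint of the i-th subinterval
        total += f(phi) * dth * dphi
    return total

# Conventional bounds: sin(phi) >= 0 on [0, pi/2], no absolute value needed.
conventional = sphere_area(sin, 0.0, pi / 2, 0.0, 2 * pi)

# "Rebel" bounds: sin(phi) alone would integrate to 0;
# |sin(phi)| restores the correct area element.
rebel = sphere_area(lambda p: abs(sin(p)), -pi / 2, pi / 2, -pi / 2, pi / 2)

print(conventional, rebel)  # both approximately 2*pi ~ 6.2832
```

Swapping `abs(sin(p))` back to `sin(p)` in the second call reproduces the cancellation to $0$ seen in the question.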