Let $\omega$ be an $n$-form on an oriented $n$-manifold $M$. To integrate $\omega$, we choose an atlas $(O_\alpha, (x^1_\alpha,\dots, x^n_\alpha))_\alpha$ for $M$ and a partition of unity $\phi_\alpha$ subordinate to the atlas. Then we write $\omega|_{O_\alpha} = f_\alpha \,\mathrm{d}x^1_\alpha \wedge \dots \wedge \mathrm{d}x^n_\alpha$ and define $\int_M \omega = \sum_\alpha \int_{O_\alpha} \phi_\alpha f_\alpha \,dx^1_\alpha\cdots dx^n_\alpha$, where the "$d$"s now denote the Lebesgue measure (in the chart) rather than the exterior derivative of differential forms. Then we show that the result doesn't depend on the choice of atlas or partition of unity.
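As a numerical sanity check on this chart-by-chart recipe, here is a sketch that integrates the angular 1-form $\mathrm{d}\theta$ over the circle using two charts and a partition of unity. All of the specific choices (the charts, the partition $\sin^2(\theta/2)$, $\cos^2(\theta/2)$) are mine for illustration, not part of the question:

```python
import math

# Integrate the angular 1-form dθ over S¹ via two charts and a partition of unity.
# Chart 1: S¹ minus the point θ=0, coordinate θ ∈ (0, 2π); here ω = dθ, so f₁ ≡ 1.
# Chart 2: S¹ minus the point θ=π, coordinate θ ∈ (−π, π); again f₂ ≡ 1.
# Partition of unity subordinate to these charts:
#   φ₁(θ) = sin²(θ/2) (vanishes at θ = 0), φ₂(θ) = cos²(θ/2) (vanishes at θ = π),
# with φ₁ + φ₂ = 1 everywhere.

def riemann(f, a, b, n=200_000):
    """Midpoint Riemann sum of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

part1 = riemann(lambda t: math.sin(t / 2) ** 2, 0.0, 2 * math.pi)
part2 = riemann(lambda t: math.cos(t / 2) ** 2, -math.pi, math.pi)
total = part1 + part2
print(total, 2 * math.pi)  # chart contributions sum to ∫_{S¹} dθ = 2π
```

Each chart contributes $\pi$, and the sum recovers $2\pi$ regardless of which partition of unity is used, illustrating the claimed independence.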
Is there an alternate definition that avoids the coordinates? It seems to me that one should be able to define integration of a differential form in a coordinate-independent way and then derive the above formula as a consequence.
It's not actually the partition of unity that bugs me the most. What really puzzles me is the way we use coordinates to "magically" transform our differential form into a measure. This transformation doesn't depend on a choice of coordinates, so why should we have to use coordinates to describe it?
I've got a partial answer that in particular addresses Zhen Lin's objections. It requires relating integration in different dimensions in two different ways.
We rely on two principles:

1. **Pullback:** if $\phi$ is an orientation-preserving diffeomorphism onto its image, then $\int_{\Omega} \phi^*\omega = \int_{\phi(\Omega)} \omega$.
2. **External products:** $\int_{A \times B} \omega \wedge \eta = \left(\int_A \omega\right)\left(\int_B \eta\right)$, where $\omega$ and $\eta$ are pulled back to $A \times B$ along the two projections.
Suppose we've gotten as far as agreeing that for 1-dimensional integrals, $\int_{\Omega} f \,\mathrm{d}x = c\int_{\Omega} f \,dx$ for some scalar $c$. Then consider the area of the unit square, $\int_{I \times I} \mathrm{d}x \wedge \mathrm{d}y$. By the external product rule it equals $\left(\int_I \mathrm{d}x\right)\left(\int_I \mathrm{d}y\right) = c \cdot c = c^2$; on the other hand, if the same constant $c$ relates form integrals to Lebesgue integrals in dimension 2 as well, the area is $c \cdot 1 = c$.
So we have $c^2 = c$, and so $c = 0$ or $c = 1$. Of course $c = 0$ can be eliminated as a degenerate case. From this we can conclude that $\int f \,\mathrm{d}x^1 \wedge \cdots \wedge \mathrm{d}x^n = \int f \,dx^1\cdots dx^n$ by approximating $f$ by polynomials, say: a monomial can be integrated via the external product rule, the integral of a polynomial then follows by linearity, and some continuity principle handles the limit. Using the pullback and linearity principles we can then derive the value of the integral in general, say by the partition of unity argument.
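The monomial step can be checked numerically: the external product rule predicts $\int_{I \times I} x^a y^b \,\mathrm{d}x \wedge \mathrm{d}y = \left(\int_I x^a \,dx\right)\left(\int_I y^b \,dy\right) = \frac{1}{(a+1)(b+1)}$. A sketch with Riemann sums standing in for the integrals (the exponents are arbitrary choices):

```python
# Check the external product rule on a monomial over the unit square:
# ∫_{I×I} x^a y^b dx∧dy  should equal  (∫_I x^a dx)(∫_I y^b dy) = 1/((a+1)(b+1)).

def riemann1d(f, n=2000):
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

def riemann2d(f, n=500):
    h = 1.0 / n
    return sum(f((i + 0.5) * h, (j + 0.5) * h)
               for i in range(n) for j in range(n)) * h * h

a, b = 2, 3
lhs = riemann2d(lambda x, y: x**a * y**b)                    # direct 2D integral
rhs = riemann1d(lambda x: x**a) * riemann1d(lambda y: y**b)  # external product
exact = 1.0 / ((a + 1) * (b + 1))
print(lhs, rhs, exact)  # all three agree up to discretization error
```

By linearity the same agreement holds for any polynomial, which is the heart of the approximation argument above.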
**Edit:** Here's a way to derive the condition $\int f\,\mathrm{d}x = c\int f \,dx$ from a weaker assumption. Assume that $\int_{\Omega} f\,\mathrm{d}x = \int_{\Omega} fg \,dx$ for some fixed function $g$, where $\Omega \subseteq \mathbb{R}^n$ is the closure of a bounded open subdomain of $\mathbb{R}^n$, although we only need subdomains of the unit interval $I=[0,1]$. (Plausibly this can be concluded from some general continuity and naturality conditions on the integration operator.) Consider the integral $\int_{[0,t]} \mathrm{d}x$:
By the assumption, $\int_{[0,t]} \mathrm{d}x = \int_{[0,t]} g(x) dx = G(t)$ where $G$ is the antiderivative of $g$ (with $G(0) = 0$), using the ordinary fundamental theorem of calculus.
Let $\phi: I \to [0,t]$ be the multiply-by-$t$ map, $\phi(s) = ts$. By pullback, $\int_{[0,t]} \mathrm{d}x = \int_I \phi^*(\mathrm{d}x) = \int_I t\,\mathrm{d}x = t\int_I \mathrm{d}x = tG(1)$, using linearity and the assumption again in the last step.
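The pullback step used here is, in one dimension, just the change-of-variables identity $\int_{[0,t]} h(x)\,dx = \int_0^1 h(ts)\,t\,ds$, which is easy to verify numerically (the integrand $e^x$ and the value $t = 0.7$ are arbitrary choices for illustration):

```python
import math

# Check the 1D pullback identity for φ(s) = t·s mapping I = [0,1] onto [0,t]:
#   ∫_{[0,t]} h(x) dx  =  ∫_I φ*(h dx)  =  ∫_0^1 h(t·s)·t ds.

def riemann(f, a, b, n=100_000):
    """Midpoint Riemann sum of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

t = 0.7
h_fn = lambda x: math.exp(x)  # any continuous integrand works here

direct   = riemann(h_fn, 0.0, t)                       # ∫_{[0,t]} h dx
pullback = riemann(lambda s: h_fn(t * s) * t, 0.0, 1)  # ∫_I φ*(h dx)
print(direct, pullback)  # these agree up to discretization error
```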
So $G(t) = tc$ where $c = G(1)$. By differentiating, $g(t) = c$, and we have $\int f \mathrm{d}x = c \int f dx$ as desired.