The following is a line of reasoning you'll often see in physics textbooks. Newton's second law can be formulated in terms of momentum, which yields the following fundamental statement:
$$\frac{d \vec{p}}{dt} = \vec{F},$$
which states, in simple terms, that the time rate of change of the linear momentum of a particle is equal to the net force acting on the particle. This statement is often rewritten as
$$d\vec{p}=\vec{F}dt$$
and integrated to yield a new quantity, referred to as impulse. Now, here's where things start getting confusing. Many authors will integrate with respect to $d\vec{p}$ using vector bounds:
$$\int_{\vec{p_1}}^{\vec{p_2}} d\vec{p}=\int_{t_1}^{t_2} \vec{F} dt $$
which yields
$$\vec{p_2}-\vec{p_1} = \int_{t_1}^{t_2} \vec{F} dt$$
or, abbreviated to:
$$\vec{p_2}-\vec{p_1} = \vec{J}$$
where $\vec{J}$ is then defined to be the impulse. The right-hand side is not problematic here; it's the left-hand side that's bugging me. That brings me to my questions:
1. What does it mean to integrate with respect to a vector (not in the usual line-integral sense, where a dot product resolves everything into a scalar)? Is there a way to visualize what's going on here?
2. What does it mean to have vectors as the limits of integration?
3. Does all of this imply that you could have functions of vectors as the integrand? I.e., with regular integration you integrate with respect to $x$, and your integrand is some function of $x$ (say, $\frac{2x}{x^3 + 3x + 2}$).
4. Do all the regular rules of calculus apply to vectors, in this particular sense?
Now, mind you, I'm aware that one could rewrite the integral via the change of variables $\int_{t_1}^{t_2} \frac{d\vec{p}}{dt} dt$, using the vector differential $d\vec{p} = \frac{d\vec{p}}{dt} dt$; but I'd like to emphasize that that's not my question, as I've seen many authors perform the integration directly, without this change of variables.
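For concreteness, here is the kind of componentwise computation I have in mind (a throwaway numerical sketch; the force profile and time interval are made up):

```python
import numpy as np

# Made-up force profile F(t) = (sin t, 3, 2t) acting from t1 = 0 to t2 = 2
# on a unit-mass particle starting from rest, so that p1 = 0.
t = np.linspace(0.0, 2.0, 100_001)
F = np.stack([np.sin(t), np.full_like(t, 3.0), 2.0 * t])   # shape (3, N)

# Impulse J = ∫ F dt, computed component by component (trapezoid rule).
dt = t[1] - t[0]
J = ((F[:, :-1] + F[:, 1:]) / 2 * dt).sum(axis=1)

# Integrating dp = F dt analytically gives
# p2 - p1 = (1 - cos 2, 3*2, 2^2) = (1 - cos 2, 6, 4).
p2_minus_p1 = np.array([1.0 - np.cos(2.0), 6.0, 4.0])
print(J, p2_minus_p1)   # the two agree to numerical precision
```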
1 & 3. The idea of a Riemann sum certainly allows us to define an integral of the form
$$ \int_{\gamma} f(\vec{v}) \, \mathrm{d}\vec{v} = \lim_{n\to\infty}\sum_{i=1}^{n}f(\gamma(t_i))[\gamma(t_i) - \gamma(t_{i-1})] \tag{*} $$
where $a = t_0 < t_1 < \cdots < t_n = b$ is a partition of $[a, b]$ whose mesh tends to zero, $\gamma : [a, b] \to U$ is a path in a region $U$ of $\mathbb{R}^3$, and $f : U \to \mathbb{R}$ is a scalar function defined on $U$. Note that an integral is essentially "infinitesimal quantities added up", so anything that fits into this scheme is capable of giving rise to an integral.
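To make $(*)$ concrete, here is a minimal numerical sketch (the field $f$ and the path $\gamma$ are made up): taking $f(x, y, z) = x$ and $\gamma(t) = (t, t^2, 0)$ on $[0, 1]$, the Riemann sum converges to $\left(\int_0^1 t\,dt,\ \int_0^1 t \cdot 2t\,dt,\ 0\right) = (1/2, 2/3, 0)$.

```python
import numpy as np

# Riemann sum (*) for a made-up example:
#   f(x, y, z) = x,   γ(t) = (t, t², 0),   t in [0, 1].
n = 50_000
t = np.linspace(0.0, 1.0, n + 1)
gamma_pts = np.stack([t, t**2, np.zeros_like(t)], axis=1)  # γ(t_i), shape (n+1, 3)

f_vals = gamma_pts[1:, 0]                  # f(γ(t_i)) = x-coordinate, right endpoints
increments = np.diff(gamma_pts, axis=0)    # vector increments γ(t_i) - γ(t_{i-1})

riemann_sum = (f_vals[:, None] * increments).sum(axis=0)
print(riemann_sum)   # → approximately (1/2, 2/3, 0)
```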
4. Note that, for a (piecewise) smooth path, this integral can be computed componentwise:
\begin{align*} \int_{\gamma} f(\vec{v}) \, \mathrm{d}\vec{v} &= \int_{a}^{b} f(\gamma(t)) \gamma'(t) \, \mathrm{d}t \\ &= \left( \int_{\gamma} f(x, y, z) \, \mathrm{d}x, \int_{\gamma} f(x, y, z) \, \mathrm{d}y, \int_{\gamma} f(x, y, z) \, \mathrm{d}z \right). \end{align*}
where each component is an ordinary (scalar) line integral. Consequently, the vector integral inherits the properties of line integrals, with the obvious modifications.
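As a sketch of this componentwise formula (again with made-up data): for $f(x, y, z) = y$ and $\gamma(t) = (t, t^3, 2t)$ on $[0, 1]$, so that $\gamma'(t) = (1, 3t^2, 2)$, the three component integrals are $\int_0^1 t^3\,dt = 1/4$, $\int_0^1 3t^5\,dt = 1/2$, and $\int_0^1 2t^3\,dt = 1/2$.

```python
import numpy as np

# Componentwise evaluation of ∫_γ f(v) dv via ∫ f(γ(t)) γ'(t) dt:
#   f(x, y, z) = y,   γ(t) = (t, t³, 2t),   γ'(t) = (1, 3t², 2),   t in [0, 1].
t = np.linspace(0.0, 1.0, 100_001)
f_vals = t**3                                             # f(γ(t)) = t³
gamma_prime = np.stack([np.ones_like(t), 3 * t**2, np.full_like(t, 2.0)])

integrand = f_vals * gamma_prime                          # f(γ(t)) γ'(t), shape (3, N)
dt = t[1] - t[0]
# trapezoid rule on each component: these are ∫ f dx, ∫ f dy, ∫ f dz
components = ((integrand[:, :-1] + integrand[:, 1:]) / 2 * dt).sum(axis=1)
print(components)   # → approximately (1/4, 1/2, 1/2)
```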
2. If the value of the integral $\int_{\gamma} f(\vec{v}) \, \mathrm{d}\vec{v}$ depends only on the endpoints of $\gamma$ and not on the particular choice of path joining them (i.e., if it has the "path-independence" property), then the notation
$$ \int_{\vec{v}_1}^{\vec{v}_2} f(\vec{v}) \, \mathrm{d}\vec{v} = \left[ \int_{\gamma} f(\vec{v}) \, \mathrm{d}\vec{v} \text{ for any path $\gamma$ from $\vec{v}_1$ to $\vec{v}_2$} \right] $$
is well-defined without ambiguity.
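In the impulse computation the integrand is the constant $f \equiv 1$, so the Riemann sum $(*)$ telescopes to $\gamma(b) - \gamma(a)$ and path-independence is automatic; that is exactly why $\int_{\vec{p_1}}^{\vec{p_2}} d\vec{p} = \vec{p_2} - \vec{p_1}$ makes sense. A quick numerical sketch (with made-up paths) contrasts this with a path-dependent integrand:

```python
import numpy as np

def vector_integral(f, path, n=50_000):
    """Right-endpoint Riemann sum for ∫_γ f(v) dγ, as in (*)."""
    t = np.linspace(0.0, 1.0, n + 1)
    pts = path(t)                                  # sampled path, shape (n+1, 3)
    f_vals = np.array([f(p) for p in pts[1:]])     # f at right endpoints
    return (f_vals[:, None] * np.diff(pts, axis=0)).sum(axis=0)

# Two different paths from (0, 0, 0) to (1, 1, 0):
line     = lambda t: np.stack([t, t,    np.zeros_like(t)], axis=1)
parabola = lambda t: np.stack([t, t**2, np.zeros_like(t)], axis=1)

# f ≡ 1 (the impulse case): both paths give exactly v2 - v1 = (1, 1, 0),
# because the sum telescopes.
print(vector_integral(lambda v: 1.0, line),
      vector_integral(lambda v: 1.0, parabola))

# f = x: the y-components differ, (1/2, 1/2, 0) vs (1/2, 2/3, 0),
# so this integrand is path-dependent and vector limits would be ambiguous.
print(vector_integral(lambda v: v[0], line),
      vector_integral(lambda v: v[0], parabola))
```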