I know that a subspace of a vector space can be mapped isomorphically onto a vector space of the same dimension. For example, any 2-dimensional subspace of $\mathbb{R}^3$ can be mapped isomorphically onto $\mathbb{R}^2$. This means all the algebraic properties of a vector space are satisfied by a subspace, which is typically used as justification for saying that a subspace is a vector space.
But if our universe of consideration is $\mathbb{R}^n$, then the vectors of every subspace are themselves vectors of $\mathbb{R}^n$. That is, we can add elements of a subspace to elements of $\mathbb{R}^n$. In other words, an element of an $m-$dimensional proper subspace can be expressed as a linear combination of a basis spanning $\mathbb{R}^{n},$ with some of the coordinates equal to $0$.
We cannot, however, add elements of the distinct vector spaces $\mathbb{R}^{m}$ and $\mathbb{R}^{n}$ with $m\ne n$ without some agreement as to which components should be identified with one another.
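The distinction can be made concrete. The following sketch (using numpy purely for illustration) shows that adding a 2-tuple to a 5-tuple is rejected outright, and only becomes meaningful after we commit to an embedding, here the zero-padding one:

```python
import numpy as np

u = np.array([1.0, 2.0])                  # an element of R^2
v = np.array([1.0, 2.0, 0.0, 0.0, 0.0])  # an element of R^5

# Without an agreed identification of components, the sum is undefined.
try:
    u + v
except ValueError as e:
    print("cannot add:", e)

# After agreeing on an embedding (pad with zeros), the sum makes sense.
u_embedded = np.concatenate([u, np.zeros(3)])
print(u_embedded + v)
```

The choice of which three slots receive the zeros is exactly the "agreement" mentioned above; a different embedding gives a different sum.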
So proper subspaces seem to have at least one property which distinguishes them from a vector space (which is not defined as a subspace).
As another example: in $\mathbb{R}^n$ with $k<n$, a $k$-dimensional interval whose determinant is nonzero has zero ($n$-dimensional) volume. The defining edges of the interval provide a spanning basis for a $k$-dimensional proper subspace of $\mathbb{R}^n$. The image of the interval under a one-to-one mapping which takes an orthonormal basis of that subspace onto an orthonormal basis of $\mathbb{R}^k$ has a volume equal to the determinant of the interval.
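A quick numerical sketch of this point (my own notation, not Nering's): if the edges of the interval are the columns of an $n\times k$ matrix $A$, its $k$-dimensional volume can be recovered as the Gram determinant $\sqrt{\det(A^{T}A)}$, even though its $n$-volume is zero.

```python
import numpy as np

# A 3-by-2 "interval" in R^3: a rectangle with edge lengths 3 and 2
# lying in the subspace spanned by the first two coordinate axes.
A = np.array([[3.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])

# Its 3-dimensional volume is zero, but its 2-dimensional volume,
# via the Gram determinant sqrt(det(A^T A)), is the familiar area 3 * 2.
k_volume = np.sqrt(np.linalg.det(A.T @ A))
print(k_volume)  # 6.0
```

Mapping the rectangle isometrically into $\mathbb{R}^2$ and taking the ordinary determinant there gives the same number, which is the point of the example.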
The answer is that proper subspaces are true vector spaces; this, however, means that the zero vector in every vector space is the same abstract object. Or at least that is one resolution of the following apparent paradox.
I may not have proven my proposition by appealing to Nering's discussion (below), and the matter really is more complicated than my objection to the common concept of each vector space having its own zero vector. I worked with that assumption for decades, but occasionally encountered reasons to question it.
The assertion that the zero vector of each vector space is specific to that space is really a consequence of a broader assumption: that every element of an $n-$dimensional vector space has $n$ components.
The point of my original question is this: if we say that every vector in a 3-dimensional vector space has 3 components, and that the span of three linearly independent elements of a 5-dimensional vector space forms a 3-dimensional subspace, which is itself a 3-dimensional vector space, then we are saying the same vectors have both three components and five components. The zero vector stands out as the most problematic, since we are effectively saying that a 5-component vector is equal to a 3-component vector. But by the rules of the useful fiction that every vector in an $n-$dimensional vector space has $n$ components, we are not allowed to compare 3-component vectors to 5-component vectors.
The following is from Evar Nering's Linear Algebra and Matrix Theory, Second Edition.
Page 9: example (8)
Pages: 17,18
This presents a problem when we say that the zero vector in an $m-$dimensional proper subspace $\mathcal{V}_{m}$ of the $n-$dimensional vector space $\mathcal{U}_{n}$ is equal to the zero vector of $\mathcal{U}_{n}$. It appears to be primarily a syntactical issue. The rules of matrix algebra do not permit us to compare $n-$tuples to $m-$tuples when $m\ne n$. For example, to use language of computer science:
$$\begin{bmatrix}0\\ 0\\ 0 \end{bmatrix}=\begin{bmatrix}0\\ 0 \end{bmatrix}$$
is a syntax error. Suppose $m=2$ and $n=5$; one representation of $\mathcal{V}_m$ in $\mathcal{U}_n$ is
$$\vec{v}=\begin{bmatrix}x\\ y \end{bmatrix}\mapsto\begin{bmatrix}x\\ y\\ 0\\ 0\\ 0 \end{bmatrix}.$$
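This embedding can be sketched directly (numpy here is just a convenient stand-in for tuples with enforced lengths):

```python
import numpy as np

def embed(v2):
    """The map above: (x, y) |-> (x, y, 0, 0, 0), by zero-padding."""
    return np.concatenate([v2, np.zeros(3)])

zero2 = np.zeros(2)  # the zero of R^2
zero5 = np.zeros(5)  # the zero of R^5

# The embedding sends the zero of R^2 to the zero of R^5...
print(np.array_equal(embed(zero2), zero5))  # True
# ...but the two zero tuples themselves have incomparable shapes.
print(zero2.shape == zero5.shape)           # False
```

This is the syntactic tension in miniature: the *image* of the 2-component zero equals the 5-component zero, while the 2-component zero itself cannot even be compared to it.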
When
$$\vec{v}=\begin{bmatrix}x\\ y \end{bmatrix}\ne\begin{bmatrix}0\\ 0 \end{bmatrix}$$
there will never be a case in which $\vec{v}=\vec{\bar{v}},$ where $\vec{\bar{v}}$ is an element of the orthogonal complement $\vec{\bar{v}}\in\mathcal{V}_{n-m}^{\perp}.$ But when $\vec{v}=\vec{0}\in\mathcal{V}_m$ we have $\vec{v}=\vec{0}\in\mathcal{U}_n$ and $\vec{v}=\vec{0}\in\mathcal{V}_{n-m}^{\perp}.$
So here we have a case of $\vec{v}=\vec{\bar{v}},$ which in terms of tuples appears to be saying
$$\begin{bmatrix}0\\ 0\\ 0 \end{bmatrix}=\begin{bmatrix}0\\ 0 \end{bmatrix}.$$
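One way to see that no such cross-dimensional equality is actually being asserted: once everything is embedded in $\mathcal{U}_n$, both the image of $\mathcal{V}_m$ and its orthogonal complement consist of 5-tuples, and their only common element is the 5-component zero. A hedged sketch, with the membership tests written for the specific embedding above:

```python
import numpy as np

def in_Vm(w):
    """Image of V_m under the embedding: last three coordinates vanish."""
    return np.allclose(w[2:], 0)

def in_perp(w):
    """Orthogonal complement of that image: first two coordinates vanish."""
    return np.allclose(w[:2], 0)

w = np.zeros(5)
# The shared element is a single 5-tuple; no 2-tuple ever enters the comparison.
print(in_Vm(w) and in_perp(w))  # True
```

So the equality in question holds between two names for one 5-component vector, not between a 3-tuple and a 2-tuple.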
I interpret Qi Zhu's comment on "Are a vector subspace and its orthogonal complement disjoint sets, or do they share a zero vector?" to mean that we just declare that all zero vectors are equal, regardless of where they live.