For background: I'm not well schooled in higher-dimensional geometry, but I'm currently learning statistical data science, in which many methods rely on the properties of higher-dimensional spaces.
In Support Vector Machine (SVM) classification, the goal is to find a hyperplane that cleanly divides the data points (n-dimensional vectors) into labelled groups, and to use that dividing surface to predict future data. SVM is especially well suited to binary classification.
However, it may be that no such hyperplane exists in n dimensions (especially common when n is low, like 2 or 3). One interesting technique used to address this problem is to assign each point a value in an additional dimension: e.g., in 2-D space, give each data point a z-value, transforming the 2-D space into a new 3-D space in which a hyperplane that cleanly divides the data in two is more likely to exist. I was wondering how effective this process is, and after thinking for a while, I concluded that projecting the data into K + 2 dimensions, where K is the size of the largest cluster of data points, should guarantee that a hyperplane can cleanly divide the data in two.
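To make the lifting idea concrete, here's a small sketch in plain NumPy. It uses a deterministic lift $z = x^2 + y^2$ rather than a random z-value, and the two concentric circles, the radii, and the separating plane $z = 5$ are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def circle(radius, n):
    """n random points on a circle of the given radius, centred at the origin."""
    angles = rng.uniform(0, 2 * np.pi, n)
    return np.column_stack([radius * np.cos(angles), radius * np.sin(angles)])

inner = circle(1.0, 50)   # class A: not linearly separable from class B in 2-D
outer = circle(3.0, 50)   # class B

# Lift each 2-D point to 3-D by appending z = x^2 + y^2.
lift = lambda pts: np.column_stack([pts, (pts ** 2).sum(axis=1)])
inner3, outer3 = lift(inner), lift(outer)

# In the lifted space, the horizontal plane z = 5 cleanly separates the classes
# (inner points all have z = 1, outer points all have z = 9).
print(inner3[:, 2].max() < 5 < outer3[:, 2].min())  # True
```

No hyperplane separates the two circles in the plane, but after the lift a single horizontal plane does; this deterministic version of the trick is what kernel methods exploit.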
My reasoning is this:
Given n points in (n + 1)-dimensional space that are in general position (no point lies in the flat spanned by the others), those points span a unique (n − 1)-dimensional affine flat, which has codimension 2 in the ambient space.
Given a set of n affine subspaces of dimension less than K embedded in (K + 2)-dimensional space, the probability that those subspaces intersect is 0.
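The dimension count in the first claim is easy to sanity-check numerically (a throwaway NumPy sketch; the choice n = 4 is arbitrary): n generic points span an (n − 1)-dimensional affine flat, which you can see by taking the rank of the differences from one of the points.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
pts = rng.standard_normal((n, n + 1))  # n generic points in (n+1)-dim space

# Dimension of the affine span = rank of the differences from one base point.
dim_span = np.linalg.matrix_rank(pts[1:] - pts[0])
print(dim_span, (n + 1) - dim_span)  # 3 2 -> an (n-1)-flat of codimension 2
```

So with probability 1, random points land in general position and their span is a codimension-2 flat, which is the object the second claim is about.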
Both of these claims make sense to me intuitively, but I have no idea how to even begin to check the reasoning. The only support I can give for them is intuition and trial and error: e.g., it seems that the probability of two random points coinciding in 2-D space is 0, and the probability of two random lines intersecting in 3-D space is 0. I'm annoyed because this reasoning (and my intuition) doesn't generalize to higher dimensions.
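For what it's worth, the two-lines-in-3-space intuition can be probed numerically (a rough sketch; the skew-line distance formula is standard, and the sample size of 1000 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def line_distance(p, v, q, w):
    """Shortest distance between the lines p + t*v and q + s*w in R^3."""
    n = np.cross(v, w)                       # normal to both directions
    return abs(np.dot(q - p, n)) / np.linalg.norm(n)

# Draw 1000 random pairs of lines (point + direction, Gaussian entries).
# With probability 1 each pair is skew, so every distance is strictly positive.
dists = [line_distance(*rng.standard_normal((4, 3))) for _ in range(1000)]
print(min(dists))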
So yeah, can any of you prove (or disprove) this or perhaps tell me about what branch of geometry it would fall into? Or just explain how to formally state this.
It's nice to think of codimension for this sort of thing. The codimension of a subspace of $\mathbb R^n$ is defined to be $n$ minus the dimension of the subspace. So a line in 3-space has co-dimension 2, and a plane in 4-space has dimension 2 and codimension 2.
The great theorem is that if you have subspaces $U$ and $V$, the codimension of $U \cap V$ is the sum of the codimensions of $U$ and $V$ (in the "generic" case, i.e., avoiding things like the case in 3-space where $U$ is a plane and $V$ is a line that just happens to be completely contained in $U$ -- that's the non-generic case). [Special case: if the codimension is larger than the dimension of the ambient space, the intersection can still be the 0-dimensional subspace consisting of just the origin.] The big idea of non-generic cases is that they have probability zero under a uniform distribution on the (compact) set of $k$-planes in $n$ space. (The more general theorem is that in all cases, the codimension of $U \cap V$ is at most the sum of their codimensions.)
In your problem, you have $n$ codimension-two subspaces in $\mathbb R^K$, and you're assuming the "generic" case. So their intersection has codimension at least $2n$. If $n$ is at least $\lceil K/2 \rceil$, then the codimension of the intersection is at least $K$, so the intersection has dimension at most $0$, i.e., it consists of the single point $0$.
If $n$ is less than that number, then the codimension is no more than $2n$, so there's some nontrivial intersection in general.