Let $K:\mathbb{R}^2\times\mathbb{R}^2\to\mathbb{R}$ be a positive-type symmetric function given by $K(x,y)=(x\cdot y + c)^2$, where $c\ge 0$ (nonnegativity is needed for the $\sqrt{2c}$ terms below to be real). We can write
$$K(x,y)=(x_1y_1+x_2y_2+c)^2= \begin{bmatrix}x_1^2\\x_2^2\\\sqrt{2}x_1x_2\\\sqrt{2c}x_1\\\sqrt{2c}x_2\\c\end{bmatrix} \cdot \begin{bmatrix}y_1^2\\y_2^2\\\sqrt{2}y_1y_2\\\sqrt{2c}y_1\\\sqrt{2c}y_2\\c\end{bmatrix} $$
Hence, I reason that $K(x,\cdot)$ can be regarded as a function from $\lbrace1,2,3,4,5,6\rbrace$ to $\lbrace x_1^2,x_2^2, \sqrt{2}x_1x_2,\sqrt{2c}x_1,\sqrt{2c}x_2,c\rbrace$, i.e. as a sequence of length $6$, and that we can generate (and then complete) an inner product space from these sequences as $x$ ranges over $\mathbb{R}^2$.
On the other hand, one can also write
$$K(x,y)=(x_1y_1+x_2y_2+c)^2= \begin{bmatrix}x_1^2\\x_2^2\\x_1x_2\\x_1x_2\\\sqrt{2c}x_1\\\sqrt{2c}x_2\\c\end{bmatrix} \cdot \begin{bmatrix}y_1^2\\y_2^2\\y_1y_2\\y_1y_2\\\sqrt{2c}y_1\\\sqrt{2c}y_2\\c\end{bmatrix} $$
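The agreement of the two factorizations is easy to check numerically. Below is a minimal sketch (with the hypothetical parameter choice $c=1$ and arbitrary test points) showing that both feature maps reproduce $(x\cdot y + c)^2$:

```python
import numpy as np

c = 1.0  # hypothetical choice of the kernel parameter (c >= 0)

def K(x, y):
    """The kernel (x . y + c)^2 on R^2."""
    return (x @ y + c) ** 2

def phi6(x):
    """6-dimensional feature map from the first factorization."""
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2) * x1 * x2,
                     np.sqrt(2 * c) * x1, np.sqrt(2 * c) * x2, c])

def phi7(x):
    """7-dimensional feature map from the second factorization."""
    x1, x2 = x
    return np.array([x1**2, x2**2, x1 * x2, x1 * x2,
                     np.sqrt(2 * c) * x1, np.sqrt(2 * c) * x2, c])

x = np.array([0.5, -1.2])
y = np.array([2.0, 0.3])
# All three agree: K(x, y) = <phi6(x), phi6(y)> = <phi7(x), phi7(y)>
print(K(x, y), phi6(x) @ phi6(y), phi7(x) @ phi7(y))
```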
Although this is a slightly more boring mapping (and there might be a clue there), similar reasoning permits one to regard $K(x,\cdot)$ as a sequence of length $7$, which lies outside the aforementioned space: a sequence of length $7$ is not in the span of the length-$6$ sequences previously defined.
By the Moore–Aronszajn theorem, $K$ is the reproducing kernel of exactly one Hilbert space. However, since a sequence cannot be evaluated at $x\in\mathbb{R}^2$, $K$ cannot be a reproducing kernel on either of the Hilbert spaces implied above. Which Hilbert space is it actually a reproducing kernel on?
This sentence doesn't make much sense; I assume you meant that $K(x,\cdot)$ can be regarded as an element of $\mathbb R^6$, right? If that's what you meant, then it is wrong.
In both cases, what you've shown is that there exist two distinct feature maps $\phi : \mathbb R^2 \to \mathbb R^6$ and $\varphi : \mathbb R^2 \to \mathbb R^7$ such that for all $x,y\in\mathbb R^2$: $$K(x,y) = \langle\phi(x),\phi(y)\rangle = \langle\varphi(x),\varphi(y)\rangle $$

Feature maps need not be unique: in fact, you could split the terms in the expression of $K(x,y)$ further and come up with feature maps into arbitrarily large dimensions. But the point is that regardless of the feature map, $K$ is still defined on $\mathbb R^2$, and therefore the RKHS associated with $K$ is the completion of $\text{span}\lbrace K(x,\cdot)\mid x\in \mathbb R^2\rbrace$, which is a Hilbert space of functions defined on $\mathbb R^2$. There is no contradiction with the Moore–Aronszajn theorem.
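To make the last point concrete: a generic element of $\text{span}\lbrace K(x,\cdot)\mid x\in\mathbb R^2\rbrace$ is $f = \sum_i a_i K(x_i,\cdot)$, which is a real-valued function on $\mathbb R^2$, whichever feature map was used to factor $K$. A small sketch (hypothetical coefficients and centers, $c=1$):

```python
import numpy as np

c = 1.0

def K(x, y):
    """The kernel (x . y + c)^2 on R^2."""
    return (np.dot(x, y) + c) ** 2

# f = sum_i a_i * K(x_i, .) is an element of the span -- a function on R^2.
centers = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
coeffs = [0.5, -1.0]

def f(y):
    return sum(a * K(x, y) for a, x in zip(coeffs, centers))

# f can be evaluated at any point of R^2, unlike a "sequence of length 6".
print(f(np.array([1.0, 1.0])))  # -> -7.0
```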
The existence of feature maps is what makes kernel methods so popular in practical applications: they allow us to implicitly map low-dimensional data into a much higher-dimensional space where the problem is easier to solve, at no extra computational expense.
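As a rough illustration of the computational point: for the degree-2 polynomial kernel on $\mathbb R^d$, the explicit feature space contains all monomials of degree at most $2$, i.e. $(d+1)(d+2)/2$ coordinates, yet evaluating the kernel costs only one $O(d)$ dot product. A sketch (hypothetical $d=1000$, $c=1$):

```python
import numpy as np

rng = np.random.default_rng(0)
c = 1.0

d = 1000
x, y = rng.standard_normal(d), rng.standard_normal(d)

# Direct kernel evaluation: a single O(d) dot product.
k_direct = (x @ y + c) ** 2

# Dimension of the implicit feature space for (x . y + c)^2 on R^d:
# all monomials of degree <= 2, i.e. (d+1)(d+2)/2 coordinates.
feature_dim = (d + 1) * (d + 2) // 2
print(feature_dim)  # 501501 -- never explicitly constructed
```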