I have a question about polynomial vector spaces: why does the space of polynomials of degree at most $n$ have dimension $n+1$? I mean, why is $\dim P_2(\mathbb{R})=3$, rather than the space being written $P_2(\mathbb{R}^3)$? I think my question is mostly about the terminology. For instance, if you have a basis $[p_1(x),p_2(x),p_3(x)]$, shouldn't this basis belong to $\mathbb{R}^3$? Why do you write $P_2(\mathbb{R})$? More specifically, is there any intuition connecting the degree of the polynomial and the $\mathbb{R}$ in the notation?
Update: Can I say that the degree of the polynomials has nothing to do with the dimension of the vector space? For example, can a basis $[p_1(x),p_2(x)]$ be in $\mathbb{R}^2$?
Forgive the bluntness, but your understanding of this topic is broken. You need to rebuild your understanding from the ground up. Unfortunately, this is not really the best forum for this, so I would advise finding a tutor (IRL) to help you understand this stuff. I'm going to do the best I can to try to set you right, but just know that there's only so much I can do over the internet.
We say that a vector space $V$ has dimension $n$ if it has a basis (a finite, linearly independent, spanning set) with $n$ elements. If a vector space has a basis of $n$ vectors, then every basis of the space has $n$ elements. If the vector space is over the scalar field $F$, then this means that the space is isomorphic to $F^n$ (the set of $n$-tuples of scalars), meaning that there is an invertible linear map $T : F^n \to V$.
The existence of such a linear map is very useful. It essentially means that we can look at $V$ in a different way. We get a perfect analogy for $V$ in the vector space $F^n$. It doesn't matter how abstract the objects in $V$ are, or how computationally difficult the vector addition $+$ is; we can "understand" the vector space $V$ by looking at the corresponding operations in $F^n$.
The polynomials are a great example. Here we have $V = P_2(\Bbb{R})$, which is a set of functions from $\Bbb{R}$ to $\Bbb{R}$ that take the form $f(x) = a_0 + a_1 x + a_2 x^2$ for some scalars $a_0, a_1, a_2$. This is not a triple in $\Bbb{R}^3$; this is a function. You can substitute any real number you like into it, to get another real number. That said, it's still a vector space over $\Bbb{R}$; it satisfies all of the vector axioms (for the usual function addition and scalar multiplication).
It has a basis $\{x \mapsto 1, x \mapsto x, x \mapsto x^2\}$. To establish this, we need to show that the set is spanning and linearly independent. It's spanning basically by definition of $P_2(\Bbb{R})$; every element of $V$ can be written as a function $x \mapsto a_0 + a_1 x + a_2 x^2$, which is a linear combination: $$a_0(x \mapsto 1) + a_1(x \mapsto x) + a_2(x \mapsto x^2).$$ Linear independence requires proof too. If we assume that we have a linear combination equal to the $0$ vector (the function $x \mapsto 0$), then $$a_0(x \mapsto 1) + a_1(x \mapsto x) + a_2(x \mapsto x^2) = x \mapsto 0.$$ Equivalently, for any $x \in \Bbb{R}$, $$a_0 + a_1 x + a_2 x^2 = 0$$ Trying $x = -1, 0, 1$, we get a system of linear equations, $$\begin{matrix} a_0 & - & a_1 & + & a_2 & = & 0 \\ a_0 & & & & & = & 0 \\ a_0 & + & a_1 & + & a_2 & = & 0, \end{matrix}$$ which has only one solution: $a_0 = a_1 = a_2 = 0$. Thus, the set is linearly independent. We have a basis of three vectors, making $V$ isomorphic to $\Bbb{R}^3$.
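If you want to see the linear-independence argument numerically, the system above can be checked with a few lines of code. This is just a sketch (using NumPy, which is not part of the original argument): the coefficient matrix of the system obtained from $x = -1, 0, 1$ has nonzero determinant, so $a_0 = a_1 = a_2 = 0$ is its only solution.

```python
import numpy as np

# Evaluate the candidate basis {1, x, x^2} at the sample points x = -1, 0, 1.
# Each row corresponds to one sample point; each column to one basis function.
A = np.array([[1.0, -1.0, 1.0],   # values of 1, x, x^2 at x = -1
              [1.0,  0.0, 0.0],   # at x =  0
              [1.0,  1.0, 1.0]])  # at x =  1

# A @ (a0, a1, a2) = 0 has only the trivial solution iff A is invertible.
print(np.linalg.det(A))  # nonzero, so the set is linearly independent
```

Of course, this numeric check is only a sanity check of the hand computation; the proof in the text is the real argument.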
What would be an example of an isomorphism between the two spaces? All you need to do is take a basis $\{v_0, v_1, v_2\}$ for $V$ and define $$T : F^3 \to V : (a_0, a_1, a_2) \mapsto a_0 v_0 + a_1 v_1 + a_2 v_2.$$ In this case, using this particular basis, $$T : \Bbb{R}^3 \to P_2(\Bbb{R}) : (a_0, a_1, a_2) \mapsto (x \mapsto a_0 + a_1x + a_2x^2).$$
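The map $T$ above is concrete enough to write down directly. Here is a small Python sketch (the function name `T` mirrors the text; the representation of a polynomial as a Python function is my own choice, not part of the answer):

```python
# The isomorphism T from the text: a coefficient triple (a0, a1, a2)
# determines the polynomial function x -> a0 + a1*x + a2*x^2.
def T(coeffs):
    a0, a1, a2 = coeffs
    return lambda x: a0 + a1 * x + a2 * x ** 2

p = T((1.0, 0.0, 1.0))   # the polynomial x -> 1 + x^2
print(p(2.0))            # 5.0
```

Note that `T` returns a *function*, not a triple: the output lives in $P_2(\Bbb{R})$, while the input lives in $\Bbb{R}^3$.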
This means that we can understand polynomials via vectors of the coefficients of the various monomials. In this way, we build an analogy for $P_2(\Bbb{R})$ in terms of the more concrete $\Bbb{R}^3$. We don't have to worry about the various intricacies of a polynomial like $x \mapsto (x - 2)^2 = 4 - 4x + x^2$ (its graph, its roots, its stationary points, its value at $\pi$, etc.) when we can just look at the analogous point $(4, -4, 1) \in \Bbb{R}^3$. That ordered triple contains all the information we need about the function in order to do vector arithmetic on it.
Think about how a computer might do vector arithmetic with polynomials. It has no capacity to store the infinitely many values that a function might take over the infinitely many values in $\Bbb{R}$. However, it can easily remember ordered triples of coefficients $(a_0, a_1, a_2)$, and add them to other ordered triples of coefficients. The computer doesn't have to store an entire function; it just remembers the ordered triple of coefficients, and adds/scalar multiplies the vectors as though they were points in $\Bbb{R}^3$.
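To make that concrete, here is a quick sketch in Python (my own illustration, not from the original answer): we compute $2p + q$ for $p = (x-2)^2$ and $q = x$ purely by componentwise arithmetic on coefficient triples, then confirm it agrees with the function side at a sample point.

```python
# Polynomials as coefficient triples (a0, a1, a2), ordered constant-first.
p = (4.0, -4.0, 1.0)   # (x - 2)^2 = 4 - 4x + x^2
q = (0.0,  1.0, 0.0)   # x

# 2*p + q, computed as vector arithmetic in R^3:
r = tuple(2 * pi + qi for pi, qi in zip(p, q))
print(r)  # (8.0, -7.0, 2.0)

# Check against the function side at the sample point x = 3:
x = 3.0
evaluate = lambda c: c[0] + c[1] * x + c[2] * x ** 2
print(2 * (x - 2) ** 2 + x == evaluate(r))  # True
```

The triples never "know" they represent functions; the isomorphism is what guarantees the two computations agree.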
However, as handy as the analogy is, remember: the spaces are not the same! One space helps us understand the other, but the two sets have not even a single element in common!
Hopefully that helps in some way.