Say there is a two-dimensional vector space $\mathbf{V}$. When I represent a vector $x\in\mathbf{V}$ by the tuple $(1,2)$, it is not clear which vector I really mean, because the tuple actually stands for
$$x=1\cdot u+2\cdot v,$$
which uses an ordered basis $\{u,v\}$ of $\mathbf{V}$ that I haven't specified. Now consider $\mathbf{R}^3$ as an example: when I say
$$(1,2,3)\in\mathbf{R}^3,$$
which basis did I just implicitly use? It seems I don't need the idea of a basis at all to make clear which element of $\mathbf{R}^3$ I'm describing. Even if you say I'm using the standard ordered basis $\{e_1,e_2,e_3\}$, that is only because the tuple under this basis also happens to be $(1,2,3)$! Which part did I miss? It seems like something is wrong in my reasoning.
Another example: when I want to describe a vector in a two-dimensional plane, I "have to" decompose it into an x-component and a y-component. Once it's decomposed, I have to specify the basis I'm using, and yes, implicitly I mean the standard ordered basis. But any element of $\mathbf{R}^3$ is by its very nature already decomposed into three parts, so there is no need for the idea of a basis when I just want to say "that $(1,2,3)$"?
More or less by definition, tuples are elements of $\mathbb{R}^n$. What's being glossed over is the following.
Selecting an ordered basis $\mathcal{B} = (u,v)$ for a two-dimensional vector space $V$ amounts to the same thing as choosing a linear isomorphism $T : \mathbb{R}^2 \to V$, defined by
$$ T(a,b) = a \cdot u + b \cdot v $$
When you say "the element of $V$ represented by $(1,2)$", what you really mean is "the vector $T(1,2)$".
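As a numerical sketch of this correspondence (a made-up example in Python with NumPy; the particular $u$, $v$, and the choice of realizing $V$ as a plane inside $\mathbb{R}^3$ are illustrative assumptions, not anything from the question):

```python
import numpy as np

# A made-up ordered basis B = (u, v) for a 2-dimensional space V,
# here realized concretely as a plane inside R^3 (illustrative choice).
u = np.array([1.0, 0.0, 1.0])
v = np.array([0.0, 1.0, 1.0])

def T(a, b):
    """The isomorphism T : R^2 -> V determined by the ordered basis (u, v)."""
    return a * u + b * v

# "The element of V represented by (1, 2)" really means T(1, 2):
x = T(1, 2)
print(x)  # [1. 2. 3.]
```

Changing the basis vectors `u`, `v` changes `T`, and hence changes which element of $V$ the same tuple $(1,2)$ names; that is exactly the ambiguity the question is pointing at.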
There is also a related notation: for $x \in V$, people write $[x]_\mathcal{B}$ for the value $T^{-1}(x)$, which is a tuple. This notation is read as "the coordinates of $x$ relative to $\mathcal{B}$". So the statement you make in the OP would be written
$$ [x]_\mathcal{B} = (1,2) $$
There's a similar notation for linear transformations: given a choice of bases for the domain and codomain, $[A]_{\mathcal{B}'}^{\mathcal{B}}$ means the matrix whose $j$-th column is $[A(b_j)]_{\mathcal{B}'}$, where $b_j$ is the $j$-th vector of $\mathcal{B}$; its entries are the coordinates of $A$ relative to the two choices of bases.
So, for example, using matrix arithmetic to compute linear transformations boils down to the identity
$$ [A]^\mathcal{B}_{\mathcal{B}'} \cdot [x]_\mathcal{B} = [A(x)]_{\mathcal{B}'}$$
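That identity can be checked numerically. Here is a minimal sketch in Python with NumPy, where the map $A$, the bases, and the test vector are all made-up illustrations (taking $V$ and the codomain both to be $\mathbb{R}^2$ so everything is concrete):

```python
import numpy as np

# Made-up example: A : R^2 -> R^2 given by its standard matrix M.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Ordered bases, stored as matrices whose COLUMNS are the basis vectors.
U = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # B  = (u_1, u_2) for the domain
W = np.array([[1.0, 0.0],
              [1.0, 1.0]])   # B' = (w_1, w_2) for the codomain

def coords(x, basis):
    """[x]_basis: solve basis @ c = x for the coordinate tuple c."""
    return np.linalg.solve(basis, x)

# [A]^B_{B'}: the j-th column is the B'-coordinates of A(u_j).
A_mat = np.linalg.solve(W, M @ U)

x = np.array([3.0, 5.0])          # an arbitrary vector
lhs = A_mat @ coords(x, U)        # [A]^B_{B'} . [x]_B
rhs = coords(M @ x, W)            # [A(x)]_{B'}
print(np.allclose(lhs, rhs))      # True
```

Note that neither side consists of "standard" coordinates: both are tuples relative to $\mathcal{B}'$, which is the whole point of the bracket notation.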
As an aside, when interpreting vectors of $\mathbb{R}^n$ as matrices, there are reasons why it's most natural to consider them as $n \times 1$ matrices (i.e. "column vectors") rather than as $1 \times n$ matrices (i.e. "row vectors").