They say (here, for instance) that you can represent a vector $\vec v$ as a coordinate vector $[v]_B$ in a basis $B$:
$$\vec v = v_1 \vec b_1 + v_2 \vec b_2 + \cdots = \begin{bmatrix}\vec b_1 & \vec b_2 & \cdots \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \end{bmatrix} = B\, [v]_B.$$
That is, $B^{-1}$ can serve as a coordinate map to translate a vector into coordinates:
$$[v]_B = B^{-1}\vec v.$$
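For instance, here is a minimal numerical sketch (my own example with made-up numbers, assuming $\mathbb{R}^2$ and NumPy): the columns of the matrix `B` hold the basis vectors, and solving $B\,[v]_B = \vec v$ recovers the coordinates.

```python
import numpy as np

# Basis vectors, written as columns of a matrix (in standard coordinates)
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])
B = np.column_stack([b1, b2])

v = np.array([3.0, 1.0])        # the vector we want to re-express

# Coordinates of v in basis B: solve B [v]_B = v
v_B = np.linalg.solve(B, v)
print(v_B)                      # [2. 1.], since v = 2*b1 + 1*b2

# Reconstruct v from its B-coordinates
assert np.allclose(B @ v_B, v)
```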
Everything is fine: we had a vector and got its coordinates in basis $B$. There is only one thing I do not understand: what is $B$? Is it a matrix or an operator?
If the abstract operator $B$ is not a matrix and $\vec v$ is not a tuple, then how do we get a column of numbers, $[v]_B$, by multiplying them? I only know how to get a column of numbers from matrix multiplication when a matrix of numbers is multiplied by a tuple of numbers. On the other hand, if $B$ and $\vec v$ are matrices to begin with, then we already have the coordinates of $\vec v$, so why multiply by $B^{-1}$ at all? Just to get another set of coordinates for $\vec v$?
Because of the way the topic is always exemplified, I suppose that $B$ and $\vec v$ are provided as matrices in some other basis. But what is that basis? Why not use $[v]_{ANOTHERBASIS}$ instead of the deceptive $\vec v$? Can this help me understand the difference between components and coordinates?
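To make my suspicion concrete (a sketch with made-up numbers, my own illustration): perhaps the tuple $\vec v$ is itself a coordinate vector, namely $[v]_E$ in the standard basis $E$, and the columns of $B$ are likewise written in standard coordinates.

```python
import numpy as np

# Columns of B are the basis vectors b1, b2, expressed in the standard basis
B = np.column_stack([[2.0, 0.0], [1.0, 1.0]])

# The "tuple" v is really [v]_E: coordinates in the standard basis E
v_E = np.array([3.0, 2.0])

# B^{-1} then converts standard coordinates into B-coordinates
v_B = np.linalg.inv(B) @ v_E
print(v_B)                      # [0.5 2. ]

# Sanity check: rebuilding v from its B-coordinates gives back the tuple
assert np.allclose(B @ v_B, v_E)
```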
Not sure I understood the question. Please ask for clarification if I can elaborate on something for you.
It is sometimes convenient to deal with other, more abstract vector spaces (not just $\mathbb{R}^n$), like the space of all square-integrable functions $$ L^2[a,b] = \left\{f:[a,b] \to \mathbb{R} \left| \int_a^b f(x)^2 dx < \infty \right. \right\}, $$ and to talk about bases and coordinates with respect to them. You use whichever basis is convenient for you (e.g. in such spaces, the Fourier coefficients of a function can be viewed as its coordinates with respect to the trigonometric basis).
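As a numerical sketch of that last remark (my own illustration, assuming the interval $[-\pi,\pi]$ and the sine part of the trigonometric basis): the Fourier coefficients of $f(x) = x$ act as its coordinates in $L^2[-\pi,\pi]$.

```python
import numpy as np

# Fourier sine coefficients of f(x) = x on [-pi, pi]; they play the
# role of coordinates of f with respect to the trigonometric basis.
x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]
f = x

def coeff_sin(n):
    # b_n = (1/pi) * integral of f(x) * sin(n x) dx, via a Riemann sum
    return np.sum(f * np.sin(n * x)) * dx / np.pi

# For f(x) = x the exact coefficients are b_n = 2 * (-1)**(n+1) / n
for n in (1, 2, 3):
    print(n, coeff_sin(n))
```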