Is there a multidimensional equivalent of a scalar defined in mathematics? I know that a scalar is, by definition, a single value in a space of any dimension, but perhaps some analogous concept exists, such as an n-component value for spaces of dimension greater than n?
Multidimensional scalar
533 Views Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)

There are 2 best solutions below.
The block multiplication property of matrices allows blocks of any square size to be considered "scalars". And they are, especially in the matrix representation theory of groups.
EDIT A common example is the $2 \times 2$ real matrix representation of the complex numbers: $${\bf M}_{a+bi} = \left[\begin{array}{rr}a&-b\\b&a\end{array}\right]$$ When multiplying or adding two such matrices, you can verify that the result is the same as multiplying or adding the corresponding complex numbers.
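A minimal sketch checking this correspondence, using plain Python (the helper names `to_matrix`, `mat_mul`, `mat_add` are my own choices, not standard API):

```python
# Check that the 2x2 matrix representation of a+bi respects
# multiplication and addition of complex numbers.

def to_matrix(z):
    """Represent the complex number z = a+bi as [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

z, w = 2 + 3j, -1 + 4j
# M_z * M_w == M_{z*w} and M_z + M_w == M_{z+w}
assert mat_mul(to_matrix(z), to_matrix(w)) == to_matrix(z * w)
assert mat_add(to_matrix(z), to_matrix(w)) == to_matrix(z + w)
```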
A special case of this is the complex numbers on the unit circle, where $a^2+b^2 = 1$; these give the special orthogonal group $SO(2)$, whose representation matrices are the famous rotation matrices:
$${\bf R}_{\phi} = \left[\begin{array}{rr}\cos(\phi)&-\sin(\phi)\\\sin(\phi)&\cos(\phi)\end{array}\right]$$
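Under this representation, multiplying unit complex numbers corresponds to composing rotations, so angles add. A small standard-library sketch of that fact (helper names are mine):

```python
# Composing two rotation matrices adds their angles:
# R_phi @ R_psi == R_{phi+psi}, mirroring multiplication of
# unit complex numbers e^{i phi} * e^{i psi} = e^{i(phi+psi)}.
import math

def rotation(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s], [s, c]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

phi, psi = 0.7, 1.9
prod = mat_mul(rotation(phi), rotation(psi))
expected = rotation(phi + psi)
assert all(abs(prod[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```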
You answered in a comment: "I thought there is something more specific in the meaning of scalar than just minimal dimension count."
"Scalars" are usually understood as elements of a field, in the sense of a rationality field, where the rational operations (addition, subtraction, multiplication, and division, except by zero) can be performed commutatively; not in the sense of a vector field.
"Dimension" as an algebraic concept is a property of a vector space over some field. You cannot talk about a dimension without specifying the field over which the space is defined.
You say that a scalar is monodimensional. This should be restated: a real number can be thought of as a vector of a vector space over the reals, and this vector space is monodimensional. But this is a fact of algebra: every field can be thought of as a monodimensional vector space over itself. Pragmatically speaking, this means that every linear operator on a monodimensional vector space is of the type "multiplication of a vector by a scalar": for every linear operator on the vector space there is a unique scalar that multiplicatively coincides with it.
As an example: in $\mathbb{R}^2$, thought of simply as an abelian group whose elements can be commutatively added, it is possible to introduce a structure of vector space over the reals, and in such a case it becomes a bidimensional vector space. Here you can find $\mathbb{R}$-linear operators that cannot be given by scalars (think of a general matrix); only some of them are isomorphic to the field (that is, can be identified with the scalars; think of a matrix like $aI$, where $a\in\mathbb{R}$ and $I$ is the identity matrix: $aIx=ax$; $(aI+bI)x=(a+b)x$; $aIbIx=abx$, where $x\in\mathbb{R}^2$). Remember that by an $\mathbb{R}$-linear operator is meant an operator that preserves the addition of vectors and the multiplication of a vector by a scalar: $A(x+y)=Ax+Ay$ and $Acx=cAx$.
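The distinction can be checked concretely: a scalar matrix $aI$ commutes with every $2 \times 2$ matrix, while a general matrix does not. A minimal sketch (matrix values chosen arbitrarily for illustration):

```python
# Contrast the scalar operator aI on R^2 with a general 2x2 matrix:
# aI commutes with everything, a general matrix need not.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = 5
scalar_op = [[a, 0], [0, a]]   # aI, identifiable with the scalar a
general = [[1, 2], [3, 4]]     # an R-linear operator, not a scalar
other = [[0, 1], [1, 0]]

# aI commutes with an arbitrary matrix; the two general matrices do not.
assert mat_mul(scalar_op, general) == mat_mul(general, scalar_op)
assert mat_mul(general, other) != mat_mul(other, general)
```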
At the same time, in the abelian additive group $\mathbb{R}^2$ one can also introduce the structure of a vector space over a (rationality) field of operators on $\mathbb{R}^2$ that preserve the addition of vectors and commute with rotations: $B(x+y)=Bx+By$ and $B\Theta x=\Theta Bx$, where $\Theta=\begin{bmatrix}\cos \theta&-\sin\theta\\\sin\theta&\phantom-\cos \theta\end{bmatrix}$
You can easily prove that such operators are of the form $B=\begin{bmatrix}b_1&-b_2\\b_2&\phantom-b_1\end{bmatrix}=|B|\begin{bmatrix}\cos\beta&-\sin\beta\\\sin\beta&\phantom-\cos\beta\end{bmatrix}$ where $|B|=\sqrt{b_1^2+b_2^2}$ and $\beta=\arctan \frac{b_2}{b_1}$ (taken in the correct quadrant).
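This polar factorisation can be verified numerically; note that `atan2` is used instead of a plain arctangent so that $\beta$ lands in the correct quadrant when $b_1 \le 0$ (the sample values of $b_1, b_2$ are arbitrary):

```python
# Check the polar factorisation B = |B| * R_beta for
# B = [[b1, -b2], [b2, b1]].
import math

b1, b2 = -3.0, 4.0
norm = math.hypot(b1, b2)        # |B| = sqrt(b1^2 + b2^2)
beta = math.atan2(b2, b1)        # beta in the correct quadrant

B = [[b1, -b2], [b2, b1]]
polar = [[norm * math.cos(beta), -norm * math.sin(beta)],
         [norm * math.sin(beta),  norm * math.cos(beta)]]
assert all(abs(B[i][j] - polar[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```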
In addition, these operators form a rationality field: all rational operations can be performed (commutatively), always (with the exception of division by zero) and uniquely yielding an operator in that same field. By multiplication is meant composition.
You now also see that such operators can be identified with the elements of $\mathbb{R}^2$: identifying the operator $B$ with the vector $b=(b_1, b_2)$, you can write $Bx=bx$; $(B+C)x=(b+c)x$; $BCx=bcx$. These formulae introduce the structure of a field on $\mathbb{R}^2$: that is, elements of $\mathbb{R}^2$ can be called scalars and can be used in rational operations. In particular, multiplying two elements of $\mathbb{R}^2$ means interpreting them as scalars, that is, as operators that preserve additions and commute with rotations, composing them, and reinterpreting the result as a vector (and analogously for division).
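The identification described above can be made executable: go from the vector to the operator, compose, and read the result back as a vector. A minimal sketch (the round-trip helpers `as_operator`/`as_vector` are my own names), which reproduces complex multiplication:

```python
# The field structure induced on R^2: identify b = (b1, b2) with the
# operator B = [[b1, -b2], [b2, b1]]; "multiplying two vectors" means
# composing the operators and reading the result back as a vector.

def as_operator(b):
    b1, b2 = b
    return [[b1, -b2], [b2, b1]]

def as_vector(B):
    return (B[0][0], B[1][0])

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def vec_mul(b, c):
    return as_vector(mat_mul(as_operator(b), as_operator(c)))

# (2, 3) * (-1, 4) corresponds to (2+3i)(-1+4i) = -14 + 5i
assert vec_mul((2, 3), (-1, 4)) == (-14, 5)
```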
You also notice that multiplication by a scalar is preserved by such operators, because $B(Cx)=C(Bx)$. So they are linear operators on $\mathbb{R}^2$ over the new field. Hence this new structure of vector space that we are introducing on $\mathbb{R}^2$ over itself is monodimensional, even though the scalars are identifiable with vectors (as happened with the real line), and these have two real components.
Usually this particular field (whose elements have two real components), representing only those $\mathbb{R}$-linear operators that commute with rotations, is called the complex field and its elements complex numbers, and the special $\mathbb{R}$-linear operators they represent are called $\mathbb{C}$-linear operators.
EDIT: (answering your question in the comment section)
Scalars determine which, among the operators that preserve addition on the abelian group (the group that is to become a vector space), are linear (that is, also preserve multiplication by scalars). So linearity depends on the scalars. After that, scalars can be (field-homomorphically) identified with some linear operators: namely, all those linear operators that commute with all the linear operators.

On the other hand, an "ordered set of components of a vector" (meaning a numerical representation of the vector, or "numerical vector") yields the vector itself under a (in general non-linear) operator: this operator is called a "coordinate system" and is usually defined on a subset of the Cartesian power of a field (usually the same field over which the vector space is defined, but this is not required) into the vector space. A component of a vector happens to be a "scalar" only by coincidence. More exactly, it is an element of the field over which the space is defined but NOT a scalar; that is, it does not represent a linear operator even though it lies in the field: it changes under coordinate-system changes, while a scalar does not (I can give a proof here). So it should not be called a scalar, to avoid confusion, even though it lies in the same field over which the space is defined.
(Do not say that a scalar has a dimension. Vector spaces have a dimension that depends on the scalars. The set of scalars (a field) is always a monodimensional vector space over itself and a monodimensional subspace of any vector space over such a field. This is true even if the scalars are "complex" entities rather than simple reals, made up of more than one "component". The fact that a vector, or in general an element of a set, has $n$ components does not mean that it is an element of a vector space of dimension $n$: in this reasoning no mention has been made of the field of scalars, so $n$ cannot be called a dimension. Moreover, coordinate systems need not be injective: you can have several numerical vectors representing one and the same vector. You can have "redundant" components. Or you can have components that are not from the same set as the scalars of the vector space. In the example above of the vector space $\mathbb{R}^2$ over the complex scalars, a vector has $2$ real components when a real Cartesian coordinate system is chosen, or only one complex component if a complex Cartesian coordinate system is chosen, or more than $2$ real components in a real non-injective (that is, "redundant") coordinate system, etc.)
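The point about component counts being a property of the coordinate system, not of the vector, can be sketched with three hypothetical coordinate systems for the same space $\mathbb{R}^2$ (all three maps are illustrative choices of mine, including the deliberately non-injective "redundant" one):

```python
# The same vector of R^2 under three coordinate systems: two real
# components, one complex component, or three redundant real components.

def real_cartesian(coords):
    """Two real components (x, y) -> the vector itself."""
    x, y = coords
    return (x, y)

def complex_cartesian(z):
    """One complex component z -> the vector (Re z, Im z)."""
    return (z.real, z.imag)

def redundant(coords):
    """Three real components (a, b, c) -> (a + c, b); non-injective."""
    a, b, c = coords
    return (a + c, b)

v = (3.0, 4.0)
assert real_cartesian((3.0, 4.0)) == v
assert complex_cartesian(3 + 4j) == v
# Two different numerical vectors name the same vector:
assert redundant((1.0, 4.0, 2.0)) == v == redundant((0.0, 4.0, 3.0))
```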