Is there any correlation between column/row entry number with respect to covariant/contravariant subscript/superscript?
Consider $M$, a matrix which is the matrix product of matrices $A$ and $B$.
So consider $M_{i}{}^{l} = A_{ij} B^{jl}$ …(1)
Now I know the above equation is correct.
But consider these:
$M_{i}{}^{l} = A_{ij} B_{j}{}^{l}$ …(2)
$M_{il} = A_{ij} B_{jl}$ …(3)
Are (2) and (3) wrong? I mean, do the paired indices have to be one covariant and one contravariant in order to be contracted (as this is the pattern we encounter when doing tensor analysis)?
For matrix multiplication to be defined, the key requirement is that the number of columns of the first matrix equal the number of rows of the second.
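As a quick NumPy check of that rule (the shapes here are just an arbitrary example):

```python
import numpy as np

# A has shape (2, 3): 2 rows, 3 columns.
# B has shape (3, 4): 3 rows, 4 columns.
# The product is defined because A's column count (3) equals B's row count (3).
A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

M = A @ B
print(M.shape)  # (2, 4)

# Entry-wise, M[i, l] is the sum over j of A[i, j] * B[j, l]:
i, l = 0, 1
assert M[i, l] == sum(A[i, j] * B[j, l] for j in range(3))
```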
I noticed a pattern in which the subscript (covariant) index represents the number of columns and the superscript (contravariant) index represents the number of rows. But this doesn't hold all the time. For example, consider $A_{ij}$ in the equations above: $i$ and $j$ are both covariant indices, that's clear. But just because both are subscripts, does that mean both represent the number of columns, so that $A_{ij}$ has no rows? That seems absurd, since $j$ is also an index of $B$, which would mean that $j$ counts the columns of $A$ and the rows of $B$ in order for matrix multiplication to work between $A_{ij}$ and $B_{jl}$, or between $A_{ij}$ and $B_{j}{}^{l}$, as in (3) or (2) respectively. It would then follow that in $A_{ij}$, $i$ is the number of rows and $j$ is the number of columns.
So does this mean there is no correlation between whether an index is written above or below the matrix symbol and the number of rows or columns? If so, equations (2) and (3) would be correct, and matrices like $A_{ij}$ could be represented as a table of rows and columns (just as we can for $A_{i}{}^{j}$), since rows/columns would have no correlation with contravariant/covariant indices.
Or is there a correlation, and equations (2) and (3) are simply wrong (i.e., $j$ should be a superscript on $B$ in both)? But that would associate contravariance with rows and covariance with columns, and would suggest that matrices like $A_{ij}$ shouldn't exist, because $j$ appears as a subscript, and also that we can't write them on paper as a table of rows and columns in brackets (unlike $A_{i}{}^{j}$).
Columns and rows are associated with arrays or matrices; covariant and contravariant indices are associated with tensors. When you perform matrix multiplication between two matrices, the number of columns of the first must equal the number of rows of the second. So if you represent those matrices in Einstein summation notation as tensors, it looks like $A_{ij} B^{j}{}_{k} = C_{ik}$.
Please note that the operation between $A_{ij}$ and $B^{j}{}_{k}$ can be matrix multiplication, the tensor product, or the Kronecker product; the notation holds for all of these operations.
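The correspondence between the index notation and matrix multiplication can be sketched with NumPy's `einsum`, which implements exactly this summation convention (the arrays below are arbitrary examples):

```python
import numpy as np

A = np.arange(9).reshape(3, 3)
B = np.arange(9, 18).reshape(3, 3)

# Contracting over the repeated index j, as in A_ij B^j_k = C_ik,
# is precisely ordinary matrix multiplication:
C = np.einsum('ij,jk->ik', A, B)
assert np.array_equal(C, A @ B)
```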
But consider $A_{ij} B_{jk}$: if you try to perform matrix multiplication between $A_{ij}$ and $B_{jk}$, it is not going to work, because you need the rule "the number of columns of the first matrix must equal the number of rows of the second", and here $B_{jk}$, with both indices covariant, has no rows at all. So you can't.
But you can take the tensor product or even the Kronecker product of $A_{ij}$ and $B_{jk}$, as those don't require the rule we have for matrix multiplication. It would look like $A_{ij} B_{jk} = C_{ijk}$.
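A sketch of this uncontracted product in NumPy: keeping $j$ among the output subscripts of `einsum` suppresses the summation, giving a rank-3 array (the input arrays are arbitrary examples):

```python
import numpy as np

A = np.arange(4).reshape(2, 2)
B = np.arange(4, 8).reshape(2, 2)

# Because j appears in the output subscripts, nothing is summed:
# C[i, j, k] = A[i, j] * B[j, k]
C = np.einsum('ij,jk->ijk', A, B)
print(C.shape)  # (2, 2, 2)
assert C[1, 0, 1] == A[1, 0] * B[0, 1]
```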
You can also take the tensor product or Kronecker product of $A_{ij}$ and $B_{kl}$, and it would look like $$A_{ij} B_{kl} = C_{ijkl}\quad\text{or}\quad A_{ij} \otimes B_{kl} = C_{ijkl}$$
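In NumPy this full product is again an `einsum` with no repeated output index; as a sketch (arbitrary example arrays), it also shows that `np.kron` is the same outer-product data reshaped back into a matrix:

```python
import numpy as np

A = np.arange(4).reshape(2, 2)
B = np.arange(4, 8).reshape(2, 2)

# Tensor (outer) product: C[i, j, k, l] = A[i, j] * B[k, l]
C = np.einsum('ij,kl->ijkl', A, B)
print(C.shape)  # (2, 2, 2, 2)

# The Kronecker product is the same data flattened to a matrix,
# pairing row indices (i, k) together and column indices (j, l) together:
K = C.transpose(0, 2, 1, 3).reshape(4, 4)
assert np.array_equal(K, np.kron(A, B))
```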
So isn't there a correlation between column/row entry index and covariant/contravariant index? Well, there is. For instance, consider the metric tensor in $ds^{2}=g_{ij}\,dx^{i}\,dx^{j}$. The metric tensor is a special kind of bilinear form. Let's try to write that equation in matrix form, assuming that we can write down $g_{ij}$ on paper as an array of rows and columns; by doing so we are temporarily scrapping the idea that covariant/contravariant indices are related to column/row entry indices. $$ ds^2=\left(\begin{matrix} g_{11} & g_{21} & g_{31} & \cdots\\ g_{12} & g_{22} & g_{32} & \cdots\\ g_{13} & g_{23} & \ddots & \cdots\\ \vdots & \vdots & \vdots & \vdots\\ \end{matrix}\right) \left(\begin{matrix} dx^1 \\ dx^2 \\ dx^3 \\ \vdots\\ \end{matrix}\right) \left(\begin{matrix} dx^1 \\ dx^2 \\ dx^3 \\ \vdots\\ \end{matrix}\right) $$
Is the above equation correct? No, because you can't do matrix multiplication when you represent the metric-tensor array in that fashion, with both rows and columns; nor can we do a Kronecker product here and still end up with the scalar $ds^2$. Some kind of operator would need to exist between those matrices for the equation to hold. But $ds^{2}=g_{ij}\,dx^{i}\,dx^{j}$ is indeed a correct equation in Einstein notation, and the operation acting on $g_{ij}$, $dx^{i}$, $dx^{j}$ in that equation is the tensor product. The Kronecker product is the same as the tensor product except that the Kronecker product operates on arrays while the tensor product operates on tensors written in Einstein notation. Not all arrays are tensors, and therefore not all two-dimensional arrays (matrices) are second-rank tensors.

So what could it mean that $ds^{2}=g_{ij}\,dx^{i}\,dx^{j}$ is a correct equation while the matrix equation above is not? It could mean that our assumption, that $g_{ij}$ can be represented on paper as a matrix with no relation between covariant/contravariant indices and the number of rows/columns, was not correct; i.e., it does matter! And what we can draw from this is that $g_{ij}$, with only covariant indices, has no rows and therefore should not be represented as we did above. Let's try to convert the wrong matrix equation above into an array equation such that at least the Kronecker product would work, because if the Kronecker product works, then so does the tensor product, provided the arrays involved are tensors. So let's represent $ds^{2}=g_{ij}\,dx^{i}\,dx^{j}$ as follows: $$ ds^2=\left[\begin{matrix} (g_{11} & g_{21} & g_{31} & \cdots) & (g_{12} & g_{22} & g_{32} & \cdots) & (g_{13} & g_{23} & g_{33} & \cdots) & \cdots \\ \end{matrix}\right] \left(\begin{matrix} dx^1 \\ dx^2 \\ dx^3 \\ \vdots\\ \end{matrix}\right) \left(\begin{matrix} dx^1 \\ dx^2 \\ dx^3 \\ \vdots\\ \end{matrix}\right) $$
This works out to give $ds^2$, but what is the operation (or operator) acting between these arrays? There should be some operator, right? Yes, and it's the Kronecker product. You still can't do matrix multiplication here because of the rule it requires. So we have successfully converted the Einstein-notation way of writing those tensors into an array form. Just as a matrix is a special kind of array (a two-dimensional array), we can think of matrix multiplication as a special kind of Kronecker product, one followed by a contraction over the shared index. So writing $ds^{2}=g_{ij}\,dx^{i}\,dx^{j}$ in a correct array form became possible once we let go of that assumption.
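As a numerical sketch of the Einstein-notation equation (the diagonal metric below is an assumed example, e.g. 2-D polar coordinates at $r=2$, and the displacement components are made up):

```python
import numpy as np

# Assumed example metric: g = diag(1, r^2) with r = 2
g = np.diag([1.0, 4.0])
dx = np.array([0.1, 0.05])

# ds^2 = g_ij dx^i dx^j: both indices of g are contracted
ds2 = np.einsum('ij,i,j->', g, dx, dx)
# equals 0.1**2 * 1 + 0.05**2 * 4 = 0.02
print(ds2)
```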
So from the above argument, I conclude that a covariant index does correspond to a column entry index and a contravariant index to a row entry index. One should therefore ideally represent covariant indices, however many there are, along the columns, and contravariant indices, however many there are, along the rows. This way, we can represent even tensors with a very large number of indices on paper as a two-dimensional array.
Consider $T^{ijk}$ with $i = 1,2,3$ and $j=1,2,3$ and $k=1,2,3$. I found a YouTube video representing it as follows:
But I don't think that is the correct way of representing it, given my argument above. The way I would represent it is:
$$\left( \begin{matrix}\left( \begin{matrix}\left( \begin{matrix} T^{111} \\ T^{211} \\ T^{311} \\ \end{matrix}\right)\\ \left( \begin{matrix} T^{121} \\ T^{221} \\ T^{321} \\ \end{matrix}\right)\\ \left( \begin{matrix} T^{131} \\ T^{231} \\ T^{331} \\ \end{matrix}\right) \end{matrix}\right)\\ \left( \begin{matrix}\left( \begin{matrix} T^{112} \\ T^{212} \\ T^{312} \\ \end{matrix}\right)\\ \left( \begin{matrix} T^{122} \\ T^{222} \\ T^{322} \\ \end{matrix}\right)\\ \left( \begin{matrix} T^{132} \\ T^{232} \\ T^{332} \\ \end{matrix}\right) \end{matrix}\right)\\ \left( \begin{matrix}\left( \begin{matrix} T^{113} \\ T^{213} \\ T^{313} \\ \end{matrix}\right)\\ \left( \begin{matrix} T^{123} \\ T^{223} \\ T^{323} \\ \end{matrix}\right)\\ \left( \begin{matrix} T^{133} \\ T^{233} \\ T^{333} \\ \end{matrix}\right) \end{matrix}\right) \end{matrix}\right) $$
What about $T_i ^{jk}$? A YouTube video represented it as:
But again, I don't think that is ideally correct. I would write it as:
$$\left( \begin{matrix} T_{1} ^{11} & T_{2} ^{11} & T_{3} ^{11} \\ T_{1} ^{21} & T_{2} ^{21} & T_{3} ^{21} \\ T_{1} ^{31} & T_{2} ^{31} & T_{3} ^{31} \\ T_{1} ^{12} & T_{2} ^{12} & T_{3} ^{12} \\ T_{1} ^{22} & T_{2} ^{22} & T_{3} ^{22} \\ T_{1} ^{32} & T_{2} ^{32} & T_{3} ^{32} \\ T_{1} ^{13} & T_{2} ^{13} & T_{3} ^{13} \\ T_{1} ^{23} & T_{2} ^{23} & T_{3} ^{23} \\ T_{1} ^{33} & T_{2} ^{33} & T_{3} ^{33} \\ \end{matrix}\right)$$
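One way to realize this flattening in NumPy (the axis convention is my reading of the layout above: the contravariant pair $(j,k)$ runs down the rows with $k$ varying slowest, and the covariant index $i$ runs across the columns):

```python
import numpy as np

# T[i, j, k]: i plays the covariant role, j and k the contravariant roles.
T = np.arange(27).reshape(3, 3, 3)

# Reorder axes to (k, j, i) and flatten (k, j) into a single row index,
# leaving i as the column index:
M = T.transpose(2, 1, 0).reshape(9, 3)
print(M.shape)  # (9, 3)

# Row 7 is the 0-based pair (k=2, j=1); column 0 is i=0:
assert M[7, 0] == T[0, 1, 2]
```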
That YouTube video represents the components corresponding to each index (irrespective of whether the index is covariant or contravariant) along a separate dimension, i.e., a single dimension for a single index. In the representation I described above, by contrast, I attribute the horizontal dimension to the covariant indices and the vertical dimension to the contravariant indices; i.e., a single dimension here carries either all of the covariant indices or all of the contravariant indices.
For example, take $x,y,z$ orthogonal coordinate axes, and a tensor/array $T_{ijkl}{}^{mnop}$ where each index runs from one to two. Following that YouTube video, one would represent, say, $i,j,k$ along the $x,y,z$ axes, but representing the remaining indices requires higher dimensions. (One could just as well place, say, $i,m,n$ along $x,y,z$; any three of the indices $i,j,k,l,m,n,o,p$ can be chosen, but that is never the complete picture, since the array is really an $8$-dimensional array.) With the method I described above, however, we can represent the tensor/array as a two-dimensional array and hence write it on paper: the indices $i,j,k,l$ vary along the $x$-dimension and $m,n,o,p$ along the $y$-dimension. So you have two large columns for $l=1,2$ along the $x$ direction; within each column of $l$ there are two smaller columns of $k$; within each column of $k$, two columns of $j$; and finally two columns of $i$ within each column of $j$. The same nesting applies to the rows.
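A sketch of that two-dimensional flattening in NumPy, with the nesting order (innermost index varying fastest) taken from my reading of the description above:

```python
import numpy as np

# T[i, j, k, l, m, n, o, p]: i..l covariant (columns), m..p contravariant (rows),
# each index running over two values, so 2**8 = 256 entries in total.
T = np.arange(2 ** 8).reshape((2,) * 8)

# Reverse the axes so that, after reshaping, i varies fastest among the
# column indices and m varies fastest among the row indices:
M = T.transpose(7, 6, 5, 4, 3, 2, 1, 0).reshape(16, 16)
print(M.shape)  # (16, 16)

# Column 1 corresponds to i=1 with j=k=l=0; row 0 to m=n=o=p=0:
assert M[0, 1] == T[1, 0, 0, 0, 0, 0, 0, 0]
```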
Another important point in this context is to distinguish between an index and the values it can take, i.e., between $i$ and $i=1,2,$ etc. In that YouTube video's scheme, the values count how many positions a given index occupies along a single dimension; there is exactly one index per dimension, so more than one index cannot occupy a given dimension, and the more values an index takes, the farther the array extends along that dimension. In the method I described above, the values of a covariant index count positions along the horizontal dimension and the values of a contravariant index count positions along the vertical dimension; all covariant indices share the horizontal axis and all contravariant indices share the vertical axis, so both the number of values each index takes and the number of indices determine how far the array extends along a given dimension.

In the matrix algebra that is usually taught, the values an index takes are related to the dimension of the matrix in yet another way, because there the dimension of a matrix/array is not defined to be the same as the dimension of the space on which the matrix acts. The dimension (or order) of a matrix is the number of rows by the number of columns. The dimension of the space on which it acts, i.e., the dimension of the domain of the linear map the matrix represents, is simply the number of columns, and it equals the minimum number of basis vectors whose linear combinations can represent any vector in the domain. The rank of a matrix is the dimension of its range/image (the column space), which equals the dimension of its row space.
The dimension of the domain of the linear map (the number of columns) equals the rank plus the nullity, where the nullity is the dimension of the kernel (null space) of the map. Note also that the range/image (column space) is not the same space as the row space; nevertheless, their dimensions are equal.
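The rank-nullity relation can be checked numerically; the matrix below is an arbitrary example with a dependent row, so its kernel is nontrivial:

```python
import numpy as np

# 3x4 example: the second row is twice the first, so the rank is 2.
A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],
              [0., 1., 0., 1.]])

rank = np.linalg.matrix_rank(A)      # dimension of the column space
_, s, Vt = np.linalg.svd(A)
# The rows of Vt beyond the rank span the kernel (null space) of A:
null_basis = Vt[rank:]
nullity = null_basis.shape[0]

print(rank, nullity)  # 2 2: rank + nullity = 4, the number of columns
assert rank + nullity == A.shape[1]
assert np.allclose(A @ null_basis.T, 0)
```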
So, in short: yes, there is a correlation between covariant/contravariant indices and column/row entry indices respectively, given the condition that the array (which carries the column/row entry indices) is already known to be a tensor (which carries the covariant/contravariant indices).