I was wondering whether thinking about matrices of the form $M_{\aleph\times \aleph}$ and assigning properties to them, like matrix multiplication, makes sense and is useful in some way.
note: $\aleph$ is the cardinality (power) of the real numbers.
If you define such a concept then, as mentioned in the comments, how do you multiply such matrices?
But this type of matrix can be conveniently useful for some concepts. For example, in the proof that $\Bbb N \times \Bbb N$ is countable, we consider this set as an $\infty \times \infty$ matrix and walk along the diagonals to count $\Bbb N \times \Bbb N$: $$\begin{pmatrix} \color{red}{(1,1)}&\color{blue}{(1,2)}&\color{green}{(1,3)} &\cdots\\\color{blue}{(2,1)}&\color{green}{(2,2)}&(2,3) &\cdots\\ \color{green}{(3,1)}&(3,2)&(3,3) &\cdots\\\vdots&\vdots&\vdots& \ddots\end{pmatrix}_{\infty \times \infty}$$
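The diagonal walk above can be sketched in a few lines of Python (an illustrative sketch, not part of the answer): enumerate pairs $(i,j)$ by traversing the anti-diagonals $i+j = \text{const}$, which visits every entry of the "infinite matrix" exactly once.

```python
def diagonal_enumeration(limit):
    """Yield the first `limit` pairs (i, j), i, j >= 1, in diagonal order."""
    pairs = []
    d = 2  # d = i + j indexes the anti-diagonal
    while len(pairs) < limit:
        for i in range(1, d):
            pairs.append((i, d - i))
            if len(pairs) == limit:
                break
        d += 1
    return pairs

print(diagonal_enumeration(6))
# [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1)]
```

The three colored diagonals in the display above correspond exactly to the groups $(1,1)$; $(1,2),(2,1)$; $(1,3),(2,2),(3,1)$.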
Roughly speaking, there are two reasons for generalizing a mathematical construct - in your case, matrices. One is that the generalization might show that two seemingly separate ideas are related, or might provide simpler or new ways to prove theorems.
The other is that you are curious about how far you can stretch a construction or an argument, with no particular use in mind. That seems closer to what you are asking. The posted answers point out that you will have trouble generalizing the notion of the product of matrices.
In a sense you can consider any function of two variables as a matrix, thinking of $f(x,y)$ as the entry in row $x$ column $y$. That construction makes sense whatever the domain and codomain, which might be the (uncountable) real numbers.
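As a tiny illustration of that last remark (my own sketch, not from the answer): a function of two variables plays the role of a matrix whose entry in "row" $x$, "column" $y$ is $f(x,y)$, even when the indices range over the reals rather than $1,\dots,n$.

```python
def entry(f, x, y):
    """Return the (x, y) 'matrix entry' of the function f of two variables."""
    return f(x, y)

f = lambda x, y: x * y  # a "matrix" indexed by pairs of real numbers
print(entry(f, 1.5, 2.0))  # 3.0
```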
Consider two vector spaces $V,W$ over some field $k$, and a linear map $\Phi: V \longrightarrow W$. Fix bases $(e_i)_{i \in I}$ and $(f_j)_{j \in J}$ of $V$ and $W$ respectively. For $(i,j) \in I \times J$, let $\Phi_{i,j}$ denote the coordinate of $\Phi(e_i)$ in $f_j$. Then you can see $\Phi$ as the matrix $(\Phi_{i,j})_{(i,j) \in I \times J}$ indexed by $I\times J$.
Moreover, if $\Psi\equiv (\Psi_{j,l})_{(j,l) \in J \times L}$ is a linear map $W\longrightarrow X$ and a basis is fixed for $X$, then the formula $\forall (i,l) \in I \times L,(\Psi \circ \Phi)_{i,l}=\sum \limits_{j \in J} \Psi_{j,l} \Phi_{i,j}$ still works. (For $i\in I$, the set $\{j \in J :\Phi_{i,j}\neq 0\}$ is finite, so the sums have finite support.)
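A small Python sketch of this composition formula (the representation and names are mine, not from the answer): store a "matrix" over arbitrary index sets as a dict of its nonzero entries $\{(i,j): \text{value}\}$. Because each row of $\Phi$ has finite support, the sum $(\Psi \circ \Phi)_{i,l}=\sum_{j} \Psi_{j,l} \Phi_{i,j}$ is a finite sum, so the product is well defined even for infinite $I, J, L$.

```python
from collections import defaultdict

def compose(psi, phi):
    """Compose sparse 'matrices' phi: V -> W and psi: W -> X,
    each given as a dict {(row_index, col_index): nonzero value}."""
    result = defaultdict(int)
    for (i, j), a in phi.items():
        for (j2, l), b in psi.items():
            if j == j2:  # match the shared index j in J
                result[(i, l)] += a * b
    # drop entries that cancelled to zero
    return {k: v for k, v in result.items() if v != 0}

# Example: Phi sends e_0 to 2 f_0 + f_1; Psi sends f_0 to 3 x_5, f_1 to 4 x_5.
phi = {(0, 0): 2, (0, 1): 1}
psi = {(0, 5): 3, (1, 5): 4}
print(compose(psi, phi))  # {(0, 5): 10}
```

The indices here happen to be integers, but nothing in the code uses that: any hashable index sets $I, J, L$ would do, which mirrors the answer's point.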
One difference with the finite-dimensional case is that $I,J$ do not carry a structure which makes it easy to represent the matrix (although we could see them as ordinals). More importantly, the set of linear maps $\xi_{k,l}:V\longrightarrow W$ whose matrix with respect to $(e,f)$ is the family $(\delta_{k,i} \delta_{l,j})_{i,j}$ (where those are Kronecker symbols) is not a basis of the space of linear maps $V \rightarrow W$ (unless $W$ has finite dimension). In other words, the family $(\Phi_{i,j})_{i,j}$ may not have finite support. In fact I don't think that there is a natural way to form a basis of the space of linear maps $V \rightarrow W$ using $(e,f)$.
Hardly, at least from a traditional point of view.
The purpose of a matrix (traditionally) is to encode the information of a linear transformation: the transformation matrix $D_{\phi}$ of a linear transformation $\phi: V \rightarrow W$ between finite-dimensional $k$-vector spaces is given by the action of $\phi$ on a basis of $V$. That is, the $ij$-th entry of $D_{\phi}$ is $a_{ij} \in k$ with $\phi(v_i) = \sum_{j=1}^m a_{ij} w_j$, where $i \in \{1, \ldots, n\}$ and $n,m$ are the dimensions of $V,W$ respectively.
Now if you were to allow an infinite/uncountable number of entries, the question becomes what $\sum_{j} a_{ij} w_j$ means, or what the definition of such a matrix should be.
If you just want to play around with what could make sense: if you allow for countably infinitely many entries, you can think of $\sum_{j=1}^{\infty} a_{ij} w_j$ as a series and assign it a value via some notion of convergence.
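A quick numerical sketch of that last idea (assumptions mine: I pick the entries and the vector): with countably many entries per row, approximate $\sum_{j=1}^{\infty} a_{ij} w_j$ by partial sums and check that they settle on a value when the series converges.

```python
def row_times_vector(row, vec, terms=10000):
    """Approximate the series sum_j row(j) * vec(j) by a partial sum."""
    return sum(row(j) * vec(j) for j in range(1, terms + 1))

# Example: row entries a_j = 1/2^j acting on the constant vector w_j = 1.
# The geometric series sum_{j>=1} 1/2^j converges to 1.
s = row_times_vector(lambda j: 0.5 ** j, lambda j: 1.0)
print(round(s, 6))  # 1.0
```

For a row like $a_j = 1$ the partial sums diverge, which is exactly why some convergence condition (or a finite-support condition, as in the other answer) is needed before the product is defined.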