I'm studying Suslin's proof of Bott periodicity for the infinite unitary group $U$, which I currently understand to be $$\bigcup_{n\in\mathbb{N}}U(n)$$ where $U(n)$ denotes the group of $n\times n$ unitary matrices.
Suslin defines a Gamma space (as described by Graeme Segal here https://ncatlab.org/nlab/files/SegalCategoriesAndCohomologyTheories.pdf) $$X(\textbf{n})=\{(V_1,\dots,V_n):V_i \text{ are pairwise orthogonal finite-dimensional subspaces of } \mathbb{C}^\infty\}$$
where $\textbf{n}=\{1,\dots,n\}$. We are then given an isomorphism $$\bigsqcup_{n\geq0}\bigl(U(1)^n\times X(\textbf{n})\bigr)/\sim\;\cong\;U$$ which assigns to $\bigl((\lambda_1,\dots,\lambda_n),(V_1,\dots,V_n)\bigr)$ the matrix that acts on each $V_i$ by multiplication by $\lambda_i$ and is the identity on $\bigl(\bigoplus_{i=1}^n V_i\bigr)^\perp$. Here the expression on the left-hand side is the second term in the spectrum that $X$ gives rise to, and $\sim$ is a suitable equivalence relation arising from the Gamma space structure.
I'm trying to show that this is indeed an isomorphism, but the definition suggests to me that each element is mapped to an 'infinite' matrix, which doesn't fit well with my current understanding of $U$. I've thought about this, and it would seem natural to me that an element of $U(n)$ could be extended to an infinite matrix in $U$ by placing $1$'s on the remaining diagonal entries and $0$'s elsewhere (which would induce some sort of equivalence relation).
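This extension idea can be sketched numerically; here is a minimal illustration (the function name and interface are my own, not Suslin's), where a unitary $n\times n$ matrix is embedded as the top-left block of a larger identity matrix:

```python
import numpy as np

def stabilize(A, N):
    """Embed a unitary n x n matrix A into U(N) as the block matrix
    diag(A, I_{N-n}): 1's on the new diagonal entries, 0's elsewhere."""
    n = A.shape[0]
    B = np.eye(N, dtype=complex)
    B[:n, :n] = A
    return B

# Example: a diagonal element of U(2), extended into U(4).
A = np.diag([1j, -1]).astype(complex)
B = stabilize(A, 4)
```

The group $U$ is then the union (colimit) of the $U(n)$ along these inclusions, and two matrices are identified when they agree after such an extension.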
Could someone help me to understand precisely what the elements of $U$ are?
Choose some (orthonormal) basis $(e_1, e_2, \dots)$ of $\mathbb{C}^\infty$ (depending on how you define $\mathbb{C}^\infty$ there's probably even a canonical such basis). Then given $(V_1, \dots, V_n) \in X(\mathbf{n})$, since all the $V_i$ are finite dimensional there is some $N$ such that $V_1 + \dots + V_n \subset \operatorname{Vect}(e_1, \dots, e_N)$ (to see this you can simply express all the vectors of a basis of $V_1 + \dots + V_n$ in the basis $(e_i)$).
Now the matrix associated to $(\lambda_1, \dots, \lambda_n, V_1, \dots, V_n)$ is indeed effectively finite, because "outside" of the first $N$ rows and columns there are only $1$'s on the diagonal and $0$'s off the diagonal. Inside the first $N \times N$ block, the matrix is given by the linear transformation that's equal to multiplication by $\lambda_i$ on each $V_i$ and to the identity on the orthogonal complement. This is a unitary matrix.
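One way to make this concrete: since the $V_i$ are pairwise orthogonal, the operator can be written as $I + \sum_i (\lambda_i - 1)P_i$, where $P_i$ is the orthogonal projection onto $V_i$; it multiplies by $\lambda_i$ on $V_i$ and fixes the orthogonal complement. A small numpy sketch (the function name and interface are my own):

```python
import numpy as np

def block_unitary(lams, subspaces, N):
    """Build the N x N block of the unitary associated to the data
    (lambda_1, ..., lambda_n, V_1, ..., V_n).

    lams: unit scalars lambda_i.
    subspaces: list of (N x dim V_i) arrays whose columns are orthonormal
    bases of the pairwise orthogonal V_i inside C^N.
    """
    U = np.eye(N, dtype=complex)
    for lam, B in zip(lams, subspaces):
        P = B @ B.conj().T          # orthogonal projection onto V_i
        U += (lam - 1) * P          # scale V_i by lam, fix its complement
    return U
```

For the example below, with $V_1$ spanned by $e_1, e_2$ and $V_2$ by $e_3, e_4, e_5$, this produces exactly the diagonal matrix $\operatorname{diag}(\lambda_1, \lambda_1, \lambda_2, \lambda_2, \lambda_2, 1)$.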
To give an explicit example, imagine $V_1 = \operatorname{Vect}(e_1,e_2)$ and $V_2 = \operatorname{Vect}(e_3,e_4,e_5)$. Then the associated matrix is: $$\begin{pmatrix} \lambda_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \dots \\ 0 & \lambda_1 & 0 & 0 & 0 & 0 & 0 & 0 & \dots \\ 0 & 0 & \lambda_2 & 0 & 0 & 0 & 0 & 0 & \dots \\ 0 & 0 & 0 & \lambda_2 & 0 & 0 & 0 & 0 & \dots \\ 0 & 0 & 0 & 0 & \lambda_2 & 0 & 0 & 0 & \dots \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & \dots \\ \vdots &\vdots &\vdots &\vdots &\vdots & 0 & 1 & 0 & \dots \\ \vdots &\vdots &\vdots &\vdots &\vdots & \vdots & 0 & \ddots & \vdots \end{pmatrix}$$
It's possible for this to give a nondiagonal matrix: suppose $n = 1$, $\lambda_1 = i$, and $V_1 = \operatorname{Vect}(e_1 + e_2)$, so $N = 2$. The operator sends $e_1 + e_2$ to $i(e_1 + e_2)$ and sends $e_1 - e_2$ to itself (and it's also the identity on the rest of $V_1^\perp = \operatorname{Vect}(e_1 - e_2, e_3, e_4, \dots)$). Writing $e_1 = \frac{1}{2}\bigl((e_1+e_2)+(e_1-e_2)\bigr)$ and similarly for $e_2$, the first block of the matrix is: $$\begin{pmatrix} \frac{1+i}{2} & \frac{i-1}{2} \\ \frac{i-1}{2} & \frac{1+i}{2} \end{pmatrix}$$ (you can check that this is unitary), and the matrix is the identity outside of this first block.