Find the eigenvalues and eigenvectors of $\alpha: Y \mapsto Y \cap B.$


Let $A$ be a nonempty set and let $V$ be the collection of all subsets of $A$, which is a vector space over $GF(2)$. Let $B$ be a fixed subset of $A$ and let $\alpha : V \rightarrow V$ be the endomorphism defined by $\alpha : Y \mapsto Y\cap B.$ Find the eigenvalues of $\alpha$ and, for each eigenvalue, find the associated eigenspace.

Could anyone help me in answering this question?


BEST ANSWER

Ok, let's start with simple things.


The field $GF(2)$ is the set $\{0,1\}$ along with the addition $1+1 = 0+0= 0$ and $1+0 = 0+1 = 1$, and multiplication given by $0 \cdot 0 = 1 \cdot 0 = 0 \cdot 1 = 0$ and $1 \cdot 1 = 1$.

Now, how is $V$ a vector space over $GF(2)$? It follows from what Jean Marie mentioned: each subset of $A$ can be identified with a vector of $0$s and $1$s, recording which elements lie in it and which do not. If $A$ has $n$ elements, we can identify $V$ with $GF(2)^n$.

For example, if $A = \{a,b,c\}$, then the element of $V$ corresponding to $\{a\}$ is $(1,0,0)$, because $a \in \{a\}$ but $b, c \notin \{a\}$. Similarly, $\{b,c\}$ corresponds to $(0,1,1)$. Now, under componentwise addition, we get $(1,0,0) + (0,1,1) = (1,1,1)$, so $\{a\} + \{b,c\} = \{a,b,c\}$.

Now, it is easy to see that as set theoretic operations, the addition/scalar multiplication on $V$ is given by : $$ S + T = (S \cup T) \setminus (S \cap T) = (S \cap T^c) \cup (T \cap S^c)\\ $$ (the union in the last one is disjoint) and $cS = \phi$ if $c =0$ and $S$ if $c = 1$.

So, we have a vector space structure on $V$. (It remains to verify that the addition follows the vector space axioms, but that follows from the identification with $GF(2)^n$).

So to give an example, $\{a,b\} + \{b,c\} = \{a,c\}$ (because we remove the common element).
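The symmetric-difference addition is easy to check mechanically. A minimal Python sketch (the example sets $S$, $T$ are my choice, not from the answer); Python writes symmetric difference as `^`:

```python
# Subsets as Python sets; GF(2) addition is the symmetric difference `^`.
S, T = {"a", "b"}, {"b", "c"}

add = lambda X, Y: X ^ Y   # (X ∪ Y) \ (X ∩ Y)

assert add(S, T) == {"a", "c"}   # the common element b cancels
assert add(S, S) == set()        # every set is its own additive inverse
```

Note that $S + S = \phi$ reflects the fact that every vector over $GF(2)$ is its own additive inverse.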


Now, with this, it remains to examine the given operators.

Now fix $B$, and consider $\alpha(Y) = Y \cap B$. Why is this a linear transformation?

We need to check two things : one, that $\alpha(cY) = c \alpha(Y)$. This is obvious by separating into $c=0$ and $c = 1$.

The second is that $\alpha(S + T) = \alpha(S) + \alpha(T)$. $$ \alpha(S) + \alpha(T) = ((S\cap B) \cup (T\cap B)) \setminus ((S \cap B)\cap (T \cap B)) = ((S \cup T) \cap B) \setminus (S \cap T \cap B) \\ = [(S \cup T) \setminus (S \cap T)] \cap B = \alpha(S+T) $$

where the third equality can be checked in the usual way we check set containment.
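The additivity identity can also be verified exhaustively on a small example; a sketch (the ground set and $B$ below are my choices for illustration):

```python
from itertools import combinations

A = {"a", "b", "c", "d"}          # example ground set
B = {"b", "d"}                    # example fixed subset

def subsets(X):
    # All subsets of X, as frozensets
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

alpha = lambda Y: Y & B           # the map Y ↦ Y ∩ B
add = lambda X, Y: X ^ Y          # GF(2) addition = symmetric difference

# alpha(S + T) = alpha(S) + alpha(T) for every pair of subsets
assert all(alpha(add(S, T)) == add(alpha(S), alpha(T))
           for S in subsets(A) for T in subsets(A))
```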

Thus, we have a linear transformation. Since the field has only two elements, the only possible eigenvalues are $c = 0$ and $c = 1$.

But eigenvalues are very easy to spot.

For example, if $\alpha(Y) = 0Y = \phi$, then $Y \cap B = \phi$. Conversely, if $Y \cap B = \phi$ then of course $\alpha(Y) = 0$. Thus, the eigenspace of $0$ is the subspace of $V$ consisting of all subsets of $B^c$.

Similarly, suppose $\alpha(Y) = 1Y = Y$; then $Y \cap B = Y$, so $Y \subset B$. Conversely, if $Y \subset B$ then $\alpha(Y) = Y$. It follows that the eigenspace of $1$ is the subspace of $V$ consisting of all subsets of $B$.
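These two eigenspaces can be enumerated directly on a small example; a sketch (the sets $A$ and $B$ below are my choices):

```python
from itertools import combinations

A = frozenset({"a", "b", "c"})
B = frozenset({"b", "c"})
Bc = A - B                         # complement of B in A

def subsets(X):
    # All subsets of X, as a set of frozensets
    X = list(X)
    return {frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)}

alpha = lambda Y: Y & B

eig0 = {Y for Y in subsets(A) if alpha(Y) == frozenset()}  # alpha(Y) = 0·Y
eig1 = {Y for Y in subsets(A) if alpha(Y) == Y}            # alpha(Y) = 1·Y

assert eig0 == subsets(Bc)   # eigenspace of 0 = all subsets of B^c
assert eig1 == subsets(B)    # eigenspace of 1 = all subsets of B
```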

Take an example to clarify: let $A = \{a,b,c\}$. The sets $\{a\},\{b\},\{c\}$ form a basis of $V$ under addition (easy to see, because addition becomes union for disjoint sets). So to form the matrix of $\alpha$ with, say, $B = \{b,c\}$, we look at $\alpha(\{a\}) = \phi$, $\alpha(\{b\}) = \{b\}$, $\alpha(\{c\}) = \{c\}$. Thus the matrix of $\alpha$ is: $$ \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} $$

You can see that the matrix of $\alpha$ is diagonal, with a $1$ in position $i$ exactly when the $i$-th basis element lies in $B$, and a $0$ otherwise. In a basis of this kind, $\alpha$ is diagonal for any choice of $A$ and $B$, so the eigenvalues and eigenspaces can be read off directly. This outlook helps in seeing $\alpha$ in a new light.
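The matrix computation above can be reproduced in a few lines of Python (same $A$, $B$, and basis as in the example; `coords` is a helper name I introduce):

```python
A = ["a", "b", "c"]            # ordered basis: {a}, {b}, {c}
B = {"b", "c"}
alpha = lambda Y: Y & B

# coordinate vector of a subset Y with respect to the basis
coords = lambda Y: [1 if x in Y else 0 for x in A]

# column j of the matrix is the coordinate vector of alpha({A[j]})
cols = [coords(alpha({x})) for x in A]
M = [[cols[j][i] for j in range(len(A))] for i in range(len(A))]

assert M == [[0, 0, 0],
             [0, 1, 0],
             [0, 0, 1]]        # diagonal: 1 exactly where the basis element is in B
```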

ANOTHER ANSWER

Let $A=\{a_1,a_2,\dots,a_n\}$.

A subset $S$ of $A$ is represented by (identified with) the vector $(v_1,v_2,\dots,v_n) \in (GF(2))^n$ defined by:

$$v_k=1 \ \iff \ a_k \in S, \ \text{otherwise} \ v_k=0$$

In particular the null vector corresponds to $\emptyset$.

For example, if $A=\{a,b,c\}$, subset $\{a,c\}$ is represented by vector $(1,0,1)$.
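This representation is one line of Python (the helper name `rep` is mine):

```python
A = ["a", "b", "c"]   # fix an ordering a_1, a_2, a_3
rep = lambda S: tuple(1 if x in S else 0 for x in A)

assert rep({"a", "c"}) == (1, 0, 1)
assert rep(set()) == (0, 0, 0)   # the empty set is the null vector
```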

Let us now consider the two cases that can occur:

  • eigenvalue $0$ : $Y \cap B = 0\,Y \ \ \iff \ \ Y \cap B = \emptyset$, the corresponding eigenspace consisting of all the sets $Y$ that are disjoint from $B$.

  • eigenvalue $1$ : $Y \cap B = 1\,Y = Y$. I leave you to finish the reasoning...