Edit: following my exchange with Travis, I have clarified the question (edits are in bold).
Suppose we define matrices $\alpha, \beta$ with these properties:
$$ \alpha=\alpha^\dagger\\ \beta=\beta^\dagger\\ \alpha^2=AI\\ \beta^2=BI\\ \alpha\beta+\beta\alpha=CI $$
where $I$ is the identity and $A,B,C$ are real numbers; in the general case $A,B,C$ are independent of one another.
Given prescribed values of $A$, $B$, and $C$, my goal is to find matrices $\alpha, \beta$ satisfying these relations.
This can be extended to $n$ matrices, but I am only interested in the case where $n\leq 4$. For example, the case with 3 matrices would be
$$ \alpha=\alpha^\dagger\\ \beta=\beta^\dagger\\ \gamma=\gamma^\dagger\\ \alpha^2=AI\\ \beta^2=BI\\ \gamma^2=CI\\ \alpha\beta+\beta\alpha=DI \\ \alpha\gamma+\gamma\alpha=EI\\ \beta\gamma+\gamma\beta=FI $$
where $I$ is the identity and $A,B,C,D,E,F$ are real numbers; in the general case $A,B,C,D,E,F$ are independent of one another.
My goal here again is to find the matrices $\alpha, \beta, \gamma$ given some prescribed values of $A,B,C,D,E,F$.
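As a sanity check on the three-matrix relations, one known special case is the $2\times 2$ Pauli matrices: they are Hermitian, square to $I$, and pairwise anticommute, i.e. $A=B=C=1$ and $D=E=F=0$ in the notation above. A short numerical verification (just an illustration of the relations, not a general solution):

```python
import numpy as np

# The three Pauli matrices: Hermitian, square to I, pairwise anticommuting.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

I2 = np.eye(2)
mats = [sx, sy, sz]

for m in mats:
    assert np.allclose(m, m.conj().T)      # Hermitian
    assert np.allclose(m @ m, I2)          # squares to I  (A = B = C = 1)
for i in range(3):
    for j in range(i + 1, 3):
        anti = mats[i] @ mats[j] + mats[j] @ mats[i]
        assert np.allclose(anti, np.zeros((2, 2)))  # D = E = F = 0

print("Pauli matrices satisfy the relations with A=B=C=1, D=E=F=0")
```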
The case with four matrices would be
$$ \alpha=\alpha^\dagger\\ \beta=\beta^\dagger\\ \gamma=\gamma^\dagger\\ \lambda=\lambda^\dagger\\ \alpha^2=AI\\ \beta^2=BI\\ \gamma^2=CI\\ \lambda^2=DI\\ \alpha\beta+\beta\alpha=EI \\ \alpha\gamma+\gamma\alpha=FI\\ \alpha\lambda+\lambda\alpha=GI\\ \beta\gamma+\gamma\beta=HI\\ \beta\lambda+\lambda\beta=JI\\ \gamma\lambda+\lambda\gamma=KI $$
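For the four-matrix case there is likewise a familiar special instance: the $4\times 4$ Dirac matrices $\alpha_1, \alpha_2, \alpha_3, \beta$ (in the Dirac representation) are Hermitian, square to $I$, and pairwise anticommute, i.e. all of $A,B,C,D$ equal $1$ and all the cross terms vanish. A numerical check:

```python
import numpy as np

# Dirac-representation matrices: alpha_i = [[0, s_i], [s_i, 0]],
# beta = [[I, 0], [0, -I]], built from the 2x2 Pauli matrices s_i.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
zero2, I2 = np.zeros((2, 2)), np.eye(2)

alphas = [np.block([[zero2, si], [si, zero2]]) for si in s]
beta = np.block([[I2, zero2], [zero2, -I2]])
mats = alphas + [beta]

I4 = np.eye(4)
for m in mats:
    assert np.allclose(m, m.conj().T)      # Hermitian
    assert np.allclose(m @ m, I4)          # squares to I
for i in range(4):
    for j in range(i + 1, 4):
        anti = mats[i] @ mats[j] + mats[j] @ mats[i]
        assert np.allclose(anti, np.zeros((4, 4)))  # all anticommutators vanish

print("Dirac matrices realize the four-matrix relations")
```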
How would I go about characterizing the set of solutions?
The condition $\alpha = \alpha^\dagger$ says exactly that $\alpha$ is Hermitian, so $\alpha$ has real eigenvalues and is unitarily diagonalizable. The condition $\alpha^2 \in \Bbb R \cdot I$ then implies that those eigenvalues are all $\pm q$ for some $q$ (in fact $A \geq 0$ and $q = \sqrt{A}$, but we don't need this fact). Thus, there is some unitary matrix $P$ such that $$P \alpha P^{-1} = q \pmatrix{ I_{n - k} & \cdot \\ \cdot & -I_{k}} .$$ Now, if we denote $\beta' := P \beta P^{-1}$, the condition $\alpha \beta + \beta \alpha = C I$ is equivalent to $$\pmatrix{ I_{n - k} & \cdot \\ \cdot & -I_k} \beta' + \beta' \pmatrix{ I_{n - k} & \cdot \\ \cdot & -I_{k}} = rI$$ with $r = C/q$. If we write $\beta' = \pmatrix{X & T^\dagger \\ T & Y}$ in the same block structure, the left-hand side works out to $\pmatrix{2X & \cdot \\ \cdot & -2Y}$, so this condition is equivalent to $X = s I$ and $Y = -s I$ for $s = r/2 \in \Bbb R$.
In summary, these conditions force the existence of a $k \in \{0, \ldots, n\}$, a unitary matrix $P$, and a $k \times (n - k)$ matrix $T$ such that $$\alpha = q P^{-1} \pmatrix{ I_{n - k} & \cdot \\ \cdot & -I_k} P, \qquad \beta = P^{-1} \pmatrix{s I_{n - k} & T^{\dagger}\\T & -s I_k} P ,$$ with $A = q^2$ and $C = 2qs$. The remaining condition $\beta^2 = BI$ then amounts to $T^\dagger T = (B - s^2) I_{n-k}$ and $T T^\dagger = (B - s^2) I_k$.
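The construction above is easy to check numerically: pick arbitrary $q$, $s$, $k$, a random unitary $P$, and a random $k \times (n-k)$ block $T$, and verify that $\alpha, \beta$ are Hermitian with $\alpha^2 = q^2 I$ and $\alpha\beta + \beta\alpha = 2qs\, I$ (the values of $n, k, q, s$ below are arbitrary choices for illustration; $\beta^2 = BI$ requires the extra constraint on $T$ noted above and is not checked here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
q, s = 1.7, -0.4

# Random unitary via QR decomposition of a random complex matrix.
P, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
T = rng.standard_normal((k, n - k)) + 1j * rng.standard_normal((k, n - k))

D = np.diag([1.0] * (n - k) + [-1.0] * k)   # diag(I_{n-k}, -I_k)
Pinv = P.conj().T                           # P^{-1} = P^dagger for unitary P

alpha = q * Pinv @ D @ P
beta = Pinv @ np.block([[s * np.eye(n - k), T.conj().T],
                        [T, -s * np.eye(k)]]) @ P

I = np.eye(n)
assert np.allclose(alpha, alpha.conj().T)                        # Hermitian
assert np.allclose(beta, beta.conj().T)                          # Hermitian
assert np.allclose(alpha @ alpha, q**2 * I)                      # A = q^2
assert np.allclose(alpha @ beta + beta @ alpha, 2 * q * s * I)   # C = 2qs

print("construction verified: A =", q**2, " C =", 2 * q * s)
```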
So, in the case $n = 2$, the values $k = 0, 2$ both force $\alpha$ and $\beta$ to be multiples of the identity. In the more interesting case $k = 1$, using the usual parameterization of $U(2)$ gives a parameterization of all of the remaining solutions $(\alpha, \beta)$.
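To make the $n = 2$, $k = 1$ case concrete: in the basis where $\alpha$ is diagonal, $\beta' = \pmatrix{s & \bar t \\ t & -s}$ is a real combination of Pauli matrices, so $\beta^2$ is automatically a multiple of $I$ and all three relations hold with $A = q^2$, $B = s^2 + |t|^2$, $C = 2qs$. A sketch with an arbitrary choice of $q, s, t$ and unitary $P$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Arbitrary illustrative parameters: q, s real, t complex, P unitary.
q, s, t = 2.0, 0.5, 1.0 - 2.0j
theta = 0.3
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]], dtype=complex)

# alpha = q P^† sigma_z P;  beta' = s sigma_z + Re(t) sigma_x + Im(t) sigma_y.
alpha = q * P.conj().T @ sz @ P
beta = P.conj().T @ (s * sz + t.real * sx + t.imag * sy) @ P

I2 = np.eye(2)
assert np.allclose(alpha @ alpha, q**2 * I2)                     # A = q^2
assert np.allclose(beta @ beta, (s**2 + abs(t)**2) * I2)         # B = s^2 + |t|^2
assert np.allclose(alpha @ beta + beta @ alpha, 2 * q * s * I2)  # C = 2qs

print("A, B, C =", q**2, s**2 + abs(t)**2, 2 * q * s)
```

Note that since $|t|^2 = B - s^2 \geq 0$ and $s = C/(2q)$, prescribed values $(A, B, C)$ with $A > 0$ are realizable this way exactly when $B \geq C^2/(4A)$.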