$$A = 3 - \cfrac{2}{3 - \cfrac {2}{3 - \cfrac {2}{3 - \cfrac {2}{...}}}}$$
My answer is:
$$\begin{align} &A = 3 - \frac {2}{A}\\ \implies &\frac {A^2-3A+2}{A}=0\\ \implies &A^2-3A+2=0\\ \implies &(A-1)\cdot(A-2)=0\\ \implies &A=1\;\text{ or }\; A=2 \end{align}$$
I should note that I'm not sure whether the above answer is correct: I expected a single value for $A$ (it is a numeric expression), but I found two, $1$ and $2$. This seems to be a paradox.
Let us define two series. The first is \begin{align} a_1 &= 3 \\ a_2 &= 3 - \frac{2}{3} \\ a_3 &= 3 - \frac{2}{3 - \frac{2}{3}} \\ a_4 &= 3- \frac{2}{3 - \frac{2}{3 - \frac{2}{3}}} \\ &\vdots \\ a_{n+1} &= 3 - \frac{2}{a_n} \quad (*) \end{align} and \begin{align} b_1 &= 3 - 2 \\ b_2 &= 3 - \frac{2}{3-2} \\ b_3 &= 3 - \frac{2}{3 - \frac{2}{3-2}} \\ b_4 &= 3 - \frac{2}{3 - \frac{2}{3 - \frac{2}{3-2}}} \\ &\vdots \\ b_{n+1} &= 3 - \frac{2}{b_n} \quad (**) \\ \end{align}
Note: Both sequences satisfy the same recurrence relation, $(*)$ and $(**)$, but with different start values $a_1 = 3$ and $b_1 = 1$.
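As a quick numerical sketch (not part of the proof), we can iterate the recurrence $x_{n+1} = 3 - 2/x_n$ from the two start values above and watch where each sequence goes:

```python
# Iterate x_{n+1} = 3 - 2/x_n (recurrences (*) and (**)) numerically.
def iterate(x, steps=50):
    for _ in range(steps):
        x = 3 - 2 / x
    return x

print(iterate(3.0))  # a_1 = 3: the sequence approaches 2
print(iterate(1.0))  # b_1 = 1: stays exactly at 1, since 1 solves x = 3 - 2/x
```

So both candidate limits actually occur, depending on the start value.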
If a sequence satisfying $(*)$ or $(**)$ converges, $a_n \to a$, then also $a_{n+1} - a_n \to 0$, and we may pass to the limit on both sides of the recurrence: $$ a = 3 - \frac{2}{a} \quad (\#) $$ which indeed has the solutions $a = 1$ and $a = 2$.
However, that means we could also try the backward recurrence $$ c_n = 3 - \frac{2}{c_{n+1}} $$ or equivalently $$ c_{n+1} = \frac{2}{3 - c_n} \quad (\#\#) $$ because it has the same limit form $(\#)$.
Note that equation $(\#\#)$ is quite different from equation $(*)$ (see image below).
And indeed this recurrence relation $(\#\#)$ works too: using $c_1 = 1$ gives $c_n \to 1$, using $c_1 = 2$ gives $c_n \to 2$, and using $c_1 = 1000$ gives $c_n \to 1$.
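These three claims are easy to check numerically; a minimal sketch:

```python
# Iterate (##): c_{n+1} = 2 / (3 - c_n) for the three start values above.
def iterate_g(c, steps=100):
    for _ in range(steps):
        c = 2 / (3 - c)
    return c

print(iterate_g(1.0))     # stays exactly at 1 (fixed point)
print(iterate_g(2.0))     # stays exactly at 2 (fixed point)
print(iterate_g(1000.0))  # converges to 1
```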
So why is this? There are still two possible limits, and the start value decides which one is reached.
Here is an image:
The green graph is related to $(*)$: $$ f(x) = 3-\frac{2}{x} $$ the blue graph is related to $(\#\#)$: $$ g(x) = \frac{2}{3-x} $$ and the red graph is the identity function: $$ \mbox{id}(x) = x $$
We see that both $f$ and $g$ hit the identity at $x=1$ and $x=2$. Those points are fixed points of $f$ and $g$: \begin{align} x^* &= f(x^*) \\ x^* &= g(x^*) \end{align} And one could now try to apply the theory of fixed points, esp. properties of fixed point iterations. \begin{align} x_{n+1} &= f(x_n) \quad (\$) \\ x_{n+1} &= g(x_n) \end{align}
The fixed-point iteration of $f$ is like the iteration of the original continued fraction (compare $(\$)$ with $(*)$ or $(**)$).
The theory behind this can now help with statements about convergence and the dependence on start values.
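As one concrete piece of that theory (the standard derivative test, not something specific to this problem): a fixed point $x^*$ of an iteration $x_{n+1} = h(x_n)$ is attracting when $|h'(x^*)| < 1$ and repelling when $|h'(x^*)| > 1$. Evaluating the derivatives of the two maps above at both fixed points:

```python
# Derivative test for the two iterations:
#   f(x) = 3 - 2/x   (green graph)  has f'(x) = 2/x**2
#   g(x) = 2/(3-x)   (blue graph)   has g'(x) = 2/(3-x)**2
def fp(x): return 2 / x**2          # f'(x)
def gp(x): return 2 / (3 - x)**2    # g'(x)

for xstar in (1, 2):
    print(f"x* = {xstar}: |f'| = {abs(fp(xstar))}, |g'| = {abs(gp(xstar))}")
# For f: x* = 2 is attracting (|f'(2)| = 0.5), x* = 1 is repelling (|f'(1)| = 2).
# For g: x* = 1 is attracting (|g'(1)| = 0.5), x* = 2 is repelling (|g'(2)| = 2).
```

This matches the observations: the forward iteration $(*)$ is drawn to $2$, while the backward iteration $(\#\#)$ is drawn to $1$ (unless started exactly on the other fixed point).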