Suppose there is a partial differential equation on $\Omega=[0,2a]\times[b,c]$ satisfying $$(x-a)u_x + y^2u_{yy}=f,$$ where boundary conditions at $y=b$ and $y=c$ are given. To uniquely determine the solution to this special equation, do we need a boundary condition in the $x$ direction?
My first thought is that the answer is no, because I can numerically solve this equation with only the boundary conditions at $y=b$ and $y=c$.
For example, when $x=a$ the equation reduces to $y^2u_{yy}=f$ with boundary conditions at $y=b$ and $y=c$, so we can solve it numerically along this special line. Then, starting from this result, we use a backward finite difference scheme in the $x$ direction on the domain $[0,a]\times[b,c]$ and a forward finite difference scheme in the $x$ direction on the domain $[a,2a]\times[b,c]$ to solve the equation numerically.
Is that true theoretically?
TL;DR: yes.
The question touches on both analytical and numerical procedures, so I will divide my answer into three parts: 1) analytical solutions to the initial value problem, 2) continuous analytical solutions, and 3) numerical solutions.
Analytical solutions to the initial value problem
The IVP is $$ (x-a)u_x + y^2u_{yy}=f(x,y) $$ for some function $u(x,y)$ with $b \leq y \leq c$, $x \geq 0$, boundary conditions $u(x,b)=g_b(x)$, $u(x,c)=g_c(x)$, and initial condition $u(0,y) = h(y)$.

We can rearrange the governing equation to $$ u_x =\frac{f(x,y) - y^2u_{yy}}{x-a} $$ and we see that, as we advance through $x$ from our initial condition toward $x=a$, the denominator goes to zero; thus, unless the numerator also goes to zero, the $x$ derivative of the solution will diverge, and the solution will not exist beyond $x=a$. The condition of a vanishing numerator is very strict, and will almost never hold (in the measure-theoretic sense). We thus conclude that solutions to the IVP are almost certainly only defined for $x<a$.
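To make the divergence concrete, here is a toy scalar analogue (my own illustrative example, not part of the original problem) in which the numerator $f - y^2u_{yy}$ is held fixed at $1$, so the equation reduces to $(x-a)u'(x) = 1$ with exact solution $u = \ln|x-a| + C$, which diverges at $x=a$:

```python
import numpy as np

# Toy scalar analogue of the singular equation: (x - a) u'(x) = 1,
# i.e. u'(x) = 1 / (x - a), with exact solution u = ln|x - a| + C.
# We march with forward Euler from x = 0 toward x = a and watch the
# derivative blow up as we approach the singular line.
a = 1.0
dx = 1e-3
x, u = 0.0, 0.0
max_slope = 0.0
while x < a - dx:
    slope = 1.0 / (x - a)            # the (numerator)/(x - a) structure
    max_slope = max(max_slope, abs(slope))
    u += dx * slope
    x += dx
print(max_slope)  # grows like 1/dx as the march approaches x = a
```

Refining `dx` only makes the recorded slope larger; no step size rescues the march past $x=a$ unless the numerator also vanishes there.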
This divergence of the derivative comes from the fact that the equation is singular: the coefficient of one of the derivatives goes to zero, and thus that derivative will go to infinity unless the rest of the equation also goes to zero.
The condition that the rest of the equation goes to zero can be stated in the problem formulation; however, this is a strong condition and effectively acts as our boundary condition in $x$.
Continuous analytical solutions
The problem is $$ (x-a)u_x + y^2u_{yy}=f(x,y) $$ to be solved for some continuous function $u(x,y)$ for $b \leq y \leq c$, $x \geq 0$, with boundary conditions $u(x,b)=g_b(x)$, $u(x,c)=g_c(x)$. The condition that $u$ is continuous implies that $$ \lim_{x \rightarrow a} u(x,y) = u(a,y), $$ and at $x=a$ the equation reduces to $$ y^2u_{yy}=f(a,y), $$ to be solved with $u(a,b)=g_b(a)$, $u(a,c)=g_c(a)$.

However, to continue we need to determine $u_x$ so that we can evolve the equation forward and backward in $x$. To do this let us assume that $u$ is analytic in $x$ local to $x=a$; such a solution may not exist, and may not be unique, but let's find out. Local to $x=a$ we have the Taylor expansions $$ u_x(x,y) = u_x(a,y) + u_{xx}(a,y)(x-a) + O((x-a)^2), $$ $$ u_{yy}(x,y) = u_{yy}(a,y) + u_{xyy}(a,y)(x-a) + O((x-a)^2), $$ $$ f(x,y) = f(a,y) + f_x(a,y)(x-a) + O((x-a)^2), $$ thus $$ (x-a)u_x(a,y) + y^2\big(u_{yy}(a,y) + u_{xyy}(a,y)(x-a)\big) + O((x-a)^2) = f(a,y) + f_x(a,y)(x-a) + O((x-a)^2). $$ Using our equation for $u_{yy}(a,y)$, namely $y^2u_{yy}(a,y)=f(a,y)$, and dividing through by $(x-a)$, $$ u_x(a,y) + y^2u_{xyy}(a,y) + O(x-a) = f_x(a,y) + O(x-a), $$ and taking the limit $x \rightarrow a$ $$ u_x(a,y) + y^2u_{xyy}(a,y) = f_x(a,y) $$ with $u_x(a,b)=(g_b)_x(a)$, $u_x(a,c)=(g_c)_x(a)$. This equation can be solved with $u_x(a,y)$ as the unknown function. Similar equations can be found for $u_{xx}(a,y)$, $u_{xxx}(a,y)$, etc. From these we can reconstruct a solution in an arbitrarily large region around $x=a$ (provided the solution does not diverge for finite $x$).
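As a sketch of how the reduced problem on the line $x=a$ can be solved numerically, here is a minimal finite-difference boundary value solver. The grid size, the manufactured test solution $v(y)=y^2$, and all names are my own illustrative choices; a dense solve is used purely for clarity (a tridiagonal solver would be the idiomatic choice):

```python
import numpy as np

def solve_bvp(F, b, c, vb, vc, alpha=0.0, n=201):
    """Solve alpha*v + y^2 v''(y) = F(y) on [b, c] with v(b)=vb, v(c)=vc,
    using second-order central differences.  alpha=0 corresponds to the
    reduced equation for u(a, y); alpha=1 to the auxiliary equation
    satisfied by u_x(a, y)."""
    y = np.linspace(b, c, n)
    h = y[1] - y[0]
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0], rhs[0] = 1.0, vb        # Dirichlet condition at y = b
    A[-1, -1], rhs[-1] = 1.0, vc     # Dirichlet condition at y = c
    for i in range(1, n - 1):
        A[i, i - 1] = y[i]**2 / h**2
        A[i, i]     = alpha - 2.0 * y[i]**2 / h**2
        A[i, i + 1] = y[i]**2 / h**2
        rhs[i] = F(y[i])
    return y, np.linalg.solve(A, rhs)

# Manufactured check with v(y) = y^2 on [1, 2], for which y^2 v'' = 2 y^2.
b, c = 1.0, 2.0
y, v = solve_bvp(lambda s: 2.0 * s**2, b, c, b**2, c**2)
print(np.max(np.abs(v - y**2)))   # tiny: central differences are exact here
```

The same routine with `alpha=1` and the appropriate right-hand side and boundary data $(g_b)_x(a)$, $(g_c)_x(a)$ yields $u_x(a,y)$.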
Numerical solutions
The OP proposed numerical methods initiating at $x=a$ as a way of constructing solutions to this equation. This is possible, but care must be taken. Many numerical methods (explicit methods) require the derivative of the function at a point in space, and then use this to project the solution to another point. This means that starting at $x=a$ we require $u_x(a,y)$ to find the solution at $x=a \pm \Delta x$. $u_x(a,y)$ cannot be found from the original equation, but it can be found from the result in 'Continuous analytical solutions' above; thus, to use an explicit method we must first find $u(a,y)$ and $u_x(a,y)$, and use these to take the initial step in $x$.

For small $\Delta x$ we may find that there are still issues in using the general expression for $u_x$: computers can struggle with expressions of the form (big $-$ big)/tiny, so it may be necessary to use some of the higher derivatives at $x=a$ to project further away from the singular line at high enough accuracy.
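A small experiment illustrating the (big $-$ big)/tiny issue, using a manufactured solution $u(x,y)=(x-a)^2+y^2$ of my own choosing, for which $f = 2(x-a)^2 + 2y^2$ and the true derivative is $u_x = 2(x-a)$:

```python
import numpy as np

# Very close to x = a, the general formula u_x = (f - y^2 u_yy)/(x - a)
# subtracts two nearly equal O(1) quantities and divides by a tiny number,
# so rounding error swamps the true O(x - a) result.
a, y0, hy = 1.0, 1.5, 1e-2
x = a + 1e-8                      # a point very close to the singular line

u = lambda x, y: (x - a)**2 + y**2            # manufactured solution
f = lambda x, y: 2.0 * (x - a)**2 + 2.0 * y**2  # matching forcing

# u_yy by central differences, as a marching code would compute it
uyy = (u(x, y0 + hy) - 2.0 * u(x, y0) + u(x, y0 - hy)) / hy**2

ux_formula = (f(x, y0) - y0**2 * uyy) / (x - a)   # general formula
ux_exact = 2.0 * (x - a)                          # known true derivative

rel_err = abs(ux_formula - ux_exact) / abs(ux_exact)
print(rel_err)   # catastrophic cancellation makes this O(1) or far worse
```

In this toy setting the Taylor-based value $u_x(a,y)$ is available exactly, which is precisely why using it (and higher $x$-derivatives at $x=a$) to step away from the singular line is preferable to evaluating the general formula there.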