In the revised simplex method, the basis matrix should never be singular, so that we can invert it. But in real implementations, it often happens that the selected basis matrix is singular (after crashing for an initial basis, or after a pivot step).
Is there a way to detect and repair such singularity efficiently? Or is it possible to prevent it from happening in the first place?
When you perform pivoting steps, you can never end up choosing a set of columns that doesn't form a basis. (This is true for both the ordinary simplex method and the revised simplex method: they pivot in the same way; the only difference is that the revised simplex method avoids some unnecessary bookkeeping.)
Say the constraints for your linear program are $A\mathbf x = \mathbf b$ and $\mathbf x \ge \mathbf 0$. If the old basis consists of some set of variables $\mathcal B$, and the entering variable is $x_i$, then the relevant columns of the simplex tableau $\widetilde{A} = A_{\mathcal B}^{-1} A$ look like: $$\begin{array}{cccc|c} & & \mathbf x_{\mathcal B} & & x_i \\ \hline 1 & 0 & \cdots & 0 & \widetilde{a}_{1i} \\ 0 & 1 & \cdots & 0 & \widetilde{a}_{2i} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & \widetilde{a}_{ni} \end{array}$$ The pivoting rule tells us that when $x_i$ enters the basis, the pivot entry $\widetilde{a}_{ji}$ is chosen among all of the positive entries in this column to minimize the ratio $\widetilde{b}_j / \widetilde{a}_{ji}$: the usual minimum-ratio test. When this happens, $x_j$ leaves the basis.
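The minimum-ratio test is easy to sketch in code. Here is a minimal NumPy version (the function name and the tolerance are my own choices, not from any particular solver):

```python
import numpy as np

def ratio_test(col, b_tilde):
    """Minimum-ratio test: among rows with a positive entry in the
    entering column `col`, pick the row j minimizing b_tilde[j] / col[j].
    Returns the leaving row index, or None if the column has no
    positive entry (the LP is unbounded in that direction)."""
    positive = col > 1e-9                  # tolerance guards against round-off
    if not positive.any():
        return None                        # unbounded: no pivot exists
    ratios = np.full_like(b_tilde, np.inf, dtype=float)
    ratios[positive] = b_tilde[positive] / col[positive]
    return int(np.argmin(ratios))
```

For example, with column $(2, -1, 4)$ and right-hand side $(6, 3, 4)$, the candidate ratios are $3$ and $1$, so the third row wins.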
In particular, the pivot entry $\widetilde{a}_{ji}$ is nonzero. This is all we need to make sure that the new choice of columns forms a basis, because it's possible to row-reduce: divide the $j^{\text{th}}$ row by $\widetilde{a}_{ji}$, and subtract multiples of that row from the other rows. When we do so, we get an identity matrix in the columns corresponding to the new basic columns, which is the proof that those columns are linearly independent.
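To see this numerically, the following sketch (random data, made-up indices) swaps an entering column into a basis at a row whose pivot entry is nonzero, and checks that the new basis matrix is still nonsingular:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))       # random constraint matrix, 3 rows

basis = [0, 1, 2]                     # these columns are independent (generic A)
B = A[:, basis]
A_tilde = np.linalg.solve(B, A)       # tableau relative to the current basis

entering = 4
col = A_tilde[:, entering]
j = int(np.argmax(np.abs(col)))       # any row with a nonzero pivot entry works

new_basis = basis.copy()
new_basis[j] = entering               # entering variable replaces the leaving one
B_new = A[:, new_basis]
# The new basis matrix is nonsingular precisely because the pivot was nonzero:
assert abs(np.linalg.det(B_new)) > 1e-9
```

In fact, replacing column $j$ of $A_{\mathcal B}$ by the entering column scales the determinant by exactly the pivot entry $\widetilde{a}_{ji}$, which is another way to see why a nonzero pivot guarantees a new basis.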
(At work is the following characterization: $n$ columns of $A$ are linearly independent exactly when we can row-reduce $A$ to make those columns contain an identity matrix.)
It may happen that the entering column has no positive entries, so no pivot can be chosen. But this happens exactly when the linear program is unbounded, and in that case we don't need a new basis at all.
Now, choosing the initial columns to be linearly independent isn't automatic. But we can make it happen anyway, depending on what our linear program looks like.
A common case is that we have constraints $A\mathbf x \le \mathbf b$ with $\mathbf b \ge \mathbf 0$. In this case, when we add slack variables to put the linear program into equational form, the $n$ slack variables are our initial basis. Their columns form an identity matrix, so they're always linearly independent.
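Concretely, here is that construction with made-up numbers (the matrix and right-hand side are illustrative):

```python
import numpy as np

# Hypothetical constraints A x <= b with b >= 0
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])

m = A.shape[0]
A_eq = np.hstack([A, np.eye(m)])   # add one slack variable per constraint
# The slack columns form the identity, hence are linearly independent:
assert np.array_equal(A_eq[:, -m:], np.eye(m))
# Initial BFS: original variables 0, slacks equal b (feasible since b >= 0)
x = np.concatenate([np.zeros(A.shape[1]), b])
assert np.allclose(A_eq @ x, b)
```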
When we don't have this, we use the two-phase simplex method. There, a similar situation saves us: we add artificial variables, and the artificial columns form an identity matrix, so they are always linearly independent and can serve as our initial basis.
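A minimal sketch of that phase-1 setup, again with made-up numbers (rows should first be scaled so $\mathbf b \ge \mathbf 0$):

```python
import numpy as np

# Hypothetical equality constraints A x = b with b >= 0
A = np.array([[1.0, -1.0],
              [2.0,  1.0]])
b = np.array([1.0, 4.0])
m, n = A.shape

A1 = np.hstack([A, np.eye(m)])                   # one artificial variable per row
c1 = np.concatenate([np.zeros(n), np.ones(m)])   # phase-1 objective: sum of artificials

# The artificial columns are the identity, hence an initial basis:
assert np.array_equal(A1[:, n:], np.eye(m))
# Basic feasible point: the artificials take the values of b.
x0 = np.concatenate([np.zeros(n), b])
assert np.allclose(A1 @ x0, b)
assert c1 @ x0 == b.sum()   # phase 1 then drives this objective down to 0
```

If phase 1 ends with the artificials at zero, the remaining basis consists of genuine columns of $A$ and phase 2 can start from it.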