I have pasted more than I refer to, hoping to be clearer.
Look at the claim of the theorem: it states that we can change coordinates until the defining function $r$ of the hypersurface $M$ reaches a "good" form.
So let's consider the equation $$ r=y_1+\sum_{j=2}^{s^++1}|z_j|^2-\sum_{j=s^++2}^{s^++s^-+1}|z_j|^2 + o^2(x_1,z')\;. $$ If we compute the Levi form of $r$ at $0$ from this equation, we MUST get an eigenvalue equal to $0$: the variable $z_1$ enters only through $y_1$ (and higher-order terms), and $y_1$ is pluriharmonic, so the $z_1$-row and $z_1$-column of the complex Hessian vanish at $0$. Do you agree? But this seems strange: we would be saying that the Levi form of every twice-differentiable $r:\Bbb C^n\to\Bbb R$ has at least one eigenvalue equal to zero. Taking $r=|z_1|^2+\cdots+|z_n|^2$, all the eigenvalues equal $1$; no eigenvalue vanishes.
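As a numerical sanity check of that last claim, here is a small sketch in numpy; the helper `complex_hessian` is my own (a finite-difference approximation of the Wirtinger derivatives $\partial^2 r/\partial z_j\partial\bar z_k$, not anything from the text):

```python
import numpy as np

def complex_hessian(r, n, h=1e-4):
    """Approximate the complex Hessian (d^2 r / dz_j dzbar_k)_{j,k} of
    r: R^{2n} -> R at the origin, where the real coordinates are ordered
    (x_1, y_1, ..., x_n, y_n), by central finite differences."""
    p = np.zeros(2 * n)
    E = np.eye(2 * n)
    # real Hessian by central second differences
    H = np.zeros((2 * n, 2 * n))
    for a in range(2 * n):
        for b in range(2 * n):
            ea, eb = h * E[a], h * E[b]
            H[a, b] = (r(p + ea + eb) - r(p + ea - eb)
                       - r(p - ea + eb) + r(p - ea - eb)) / (4 * h * h)
    # Wirtinger: d_{z_j} d_{zbar_k}
    #   = 1/4 [ (dx_j dx_k + dy_j dy_k) + i (dx_j dy_k - dy_j dx_k) ]
    L = np.zeros((n, n), dtype=complex)
    for j in range(n):
        for k in range(n):
            xj, yj, xk, yk = 2 * j, 2 * j + 1, 2 * k, 2 * k + 1
            L[j, k] = 0.25 * (H[xj, xk] + H[yj, yk]
                              + 1j * (H[xj, yk] - H[yj, xk]))
    return L

# r = |z_1|^2 + |z_2|^2 + |z_3|^2 in real coordinates
r = lambda v: np.sum(v ** 2)
L = complex_hessian(r, 3)
print(np.linalg.eigvalsh(L))  # all three eigenvalues equal 1
```

So for this $r$ the complex Hessian on all of $\mathbb{C}^3$ is the identity, with no zero eigenvalue, which is exactly what puzzles me.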
Where is the problem?

The Levi form is defined as $$L_M(z)=L_r(z)\vert_{T^{\mathbb{C}}_zM}$$ i.e. as the complex Hessian of $r$ restricted to the maximal complex subspace of the tangent space to $M$.
If you put $M$ in the form $$y_1=Q(z_2,\ldots, z_n)+h.o.t.$$ locally around $0$, then the tangent hyperplane at $0$ is $y_1=0$, so the complex tangent is $z_1=0$, i.e. $$T^{\mathbb{C}}_0M=\mathrm{Span}\{\partial_{z_2}, \ldots, \partial_{z_n}\}\;.$$ If you denote by $A$ the matrix associated with the quadratic form $Q$, then $$L_r(0)=\begin{pmatrix}0&0&\cdots& 0\\0& & &\\\vdots& &A& \\0& & & \end{pmatrix}$$ with respect to the basis $\{\partial_{z_1},\ldots, \partial_{z_n}\}$ of $T_0\mathbb{C}^n$. Now, if you restrict this to $T^{\mathbb{C}}_0M$, you obtain the $(n-1)\times(n-1)$ matrix $A$, which is $L_M(0)$.
If $Q$ is positive definite, then so is $L_M(0)$. The direction of the $0$ eigenvector is precisely the one you get rid of when restricting to the maximal complex subspace of $T_0M$.
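To see this restriction concretely, here is a numerical sketch in numpy for the model case $n=3$, $s^+=s^-=1$, i.e. $r=y_1+|z_2|^2-|z_3|^2$ (the helper `complex_hessian`, a finite-difference approximation of $\partial^2 r/\partial z_j\partial\bar z_k$, is my own, not from any reference):

```python
import numpy as np

def complex_hessian(r, n, h=1e-4):
    """Approximate the complex Hessian (d^2 r / dz_j dzbar_k)_{j,k} of
    r: R^{2n} -> R at the origin, real coordinates (x_1, y_1, ..., x_n, y_n),
    by central finite differences and the Wirtinger formulas."""
    p = np.zeros(2 * n)
    E = np.eye(2 * n)
    H = np.zeros((2 * n, 2 * n))  # real Hessian
    for a in range(2 * n):
        for b in range(2 * n):
            ea, eb = h * E[a], h * E[b]
            H[a, b] = (r(p + ea + eb) - r(p + ea - eb)
                       - r(p - ea + eb) + r(p - ea - eb)) / (4 * h * h)
    L = np.zeros((n, n), dtype=complex)
    for j in range(n):
        for k in range(n):
            xj, yj, xk, yk = 2 * j, 2 * j + 1, 2 * k, 2 * k + 1
            L[j, k] = 0.25 * (H[xj, xk] + H[yj, yk]
                              + 1j * (H[xj, yk] - H[yj, xk]))
    return L

# r = y_1 + |z_2|^2 - |z_3|^2 in real coordinates (x1, y1, x2, y2, x3, y3)
r = lambda v: v[1] + v[2]**2 + v[3]**2 - v[4]**2 - v[5]**2

L_full = complex_hessian(r, 3)
print(np.linalg.eigvalsh(L_full))  # one eigenvalue is (numerically) zero

# restrict to T^C_0 M = Span{d_{z_2}, d_{z_3}}: drop the z_1 row and column
L_M = L_full[1:, 1:]
print(np.linalg.eigvalsh(L_M))     # the zero eigenvalue is gone
```

The full complex Hessian is $\mathrm{diag}(0,1,-1)$, with the forced zero eigenvalue in the $\partial_{z_1}$ direction, while the restricted matrix $L_M(0)=\mathrm{diag}(1,-1)$ has no zero eigenvalue: this is exactly the direction that disappears when passing from $L_r(0)$ to $L_M(0)$.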