Did I make a calculation error?
Say we have a simple one-layer perceptron with a single output, where: $f$ is the activation function, $w$ is the weight (row) vector, $b$ is the bias, $x$ is the input vector, $y$ is the scalar output, $t$ is the scalar target, and $E$ is the loss function.
Then: $$y = f(wx + b)$$ $$ \text{Error} = E = \frac{\Vert y - t \Vert_2^2}{2} $$ $$\nabla_{w}E = (y - t)f'(wx + b)x^T$$ $$\nabla_w^2 E = ((y - t)f''(wx + b) + (f'(wx + b))^2)(xx^T)$$
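To check the algebra numerically, here is a finite-difference test of the gradient and Hessian formulas above (a sketch assuming a scalar output and $f = \tanh$; all the names are mine, not from any library):

```python
import numpy as np

# Finite-difference sanity check of the gradient and Hessian formulas,
# for a single (scalar) output with f = tanh.
rng = np.random.default_rng(0)
n = 3
w = rng.normal(size=n)
x = rng.normal(size=n)
b, t = 0.5, 0.2

f = np.tanh
df = lambda z: 1.0 - np.tanh(z) ** 2        # f'
d2f = lambda z: -2.0 * np.tanh(z) * df(z)   # f''

def E(w):
    return 0.5 * (f(w @ x + b) - t) ** 2

def grad_fn(w):
    z = w @ x + b
    return (f(z) - t) * df(z) * x           # (y - t) f'(wx + b) x

z = w @ x + b
y = f(z)
hess = ((y - t) * d2f(z) + df(z) ** 2) * np.outer(x, x)

# central finite differences of E (for the gradient) and of the
# analytic gradient (for the Hessian)
eps = 1e-5
I = np.eye(n)
grad_fd = np.array([(E(w + eps * I[i]) - E(w - eps * I[i])) / (2 * eps)
                    for i in range(n)])
hess_fd = np.array([(grad_fn(w + eps * I[i]) - grad_fn(w - eps * I[i])) / (2 * eps)
                    for i in range(n)])

print(np.allclose(grad_fn(w), grad_fd, atol=1e-7))  # True
print(np.allclose(hess, hess_fd, atol=1e-6))        # True
```

Both analytic expressions agree with the finite differences, so the formulas themselves seem right.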
Hence, since $((y - t)f''(wx + b) + (f'(wx + b))^2)$ is a scalar, we would conclude that:
$$(\nabla_w^2 E)^{-1} = ((y - t)f''(wx + b) + (f'(wx + b))^2)^{-1}(xx^T)^{-1}$$
Which means we need the inverse of $xx^T$. But since $xx^T$ is an outer product, its rank is $1$, so for input dimension greater than $1$ it is singular and doesn't have an inverse.
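A quick numerical confirmation of the rank argument (the example vector is mine):

```python
import numpy as np

# x x^T has a single nonzero singular value, ||x||^2, so for
# input dimension > 1 it is singular and has no inverse.
x = np.array([1.0, 2.0, 3.0])
A = np.outer(x, x)
s = np.linalg.svd(A, compute_uv=False)
print(np.linalg.matrix_rank(A))  # 1
print(round(s[0], 6))            # 14.0  (= ||x||^2 = 1 + 4 + 9)
```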
I feel like I did something wrong, since I've seen other people talk about training models with Newton's method, and that requires the inverse of the Hessian.
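One thing I noticed while checking this: the singular Hessian above comes from a single training example. Over a batch, the Hessian is a sum $\sum_i c_i\, x_i x_i^T$ of rank-1 terms, which is generically full rank once there are at least $n$ independent inputs. A tiny illustration (setting the per-sample scalars $c_i = 1$ for simplicity, which is my simplification, not the exact perceptron coefficients):

```python
import numpy as np

# A sum of rank-1 outer products over m >= n independent inputs
# is generically full rank, hence invertible.
rng = np.random.default_rng(1)
n, m = 3, 10
X = rng.normal(size=(m, n))          # m input vectors of dimension n
H = sum(np.outer(x, x) for x in X)   # = X.T @ X
print(np.linalg.matrix_rank(H))      # 3: full rank, so H is invertible
```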