Suppose we are trying to solve the following problem:
\begin{align} \text{minimize} & \hspace{8pt} f(x) \\ \text{subject to} & \hspace{8pt} g(x) = 0, \end{align} where $f$ and $g$ are both differentiable.
The method of Lagrange multipliers says that we should solve the following system of equations: \begin{align} \nabla f(x) &= \lambda \nabla g(x) \\ g(x) &= 0. \end{align}
If I form the Lagrangian, then I get this equation: $$L(x,\lambda) = f(x) - \lambda g(x).$$
If I take the gradient of the Lagrangian $L$ and set it equal to $0$, I get exactly the system of equations for the method of Lagrange multipliers: the derivative with respect to $x$ gives the first equation, and the derivative with respect to $\lambda$ gives the second.
Why is this the case?
The connection comes from calculus.
Consider the problem:
\begin{align} \text{minimize} \quad & f(x) \\ \text{subject to} \quad & x \in [a,b] \end{align}
If $f$ attains its minimum at an interior point $x^* \in (a,b)$, then by Fermat's theorem on stationary points, $$\frac{\mathrm{d}f}{\mathrm{d}x}(x^*) = 0.$$ (The extreme value theorem only guarantees that a minimum exists on $[a,b]$; it is Fermat's theorem that forces the derivative to vanish, and only at interior points — a minimum at an endpoint need not be stationary.)
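As a concrete illustration of this stationarity condition (the function and interval below are my own choice, not from the original post), consider minimising $f(x) = (x-2)^2$ over $[0,5]$:

```latex
% Illustrative instance: minimize f(x) = (x-2)^2 over [0, 5].
\begin{align}
  \frac{\mathrm{d}f}{\mathrm{d}x}(x) &= 2(x - 2) = 0
      &&\Longrightarrow\quad x^* = 2 \in (0, 5), \\
  f(x^*) &= 0
      && \text{(the endpoints give } f(0) = 4,\ f(5) = 9\text{)}.
\end{align}
```

The minimiser lies in the interior, so the derivative vanishes there, exactly as Fermat's theorem predicts.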
Now add an equality constraint to this problem to get a general constrained problem.
\begin{align} \text{minimize} \quad & f(x) \\ \text{subject to} \quad & g(x) = 0, \\ & x \in X \end{align}
The method of Lagrange multipliers forms a new function called the Lagrangian: $$\mathcal{L}(x, \lambda) = f(x) \ +\ \lambda g(x).$$ (The sign convention for $\lambda$ differs from the one in your question, but since $\lambda$ ranges over all real values, the two conventions are equivalent.)
Under some regularity assumptions (a constraint qualification such as $\nabla g(x^*) \neq 0$), it can be proved that if $x^*$ is the point where the equality-constrained problem above is minimised, then there is a unique $\lambda^*$ such that
$$\nabla \mathcal{L}(x^*, \lambda^*) = 0$$
Writing this out componentwise, $\nabla_x \mathcal{L}(x^*, \lambda^*) = \nabla f(x^*) + \lambda^* \nabla g(x^*) = 0$, while $\partial \mathcal{L} / \partial \lambda \,(x^*, \lambda^*) = g(x^*) = 0$, so the derivative with respect to $\lambda$ recovers the constraint itself. Replacing $\lambda^*$ by $-\lambda^*$ gives exactly the system of equations in your question.
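To see the system in action, the stationarity conditions can be solved symbolically. The following sketch uses SymPy on a hypothetical example of my own choosing (minimising $f(x,y) = x + y$ on the unit circle; none of these specifics come from the original post):

```python
import sympy as sp

# Hypothetical worked instance (my own choice, not from the post):
# minimize f(x, y) = x + y  subject to  g(x, y) = x^2 + y^2 - 1 = 0.
x, y, lam = sp.symbols("x y lam", real=True)
f = x + y
g = x**2 + y**2 - 1

# Lagrangian with the "+" sign convention used in this answer.
L = f + lam * g

# Setting the full gradient of L to zero: the x- and y-derivatives
# give grad f = -lam * grad g, and the lam-derivative recovers g = 0.
stationarity = [sp.diff(L, v) for v in (x, y, lam)]
sols = sp.solve(stationarity, [x, y, lam], dict=True)
```

SymPy finds two stationary points, $x = y = \pm\sqrt{2}/2$; evaluating $f$ at each shows the constrained minimum is $-\sqrt{2}$ at $x = y = -\sqrt{2}/2$ (the other point is the constrained maximum), and both points satisfy the constraint exactly.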