(For simplicity, say our domain $D \subset \Bbb R^2$, $\partial D$ is nice and smooth, and $\overline D$ is compact.)
Let $\lambda_1$ be the first eigenvalue of the problem $\Delta u + \lambda u = 0$ on $D$ with Dirichlet condition $u = 0$ on $\partial D$.
If I understand correctly: Rayleigh's theorem says that if $0 \ne f \in L^2 (D)$ has a weak gradient (in the Sobolev sense), it satisfies $$\lambda_1 \le \frac {||\,|\nabla f|\,||^2}{||f||^2}$$ where the norm is the $L^2(D)$ norm, and with equality if and only if $f$ is an eigenfunction of $\lambda_1$.
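As a sanity check on this characterization, here is a numerical sketch in the 1D analogue on $(0,\pi)$ with Dirichlet conditions, where $\lambda_1 = 1$ and $\phi_1 = \sin x$. (The grid size and the comparison function are my own choices for illustration, not part of the question.)

```python
import numpy as np

# 1D analogue on (0, pi) with Dirichlet boundary conditions:
# lambda_1 = 1 and phi_1(x) = sin(x).
n = 2000
h = np.pi / (n + 1)
x = np.linspace(h, np.pi - h, n)   # interior grid points

def rayleigh(f):
    # Discrete Rayleigh quotient ||f'||^2 / ||f||^2 via forward
    # differences; f is extended by 0 at the endpoints, consistent
    # with the Dirichlet condition.  The factor h cancels in the ratio.
    fext = np.concatenate(([0.0], f, [0.0]))
    grad = np.diff(fext) / h
    return np.sum(grad**2) / np.sum(f**2)

phi1 = np.sin(x)        # first eigenfunction: quotient is approximately 1
g = x * (np.pi - x)     # another function vanishing on the boundary:
                        # its quotient is 10/pi^2 > 1, consistent with
                        # the inequality being strict off the eigenspace
print(rayleigh(phi1))
print(rayleigh(g))
```

For $g(x) = x(\pi - x)$ the exact quotient is $10/\pi^2 \approx 1.013$, strictly above $\lambda_1 = 1$, as the theorem predicts for a function that is not an eigenfunction.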
But then what stops you from doing the following: Take $\phi_1$ to be the first eigenfunction; take $G$ to be some subdomain of $D$ (nice boundary, not touching $\partial D$), and let $f$ equal $\phi_1$ inside $G$ and $0$ outside. Then $\nabla f = \nabla \phi_1$ in the weak sense inside $G$, and $0$ outside; by the divergence theorem, we get $||\, |\nabla f|\, ||^2 = \lambda_1 ||f||^2$; by Rayleigh's theorem, $f$ is an eigenfunction.
But this is absurd. Where is my mistake?
OK, I believe I found my mistake. It was a mix of misunderstanding the definition of a weak gradient, and misusing the divergence theorem.
The divergence theorem says $\int_\Omega \operatorname{div} X = 0$ when $X$ is a $C^1$ vector field compactly supported on $\Omega$.
Plugging in $X = fY$ for a $C^1$ function $f$, we get $\int_\Omega (\nabla f \cdot Y + f \operatorname{div} Y) = 0$ for every $Y$ that is $C^1$ and compactly supported in $\Omega$. This identity is taken as the definition of a weak gradient: a vector field $g$ is a weak gradient of $f$ if $\int_\Omega (g \cdot Y + f \operatorname{div} Y) = 0$ for all such $Y$. In particular, a $C^1$ function has its classical gradient as its weak gradient.
Now suppose we take a nice smooth function $f$ and restrict it to some subdomain $\Omega'$, putting zero everywhere else, like I did in the question. Under what conditions does it have a weak gradient?
The only possible candidate for a weak gradient is the original gradient in $\Omega'$ and zero outside. So we must check, using the divergence theorem, whether this candidate satisfies the requirement.
But then we run into trouble: $fY$ is not necessarily zero on $\partial \Omega'$! So the divergence theorem picks up an additional term, $\int_{\partial\Omega'} f\, Y\cdot\nu$, where $\nu$ is the outward normal vector - and this term is in general nonzero, so the candidate fails and $f$ does not have a weak gradient.
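The failure of the candidate can be seen numerically in a 1D sketch (all choices here - the subinterval, the test function - are hypothetical illustrations, not from the question): truncate $f = \sin$ to a subinterval $(a,b)$ whose endpoints are not zeros of $\sin$, and test the only candidate $g = \cos$ inside, $0$ outside. The integral that should vanish instead equals the boundary term $f(b)\phi(b) - f(a)\phi(a)$.

```python
import numpy as np

# Truncate f = sin to Omega' = (a, b); endpoints are NOT zeros of sin.
a, b = 0.5, 2.5
x = np.linspace(0.0, np.pi, 200001)
dx = x[1] - x[0]
inside = (x >= a) & (x <= b)

f = np.where(inside, np.sin(x), 0.0)
g = np.where(inside, np.cos(x), 0.0)   # the only candidate weak derivative

# Test function: vanishes at 0 and pi (a stand-in for compact support).
phi = np.sin(x)**2
dphi = 2.0 * np.sin(x) * np.cos(x)

# If g were a weak derivative of f, this integral would be 0 for every phi.
I = np.sum(g*phi + f*dphi) * dx        # Riemann sum for the integral

# Instead it equals the boundary term f(b)phi(b) - f(a)phi(a) != 0.
boundary = np.sin(b)**3 - np.sin(a)**3
print(I, boundary)
```

So integration by parts leaves a nonzero boundary contribution, and the candidate (hence any) weak derivative fails to exist for this truncation.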
However, if I add the condition that $\Omega'$ is a nodal domain of the function (as in the proof of Courant's nodal domain theorem), then $f = 0$ on $\partial\Omega'$, so $fY = 0$ there, the boundary term vanishes, and the use of the divergence theorem is justified.
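The same 1D sketch confirms the nodal-domain case (again a hypothetical illustration): $\Omega' = (0, \pi/2)$ is a nodal domain of $f = \sin(2x)$, so $f$ vanishes on $\partial\Omega'$ and the candidate weak derivative now passes the test.

```python
import numpy as np

# Truncate f = sin(2x) to its nodal domain Omega' = (0, pi/2),
# where f vanishes on the boundary of Omega'.
x = np.linspace(0.0, np.pi, 200001)
dx = x[1] - x[0]
inside = x <= np.pi / 2

f = np.where(inside, np.sin(2*x), 0.0)
g = np.where(inside, 2.0*np.cos(2*x), 0.0)   # candidate weak derivative

phi = np.sin(x)**2                  # test function vanishing at 0 and pi
dphi = 2.0 * np.sin(x) * np.cos(x)

# The boundary term vanishes, so the integral is ~0: the truncation
# does have a weak derivative in this case.
I = np.sum(g*phi + f*dphi) * dx
print(I)
```

This is exactly the situation exploited in the proof of Courant's nodal domain theorem: restricting an eigenfunction to a nodal domain and extending by zero stays in the admissible class.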