According to Wikipedia (https://en.wikipedia.org/wiki/Score_(statistics)), the expected value of the score function equals zero, with the following proof:
\begin{equation} \begin{aligned} \mathbb{E}\left\{ \frac{ \partial }{ \partial \beta } \ln \mathcal{L}(\beta|X) \right\} &=\int^{\infty}_{-\infty} \frac{\frac{ \partial }{ \partial \beta } p(X|\beta)}{p(X|\beta)} p(X|\beta) \, dX \\ &= \frac{ \partial }{ \partial \beta }\int^{\infty}_{-\infty} p(X|\beta) \, dX = \frac{ \partial }{ \partial \beta } 1 = 0 \end{aligned} \end{equation}
My question is: why is the probability density function of the random variable $\frac{ \partial }{ \partial \beta } \ln p(X|\beta)$ equal to $p(X|\beta)$? Many thanks!
$$\frac{\partial}{\partial \beta} \left[\log \mathcal L(\beta \mid \boldsymbol X)\right]$$ is a function of the sample $\boldsymbol X$, so its distribution is induced by the joint density $p_{\boldsymbol X}(\boldsymbol x \mid \beta)$ of $\boldsymbol X$. Its expectation is therefore $$\int_{\boldsymbol x \in \Omega} \frac{\partial}{\partial \beta} \left[\log \mathcal L(\beta \mid \boldsymbol x)\right] p_{\boldsymbol X}(\boldsymbol x \mid \beta) \, d\boldsymbol x$$ where $\Omega$ is the support of $\boldsymbol X$. This is the multivariate analogue of the "law of the unconscious statistician" $$\operatorname{E}[g(X)] = \int_{x \in \Omega} g(x) f_X(x) \, dx$$ in the univariate case.
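The zero-mean property can also be checked numerically: draw samples from $p(x \mid \beta)$, evaluate the score at the true parameter, and average. A minimal sketch (my own illustration, not from the question) using a normal model with unknown mean $\mu$ and known $\sigma$, for which the score with respect to $\mu$ is $(x - \mu)/\sigma^2$:

```python
import numpy as np

# For X ~ N(mu, sigma^2), the score with respect to mu is
#   d/dmu log p(x | mu) = (x - mu) / sigma^2.
# Averaging the score over draws from p(x | mu) estimates its
# expectation, which should be close to zero at the true mu.
rng = np.random.default_rng(0)
mu, sigma = 2.0, 1.5
x = rng.normal(mu, sigma, size=1_000_000)
score = (x - mu) / sigma**2

print(score.mean())  # close to 0 (Monte Carlo error shrinks like 1/sqrt(n))
```

Note that the average is taken with samples drawn from the density at the *same* $\beta$ at which the score is differentiated; evaluating the score at a different parameter value would generally give a nonzero mean.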