My uni professor has taught us the following:
If the likelihood formed on the basis of a random sample from a distribution belongs to the regular exponential family, then the likelihood equation for finding the ML estimate of the parameter vector $\boldsymbol{\theta}$ is given by [equation 1]$$\mathop{\mathbb{E}}(\boldsymbol{T}(\boldsymbol{X_1}, ..., \boldsymbol{X_n}))=\boldsymbol{T}(\boldsymbol{x_1}, ..., \boldsymbol{x_n})$$ That is, the likelihood equation can be obtained by setting the expectation of $\boldsymbol{T}(\boldsymbol{X_1}, ..., \boldsymbol{X_n})$ equal to its observed value.
I am having some trouble interpreting the observed values.
If we take the normal distribution (unknown mean $\mu$, known variance $\sigma^2$) for example, we get that the sufficient statistic $\boldsymbol{T}(\boldsymbol{X_1}, ..., \boldsymbol{X_n})=\frac{x}{\sigma}$.
Calculating the LHS of equation 1: $\mathop{\mathbb{E}}(\boldsymbol{T}(\boldsymbol{X_1}, ..., \boldsymbol{X_n}))=\mathop{\mathbb{E}}(\frac{x}{\sigma})=\frac{1}{\sigma}\mathop{\mathbb{E}}(x)=\frac{\mu}{\sigma}$.
I'm not exactly sure how to calculate $\boldsymbol{T}(\boldsymbol{x_1}, ..., \boldsymbol{x_n})$.
Can anyone provide some guidance? (I haven't been able to find any resources that use this result.)
It is a bit unclear what you mean by $T(X_1,\dots,X_n)=\frac{x}{\sigma}$. I believe it should be $T(X_1,\dots,X_n)=\frac{1}{n\sigma} \sum_{i=1}^n X_i$, which is a sufficient statistic. It is then correct that $$\mathbb{E}[T(X_1,\dots,X_n)] = \frac{\mu}{\sigma}.$$

The quantity $T(x_1,\dots,x_n)$ is computed simply as $T(x_1,\dots,x_n)=\frac{1}{n\sigma} \sum_{i=1}^n x_i$, that is, by replacing the random variables $X_i$ with the observed values $x_i$.

The likelihood equation is then $$T(x_1,\dots,x_n) = \frac{1}{n\sigma} \sum_{i=1}^n x_i = \frac{\mu}{\sigma},$$ and solving for $\mu$ yields the well-known maximum likelihood estimate $$\hat{\mu} = \frac{1}{n} \sum_{i=1}^n x_i$$ for normally distributed data.
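As a quick numerical sanity check (a sketch in NumPy; the values of $\mu$, $\sigma$, and $n$ are chosen arbitrarily for illustration), you can verify that solving $T(x_1,\dots,x_n) = \mu/\sigma$ for $\mu$ recovers exactly the sample mean:

```python
import numpy as np

# Illustrative values (arbitrary choices, not from the question)
rng = np.random.default_rng(0)
mu_true, sigma, n = 3.0, 2.0, 10_000

# Simulate a random sample from N(mu, sigma^2) with sigma known
x = rng.normal(mu_true, sigma, size=n)

# Observed value of the sufficient statistic T(x_1, ..., x_n)
T_obs = x.sum() / (n * sigma)

# Likelihood equation: T_obs = mu / sigma  =>  mu_hat = sigma * T_obs
mu_hat = sigma * T_obs

print(mu_hat)           # should be close to mu_true
print(x.mean())         # mu_hat coincides with the sample mean
```

Here the "observed value" is just the statistic evaluated at the data, and equating it to its expectation $\mu/\sigma$ reproduces $\hat{\mu} = \bar{x}$.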