I'm trying to solve this exercise:
But I can't get to the correct answer. This is what I did:
$$\log p(X\mid\theta) = \sum_{i=1}^{d}\left(\log(\theta_i^{x_i}) + \log\left((1-\theta_i)^{1-x_i}\right)\right) \\ = \sum_{i=1}^{d} \left(x_i \log(\theta_i) + (1-x_i) \log (1-\theta_i)\right)$$
I look for the maximum, so:
$$\frac{\partial}{\partial \theta} \log p(X\mid\theta) = \sum_{i=1}^{d} \frac{x_i}{\theta_i} + \sum_{i=1}^{d} \frac{1-x_i}{1-\theta_i}$$
And:
$$\sum_{i=1}^{d} \frac{x_i}{{\hat{\theta}}_i} + \sum_{i=1}^{d} \frac{1-x_i}{1-\hat{\theta}_i} = 0$$
But when I solve that I can't get the correct answer.

For samples $\mathbf x_1, \mathbf x_2, \ldots, \mathbf x_n$, each of which is a $d$-dimensional vector, the likelihood function is $$ f(\theta; \mathbf x_1, \ldots, \mathbf x_n) = \prod_{k=1}^n\prod_{i=1}^d \theta_i^{\mathbf x_{k}(i)}(1-\theta_i)^{1-\mathbf x_{k}(i)}=\prod_{i=1}^d \theta_i^{\sum_{k=1}^n\mathbf x_{k}(i)}(1-\theta_i)^{n-\sum_{k=1}^n\mathbf x_{k}(i)}, $$ where $\mathbf x_{k}(i)$ is the $i$th coordinate of the vector $\mathbf x_{k}$.
The log-likelihood function is $$ L(\theta; \mathbf x_1, \ldots, \mathbf x_n) = \sum_{i=1}^d \left(\sum_{k=1}^n\mathbf x_{k}(i) \cdot\log(\theta_i)+\biggl(n-\sum_{k=1}^n\mathbf x_{k}(i)\biggr)\cdot\log(1-\theta_i)\right). $$ When differentiating with respect to $\theta_i$, all terms except the one containing $\theta_i$ vanish: for $i=1,\ldots,d$ $$ \frac{\partial}{\partial \theta_i}L(\theta; \mathbf x_1, \ldots, \mathbf x_n) = \frac{\sum_{k=1}^n\mathbf x_{k}(i)}{\theta_i}-\frac{n-\sum_{k=1}^n\mathbf x_{k}(i)}{1-\theta_i}=\frac{\sum_{k=1}^n\mathbf x_{k}(i)-n\theta_i}{\theta_i(1-\theta_i)}. $$ Note that $\left(\log(1-x)\right)'=-\frac{1}{1-x}$; this minus sign is what is missing in your attempt.
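As a sanity check (not part of the original derivation), the closed-form gradient can be compared against a finite-difference approximation of the log-likelihood on synthetic data. The names `n`, `d`, `S`, and the generating probability `0.6` below are illustrative choices, not from the post:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: n samples of a d-dimensional Bernoulli vector with
# independent coordinates.
n, d = 50, 3
X = (rng.random((n, d)) < 0.6).astype(float)
S = X.sum(axis=0)  # S[i] = sum_k x_k(i), the per-coordinate success count

def L(theta):
    """Log-likelihood: sum_i [ S_i log(theta_i) + (n - S_i) log(1 - theta_i) ]."""
    return np.sum(S * np.log(theta) + (n - S) * np.log(1 - theta))

theta = rng.uniform(0.2, 0.8, size=d)  # an arbitrary interior point

# Closed-form gradient from the derivation above:
# dL/dtheta_i = (S_i - n*theta_i) / (theta_i * (1 - theta_i))
grad = (S - n * theta) / (theta * (1 - theta))

# Compare against central finite differences, one coordinate at a time.
eps = 1e-6
for i in range(d):
    e = np.zeros(d)
    e[i] = eps
    numeric = (L(theta + e) - L(theta - e)) / (2 * eps)
    assert abs(numeric - grad[i]) < 1e-3 * max(1.0, abs(grad[i]))
```

The per-coordinate structure is visible here as well: perturbing $\theta_i$ only changes the $i$th term of the sum.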
For any $i$, the MLE $\hat\theta_i$ is the solution of the equation $$ \sum_{k=1}^n\mathbf x_{k}(i)-n\hat\theta_i =0, $$ that is, $$ \hat\theta_i=\frac{1}{n}\sum_{k=1}^n\mathbf x_{k}(i). $$ Stacking the coordinates, we obtain the MLE for the vector $\theta$: $$ \hat\theta=\frac{1}{n}\sum_{k=1}^n\mathbf x_{k}. $$
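In other words, the MLE is simply the per-coordinate sample mean. A minimal numerical illustration, with a hypothetical true parameter `theta_true` chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: true parameter theta_true, n i.i.d. samples.
n, d = 1000, 4
theta_true = np.array([0.2, 0.5, 0.7, 0.9])
X = (rng.random((n, d)) < theta_true).astype(float)  # shape (n, d)

# MLE: the per-coordinate sample mean, theta_hat = (1/n) sum_k x_k.
theta_hat = X.mean(axis=0)

def log_likelihood(theta):
    S = X.sum(axis=0)
    return np.sum(S * np.log(theta) + (n - S) * np.log(1 - theta))

# The MLE maximizes the log-likelihood on this data, so no other
# parameter (including the true one) can score strictly higher.
assert log_likelihood(theta_hat) >= log_likelihood(theta_true)
```

With $n = 1000$ samples, `theta_hat` lands close to `theta_true`, as expected from consistency of the MLE.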