I have found various references describing Naive Bayes, and they all state that it uses MLE for the calculation. However, this is my understanding:
$P(y=c|x)$ $\propto$ $P(x|y=c)P(y=c)$
where $c$ is a class the model may assign to $y$.

And that's all: we can estimate $P(x|y=c)$ and $P(y=c)$ from the data. I don't see where MLE plays a role.
The relationship is that the overall model is Naive Bayes, but the parameters of that model are estimated by MLE. A Naive Bayes model $$p(y, x_1, \dots, x_d) = q(y)\prod_{j=1}^d q_j(x_j|y)$$ has two kinds of parameters: the prior $q(y)$ and the conditionals $q_j(x_j|y)$. Maximizing the likelihood of the training data yields the familiar counting estimates $$\hat{q}(y) = \frac{\text{count}(y)}{n} \quad\text{and}\quad \hat{q}_j(x_j|y) = \frac{\text{count}_j(x_j, y)}{\text{count}(y)},$$ where $n$ is the number of training examples. So the "just count the data" recipe you describe *is* the MLE; it is not a separate procedure.
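To make the counting estimates concrete, here is a minimal sketch in Python. The toy dataset and feature names are made up for illustration; the point is only that the MLE for each parameter reduces to a ratio of counts.

```python
from collections import Counter, defaultdict

# Hypothetical toy dataset: each row is a feature tuple (x_1, x_2) and a label y.
data = [
    (("sunny", "hot"), "no"),
    (("sunny", "cool"), "yes"),
    (("rain", "cool"), "yes"),
    (("rain", "hot"), "no"),
    (("sunny", "hot"), "no"),
]

n = len(data)
label_counts = Counter(y for _, y in data)

# MLE for the prior: q_hat(y) = count(y) / n
prior = {y: c / n for y, c in label_counts.items()}

# MLE for each conditional: q_hat_j(x_j | y) = count_j(x_j, y) / count(y)
cond = defaultdict(Counter)  # (feature index j, label y) -> counts of values x_j
for x, y in data:
    for j, xj in enumerate(x):
        cond[(j, y)][xj] += 1

def q_hat(j, xj, y):
    """MLE of q_j(x_j | y): co-occurrence count divided by label count."""
    return cond[(j, y)][xj] / label_counts[y]

print(prior["no"])              # 3 of 5 labels are "no" -> 0.6
print(q_hat(0, "sunny", "no"))  # 2 of the 3 "no" rows have x_1 = "sunny" -> 2/3
```

These are exactly the formulas above; in practice one usually adds Laplace smoothing so that unseen feature values do not get probability zero.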
More details can be found in these lecture notes.