For example, let $ r_i \sim \operatorname{Binomial}(n, p_i)$ for $i \in \{1, \ldots, m\}$. Assume that the $ r_i $ are independent and impose the ordering constraint $ p_1 < p_2 < \cdots < p_m $.
How can I find the maximum likelihood estimates of $ p_1, \ldots, p_m$ in this setting?
I know the joint log-likelihood can be written as
$$ \sum_{i=1}^m \big(r_{i} \log p_{i} + (n-r_{i})\log(1-p_{i})\big)+ \text{constant}.$$
But how do I proceed to derive the MLE under the ordering constraint?
Constrained maximum-likelihood estimation is just a particular application of constrained optimisation, and so it falls within the field of nonlinear programming. There are many methods for solving nonlinear optimisation problems with inequality constraints, including the Karush–Kuhn–Tucker (KKT) conditions or, more simply, transforming the problem into an unconstrained one.
An example applicable to your problem: In the present case, it is possible to obtain the constrained vector $\boldsymbol{p} = (p_1, \ldots, p_m)$ from an unconstrained vector $\boldsymbol{u} = (u_1, \ldots, u_m) \in \mathbb{R}^m$. There are lots of ways to do this, but one way is to define the transformation using nested logistic functions as follows (the parameter $\alpha$ is a control parameter):
$$p_k = T_k(\boldsymbol{u}) \equiv \frac{1}{1 + \sum_{i=k}^m \exp(- \alpha u_i)} \quad \quad \text{for } k = 1,2,\ldots,m.$$
This yields a transformation $T: \mathbb{R}^m \times \mathbb{R} \rightarrow \{ \boldsymbol{p} \in \mathbb{R}^m \mid 0 < p_1 < \cdots < p_m < 1 \}$, which means that it maps an unconstrained parameter $\boldsymbol{u}$ to the constrained parameter $\boldsymbol{p}$. (I have also allowed for a control parameter $\alpha$ that lets you adjust the steepness of the logistic functions; you can remove this part by setting $\alpha = 1$ if you want to.)
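To make this concrete, here is a short Python sketch of the transformation (the function name and NumPy implementation are my own; the formula is exactly the one above). The key observation is that the tail sums $\sum_{i=k}^m \exp(-\alpha u_i)$ strictly decrease as $k$ grows, which forces the $p_k$ to strictly increase:

```python
import numpy as np

def T(u, alpha=1.0):
    """Map an unconstrained u in R^m to p with 0 < p_1 < ... < p_m < 1.

    Implements p_k = 1 / (1 + sum_{i=k}^m exp(-alpha * u_i)).  The tail
    sums shrink as k grows (each term is positive), so the p_k are
    automatically strictly increasing and lie in (0, 1).
    """
    u = np.asarray(u, dtype=float)
    # Reverse cumulative sum gives S_k = sum_{i=k}^m exp(-alpha * u_i).
    S = np.cumsum(np.exp(-alpha * u)[::-1])[::-1]
    return 1.0 / (1.0 + S)

p = T([0.5, -1.0, 2.0, 0.0])
print(p)  # strictly increasing values in (0, 1)
```

Any real input vector lands in the constrained set, so an optimiser can roam freely over $\mathbb{R}^m$.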
Re-parameterising your optimisation problem in terms of the new unconstrained parameter $\boldsymbol{u}$ yields the log-likelihood function (dropping additive constants, and carrying along the logistic control parameter $\alpha$):
$$\ell_\boldsymbol{r}(\boldsymbol{u} \mid \alpha) = \sum_{k=1}^m \Big( (n-r_k) \log \Big( \sum_{i=k}^m \exp(- \alpha u_i) \Big) - n \log \Big(1 + \sum_{i=k}^m \exp(- \alpha u_i) \Big) \Big).$$
The maximum-likelihood estimator $\hat{\boldsymbol{u}}$ can be obtained by maximising this function over the unconstrained space $\boldsymbol{u} \in \mathbb{R}^m$, which requires numerical methods. If this MLE exists, we then obtain the corresponding MLE for your desired parameter vector as $\hat{\boldsymbol{p}} = T(\hat{\boldsymbol{u}}, \alpha)$.
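As a sketch of the numerical maximisation (the function names and the data are my own illustrative choices; I use SciPy's BFGS, but any unconstrained optimiser will do):

```python
import numpy as np
from scipy.optimize import minimize

def T(u, alpha=1.0):
    """The nested-logistic transformation defined above."""
    S = np.cumsum(np.exp(-alpha * np.asarray(u, dtype=float))[::-1])[::-1]
    return 1.0 / (1.0 + S)

def neg_log_lik(u, r, n, alpha=1.0):
    """Negative reparameterised log-likelihood (additive constants dropped)."""
    S = np.cumsum(np.exp(-alpha * np.asarray(u, dtype=float))[::-1])[::-1]
    return -np.sum((n - r) * np.log(S) - n * np.log1p(S))

# Hypothetical data: m = 4 groups, n = 50 trials each.
r = np.array([5, 12, 24, 40])
n = 50

res = minimize(neg_log_lik, x0=np.zeros(len(r)), args=(r, n), method="BFGS")
p_hat = T(res.x)
print(p_hat)
```

In this example the sample proportions $r_k/n = (0.10, 0.24, 0.48, 0.80)$ are already strictly increasing, so the constraint is inactive and the fitted $\hat{\boldsymbol{p}}$ coincides with the unconstrained estimates $r_k/n$.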
Note: In your problem the strict inequality constraints mean that the MLE will not exist whenever the sample proportions $r_1/n, \ldots, r_m/n$ are not themselves strictly increasing. If you change these to non-strict inequality constraints then the constrained MLE always exists. The transformation above does not allow equality between adjacent $p_k$ values, but it does let you come arbitrarily close to equality. Hence, standard numerical methods applied to the above likelihood should still converge towards the (non-strict) MLE.
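To illustrate that last point with a hypothetical two-group example (restating the sketch functions so the snippet is self-contained): when the sample proportions violate the ordering, the optimiser drives the adjacent estimates together towards the pooled proportion rather than failing outright.

```python
import numpy as np
from scipy.optimize import minimize

def T(u, alpha=1.0):
    S = np.cumsum(np.exp(-alpha * np.asarray(u, dtype=float))[::-1])[::-1]
    return 1.0 / (1.0 + S)

def neg_log_lik(u, r, n, alpha=1.0):
    S = np.cumsum(np.exp(-alpha * np.asarray(u, dtype=float))[::-1])[::-1]
    return -np.sum((n - r) * np.log(S) - n * np.log1p(S))

# Sample proportions 0.6 > 0.2 violate p_1 < p_2, so the strict-inequality
# MLE does not exist; the supremum is approached as p_1 and p_2 merge at
# the pooled proportion (30 + 10) / (2 * 50) = 0.4.
r = np.array([30, 10])
n = 50

res = minimize(neg_log_lik, x0=np.zeros(2), args=(r, n), method="BFGS")
p_hat = T(res.x)
print(p_hat)  # both components close to 0.4, with p_hat[0] < p_hat[1]
```

The fitted values always satisfy the strict ordering by construction, but the gap between them shrinks towards zero as the optimiser climbs the flat tail of the likelihood.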