We are given four probabilities for an event, and we are asked to adjust them so as to maximize the entropy subject to the constraint $4 = \sum_{i = 1}^4 2\,i\,p_i \Leftrightarrow 2 = \sum_{i = 1}^4 i\,p_i$.
The task also asks whether we need additional constraints. Since we are dealing with probabilities, they must be non-negative and they must sum to $1$.
My approach for the optimization problem would be
$ \begin{align} \underset{p}{\max}\qquad & H(p) = -\sum_{i = 1}^4 p_i \log_2 p_i\\ \text{s.t.}\qquad & 2 = \sum_{i = 1}^4 i\,p_i\\ & 1 = \sum_{i = 1}^4 p_i\\ & p_i \geq 0,\ i \in \{1, 2, 3, 4\}\\ \end{align} $

(Note the minus sign: the entropy is $H(p) = -\sum_i p_i \log_2 p_i$, so it is this expression that we maximize.)
Do we really need this many constraints? In a later task we are supposed to formulate the Lagrangian of this problem, and if I understood it correctly, we need one Lagrange multiplier per constraint, so at least six multipliers in this case. In fact we need even more multipliers, because the "recipe" we were given for formulating the Lagrangian works only with inequality constraints, so each equality has to be split into two inequalities.
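For reference, if equality constraints are allowed directly (rather than split into pairs of inequalities), one common way to write the Lagrangian, with multipliers $\lambda$ for the mean constraint, $\mu$ for normalization, and $\nu_i \geq 0$ for each non-negativity constraint, would be:

$$ \mathcal{L}(p, \lambda, \mu, \nu) = -\sum_{i = 1}^4 p_i \log_2 p_i + \lambda\Big(2 - \sum_{i = 1}^4 i\,p_i\Big) + \mu\Big(1 - \sum_{i = 1}^4 p_i\Big) + \sum_{i = 1}^4 \nu_i p_i $$

With the inequality-only recipe, the two equalities each become two inequalities, which is where the extra multipliers come from.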
Is it really that high dimensional?
Yep. Luckily, though, you're maximizing a concave function over a convex set, so it's not too hard to solve algorithmically. It's also easy to plug in the Karush–Kuhn–Tucker conditions. See https://davidrosenberg.github.io/ml2015/docs/convex-optimization.pdf .
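As a quick sanity check, here is a minimal numerical sketch of the problem using `scipy.optimize.minimize` (SLSQP handles the equality constraints and bounds); the helper names and tolerances are my own choices, not part of the original task:

```python
import numpy as np
from scipy.optimize import minimize

i_vals = np.arange(1, 5)  # outcomes 1, 2, 3, 4

def neg_entropy(p):
    # Negative entropy -H(p) = sum_i p_i log2 p_i; clip to avoid log(0).
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log2(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},        # probabilities sum to 1
    {"type": "eq", "fun": lambda p: np.dot(i_vals, p) - 2.0},  # mean equals 2
]
bounds = [(0.0, 1.0)] * 4  # p_i >= 0 (and trivially <= 1)

p0 = np.full(4, 0.25)  # uniform starting point
res = minimize(neg_entropy, p0, bounds=bounds, constraints=constraints)
p_star = res.x
print(p_star, -res.fun)  # maximizing distribution and its entropy in bits
```

The optimum should put more mass on the small outcomes, since the uniform distribution has mean $2.5 > 2$.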