Suppose we have a unit vector in 3D space whose orientation has some unknown distribution $p(\theta,\phi)$. All we know about this distribution is the average value of $\cos\theta$:
$$\int_0^{2\pi}d\phi \int_0^\pi \cos(\theta)\,p(\theta,\phi)\,\sin(\theta)\,d\theta = C, \tag{1}$$
where $\sin(\theta)$ is the angular part of the Jacobian of the transformation from Cartesian to spherical coordinates.
The maximum entropy principle tells us that out of all possible distributions $p(\theta,\phi)$ satisfying the condition (1) above, we should choose the one with the maximum entropy.
What would be the solution $p(\theta,\phi)$ maximizing the entropy and satisfying condition (1)?
If I understand correctly, you have a continuous case with a yet-unknown distribution. A maximum entropy probability distribution is a probability distribution whose entropy is at least as great as that of every other member of a specified class of distributions.
First, for inspiration, note that several such classes have been studied in the literature, and your case resembles one of them. For a continuous random variable $\theta_i$ distributed on the unit circle, the von Mises distribution maximizes the entropy given the real and imaginary parts of the first circular moment or, equivalently, the circular mean and circular variance. Given the mean and variance of the angles $\theta_i$ modulo $2\pi$, the wrapped normal distribution maximizes the entropy.
Suppose $S$ is a closed subset of the real numbers $\Bbb R$, and we are given $n$ measurable functions $f_1,\ldots,f_n$ and $n$ numbers $a_1,\ldots,a_n$. We consider the class $C$ of all continuous random variables that are supported on $S$ (i.e., whose density function is zero outside $S$) and that satisfy the $n$ expected-value conditions:
$${E}(f_j(X)) = a_j\quad\mbox{ for } j=1,\ldots,n.$$
In your setting this is a single condition ($n=1$) with $f_1(\theta,\phi)=\cos\theta$ and $a_1=C$, the expectation being taken with respect to $p(\theta,\phi)\sin(\theta)\,d\theta\,d\phi$ as in condition (1).
If there is a member of $C$ whose density function is positive everywhere in $S$, and if a maximum entropy distribution for $C$ exists, then its probability density has the following form (Boltzmann theorem):
$$p(\theta,\phi)=c \exp\left(\sum_{j=1}^n \lambda_j f_j(\theta,\phi)\right)\quad \mbox{ for all } (\theta,\phi)\in S,$$
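As a quick numerical sanity check of this exponential form with $f_1(\theta,\phi)=\cos\theta$: integrating $\exp(\lambda\cos\theta)$ over the sphere gives $4\pi\sinh(\lambda)/\lambda$, so normalization fixes $c=\lambda/(4\pi\sinh\lambda)$, and the resulting expectation of $\cos\theta$ is $\coth\lambda - 1/\lambda$. A minimal sketch (the example value $\lambda = 1.5$ and the quadrature grid are my own choices):

```python
import numpy as np

def trapezoid(y, x):
    """Plain trapezoidal rule (avoids the np.trapz / np.trapezoid naming split)."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

lam = 1.5  # example Lagrange multiplier (arbitrary choice for the check)
c = lam / (4.0 * np.pi * np.sinh(lam))  # closed-form normalization constant

theta = np.linspace(0.0, np.pi, 200_001)
p = c * np.exp(lam * np.cos(theta))  # density is independent of phi, so the phi integral contributes 2*pi

# Normalization: integral of p * sin(theta) over the sphere should be 1.
total = 2.0 * np.pi * trapezoid(p * np.sin(theta), theta)

# Constraint: E[cos(theta)] should equal coth(lam) - 1/lam.
mean_cos = 2.0 * np.pi * trapezoid(np.cos(theta) * p * np.sin(theta), theta)

print(total, mean_cos)
```

Both quantities agree with the closed forms to high accuracy, which confirms that this one-parameter exponential family is consistent with a constraint of the type (1).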
where the constants $c$ and $\lambda_j$ have to be determined so that the integral of $p$ over $S$ equals $1$ (this normalization is commonly called the core condition of the maximum entropy method; do not confuse it with your condition (1) on the expectation value) and the expected-value conditions above are satisfied. Conversely, if such constants $c$ and $\lambda_j$ can be found, then $p$ is indeed the density of the (unique) maximum entropy distribution for the class $C$. I cannot identify the $f_j(\theta,\phi)$ for you explicitly, since the details of your observables/measurable functions are not completely clear to me, but if I have understood correctly, $f_1(\theta,\phi) = \cos\theta$.
With the examples above, the two equations above, and the identified parameters (Lagrange multipliers), you are well equipped to work out your case with ordinary calculus and Lagrange multipliers.
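For the last step, determining the multiplier from the given constant $C$, the constraint $E[\cos\theta]=C$ reduces to inverting $\coth\lambda - 1/\lambda = C$ (the Langevin function), which has no closed-form inverse and is usually solved numerically. A sketch under stated assumptions — the helper names `langevin` and `solve_lambda`, the bisection bracket, and the example value $C=0.4$ are my own choices:

```python
import math

def langevin(lam):
    """E[cos(theta)] under p proportional to exp(lam*cos(theta)) on the sphere."""
    return 1.0 / math.tanh(lam) - 1.0 / lam

def solve_lambda(C, lo=1e-9, hi=700.0, tol=1e-12):
    """Invert the Langevin function by bisection, valid for 0 < C < 1."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if langevin(mid) < C:  # langevin is strictly increasing
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

C = 0.4                                    # example constraint value
lam = solve_lambda(C)                      # Lagrange multiplier matching (1)
c = lam / (4.0 * math.pi * math.sinh(lam))  # normalization constant
print(lam, c)
```

With $\lambda$ and $c$ in hand, $p(\theta,\phi)= c\,e^{\lambda\cos\theta}$ is the maximum entropy density satisfying condition (1).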