I'm a novice in optimization and have a convex objective of the form $\sum_{i,k} p_{k,i}\log{p_{k,i}}$ to minimize, subject to the following constraints:
$\forall i,\quad a_i = \sum_{k=1}^{m} b_k\, p_{k,i}$
$\forall k,\quad k = \sum_{i=1}^{m} p_{k,i}$
$0\leq p_{k,i} \leq 1$
$1\leq i,k \leq m$
The $a_i$'s and $b_k$'s are known, with $0 \leq a_i \leq 1$ and $0 \leq b_k \leq 1$, and $m = 160$.
The values of $b_k$ and $a_i$ come from my data set. I use the CVX optimization tool, and it finds a solution after 6 or 7 iterations. However, in my actual problem I need to use approximated values of $b_k$ and $a_i$, and with those the solver immediately reports that the problem is "unbounded"! Could someone explain why this happens? As far as I understand, an optimization problem becomes unbounded when the optimal value goes to $-\infty$, but I don't see why that would be the case here. Is there any way to relax the constraints to prevent this problem?
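For concreteness, the objective and the linear constraints can be checked numerically. Here is a plain-NumPy sketch; the values of `b` and `p` are made-up placeholders, not the actual data set:

```python
import numpy as np

m = 160
rng = np.random.default_rng(0)

# Hypothetical stand-ins for the known data (NOT the asker's actual values).
b = rng.uniform(0.0, 1.0, size=m)
# A candidate p_{k,i}, kept strictly away from 0 so log(p) is finite.
p = rng.uniform(0.01, 1.0, size=(m, m))

# a_i = sum_k b_k * p_{k,i}  (row index k, column index i)
a = b @ p

# Objective: sum_{i,k} p_{k,i} * log(p_{k,i})
objective = np.sum(p * np.log(p))

# Row sums sum_i p_{k,i}, which the constraint requires to equal k.
row_sums = p.sum(axis=1)

print(objective)
```

As long as every $p_{k,i}$ stays strictly inside $(0,1]$, the objective evaluates to a finite number; the trouble described below only appears at the boundary $p_{k,i} = 0$.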
Clearly the problem is not unbounded: ignoring all constraints except the bounds $0 \leq p_{k,i} \leq 1$, the objective attains its maximum of $0$ at $p_{k,i} \in \{0,1\}$ and its minimum of $-m^2e^{-1}$ at $p_{k,i}=e^{-1}$ (each term $x\log x$ is minimized on $(0,1]$ at $x = e^{-1}$, where it equals $-e^{-1}$, and there are $m^2$ terms).
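The bound is easy to verify numerically: setting the derivative $\frac{d}{dx}(x\log x) = \log x + 1$ to zero gives $x = e^{-1}$, with value $-1/e$ per term, so the objective is bounded below by $-m^2/e$.

```python
import math

m = 160
x = math.exp(-1)                    # stationary point of x*log(x) on (0, 1]
per_term_min = x * math.log(x)      # equals -1/e, about -0.3679
lower_bound = m * m * per_term_min  # the m^2-term objective is >= -m^2/e

# Sanity check: x*log(x) sampled over (0, 1] never dips below per_term_min.
samples = [t / 1000 for t in range(1, 1001)]
sampled_min = min(t * math.log(t) for t in samples)

print(per_term_min, lower_bound)
```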
I'm not familiar with that particular software package, but one possible issue is that although $x\log x$ has a well-defined right-hand limit of $0$ as $x\to 0^+$, a naive floating-point evaluation at $x = 0$ gives
nan (since $\log 0 = -\infty$ and $0 \cdot (-\infty)$ is undefined). You might try giving lower bounds of $\epsilon > 0$ on $p_{k,i}$ instead of zero and see if that solves the problem -- alternatively, you could try making the substitution $p_{k,i} = e^{q_{k,i}}$, which turns each term into $q_{k,i}\,e^{q_{k,i}}$ and eliminates the numerical issue with the objective (I haven't checked how nasty this makes your constraints, though).
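Both failure mode and workarounds are easy to reproduce in floating point. A small NumPy sketch (the values of `eps` and `q` are arbitrary illustrations):

```python
import numpy as np

# Naive evaluation of x*log(x) at x = 0: log(0) = -inf, and 0 * -inf = nan.
with np.errstate(divide="ignore", invalid="ignore"):
    naive = 0.0 * np.log(0.0)

# Fix 1: bound the variable below by a small eps > 0; the term stays finite.
eps = 1e-12
bounded = eps * np.log(eps)

# Fix 2: substitute p = exp(q); the term becomes q*exp(q), which is finite
# for any finite q and tends to 0 as q -> -inf.
q = -50.0
substituted = q * np.exp(q)

print(naive, bounded, substituted)
```

The substitution moves the boundary $p = 0$ to $q = -\infty$, so the objective itself never has to evaluate a logarithm at zero; whether the transformed constraints remain tractable for the solver is a separate question.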