I am working on a machine learning problem in which a logistic regression model is trained with lower-bound constrained optimization (i.e., non-negativity constraints on the weights) for interpretability purposes.
In a simple 2-dimensional unconstrained case, suppose the loss function looks like the one below:
https://i.stack.imgur.com/NO1F5.jpg
Now, if we add a lower-bound constraint to this optimization, i.e., the weights can only be zero or positive, would this be equivalent to slicing the loss surface at y = 0? In other words, instead of a single global minimum, would we end up with a truncated structure like the one below?
https://i.stack.imgur.com/UD0Y5.jpg
If we try to find the minimum of this constrained loss function, can we expect to get different values of x (anywhere from -2 to +2), since all of those values yield the same minimum value of the loss?
If so, how would this affect the model's performance under different sets of weights? Would different values of x have no impact on performance at all?
I read in one of the SO threads that non-negativity constrained optimization makes the search space smaller. Can anyone help me understand this better?
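To make the question concrete, here is a minimal sketch of the setup I have in mind (not my actual model): a toy 2-D quadratic loss whose unconstrained minimum has one negative coordinate, minimized with and without non-negativity bounds via SciPy. The loss function and starting point are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 2-D loss with unconstrained minimum at (-1.5, 0.5).
# This stands in for the bowl-shaped surface in the first image.
def loss(w):
    return (w[0] + 1.5) ** 2 + (w[1] - 0.5) ** 2

w0 = np.array([1.0, 1.0])  # arbitrary starting point

# Unconstrained fit: converges to the true minimum (-1.5, 0.5).
res_unc = minimize(loss, w0)

# Lower-bound (non-negativity) constrained fit: w >= 0.
# With bounds supplied, SciPy uses L-BFGS-B by default.
res_con = minimize(loss, w0, bounds=[(0, None), (0, None)])

print(res_unc.x)  # close to (-1.5, 0.5)
print(res_con.x)  # close to (0.0, 0.5): the negative coordinate is clamped to the boundary
```

In this sketch the constrained minimizer is unique (it sits at the boundary w[0] = 0), so my question is really about whether the truncated surface in the second image can instead have a flat region of equally good minima.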
P.S.: Apologies for posting links; I don't have enough reputation to post images.