Suppose I am interested in estimating the linear regression model
$$ Y_i = g(X_i)^T\beta + \epsilon_i $$
where $Y_i$ is a scalar outcome of interest, $X_i$ is a scalar covariate with support on the unit interval, $g(\cdot)$ is a $K$-dimensional vector of known functions that are not perfectly collinear, $\beta$ is a $K$-dimensional vector of parameters to be estimated, and $E(g(X_i) \epsilon_i) = 0$. Suppose I know that $\Pr(Y \geq 0) = 1$, so I'd like to impose the condition
$$ \beta^T g(x) \geq 0 \qquad \text{for all $x \in [0,1]$} $$
when estimating $\beta$. How would I go about doing this?
You may wish to solve the constrained least squares problem (equivalently, Gaussian maximum likelihood): $$ \min_{\beta} \sum_{i=1}^n (y_i - g(x_i)^T\beta)^2 \quad \text{s.t.} \quad \beta^T g(x) \geq 0 \;\; \text{for all } x \in [0,1]. $$ In the case that the dimension $K$ is fixed, this estimator should be asymptotically efficient. How it can be computed depends on the function $g$. Note that the feasible set $\{\beta : \beta^T g(x) \geq 0 \text{ for all } x \in [0,1]\}$ is an intersection of half-spaces in $\beta$ and hence convex; the practical difficulty is that there are infinitely many constraints, one for each $x$. A common workaround is to enforce the constraint only on a fine grid of $x$ values, which turns the problem into a finite quadratic program; for polynomial $g$, exact sum-of-squares reformulations are also possible.
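To make the grid-discretization idea concrete, here is a minimal sketch in Python using `scipy.optimize.minimize` with SLSQP. The quadratic basis $g(x) = (1, x, x^2)^T$, the simulated data, and the grid size are all illustrative assumptions, not part of the original problem.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def g(x):
    # Assumed basis for illustration: quadratic polynomial in x.
    return np.column_stack([np.ones_like(x), x, x**2])

# Simulate data whose regression function is nonnegative on [0, 1].
n = 200
x = rng.uniform(0, 1, n)
beta_true = np.array([0.5, -1.0, 1.0])  # 0.5 - x + x^2 > 0 on [0, 1]
y = g(x) @ beta_true + 0.1 * rng.standard_normal(n)

G = g(x)                       # design matrix
grid = np.linspace(0, 1, 101)  # enforce the constraint on a fine grid
A = g(grid)                    # A @ beta >= 0 approximates the constraint

def sse(beta):
    # Sum of squared residuals, the least squares objective.
    r = y - G @ beta
    return r @ r

res = minimize(sse, x0=np.zeros(3), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda b: A @ b}])
beta_hat = res.x
```

Because the objective is convex quadratic and the grid constraints are linear, this is a quadratic program, and SLSQP (or a dedicated QP solver) will find the global minimizer; the fitted function is then guaranteed nonnegative at the grid points, and approximately so in between.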