I'm trying to infer a smooth, non-negative function from some given data $(\vec{m},\vec{\alpha},\vec{\beta})$. That is, I want to solve (I think) $$ \mathop{\arg\!\min}_{g \in C^1\big((-1,1),(0,\infty)\big)} J[g] := \sum_k \left(\Gamma_k[g] - m_k\log\Gamma_k[g]\right) + \frac{\lambda}{2}\int_{-1}^1 g'(x)^2\,dx, $$ where $\Gamma_k[g] = \sum_{i=1}^5 \alpha_{i,k}\, g(\beta_{i,k})$. The difficulty is that the data-fit terms involve no integral operator: the unknown function $g$ is evaluated only at a discrete set of points, rather than over an interval. (Hence the choice of the Tikhonov penalty, to promote smooth solutions.)
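For concreteness, here is a minimal numerical sketch of evaluating $J[g]$ for a given $g$, using entirely synthetic $\alpha_{i,k}$, $\beta_{i,k}$, and $m_k$ (all hypothetical, just to fix shapes and signs; the penalty integral is approximated by the trapezoid rule):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem data: K observations, each combining 5 point evaluations.
K = 20
alpha = rng.uniform(0.5, 1.5, size=(5, K))   # alpha_{i,k} > 0 (assumed)
beta = rng.uniform(-0.9, 0.9, size=(5, K))   # beta_{i,k} in (-1, 1)

def Gamma(g, k):
    """Gamma_k[g] = sum_i alpha_{i,k} * g(beta_{i,k})."""
    return np.sum(alpha[:, k] * g(beta[:, k]))

def J(g, gprime, m, lam, nquad=501):
    """Poisson-type data fit plus Tikhonov penalty on g'."""
    fit = sum(Gamma(g, k) - m[k] * np.log(Gamma(g, k)) for k in range(K))
    x = np.linspace(-1.0, 1.0, nquad)
    dx = x[1] - x[0]
    v = gprime(x) ** 2
    penalty = 0.5 * lam * dx * (v.sum() - 0.5 * (v[0] + v[-1]))  # trapezoid rule
    return fit + penalty

# Example: g(x) = exp(x), so g'(x) = exp(x); simulate counts m_k ~ Poisson(Gamma_k[g]).
g = np.exp
m = rng.poisson([Gamma(g, k) for k in range(K)])
print(J(g, np.exp, m, lam=1e-2))
```

Note that the first sum in $J$ is (up to constants) the negative Poisson log-likelihood of counts $m_k$ with means $\Gamma_k[g]$, which is why I wrote the data term that way.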
The only way I can think of to proceed is to first map $g(x) \to h(x)=\log g(x)$, so that $h$ is unconstrained with values in $(-\infty,\infty)$, then write $h(x) = \sum_{i=1}^N a_i p_i(x)$ for some choice of orthogonal polynomials $\{p_i\}$, and finally minimize the objective over the coefficients $\vec{a}\in\mathbb{R}^N$. In preliminary investigations, my solutions tended to be very sensitive to $N$, but maybe I simply wasn't choosing $N$ large enough and $\lambda$ appropriately.
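The approach above can be sketched as follows, using Legendre polynomials on $(-1,1)$ as the (assumed) orthogonal basis and a generic quasi-Newton solver; the data `alpha`, `beta`, `m` are again synthetic placeholders:

```python
import numpy as np
from numpy.polynomial import legendre as L
from scipy.optimize import minimize

# Hypothetical data with the shapes from the problem statement.
rng = np.random.default_rng(1)
K, N = 20, 8                                  # N = number of Legendre coefficients
alpha = rng.uniform(0.5, 1.5, size=(5, K))
beta = rng.uniform(-0.9, 0.9, size=(5, K))
m = rng.poisson(3.0, size=K)

lam = 1e-2
xg = np.linspace(-1.0, 1.0, 401)              # quadrature grid for the penalty
dx = xg[1] - xg[0]

def objective(a):
    # h(x) = sum_i a_i P_i(x),  g = exp(h) > 0 by construction.
    h_at_beta = L.legval(beta, a)                       # shape (5, K)
    Gam = np.sum(alpha * np.exp(h_at_beta), axis=0)     # Gamma_k[g]
    fit = np.sum(Gam - m * np.log(Gam))
    # g'(x) = h'(x) exp(h(x)); trapezoid rule for the Tikhonov term.
    hp = L.legval(xg, L.legder(a))
    gp2 = (hp * np.exp(L.legval(xg, a))) ** 2
    penalty = 0.5 * lam * dx * (gp2.sum() - 0.5 * (gp2[0] + gp2[-1]))
    return fit + penalty

res = minimize(objective, np.zeros(N), method="BFGS")
g_hat = lambda x: np.exp(L.legval(x, res.x))  # recovered positive, smooth estimate
```

(One design note: the penalty is on $g' = h' e^h$, matching the original functional; penalizing $h'$ instead would be a different, though perhaps better-conditioned, regularizer.)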
Is this generally the right way to proceed? I would also welcome an analytic approach, but I'm not sure how to introduce an integral over the sum of pointwise evaluations so that variational arguments apply.