I am creating a piecewise linear approximation for the following equation:
$$W = \frac{\theta \left(k \rho\right)^{k}}{2 k^{2}k! \rho \left( 1- \rho\right)^{2} \left(\frac{\left(k \rho\right)^{k}}{\left(1- \rho\right) k!} + \sum_{n=0}^{k - 1} \frac{\left(k \rho\right)^{n}}{n!}\right)}$$
as $W_{approx}=A+B\rho+C\theta+Dk$. In this equation, $\theta$, $\rho$, and $k$ are the variables, where $\theta,\rho\in \mathbb{R}^+$, $k\in \mathbb{Z}^{>0}$, and $0\leq\rho<1$. To perform this multivariate linear regression, I want to minimize the norm: $$F = \int_a^b\int_c^d\int_e^f\left(A+B\rho+C\theta+Dk-W \right)^2\,dk\,d\theta\,d\rho$$ where $a\leq \rho\leq b$, $c\leq \theta \leq d$, and $e\leq k \leq f$. After integration, I need to find $\frac{\partial F}{\partial A}$, $\frac{\partial F}{\partial B}$, $\frac{\partial F}{\partial C}$, and $\frac{\partial F}{\partial D}$, set them equal to $0$, and solve for $A$, $B$, $C$, and $D$. I learned this whole process in this post, thanks to Claude Leibovici.
For a single instance of the piecewise approximation, let $a=0$, $b=0.1$, $c=0$, $d=10$, $e=1$, $f=3$. I used the sympy module in Python to code the whole process as follows:
from sympy import *
init_printing(True)
A, B, C, D = symbols('A B C D', real=True)
theta, rho = symbols('theta rho', real=True)
k = symbols('k', integer=True)
a = 0; b = 0.1; c = 0; d = 10; e = 1; f = 3
n = symbols('n', integer=True)
W = (theta*(k*rho)**k / (2*factorial(k)*k**2*rho*(1-rho)**2
    *(Sum((k*rho)**n/factorial(n), (n, 0, k-1))
    + (k*rho)**k/(factorial(k)*(1-rho))))).doit()
F = (A + B*rho + C*theta + D*k - W)**2
eq1 = diff(integrate(F, (rho, a, b), (theta, c, d), (k, e, f)), A)
eq2 = diff(integrate(F, (rho, a, b), (theta, c, d), (k, e, f)), B)
eq3 = diff(integrate(F, (rho, a, b), (theta, c, d), (k, e, f)), C)
eq4 = diff(integrate(F, (rho, a, b), (theta, c, d), (k, e, f)), D)
sol = solve([simplify(eq1),simplify(eq2),simplify(eq3), simplify(eq4)], [A,B,C,D])
If I could successfully find $A$,$B$,$C$,$D$ values, I was going to use the following to find an approximated value for the given:
rho = 0.05; theta = 2; k = 2
Wapprox = sol[A] + sol[B]*rho + sol[C]*theta + sol[D]*k
I realized that the issue resides in eq1 = diff(integrate(F, (rho, a, b), (theta, c, d), (k, e, f)), A), i.e. the multiple-integration step. I ran the code for more than an hour and could not get any results. I also tried eq1 = diff(integrate(simplify(F), (rho, a, b), (theta, c, d), (k, e, f)), A), since simplifying the integrand had worked for a similar problem, but not this time. If you have any comments on how to resolve the issue, I would really appreciate it.
Disclaimer: I'm not sure if you really should use a linear approximation in your application!
Anyway, here is a numerical method to find a solution for one piece. Since approximating $W$ by a linear function already introduces a large approximation error, I would argue that numerical methods are faster and more stable in your case than symbolic tools. The advantage is that we can compute the difficult integral using quadrature rules (which are simple and work well for smooth functions) instead of relying on quite tricky symbolic algorithms.
Numerical method for fixed integer $k$:
To use the same notation as in my source code, I will denote $W$ by $h(\rho, \theta, k)$.
Using Gaussian quadrature, we can approximate $$F \approx \sum_{i=1}^n \sum_{j=1}^m w_i^{(\rho)} w_j^{(\theta)} ( c_1 \rho_i + c_2 \theta_j + c_3 k - h(\rho_i, \theta_j,k))^2.$$
Here $\rho_i$ denotes the quadrature points and $w_i^{(\rho)}$ the associated weights of the Gauss-Legendre quadrature of a chosen degree, for example $\text{deg}=10$ (and analogously $\theta_j$, $w_j^{(\theta)}$).
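As a concrete illustration (the helper name is my own), the Gauss-Legendre nodes and weights on an interval $[a,b]$ can be obtained from NumPy by rescaling the standard nodes on $[-1,1]$:

```python
import numpy as np

def gauss_legendre(a, b, deg):
    """Gauss-Legendre nodes/weights, rescaled from [-1, 1] to [a, b]."""
    x, w = np.polynomial.legendre.leggauss(deg)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

# Sanity check on [0, 0.1]: the weights sum to the interval length,
# and a degree-10 rule integrates rho**2 exactly.
rho_i, w_rho = gauss_legendre(0.0, 0.1, 10)
print(w_rho.sum())               # 0.1
print(np.sum(w_rho * rho_i**2))  # (0.1**3)/3
```

Note that the nodes are strictly interior, so the integrand is never evaluated at $\rho = 0$, where $h$ is a $0/0$ limit.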
Now with some time and a piece of paper, it is possible to figure out matrices $W, A, H$ (see below) such that
$$ \sum_i \sum_j w_i^{(\rho)} w_j^{(\theta)} ( c_1 \rho_i + c_2 \theta_j + c_3 k - h(\rho_i, \theta_j,k))^2 = ||W^{\frac 1 2} A \mathbf c - W^{\frac 1 2} H||_2^2.$$ Hence minimizing $F$ is equivalent to the least-squares problem $$\min_{\mathbf c} || W^{\frac 1 2} A \mathbf c - W^{\frac 1 2} H ||_2^2.$$
The solution of this problem is known to be $$\mathbf c = (B^T B)^{-1}B^T W^{\frac 1 2} H, \quad \text{ with } B = W^{\frac 1 2} A.$$
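In code there is no need to form $(B^TB)^{-1}$ explicitly; a least-squares solver is numerically more stable. A minimal sketch (matrix names follow the text above, the data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
nm = 100                         # n*m quadrature points in total
A = rng.random((nm, 3))          # columns: rho_i, theta_j, k
H = rng.random(nm)               # h evaluated at the quadrature points
w = rng.random(nm)               # products w_i^(rho) * w_j^(theta)

B = np.sqrt(w)[:, None] * A      # B = W^(1/2) A
c, *_ = np.linalg.lstsq(B, np.sqrt(w) * H, rcond=None)

# c agrees with the normal-equations formula (B^T B)^{-1} B^T W^(1/2) H
c_normal = np.linalg.solve(B.T @ B, B.T @ (np.sqrt(w) * H))
print(np.allclose(c, c_normal))  # True
```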
Results
From my point of view, the results look great for each fixed $k$. This plot represents the solution for $k=4$. In my case, the convergence with respect to the chosen quadrature degree was very quick.
Extensions to variable $k$
You could now also apply the same approach with varying $k$, but the solutions do not behave well, so I would suggest finding coefficients for each individual $k$.
Source code
Quick-and-Dirty implementation
The matrices
Roughly speaking, $W^{\frac 1 2} \in \mathbb R^{n\cdot m \times n \cdot m}$ contains $\sqrt{w_i^{(\rho)} w_j^{(\theta)}}$ on the diagonal, $A \in \mathbb R^{n\cdot m \times 3}$ contains the quadrature points, i.e. all combinations of $(\rho_i, \theta_j)$ together with $k$, and $H \in \mathbb R^{n\cdot m}$ contains the values of $h$ at these points. Then each entry of $W^{\frac 1 2} A \mathbf c - W^{\frac 1 2} H$ represents one term of the double sum, i.e. $\sqrt{w_i^{(\rho)} w_j^{(\theta)}} ( c_1 \rho_i + c_2 \theta_j + c_3 k - h(\rho_i, \theta_j,k))$.
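Putting the pieces together, here is a quick sketch of the whole procedure for a fixed $k$ (the function and interval choices follow the question; the helper names `h` and `fit_plane` are my own reconstruction, not the original source code):

```python
import numpy as np
from math import factorial

def h(rho, theta, k):
    """W from the question, vectorized in rho and theta for fixed integer k."""
    s = sum((k * rho)**n / factorial(n) for n in range(k))
    tail = (k * rho)**k / (factorial(k) * (1 - rho))
    return theta * (k * rho)**k / (
        2 * factorial(k) * k**2 * rho * (1 - rho)**2 * (s + tail))

def fit_plane(k, a=0.0, b=0.1, c=0.0, d=10.0, deg=10):
    """Weighted least-squares fit of c1*rho + c2*theta + c3*k to h on [a,b]x[c,d]."""
    x, wx = np.polynomial.legendre.leggauss(deg)
    rho = 0.5 * (b - a) * x + 0.5 * (a + b)   # nodes are strictly interior,
    wr = 0.5 * (b - a) * wx                   # so rho = 0 is never evaluated
    theta = 0.5 * (d - c) * x + 0.5 * (c + d)
    wt = 0.5 * (d - c) * wx
    R, T = np.meshgrid(rho, theta, indexing="ij")
    Wdiag = np.sqrt(np.outer(wr, wt)).ravel()  # sqrt(w_i^(rho) * w_j^(theta))
    A = np.column_stack([R.ravel(), T.ravel(), np.full(R.size, float(k))])
    H = h(R, T, k).ravel()
    coef, *_ = np.linalg.lstsq(Wdiag[:, None] * A, Wdiag * H, rcond=None)
    return coef

c1, c2, c3 = fit_plane(k=2)
print(c1, c2, c3)
```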