Suppose $L$ is a lower-triangular matrix whose row sums are all $0$. I want to numerically solve the following matrix ODE
$$\dot L(t) = [L, \Pi_m (F \circ L)], \qquad L(0)=L_0$$
where
$[\cdot \,,\cdot]$ is the Lie bracket, i.e., $[A,B]=A B - B A$.
$L_0$ is lower-triangular and its row sums are also $0$.
$\circ$ is the entry-wise product, i.e., $(A \circ B)_{ij} = A_{ij} B_{ij}$.
$F_{ij}=-\dfrac{f(u_i)-f(u_j)}{u_i-u_j}$ for $i \ne j$, and $F_{ii}=-u_i$ (the divided-difference limit, since $f'(u_i)=u_i$).
$f(x)=\frac{x^2}{2}$ and $u=(u_1, u_2, \dots, u_n)$ is a given sequence of numbers with $u_i<u_j$ whenever $i<j$. So $F$ is a symmetric constant matrix.
$\Pi_m(F\circ L)=F\circ L- \operatorname{diag}\bigl((F\circ L)\,e\bigr)$, where $e=(1,1,\dots,1)^T$ is the column vector of ones. So the row sums of $\Pi_m(F\circ L)$ are also zero.
I want to solve $L(t)$ numerically such that $L(t)$ preserves the property that the row sum is $0$.
For convenience, let $\Pi_m (F \circ L)=B(L(t))$.
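To make the setup concrete, here is a small numerical check of the definitions (a sketch in Python/NumPy rather than MATLAB; the values of $n$ and $u$ are illustrative assumptions, since the post does not specify them):

```python
import numpy as np

# Small numerical check of the definitions above. n and u are illustrative
# assumptions; f(x) = x^2/2 gives, for i != j,
# F_ij = -(u_i + u_j)/2, and F_ii = -u_i is the divided-difference limit.
n = 4
u = np.array([0.0, 1.0, 2.0, 3.0])          # increasing u-values

F = -(u[:, None] + u[None, :]) / 2.0        # symmetric; diagonal is -u_i

def B(L):
    """B(L) = Pi_m(F o L): subtract the row sums of F o L from its diagonal."""
    M = F * L                               # entry-wise (Hadamard) product
    return M - np.diag(M.sum(axis=1))

# A lower-triangular L0 whose row sums are forced to zero (illustrative).
L0 = np.tril(np.random.default_rng(0).standard_normal((n, n)))
L0 -= np.diag(L0.sum(axis=1))

print(np.allclose(F, F.T))                  # True: F is symmetric
print(np.allclose(B(L0).sum(axis=1), 0))    # True: row sums of B(L0) vanish
```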
By the forward Euler method, we have the following iteration equation:
$L_{n+1}=L_n+ h\dot L_n=L_n +h (L_nB_n-B_nL_n).$ It is easy to prove that the product of two matrices whose row sums are $0$ also has zero row sums: if $Ae=Be=0$, then $(AB)e=A(Be)=0$. So the row sums of $B_nL_n$ and $L_nB_n$ are $0$, and since the row sums of $L_0$ are $0$, the row sums of $L_n$ are $0$ for all $n \ge 0$.
So, we can conclude that the forward Euler method preserves the property (row sum is zero).
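This preservation argument can be checked numerically. The following forward Euler loop (a Python/NumPy sketch; $u$, $h$, $L_0$, and the step count are illustrative assumptions) confirms that the row sums stay at round-off level:

```python
import numpy as np

# Forward Euler on L' = [L, B(L)], checking that the zero-row-sum property
# survives every step. u, L0, h, and the step count are illustrative
# assumptions, not values from the post.
u = np.array([0.0, 1.0, 2.0, 3.0])
n = len(u)
F = -(u[:, None] + u[None, :]) / 2.0        # f(x) = x^2/2 gives this F

def B(L):
    M = F * L
    return M - np.diag(M.sum(axis=1))       # Pi_m: kill the row sums

L = 0.1 * np.tril(np.random.default_rng(0).standard_normal((n, n)))
L -= np.diag(L.sum(axis=1))                 # L_0 has zero row sums

h = 1e-3
for _ in range(500):
    Bn = B(L)
    L = L + h * (L @ Bn - Bn @ L)           # L_{n+1} = L_n + h [L_n, B_n]

print(np.max(np.abs(L.sum(axis=1))))        # stays at round-off level
```

Note that $L_n$ also stays lower-triangular, since products (hence commutators) of lower-triangular matrices are lower-triangular.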
However, when I implemented the forward Euler method in MATLAB, the numerical solution blew up for some initial conditions, even though my professor told me the analytic solution does not blow up (he proved this analytically).
My questions:
Does $L_n$ converge to the exact solution $L(t)$ as the step size $h \to 0$?
If forward Euler does not converge, are there other methods that converge to the exact solution and still preserve the zero-row-sum property?
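One observation (my own sketch, not part of the original post): any explicit Runge–Kutta step is built from commutators and linear combinations of zero-row-sum matrices, so by the same product argument it preserves the property while giving higher-order accuracy. A classical RK4 sketch in Python/NumPy, with illustrative $u$, $L_0$, and $h$:

```python
import numpy as np

# Classical RK4 on L' = [L, B(L)]. Each stage k_i is a commutator of
# zero-row-sum matrices, so every stage (and hence the update) keeps the
# row sums at zero. u, L0, h are illustrative assumptions.
u = np.array([0.0, 1.0, 2.0, 3.0])
n = len(u)
F = -(u[:, None] + u[None, :]) / 2.0

def rhs(L):
    M = F * L
    Bn = M - np.diag(M.sum(axis=1))         # B(L) = Pi_m(F o L)
    return L @ Bn - Bn @ L                  # [L, B(L)]

def rk4_step(L, h):
    k1 = rhs(L)
    k2 = rhs(L + 0.5 * h * k1)
    k3 = rhs(L + 0.5 * h * k2)
    k4 = rhs(L + h * k3)
    return L + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

L = 0.1 * np.tril(np.random.default_rng(1).standard_normal((n, n)))
L -= np.diag(L.sum(axis=1))
for _ in range(200):
    L = rk4_step(L, 1e-2)

print(np.max(np.abs(L.sum(axis=1))))        # still ~ round-off
```

Whether RK4 (or any explicit method) avoids blow-up for your particular $u$ and $L_0$ is a separate stability question; the sketch only shows the structural property is preserved.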
In general, suppose that the function $f(t)$ is well approximated by its truncated Taylor expansion in some interval, which wlog we can assume to be around $t=0$: $$ f(t) \approx f(0) + f'(0)\,t + \cdots + {1 \over {q!}}f^{\,\left( q \right)} (0)\,t^{\,q} \quad \left| {\;0 \le t \le T} \right. $$
We know that the approximation error can be expressed as $$ R_{\,q} (t) = {1 \over {\left( {q + 1} \right)!}}f^{\,\left( {q + 1} \right)} (\tau )\,t^{\,q + 1} < \varepsilon _{\,q} (T)\,t^{\,q + 1} \quad \left| {\;0 \le \tau \le T} \right. $$
Your ODE is of the form $$ f'(t) = g\left( {f(t)} \right) $$ where you know $g(x)$ and are going to estimate $f(t)$ by a first-order approximation $$ f(t_{\,n} + \Delta t) = f(t_{\,n} ) + f'(t_{\,n} )\Delta t = f(t_{\,n} ) + g\left( {f(t_{\,n} )} \right)\Delta t $$ which is a feedback mechanism that accumulates the errors.
It is clear that the critical point is the function $g(x)$ in particular, which acts as the feedback transfer function.
No wonder, then, that it might cause your iteration to blow up away from the right solution.
This function varies over the domain of the expected values of $f(t)$, and since it depends on the sequence of $u$ values, which is not specified in your post, I cannot be of further help.
Thus take care to analyze the expected influence of $g(x)$ on the errors, put bounds on it, and determine the necessary order of the approximation, and of course the number of significant digits used.
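The feedback point can be seen already in a scalar toy problem (my own made-up example, not your matrix ODE): for $f'=g(f)$ with $g(f)=-\lambda f$, forward Euler multiplies the iterate by $(1-\lambda h)$ each step, so even though the exact solution decays, any $h$ with $|1-\lambda h|>1$ makes the iterates blow up.

```python
# Scalar illustration of the feedback mechanism: f'(t) = g(f(t)) with
# g(f) = -50 f (a made-up example, not the matrix ODE from the question).
# Forward Euler multiplies each step by (1 - 50 h); the exact solution
# e^{-50 t} decays, yet any h with |1 - 50 h| > 1 (i.e. h > 0.04) makes
# the numerical iterates grow without bound.
def euler(g, f0, h, steps):
    f = f0
    for _ in range(steps):
        f = f + h * g(f)   # forward Euler step
    return f

g = lambda f: -50.0 * f
print(abs(euler(g, 1.0, 0.01, 100)))   # stable step size: decays toward 0
print(abs(euler(g, 1.0, 0.05, 100)))   # unstable step size: blows up
```

The same mechanism, with $g$ now the matrix map $L \mapsto [L, \Pi_m(F\circ L)]$, is a plausible culprit for the blow-up you observed: the exact flow may be bounded while the explicit iteration is not, unless $h$ is small enough relative to the local "stiffness" of $g$.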