One of the big achievements since the 1980s is the efficient solution of linear programs, e.g., by the barrier (interior-point) method. For example, to solve
$$\begin{array}{ll} \text{minimize} & c^T x\\ \text{subject to} & Ax \leq b\end{array}$$
one instead minimizes
$$F(x) = tc^Tx - \sum_{k=1}^m \log([b-Ax]_k)$$
for a given $t > 0$ to obtain a point $x^*(t)$ on the central path, which one then follows as $t \to \infty$ by damped Newton steps. The computational complexity is well understood: loosely speaking, it takes $O(\sqrt{m}\,\log(m/\epsilon))$ damped Newton steps to reach duality gap $\epsilon$.
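To fix notation, here is a minimal sketch of the exact-arithmetic version of this scheme, assuming dense NumPy arrays and a strictly feasible starting point (the function name, the multiplicative update `mu`, and the tolerances are my own illustrative choices, not a reference implementation):

```python
import numpy as np

def barrier_lp(c, A, b, x0, t0=1.0, mu=10.0, eps=1e-6, newton_tol=1e-8):
    """Minimize c^T x subject to Ax <= b via the log-barrier method.

    x0 must be strictly feasible (A x0 < b).  The duality gap at the
    central-path point x*(t) is m/t, so we stop once m/t < eps.
    """
    m = len(b)
    x, t = x0.astype(float), t0

    def F(x):  # the barrier objective for the current t
        return t * (c @ x) - np.sum(np.log(b - A @ x))

    while m / t >= eps:
        # Centering: damped Newton on F(x) = t c^T x - sum_k log([b - Ax]_k)
        while True:
            s = b - A @ x                        # slacks, must stay positive
            grad = t * c + A.T @ (1.0 / s)
            hess = A.T @ np.diag(1.0 / s**2) @ A
            dx = np.linalg.solve(hess, -grad)
            lam2 = -grad @ dx                    # Newton decrement squared
            if lam2 / 2 <= newton_tol:
                break
            # Backtracking keeps iterates strictly feasible and decreasing
            step = 1.0
            while (np.any(b - A @ (x + step * dx) <= 0)
                   or F(x + step * dx) > F(x) + 0.3 * step * (grad @ dx)):
                step *= 0.5
            x = x + step * dx
        t *= mu
    return x
```

In this exact setting every quantity above (slacks, gradient, Hessian solve, line search) is computed to machine accuracy; my question is what survives when each of these evaluations is replaced by a (very) rough approximation.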
I want to replace all arithmetic operations with (very) approximate ones, because my $m$, and the dimension $n$ of $x$, are both mind-bogglingly large. The quantities $c^Tx$ and $\log(b-Ax)$ will be computed very roughly, and similarly for the gradient and Hessian of $F$. This is for high-dimensional PDEs, and I'm considering using hierarchical Tucker or tensor trains to approximate everything, so the errors are far from machine accuracy.
But never mind the tensor stuff: do we have any idea what happens when arithmetic operations are (very) inexact in linear programming? Specifically, how does it affect convergence, and can we still estimate the number of (now approximate) Newton steps?
I've tried googling various combinations of "linear programming inexact arithmetic" but I find nothing. At first blush, it seems to be a can of worms, because even feasibility $Ax \leq b$ is hard to decide for a given $x$.
Thanks.