It's common to read in textbooks that semidefinite programming solvers are inherently inaccurate.
Are the authors referring to general machine inaccuracy (things like $10^{-16}$ rounding to $0$) or to something specific to SDPs?
If it is a special case, does this problem occur in interior point methods for linear programming as well? Why does it happen?
Is there some way to avoid this or to increase the maximum precision?
Edit: As Prof. Borchers requested, here is a reference: in Moments, Positive Polynomials and Their Applications, Jean Lasserre mentions in the section on Algebraic Certificates of Convexity that:
"[...] in practice such a certificate is numerical and so can be obtained only up to a machine precision, because of numerical inaccuracies inherent to semidefinite programming solvers."
The same is written in Modern Optimization Modelling Techniques by Cominetti, Facchinei and Lasserre. I suspect that I have read it somewhere else as well.
Thank you!
Both references are simply making the point that, due to round-off errors, floating-point numerical solutions to SDPs are inherently inexact.
In the applications that Lasserre is interested in, we would like an exact certificate that (for example) a polynomial can be written as a sum of squares, but a numerical solution isn't really a proof. There has been work on rounding numerical solutions to exact rational solutions (with integer numerators and denominators that may require many digits of precision) in such cases.
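Here is a toy sketch of that rounding idea, using only Python's standard library. The perturbed Gram matrix below is invented for illustration (it is what an SDP solver might plausibly return for $p(x) = x^2 + 4x + 4$ in the monomial basis $[1, x]$); it is not output from any particular solver.

```python
from fractions import Fraction

# Hypothetical solver output: a slightly perturbed Gram matrix Q for
# p(x) = x^2 + 4x + 4 in the basis [1, x], i.e. p(x) = [1 x] Q [1 x]^T.
Q_num = [[4.0000000012, 1.9999999997],
         [1.9999999997, 0.9999999989]]

# Round each entry to the nearest rational with a small denominator.
Q = [[Fraction(q).limit_denominator(10**6) for q in row] for row in Q_num]
assert Q == [[4, 2], [2, 1]]

# Verify the certificate in exact rational arithmetic: the coefficients
# of [1 x] Q [1 x]^T must match p(x) = 4 + 4x + x^2.
const, linear, quad = Q[0][0], Q[0][1] + Q[1][0], Q[1][1]
assert (const, linear, quad) == (4, 4, 1)

# Q is positive semidefinite exactly (leading entry > 0, det(Q) >= 0);
# here det(Q) = 0, so the decomposition is p(x) = (2 + x)^2.
det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
assert Q[0][0] > 0 and det >= 0
print("exact SOS certificate: p(x) = (2 + x)^2")
```

Of course, real rounding procedures must also project the rounded matrix back onto the affine constraints and re-verify positive semidefiniteness; this sketch only shows why exact rational arithmetic turns a numerical solution into an actual proof.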
There's nothing special about SDP with respect to this; similar issues occur with linear programming and other optimization problems.
It happens that some pathologies can occur in SDP (such as a failure of strong duality, or an optimal value that is not attained) that don't occur in LP, but that's a separate issue from the fact that floating-point arithmetic isn't exact.
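As a concrete instance of such SDP-specific behavior (a standard textbook example, not taken from either reference), consider

$$\min\ x \quad \text{s.t.} \quad \begin{pmatrix} x & 1 \\ 1 & y \end{pmatrix} \succeq 0.$$

The $2\times 2$ matrix is positive semidefinite iff $x \ge 0$, $y \ge 0$, and $xy \ge 1$, so every feasible point has $x > 0$, yet taking $y = 1/x$ lets $x$ shrink toward $0$. The optimal value $0$ is therefore never attained, whereas a feasible LP with finite optimal value always attains it at a vertex.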