Say I have two nonsingular matrices $A, A' \in \mathbb{R}^{n \times n}$, two right-hand sides $b, b' \in \mathbb{R}^{n}$ and two vectors $c, c' \in \mathbb{R}^{n}$. In my algorithm I want to perform a certain step if $c^{T} x < c'^{T} x'$, where $x := A^{-1}b$ and $x' := A'^{-1} b'$ (obtained from factorizations).
Clearly, in floating point arithmetic, the answer to this question depends on how the matrices are conditioned. Intuitively, if the condition numbers $\kappa(A)$ and $\kappa(A')$ respectively are large, then the solutions $x$ and $x'$ are completely inaccurate. In this case, I would like to proceed as if the condition $c^{T} x < c'^{T} x'$ is not satisfied.
I have seen the rule of thumb that if the condition number of a matrix is $10^{k}$, then up to $k$ digits of accuracy are lost when solving systems. Consequently, I believe that $k$ digits of accuracy are lost when computing $c^{T} x$. Starting from the roughly 16 significant decimal digits of a double, this could be used to put error bars on both scalar products.
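To make the question concrete, here is a minimal sketch of the approach I have in mind. It uses the heuristic forward-error bound $\|\delta x\| \lesssim \kappa(A)\,\varepsilon\,\|x\|$, so that $|c^{T}\delta x| \le \|c\|\,\|\delta x\| \lesssim \kappa(A)\,\varepsilon\,\|c\|\,\|x\|$; the function name `dot_with_error_bar` and the use of the exact 2-norm condition number (rather than a cheap estimate) are my own choices for illustration:

```python
import numpy as np

def dot_with_error_bar(A, b, c, eps=np.finfo(float).eps):
    """Solve A x = b and return (c^T x, heuristic error bar).

    Error bar from the rule of thumb in the question:
    ||dx|| <~ kappa(A) * eps * ||x||, hence
    |c^T dx| <= ||c|| * ||dx|| <~ kappa(A) * eps * ||c|| * ||x||.
    """
    x = np.linalg.solve(A, b)
    # Exact 2-norm condition number via SVD; O(n^3), fine for a sketch.
    kappa = np.linalg.cond(A)
    err = kappa * eps * np.linalg.norm(c) * np.linalg.norm(x)
    return c @ x, err

# Example: perform the step only if the error intervals are disjoint,
# i.e. s + e < s' - e'; otherwise treat the condition as not satisfied.
rng = np.random.default_rng(0)
n = 5
A, Ap = rng.standard_normal((n, n)), rng.standard_normal((n, n))
b, bp = rng.standard_normal(n), rng.standard_normal(n)
c, cp = rng.standard_normal(n), rng.standard_normal(n)

s, e = dot_with_error_bar(A, b, c)
sp, ep = dot_with_error_bar(Ap, bp, cp)
take_step = (s + e) < (sp - ep)
```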
Is such an approach reasonable, or is there a more mathematically rigorous way to obtain the errors of the scalar products?