I have a question about the key result of this paper:
Candès, Emmanuel J.; Tao, Terence, *The power of convex relaxation: near-optimal matrix completion*, IEEE Trans. Inf. Theory 56, no. 5, 2053–2080 (2010). ZBL1366.15021.
Specifically, my question is about Theorems 1.1 and 1.2. I am trying to understand the significance of the constant $C$ in equations (I.11) and (I.12), and in the sequel.
I understand that $C \in \mathbb{R}^+$ is a numerical constant that, informally speaking, sets the asymptotic scaling of the minimal number of observed entries required for high-probability recovery of the matrix $M$.
But what I fail to understand is the concrete implication of this result. Given a matrix with some known properties, say $M \in \mathbb{R}^{n_1 \times n_2}$ with rank $r$ known a priori, how is one to arrive at a concrete estimate of the minimal number of observations required for a low-rank completion of this matrix? Is $C$ a property of the data that needs to be somehow 'calibrated'?
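To make my confusion concrete: taking the Theorem 1.2 bound, which as I read it has the form $m \geq C\,\mu^2\, n r \log^6 n$ with $n = \max(n_1, n_2)$ and $\mu$ the strong incoherence parameter, the naive computation I would attempt looks like the sketch below. Both $C$ and $\mu$ are placeholders here ($\mu = 1$ being the most benign case); how to pin down $C$ is exactly what I am asking.

```python
import math

# Naive back-of-the-envelope sketch of a Theorem 1.2-style bound,
#   m >= C * mu^2 * n * r * log(n)^6,   n = max(n1, n2).
# C and mu are placeholders: choosing C is precisely my question.

def min_observations(n1: int, n2: int, r: int, mu: float = 1.0, C: float = 1.0) -> int:
    """Number of sampled entries the bound would demand, for placeholder C and mu."""
    n = max(n1, n2)
    return math.ceil(C * mu**2 * n * r * math.log(n)**6)

# Example: a 1000 x 1000 matrix of rank 5.
m = min_observations(1000, 1000, 5)
print(m, "vs", 1000 * 1000, "total entries")  # the bound exceeds n1*n2 here
```

For these dimensions the right-hand side already exceeds $n_1 n_2$, which only deepens my confusion about how to use the result quantitatively at finite $n$.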
My grasp of this subject is rather rudimentary, so please bear with me if this question sounds trifling. Thanks in advance.