I have two data sets, A and B, with values $a_1,a_2,\dots,a_n$ and $b_1,b_2,\dots,b_n$, measured on the same elements $x_1,x_2,\dots,x_n$. That is, element $x_1$ has the value $a_1$ in the first data set and $b_1$ in the second.
The two data sets have very different magnitudes, but their relative values should be the same ($\frac{a_i}{a_j}=\frac{b_i}{b_j}$). In practice this does not hold exactly because the data come from experiments. I would like to find a scaling constant that, when multiplied into data set B, matches it to data set A with the least error.
What is the best method to do this?
Edit: Each value in B also comes with a measured uncertainty. How can I take this into account? Presumably I should give more weight to matching the values that have the least uncertainty.
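To make the objective concrete: if $\sigma_i$ is the uncertainty of $b_i$, one standard formulation is to minimize $\sum_i \left(\frac{a_i - c\,b_i}{\sigma_i}\right)^2$ over the scaling constant $c$, which has a closed-form solution. A minimal sketch with made-up numbers (the variable names are my own):

```python
import numpy as np

# Hypothetical example values; sigma holds the uncertainty of each b_i.
a = np.array([10.0, 20.0, 30.0])
b = np.array([1.1, 1.9, 3.05])
sigma = np.array([0.1, 0.05, 0.2])

# Minimize sum_i ((a_i - c*b_i) / sigma_i)^2 over c.
# Setting the derivative with respect to c to zero gives:
#   c = sum(w * a * b) / sum(w * b^2),  with weights w = 1/sigma^2.
w = 1.0 / sigma**2
c = np.sum(w * a * b) / np.sum(w * b**2)
print(c)  # points with small sigma dominate the fit
```

Here $c \approx 10$, as expected since each $a_i$ is roughly $10\,b_i$; the low-uncertainty second point pulls the estimate hardest.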
Here is an example using data similar to yours. At a hospital, blood tests are routinely performed on newborn babies to determine whether too many red cells are present in the blood. Two methods of assaying blood cells are in common use: hematocrit (which determines the percent by volume of red cells) and hemoglobin (which is found by making a chemical determination of the amount of hemoglobin in the blood, expressed as grams per deciliter).
We have laboratory measurements of both, called `LabCrit` and `LabHgb`, for 43 newborn babies. A regression 'through the origin' ($0$ y-intercept), as suggested by @AdrianKeister (+1), gives the following result:

[regression output not reproduced here]

Notes: (1) One reason for monitoring newborns in this way is that some babies are born with too many red cells, a potentially life-threatening condition, which is easily remedied if detected immediately. (2) It is well known that hemoglobin (in g/dl) is about $1/3$ times hematocrit (in %), so our findings match what has been observed before. (3) The reason for this particular study was to determine the feasibility of using a new optical method to assay red blood cells. (4) Data from Herzog and Felton, "Hemoglobin screening for normal newborns," J. Perinatology, XIV, 4, July 1994.
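Since I can't share the raw data, here is a sketch of the same regression-through-the-origin calculation on simulated data with a similar structure (43 babies, hemoglobin roughly $1/3$ of hematocrit, plus measurement noise); the numbers are synthetic, not the study's:

```python
import numpy as np

# Simulated stand-ins for LabCrit (x) and LabHgb (y), n = 43 as in the study.
rng = np.random.default_rng(0)
crit = rng.uniform(40, 70, size=43)        # hematocrit, percent by volume
hgb = crit / 3 + rng.normal(0, 0.5, 43)    # hemoglobin, g/dl, about crit/3

# Regression through the origin: slope = sum(x*y) / sum(x*x),
# the least-squares solution of y = slope * x with no intercept.
slope = np.sum(crit * hgb) / np.sum(crit * crit)
print(f"estimated slope: {slope:.3f}")     # should land near 1/3
```

The fitted slope is the scaling constant you asked about; with per-point uncertainties you would divide each term by $\sigma_i^2$ before summing, which shifts the fit toward the best-measured points.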