I have a small dataset containing a monetary value ("total money"), a number of people, and their average, which is (total money / people).
When I graph these variables against an outcome, the slope for "total money" is negative, the slope for "people" is positive but very small, and the slope for the average (total money / people) is also negative, but steeper than that of "total money". (Figure below.)
Intuitively it makes sense to me, but I cannot figure out why in terms of theory. Could anyone point out the theory behind this, or recommend some material where I can understand this issue?
Thanks!
If this is what you mean by the description of your graphs...
Now consider 2 points on each of these graphs, $p_1, p_2$; $q_1, q_2$; and $r_1, r_2$, of the form $(tm_1, out_1), (tm_2, out_2)$; $(np_1, out_1), (np_2, out_2)$; and $(am_1, out_1), (am_2, out_2)$ respectively.
So, given the first two graphs, how can you construct the third one?
Clearly, if $Avg.\ Money = {Total\ Money \over No.\ of\ People}$, then each point of the third graph can be obtained directly: at each outcome value ($out_1$, $out_2$), divide the $x$-coordinate of graph 1 by that of graph 2.
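A minimal sketch of that construction, with hypothetical numbers (the point values are made up for illustration):

```python
# Hypothetical points sharing the same outcome values out1, out2.
points_tm = [(40.0, 1.0), (20.0, 2.0)]   # (total money, outcome) -- graph 1
points_np = [(4.0, 1.0), (4.0, 2.0)]     # (no. of people, outcome) -- graph 2

# Average-money graph: divide the money coordinate by the people
# coordinate at the matching outcome.
points_am = [(tm / n, out) for (tm, out), (n, _) in zip(points_tm, points_np)]
print(points_am)  # [(10.0, 1.0), (5.0, 2.0)]
```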
Now consider the slopes. The change from $np_1$ to $np_2$ is very small (the people graph is nearly flat over this range), so you can treat the number of people as roughly constant, say $n$, with $n > 1$. Then $am_2 - am_1 \approx (tm_2 - tm_1)/n$, and it follows that
$$\left\vert \frac{\Delta\, outcome}{am_2 - am_1} \right\vert \approx \left\vert \frac{n\, \Delta\, outcome}{tm_2 - tm_1} \right\vert > \left\vert \frac{\Delta\, outcome}{tm_2 - tm_1} \right\vert.$$
Or you can infer it directly: you are dividing a negative slope by a positive number less than 1 (namely $1/n$), so its magnitude increases but its sign stays the same (negative). Hence the average-money graph is steeper than the total-money graph, and still downward-sloping.
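The slope relationship can be checked numerically. A minimal sketch with made-up numbers (constant $n = 4$ people, and an outcome that falls linearly in total money), using NumPy's `polyfit` to estimate each slope:

```python
import numpy as np

# Hypothetical data: outcome decreases with total money (slope -0.5),
# number of people held roughly constant at n = 4 (> 1).
tm = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # total money
n = 4.0                                        # number of people
out = 10.0 - 0.5 * tm                          # outcome vs total money
am = tm / n                                    # average money = total / people

slope_tm = np.polyfit(tm, out, 1)[0]  # slope of outcome vs total money
slope_am = np.polyfit(am, out, 1)[0]  # slope of outcome vs average money

# slope_am = n * slope_tm: same (negative) sign, larger magnitude.
print(slope_tm, slope_am)
```

Both slopes come out negative, with the average-money slope steeper by the factor $n$, matching the argument above.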
Note: the linear graphs above can be replaced with curves, as long as the slope constraints you described are not violated.