Curve $a$ has a mean of $30$ points and a standard deviation of $6$ points.
Curve $b$ has a mean of $40$ points and a standard deviation of $3$ points.
What is the chance that a random value on curve $a$ will be larger than a random value on curve $b$ ?
I had this question on a statistics test a year back and can neither remember the name of this type of problem nor how to solve it. If you can give me a link to read more about this type of problem that would be greatly appreciated.
I'm not sure what this kind of problem is called, but I'll take a crack at it:
We'll call the curves $A$ and $B$. Then we have:
$$A\sim N(30, 6), B\sim N(40,3)$$
Assuming that $A$ and $B$ are independent, $X=B-A$ is also a normal random variable, with mean $\mu_B-\mu_A$ and variance $\sigma_B^2 + \sigma_A^2$. Thus:
$$B-A=X\sim N\big(10,\sqrt{45}\big)$$
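As a quick sanity check on that distributional claim, here is a small Monte Carlo sketch in Python (stdlib only; the seed and sample size are arbitrary choices) that draws independent samples from the two curves and confirms the difference has mean $\approx 10$ and standard deviation $\approx \sqrt{45}\approx 6.71$:

```python
import math
import random

random.seed(0)
n = 200_000

# Draw independent samples from A ~ N(30, 6) and B ~ N(40, 3),
# then form the difference X = B - A.
xs = [random.gauss(40, 3) - random.gauss(30, 6) for _ in range(n)]

# Sample mean and sample standard deviation of X.
mean = sum(xs) / n
std = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))

print(mean, std)  # roughly 10 and sqrt(45) ≈ 6.71
```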
This becomes the problem of finding $\Pr(A>B)=\Pr(B-A<0)=\Pr(X<0)$. Standardizing to the standard normal distribution, $N(0,1)$, we have:
$$\Pr(X<0)=\Pr\Big(\frac{X-10}{\sqrt{45}}<\frac{-10}{\sqrt{45}}\Big)$$
Note that:
$$\frac{X-10}{\sqrt{45}}=Z\sim N(0,1)$$
Thus, we want the probability such that:
$$\Pr\Big(Z<\frac{-10}{\sqrt{45}}\Big)=\Pr(Z<-1.49071198\ldots)$$
This is the same as $\Pr(Z>1.49071198\ldots)$. A Z-table gives $\Pr(0<Z<1.49)\approx 0.4319$, so $\Pr(Z>1.49)=0.5-0.4319\approx 0.0681$. Your answer is about $0.0681$, or $6.81\%$.
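If you'd rather skip the Z-table, the same probability can be computed directly from the standard normal CDF. A minimal sketch in Python, using only the stdlib (`math.erf`; the helper name `norm_cdf` is my own):

```python
import math

def norm_cdf(z):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Pr(X < 0) for X ~ N(10, sqrt(45)) equals Pr(Z < -10/sqrt(45)).
p = norm_cdf(-10 / math.sqrt(45))
print(p)  # ≈ 0.068
```

The small discrepancy with the table-based $0.0681$ comes from the table rounding $1.4907\ldots$ to $1.49$.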