In statistics, there is the $\mathrm{R}$ value for the product moment correlation coefficient and the $\mathrm{R}^2$ value for the coefficient of determination.
In both cases they are described as a scale of correlation, where $0$ is no correlation and $1$ is perfect correlation. However, for a given data set these values differ - for example, when $\mathrm{R}=0.8$, $\mathrm{R}^2=0.64$. How can this be so?
Another layer of confusion: the $\mathrm{R}^2$ value is said to represent the proportion of change in $y$ caused by changes in $x$. For instance, an $\mathrm{R}^2$ value of 0.7 means that 70% of the change in the dependent variable is explained by changes in the independent variable. Is this also true of the $\mathrm{R}$ value? Why or why not?
The sample correlation $r$ can take values in $[-1,1]$, where negative values indicate negative linear association between the two data vectors and positive values indicate positive linear association. The sample coefficient of determination $r^2$ takes values in $[0, 1],$ where larger values indicate stronger linear association, but the direction of the association is lost in squaring. (The Wikipedia article on correlation has some nice examples to distinguish 'linear association' from 'association'.)
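A quick numerical sketch of the point above, using simulated data (the slope and noise level are arbitrary choices for illustration): a negatively associated pair gives a negative $r$, while $r^2$ is the same whether the slope is positive or negative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
# y decreases with x, so the sample correlation r will be negative
y = -2.0 * x + rng.normal(size=200)

r = np.corrcoef(x, y)[0, 1]   # sample correlation, in [-1, 1]
print(r)       # negative
print(r**2)    # in [0, 1]; the sign information is gone
```

Squaring the correlation of `(x, -y)` would give exactly the same $r^2$, which is why $r^2$ alone cannot tell you the direction of the relationship.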
In software, especially in regression output, $r^2$ is often written as $R^2$ or R-sq. Strictly speaking it is just the square of the correlation, but it is often expressed as a percentage. However, in simple linear regression, where we are trying to predict $y$ values from $x$ values, one tends to focus on the variability of $y$. Roughly speaking, in that context, one sometimes says that the regression of $y$ on $x$ 'explains $100r^2\%$ of the variability in $y$'.
This latter interpretation comes from the equation $$s_{y|x}^2 = \frac{n-1}{n-2}\, s_y^2\,(1 - r^2),$$ where $s_{y|x}^2$ is the residual variance about the fitted regression line and $s_y^2$ is the sample variance of $y$.
For example, if $y = b_0 + b_1 x\,$ exactly, then all $(x,y)$ points fall precisely on a line, $r = \pm 1$ (depending on positive or negative slope), $r^2 = 1 = 100\%,$ and all of the variation in $y$ is explained by regression on $x$.
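The identity above can be checked numerically. A minimal sketch, assuming simulated data and a hand-rolled least-squares fit (slope from the sample covariance over the sample variance of $x$): the residual mean square $s_{y|x}^2 = \mathrm{SSE}/(n-2)$ matches $\frac{n-1}{n-2}\,s_y^2\,(1-r^2)$ to floating-point precision, because for ordinary least squares $\mathrm{SSE} = (1-r^2)\,\mathrm{SST}$ exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.normal(size=n)
y = 3.0 + 1.5 * x + rng.normal(size=n)   # arbitrary illustrative model

# Least-squares fit of y on x
b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)

s_y_given_x2 = resid @ resid / (n - 2)   # residual variance s_{y|x}^2
r = np.corrcoef(x, y)[0, 1]
s_y2 = np.var(y, ddof=1)                 # sample variance of y

lhs = s_y_given_x2
rhs = (n - 1) / (n - 2) * s_y2 * (1 - r**2)
print(np.isclose(lhs, rhs))   # True: the identity holds exactly
```

Setting the noise to zero would put all points exactly on the line, giving $r^2 = 1$ and a residual variance of zero, as in the worked example above.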