As I understand it, when you fail to reject the null hypothesis that the slope is 0 in simple linear regression using a t-test, it suggests the linear model degenerates into an intercept-only model, i.e. a mean-only model. So for future estimates of $y$, we shouldn't bother with the explanatory variable, and $\bar{y}$, the sample mean, would be the more natural choice.
Also as I understand it, one (really rough) way of looking at the coefficient of determination is as the proportional improvement of the least-squares regression line over the mean-only model in explaining variation in $y$. So it seems closely related to the hypothesis test above. Of course, $r^2$ is never exactly 0 in practice, so there is always some slight improvement with the linear model. But the question becomes: when is the improvement worth it?
Does the t-test for the significance of the slope answer this question, whereas $r^2$ gives a sense of magnitude? In fact, on further thought, isn't the t-statistic just the square root of the F-statistic? A test based on the F-statistic seems to really be a test about $r^2$... is that the connection I'm looking for?
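To make the question concrete, here is a small numerical check (a sketch using simulated data and `scipy.stats.linregress`; the data and sample size are arbitrary) of the two identities I have in mind: $t^2 = F$ for the slope in simple regression, and $F = \frac{r^2}{1-r^2}(n-2)$.

```python
import numpy as np
from scipy import stats

# Simulated data for illustration only
rng = np.random.default_rng(0)
n = 30
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(size=n)

res = stats.linregress(x, y)

# t-statistic for H0: slope = 0
t = res.slope / res.stderr

# F-statistic expressed through r^2 (df = 1 and n - 2)
r2 = res.rvalue ** 2
F = r2 / (1 - r2) * (n - 2)

print(t ** 2, F)  # the two agree to numerical precision
```

If these identities hold in general, then the t-test for the slope and an F-test on $r^2$ would be the same test, which is exactly the connection I am asking about.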