Let $\mathbb{X}, \mathbb{Y}$ be the training and test sets, respectively, for some data we assume comes from a function $f$. Let $\hat{f}(\theta)$ be a model of $f$ with parameter vector $\theta$. Assume our model fits the data with some accuracy $\mathcal{A}$, measured over the validation set. What does $\mathcal{A}$ tell us about the 'real' relationship between the parameters $\theta$ and $\mathbb{Y}$?
For example, let each $x_i$ be some economic variable (such as GDP, Gini coefficient, etc.) and $Y$ be the overall happiness of a country. Now assume we perform a linear regression, $\hat{Y}=x_1w_1 + x_2w_2 + \dots + x_nw_n$, where the features $x_1, x_2, \dots, x_n$ are weighted by the parameters $w_1, w_2, \dots, w_n$ to predict $Y$. Assume our model achieves $40\%$ accuracy on average under cross-validation.
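To make the setup concrete, here is a minimal sketch of the cross-validated fit I have in mind. The data here is synthetic (I don't have the real economic features, so the feature matrix, the coefficient vector `true_w`, and the noise level are all illustrative assumptions), and I score each fold with $R^2$ as the stand-in for "accuracy":

```python
import numpy as np

# Hypothetical synthetic data standing in for the economic features
# (GDP, Gini coefficient, ...) and a happiness score per country.
rng = np.random.default_rng(0)
n_samples, n_features = 200, 3
X = rng.normal(size=(n_samples, n_features))
true_w = np.array([0.5, -0.3, 0.2])  # assumed "real" weights
# Happiness = linear signal + noise, so the model can only partly explain it.
Y = X @ true_w + rng.normal(scale=1.0, size=n_samples)

def cv_r2(X, Y, k=5):
    """Average R^2 over k-fold cross-validation of an ordinary
    least-squares fit Y_hat = X w."""
    folds = np.array_split(np.arange(len(Y)), k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Fit w on the training folds only.
        w, *_ = np.linalg.lstsq(X[train], Y[train], rcond=None)
        # Score on the held-out fold.
        resid = Y[test] - X[test] @ w
        ss_res = np.sum(resid ** 2)
        ss_tot = np.sum((Y[test] - Y[test].mean()) ** 2)
        scores.append(1.0 - ss_res / ss_tot)
    return float(np.mean(scores))

score = cv_r2(X, Y)
print(f"mean cross-validated R^2: {score:.2f}")
```

With the noise level chosen above, the model recovers only a fraction of the variance in `Y`, which is exactly the situation in the question: a score well below $1$ that is nonetheless clearly better than guessing.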
$40\%$ is not a lot, but it is definitely something, given that we are only using economic variables to predict happiness. But what exactly is this accuracy telling us? The model clearly suggests that happiness has something to do with the economic variables, since they take our predictive power from mere guessing to $40\%$ accuracy, but how do we know this is not just an artifact of our model? In other words, what conclusions can we draw about the 'real' relationship between economic variables and happiness?