I have some time series data. The data changes continuously; thus the underlying true distribution is not constant, although it does not change rapidly.
What I do is:
1. I find well-performing predictors on the data from the oldest point up to 365 days ago and put them into set A.
2. Out of set A, I test which predictors perform well on the data from 365 days ago to today and put them into set B.
3. Out of set B, I test which predictor has the best performance on the entire data set. I then use that predictor to predict new incoming data. I do this on a rolling basis, daily.
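To make the procedure concrete, here is a minimal sketch of the three filtering steps. Everything in it is an assumption for illustration: the synthetic data, the toy predictors, the mean-squared-error scoring, and the median-based cutoffs for "good performance" (the question does not specify how sets A and B are thresholded).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily series as a stand-in for the real data (hypothetical).
data = rng.normal(size=1000)
old, recent = data[:-365], data[-365:]  # oldest-to-365-days-ago vs. last 365 days

# Hypothetical predictors: each maps a window of past values to a one-step prediction.
predictors = {
    "mean_5":  lambda x: x[-5:].mean(),
    "mean_20": lambda x: x[-20:].mean(),
    "last":    lambda x: x[-1],
}

def score(pred, series, lookback=20):
    """Mean squared one-step-ahead error over the series (lower is better)."""
    errs = [(pred(series[i - lookback:i]) - series[i]) ** 2
            for i in range(lookback, len(series))]
    return float(np.mean(errs))

# Step 1: predictors that perform well on the old data -> set A
# (here "well" is taken as at-or-below-median error, an arbitrary choice).
scores_old = {name: score(p, old) for name, p in predictors.items()}
cutoff_a = np.median(list(scores_old.values()))
set_a = {n: predictors[n] for n, s in scores_old.items() if s <= cutoff_a}

# Step 2: of set A, keep those that also do well on the recent 365 days -> set B.
scores_recent = {n: score(p, recent) for n, p in set_a.items()}
cutoff_b = np.median(list(scores_recent.values()))
set_b = {n: set_a[n] for n, s in scores_recent.items() if s <= cutoff_b}

# Step 3: of set B, pick the single best predictor on the entire data set.
best = min(set_b, key=lambda n: score(set_b[n], data))
print(best)
```

In a rolling daily setup, the whole script would be re-run each day with the window boundaries shifted forward by one day.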
Is this a good method to validate predictors?
Is step 3 necessary, given that step 2 already measures the predictors' recent performance? Should I just pick the best predictor resulting from step 2?