I am currently building a model for a set of raw sea-level data from the NOAA database. On the site, the sea level is recorded every 6 minutes. Because I wouldn't have time to copy data every 6 minutes from May 27th to June 27th, I decided to record only the maximum and minimum points. The graph looks like this:
(Note that the model is in green and is unfinished; the blue line is the one based on the raw data.)
Once I have found a model (a function), I want to measure the error, i.e. find out how accurate my model is.
The problem: since the blue line (raw data) is based only on maximum and minimum points, I can't use MAPE (or another percentage-error method) to calculate the real difference, because it will only tell me how well the maximum and minimum points of my model fit those of the original graph.
Is there another way to calculate the error at different points as well?
I don't quite understand your question: how do you plan to compare your model with the actual data at points other than the maxima and minima when you don't have the actual data there? So when you say "it will only tell me how much the maximum and minimum points on my model fit those of the original graph", I don't see how you could ever hope to do anything else, regardless of the error measure you use.
Is there a reason you can't just use the standard mean squared error? As long as the errors can be assumed to be Gaussian, it will allow you to do statistical inference on the model to determine how well it fits the data.
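As a rough illustration, here is a minimal sketch of computing the mean squared error at the points you did record. The times, sea levels, and the sinusoidal model function below are placeholders standing in for your recorded extrema and your fitted model, not your actual data:

```python
import numpy as np

# Hypothetical observed extrema: times (hours since May 27) and sea levels (m).
# Replace these with the maximum/minimum points you copied from the NOAA site.
t_obs = np.array([0.0, 6.2, 12.4, 18.6, 24.8])
h_obs = np.array([1.52, -0.14, 1.48, -0.10, 1.55])

def model(t):
    """Placeholder tidal model: a single sinusoid with a ~12.4 h period."""
    return 0.8 * np.sin(2 * np.pi * t / 12.4) + 0.7

residuals = h_obs - model(t_obs)   # error at each recorded point
mse = np.mean(residuals ** 2)      # mean squared error
rmse = np.sqrt(mse)                # same units as the sea-level data

print(f"MSE:  {mse:.4f}")
print(f"RMSE: {rmse:.4f} m")
```

The RMSE is often easier to interpret than MAPE here, since it is expressed directly in the units of the sea level rather than as a percentage of values that can be close to zero.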