I'm an Operations Research student trying to wrap my mind around extreme value theory. I've read into EVT, and more specifically into the first theorem, due to Fisher, Tippett and Gnedenko.
In this modeling approach, one applies block maxima modeling (BMM) in order to partition the data into m blocks of size k. My question relates to determining the size of those partitions, i.e. how do I determine m and k?
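To make my understanding of the setup concrete, here is a minimal sketch of BMM as I understand it, using synthetic data and an assumed block size k (the `scipy.stats.genextreme` fit is just one way to estimate the GEV parameters):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
data = rng.exponential(size=1000)  # synthetic data, for illustration only

k = 50                        # block size (an assumed, not optimal, choice)
m = len(data) // k            # number of blocks
blocks = data[:m * k].reshape(m, k)
block_maxima = blocks.max(axis=1)

# Fit a GEV distribution to the block maxima
# (scipy's shape parameter c is the negative of the usual xi convention)
c, loc, scale = genextreme.fit(block_maxima)
print(m, c, loc, scale)
```

So the trade-off I'm asking about is: larger k means the GEV limit is a better approximation, but fewer maxima m are left for estimation, and vice versa.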
In [1] I read about the decision-making process for choosing an optimal m, but it seems to me that it's mostly done in an iterative fashion, repeatedly evaluating the model's accuracy.
What I'd like to know is whether any of you have helpful information on choosing the BMM parameters, or useful tips from your own experience. I'm curious how others have handled this process.
Kind regards, N
[1] Coles, S., Bawa, J., Trenner, L., & Dorazio, P. (2001). An introduction to statistical modeling of extreme values (Vol. 208). London: Springer.