Consider the Gaussian white noise model $$ dX_{t_{1},...,t_{d}}=f(t_{1},...,t_{d})\,dt_{1}...dt_{d}+\theta\, dW_{t_{1},...,t_{d}},$$ where $W$ is a $d$-parameter Wiener field, $f:[0,1]^{d}\rightarrow \mathbb{R}$ is a signal, and $\theta\geq0$ is the noise level. Suppose furthermore that $f$ is $(L,\alpha)$-Hölder continuous, i.e. for all $x,y\in[0,1]^{d}$, $$|f(x)-f(y)|\leq L\|x-y\|_{2}^{\alpha}.$$ As far as I know, this framework is quite common in non-parametric statistics and has been extensively studied; several estimators of $f$ achieve the minimax rate, for example wavelet estimators. Most of these methods involve a smoothing parameter $h$ that needs to be calibrated, and the calibration depends on the regularity parameter $\alpha$. Thus knowledge of $\alpha$ is required, which in practice is often not available.
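To make the dependence on $\alpha$ concrete, here is a minimal sketch in a discretized regression analogue of the model with $d=1$. The signal, noise level, box kernel, and constants below are my own illustrative choices, not taken from any of the papers:

```python
import numpy as np

# Discretized analogue of the white noise model with d = 1: we observe the
# signal at n grid points corrupted by i.i.d. Gaussian noise.  All names
# below (f, theta, n, h_opt) are illustrative assumptions.
rng = np.random.default_rng(0)
n = 500
t = np.linspace(0.0, 1.0, n)
f = lambda x: np.abs(x - 0.5) ** 0.7      # an (L, 0.7)-Hoelder signal
theta = 0.1
y = f(t) + theta * rng.standard_normal(n)

def box_kernel_estimate(t0, h):
    """Local average of the observations over the window |t - t0| <= h."""
    w = np.abs(t - t0) <= h
    return y[w].mean()

# Balancing the bias (of order L * h**alpha) against the stochastic error
# (of order theta / sqrt(n * h)) gives the classical rate-optimal choice
# h ~ (theta**2 / n) ** (1 / (2 * alpha + 1)) -- which requires alpha.
alpha = 0.7
h_opt = (theta**2 / n) ** (1.0 / (2 * alpha + 1))
est = box_kernel_estimate(0.5, h_opt)
```

The point of the last two lines is exactly the calibration issue above: the bandwidth that balances bias and variance depends on the unknown $\alpha$.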
If I understand correctly, the purpose of adaptive estimation in this context is to overcome this issue by proposing ways to choose $h$ without knowledge of $\alpha$. One such method, which I have encountered many times in various papers, is "Lepski's method" (as introduced in "Asymptotically minimax adaptive estimation. I: Upper bounds. Optimally adaptive estimates", Oleg Lepski, 1991).
In several papers I have come across (Lepski's original papers, as well as more recent works by Bertin or Klutchnikoff on anisotropic Hölder signals), applying Lepski's method requires assuming that $\alpha\in[\alpha_{\min},\alpha_{\max}]$ and, furthermore, that $L\in[L_{*},L^{*}]$. Hence, even though this does not require exact knowledge of the values of $\alpha$ and $L$, it still requires some knowledge about them. The assumption on $L$ in particular confuses me, because in the non-adaptive case the estimators are independent of $L$.
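For reference, here is my toy understanding of the selection rule, again in a discretized 1-d analogue. The grid, the box-kernel estimator, and the constant `kappa` are illustrative assumptions on my part; in the rigorous versions, the bounds $[\alpha_{\min},\alpha_{\max}]$ and $[L_{*},L^{*}]$ enter through the choice of the bandwidth grid and the threshold constants, which this sketch glosses over:

```python
import numpy as np

# Toy sketch of Lepski-type bandwidth selection at a single point t0.
# The constant kappa and the threshold are illustrative choices, not
# those of the cited papers.
rng = np.random.default_rng(1)
n = 1000
t = np.linspace(0.0, 1.0, n)
f = lambda x: np.abs(x - 0.5) ** 0.7      # an (L, 0.7)-Hoelder signal
theta = 0.1
y = f(t) + theta * rng.standard_normal(n)

def box_estimate(t0, h):
    """Local average of the observations over the window |t - t0| <= h."""
    w = np.abs(t - t0) <= h
    return y[w].mean()

def stoch_err(t0, h):
    """Standard deviation of the local average: theta / sqrt(#points)."""
    return theta / np.sqrt(np.sum(np.abs(t - t0) <= h))

def lepski(t0, bandwidths, kappa=2.0):
    """Keep the largest h whose estimate stays within kappa * stoch_err
    of the estimates at every smaller bandwidth (bias still dominated by
    noise); stop as soon as the comparison fails."""
    hs = sorted(bandwidths)
    chosen = hs[0]
    for i, h in enumerate(hs[1:], start=1):
        if all(abs(box_estimate(t0, h) - box_estimate(t0, hp))
               <= kappa * stoch_err(t0, hp) for hp in hs[:i]):
            chosen = h
        else:
            break
    return chosen

h_grid = [0.01 * 2**k for k in range(6)]   # geometric bandwidth grid
h_hat = lepski(0.5, h_grid)
```

Note that this naive version never uses $\alpha$ or $L$ explicitly, which is precisely why the role of the assumed bounds on them in the actual proofs puzzles me.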
Are those assumptions on $\alpha$ and $L$ necessary for adaptive estimation in this context? If not, what other methods can be used to get rid of these assumptions on $\alpha$ and $L$?