I need to optimize a function $f:\mathbb{R}^n\to\mathbb{R}$ whose explicit form is unknown, but which I can evaluate at any point of its domain. The only thing I know is that the function is piecewise constant. Moreover, each evaluation of the function is time-consuming, so it is preferable that the set of points at which the function is evaluated remain relatively small.
I wanted to use some kind of direct search method, but it appears that these methods do not converge for non-differentiable functions.
Is there any other method that can be used to optimize a black-box function built from step functions? Alternatively, is there any argument I can use to justify the use of a direct search algorithm even though it does not converge?
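To make the question concrete, here is a minimal sketch of compass search, one basic direct search method, run against a hypothetical 1-D step function standing in for the black box. The step values and starting point are made up for illustration; this is not a claim about any particular library's implementation.

```python
import numpy as np

def compass_search(f, x0, step=0.5, tol=1e-6, max_evals=200):
    """Minimal compass (direct) search: probe +/- step along each axis,
    move to the first improving point; if no probe improves, halve the step."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                evals += 1
                if fy < fx:
                    x, fx = y, fy
                    improved = True
                    break
            if improved:
                break
        if not improved:
            step /= 2  # shrink the pattern when no neighbor improves
    return x, fx, evals

# Hypothetical black box: constant on each interval [j/10, (j+1)/10)
vals = [3, 1, 4, 1, 5, 9, 2, 6, 5, 0]
f = lambda x: vals[min(int(np.clip(x[0], 0.0, 0.999) * 10), 9)]

x_best, f_best, n_evals = compass_search(f, [0.55])
```

The convergence issue shows up here: once every probe point lands inside the same constant piece as the current iterate, no probe improves, so the step shrinks to `tol` and the search stops wherever it happens to be, with no guarantee about the plateau it stopped on.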
Many thanks
Surely you know a little more about your function; otherwise, I would say the problem looks quite hopeless. Knowing only that it is a step function gives you almost no information.
Let us take $n=1$, and even imagine that your function is only defined on $[0,1]$ (rather than the full real line $\mathbb{R}$), so that the domain we are searching over is bounded. Let us also imagine that you know your function has exactly $10$ steps, located on the intervals $[\frac{j-1}{10}, \frac{j}{10}]$ for $j \in \{1, \dotsc, 10\}$. Then your function is characterized by its $10$ values on these $10$ steps. If you have no a priori information on these values, then there is no better algorithm than to compute the ten values by evaluating $f$, say, at the midpoint of each step.
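This midpoint strategy can be sketched in a few lines; the step values below are arbitrary placeholders for the unknown black box, chosen only to make the example runnable.

```python
import numpy as np

# Hypothetical step function on [0, 1] with 10 equal steps; the values
# stand in for an unknown black box (chosen arbitrarily for illustration).
vals = np.array([7, 3, 8, 1, 5, 9, 2, 6, 4, 0])
def f(x):
    return vals[min(int(x * 10), 9)]

# With known step boundaries, one evaluation per step suffices:
midpoints = (np.arange(10) + 0.5) / 10        # 0.05, 0.15, ..., 0.95
samples = [f(m) for m in midpoints]
best = midpoints[int(np.argmin(samples))]     # a point in the minimizing step
```

Ten evaluations recover the whole function exactly, and no algorithm using fewer evaluations can be certain of the minimum, since any unvisited step could hold an arbitrarily small value.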
You have to add some kind of assumption (even an implicit one) on the behavior of the function or its steps to be able to use a cleverer algorithm. Otherwise, as we have seen, the values could be completely arbitrary, even with a priori knowledge of the number of steps and their positions.