Suppose $X\sim \mathcal{N}(0,\sigma^2)$, where $\sigma$ is itself unknown, in the sense that all we know is that it is some fixed constant lying in a bounded interval, i.e. $\sigma \in [\sigma_\min, \sigma_\max]$.
Question: Is it plausible to model $\sigma$ as uniformly distributed, i.e. $$\sigma \sim \mathcal{U}[\sigma_\min, \sigma_\max]?$$
If not, what is the best we can do to estimate (or infer about) $\sigma$ knowing only its minimum and maximum values?
The true prior distribution on $\sigma$ could be strongly concentrated at any value within your range, or even be a point mass at a single value in it. So if you have no guess for the prior on $\sigma$, the only thing you can really do is give bounds on the quantities you are trying to compute, for example in terms of the minimum and maximum possible values of $\sigma$.
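As a sketch of the bounding approach: suppose you want $P(X > 1)$ but only know $\sigma \in [\sigma_\min, \sigma_\max]$ (the numerical bounds below are hypothetical). Sweeping $\sigma$ over a grid of the interval gives worst-case bounds on the tail probability without committing to any prior:

```python
import math

def normal_sf(x, sigma):
    # Survival function P(X > x) for X ~ N(0, sigma^2).
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2)))

sigma_min, sigma_max = 0.5, 2.0  # hypothetical known bounds on sigma

# Evaluate the quantity of interest over a grid of admissible sigmas.
sigmas = [sigma_min + i * (sigma_max - sigma_min) / 100 for i in range(101)]
vals = [normal_sf(1.0, s) for s in sigmas]

lo, hi = min(vals), max(vals)
print(f"P(X > 1) lies in [{lo:.4f}, {hi:.4f}]")
```

Here the tail probability is monotone in $\sigma$, so the extremes occur at the endpoints, but the grid sweep works for non-monotone quantities too.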
Alternatively, people try to come up with "flat" uninformative priors. However, this is tricky, because a seemingly flat prior can still bias the posterior if it is not formulated correctly. For the scale of a Gaussian, the standard uninformative choice is the Jeffreys prior $p(\sigma) \propto 1/\sigma$, which can be truncated to your interval $[\sigma_\min, \sigma_\max]$ and renormalized. Look up priors on the parameters of a Gaussian distribution.
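To see that the choice of "flat" prior matters, here is a minimal sketch (with hypothetical data and bounds) that computes the posterior mean of $\sigma$ on a grid under two priors truncated to $[\sigma_\min, \sigma_\max]$: a uniform prior, and the Jeffreys prior $p(\sigma) \propto 1/\sigma$:

```python
import math

data = [0.3, -1.2, 0.8, -0.5, 1.1]   # hypothetical observations from N(0, sigma^2)
sigma_min, sigma_max = 0.5, 2.0      # assumed known bounds on sigma

def log_likelihood(sigma):
    # Gaussian log-likelihood for mean-zero data, up to an additive constant.
    n, s2 = len(data), sum(x * x for x in data)
    return -n * math.log(sigma) - s2 / (2 * sigma ** 2)

grid = [sigma_min + i * (sigma_max - sigma_min) / 400 for i in range(401)]

def posterior_mean(log_prior):
    # Unnormalized log-posterior on the grid, stabilized before exponentiating.
    logs = [log_likelihood(s) + log_prior(s) for s in grid]
    m = max(logs)
    w = [math.exp(l - m) for l in logs]
    return sum(s * wi for s, wi in zip(grid, w)) / sum(w)

mean_uniform = posterior_mean(lambda s: 0.0)            # uniform prior on [min, max]
mean_jeffreys = posterior_mean(lambda s: -math.log(s))  # p(sigma) ∝ 1/sigma, truncated
print(mean_uniform, mean_jeffreys)
```

Because the Jeffreys prior puts more weight on small $\sigma$, its posterior mean comes out below the uniform-prior one, even though both priors look "uninformative" at a glance.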