Across the set of $d$-regular graphs on $n$ vertices, if that set contains a strongly regular graph, it often (always, as far as I was able to check) seems to maximize the spectral gap $\lambda_1 - \lambda_2$, the difference between the two leading eigenvalues of the adjacency matrix. Since these graphs are regular, $\lambda_1 = d$, so equivalently the strongly regular graph seems to minimize $\lambda_2$. Is this always the case? If not, why does it seem to happen so often (this can be a soft question), and are there additional constraints under which this property is guaranteed?
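For concreteness, here is a minimal sketch of the kind of check I have in mind (assuming `networkx` and `numpy` are available; the helper `second_eigenvalue` is just an illustrative name), comparing the Petersen graph, which is the $(10, 3, 0, 1)$ strongly regular graph, against random $3$-regular graphs on $10$ vertices:

```python
import networkx as nx
import numpy as np

def second_eigenvalue(G):
    """Return lambda_2, the second-largest adjacency eigenvalue of G."""
    A = nx.adjacency_matrix(G).toarray().astype(float)
    eigs = np.linalg.eigvalsh(A)   # sorted in ascending order
    return eigs[-2]

# Strongly regular example: the Petersen graph has lambda_2 = 1.
print("Petersen lambda_2:", second_eigenvalue(nx.petersen_graph()))

# Compare with a sample of random 3-regular graphs on 10 vertices.
best = min(second_eigenvalue(nx.random_regular_graph(3, 10, seed=s))
           for s in range(50))
print("smallest lambda_2 among random samples:", best)
```

In every such experiment I tried, the strongly regular graph came out with the smallest $\lambda_2$ in its class, but of course this is only sampling, not a proof.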
This might be related to the fact that strongly regular graphs are often good expanders, which implies a large spectral gap. It might also be related to the eigenvalues having a fixed spread (for example, $\sum_i \lambda_i^2 = \operatorname{tr}(A^2) = nd$ is the same for every $d$-regular graph on $n$ vertices), which, coupled with their high eigenvalue multiplicities, would draw the rest of the spectrum $\lambda_2 \geq \dots \geq \lambda_n$ close together.
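To make the "high multiplicity" remark concrete (writing the standard strongly regular parameters as $(n, d, a, c)$ to avoid clashing with the eigenvalue notation above): a connected strongly regular graph has only three distinct adjacency eigenvalues, namely $d$ with multiplicity $1$ and the two roots of $x^2 - (a - c)x - (d - c) = 0$,
$$
r, s \;=\; \frac{(a - c) \pm \sqrt{(a - c)^2 + 4(d - c)}}{2},
$$
each with large multiplicity. For the Petersen graph, $(n, d, a, c) = (10, 3, 0, 1)$ gives $r = 1$ and $s = -2$, so the whole spectrum below $\lambda_1 = 3$ is packed into just two values.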