What will happen if we minimize a concave function via gradient descent? Where does it get stuck?
Intuitively, a concave function has more structure than an arbitrary function, and so seems easier to minimize. It seems to me that the minimum of a concave differentiable function occurs on the boundary of the domain.
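That boundary intuition is easy to check numerically. A minimal sketch (the function and interval below are my own choice of example): evaluate a concave function on a grid over a closed interval and see where the smallest value lands.

```python
# Numerical sketch: a concave function on a closed interval attains
# its minimum at an endpoint, never in the interior.
f = lambda x: -(x ** 2)                          # concave on all of R
xs = [-2 + 3 * i / 1000 for i in range(1001)]    # grid on [-2, 1]
x_min = min(xs, key=f)                           # grid point with smallest f
# x_min is -2.0, the left endpoint of the interval
```

For a strictly concave function every interior point lies above the chord through the endpoints, so the minimum over any closed interval is always at one of the two ends.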
A computer science / machine learning perspective:
Basically, it depends on how you implement your gradient descent. In the general case, it won't get stuck: it will keep searching indefinitely. A gradient descent algorithm includes a stopping rule (check here also), namely a condition that signals that a minimum has been found. On a concave function this condition is never satisfied, so the algorithm will keep searching forever unless you add an extra rule (such as an iteration cap) to stop the infinite search.
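A minimal sketch of this behaviour (the stopping rule here is a gradient-norm tolerance, and the iteration cap is my own addition for illustration):

```python
def grad_descent(grad, x0, lr=0.1, tol=1e-8, max_iters=1000):
    """Gradient descent with a stopping rule plus an iteration cap."""
    x = x0
    for i in range(max_iters):
        g = grad(x)
        if abs(g) < tol:         # stopping rule: gradient (near) zero
            return x, i, True    # converged to a stationary point
        x -= lr * g              # descend: step against the gradient
    return x, max_iters, False   # cap hit: no stationary point found

# Concave example f(x) = -x**2, with gradient f'(x) = -2x.
# Each step multiplies x by (1 + 2*lr), so the iterate runs away
# from the origin and the stopping rule never fires.
x, iters, converged = grad_descent(lambda x: -2 * x, x0=1.0)

# Convex sanity check f(x) = x**2: the same routine converges quickly.
x_cvx, iters_cvx, converged_cvx = grad_descent(lambda x: 2 * x, x0=1.0)
```

On the concave example the loop only terminates because of the `max_iters` cap, with `x` having grown enormous; on the convex one the gradient shrinks geometrically and the stopping rule fires after a few dozen steps.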
Now, if your function attains its minimum on the boundary, then I don't see much point in talking about minimizing a concave function with gradient descent. For a concave function, it would be more meaningful to search for the maximum with gradient ascent.
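Gradient ascent is the same loop with the sign of the step flipped. A sketch on a concave function with a unique maximum (the specific function and learning rate are illustrative choices):

```python
def grad_ascent(grad, x0, lr=0.1, tol=1e-8, max_iters=1000):
    """Gradient ascent: identical to descent but steps along the gradient."""
    x = x0
    for _ in range(max_iters):
        g = grad(x)
        if abs(g) < tol:   # stationary point reached
            return x, True
        x += lr * g        # ascend: step *with* the gradient
    return x, False

# Concave f(x) = -(x - 3)**2 has a unique maximum at x = 3,
# and its gradient is f'(x) = -2 * (x - 3).
x_max, ok = grad_ascent(lambda x: -2 * (x - 3), x0=0.0)
```

Because a concave function's gradient vanishes exactly at its maximizer, the same stopping rule that never fires under descent now terminates the search at the maximum.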