Can gradient-based optimization techniques be used to solve the Travelling Salesperson Problem?
The Travelling Salesperson Problem (TSP) is a famous optimization problem in which a salesperson has to figure out in which order to travel across $n$ cities, such that each city is visited exactly once and the total distance travelled is minimized.
The "objective function" in this problem is a function that maps "different orders of cities" to the "total distance corresponding to that order of cities". For example, it might map the order $(A, B, C)$ to a total distance of 10, the order $(A, C, B)$ to a total distance of 12, and so on.
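To make the discrete domain concrete, here is a minimal sketch of such an objective function in Python, using a made-up 4-city distance matrix (the instance is hypothetical, not from the question). The function is only defined on permutations of the cities, so its domain is a finite set:

```python
from itertools import permutations

# Hypothetical symmetric distance matrix for cities 0..3.
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def tour_length(order):
    """Total distance of visiting the cities in `order`, returning to the start."""
    return sum(DIST[order[i]][order[(i + 1) % len(order)]]
               for i in range(len(order)))

# The domain is the finite set of permutations, not a continuum,
# so the only fully general "solver" here is enumeration:
best = min(permutations(range(4)), key=tour_length)
```

Note that there is no meaningful notion of an infinitesimal change to `order`, which is exactly what the question is getting at.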
My question: in the case of the Travelling Salesperson Problem, is the objective function "differentiable" in the same sense that the equation of a parabola ($y = x^2$) is differentiable?
At first glance, the answer obviously seems to be "no". After all, how can we take the derivative of such a function? It is not defined over a continuous domain!
Thus, can we say that in the case of the Travelling Salesperson Problem (i.e. discrete combinatorial optimization problems), the objective functions are "non-differentiable"? And if so, is it then "impossible to use gradient-based optimization on the Travelling Salesperson Problem"?
Or can the objective functions still be said to be differentiable even in such cases?
Can someone please comment on this?
Thanks!


A particular case of continuous optimization is linear programming (LP), where we have a linear objective function and a set of linear constraints over variables constrained to be real. LPs are convex and can be solved efficiently, for example with the simplex algorithm or with interior-point methods (the latter do exploit gradient information).
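As a contrast with the discrete case, here is a toy sketch of what gradient-based optimization looks like on a smooth convex function such as the parabola from the question (the instance $f(x) = (x - 3)^2$ is made up for illustration). The continuous domain is what makes the gradient step meaningful:

```python
def grad_descent(grad, x0, lr=0.1, steps=200):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f(x) = (x - 3)^2, so f'(x) = 2 * (x - 3); the unique minimum is at x = 3.
x_star = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Every point `x - lr * grad(x)` is again a valid input, which is precisely what fails for a domain of permutations.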
A related problem is mixed-integer linear programming (MILP), where we also have a linear objective function and a set of linear constraints, but this time some variables can be constrained to be integer. This offers much more expressiveness, and we can easily show that every problem in $NP$ can be expressed as a (polynomially small) MILP instance. However, gradient-based methods no longer apply directly, since the feasible set is now discrete. But if we relax the integrality constraints on the variables, the optimum of the resulting LP often gives a strong dual bound on our problem. Combined with tree search (branch and bound), this can drastically speed up the solution process, though the worst-case complexity remains exponential.
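To illustrate how a relaxation bound combines with branch and bound, here is a hedged pure-Python sketch on a tiny 0/1 knapsack instance (a stand-in for a general MILP; the instance data is made up). The fractional knapsack relaxation plays the role of the LP relaxation, and is conveniently solvable by a greedy pass:

```python
# Hypothetical knapsack instance, items pre-sorted by value/weight ratio.
VALUES  = [60, 100, 120]
WEIGHTS = [10, 20, 30]
CAP     = 50

def relaxation_bound(i, value, room):
    """Upper bound from the relaxed (fractional) knapsack on items i..end.

    Because items are sorted by value/weight ratio, greedily filling the
    remaining room (allowing a fractional last item) solves the relaxation."""
    bound = value
    for j in range(i, len(VALUES)):
        if WEIGHTS[j] <= room:
            room -= WEIGHTS[j]
            bound += VALUES[j]
        else:
            bound += VALUES[j] * room / WEIGHTS[j]  # fractional last item
            break
    return bound

def branch_and_bound():
    best = 0
    stack = [(0, 0, CAP)]  # (next item index, value so far, room left)
    while stack:
        i, value, room = stack.pop()
        best = max(best, value)
        if i == len(VALUES) or relaxation_bound(i, value, room) <= best:
            continue  # prune: the relaxation proves no improvement here
        stack.append((i + 1, value, room))  # branch: skip item i
        if WEIGHTS[i] <= room:              # branch: take item i
            stack.append((i + 1, value + VALUES[i], room - WEIGHTS[i]))
    return best
```

Whenever the relaxation bound for a subtree falls below the best integer solution found so far, the whole subtree is pruned without being enumerated; that pruning is where the speed-up comes from.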
It is still possible to connect combinatorial problems to convex optimization. If we encode each tour as a binary vector, the convex hull of these vectors is a polytope, so in principle the Travelling Salesman Problem can be expressed as a linear program whose constraints are the facets of that polytope. Unfortunately, the TSP polytope has exponentially many facets, and a celebrated result of Fiorini et al. (2012) shows that it admits no polynomial-size extended formulation either, no matter how we model the problem. In fact, we still don't know any polynomial-time algorithm for $NP$-hard problems.
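For reference, the standard Dantzig–Fulkerson–Johnson ILP formulation of the (symmetric) TSP makes the exponential constraint set explicit: $x_{ij}$ indicates whether edge $\{i,j\}$ is used, and there is one subtour-elimination constraint for every proper subset of cities:

$$
\begin{aligned}
\min \;\; & \sum_{i < j} c_{ij}\, x_{ij} \\
\text{s.t. } & \sum_{j \neq i} x_{ij} = 2 && \text{for every city } i, \\
& \sum_{i, j \in S} x_{ij} \le |S| - 1 && \text{for every } S \subset V,\ 2 \le |S| \le n - 1, \\
& x_{ij} \in \{0, 1\}.
\end{aligned}
$$

Relaxing $x_{ij} \in \{0,1\}$ to $0 \le x_{ij} \le 1$ gives the LP whose dual bound is exploited in branch and bound, but the middle family of constraints is exponential in $n$.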