Suppose there is an algorithm that runs on a finite set. If we do not directly specify a halting condition, such as reaching a certain value or stopping after a given number of iterations, what methods are there for proving that the algorithm will eventually halt?
I know at least one: the potential-function argument, used to prove that best-response dynamics in a game halt if the game admits a potential function. I want to know other methods.
First of all, you should make sure that your algorithm can stop at all. This means that at some point there must be a condition that allows the algorithm to break.
Then you should prove that this condition eventually becomes true.
Some conditions are rather simple to prove, e.g. those of for loops. A variant connected to potential functions is counting states. In a finite setting, the algorithm can only be in a finite number of states. If you can make sure that the algorithm never returns to a state it has already visited, you are safe: it must halt after at most as many steps as there are states.
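The state-counting argument can be sketched as a small harness. This is a toy illustration; the name `run_with_cycle_check` and the halving example are made up here, not taken from any library:

```python
def run_with_cycle_check(step, state):
    """Run a deterministic algorithm on a finite state space.

    `step` returns the next state, or None to halt.  Because the
    state space is finite, either the run halts or some state must
    eventually repeat -- and a repeated state means the algorithm
    would loop forever, which we detect and report.
    """
    seen = set()
    while state is not None:
        if state in seen:
            raise RuntimeError(f"cycle detected at state {state!r}")
        seen.add(state)
        state = step(state)
    return len(seen)  # number of distinct states visited

# A toy algorithm: repeatedly halve n, halting at 0.  Each step
# strictly decreases the state, so no state can ever repeat, and
# the run must end within |states| steps.
steps = run_with_cycle_check(lambda n: n // 2 if n > 0 else None, 1024)
```

Any strictly decreasing (or strictly increasing but bounded) quantity gives the same guarantee, since it forces every visited state to be new.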
An example where you would take care of that is the simplex algorithm. It may happen that the algorithm cycles through some sequence of states and thus never terminates. In that case, you would need to store all states it has visited in order to break out (in practice, anti-cycling pivot rules such as Bland's rule are used instead).
A simple variant of that is the Ford–Fulkerson max-flow algorithm (for integral capacities). The flow value strictly increases in each iteration (so no state can be visited twice), and it is bounded from above by the total capacity leaving the source. This argument fails for irrational capacities; in that case the algorithm may run forever.
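As a sketch of that argument, here is a minimal augmenting-path implementation on integral capacities (BFS is used only as one possible way to find an augmenting path; the dictionary-based graph representation is likewise just an illustrative choice):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Ford-Fulkerson on integral capacities.

    Termination argument: every augmenting path raises the flow
    value by at least 1 (capacities are integers), and the flow is
    bounded above by the total capacity out of the source, so the
    outer loop runs at most that many times.
    """
    # residual[u][v] = remaining capacity on edge (u, v)
    residual = {u: dict(edges) for u, edges in capacity.items()}
    for u, edges in capacity.items():
        for v in edges:
            residual.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximal
        # collect the path edges and their bottleneck capacity
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck  # strictly increases: >= 1 per iteration
```

With irrational capacities, `bottleneck` can shrink without ever reaching zero, which is exactly why the integrality assumption matters for the termination proof.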
There are also formal methods of proving correctness, such as Hoare logic and semantic proof strategies. These establish total correctness, i.e. partial correctness plus termination, but in practice they are only tractable for rather simple algorithms, such as computing $n^k$.
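For the $n^k$ example, the flavour of a Hoare-style total-correctness proof can be mimicked with runtime assertions. This is only a sketch: a real Hoare-logic proof is carried out on paper or in a verifier, not with `assert` statements.

```python
def power(n, k):
    """Compute n**k, annotated in the style of a Hoare-logic proof.

    The loop invariant proves partial correctness; the variant (a
    strictly decreasing natural number) proves termination.  Together
    they give total correctness.
    """
    assert k >= 0                    # precondition
    result, i = 1, 0
    while i < k:
        assert result == n ** i      # invariant holds on loop entry
        variant = k - i              # variant: a natural number
        result *= n
        i += 1
        assert result == n ** i      # invariant is preserved
        assert 0 <= k - i < variant  # variant strictly decreased
    # invariant plus the negated guard (i == k) give the postcondition
    assert result == n ** k
    return result
```

The termination part is again a potential-function argument in disguise: the variant `k - i` plays exactly the role of the potential.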