How can I prove that if you apply some algorithm over and over again on a solved Rubik's cube, the cube will be solved?
I mean mathematically not conceptually.
As others have noted, this is a theorem of group theory, but it's easy enough to give the argument without explicitly mentioning groups. First, observe that any sequence of moves can be undone by performing the moves in reverse, returning the cube to its earlier position. Now observe that if you repeat the same algorithm over and over again, and at any point you return to a configuration you've seen before, then you must have returned to your initial configuration before that. (Say you see a certain configuration after 100 iterations and again after 500 iterations. Undoing the last 100 iterations from the position at 500 gives the same result as undoing them from the identical position at 100, namely the initial configuration, so you must have seen the initial configuration after 400 iterations.) Finally, observe that there are only finitely many configurations, so by the pigeonhole principle you must eventually see one twice.
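If it helps to see this concretely, here's a minimal Python sketch of the argument (my own toy illustration, nothing cube-specific): a fixed permutation of 8 positions stands in for "one run of the algorithm", and iterating it shows that the first configuration to repeat is the starting one.

```python
from itertools import count

def first_repeat(step, start):
    """Iterate `step` from `start` until some configuration repeats;
    return how many iterations that took and the repeated configuration."""
    seen = {start}
    state = start
    for n in count(1):
        state = step(state)
        if state in seen:
            return n, state
        seen.add(state)

# Toy stand-in for "one run of the algorithm": a fixed permutation of 8
# positions (a real cube permutes 48 stickers, but the argument is identical).
perm = (2, 0, 1, 4, 3, 6, 7, 5)

def apply_perm(s):
    return tuple(s[i] for i in perm)

solved = tuple(range(8))
n, repeated = first_repeat(apply_perm, solved)
# Because every algorithm can be undone, the first configuration to repeat
# is the starting one -- exactly what the rewinding argument predicts.
assert repeated == solved
print(n)  # 6 for this particular toy permutation
```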
The set of "moves" on a Rubik's cube forms a group, and since there are only finitely many states a Rubik's cube can be in, this group is finite. It is an early theorem of group theory that every element of a finite group has finite order. That is, repeating a move of a finite group will always bring you back to where you started in a finite number of repetitions.
If you want me to prove any of these claims, or elaborate more on why the Rubik's cube moves form a group, etc., let me know, and I'll elaborate when I'm at a desktop instead of on mobile. I'm not sure what your background with group theory is, so I'm not sure what you need me to include.
Edit:
Ok, I'm back. Let's go a bit more in depth on how everything I just mentioned works.
First - What does the Rubik's cube have to do with a group?
Second - Why does any element of a finite group have finite order, and what does this mean?
Third - Why does this let us conclude that, if we start from a solved cube, repeating any sequence of moves will always (eventually) get us back to a solved state?
Let's look at the set of all sequences of moves, or "algorithms" as the cubers call them. I claim these form a group, where group multiplication is "do the first algorithm, then do the second algorithm".
Notice there is an identity algorithm, where we just... don't move anything.
There is also an inverse algorithm for any algorithm. We reverse the order and direction of each "primitive" move. So ULR' --> RL'U'. It is easy to see that this will always give an inverse (or you could induct on the length of the algorithm, if you want).
Finally, algorithm-composition is associative. Doing algorithm AB then C is the same sequence of moves as doing algorithm A then BC, because we do the same moves in the same order no matter what.
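If a concrete model helps, here's a minimal Python sketch of this group structure, under the assumption that we write algorithms as lists of move symbols like "U", "L'", "R2"; it only manipulates notation and doesn't simulate the cube itself.

```python
def compose(a, b):
    """Group multiplication: do algorithm `a`, then do algorithm `b`."""
    return a + b

IDENTITY = []  # the "do nothing" algorithm

def invert_move(m):
    """U -> U', U' -> U, and U2 -> U2 (a half turn is its own inverse)."""
    if m.endswith("2"):
        return m
    return m[:-1] if m.endswith("'") else m + "'"

def invert(algorithm):
    """Reverse the order and invert each primitive move."""
    return [invert_move(m) for m in reversed(algorithm)]

print(invert(["U", "L", "R'"]))       # ['R', "L'", "U'"], i.e. ULR' --> RL'U'
print(compose(["U", "L"], IDENTITY))  # ['U', 'L']: composing with the identity changes nothing
```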
Ok, so we have a group. But there are infinitely many algorithms I could try! After all, L, LL, LLL, LLLL, LLLLL, ... and so on are all different algorithms. Of course, we know that LLLL is the same thing as the identity algorithm.
Let's see how to make this more general. Let's identify an algorithm A with the state it brings the Rubik's cube to when we start from a solved cube. So the identity algorithm, LLLL, UUUU, and so on all correspond to a solved cube, whereas L corresponds to the cube with, say, the orange face rotated a quarter turn (here I'm holding the cube with white on the bottom and orange on the left).
Now, there are only a finite number of states the Rubik's cube can be in, so many of our algorithms will correspond to the same state. If we consider two algorithms "equivalent" whenever they send the solved cube to the same state, we will get a quotient group with only finitely many elements -- one for each state the cube can be in.
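Here's a small Python sketch of that quotient idea on a toy puzzle (permutations of three pieces with two made-up primitive moves, not actual cube moves): infinitely many algorithm strings collapse into finitely many classes, one per reachable state.

```python
from itertools import product

# A toy puzzle standing in for the cube: permutations of three pieces, with
# two made-up primitive moves "a" and "b".
SOLVED = (0, 1, 2)
MOVES = {
    "a": lambda s: (s[1], s[0], s[2]),  # swap the first two pieces
    "b": lambda s: (s[2], s[0], s[1]),  # cycle all three pieces
}

def state_of(algorithm):
    """The state an algorithm produces when started from the solved puzzle."""
    state = SOLVED
    for move in algorithm:
        state = MOVES[move](state)
    return state

# Group all algorithms of length <= 4 by the state they produce: there are
# infinitely many algorithms, but only finitely many classes -- one per
# reachable state.
classes = {}
for length in range(5):
    for algorithm in product("ab", repeat=length):
        classes.setdefault(state_of(algorithm), []).append("".join(algorithm) or "<>")

print(len(classes))  # 6: the toy puzzle can reach every permutation of 3 pieces
for state, algorithms in classes.items():
    print(state, algorithms[:4])  # a few of the algorithm strings landing on each state
```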
It is possible that this is too complicated, so I'll give a simple example of the same principle. Let's say we have a button which turns a light on and off. There is only one "move" here, P, for "push the button". We again have a bunch of "algorithms" we can build out of this, but each one looks like $P^n$ for some number of presses. These algorithms again form a group (albeit a much simpler one). We again consider two algorithms "equivalent" if they leave the light in the same position. So P, PPP, PPPPP, and so on are all equivalent, because they turn the light on. Similarly, PP, PPPP, PPPPPP, and so on (plus the identity algorithm) are equivalent because they leave the light off.
Now we can quotient our group by this equivalence, and we are left with a group with 2 elements: $\{P, PPP, PPPPP, \ldots\}$ and $\{\langle \rangle, PP, PPPP, PPPPPP,\ldots\}$. Here we have one group element for each reachable state. We are doing the same thing with our rubix cube, it is just a more complicated object (Also, I wrote $\langle \rangle$ for the "do nothing" algorithm, because it's hard to show it otherwise).
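In code, the light-switch quotient looks like this (a minimal sketch; light_state is a made-up helper name):

```python
# The light-switch example: the only primitive move is P, and the effect of
# P^n depends only on n mod 2.
def light_state(n_presses, start="off"):
    """State of the light after n presses of the button."""
    toggled = n_presses % 2 == 1
    if start == "off":
        return "on" if toggled else "off"
    return "off" if toggled else "on"

# Group algorithms by the state they leave the light in: exactly two classes.
classes = {}
for n in range(8):
    classes.setdefault(light_state(n), []).append("P" * n or "<>")
print(classes)
# {'off': ['<>', 'PP', 'PPPP', 'PPPPPP'], 'on': ['P', 'PPP', 'PPPPP', 'PPPPPPP']}
```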
Ok, so now we know that we have one group element for each reachable state. Let's show a group theory fact that will help us:
Theorem: If $G$ is a finite group, and $g \in G$, then $g^n = 1$ for some $n \geq 1$.
Proof: Consider $\{g^0, g^1, g^2, g^3,\ldots\}$. This set is infinite, but $G$ is finite. So there must be some $g^i = g^j$ by the pigeonhole principle. Without loss of generality, say $i < j$. Then by multiplying both sides by $g^{-i}$, we see $g^i = g^j$ becomes $1 = g^0 = g^{j-i}$. So repeating $g$ some number of times (in particular $j-i$ times) is the same thing as doing nothing.
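The theorem is easy to watch in action, using permutations as a concrete finite group (a sketch with my own helper names; the loop terminates precisely because of the pigeonhole step above):

```python
def compose_perms(p, q):
    """The permutation "do p, then q", acting on positions 0..n-1."""
    return tuple(q[p[i]] for i in range(len(p)))

def order(g):
    """Smallest n >= 1 with g^n = identity; finite for any permutation."""
    identity = tuple(range(len(g)))
    power, n = g, 1
    while power != identity:
        power = compose_perms(power, g)
        n += 1
    return n

# For permutations, the order works out to the lcm of the cycle lengths.
print(order((2, 0, 1, 4, 3, 6, 7, 5)))  # 6
```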
Finally we see the light at the end of the tunnel! We know that the set of all algorithms (where we identify two algorithms that "do the same thing") is a finite group! So by the above theorem, we are guaranteed that any algorithm $A$ has $A^n = 1$ for some $n$. Now we are done! Because $A^n = 1$ means $A^n$ does the same thing as the do nothing algorithm, so $A^n$ doesn't change the cube.
I hope this helps ^_^