In the context of Decision Making and Game Theory, "Bellman's Equations and Bellman's Conditions of Optimality" are said to be some of the most important mathematical principles in this field.
Reading the corresponding Wikipedia page (https://en.wikipedia.org/wiki/Bellman_equation), Bellman's Principle of Optimality is defined as follows:

Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.
I am trying to understand what is considered so "groundbreaking" about the above statements.
As far as I understand, Bellman's Principle of Optimality says that, for a policy to be optimal, it must remain optimal at each time point from which it is considered. If I have understood this correctly - isn't this kind of obvious?
To me, this sounds like a tautology - for something to be blue, the thing must also be blue.
I am clearly not understanding the above statements properly.
In short, could someone please explain why Bellman's Equations and Bellman's Conditions of Optimality are considered so "important and groundbreaking"?
Thanks!

Bellman's optimality principle:
An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.
First of all, optimal control was an open problem at the time. So, any solution to that problem was groundbreaking.
Bellman proposed one such solution, which he called Dynamic Programming. Another is based on Pontryagin's Maximum Principle, which, unlike Dynamic Programming, was first derived for continuous-time processes.
A statement being simple to state does not mean it was simple to obtain or figure out. Once one understands the solution to a problem, everything looks obvious in hindsight. Dynamic Programming is around 70 years old and is now well-established and well-understood; this is one reason it can be explained in quite simple terms. Another is that it is also a very natural approach.
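To sketch what "simple terms" means here: for a finite-horizon, discrete-time problem, the principle turns into a recursion on the cost-to-go. The notation below ($V_t$ for the value function, $c$ for the stage cost, $f$ for the dynamics, $T$ for the horizon) is standard textbook notation, not taken from the quoted sources:

```latex
V_t(s) \;=\; \min_{a}\Big[\, c(s,a) + V_{t+1}\big(f(s,a)\big) \,\Big],
\qquad V_T(s) = c_T(s).
```

Solving this recursion backward from $t = T-1$ down to $t = 0$ gives the optimal cost and an optimal decision at every state and stage.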
Bellman's optimality principle is an interpretation of dynamic programming, but he likely did not develop dynamic programming from this statement.
What was groundbreaking at the time is that optimality is established by making the decisions backwards in time. That makes perfect sense once you think about it: if we decided forward in time, there would be no reason why the decision made now would lead to an overall optimal policy (i.e. an optimal total cost). Going backward in time, you can always choose the best decision at each stage, because the cost of everything that follows is already known to be optimal.
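The backward-in-time reasoning above can be sketched in a few lines of code. The problem below (a 3-stage decision problem with two states, costs, and deterministic transitions) is entirely made up for illustration; the point is only the backward sweep:

```python
# Hypothetical 3-stage problem: at each stage we are in state 0 or 1,
# pick action "a" or "b", pay a cost, and move to a next state.
# transitions[stage][state][action] = (cost, next_state)
transitions = {
    0: {0: {"a": (2, 0), "b": (5, 1)}, 1: {"a": (1, 1), "b": (3, 0)}},
    1: {0: {"a": (4, 1), "b": (1, 0)}, 1: {"a": (2, 0), "b": (6, 1)}},
    2: {0: {"a": (3, 0), "b": (2, 1)}, 1: {"a": (1, 0), "b": (4, 1)}},
}
horizon = 3
terminal_cost = {0: 0, 1: 0}

# Backward induction: the value of a state at stage t is the best
# immediate cost plus the (already optimal) cost-to-go from stage t+1.
value = dict(terminal_cost)
policy = {}
for t in reversed(range(horizon)):
    new_value = {}
    for state, actions in transitions[t].items():
        best_action, best_cost = min(
            ((a, c + value[nxt]) for a, (c, nxt) in actions.items()),
            key=lambda pair: pair[1],
        )
        new_value[state] = best_cost
        policy[(t, state)] = best_action
    value = new_value

print(value)   # optimal total cost from each initial state: {0: 5, 1: 5}
print(policy)  # optimal decision at every (stage, state) pair
```

Note that a forward pass could not do this: the best action at stage 0 depends on the costs-to-go at stage 1, which are only known once the later stages have been solved.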