I wondered the other day whether there is an optimum strategy for the famous children's game "it" in a formalised, game-theoretic sense. The game is as follows:
The "play area" is an $m \times n$ rectangle. The "players" are two circles with initial centers $\Omega_0$ and $\Omega_1$ and radii $r_0$ and $r_1$. In each "frame" (unit of time), each player may move their circle's center in any direction by up to 0.1 units of distance. The only restriction is that a center may never move to a position where the circle's circumference extends beyond the boundary of the play area.
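To pin the rules down, here is a minimal sketch of one frame in Python (the function name `step` and the constant `MAX_STEP` are my own; the 0.1 cap and boundary clamp are exactly as described above):

```python
import math

MAX_STEP = 0.1  # maximum distance a center may move per frame

def step(center, move, radius, m, n):
    """Apply one frame's move: cap its length at MAX_STEP, then clamp the
    center so the circle stays entirely inside the m x n play area."""
    dx, dy = move
    dist = math.hypot(dx, dy)
    if dist > MAX_STEP:  # cap the travel distance at 0.1 units
        dx, dy = dx * MAX_STEP / dist, dy * MAX_STEP / dist
    # clamp so the circumference never crosses the boundary
    x = min(max(center[0] + dx, radius), m - radius)
    y = min(max(center[1] + dy, radius), n - radius)
    return (x, y)
```

So a requested move of length 1 is shortened to 0.1, and a circle near a wall simply stops at the wall.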
Player 1 is "it", so their goal is to reach a non-zero area of intersection between the two circles, while Player 2 wants to avoid this for as long as possible. Moves in a frame happen simultaneously: both players choose where they want to move without knowledge of the other's choice, and then the frame is carried out and both are moved. The question is: does Player 1 have an optimum strategy in the form of a function that takes the current centers of both players' circles, their radii, and the size of the play area, and outputs an optimum direction and a distance of at most 0.1 units to travel?
Further, if such a thing exists, is it a winning strategy?
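For concreteness, here is the shape such a strategy function would take, filled in with the naive straight-line-pursuit heuristic (a sketch of my own; nothing here claims this is the optimum strategy the question asks about):

```python
import math

MAX_STEP = 0.1

def greedy_pursuer(c_it, c_evader, r_it, r_evader, m, n):
    """Heuristic strategy for Player 1: move the full 0.1 units straight
    towards Player 2's current center, clamped to the play area.
    Takes both centers, both radii, and the area size; returns the new
    center for Player 1 (a hypothetical example, not a proven optimum)."""
    dx = c_evader[0] - c_it[0]
    dy = c_evader[1] - c_it[1]
    dist = math.hypot(dx, dy)
    if dist > 0:
        step_len = min(MAX_STEP, dist)
        dx, dy = dx * step_len / dist, dy * step_len / dist
    x = min(max(c_it[0] + dx, r_it), m - r_it)
    y = min(max(c_it[1] + dy, r_it), n - r_it)
    return (x, y)
```

Note that because moves are simultaneous, this heuristic chases Player 2's *previous* position; any genuinely optimal strategy would have to account for that.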
This isn't an answer, but it's too long for a comment: your question is a variant of the "Cops and Robbers" problem that has recently been discussed on the PBS Infinite Series channel on YouTube.
There are two differences, as far as I understand your post:

1. Movement is continuous: players may move in arbitrary directions by arbitrary distances up to 0.1 units, rather than along the edges of a graph.
2. The winning condition is a non-zero area of intersection between the circles, rather than the pursuer occupying the same vertex as the evader.
This complicates things a bit and, frankly, my first step towards a solution would be to reconsider the modeling of this game. Do we really care about arbitrary (non-discrete) traveling distances and arbitrary angles of movement? Maybe it's okay to model this game as a fine but finite grid where players can only move along the edges of this grid. (Then it doesn't really matter that they can move more than one unit during any given step, as this just adds a bunch of edges to our graph.)
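As a sketch of that discretisation (my own illustration, not part of the original post): take the grid points as vertices and join two points by an edge whenever their Euclidean distance is at most the per-frame reach, which is exactly the "bunch of extra edges" mentioned above.

```python
import math
from itertools import product

def grid_graph(cols, rows, reach):
    """Build an adjacency dict on a cols x rows grid of points, where an
    edge joins two points whose Euclidean distance is at most `reach`
    (i.e. one move per frame may cover that much ground).
    Quadratic in the number of vertices; fine for a small sketch."""
    verts = list(product(range(cols), range(rows)))
    adj = {v: [] for v in verts}
    for u, v in product(verts, verts):
        if u != v and math.dist(u, v) <= reach:
            adj[u].append(v)
    return adj
```

With `reach = 1` this is the ordinary grid graph; increasing `reach` adds diagonal and longer-range edges without changing the vertex set.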
If that's acceptable, then we are back to the "Cops and Robbers" problem on a finite graph, but with a slightly altered winning condition for the cops: the cop wins by getting within some fixed "winning distance" of the robber, rather than onto the robber's vertex. Still, this winning condition does seem to complicate things. One trivial observation, however, is that if one cop can win the new game, then finitely many cops can win the old game.

Proof. Suppose the cop has a winning strategy in our new game. In the old game, let $n$ cops follow that strategy while covering, at all times, all the vertices of the graph within winning distance of the new-game cop's position. Any capture in the new game is then a classical capture by one of these $n$ cops, so this is a winning strategy. Q.E.D.
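To make the covering step concrete, the set of vertices the $n$ cops must occupy is just the ball of radius "winning distance" around the new-game cop's position, computable by breadth-first search (again my own illustration, using the adjacency-dict representation):

```python
from collections import deque

def within_distance(adj, start, d):
    """Return the set of vertices at graph distance <= d from `start`,
    via breadth-first search. In the proof, the n cops occupy exactly
    this set for the new-game cop's current position."""
    seen = {start: 0}  # vertex -> distance from start
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if seen[u] == d:  # don't expand past the winning distance
            continue
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                queue.append(v)
    return set(seen)
```

The required number of cops $n$ is then the maximum size of this set over all vertices of the graph, which is finite on a finite graph.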