I am going over Models of Computation and I have stumbled upon an exercise I have trouble working out. Consider the following pair of mutually recursive functions:
$$ evens(w) := \epsilon \quad \text{for } w = \epsilon\\ evens(w) := odds(x) \quad \text{for } w = ax $$
with
$$ odds(w) := \epsilon \quad \text{for } w = \epsilon\\ odds(w) := a.evens(x) \quad \text{for } w = ax $$
(The dot operator is just concatenation.)
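For concreteness, the two definitions can be transcribed directly into Python (a sketch of my own; strings are ordinary Python strings and $\epsilon$ is the empty string):

```python
def evens(w):
    # evens(eps) = eps ; evens(a x) = odds(x)
    if w == "":
        return ""
    return odds(w[1:])

def odds(w):
    # odds(eps) = eps ; odds(a x) = a . evens(x)
    if w == "":
        return ""
    return w[0] + evens(w[1:])

print(evens("abcd"))  # -> bd
print(odds("abcd"))   # -> ac
```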
Now suppose we would like to show the following: $$ \forall w\ x, even(|w|) \rightarrow evens(w.x) = evens(w) . evens(x) $$
I would naturally proceed by induction on $w$, where the base case $w = \epsilon$ is rather trivial, following from the neutrality of $\epsilon$ with respect to concatenation.
However, I find it awkward to formulate a suitable inductive hypothesis for the case where $w = ht$.
I was thinking about something along the lines of:
$$
even(|t|) \rightarrow evens(t.x) = evens(t).evens(x)
$$
But since $evens$ is defined in terms of $odds$, this hypothesis seems somewhat pointless. Unfolding the definitions once on both sides, I proceeded like so:
$$ evens(htx) = evens(ht).evens(x)\\ odds(tx) = odds(t).evens(x) $$
Now, since $|w|$ was originally even and non-empty and $w = ht$, after stripping $h$ away, $|t|$ is odd, so $t$ surely takes the shape $t = h't'$. Unfolding the definitions once more, we have: $$ odds(h't'x) = odds(h't').evens(x)\\ h'.evens(t'x) = h'.evens(t').evens(x)\\ evens(t'x) = evens(t').evens(x) $$
I now know that $|t'|$ is even, so I would like to apply the aforementioned inductive hypothesis, but it is expressed in terms of the fixed $t$, not an arbitrary $w$ (nor the $t'$ I currently have).
I have a feeling that I also need to include some information about $odds$ in my induction principle, but I have no idea how.
Let me restate what we want to prove: $$\forall x\,\forall w \ \text{if $|w|$ is even, then } evens(w.x)=evens(w).evens(x)$$
For this, we fix an arbitrary $x$ and we prove the following by induction: $$\forall w \ \text{if $|w|$ is even, then } evens(w.x)=evens(w).evens(x)$$
Now, one way to prove this would be by induction on $|w|$. But since the OP was apparently going for a proof by induction on the structure of $w$ (i.e., assuming the statement holds for $t$, prove that it holds for $a.t$ where $a$ is a symbol), I'm including a proof along those lines. Unfortunately, in this case, the inductive hypothesis is not strong enough to allow us to conclude the proof.

In cases like this, a standard technique is to *strengthen the inductive hypothesis*: we aim to prove a stronger statement than the one we originally wanted, so that we can use a stronger inductive hypothesis during the proof. We will prove that $$\begin{align} \forall w\ & \text{if $|w|$ is even, then } evens(w.x) = evens(w) . evens(x) \ \wedge \\ & \text{if $|w|$ is odd, then } odds(w.x) = odds(w).evens(x) \end{align}$$
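As a sanity check before doing the proof, the strengthened statement can be tested exhaustively on short strings (a quick Python sketch of my own; the two-letter alphabet is an arbitrary choice):

```python
from itertools import product

def evens(w):
    # evens(eps) = eps ; evens(a x) = odds(x)
    return "" if w == "" else odds(w[1:])

def odds(w):
    # odds(eps) = eps ; odds(a x) = a . evens(x)
    return "" if w == "" else w[0] + evens(w[1:])

# all strings over {a, b} of length at most 5
words = ["".join(p) for n in range(6) for p in product("ab", repeat=n)]

for w in words:
    for x in words:
        if len(w) % 2 == 0:
            assert evens(w + x) == evens(w) + evens(x)
        else:
            assert odds(w + x) == odds(w) + evens(x)
print("no counterexample found")
```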
How does one guess the strengthened statement? It's a matter of understanding what the functions actually do. Here, $evens(w)$ just returns a string consisting of the symbols at even positions in $w$ (position numbering starts from $1$). Similarly, $odds(w)$ just returns a string consisting of the symbols at odd positions.
With this knowledge, it's actually not very surprising that $evens(w.x)=evens(w).evens(x)$ if $|w|$ is even: in order to write down the symbols at even positions of $w.x$, all you have to do is write down the symbols at even positions of $w$, all the way up to the last symbol of $w$ (which is at an even position because $|w|$ is even), followed by the symbols at even positions of $x$.
Now, when you do the structural induction, you will have to deal with a string $t$ of odd length, so you need a similar property for odd-length strings. By similar reasoning to the previous paragraph, this property is $odds(w.x)=odds(w).evens(x)$.
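With the strengthened statement in hand, the inductive step for $w = a.t$ splits cleanly into two cases (a sketch; each step unfolds a definition except where marked IH):

$$\begin{align}
\text{if $|a.t|$ is even ($|t|$ odd):}\quad evens((a.t).x) &= evens(a.(t.x)) = odds(t.x)\\ &\overset{\text{IH}}{=} odds(t).evens(x) = evens(a.t).evens(x)\\
\text{if $|a.t|$ is odd ($|t|$ even):}\quad odds((a.t).x) &= odds(a.(t.x)) = a.evens(t.x)\\ &\overset{\text{IH}}{=} a.evens(t).evens(x) = odds(a.t).evens(x)
\end{align}$$

Note how each case of the conjunction relies on the *other* case of the inductive hypothesis, which is exactly why neither conjunct can be proved by structural induction on its own.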