Problem.
Let $u_t^\varepsilon + a u_x^\varepsilon = \varepsilon u_{xx}^\varepsilon$ where $a \in \mathbb{R}$. Use the change of variables $w^\varepsilon(x,t) = u^\varepsilon(x + at, t)$ and show that $w^\varepsilon$ satisfies the heat equation.
My Question.
Apparently, $w^\varepsilon_t = au^\varepsilon_x(x + at, t) + u_t^\varepsilon(x + at, t)$ and $w_{xx}^\varepsilon = u_{xx}^\varepsilon(x + at, t)$, and the answer immediately follows. My question is: How exactly is this change of variables being executed?
More specifically, I think what throws me off is the fact that the domains and codomains of the functions are never defined, the "of $x$ and $t$" notation is used or dropped seemingly at the whim of my professor, and the explicit change of variables functions are never defined. I also suspect that '$x$' and '$t$' don't mean the same thing on the left and right sides of the expression? Like, we're writing $x$ and $t$ as a function of two other variables...but then also calling those new variables $x$ and $t$?
If my professor would just write these things down, I could work out the details. So, I'm hoping someone can just supply a more rigorous set up regarding the functions involved. Thanks in advance!
Edit: If I were guessing, we're saying that $u$ is a function of $\bar{x}$ and $\bar{t}$, i.e. $u(\bar{x}, \bar{t})$, and then the change of variables is $\bar{x}(x,t) = x + at$ and $\bar{t}(x,t) = t$, so that $w(x,t) = u(\bar{x}(x,t), \bar{t}(x,t))$, meaning that, for example, $$w_t(x,t) = \frac{\partial u}{\partial \bar{x}} \frac{\partial \bar{x}}{\partial t}(x,t) + \frac{\partial u}{\partial \bar{t}} \frac{\partial \bar{t}}{\partial t}(x,t) = au_\bar{x}(x + at,t) + u_\bar{t}(x + at,t),$$ which is what I mean when I say I suspect that $x$ and $t$ don't mean the same thing on the left and right. Notice that I'm taking partial derivatives with respect to $\bar{x}$ and $\bar{t}$, not $x$ and $t$.
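For what it's worth, the guessed chain-rule formula can be checked symbolically. Here is a minimal SymPy sketch; the concrete choice $u(p,q) = \sin(p)\,e^q$ is an arbitrary smooth test function of my own, not anything from the problem (any smooth $u$ would do):

```python
import sympy as sp

x, t, a = sp.symbols('x t a')
p, q = sp.symbols('p q')  # names for the first and second argument of u

# An arbitrary smooth test function u(p, q), chosen just for this check
u = sp.sin(p) * sp.exp(q)

# The change of variables from the problem: w(x, t) = u(x + a*t, t)
w = u.subs({p: x + a*t, q: t})

# The claimed chain rule: w_t(x, t) = a*u_p(x + a*t, t) + u_q(x + a*t, t)
claim = (a*sp.diff(u, p) + sp.diff(u, q)).subs({p: x + a*t, q: t})

# The difference simplifies to 0, confirming the formula for this u
print(sp.simplify(sp.diff(w, t) - claim))
```

Note that the derivatives in `claim` are taken with respect to `p` and `q` (the arguments of $u$) *before* substituting $p = x + at$, $q = t$, which is exactly the distinction the edit above is making.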
(I'll drop the $\varepsilon$ superscript, as it's not relevant to this answer.)
What your professor is doing is actually correct, though I will admit it is confusing. The way to overcome this confusion is to think about arguments, not variables.
As a case study, suppose we have a function of a single variable, $f:\Bbb R\to \Bbb R$. One common way to express its derivative is $\frac{\mathrm df}{\mathrm dx}$. This is "Leibniz" notation. While powerful, this notation assumes that we have given the argument of $f$ the name $x$. But it shouldn't matter what name we give to the argument of $f$. So, is $\frac{\mathrm df}{\mathrm dx}=\frac{\mathrm df}{\mathrm dz}$? The only thing that has changed between the two expressions is the name of the argument, and names shouldn't affect derivatives. Yet most people would be uncomfortable claiming these two to be equal. This, in my mind, is the downfall of Leibniz notation.

On the other hand, there is Newton's notation, in which the derivative is denoted $f'$. At least for this purpose, this is much better, because it is completely coordinate-free. It doesn't matter whether you call the argument of $f$ by the name $x$, $z$, $\pi$, or even $\text{apple}~\pi$; the derivative is always the same: $f'$. But alas, Newton's notation has its disadvantages as well. If $F:\Bbb R^2\to\Bbb R$ is a function of two arguments, then the notation $F'$ is ambiguous. Some sources and mathematical software packages use notation like $F^{(1,0)}$ to denote the first derivative of $F$ with respect to its first argument, but this quickly gets confusing.

So, by far the most common solution is to return to Leibniz notation when dealing with higher-dimensional mathematics and instead denote this derivative $\frac{\partial F}{\partial x}$, giving the first argument of $F$ the name $x$. But, like any other notational convention, this has its disadvantages. When dealing with multiple different functions of multiple different arguments, it is hard to give each argument a unique name, and there will be repeats. That is exactly what has happened here: in your work, the symbols $x,t$ refer not only to the variables $x,t$, but also to the first and second arguments of the functions $u,w$.
Let's resolve this issue by doing everything in terms of arguments.
Let's say we have a function $u:\Bbb R^2\to\Bbb R$ which satisfies the PDE $$c\partial_1 u+\partial_2u=\varepsilon \partial_1^2u$$ (with $c$ playing the role of your $a$). Here, $\partial_i^k$ denotes the $k$th derivative with respect to the $i$th argument. We suppose that there exist functions $w,\xi_1,\xi_2$ such that $$u(x_1,x_2)=w\big(\xi_1(x_1,x_2),\xi_2(x_1,x_2)\big)$$ Or, perhaps better, this equation can be written coordinate-free: $$u=w\circ(\xi_1,\xi_2)$$ The chain rule says that $$(\partial_1 u)(x_1,x_2)=\sum_i(\partial_i w)\big(\xi_1(x_1,x_2),\xi_2(x_1,x_2)\big)\cdot(\partial_1\xi_i)(x_1,x_2)$$ Similarly, $$(\partial_2 u)(x_1,x_2)=\sum_i(\partial_i w)\big(\xi_1(x_1,x_2),\xi_2(x_1,x_2)\big)\cdot(\partial_2\xi_i)(x_1,x_2)$$ We can likewise calculate the second derivative, using the product rule: $$(\partial_1^2 u)(\boldsymbol x)=\sum_i\left[(\partial^2_{1}\xi_i)(\boldsymbol x)\cdot (\partial_i w)(\boldsymbol \xi(\boldsymbol x))+(\partial_1\xi_i)(\boldsymbol x)\cdot\left(\sum_j (\partial_j\partial_iw)(\boldsymbol \xi(\boldsymbol x))\cdot (\partial_1 \xi_j)(\boldsymbol x)\right)\right]$$ I have used the abbreviations $\boldsymbol \xi=(\xi_1,\xi_2)$ and $\boldsymbol x=(x_1,x_2)$ so that the equation stays on one line!

We now consider the special case $$\xi_1(x_1,x_2)=x_1-cx_2 \\ \xi_2(x_1,x_2)=x_2$$ Let's calculate $\partial_1 u$. Since $\partial_1 \xi_2=0$, the second term disappears: $$(\partial_1u)(\boldsymbol x)=(\partial_1 w)\big(\boldsymbol \xi(\boldsymbol x)\big)\cdot(\partial_1\xi_1)(\boldsymbol x) \\ =(\partial_1 w)(\boldsymbol \xi(\boldsymbol x))$$ Now we do $\partial_2 u$: $$(\partial_2 u)(\boldsymbol x)=(\partial_1 w)\big(\boldsymbol \xi(\boldsymbol x)\big)\cdot(\partial_2\xi_1)(\boldsymbol x)+(\partial_2 w)\big(\boldsymbol \xi(\boldsymbol x)\big)\cdot(\partial_2\xi_2)(\boldsymbol x) \\ =-c(\partial_1 w)(\boldsymbol \xi(\boldsymbol x))+(\partial_2 w)(\boldsymbol \xi(\boldsymbol x))$$ Finally, the tricky one: $\partial_1^2 u$.
Because all of the second derivatives vanish, i.e. $\partial^2_j\xi_i=0$, it simplifies a little: $$(\partial_1^2 u)(\boldsymbol x)=\sum_i\left[(\partial_1\xi_i)(\boldsymbol x)\sum_j(\partial_j\partial_iw)(\boldsymbol \xi(\boldsymbol x))\cdot (\partial_1 \xi_j)(\boldsymbol x)\right]$$ Because $\partial_1\xi_2=0$, all but one of the terms drop out: $$(\partial_1^2 u)(\boldsymbol x)=(\partial_1\xi_1)(\boldsymbol x)\cdot (\partial_1^2 w)(\boldsymbol\xi(\boldsymbol x))\cdot (\partial_1\xi_1)(\boldsymbol x) \\ =(\partial_1^2 w)(\boldsymbol \xi(\boldsymbol x))$$ Putting everything together, $$0=\varepsilon(\partial_1^2 u)(\boldsymbol x)-c(\partial_1 u)(\boldsymbol x)-(\partial_2 u)(\boldsymbol x) \\ =\varepsilon(\partial_1^2 w)(\boldsymbol \xi(\boldsymbol x))-c(\partial_1 w)(\boldsymbol \xi(\boldsymbol x))+c(\partial_1 w)(\boldsymbol \xi(\boldsymbol x))-(\partial_2 w)(\boldsymbol \xi(\boldsymbol x)) \\ \implies \varepsilon(\partial_1^2 w)(\boldsymbol \xi(\boldsymbol x))=(\partial_2 w)(\boldsymbol \xi(\boldsymbol x))$$ Since the function argument is the same on both sides, we can drop it: $$\varepsilon\partial_1^2 w=\partial_2 w$$ Finally, we can give the first argument of $w$ the name $x$ and the second argument the name $t$ in order to write $$\partial_t w=\varepsilon \partial_x^2 w,$$ which is the heat equation.
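As a sanity check, the whole computation above can be reproduced with a computer algebra system. Here is a minimal SymPy sketch of the derivation (the symbol names `s`, `eps`, and the use of SymPy itself are my own choices); it builds $u = w\circ(\xi_1,\xi_2)$ for an unspecified $w$ and confirms that the PDE residual collapses to the heat-equation residual:

```python
import sympy as sp

x, t, s, c, eps = sp.symbols('x t s c epsilon')
w = sp.Function('w')  # w is left unspecified, as in the derivation

# u = w ∘ (ξ1, ξ2) with ξ1(x, t) = x - c*t and ξ2(x, t) = t
u = w(x - c*t, t)

# Residual of the original PDE: c*(∂1 u) + (∂2 u) - ε*(∂1² u)
residual = c*sp.diff(u, x) + sp.diff(u, t) - eps*sp.diff(u, x, 2)

# Renaming the first argument of w to s (substituting x = s + c*t)
# leaves exactly the heat-equation residual w_t - ε*w_ss in (s, t)
heat_residual = residual.subs(x, s + c*t).doit()
print(heat_residual)
```

The transport terms $c\,\partial_1 w$ and $-c\,\partial_1 w$ cancel automatically, which is the symbolic counterpart of the cancellation in the "Putting everything together" step above.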