In this video, it is said that an otherwise-stationary time series with a non-constant linear mean can be made stationary by taking the first difference of the series. That is, use the substitution$$z_t = a_{t + 1} - a_t$$This makes sense: the derivative (first difference, up to hand-waving the continuous/discrete distinction) of a linear function is a constant function, and stationarity requires a time series to have constant mean. Similarly, an otherwise-stationary time series with an $n$-th order polynomial mean is handled by taking the $n$-th difference.
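To make this concrete, here is a small numpy sketch (synthetic data, names my own) showing that the first difference of a series with a linear mean has constant mean equal to the slope:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000)

# Stationary noise plus a linear trend: the mean 2 + 0.5*t is non-constant.
a = 2.0 + 0.5 * t + rng.normal(size=t.size)

# First difference z_t = a_{t+1} - a_t removes the trend,
# leaving a series whose mean is (approximately) the constant slope 0.5.
z = a[1:] - a[:-1]

print(z.mean())  # close to 0.5
```

(`np.diff(a)` computes the same thing, and `np.diff(a, n=2)` would likewise flatten a quadratic mean.)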
In this video, it is said that an otherwise-stationary time series with seasonality is handled by a substitution such as$$z_t = a_{t + 365} - a_t$$Here $365$ is the number of timestamps in the seasonal period (such as when $t$ is measured in days and the seasonality occurs on a yearly basis).
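The same kind of sketch (again synthetic data of my own) shows why this works: the seasonal component satisfies $s_{t+365} = s_t$, so it cancels exactly in the lag-$365$ difference:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 365                      # seasonal period in timestamps
t = np.arange(5 * m)

# Stationary noise plus a yearly seasonal pattern of period m.
season = 10.0 * np.sin(2 * np.pi * t / m)
a = season + rng.normal(size=t.size)

# Seasonal difference z_t = a_{t+m} - a_t cancels the periodic component,
# since season[t + m] == season[t] exactly.
z = a[m:] - a[:-m]

print(z.mean())  # close to 0 -- the seasonal swing is gone
```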
The similarity of the two substitutions suggests that the same basic transformation can remove either a non-constant linear trend or seasonality from a time series. The natural interpretation would be that in time series, a non-constant linear mean is the special case of seasonality with a period of $1$ timestamp. However, this feels quite strange. For one thing, seasonality does not ordinarily imply any change to the mean in the long run, whereas in the degenerate case it apparently does. It is also hard to think about what seasonality with a period of $1$ timestamp means, conceptually speaking.
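Algebraically, at least, both substitutions really are instances of one lag-$d$ difference operator, with $d = 1$ recovering the ordinary first difference (a tiny sketch; the helper name is my own):

```python
import numpy as np

def lag_difference(a, d):
    """The single transformation z_t = a_{t+d} - a_t, for any lag d >= 1."""
    return a[d:] - a[:-d]

a = np.array([1.0, 4.0, 9.0, 16.0, 25.0])

# d = 1 is the ordinary first difference; d = 365 would be the seasonal case.
print(lag_difference(a, 1))                                # [3. 5. 7. 9.]
print(np.array_equal(lag_difference(a, 1), np.diff(a)))    # True
```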
Is it viable to interpret seasonality in this way? What insight makes these concepts mesh together in the way the substitutions suggest? Is there some multidimensional notion of seasonality of which $n$-th order polynomial means are special cases?