minmod slope total variation diminishing


This is exercise 6.5 of the book Finite Volume Methods for Hyperbolic Problems by R.J. LeVeque (2002).

Show that the minmod slope guarantees that $$ TV(q^n(\cdot, t_n)) \leq TV(Q^n) \tag{6.23} $$ will be satisfied in general, and hence the minmod method is TVD.

with the minmod slope defined as

$$\operatorname{minmod}(a,b) = \begin{cases} a & \text{if } |a|<|b| \text{ and } ab>0,\\ b & \text{if } |b|<|a| \text{ and } ab>0,\\ 0 & \text{if } ab\leq 0, \end{cases}$$

$$\sigma_i^n = \operatorname{minmod}\left(\frac{Q_i^n - Q_{i-1}^n}{\Delta x},\frac{Q_{i+1}^n - Q_i^n}{\Delta x}\right),$$

and the total variation diminishing (TVD) property defined by $TV(Q^{n+1}) \leq TV(Q^n)$.
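As a concrete companion to these definitions, the minmod slope can be sketched in a few lines of Python (the function names are mine, not the book's; slopes in the boundary cells are simply set to zero):

```python
import numpy as np

def minmod(a, b):
    """Return a if |a|<|b| and ab>0, b if |b|<|a| and ab>0, else 0."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def minmod_slopes(Q, dx):
    """Minmod slopes sigma_i from cell averages Q (zero in boundary cells)."""
    sigma = np.zeros(len(Q))
    for i in range(1, len(Q) - 1):
        left = (Q[i] - Q[i - 1]) / dx
        right = (Q[i + 1] - Q[i]) / dx
        sigma[i] = minmod(left, right)
    return sigma
```

Note that at a local extremum the one-sided differences have opposite signs, so the minmod slope is zero there; this is exactly the property the exercise exploits.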

It is known that

$TV(Q)=\sum_i|Q_i-Q_{i-1}|$ (definition of total variation)

$TV(Q^{n+1})\leq TV(q^n(\cdot,t_{n+1}))$ (given in exercise 6.4)

$TV(q^n(\cdot,t_{n+1})) = TV(q^n(\cdot,t_{n}))$ (follows from the 2nd step of the REA algorithm, exact evolution)

$TV(q^n(\cdot,t_{n})) = \sup\sum_i|q^n(\xi_i)-q^n(\xi_{i-1})|$, the supremum being taken over all subdivisions $\{\xi_i\}$ of the line (definition)
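The discrete total variation $TV(Q)=\sum_i|Q_i-Q_{i-1}|$ is straightforward to compute; here is a minimal sketch (illustrative helper, assuming NumPy):

```python
import numpy as np

def total_variation(Q):
    """Discrete total variation: TV(Q) = sum_i |Q_i - Q_{i-1}|."""
    Q = np.asarray(Q, dtype=float)
    return float(np.sum(np.abs(np.diff(Q))))
```

For instance, `total_variation([0, 1, 0])` returns `2.0`: the data rises by 1 and falls by 1.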

My idea was

$$\begin{align} TV(Q^{n+1})&\leq TV(q^n(\cdot,t_{n+1}))\\ &=TV(q^n(\cdot,t_{n}))\\ &=\sup\sum_i|q^n(\xi_i)-q^n(\xi_{i-1})|\\ &\leq \sum_i|Q_i^n-Q_{i-1}^n|\\ &=TV(Q^n) \end{align}$$

But nowhere have I used any property of the minmod slope. What am I missing?

Edit

The REA algorithm is the following:

  1. Reconstruct $q^n(\cdot,t_n)=Q_i^n+\sigma_i^n(x-x_i)$, $\forall x\in C_i$, where $q^n$ is a piecewise polynomial function, $Q_i^n$ the cell average and $\sigma_i^n$ the slope.

  2. Evolve the hyperbolic equation exactly with this initial data to obtain $q^n(x,t_{n+1})$ a time $\Delta t$ later.

  3. Average this function over each grid cell to obtain new cell averages $Q_i^{n+1}=\frac{1}{\Delta x}\int\limits_{C_{i}}q^n(\cdot,t_{n+1})dx$
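For the advection equation $q_t + uq_x = 0$ with $u>0$ and Courant number $u\Delta t/\Delta x \leq 1$, the three REA steps collapse into a single closed-form update (cf. Eq. (6.10) of the book); a sketch with periodic boundaries, taking the slopes as an input:

```python
import numpy as np

def rea_step(Q, sigma, u, dx, dt):
    """One REA step for q_t + u q_x = 0 (u > 0, u*dt/dx <= 1), periodic BCs:
    reconstruct with slopes sigma, advect exactly, re-average over each cell."""
    nu = u * dt / dx  # Courant number
    return (Q - nu * (Q - np.roll(Q, 1))
            - 0.5 * nu * (dx - u * dt) * (sigma - np.roll(sigma, 1)))
```

With $\sigma \equiv 0$ this reduces to the first-order upwind method, and the cell averages are conserved since the correction terms telescope.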

1 Answer

We consider the advection equation $q_t + cq_x = 0$ with $c>0$. For the REA slope-limiter method with the minmod slope $\sigma_i^n$ specified in the OP, the reconstructed piecewise linear data of the book's Eq. (6.11) reads

$$\begin{aligned} q^n(x, t_n) &= \sum_{i=-\infty}^\infty \left(Q_i^n + \sigma_i^n (x-x_i)\right) \Bbb I_i(x) \\ &= Q_{-\infty}^n + \sum_{i=-\infty}^\infty \left(Q_{i+1}^n - \sigma_{i+1}^n x_{i+1} - Q_i^n + \sigma_i^n x_i\right) H(x-x_{i+1/2}) \\ &\quad + x \sum_{i=-\infty}^\infty (\sigma_{i+1}^n - \sigma_i^n)\, H(x-x_{i+1/2}) \, , \end{aligned}$$

where $\Bbb I_i(x)$ is the indicator function of the $i$th finite volume $[x_{i-1/2}, x_{i+1/2}[$, and $H$ is the Heaviside step function.

In the case of zero slope $\sigma_i^n \equiv 0$, a straightforward computation of the total variation yields

$$ TV(q^n(\cdot, t_n)) = \sum_{i=-\infty}^\infty |Q_{i+1}^n - Q_{i}^n| = TV(Q^n) \, , $$

cf. Eq. (6.21) of the book. The present exercise consists in proving that this equality becomes an inequality of the form $TV(q^n(\cdot, t_n)) \leq TV(Q^n)$ in the case of the minmod slope $\sigma_i^n \not\equiv 0$ defined in the OP; in fact, the evaluation of $TV(q^n(\cdot, t_n))$ in the OP is incorrect.

To understand how things work, one could start with the computation of $TV(q^n(\cdot, t_n))$ for very simple piecewise linear functions $q^n$, e.g. the case of a single nonzero state $Q_0^n \neq 0$, then the case of two successive nonzero states $Q_0^n, Q_1^n \neq 0$, etc. This way, you'll be able to tackle the case of arbitrary data $Q^n$. From the above inequality (Eq. (6.23) of the book) and Eqs. (6.24)-(6.25), one shows that the minmod slope-limiter method is TVD.
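As a numerical sanity check of inequality (6.23), one can compare $TV(Q^n)$ with the total variation of the minmod reconstruction on random data. The sketch below (my own, with zero slopes in the two boundary cells) uses the fact that the TV of a piecewise linear function is the in-cell variation $\sum_i |\sigma_i^n|\,\Delta x$ plus the jumps at the cell interfaces:

```python
import numpy as np

def minmod(a, b):
    """Vectorized minmod: 0 where a*b <= 0, else the smaller-magnitude argument."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

rng = np.random.default_rng(0)
dx = 1.0
Q = rng.standard_normal(50)  # arbitrary cell averages

# Minmod slopes (zero in the two boundary cells for simplicity).
sigma = np.zeros_like(Q)
sigma[1:-1] = minmod((Q[1:-1] - Q[:-2]) / dx, (Q[2:] - Q[1:-1]) / dx)

# TV of the cell averages.
tv_Q = np.sum(np.abs(np.diff(Q)))

# TV of the reconstruction: in-cell variation plus interface jumps,
# with cell-edge values Q_i +/- sigma_i * dx / 2.
incell = np.sum(np.abs(sigma)) * dx
jumps = np.sum(np.abs((Q[1:] - 0.5 * sigma[1:] * dx)
                      - (Q[:-1] + 0.5 * sigma[:-1] * dx)))
tv_q = incell + jumps
assert tv_q <= tv_Q + 1e-9  # inequality (6.23)
```

The minmod property doing the work here is that $|\sigma_i^n|\,\Delta x \leq |Q_{i+1}^n - Q_i^n|$ with matching sign (and $\sigma_i^n = 0$ at extrema), so the reconstruction never overshoots between neighboring averages.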