Finite volume schemes
We will now design efficient schemes for the conservation law
We have already seen that finite difference schemes work poorly, even for linear transport:
- In this case, we had to "upwind" the scheme, taking the derivative in the direction of information propagation. This is not possible a priori for non-linear equations like \(\eqref{eq:conservation_law}\).
- It further requires smoothness and that \(\eqref{eq:conservation_law}\) is satisfied pointwise. However, we are looking for entropy solutions, which may be discontinuous.
Finite volume schemes
The grid
For simplicity, we use a uniform discretization of \([x_L, x_R]\):
- points:
  \[ \begin{aligned} x_j &= x_L + \qty(j+\frac{1}{2})\Delta x \\ \Delta x &= \frac{x_R - x_L}{N+1} \end{aligned} \] for \(j=0,\dots, N\)
- midpoints:
  \[x_{j-\frac{1}{2}} = x_j - \frac{1}{2}\Delta x = x_L + j\Delta x\] for \(j=0,\dots, N+1\)
- control volumes:
  \[\Cell_j = [x_{j-1/2}, x_{j+1/2}]\] for \(j=1,\dots, N\)
- uniform discretization of time:
  \[t^n = n\Delta t\]
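For instance, the cell centers can be computed directly from these formulas. The following is a minimal sketch of what the cell_centers helper used in the examples below presumably does:
# Cell centers x_j = x_L + (j + 1/2)Δx for j = 0, …, N, with Δx = (x_R - x_L)/(N + 1).
function cell_centers(N, xL, xR)
    Δx = (xR - xL) / (N + 1)
    return [xL + (j + 0.5) * Δx for j in 0:N]
end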
Cell averages
Finite difference schemes approximate point values, whereas finite volume schemes aim to approximate the cell averages
starting with
Integral form
Assuming \(U_j^n\) is known for some time \(t^n\), we can integrate \(\eqref{eq:conservation_law}\) over the rectangle \(\Cell_j \times [t^n, t^{n+1})\) to get the next averages:
Defining the flux \(\overline F_{j+1/2}^n\) across the boundary \(x_{j+1/2}\) over the time interval \([t^n, t^{n+1})\):
we can rewrite the above equation as
In other words, the change in cell averages is given by the difference in fluxes across the cell boundaries. It now remains to approximate the fluxes.
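In code, this update takes the following generic form. The sketch below assumes a two-point numerical flux F(uL, uR) and simply leaves the boundary cells untouched; the examples later use proper boundary conditions instead:
# Generic finite volume update: U_j^{n+1} = U_j^n - Δt/Δx * (F_{j+1/2} - F_{j-1/2}).
function fv_step(F, U, Δx, Δt)
    Unew = copy(U)
    for j in 2:length(U)-1
        Fr = F(U[j], U[j+1])   # flux across x_{j+1/2}
        Fl = F(U[j-1], U[j])   # flux across x_{j-1/2}
        Unew[j] = U[j] - Δt / Δx * (Fr - Fl)
    end
    return Unew
end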
Godunov method
Godunov noticed that the approximate solution built from the cell averages is piecewise constant: it is constant in each cell \(\Cell_j\) at each time \(t^n\). We thus get a Riemann problem at each cell interface \(x_{j+1/2}\):
So at each time, we get a superposition of Riemann problems of the form \(\eqref{eq:riemann_problem}\), at each interface. We have seen that the entropy solution is a combination of shocks, rarefactions and compound waves.
As previously discussed, the solution \(\overline U_j(x,t)\) of each Riemann problem \(\eqref{eq:riemann_problem}\) is self-similar:
However, the waves intersect after some time. The maximal speed of the waves is given by \(\displaystyle\max_j \abs{f'(U_j^n)}\). Thus, to ensure that two neighboring waves do not intersect, they cannot travel in total more than \(\Delta x\) in time \(\Delta t\). This yields the CFL-condition
Assume that this condition is satisfied, and consider the curve given by \(\xi = 0\), i.e. the cell interface \(x_{j+1/2}\).
Claim: \(f(\overline U_j(0))\) is well defined
By \(\eqref{eq:self_similar}\), we have that \(\overline U_j\) is constant whenever \(\xi\) is, and particularly at \(\xi = 0\):
Along this curve, \(\overline U_j\) is either continuous or discontinuous. If it is continuous, we obviously have that
If it is discontinuous, the discontinuity is a stationary shock at \(\xi = 0\). For \(\overline U_j\) to be an entropy solution, the shock must satisfy the Rankine-Hugoniot condition
with \(s=0\). We immediately get that
so \(f(\overline U_j(0)) = f(\overline U_j(0^-)) = f(\overline U_j(0^+))\) is well defined.
Now, we can define the (Riemann) edge-centered fluxes
\begin{equation} F_{j+1/2}^n := f(\overline U_j(0)). \label{eq:edge_centered_flux} \end{equation}
Since this is constant in time, we can insert into \(\eqref{eq:flux}\) and explicitly calculate
Then, substituting into \(\eqref{eq:cell_average_update}\), we get
Godunov flux
It can be shown that the Riemann flux at the interface \(x_{j+1/2}\) is
This also holds for non-convex flux functions. Further, for a flux function \(f\) with a unique minimizer \(\omega\), this simplifies to
Numerical example
Now, using \(\eqref{eq:godunov_flux_simplified}\), one can easily implement the Godunov method. Testing against Burgers' equation with initial conditions
we know the solution is a shock wave at \(x = t/2\).
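Before running the full example, here is a minimal sketch of one common form of the simplified Godunov flux, presumably what \(\eqref{eq:godunov_flux_simplified}\) states and essentially what the GodunovFlux(0.) object below computes (with \(\omega = 0\) for Burgers' equation):
# Simplified Godunov flux for a flux f with a unique minimizer ω.
godunov_flux(f, ω, uL, uR) = max(f(max(uL, ω)), f(min(uR, ω)))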
x_L, x_R = -1, 1
N = 50
x = cell_centers(N, x_L, x_R)
u0(x) = x < 0 ? 1. : 0.
grid = UniformGrid1D(N, NeumannBC(), u0.(x), (x_L, x_R))
system = ConservedSystem(BurgersEQ(), NoReconstruction(), GodunovFlux(0.), ForwardEuler(grid))
simulator = Simulator(system, grid, 0.)
dt = 0.1 # max time step
T = 1
U, t = simulate_and_aggregate!(simulator, T, dt)
animate_solution(U', (x, t) -> u0(x - 0.5t), x, t)
The numerical solution approximates this well, having a sharp shock at \(x = t/2\). Further, it seems stable without any oscillations. Next, for the initial condition
we expect a rarefaction wave between the curves \(x = -t\) and \(x = t\):
u0(x) = (x > 0) * 2 - 1.
grid = UniformGrid1D(N, NeumannBC(), u0.(x), (x_L, x_R))
simulator = Simulator(system, grid, 0.)
U, t = simulate_and_aggregate!(simulator, T, dt)
animate_solution(U', "Approximate solution", x, t)
For a more complicated example, \(U_0 = \sin(4\pi x)\), we expect a compound wave:
u0(x) = sin(4π*x)
grid = UniformGrid1D(N, PeriodicBC(), u0.(x), (x_L, x_R))
simulator = Simulator(system, grid, 0.)
U, t = simulate_and_aggregate!(simulator, T, dt)
animate_solution(U', "Approximate solution", x, t)
Limitations
Despite having many desirable properties, the Godunov method has some limitations:
- It requires having an explicit formula for the solutions of the Riemann problems \(\eqref{eq:riemann_problem}\). This is not always the case for systems of conservation laws.
- In the numerical flux \(\eqref{eq:edge_centered_flux}\), the only required information is the flux at the interface. It may thus be unnecessary to solve an entire Riemann problem for this.
- For more complicated flux functions with multiple extrema, one cannot use \(\eqref{eq:godunov_flux_simplified}\). Instead, one must solve the optimization problem \(\eqref{eq:godunov_flux}\), which may be computationally expensive.
Approximate Riemann solvers
Instead of solving the Riemann problems \(\eqref{eq:riemann_problem}\) exactly, one can approximate their solutions and use these to define the numerical flux \(F\). Such schemes are called approximate Riemann solvers.
Linearized (Roe) solvers
One simple method is to linearize the conservation law \(\eqref{eq:conservation_law}\):
For example, one may use the Roe average
Now, we can obtain the numerical flux \(F\) by replacing \(\eqref{eq:riemann_problem}\) with the linearized problem
Notice that this is just the linear transport equation with the explicit solution
The numerical scheme \(\eqref{eq:cell_average_update}\) with this flux is called the Roe or Murman-Roe scheme. This preserves shocks well, but fails at rarefaction waves, for example our usual test case with initial data \(\eqref{eq:initial_condition_rarefaction}\).
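A minimal sketch of such a flux: compute the Roe average and take the flux from the upwind side. Here df is assumed to be the derivative of f, used when the two states (nearly) coincide; sign conventions may differ slightly from \(\eqref{eq:roe_flux}\):
# Murman-Roe flux: Â = (f(b) - f(a)) / (b - a), upwind according to sign(Â).
function roe_flux(f, df, a, b)
    Â = a ≈ b ? df(a) : (f(b) - f(a)) / (b - a)
    return Â ≥ 0 ? f(a) : f(b)
end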
Central schemes
Due to the linearization, we only get a single wave travelling in a single direction depending on the sign of the Roe average \(\hat A\). This is also the case for shocks in the exact solution, but not for rarefaction waves, which is why the Roe scheme fails for rarefaction waves. Hence, instead of linearizing the conservation law, one can approximate the solution of the Riemann problem with two waves travelling in opposite directions from the interface with speeds \(s_{j+1/2}^l\) and \(s_{j+1/2}^r\). Different choices of speeds yield different schemes. The approximate solution is then
The middle state \(U_{j+1/2}^*\) can be determined by using the Rankine-Hugoniot condition:
where \(f_{j+1/2}^*\) is the middle flux. Now, solving for \(f_{j+1/2}^*\), we get
This yields the numerical flux
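In code, this two-wave middle flux can be sketched as follows (this is the standard HLL-type expression; sign conventions may differ slightly from the notes):
# Middle flux f*_{j+1/2} from the Rankine-Hugoniot conditions across the two
# waves with speeds sl ≤ 0 ≤ sr.
middle_flux(f, uL, uR, sl, sr) =
    (sr * f(uL) - sl * f(uR) + sl * sr * (uR - uL)) / (sr - sl)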
Lax-Friedrichs scheme
By using the maximum speed the waves can travel without intersecting at the interface,
we get the Lax-Friedrichs flux
The solutions are stable and non-oscillatory. Unlike the Roe scheme, it also approximates the entropy solution. However, shocks are not well preserved. See for example the usual Burgers' test case with initial data \(\eqref{eq:initial_condition_shock}\):
Rusanov scheme
If, instead of using the maximal speed, we use the local speeds of propagation, we get the Rusanov scheme:
with
Then, we get the Rusanov (or Local Lax-Friedrichs) flux
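Both fluxes are special cases of the middle flux above with symmetric speeds \(s^r = -s^l\): Lax-Friedrichs uses the grid speed \(\Delta x / \Delta t\), Rusanov the local wave speed. A sketch (df denotes the derivative of f):
# Lax-Friedrichs: sr = -sl = Δx/Δt.
lax_friedrichs_flux(f, uL, uR, Δx, Δt) =
    (f(uL) + f(uR)) / 2 - Δx / (2Δt) * (uR - uL)
# Rusanov: sr = -sl = max(|f'(uL)|, |f'(uR)|).
rusanov_flux(f, df, uL, uR) =
    (f(uL) + f(uR)) / 2 - max(abs(df(uL)), abs(df(uR))) / 2 * (uR - uL)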
For the same problem as above we get
and with the initial data \(\eqref{eq:initial_condition_rarefaction}\), we get
Comparison
Consistent, conservative and monotone schemes
Conservative schemes
A numerical scheme approximating \(\eqref{eq:conservation_law}\) can be formulated as
for some update function \(H\) depending on the \((2p+1)\)-point stencil \(\{U_{j-p}^n, \dots, U_{j+p}^n\}\) of the scheme. Until now, we have worked with \(3\)-point schemes, i.e. \(p=1\).
Definition (Conservative scheme)
The generic scheme \(\eqref{eq:general_scheme}\) approximating \(\eqref{eq:conservation_law}\) is called conservative if
ignoring the boundary conditions.
Theorem
Assume \(H(0, \dots, 0) = 0\).
Proof
Define
By conservation, we have that
for any sequence \(\{U_j^n\}_j\).
Now, define the sequence
Then, we have
Now, define the sequence
Then, we have
This results in the form \(\eqref{eq:godunov}\):
Then, we have
This yields a telescoping sum
Consistent schemes
We now consider numerical fluxes \(F_{j+1/2}^n\) defined on \((2p+1)\)-point stencils
Definition (Consistency)
The numerical flux \(\eqref{eq:numerical_flux}\) is consistent if \(F(U, \dots, U) = f(U)\) for all \(U \in \R\).
Example
All the numerical fluxes discussed above are consistent. Take for example Lax-Friedrichs:
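As a quick numerical sanity check (a sketch using Burgers' flux and a fixed ratio λ = Δt/Δx):
f(u) = u^2 / 2                               # Burgers' flux
λ = 0.5                                      # λ = Δt/Δx
F(a, b) = (f(a) + f(b)) / 2 - (b - a) / (2λ) # Lax-Friedrichs flux
all(F(u, u) ≈ f(u) for u in -2:0.25:2)       # true: F(U, …, U) = f(U)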
However, conservation and consistency do not imply stability or convergence. Take for example the Roe flux \(\eqref{eq:roe_flux}\), which is trivially consistent but does not converge for certain problems.
Monotone schemes
Recall that entropy solutions to the conservation law \(\eqref{eq:conservation_law}\) are monotonicity preserving. It would therefore be desirable to have numerical schemes with a discrete version of this property.
Definition (Monotone scheme)
The numerical scheme \(\eqref{eq:general_scheme}\) is monotone if it is non-decreasing in each argument, i.e. \((\nabla H)_i \ge 0\) for all \(i\).
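As a quick sanity check, one can verify this numerically for the Lax-Friedrichs scheme applied to Burgers' equation. The following is a standalone sketch, independent of the package used above; the ratio λ = 1/2 is an assumption chosen to satisfy the CFL condition for states in [-1, 1]:
# 3-point scheme H(a, b, c) = b - λ(F(b, c) - F(a, b)) with the Lax-Friedrichs flux.
f(u) = u^2 / 2
λ = 0.5
F(a, b) = (f(a) + f(b)) / 2 - (b - a) / (2λ)
H(a, b, c) = b - λ * (F(b, c) - F(a, b))

# Check ∂H/∂(each argument) ≥ 0 at random states in [-1, 1] via central differences.
ε = 1e-6
for _ in 1:100
    a, b, c = 2 .* rand(3) .- 1
    ∂a = (H(a + ε, b, c) - H(a - ε, b, c)) / (2ε)
    ∂b = (H(a, b + ε, c) - H(a, b - ε, c)) / (2ε)
    ∂c = (H(a, b, c + ε) - H(a, b, c - ε)) / (2ε)
    @assert min(∂a, ∂b, ∂c) ≥ -1e-8
end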
Lemma
Let \(F\) be a locally Lipschitz continuous two-point numerical flux. Then we have
and we get the following CFL-type condition
One can now use monotonicity to differentiate between robust and possibly non-robust schemes. For example, the Roe scheme is not monotone.
Stability properties of monotone schemes
Recall that the entropy solutions of \(\eqref{eq:conservation_law}\) have
- \(L^\infty\) bound
- \(L^p\) bounds
- \(TV\) bound
and satisfy time continuity. We will show that monotone schemes have these properties.
L∞ bound
Lemma
Let \(U_j^n\) be the approximate solution using a consistent, monotone scheme of the form \(\eqref{eq:general_scheme}\). Then we have
and in particular
Proof
Let \(\overline U_j^n = \max\{U_{j-p}^n, \dots, U_{j+p}^n\}\) be the maximum of the stencil. Now, as \(\eqref{eq:general_scheme}\) is monotone, we have
Similarly, we get the minimum principle.
Propagating this through time, we get the last claim too.
Entropy inequalities and Lᵖ bounds
In the continuous case, we used the entropy inequality to obtain \(L^p\) bounds. We will use a discrete counterpart to the Kruzkov entropy inequality. The Crandall-Majda numerical entropy flux is given by
where \(a \lor b = \max\{a, b\}\) and \(a \land b = \min\{a, b\}\). For consistent numerical fluxes, we see that this entropy flux is consistent with the Kruzkov entropy flux:
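For a two-point numerical flux \(F\), this entropy flux can be written directly in code (a sketch; k denotes the Kruzkov parameter). Consistency follows since for uL = uR = U both arguments of each F collapse:
# Q_{j+1/2} = F(U_j ∨ k, U_{j+1} ∨ k) - F(U_j ∧ k, U_{j+1} ∧ k)
cm_entropy_flux(F, uL, uR, k) =
    F(max(uL, k), max(uR, k)) - F(min(uL, k), min(uR, k))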
Lemma (Crandall-Majda [CM80])
Let \(U_j^n\) be the approximate solution using a consistent, conservative, monotone scheme of the form \(\eqref{eq:general_scheme}\). Then we have the discrete entropy inequality
Moreover, if \(U_0 \in L^1(\R)\), then
Proof
The scheme is conservative, so we have
and similarly for the minimum. Taking their difference, we get
Further, by monotonicity, we have
so inserting into the above equation, we get the first claim.
Now, let \(k=0\) and sum over \(j\) to get the second claim.
TV bound
If we interpret the cell averages as step functions, then the total variation reduces to the sum of the jumps:
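In code, the total variation of a vector of cell averages is simply the sum of the absolute jumps (a minimal sketch):
total_variation(U) = sum(abs.(diff(U)))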
We can rewrite the finite volume scheme \(\eqref{eq:godunov}\) in the incremental form
where
Then, we can use the following result:
Harten's Lemma [Har83]
Let \(U_j^n\) be computed with \(\eqref{eq:incremental_form}\).
- If \(C_{j+1/2}^n, D_{j+1/2}^n \ge 0\) and \(C_{j+1/2}^n + D_{j+1/2}^n \le 1\) for all \(n, j\), then
  \[\norm{U^{n+1}}_{TV(\R)} \le \norm{U^n}_{TV(\R)}.\]
- If \(C_{j+1/2}^n, D_{j-1/2}^n \ge 0\) and \(C_{j+1/2}^n + D_{j-1/2}^n \le 1\) for all \(n, j\), then
  \[\norm{U^{n+1}}_{L^\infty} \le \norm{U^n}_{L^\infty}.\]
Proof
Using \(\eqref{eq:incremental_form}\), we get the difference
Taking the absolute value, we get the inequality
Summing over \(j\), we get a telescoping sum, resulting in
Rewriting \(\eqref{eq:incremental_form}\) as
we see that \(U_j^{n+1}\) is a convex combination of \(U_{j-1}^n, U_j^n, U_{j+1}^n\). Thus, \(U_j^{n+1}\) is bounded by the maximum and minimum of these values, so we get the claim.
It turns out that consistent, conservative, monotone schemes satisfy these criteria, so we can use this lemma to show that the schemes are TVD.
Lemma
Monotone, consistent, conservative 3-point schemes are TVD under the CFL condition \(\eqref{eq:monotone_CFL}\).
Proof
The coefficients are non-negative:
Similarly for \(D_{j-1/2}^n\). Further, we have
and similarly for \(D_{j-1/2}^n\). Then, under the CFL condition \(\eqref{eq:monotone_CFL}\), we get
Time continuity
In the continuous case, we have the time continuity property
We can show that the discrete solution has a similar property.
Lemma
Let \(U_j^n\) be computed using a consistent, conservative and monotone scheme with a Lipschitz continuous numerical flux \(F=F(a,b)\) whose Lipschitz constant is \(C_F\). Then we get
Proof
Taking the absolute value of \(\eqref{eq:godunov}\) and summing over \(j\), we get
Now, taking the difference from \(k=n\) to \(k=m\) and using the TVD property, we get
Convergence of monotone schemes
We have now shown that conservative, consistent and monotone schemes are stable in \(L^\infty\), are TVD and satisfy the discrete entropy inequality. We will use this to prove convergence. Further, the Lax-Wendroff theorem shows that the limit is the entropy solution. To simplify notation, we will set
Lax-Wendroff theorem
Let \(U_j^n\) be an approximate solution of \(\eqref{eq:conservation_law}\) using a consistent and conservative finite volume scheme. Assume that the numerical flux is differentiable and that \(U_j^0\) are the cell averages of the initial data.
Assume further that
- \(U_0 \in L^\infty(\R)\),
- \(\UDx\) is uniformly bounded in \(\Delta x\),
- there exists a function \(U \in L_{\text{loc}}^1(\R \times \R_+)\) such that \(\UDx \to U\) in this space as \(\Delta x, \Delta t \to 0\).
Then, \(U\) is a weak solution of \(\eqref{eq:conservation_law}\) with initial data \(U_0\).
Proof
Let \(\phi \in C_c^\infty(\R \times \R_+)\) be a test function. Next, multiply \(\eqref{eq:godunov}\) by the test function values \(\phi(x_j, t^n)\) and sum over \(j, n\):
As \(\phi\) has compact support, the above sum is finite. Define \(\phi_j^n = \phi(x_j, t^n)\) and denote \(\pDx(x, t)\) similarly to \(\UDx\). Then, we can express the above sums as integrals:
We have that \(\abs{\UDx} \le \abs{U}\) and \(\abs{\pDx} \le \abs{\phi}\), so we can use the dominated convergence theorem for the first two integrals \(I_1^{\Delta x}\) yielding
Next, rewrite \(I_2^{\Delta x}\) as
Claim: \(I_{2,2}^{\Delta x} \xrightarrow{\Delta x, \Delta t \to 0} 0\)
Recall that \(F\) is differentiable and \(U \in L^\infty(\R \times \R_+)\), so we have
we can use the above inequality for \(I_{2,2}^{\Delta x}\) to get
Using approximation by \(C^\infty\) functions, we get the desired result.
Let \(\psi_l \in C^\infty(\R)\) be a sequence of functions converging to \(U\) in \(L_{\text{loc}}^1(\R\times \R_+)\). The integrand can be bounded by
By dominated convergence, \(\iint_{\supp \phi_x} J_i \dd x \dd t \to 0\) as \(\Delta x, \Delta t \to 0\) for \(i=1,3\). By definition of \(\psi_l\), the same holds for \(i=4\). Convince yourself that it stays true for \(i=2\).
Again, by dominated convergence, we get that
Combining the results, we get
Lemma
Assume the same conditions as in the Lax-Wendroff theorem, including that the discrete entropy inequality \(\eqref{eq:discrete_entropy_inequality}\) holds. Then,
is the entropy solution of \(\eqref{eq:conservation_law}\).
Proof
The proof is almost identical to that of the Lax-Wendroff theorem. We start with the discrete entropy inequality \(\eqref{eq:discrete_entropy_inequality}\) instead of \(\eqref{eq:godunov}\) and use a non-negative test function \(\phi\). Then, following the same steps as in the proof of the Lax-Wendroff theorem, we get
where
As in the proof of the Lax-Wendroff theorem, we have that
We also rewrite \(I_2^{\Delta x} = I_{2,1}^{\Delta x} + I_{2,2}^{\Delta x}\) as in Lax-Wendroff. Again,
Doing as in the proof of the Lax-Wendroff theorem, we see that \(I_{2,2}^{\Delta x}\) vanishes. Combining the results, we get
Now, we can use monotonicity, conservation, consistency and the following two known results to show convergence.
Ascoli's Theorem
Let \((X, d_X)\) be a metric space and \(K \subset X\) be relatively compact. Further, let
be a sequence of uniformly Lipschitz continuous functions, i.e.
Then, there exists a subsequence \(u_{k_\ell}\) converging to a Lipschitz continuous function \(u\) uniformly on \([0, T]\).
Helly's Theorem [Giu84, T.1.19]
Let \(a, b \in \R\) and \(K \subset L^1([a, b])\). If there exists a number \(M>0\) such that
then \(K\) is relatively compact in \(L^1([a, b])\).
If additionally, \(K \subset L^1(\R)\) and
then \(K\) is relatively compact in \(L^1(\R)\).
For the convergence, we define the piecewise affine (in time) function
Theorem
Consider the conservation law \(\eqref{eq:conservation_law}\) with \(f \in C^1(\R)\) and \(U_0 \in BV(\R)\), and a consistent, conservative, monotone finite volume scheme under the CFL condition \(\eqref{eq:monotone_CFL}\). Assume further that there is some \(c>0\) such that \(\frac{\Dt}{\Dx} \ge c\).
Then, \(\UDx \to U\) in \(L^1(\R \times [0,T])\) and \(U\) is the entropy solution.
Proof
Let \(K = \qty{U = \UDx(\cdot, t) \mid t \in [0, T], \Dx > 0}\) be functions \(U : \R \to \R\) attained at some time \(t\) by the scheme.
Claim: The conditions of Helly's Theorem are satisfied
By Crandall-Majda, we have that \(K\) is bounded in \(L^1(\R)\) by \(\norm{U_0}_{L^1(\R)}\). We have also shown that the scheme is TVD, so \(K \subset BV(\R)\).
For the second part of the theorem, we have that \(U_0 \in L^1(\R)\), so
Additionally, by monotonicity, we have that
So, propagating at the speed \(1/c\), we get that
where \(R = r + T/c\). Next, let \(J \in \N\) be such that \(R \in \Cell_J\). Then, we sum the discrete entropy inequality \(\eqref{eq:discrete_entropy_inequality}\) over \(\abs{j} > J\) and over \(n=0,\dots,N\) for \(k=0\):
Now, \(U_j^0\) is the cell average of \(U_0\), so we further get the bound
Finally, for \(t \in [t^m, t^{m+1})\), we have
Claim: \(\abs{Q_{\pm J \pm 1/2}^n} \le 2C_F\eps\)
Recall that \(F\) is Lipschitz continuous, so
Similarly for \(Q_{-J-1/2}^n\). Recall further that \(J\) is chosen such that \(U_J^n, U_{J+1}^n \le \eps\) for all \(n\). Hence, we get the desired result.
Then, we get the bound
independent of \(t\) and \(\Dx\). This proves the claim.
So \(K\) is relatively compact in \(L^1(\R)\). Then, by Ascoli's theorem, there exists a subsequence \(\Dx'\) and \(U \in L^1(\R \times [0, T])\) such that \(\UDx{}^{'} \to U\). By Lax-Wendroff, \(U\) is the entropy solution.