In order to discretize a problem as presented in the General Problem Description, the problems can be categorized into several classes. All of these classes can be implemented in a common, general framework.
Let the domain \(\Omega\) be covered by a grid \(\Omega_h\). By choosing an appropriate trial space, each unknown continuous function \(u_i(\vec{x},t)\) is approximated by a finite-dimensional numerical function \(u_{h,i}(\vec{x},t)\). For each function \(u_{h,i}\) a certain number of degrees of freedom (DoFs) has to be distributed on the grid, each associated with a geometric object. For example, one can think of piecewise linear functions on each grid element, requiring one DoF per vertex of the element and no DoFs on the other geometric objects such as edges or faces. Let \(N_{h,i}\) be the number of DoFs needed to represent the unknowns of the i-th function and let \(N_h = \sum_{i=1}^n N_{h,i}\) be the number of DoFs needed for the whole problem. Each discrete function \(u_{h,i}\) is isomorphic to a representation in \(\mathbb{R}^{N_{h,i}}\); thus it can be represented by an algebraic vector \(\vec{u}_{h,i} \in \mathbb{R}^{N_{h,i}}\), and the entries \((\vec{u}_{h,i})_j \in \mathbb{R}\) of this vector are the DoFs of the function. By the same construction a representation of the solution of the whole system can be found.
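To make this bookkeeping concrete, the following minimal sketch (plain NumPy, not ug4's DoF-distribution interfaces; the tiny mesh and all variable names are made up for illustration) distributes one DoF per vertex for a piecewise linear function on a small triangular grid:

```python
import numpy as np

# A tiny triangular grid: 4 vertices, 2 triangles (vertex indices per element).
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
triangles = np.array([[0, 1, 2], [0, 2, 3]])

# Piecewise linear (P1) trial space: one DoF per vertex, none on edges or faces.
N_h = len(vertices)      # number of DoFs for one scalar unknown u_{h,i}

# The discrete function is fully described by its coefficient vector in R^{N_h};
# entry j holds the DoF associated with vertex j.
u_h = np.zeros(N_h)
```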
To simplify the notation for the remainder of this page, let \(\vec{u}_h = \vec{u}_{h,1}\), i.e. consider a scalar problem. For systems of unknowns the ideas carry over in a straightforward way.
For the discretization of the PDE there are several possibilities.
This gives rise to discretized operators \(\mathcal{M}_h(\cdot)\) and \(\mathcal{A}_h(\cdot)\) and a discrete right-hand side \(\vec{f}_h\), with \( \mathcal{M}_h, \mathcal{A}_h: \mathbb{R}^{N_h} \mapsto \mathbb{R}^{N_h} \) and \(\vec{f}_h \in \mathbb{R}^{N_h}\).
The discrete problem can be written as
\begin{align*} \partial_t \mathcal{M}_h(\vec{u}_h) + \mathcal{A}_h(\vec{u}_h) = \vec{f}_h \end{align*}
Note that the boundary conditions are already taken into account in the operators and the right-hand side.
The easiest case is that of time-independent (stationary), linear problems. This means that the time-derivative term vanishes and the operator is linear, i.e. \(\mathcal{A}_h(\vec{u}_h) = A_h \vec{u}_h\) with a matrix \(A_h \in \mathbb{R}^{N_h \times N_h}\).
In this case the "Stiffness-Matrix" \(A_h\) and the right-hand side vector \(\vec{f}_h \in \mathbb{R}^{N_h}\) have to be assembled. Then, it remains to use an adequate matrix solver to solve the linear system
\begin{align*} A_h \vec{u}_h = \vec{f}_h. \end{align*}
In order to specify a problem, the user has to supply the assembling of the "Stiffness-Matrix" \(A_h\) and of the right-hand side \(\vec{f}_h\).
The solution of the algebraic problem (inverting \(A_h\)) can be implemented independently of the actual problem and/or discretization.
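As a minimal sketch of this workflow for a concrete example, consider the model problem \(-u'' = 1\) on \((0,1)\) with homogeneous Dirichlet boundary conditions, discretized with piecewise linear functions on a uniform grid. The assembly and the solver below are plain SciPy, not ug4's interfaces:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                      # number of interior grid points (Dirichlet DoFs eliminated)
h = 1.0 / (n + 1)           # uniform mesh width

# Stiffness matrix of -u'' with linear finite elements: tridiag(-1, 2, -1) / h
A_h = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h

# Right-hand side for the constant source f(x) = 1 (lumped integration)
f_h = h * np.ones(n)

# Solve the linear stationary problem A_h u_h = f_h with a sparse direct solver
u_h = spla.spsolve(A_h, f_h)
```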
Stationary, non-linear problems are described by \(\mathcal{A}_h(\vec{u}_h) = \vec{f}_h\).
Thus, in order to solve such a problem, typically a Newton-type method is used. To this end, the problem is reformulated using the "defect":
\begin{align*} d_h: \mathbb{R}^{N_h} &\mapsto \mathbb{R}^{N_h}\\ \vec{u}_h &\mapsto \vec{d}_h(\vec{u}_h) := \mathcal{A}_h(\vec{u}_h)-\vec{f}_h. \end{align*}
The aim is to find a root \(\vec{d}_h(\vec{u}_h) = 0\) of the defect. This is usually done by the Newton method or some kind of fixed-point iteration. Starting with an initial guess \(\vec{u}_{h,0}\), the following iteration is performed
\begin{align*} \vec{c}_{h,k} &:= (J_h(\vec{u}_{h,k}))^{-1} \vec{d}_h(\vec{u}_{h,k}),\\ \vec{u}_{h,k+1} &:= \vec{u}_{h,k} - \alpha_k \vec{c}_{h,k} \end{align*}
with a damping factor \(\alpha_k \in (0, 1]\) and a preconditioner \(J_h(\vec{u}_{h,k})\).
There are several choices for \(J_h(\vec{u}_{h,k})\):
Jacobian
\begin{align*} J_h(\vec{u}_{h,k}) = \frac{\partial \vec{d}_h(\vec{u}_h)}{\partial \vec{u}_h} = \frac{\partial \mathcal{A}_h(\vec{u}_h)}{\partial \vec{u}_h} \end{align*}
The Jacobian is the linearization of the defect with respect to the unknown DoFs, i.e.
\begin{align*} (J_h(\vec{u}_{h,k}))_{ij} = \frac{\partial (\vec{d}_h(\vec{u}_h))_i}{\partial (\vec{u}_h)_j} \end{align*}
Identity
\begin{align*} J_h(\vec{u}_{h,k}) = \mathbb{I} \end{align*}
This gives rise to a very simple fixed-point iteration. Between these two extremes there is a whole range of approaches obtained by neglecting certain terms of the exact Jacobian. Choosing the exact Jacobian as the preconditioner results in a (locally) quadratic convergence rate of the iteration; the other schemes yield only a linear convergence rate.
One major part of the non-linear iteration is the solution of the linear problem
\begin{align*} J_h(\vec{u}_{h,k}) \vec{c}_{h,k} = \vec{d}_h(\vec{u}_{h,k}). \end{align*}
This problem has exactly the same structure as the linear, stationary problem, and therefore all linear matrix solvers can also be used here. It is worthwhile to note the analogy between solution and correction on the one hand, and between right-hand side and defect on the other. The preconditioner usually has a sparsity pattern similar to that of the matrix of the linear problem. Note that every linear problem can also be viewed as a non-linear one; in that case the preconditioner is just the matrix itself, \(J_h \equiv A_h\), and the Newton scheme must and will converge in a single step.
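The following sketch shows the damped Newton iteration described above for a small artificial non-linear defect. All names and the toy problem are illustrative; in practice the linear sub-problem would be handed to a sparse matrix solver:

```python
import numpy as np

def newton(defect, jacobian, u0, tol=1e-10, max_iter=50, alpha=1.0):
    """Damped Newton iteration u_{k+1} = u_k - alpha * J(u_k)^{-1} d(u_k).

    defect(u)   -> d_h(u) as a NumPy vector
    jacobian(u) -> J_h(u) as a dense matrix (exact Jacobian or an approximation)
    """
    u = u0.copy()
    for _ in range(max_iter):
        d = defect(u)
        if np.linalg.norm(d) < tol:          # converged: d_h(u) is (almost) zero
            break
        c = np.linalg.solve(jacobian(u), d)  # linear sub-problem J c = d
        u -= alpha * c                       # damped update
    return u

# Toy defect d(u) = A u + u^3 - f (component-wise cube as non-linearity)
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
f = np.array([1.0, 1.0])
defect = lambda u: A @ u + u**3 - f
jacobian = lambda u: A + np.diag(3.0 * u**2)
print(newton(defect, jacobian, np.zeros(2)))
```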
In order to specify a problem, the user has to supply the assembling of the defect \(\vec{d}_h\) and of the preconditioner \(J_h\) (e.g. the Jacobian).
All the other steps can be implemented independently of the actual problem and/or discretization.
The discretization of the time-dependent part of a PDE is closely related to the solution of ordinary differential equations (ODEs). Those have the form
\begin{align*} \partial_t u(t) &= - f(t, u(t)) \\ u(t_{start}) &= u_0 \end{align*}
where the unknown function is \(u: \mathbb{R} \mapsto \mathbb{R}^N\) and the right-hand side is \(f: \mathbb{R} \times \mathbb{R}^N \mapsto \mathbb{R}^N\).
In order to discretize such an ODE, introduce a time grid \(\{t_0 = t_{start}, t_1, t_2, \ldots \}\) and define the time step size \(\Delta t^k := t_{k} - t_{k-1}\). Let \(u^{(k)} \approx u(t_k)\) be the approximation of \(u\) at time step \(k\). A general \(L\)-step discretization scheme for ODEs takes the form
\begin{align*} \sum_{l=0}^{L-1} \left[ s_{m,l} u^{(k-l)} + s_{a,l} f(t^{k-l}, u^{(k-l)}) \right] = 0. \end{align*}
where \(s_{m,l}\) and \(s_{a,l}\) are scheme-specific scaling factors. Denote by \(\vec{s}_m := (s_{m,0}, \ldots, s_{m,L-1})\) and \(\vec{s}_a := (s_{a,0}, \ldots, s_{a,L-1})\) the vectors of these scaling factors.
Some common schemes are
Implicit Euler
\begin{align*} \vec{s}_a &= (\Delta t^k, 0)\\ \vec{s}_m &= (1, -1) \end{align*}
Explicit Euler
\begin{align*} \vec{s}_a &= (0, \Delta t^k)\\ \vec{s}_m &= (1, -1) \end{align*}
Crank-Nicolson
\begin{align*} \vec{s}_a &= (\frac{1}{2} \Delta t^k, \frac{1}{2} \Delta t^k)\\ \vec{s}_m &= (1, -1) \end{align*}
Theta scheme
\begin{align*} \vec{s}_a &= (\theta \Delta t^k, (1- \theta)\Delta t^k)\\ \vec{s}_m &= (1, -1) \end{align*}
which implements the three aforementioned schemes by specifying \(\theta\) (\(\theta = 1\): implicit Euler, \(\theta = 0\): explicit Euler, \(\theta = \frac{1}{2}\): Crank-Nicolson).
BDF(2)
\begin{align*} \vec{s}_a &= (\frac{2}{3} \Delta t^k, 0, 0)\\ \vec{s}_m &= (1, -\frac{4}{3}, \frac{1}{3}) \end{align*}
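As a small sketch of how such a pair \((\vec{s}_m, \vec{s}_a)\) defines a time step, the following applies the general \(L\)-step formula to the scalar test ODE \(\partial_t u = -\lambda u\), i.e. \(f(t,u) = \lambda u\), for which the implicit stage can be solved in closed form. The helper names are made up for illustration:

```python
import math

def step(s_m, s_a, u_hist, lam):
    """One step of the general L-step scheme for the test ODE du/dt = -f(t,u)
    with f(t,u) = lam * u; u_hist = [u^{k-1}, u^{k-2}, ...], newest first."""
    rhs = -sum((s_m[l] + s_a[l] * lam) * u_hist[l - 1] for l in range(1, len(s_m)))
    return rhs / (s_m[0] + s_a[0] * lam)

lam, dt, u0 = 1.0, 0.1, 1.0
theta = 0.5                                 # 1: implicit Euler, 0: explicit Euler, 1/2: Crank-Nicolson
s_a = (theta * dt, (1.0 - theta) * dt)      # theta scheme
s_m = (1.0, -1.0)

u_hist = [u0]
for k in range(10):
    u_hist.insert(0, step(s_m, s_a, u_hist, lam))
print(u_hist[0], math.exp(-lam * dt * 10))  # numerical vs. exact value at t = 1
```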
Now assume that the time-derivative term \(\partial_t \mathcal{M}(u)\) has the simple form \(\frac{\partial}{\partial t} u\). One can rearrange the problem to
\begin{align*} \frac{\partial}{\partial t} u = - \mathcal{A}(u, t) + f(t) \end{align*}
and finds the setup for an ODE problem in time.
Discretize the time-dependent PDE problem first in space. This gives
\begin{align*} \frac{\partial}{\partial t} M_h \vec{u}_h(t) = - \mathcal{A}_h(\vec{u}_h(t), t) + \vec{f}_h(t). \end{align*}
Supposing that the mass matrix \(M_h\) is independent of time, one can apply the ODE time-stepping scheme and obtains the next time step by solving
\begin{align*} \vec{d}_h(\vec{u}_h^{(k)}) := \sum_{l=0}^{L-1} \left[ s_{m,l} M_h \vec{u}_h^{(k-l)} + s_{a,l} \left( \mathcal{A}_h(\vec{u}_h^{(k-l)}, t^{(k-l)}) - \vec{f}_h(t^{(k-l)}) \right) \right] = 0. \end{align*}
The defect \(\vec{d}_h(\vec{u}_h^{(k)})\) can be split up into
\begin{align*} \vec{d}_h(\vec{u}_h^{(k)}) := \sum_{l=0}^{L-1} \hat{\vec{d}}_h(\vec{u}_h^{(k-l)}, t^{(k-l)}, s_{m,l}, s_{a,l}) \end{align*}
where
\begin{align*} \hat{\vec{d}}_h(\vec{u}_h, t, s_m, s_a) := s_m M_h \vec{u}_h + s_a \left( \mathcal{A}_h(\vec{u}_h, t) - \vec{f}_h(t) \right). \end{align*}
For the Jacobian one finds
\begin{align*} J_h(\vec{u}_h^{(k)}, t^k, s_{m,0}, s_{a,0}) = \frac{\partial \vec{d}_h(\vec{u}_h)}{\partial \vec{u}_h} \left( \vec{u}_h^{(k)} \right) = \frac{\partial \hat{\vec{d}}_h(\vec{u}_h, t^k, s_{m,0}, s_{a,0})} {\partial \vec{u}_h} \left( \vec{u}_h^{(k)} \right) = s_{m,0} M_h + s_{a,0} \frac{\partial \mathcal{A}_h(\vec{u}_h, t^k)}{\partial \vec{u}_h} \left( \vec{u}_h^{(k)} \right) \end{align*}
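A sketch of this splitting for the special case of a linear spatial operator \(\mathcal{A}_h(\vec{u}_h) = A_h \vec{u}_h\) (so that \(\partial \mathcal{A}_h / \partial \vec{u}_h = A_h\)); the function and argument names are illustrative, not ug4's interfaces:

```python
import numpy as np

def d_hat(u, t, s_m, s_a, M, A, f):
    """Single-stage contribution  s_m * M u + s_a * (A u - f(t))  to the defect."""
    return s_m * (M @ u) + s_a * (A @ u - f(t))

def jacobian(t, s_m, s_a, M, A):
    """Jacobian of d_hat w.r.t. u for a linear operator:  s_m * M + s_a * A."""
    return s_m * M + s_a * A

# The full defect of an L-step scheme is the sum of d_hat over the stages:
#   d_h(u^(k)) = sum_l d_hat(u^(k-l), t^(k-l), s_{m,l}, s_{a,l})
def defect(u_hist, t_hist, s_m, s_a, M, A, f):
    return sum(d_hat(u_hist[l], t_hist[l], s_m[l], s_a[l], M, A, f)
               for l in range(len(s_m)))
```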
Thus, in order to specify a problem, the user has to supply the assembling of the stage defect \(\hat{\vec{d}}_h\) and of the corresponding Jacobian.
The time-dependent problem is linear if \(\mathcal{M}(\cdot)\) and \(\mathcal{A}(\cdot)\) are linear. In this case one finds \(\mathcal{A}_h(\vec{u}_h) \equiv A_h \vec{u}_h\).
The defect equation can now be written in the form
\begin{align*} s_{m,0} M_h \vec{u}_h^{(k)} + s_{a,0} A_h \vec{u}_h^{(k)} = s_{a,0} \vec{f}_h^{(k)} - \sum_{l=1}^{L-1} \left[ s_{m,l} M_h \vec{u}_h^{(k-l)} + s_{a,l} \left( A_h \vec{u}_h^{(k-l)} - \vec{f}_h^{(k-l)} \right) \right]. \end{align*}
This system of equations has the form
\begin{align*} B_h \vec{u}_h^{(k)} = b_h \end{align*}
where \(B_h := s_{m,0} M_h + s_{a,0} A_h\) and \(b_h\) is the right-hand side of the equation above, which depends only on already computed time steps.
The problem can thus be solved using the same matrix solvers used for the stationary linear problems.
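A minimal sketch of this reuse, assuming the 1D heat equation with homogeneous Dirichlet boundaries, a lumped mass matrix, and the implicit Euler scheme \(\vec{s}_m = (1,-1)\), \(\vec{s}_a = (\Delta t, 0)\); everything below is plain SciPy, not ug4's time-stepping classes:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, h, dt = 50, 1.0 / 51, 1e-3
A_h = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h
M_h = sp.identity(n, format="csc") * h        # lumped mass matrix
f_h = np.zeros(n)                             # no source term

s_m0, s_a0 = 1.0, dt                          # implicit Euler: s_m = (1, -1), s_a = (dt, 0)
s_m1, s_a1 = -1.0, 0.0
B_h = s_m0 * M_h + s_a0 * A_h                 # same matrix in every step (dt fixed),
solve = spla.factorized(B_h)                  # so factorize once and reuse it

x = np.linspace(h, 1.0 - h, n)
u = np.sin(np.pi * x)                         # initial condition
for k in range(100):
    # b_h = s_{a,0} f^{(k)} - [ s_{m,1} M u^{(k-1)} + s_{a,1} (A u^{(k-1)} - f^{(k-1)}) ]
    b = s_a0 * f_h - (s_m1 * (M_h @ u) + s_a1 * (A_h @ u - f_h))
    u = solve(b)                              # B_h u^{(k)} = b_h
```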
In the non-linear case the defect equation
\begin{align*} \vec{d}_h(\vec{u}_h^{(k)}) := \sum_{l=0}^{L-1} \left[ s_{m,l} M_h \vec{u}_h^{(k-l)} + s_{a,l} \left( \mathcal{A}_h(\vec{u}_h^{(k-l)}, t^{(k-l)}) - \vec{f}_h^{(k-l)} \right) \right] = 0 \end{align*}
has to be solved using a Newton scheme. Fixing the time step \(k\), this problem has the same structure as the stationary non-linear problems. Thus, the solver framework can be reused.
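A sketch of this reuse in the non-linear case: per time step, the defect of the implicit Euler scheme for the scalar model ODE \(\partial_t u = -u^3\) is driven to zero by the same kind of Newton iteration as in the stationary case. The helper mirrors the Newton sketch above; all names and the model problem are illustrative:

```python
def newton(defect, jacobian, u0, tol=1e-12, max_iter=20):
    """Scalar Newton iteration: u <- u - J(u)^{-1} d(u) until the defect vanishes."""
    u = u0
    for _ in range(max_iter):
        d = defect(u)
        if abs(d) < tol:
            break
        u -= d / jacobian(u)
    return u

# Implicit Euler (s_m = (1,-1), s_a = (dt,0)) for du/dt = -u^3,
# i.e. A(u) = u^3, M = 1, f = 0.
dt, u = 0.01, 1.0
for k in range(100):
    u_old = u
    defect = lambda v: v - u_old + dt * v**3       # defect d_h at time step k
    jacobian = lambda v: 1.0 + 3.0 * dt * v**2     # s_{m,0} M + s_{a,0} dA/du
    u = newton(defect, jacobian, u_old)            # solve for u^{(k)}
print(u)
```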
This splitting concept is strongly influenced by a short but very illuminating paper by Dmitriy Logashenko. Special thanks to him for sharing his insight.