Constructing operator for mesh fairing, are these 2 approaches equivalent?


There is a 2004 paper in computer graphics called "An Intuitive Framework for Real-Time Freeform Modeling" which explains how to make fairing operators to smooth out meshes. In particular equation 5 looks like this:

$$ \begin{pmatrix} \bar\Delta^k \\ 0 | I_{F+H} \end{pmatrix} \begin{pmatrix} p\\f\\h \end{pmatrix} = \begin{pmatrix} 0\\f\\h \end{pmatrix} $$

They define their operator using an atypical formulation, but I will assume that it is just a high-order Laplacian (which it is if you fix the user-controlled parameter $\lambda_k$ to 1).

i.e. $\Delta^k = \Delta \Delta \Delta \cdots \Delta$ $k$ times.

They have a recursive definition:

$$\bar \Delta ^ k(p) := \Delta(\lambda_{k-1}(p)\cdot \bar \Delta ^ {k-1}(p))$$

But by the assumption on $\lambda_k$ we get:

$$\bar \Delta ^ k(p) := \Delta(\bar \Delta ^ {k-1}(p))$$

There is also a discrete operator for the Laplacian on meshes; I will simply call that operator $L$, where $L$ is a square, symmetric matrix.

If $X$ is the matrix of the sorted coordinates of all vertices in the mesh, then $LX$ produces the evaluation of the Laplacian applied to the position function $f(x,y,z) = (x,y,z)$.
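For concreteness, here is a minimal sketch of such an $L$ and the product $LX$. This is my own toy example, using uniform ("umbrella") weights on a tetrahedron rather than the cotan weights, but the structure of $LX$ is the same either way:

```python
import numpy as np

# Toy mesh: a tetrahedron, one row of (x, y, z) per vertex.
X = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

A = np.ones((4, 4)) - np.eye(4)  # adjacency: every vertex neighbours every other
D = np.diag(A.sum(axis=1))       # vertex degrees
L = (A - D) / 3.0                # row-normalised uniform Laplacian: rows sum to 0

LX = L @ X  # the Laplacian applied to the position function f(x,y,z) = (x,y,z)
print(LX)   # each row points from the vertex towards its neighbours' centroid
```

Each row of `LX` is the difference between the centroid of the vertex's neighbours and the vertex itself, which is the usual discrete mean-curvature-flow direction.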

Question 1:

Is it true that $L \times L$ is equivalent to applying the Laplacian twice?

If that is true, then am I right to simplify the above matrix as:

$$ \begin{pmatrix} L^k & 0 & 0 & 0 \\ 0 & L^k & 0 & 0\\ 0 & 0 & L^k & 0\\ 0 & \cdots & & I_{F+ H} \end{pmatrix} \begin{pmatrix} p_x\\ p_y\\ p_z\\f\\h \end{pmatrix} = \begin{pmatrix} 0\\f\\h \end{pmatrix} $$ ???

Matrix example

Following lightxbulb's advice I made a method that should compute the described matrix. I tried it on the toy example of a cube, where the last 3 vertices are considered fixed vertices, which yields the following matrix:

[[-0.5000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.1667505949, 0.1666247026, 0.1666247026, 0.0000000000],
 [0.0000000000, -0.5000000000, 0.1667093676, 0.1667093676, 0.0000000000, 0.0000000000, 0.0000000000, 0.1665812649],
 [0.0000000000, 0.1666247026, -0.5000000000, 0.0000000000, 0.1667505949, 0.0000000000, 0.1666247026, 0.0000000000],
 [0.0000000000, 0.1666247026, 0.0000000000, -0.5000000000, 0.1667505949, 0.1666247026, 0.0000000000, 0.0000000000],
 [0.1666666733, 0.0000000000, 0.1666666733, 0.1666666733, -0.5000000199, 0.0000000000, 0.0000000000, 0.0000000000],
 [0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000],
 [0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000],
 [0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000],
 [0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 1.0000000000, 0.0000000000, 0.0000000000],
 [0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 1.0000000000, 0.0000000000],
 [0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 1.0000000000]]

And this is the RHS for the x coordinate:

[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0]

The first 5 rows are the Laplacian weights for the cube, i.e. $DM$ where $D$ holds the inverse Voronoi areas and $M$ the cotan weights. The next 3 rows are 0, and then there are 3 rows with an embedded $3\times 3$ identity matrix. Each of the first 5 rows has 4 nonzero entries; they add up to 0, the diagonal entry is negative, and the off-diagonal entries add up to its absolute value...
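Those row properties can be checked mechanically. A small NumPy check on the five Laplacian rows quoted above (the numbers are copied verbatim from the matrix dump):

```python
import numpy as np

# The five Laplacian rows from the cube example: each should sum to ~0,
# with a negative diagonal balanced by the off-diagonal neighbour weights.
rows = np.array([
    [-0.5, 0.0, 0.0, 0.0, 0.1667505949, 0.1666247026, 0.1666247026, 0.0],
    [0.0, -0.5, 0.1667093676, 0.1667093676, 0.0, 0.0, 0.0, 0.1665812649],
    [0.0, 0.1666247026, -0.5, 0.0, 0.1667505949, 0.0, 0.1666247026, 0.0],
    [0.0, 0.1666247026, 0.0, -0.5, 0.1667505949, 0.1666247026, 0.0, 0.0],
    [0.1666666733, 0.0, 0.1666666733, 0.1666666733, -0.5000000199, 0.0, 0.0, 0.0],
])
print(rows.sum(axis=1))  # all entries close to zero
```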

I am so confused: as far as I can tell this matrix is what has been described, but when I solve this system in the form $A^TAx = A^Tb$ I get NaNs for my non-toy examples, even though, as far as I can tell, the inputs are right.

These are the inputs and outputs for the 3 coordinates:

x in: [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
x out: [1.4, 133.1, 28.7, -55.9, -43.0, -10.7, -2.2, 1.0]

y in: [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, -1.0, 1.0]
y out: [-1.4, 148.9, 19.6, -48.6, -45.7, -11.1, -5.5, 1.0]

z in: [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, -1.0, -1.0]
z out: [-1.4, 150.2, 18.5, -46.9, -45.7, -11.0, -5.7, -1.0]
Answer

$\newcommand{\bm}[1]{\boldsymbol{#1}}$

Is $L^2$ equivalent to applying the Laplacian twice?

It is equivalent to a specific discretisation of the Laplacian applied twice; however, you have to modify this to account for different boundary conditions.
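The first half of the claim is just associativity of matrix products: $(LL)X = L(LX)$, so squaring $L$ equals applying the discrete Laplacian twice, before any rows are modified for boundary conditions. A quick numerical confirmation (random symmetric $L$, nothing mesh-specific):

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((6, 6))
L = L + L.T                      # symmetric, like the cotan Laplacian
X = rng.standard_normal((6, 3))  # stand-in for vertex coordinates

# (L L) X and L (L X) agree up to floating-point rounding.
print(np.max(np.abs((L @ L) @ X - L @ (L @ X))))
```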

Decomposition into Poisson Problems

Consider the polyharmonic equation: $$\Delta^q u = 0.$$ You could introduce variables $u^0,\ldots, u^{q-1}$ such that $u^k = \Delta u^{k-1}$, then: $$\Delta u^{q-1} = \Delta^2 u^{q-2} = \ldots = \Delta^{q} u^0.$$ Now you can write the $q$-harmonic equation as the following system: \begin{align} \Delta u^{q-1} &= 0 \\ \Delta u^{q-2} &= u^{q-1} \\ &\,\,\,\vdots \\ \Delta u^1 &= u^2 \\ \Delta u^0 &= u^1. \end{align} But those are just $q$ Poisson equations (the first one is a Laplace equation).
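In matrix terms, the chaining works like this (a 1D sketch of my own with the standard tridiagonal Dirichlet Laplacian, which is invertible, so the two routes can be compared directly):

```python
import numpy as np

# Solve the discrete biharmonic system L^2 u = f as two consecutive
# Poisson solves L u1 = f, L u = u1, and compare with solving L^2 directly.
n = 10
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
f = np.sin(np.linspace(0.0, np.pi, n))

u1 = np.linalg.solve(L, f)            # first Poisson problem
u_chain = np.linalg.solve(L, u1)      # second Poisson problem
u_direct = np.linalg.solve(L @ L, f)  # biharmonic solved in one go
print(np.max(np.abs(u_chain - u_direct)))
```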

Weak Formulation

Let's write the weak formulation for one of those equations: \begin{align} \int_{\Omega} v \Delta u^k &= \int_{\Omega} v u^{k+1} \\ -\int_{\Omega} \nabla v \cdot \nabla u^k + \int_{\partial\Omega} v \partial_n u^k &= \int_{\Omega} vu^{k+1}. \end{align}

Boundary Conditions

For a practical problem you typically have some boundary conditions prescribed. In the above decoupled system the boundary conditions that are easy to implement are of the form $\Delta^{\ell} u = f^{\ell}$ (Navier) and $\partial_{n} \Delta^{\ell} u = g^{\ell}$ (I don't know how one calls those, but they are essentially equivalent to reflecting conditions when localised to one of these equations). Other conditions may be harder or inapplicable for this coupled formulation (i.e. in that case you wouldn't be able to just write $\Delta^q \approx \bm{L}^q$). However, in the paper that you cite they consider Navier conditions $\Delta^{\ell} u = b^{\ell}$ so all is good.

Galerkin Discretisation

The next step is to get a finite-dimensional formulation of the above. For a conforming discretisation you would need a subspace $V_h$ of $H^q(\Omega)$ for your solution - this is however rather unwieldy: already for $q=2$ you have to resort to complex elements such as Argyris or Bell elements. The trick here is that you have this decomposition into $q$ Poisson subproblems, so you can choose a non-conforming discretisation, where you discretise each equation separately, i.e. you take a subspace of $H^1(\Omega)$. In your case the subspace would be the space of piecewise linear functions over your triangles, with basis functions the hat functions associated with each vertex: $\mathcal{B} = \{\phi_1,\ldots,\phi_n\}$. You can now write any function $u$ from this space as $$u(x) = \sum_{j=1}^n u_j \phi_j(x).$$ Because you consider Navier conditions, then $v(\partial\Omega)=0$ in all of the equations, and the boundary term vanishes. Since we are looking for solutions $u^k$ and $u^{k+1}$ that are from $V_h$ we can write them as linear combinations of the basis functions $\{\phi_j\}_{j=1}^n$. Plugging this into the weak formulation you get: \begin{gather} -\int_{\Omega} \nabla v \cdot \nabla u^k = -\sum_{j=1}^n u^k_j \int_{\Omega}\nabla v \cdot \nabla \phi_j = \sum_{j=1}^n u^{k+1}_j\int_{\Omega} v \phi_j = \int_{\Omega} vu^{k+1}. \end{gather} We can now consecutively plug $v = \phi_i$ for $1\leq i \leq n$, and we get $n$ different equations: $$-\sum_{j=1}^n u^k_j \int_{\Omega}\nabla \phi_i \cdot \nabla \phi_j = \sum_{j=1}^n u^{k+1}_j\int_{\Omega} \phi_i \phi_j, \quad 1\leq i \leq n.$$ We can collect the integral terms involving the derivatives into a matrix $W_{ij} = \int_{\Omega}\nabla \phi_i \cdot \nabla \phi_j$, which is termed the stiffness matrix. We can collect the remaining integral terms into the mass matrix $M_{ij} = \int_{\Omega} \phi_i \phi_j$. 
We can now write the system in matrix-vector form as: $-\bm{W}\bm{u}^k = \bm{M}\bm{u}^{k+1}$. Note that here $\bm{u}^{k}$ and $\bm{u}^{k+1}$ are the vectors of coefficients forming $u^k$ and $u^{k+1}$. With this we have the following discretisation: \begin{alignat}{3} \Delta u^{q-1} &= 0 &&\implies&& -\bm{W}\bm{u}^{q-1} &= \bm{0} \\ \Delta u^{q-2} &= u^{q-1} &&\implies&& -\bm{W}\bm{u}^{q-2} &= \bm{M}\bm{u}^{q-1} \\ & &&\;\;\;\;\vdots&& & \\ \Delta u^1 &= u^2 &&\implies&& -\bm{W}\bm{u}^{1} &= \bm{M}\bm{u}^{2} \\ \Delta u^0 &= u^1 &&\implies&& -\bm{W}\bm{u}^{0} &= \bm{M}\bm{u}^{1}. \end{alignat}
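As a sanity check on the assembly, here is a 1D instance (my own toy setup, not the mesh case): piecewise-linear hat functions on a uniform grid give the classic stiffness matrix $\bm{W} = \frac{1}{h}\,\mathrm{tridiag}(-1,2,-1)$ and consistent mass matrix $\bm{M} = \frac{h}{6}\,\mathrm{tridiag}(1,4,1)$ for the interior nodes:

```python
import numpy as np

# 1D hat-function assembly on a uniform grid of spacing h (interior nodes only).
n, h = 9, 0.1
W = (1.0 / h) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))  # stiffness
M = (h / 6.0) * (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))  # mass

# Given u^k, one step of the chain: solve  M u^{k+1} = -W u^k.
u_k = np.sin(np.pi * h * np.arange(1, n + 1))
u_k1 = np.linalg.solve(M, -W @ u_k)
```

Interior rows of $\bm{M}$ sum to $h$ (the hat functions form a partition of unity), and $\bm{W}$ is symmetric, which is a useful quick check on any assembly code.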

Discrete Operator

We can now rewrite the equation as $\bm{u}^{k+1} = -\bm{M}^{-1}\bm{W}\bm{u}^k$, and using the recursive definition we get that $-\bm{W}\bm{u}^{q-1} = (-1)^q\bm{W}(\bm{M}^{-1}\bm{W})^{q-1}\bm{u}^0$. Thus we have the following discretisation of the $q$-harmonic equation: $$\Delta^{q} u = 0\implies (-1)^q\bm{W}(\bm{M}^{-1}\bm{W})^{q-1}\bm{u} = \bm{0}.$$
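This operator is straightforward to form in code. The helper below is a hypothetical sketch (the function name is mine), using a toy stiffness/lumped-mass pair; with a lumped (diagonal) $\bm{M}$, as with Voronoi areas, the inverse is just a reciprocal of the diagonal:

```python
import numpy as np

def polyharmonic_operator(W: np.ndarray, M: np.ndarray, q: int) -> np.ndarray:
    """Assemble (-1)^q W (M^{-1} W)^{q-1} for the q-harmonic equation."""
    Minv_W = np.linalg.solve(M, W)
    return (-1.0) ** q * W @ np.linalg.matrix_power(Minv_W, q - 1)

# Toy 1D stiffness matrix and lumped (diagonal) mass matrix.
W = 2.0 * np.eye(4) - np.eye(4, k=1) - np.eye(4, k=-1)
M = np.diag([0.5, 1.0, 1.0, 0.5])
L1 = polyharmonic_operator(W, M, 1)  # q = 1 reduces to -W (Laplace)
L2 = polyharmonic_operator(W, M, 2)  # q = 2 gives the bilaplacian W M^{-1} W
```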

The Operator Modification in the Paper

In the paper they have this "modification" of the operator meant to mix different boundary conditions: $$\bar{\Delta}^k(p) = \Delta(\lambda_{k-1}(p)\cdot \bar{\Delta}^{k-1}(p)), \,\, \lambda_{k}(p) = \begin{cases} 1, & c(p)>k, \\ c(p)-k, & k-1\leq c(p) \leq k, \\ 0, & c(p)<k-1. \end{cases}$$ I tried to figure out what this means in practice by computing some $\bar{\Delta}^k$ and relying on the intuition of what they are trying to do, and I think they have a mistake in the above. I believe they meant to write: $$\lambda_{k}(p) = \begin{cases} 1, & c(p)>k+1, \\ c(p)-k, & k\leq c(p) \leq k+1, \\ 0, & c(p)<k. \end{cases}$$ It is likely that their mistake arose from writing the definition of $\lambda_{k-1}(p)$ instead of $\lambda_k(p)$. Taking into account this correction, let's look at some examples, e.g. set $c(p) = 0$ for all fixed vertices, then $\bar{\Delta}^k(p) = 0$, since $\lambda_j(p) = 0$ for all $0\leq j \leq k-1$. Thus you get $\bar{\Delta}^k(p) = \Delta(0) = 0$. Instead if you take $c(p) = \ell \in \{1,\ldots,k-1\}$ then you get $\bar{\Delta}^{k}(p) = \Delta^{\ell}(p)$. Finally, if you take $c(p) = \ell -1 + r$, where $r\in(0,1)$ you get $\bar{\Delta}^k(p) = r\Delta^{\ell}(p)$. In the discrete case $\bar{\Delta}^k(p_i)$ corresponds to modifying your $i$-th row (corresponding to a boundary condition index) to a discretisation of the operator part $\bar{\Delta}^k(p_i)$. For instance if you had $c(p_i)=0$ then you just replace the row with a zero row. If you had that $\bar{\Delta}^k(p_i) = r\Delta^{\ell}(p_i)$ then you replace the row with a discretisation of $r\Delta^{\ell}$ at vertex $i$, etc. In either case, let's call the matrix that you get after all of the modifications for all of the boundary vertices $\bm{A}$. That is, it results from modifying $(-1)^q\bm{W}(\bm{M}^{-1}\bm{W})^{q-1}$ by replacing some rows as discussed.
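The corrected blending weight is easy to state in code. A sketch (the function name `lam` is mine; it implements the corrected piecewise definition above):

```python
def lam(c: float, k: int) -> float:
    """Corrected blending weight lambda_k(p):
    1 beyond c = k + 1, a linear ramp c - k on [k, k + 1], and 0 below c = k."""
    if c > k + 1:
        return 1.0
    if c >= k:
        return c - k
    return 0.0

# c(p) = 0 on fixed vertices kills every factor in the recursion;
# a fractional c(p) blends between neighbouring orders.
print([lam(0.0, j) for j in range(3)])  # [0.0, 0.0, 0.0]
print(lam(0.5, 0))                      # 0.5
print(lam(5.0, 2))                      # 1.0
```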

Linear System

Now let's look at the linear system which they form: $$\begin{bmatrix} \bm{A} \\ \begin{array}{cc}\bm{0}_{m\times (n-m)} & \bm{I}_{m\times m}\end{array} \end{bmatrix} \begin{bmatrix}\bm{p}\\ \bm{f} \\ \bm{h}\end{bmatrix} = \begin{bmatrix} \bm{0}_{n} \\ \bm{f} \\ \bm{h} \end{bmatrix}.$$ Here the number of components of $\bm{f}$ plus the number of components of $\bm{h}$ is $m$. Also $\bm{p}\in\mathbb{R}^{n-m}$, $\bm{A}\in\mathbb{R}^{n\times n}$. If you apply an iterative solver, then at each iteration you will keep $\bm{f},\bm{h}$ the same, and the only thing that will be modified is $\bm{p}$. To be sure, the rows of $\bm{A}$ that were modified for the boundary conditions are the ones corresponding to $\bm{f}$ and $\bm{h}$, not the ones corresponding to $\bm{p}$.

Coordinates

Remember that $\bm{p}, \bm{f}, \bm{h}$ were considered coefficients of some functions. You can now decide that those are the vectorial coordinates of the points (e.g. $p_i = \begin{bmatrix} p_{x,i} & p_{y,i} & p_{z,i} \end{bmatrix}$, note that this is a row vector) then you get the system: $$\begin{bmatrix} \bm{A} \\ \begin{array}{cc}\bm{0}_{m\times (n-m)} & \bm{I}_{m\times m}\end{array} \end{bmatrix} \begin{bmatrix}\bm{p}_x & \bm{p}_y & \bm{p}_z \\ \bm{f}_x & \bm{f}_y & \bm{f}_z \\ \bm{h}_x & \bm{h}_y & \bm{h}_z\end{bmatrix} = \begin{bmatrix} \bm{0}_{n} & \bm{0}_{n} & \bm{0}_{n} \\ \bm{f}_x & \bm{f}_y & \bm{f}_z \\ \bm{h}_x & \bm{h}_y & \bm{h}_z \end{bmatrix},$$ where $\bm{p}_x$ is the column vector of the $x$ coordinates, etc. Note that you can solve the systems for the $x$, $y$, and $z$ coordinates separately.

Your Question

If that is true, then am I right to simplify the above matrix as:

$$ \begin{pmatrix} L^k & 0 & 0 & 0 \\ 0 & L^k & 0 & 0\\ 0 & 0 & L^k & 0\\ 0 & \cdots & & I_{F+ H} \end{pmatrix} \begin{pmatrix} p_x\\ p_y\\ p_z\\f\\h \end{pmatrix} = \begin{pmatrix} 0\\f\\h \end{pmatrix} $$ ???

So no, this is not correct: the rows of the operator corresponding to the constrained vertices must be modified for the boundary conditions as described above, so you cannot just use $\bm{L}^k$ blocks with an appended identity.

Note on the Weak Formulation

The weak formulation that I wrote is for Euclidean space. In the case of a manifold the first fundamental form enters there, as well as the conormal. You don't really need to care about this, however, since the cotan discretisation with the Voronoi-area lumped mass matrix discretises this properly.

MATLAB Example with the Supplied Matrix

Extended matrix:

% Extended 11x8 system: rows 1-5 are the Laplacian rows, rows 6-8 stay zero,
% rows 9-11 pin the three fixed vertices.
M = zeros(11,8);
M(1,:) = [-0.5000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.1667505949, 0.1666247026, 0.1666247026, 0.0000000000];
M(2,:) = [0.0000000000, -0.5000000000, 0.1667093676, 0.1667093676, 0.0000000000, 0.0000000000, 0.0000000000, 0.1665812649];
M(3,:) = [0.0000000000, 0.1666247026, -0.5000000000, 0.0000000000, 0.1667505949, 0.0000000000, 0.1666247026, 0.0000000000];
M(4,:) = [0.0000000000, 0.1666247026, 0.0000000000, -0.5000000000, 0.1667505949, 0.1666247026, 0.0000000000, 0.0000000000];
M(5,:) = [0.1666666733, 0.0000000000, 0.1666666733, 0.1666666733, -0.5000000199, 0.0000000000, 0.0000000000, 0.0000000000];
M(9,6) = 1;
M(10,7) = 1;
M(11,8) = 1;

[MU,MS,MV] = svd(M);

Compact matrix:

% Compact 8x8 system: the three zero rows are dropped and the identity rows
% for the fixed vertices move up to rows 6-8.
M = zeros(8,8);
M(1,:) = [-0.5000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.1667505949, 0.1666247026, 0.1666247026, 0.0000000000];
M(2,:) = [0.0000000000, -0.5000000000, 0.1667093676, 0.1667093676, 0.0000000000, 0.0000000000, 0.0000000000, 0.1665812649];
M(3,:) = [0.0000000000, 0.1666247026, -0.5000000000, 0.0000000000, 0.1667505949, 0.0000000000, 0.1666247026, 0.0000000000];
M(4,:) = [0.0000000000, 0.1666247026, 0.0000000000, -0.5000000000, 0.1667505949, 0.1666247026, 0.0000000000, 0.0000000000];
M(5,:) = [0.1666666733, 0.0000000000, 0.1666666733, 0.1666666733, -0.5000000199, 0.0000000000, 0.0000000000, 0.0000000000];
M(6,6) = 1;
M(7,7) = 1;
M(8,8) = 1;

[MU,MS,MV] = svd(M);

Both have no zero singular values, i.e. full column rank (the two systems are equivalent, so this is expected); thus there will always be a unique solution.
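The same check in NumPy, for the compact matrix (asserting that the smallest singular value is bounded away from zero):

```python
import numpy as np

# Compact 8x8 system built from the numbers posted in the question:
# rows 0-4 are the Laplacian rows, rows 5-7 pin the fixed vertices.
M = np.zeros((8, 8))
M[0] = [-0.5, 0.0, 0.0, 0.0, 0.1667505949, 0.1666247026, 0.1666247026, 0.0]
M[1] = [0.0, -0.5, 0.1667093676, 0.1667093676, 0.0, 0.0, 0.0, 0.1665812649]
M[2] = [0.0, 0.1666247026, -0.5, 0.0, 0.1667505949, 0.0, 0.1666247026, 0.0]
M[3] = [0.0, 0.1666247026, 0.0, -0.5, 0.1667505949, 0.1666247026, 0.0, 0.0]
M[4] = [0.1666666733, 0.0, 0.1666666733, 0.1666666733, -0.5000000199, 0.0, 0.0, 0.0]
M[5, 5] = M[6, 6] = M[7, 7] = 1.0

s = np.linalg.svd(M, compute_uv=False)
print(s.min())  # strictly positive: the system has a unique solution
```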