Mesh Denoising

Important: Please read the installation page for details about how to install the toolboxes. $\newcommand{\dotp}[2]{\langle #1, #2 \rangle}$ $\newcommand{\enscond}[2]{\lbrace #1, #2 \rbrace}$ $\newcommand{\pd}[2]{ \frac{ \partial #1}{\partial #2} }$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\umax}[1]{\underset{#1}{\max}\;}$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\uargmin}[1]{\underset{#1}{argmin}\;}$ $\newcommand{\norm}[1]{\|#1\|}$ $\newcommand{\abs}[1]{\left|#1\right|}$ $\newcommand{\choice}[1]{ \left\{ \begin{array}{l} #1 \end{array} \right. }$ $\newcommand{\pa}[1]{\left(#1\right)}$ $\newcommand{\diag}[1]{{diag}\left( #1 \right)}$ $\newcommand{\qandq}{\quad\text{and}\quad}$ $\newcommand{\qwhereq}{\quad\text{where}\quad}$ $\newcommand{\qifq}{ \quad \text{if} \quad }$ $\newcommand{\qarrq}{ \quad \Longrightarrow \quad }$ $\newcommand{\ZZ}{\mathbb{Z}}$ $\newcommand{\CC}{\mathbb{C}}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\EE}{\mathbb{E}}$ $\newcommand{\Zz}{\mathcal{Z}}$ $\newcommand{\Ww}{\mathcal{W}}$ $\newcommand{\Vv}{\mathcal{V}}$ $\newcommand{\Nn}{\mathcal{N}}$ $\newcommand{\NN}{\mathcal{N}}$ $\newcommand{\Hh}{\mathcal{H}}$ $\newcommand{\Bb}{\mathcal{B}}$ $\newcommand{\Ee}{\mathcal{E}}$ $\newcommand{\Cc}{\mathcal{C}}$ $\newcommand{\Gg}{\mathcal{G}}$ $\newcommand{\Ss}{\mathcal{S}}$ $\newcommand{\Pp}{\mathcal{P}}$ $\newcommand{\Ff}{\mathcal{F}}$ $\newcommand{\Xx}{\mathcal{X}}$ $\newcommand{\Mm}{\mathcal{M}}$ $\newcommand{\Ii}{\mathcal{I}}$ $\newcommand{\Dd}{\mathcal{D}}$ $\newcommand{\Ll}{\mathcal{L}}$ $\newcommand{\Tt}{\mathcal{T}}$ $\newcommand{\si}{\sigma}$ $\newcommand{\al}{\alpha}$ $\newcommand{\la}{\lambda}$ $\newcommand{\ga}{\gamma}$ $\newcommand{\Ga}{\Gamma}$ $\newcommand{\La}{\Lambda}$ $\newcommand{\si}{\sigma}$ $\newcommand{\Si}{\Sigma}$ $\newcommand{\be}{\beta}$ $\newcommand{\de}{\delta}$ $\newcommand{\De}{\Delta}$ $\newcommand{\phi}{\varphi}$ $\newcommand{\th}{\theta}$ $\newcommand{\om}{\omega}$ $\newcommand{\Om}{\Omega}$

This tour explores denoising of 3-D meshes using linear filtering, heat diffusion and Sobolev regularization.

In [1]:
from __future__ import division

import numpy as np
import scipy as scp
import pylab as pyl
import matplotlib.pyplot as plt

from nt_toolbox.general import *
from nt_toolbox.signal import *

import warnings
warnings.filterwarnings('ignore')

%matplotlib inline
%load_ext autoreload
%autoreload 2

3-D Triangulated Meshes

The topology of a triangulation is defined via a set of indices $\Vv = \{1,\ldots,n\}$ that indexes the $n$ vertices, a set of edges $\Ee \subset \Vv \times \Vv$, and a set of $m$ faces $\Ff \subset \Vv \times \Vv \times \Vv$.

We load a mesh. The set of faces $\Ff$ is stored in a matrix $F \in \{1,\ldots,n\}^{3 \times m}$. The positions $x_{0,i} \in \RR^3$, for $i \in \Vv$, of the $n$ vertices are stored in a matrix $X_0 = (x_{0,i})_{i=1}^n \in \RR^{3 \times n}$.

In [2]:
from nt_toolbox.read_mesh import *
X0,F = read_mesh("nt_toolbox/data/elephant-50kv.off")

Number $n$ of vertices and number $m$ of faces.

In [3]:
n = np.shape(X0)[1]
m = np.shape(F)[1]

Display the mesh in 3-D.

In [4]:
from nt_toolbox.plot_mesh import *
plt.figure(figsize=(10,10))
plot_mesh(X0, F)

Noisy Mesh

We artificially generate a noisy mesh by random displacement along the normal. We only perform normal displacements because tangential displacements do not impact the geometry of the mesh.

The parameter $\rho>0$ controls the amount of noise.

In [5]:
rho = 0.015

We compute the normals $N = (N_i)_{i=1}^n$ to the mesh. They are obtained by averaging the normals of the faces adjacent to each vertex.

In [6]:
from nt_toolbox.compute_normal import *
N = compute_normal(X0, F)

We create a noisy mesh by displacing the vertices along the normal direction $$ x_i = x_{0,i} + \rho \epsilon_i N_i \in \RR^3 $$ where $\epsilon_i \sim \Nn(0,1)$ is a realization of a Gaussian random variable, and where $N_i \in \RR^3$ is the normal of the mesh at vertex $i$.

In [7]:
from numpy import random
X = X0 + np.tile(rho*random.randn(n), (3,1))*N

Display the noisy mesh.

In [8]:
plt.figure(figsize=(10,10))
plot_mesh(X, F)

Adjacency Matrix

We define linear operators that compute local averages and differences on the mesh.

First we compute the edges of the mesh, by extracting pairs of vertex indices from the $F$ matrix.

In [9]:
E = np.hstack((F[[0,1],:],F[[1,2],:],F[[2,0],:]))

Add the reversed edges. This defines the set of edges $\Ee$ that is stored in a matrix $E \in \{1,\ldots,n\}^{2 \times p}$.

In [10]:
from nt_toolbox.unique_columns import *
E = unique_columns(np.hstack((E,E[[1,0],:])))

We keep only the oriented pairs of indices $(i,j)$ such that $i<j$, to avoid unnecessary computations.

In [11]:
E0 = E[:,E[0,:] < E[1,:]]

This defines a matrix $E_0 \in \{1,\ldots,n\}^{2 \times p_0}$ where $p_0=p/2$.

In [12]:
p0 = np.shape(E0)[1]

Display statistics of the mesh.

In [13]:
print("#vertices = %i, #faces = %i, #edges = %i" %(n,m,p0))
#vertices = 24955, #faces = 49918, #edges = 74877

The weight matrix $W$ is the adjacency matrix defined by $$ W_{i,j} = \choice{ 1 \qifq (i,j) \in \Ee, \\ 0 \quad \text{otherwise.} } $$ Since most of the entries of $W$ are zero, we store it as a sparse matrix.

In [14]:
from scipy import sparse

W = sparse.coo_matrix((np.ones(np.shape(E)[1]),(E[0,:],E[1,:])))

Compute the connectivity weight vector $ d \in \NN^n $ $$ d_i = \sum_{j} W_{i,j} $$ i.e. $d_i$ is the number of edges connected to $i$.

In [15]:
d = np.ravel((W.sum(0)))

Display the statistics of mesh connectivity.

In [16]:
h = np.histogram(d, np.arange(np.min(d),np.max(d)+1))

plt.figure(figsize=(10,7))
plt.bar(h[1][:-1]-.5,h[0], color="darkblue", width=1)
plt.show()

Store in sparse diagonal matrices $D$ and $iD$ respectively $D=\text{diag}_i(d_i)$ and $D^{-1} = \text{diag}_i(1/d_i)$.

In [17]:
D = sparse.coo_matrix((d, (np.arange(0,n),np.arange(0,n))))
iD = sparse.coo_matrix((1/d, (np.arange(0,n),np.arange(0,n))))

The normalized weight matrix is defined as $$ \tilde W_{i,j} = \frac{1}{d_i} W_{i,j}, $$ and hence $\tilde W = D^{-1} W$.

In [18]:
tW = iD.dot(W)

It satisfies $$ \forall i , \quad \sum_j \tilde W_{i,j} = 1, $$ i.e. $\tilde W \text{I} = \text{I}$ where $\text{I} \in \RR^n$ is the constant vector of ones.

The operator $\tilde W \in \RR^{n \times n} $, viewed as an operator $\tilde W : \RR^n \rightarrow \RR^n$, can be thought of as a low pass filter.
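
As a quick sanity check (a small sketch, not one of the original cells), one can verify that $\tilde W$ is row-stochastic, i.e. that each of its rows sums to one:

In [ ]:
# Sketch: tW is row-stochastic, so each of its rows should sum to 1.
row_sums = np.ravel(tW.sum(1))
print("max |row sum - 1| = %.2e" % np.max(np.abs(row_sums - 1)))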

Laplacian and Gradient Operators

The unnormalized Laplacian is, on the contrary, a symmetric high pass operator $$ L = D-W \in \RR^{n \times n}. $$ It satisfies $L \text{I} = 0$.

In [19]:
L = D - W
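
As a quick check (a sketch, not one of the original cells) that constant vectors lie in the kernel of $L$:

In [ ]:
# Sketch: L applied to the constant vector of ones should vanish.
print("max |L 1| = %.2e" % np.max(np.abs(L.dot(np.ones(n)))))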

The gradient operator computes directional derivatives along edges. It can be used to factor the Laplacian operator, although in practice this factorization is rarely formed explicitly, since it is not needed in numerical computations; we build the gradient here for illustration.

To represent the gradient, we index the set of (oriented) edges $ \Ee_0 = (e_k)_{k=1}^{p_0} $ where each edge is $e_k = (i,j) \in \{1,\ldots,n\}^2$ with $i<j$.

The gradient operator is a matrix $G \in \RR^{p_0 \times n}$ defined as, for all $e_k=(i,j)$ and all $\ell \notin \{i,j\}$, $$ G_{k,i}=1, \quad G_{k,j}=-1, \quad G_{k,\ell}=0. $$

It is stored as a sparse matrix, and can be thought of as a derivative operator $G : \RR^n \rightarrow \RR^{p_0} $ that maps signals defined on the vertices to differences located along the directed edges.

In [20]:
G = sparse.coo_matrix((np.hstack((np.ones(p0), -np.ones(p0))),
                       (np.hstack((np.arange(0,p0), np.arange(0,p0))),
                        np.hstack((E0[0,:], E0[1,:])))))

Display the non-zero entries of $G$ and $W$.

In [21]:
plt.figure(figsize=(15,7))

plt.subplot(1, 2, 1)
plt.scatter(sparse.find(W)[0],sparse.find(W)[1], lw=.3)
plt.xlim(0,np.shape(W)[0])
plt.ylim(1,np.shape(W)[1])
plt.title("W")

plt.subplot(1, 2, 2)
plt.scatter(sparse.find(G)[0],sparse.find(G)[1], lw=.3)
plt.xlim(0,np.shape(G)[0])
plt.ylim(1,np.shape(G)[1])
plt.title("G")

plt.show()

The Laplacian can be factored as follows $$ L = G^* G $$ where $G^*$ is the transposed matrix (i.e. the adjoint operator, which can be thought of as some kind of divergence).

Check numerically that the factorization indeed holds.

In [22]:
from scipy.sparse import linalg as splinalg

# np.dot does not handle sparse matrices; use sparse operations instead.
err = splinalg.norm(G.T.dot(G) - L, 'fro')
print("Factorization error (should be 0) = %.2f" %err)

Note that this factorization shows that $L$ is a positive semi-definite operator, i.e. it satisfies

$$ \dotp{L f}{f} = \norm{G f}^2 \geq 0. $$

If the mesh is connected, then only constant signals $f \in \RR^n$ satisfy $Lf=0$.

Note that this sign convention is the opposite of the usual convention of differential calculus, in which the Laplacian is a negative operator.
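
As a small numerical illustration (a sketch with hypothetical variable names, not one of the original cells), one can check the identity $\dotp{Lf}{f} = \norm{Gf}^2$ on a random signal:

In [ ]:
# Sketch: <L f, f> should equal ||G f||^2, and both should be >= 0.
f_test = np.random.randn(n)
lhs = f_test.dot(L.dot(f_test))
rhs = np.sum(G.dot(f_test)**2)
print("<Lf,f> = %.4f, ||Gf||^2 = %.4f" % (lhs, rhs))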

Function Denoising with Filtering

A signal defined on the mesh is a vector $f \in \RR^n$, where $f_i \in \RR$ is the value at vertex $1 \leq i \leq n$.

Load a texture image $I$.

In [23]:
M = load_image("nt_toolbox/data/lena.png", 256)

Compute spherical coordinates $ (\theta_i,\phi_i)$ for each vertex $x_{0,i}$ on the mesh.

In [24]:
v = X0 - np.tile(np.mean(X0, 1)[:,np.newaxis], (1,n))
theta = np.arccos(v[0,:]/np.sqrt(np.sum(v**2,0)))/np.pi
phi = (np.arctan2(v[1,:],v[2,:])/np.pi + 1)/2.

Interpolate the texture on the mesh.

In [25]:
from scipy import interpolate
from matplotlib import colors

x = np.linspace(0, 1, np.shape(M)[0])
f = rescale(interpolate.RectBivariateSpline(x, x, np.transpose(M)).ev(theta,phi))
my_cmap = (np.repeat(f[:,np.newaxis],3,1))

Display the textured mesh.

In [26]:
fig = plt.figure(figsize = (10,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X0[0,:], X0[1,:], X0[2,:], lw=0, c=my_cmap, s=30)
ax.axis("off")
ax.view_init(elev=90, azim=-90)
ax.dist = 6

The operator $\tilde W : \RR^n \rightarrow \RR^n$ can be used to smooth a function $f$, simply by computing $\tilde W f \in \RR^n$.

To further smooth the mesh, it is possible to iterate this process, by defining $f^{(0)} = f$ and

$$ f^{(\ell+1)} = \tilde W f^{(\ell)}.$$

Note that one has $ f^{(\ell)} = \tilde W^{\ell} f, $ but it is preferable to use the iterative algorithm to do the computations.
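
As a minimal sketch of this iteration (the names niter and f1 are illustrative; displaying the evolution is left to the exercise below), applied to the texture signal $f$ computed above:

In [ ]:
# Sketch: iterated smoothing f^(l+1) = tW f^(l) of a signal on the vertices.
niter = 5
f1 = np.copy(f)
for i in range(niter):
    f1 = tW.dot(f1)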

Exercise 1

Display the evolution of the image on the mesh as the number of iterations increases.

In [27]:
run -i nt_solutions/meshproc_3_denoising/exo1
In [28]:
## Insert your code here.

Mesh Denoising with Filtering

The quality of a noisy mesh is improved by applying local averaging, which removes noise but also tends to smooth out features.

The operator $\tilde W : \RR^n \rightarrow \RR^n$ can be used to smooth a function, but it can also be applied to smooth the positions $X \in \RR^{3 \times n} $. Since the coordinates are stored as rows of the matrix, one applies $\tilde W^*$ (the transposed matrix) on the right side. $$ X^{(0)} = X \qandq X^{(\ell+1)} = X^{(\ell)} \tilde W^{*} $$

In [29]:
niter = 5
X1 = np.copy(X)
for i in range(niter):
    # apply tW to each coordinate row: X1 <- (tW X1^T)^T = X1 tW^*
    X1 = tW.dot(np.transpose(X1)).transpose()

We can compute the errors in dB with respect to the clean mesh, using

$$ \text{SNR}(X,Y) = -20 \log_{10} \pa{ \norm{X-Y}/\norm{Y} }. $$

In [30]:
pnoisy = snr(X0, X)
pfilt  = snr(X0, X1)
print("Noisy = %.f dB, Denoised = %.f dB" %(pnoisy,pfilt))
Noisy = 27 dB, Denoised = 40 dB
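
The snr function used above comes from nt_toolbox. For reference, here is a minimal sketch matching the formula above (snr_sketch is illustrative; the toolbox version may differ in its argument convention):

In [ ]:
# Minimal SNR sketch in dB, following the formula above;
# nt_toolbox's snr may order its arguments differently.
def snr_sketch(x, y):
    return -20*np.log10(np.linalg.norm(x - y)/np.linalg.norm(y))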

Display the results.

In [31]:
plt.figure(figsize=(10,10))
plot_mesh(X1, F)