While I was reading the paper "Geometric deep learning on graphs and manifolds using mixture model CNNs", I didn't understand the figure of "patch operator weighting functions". Can someone explain clearly to me how these red curves relate to graphs and manifolds? Thank you.
Weighting functions in the local polar system of coordinates.
147 Views · Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
There is 1 answer below.
Any differentiable manifold is locally homeomorphic to Euclidean space. In other words, if we select a point on the manifold, then over very small distances the manifold can be approximated by Euclidean space. It is then possible to parameterise the manifold with local polar coordinates $(\rho,\theta)$ which behave like polar coordinates in an infinitesimal region around the selected point.
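The local polar parameterisation can be made concrete with a small sketch. Assuming we have already projected a point's neighbours into its tangent plane (the local Euclidean approximation), the coordinates $(\rho,\theta)$ are just the usual polar coordinates of those offsets. The data below is purely illustrative:

```python
import numpy as np

# Hypothetical example: neighbours of a selected point, expressed as
# offsets in the local Euclidean (tangent-plane) approximation.
center = np.array([0.0, 0.0])
neighbours = np.array([[0.1, 0.0],
                       [0.0, 0.2],
                       [-0.1, -0.1]])

offsets = neighbours - center                     # local Euclidean coordinates
rho = np.linalg.norm(offsets, axis=1)             # radial coordinate rho
theta = np.arctan2(offsets[:, 1], offsets[:, 0])  # angular coordinate theta

print(rho)    # distances from the selected point
print(theta)  # angles in (-pi, pi]
```

On a graph, the same idea applies with $\rho$ and $\theta$ replaced by whatever pseudo-coordinates are attached to each edge around a vertex.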
The models GCNN, ACNN and MoNet each use a differentiable manifold parameterised by local polar coordinates. Each has a weighting function, called the patch operator weighting function $w_i(\rho,\theta)$. Table $1$ in the paper gives $w_i(\rho,\theta)$ for ACNN and GCNN.
The red curves are $0.5$ level sets. That is to say, $w_i(\rho,\theta)=0.5$ along the red curves.
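To see why such a level set traces a curve, consider an illustrative radial Gaussian weight (not taken from the paper): $w(\rho)=\exp\!\left(-\frac{(\rho-\rho_0)^2}{2\sigma^2}\right)$. Setting $w(\rho)=0.5$ gives $\rho=\rho_0\pm\sigma\sqrt{2\ln 2}$, which we can verify numerically:

```python
import numpy as np

# Illustrative radial Gaussian weighting function; rho0 and sigma are
# arbitrary values chosen for this sketch.
rho0, sigma = 1.0, 0.25
w = lambda rho: np.exp(-(rho - rho0) ** 2 / (2 * sigma ** 2))

# Analytic radius at which w crosses the 0.5 level
level = rho0 + sigma * np.sqrt(2 * np.log(2))
print(w(level))  # 0.5, confirming the level-set radius
```

In the figure, each red curve is exactly this kind of contour, drawn in the $(\rho,\theta)$ plane for the corresponding weighting function.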
Edit: The OP asked about the definition of MoNet.
In section 4, the paper uses a weighting function of the form $w_j(\mathbf{u})=\exp\left(-\frac{1}{2}(\mathbf{u}-\boldsymbol{\mu}_j)^T\boldsymbol{\Sigma}_j^{-1}(\mathbf{u}-\boldsymbol{\mu}_j)\right)$ with $\boldsymbol{\Sigma}_j$ and $\boldsymbol{\mu}_j$ learnable (formula 11 in the paper). $\boldsymbol{\Sigma}_j$ is restricted to being a diagonal matrix.
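Because $\boldsymbol{\Sigma}_j$ is diagonal, the quadratic form reduces to an elementwise weighted sum, so the kernel is cheap to evaluate. Here is a minimal sketch of formula 11 under that diagonal restriction; the function name and the numeric values are my own, and in the actual network $\boldsymbol{\mu}_j$ and the diagonal of $\boldsymbol{\Sigma}_j$ would be learned parameters:

```python
import numpy as np

def monet_weight(u, mu_j, sigma_inv_diag):
    """Gaussian kernel w_j(u) = exp(-1/2 (u - mu_j)^T Sigma_j^{-1} (u - mu_j)),
    with Sigma_j diagonal, passed here as the diagonal of its inverse."""
    d = u - mu_j
    return np.exp(-0.5 * np.sum(d * sigma_inv_diag * d))

# Illustrative values: pseudo-coordinates (rho, theta) and a kernel centre
u = np.array([0.3, 1.2])
mu_j = np.array([0.3, 1.2])
sigma_inv_diag = np.array([4.0, 1.0])

print(monet_weight(u, mu_j, sigma_inv_diag))  # 1.0: weight peaks at the centre
```

The weight equals $1$ when $\mathbf{u}=\boldsymbol{\mu}_j$ and decays away from the centre, which is why each learned kernel picks out a blob-shaped region of the $(\rho,\theta)$ patch.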
The paper then describes the neural network used to learn $\boldsymbol{\Sigma}_j$ and $\boldsymbol{\mu}_j$ and the procedure used to train it. The Adam optimisation method is described in the following paper: https://arxiv.org/abs/1412.6980