What is "pseudo-coordinates"?

I'm new to geometry, and while reading a research paper about geometric deep learning I came across the term "pseudo-coordinates". I searched for its meaning, but found few references. Can someone please explain what it is and how it relates to manifolds? Thank you in advance.
Pseudo-coordinates in geometric learning architectures serve two purposes:
They provide local pairwise features among neighbours, i.e. they associate a latent vector with each edge of the graph, rather than just with the nodes. They are thus like an adjacency matrix, but describe something richer than mere connectivity.
They act like a local coordinate system describing a local "patch" on the manifold surface or graph. This tells the network something about directionality on the graph.
Essentially, they give the network easy access to the local geometry or structure of the patches, rather than forcing it to figure it out from e.g. just binary connectivity.
Mathematically, consider a graph-like construct $\mathcal{M}=(\mathcal{V},\mathcal{E},\mathcal{U})$, where $\mathcal{V}$ is the set of nodes (with features $f(v)\in\mathbb{R}^n\;\forall\;v\in\mathcal{V}$), $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ is the set of directed edges, and $\mathcal{U}$ is the pseudo-coordinate function. Let $\mathcal{N}(v)=\{w\in\mathcal{V}\mid (w,v)\in\mathcal{E}\}$ be the set of neighbours of a node $v$. We can think of $\mathcal{U}$ in two equivalent ways: (1) as a function $u$ that maps a vertex $x$ and any of its neighbours $y\in\mathcal{N}(x)$ to a vector $u(x,y)\in\mathbb{R}^d$, and (2) as a set associating a vector to every directed edge, $\mathcal{U}=\{ u(e)\in\mathbb{R}^d \mid e\in\mathcal{E}\}$.
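To make the set-view concrete, here is a minimal Python/NumPy sketch of a toy graph with a pseudo-coordinate vector attached to each directed edge. The degree-based choice of $u(x,y)$ echoes the graph example in Monti et al., but it is only an illustration; any edge feature would do, and the graph itself is made up.

```python
import numpy as np

# Toy directed graph: 4 nodes, edges stored as (source, target) pairs.
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2), (0, 2), (2, 0)]
num_nodes = 4

# Out-degree of each node (the edge set above is symmetric).
deg = np.zeros(num_nodes)
for s, _ in edges:
    deg[s] += 1

# Degree-based pseudo-coordinates, in the spirit of the graph example of
# Monti et al.: u(x, y) = (1/sqrt(deg(x)), 1/sqrt(deg(y))).
# U is the "set" view of the pseudo-coordinate function: one vector per edge.
U = {(x, y): np.array([1.0 / np.sqrt(deg[x]), 1.0 / np.sqrt(deg[y])])
     for (x, y) in edges}

print(U[(0, 1)])  # pseudo-coordinate attached to the directed edge 0 -> 1
```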
If you prefer to think of a smooth Riemannian manifold $M=(\mathcal{X},g)$, one example is to consider a local chart $C(p)$ around some $p\in M$ with local coordinates $\alpha_p,\beta_p$ (in the 2D case). One simple choice of pseudo-coordinates is then just $u(p,q)=(\alpha_p(q),\beta_p(q))$. This is the basis of the Geodesic CNN, referenced in the paper you mention. But they can be more general than this, e.g. a transform thereof (see SplineCNN, for instance, or the graph example in the paper you mention).
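As a rough illustration of such chart-based pseudo-coordinates on a sampled surface, here is a sketch that projects the neighbours of a point onto the tangent plane at that point and uses the resulting 2D coordinates as $u(p,q)$. This is a hypothetical simplification, not the geodesic polar coordinates actually used by Geodesic CNN; the function name and the toy data are mine.

```python
import numpy as np

def tangent_plane_pseudo_coords(p, neighbours, normal):
    """Project the neighbours of p onto the tangent plane at p and return
    their 2D coordinates there: a simplified stand-in for a local chart
    (alpha_p, beta_p) around p."""
    n = normal / np.linalg.norm(normal)
    # Build an orthonormal basis (e1, e2) of the tangent plane at p.
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(n, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    d = neighbours - p                         # displacement vectors p -> q
    return np.stack([d @ e1, d @ e2], axis=1)  # u(p, q) = (alpha_p(q), beta_p(q))

# Example: p at the north pole of the unit sphere, with nearby surface points.
p = np.array([0.0, 0.0, 1.0])
qs = np.array([[0.1, 0.0, 0.995], [0.0, -0.1, 0.995], [0.07, 0.07, 0.995]])
print(tangent_plane_pseudo_coords(p, qs, normal=p))
```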
How they are used depends on the paper. For example, in most graph (convolutional) neural networks, one wants to compute a weighted average of the features of a point and those of its connected neighbours. But how should the weights be computed? If all you know is that two nodes are connected, there is very little for the weights to depend on. With pseudo-coordinates, however, the weights in the average can depend on them, for instance:
$$ F(v)_j = \sum_{\xi\in\mathcal{N}(v)} W(u(\xi,v)\,|\,\Theta_j)\, f_j(\xi) $$
where we are computing the $j$th output (indexing over the channels of the weighting kernel and those of the input feature map) of node $v\in\mathcal{V}$, dependent on learned parameters $\Theta_j$ of a weight function $W:\mathbb{R}^d\rightarrow \mathbb{R}$. This pseudo-coordinate-dependent weighted sum is called a patch operator, since it extracts a representation $F(v)$ of a patch about the point $v$. The analogue in classical convolutional neural networks is simply the Euclidean image patch around a given point, which is convolved with a kernel to produce the new feature map at that point. Thus, given the (pseudo-)patch $F(v)$, the natural thing to do is "convolve" it with a learned graph signal $g$ (analogous to the learned kernel weights of classical CNNs' filters):
$$ (f\ast g_\ell)(v) = \sum_j g_{\ell j}\, F(v)_j $$
so that the output features are $ f_\text{out}(v)=((f\ast g_1)(v),\ldots,(f\ast g_K)(v)) $. Again, though, the details depend on the paper.
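As a concrete, non-authoritative illustration of the patch operator above, here is a small NumPy sketch in which $W$ is an isotropic Gaussian in pseudo-coordinate space with parameters $\Theta_j=(\mu_j,\sigma_j)$, roughly in the spirit of the mixture-model kernels of Monti et al. Node features are taken to be scalars to keep the channel indexing out of the way, and all names and numbers are illustrative.

```python
import numpy as np

def gaussian_weight(u, mu, sigma):
    """W(u | Theta_j) with Theta_j = (mu, sigma): an isotropic Gaussian in
    pseudo-coordinate space (one possible choice of weight function)."""
    return np.exp(-0.5 * np.sum((u - mu) ** 2) / sigma ** 2)

def patch_operator(v, f, neighbours, U, mus, sigmas):
    """Compute F(v)_j = sum_{xi in N(v)} W(u(xi, v) | Theta_j) f(xi)
    for j = 1..J, with scalar node features f for simplicity."""
    J = len(mus)
    F = np.zeros(J)
    for j in range(J):
        for xi in neighbours[v]:
            F[j] += gaussian_weight(U[(xi, v)], mus[j], sigmas[j]) * f[xi]
    return F

# Toy data: 3 nodes, node 0 has neighbours 1 and 2.
f = np.array([0.0, 1.0, 2.0])                        # scalar node features
neighbours = {0: [1, 2]}
U = {(1, 0): np.array([0.5, 0.5]), (2, 0): np.array([1.0, 0.0])}
mus = [np.array([0.5, 0.5]), np.array([1.0, 0.0])]   # learned in practice
sigmas = [0.3, 0.3]

F = patch_operator(0, f, neighbours, U, mus, sigmas)  # patch representation F(v)
g = np.array([0.7, -0.2])                             # one learned "graph signal" g_l
print(F, float(g @ F))                                # (f * g_l)(v) = sum_j g_{l j} F(v)_j
```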
Basically, relating this back to classical CNNs: in the case of Euclidean images, we extract little windows as patches $P$, treating each element of the window equally. The learned kernel $\kappa$ convolved with it associates each weight with an input value in the obvious way: $P_{ij}$ is multiplied by $\kappa_{ij}$ before the summation part of the convolution is performed. On manifolds or graphs, this association is no longer so obvious. For instance, imagine rotating an image: the CNN weights would no longer line up with the input, because the positions in the template filter would have gone astray. On manifolds and graphs we therefore supply pseudo-coordinates, which help the network learn its way around this directional ambiguity, though they do not solve it in general.
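To see that ordinary image filtering is the special case in which the pseudo-coordinates are just relative pixel offsets, here is a small sketch (assuming NumPy, and using cross-correlation rather than a flipped-kernel convolution, as is the usual CNN convention; the image and kernel values are arbitrary):

```python
import numpy as np

# On a regular image grid, the natural pseudo-coordinates are the relative
# offsets u(p, q) = (di, dj) between a pixel p and its neighbour q.  Indexing
# the kernel by these offsets reproduces the usual association of kernel
# weights to window entries, i.e. standard cross-correlation.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = {(di, dj): np.random.randn() for di in (-1, 0, 1) for dj in (-1, 0, 1)}

def correlate_at(img, kern, i, j):
    """Weighted sum over the 3x3 patch around (i, j); each neighbour's weight
    is looked up via its pseudo-coordinate (di, dj)."""
    return sum(kern[(di, dj)] * img[i + di, j + dj]
               for (di, dj) in kern)

print(correlate_at(image, kernel, 2, 2))
```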
References
Monti et al., Geometric deep learning on graphs and manifolds using mixture model CNNs
Fey et al., SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels