Interpolation Capability of Deep Neural Networks of Bounded Depth

The Universal Approximation Theorem shows that deep neural networks can approximate any function in $C(\mathbb{R}^d,\mathbb{R}^n)$ uniformly on compact sets. I am curious: can the collection of neural networks of bounded depth interpolate any finite set of points?
Indeed, if neural networks of a fixed depth are universal, then they can also interpolate any finite set of samples.
Theorem: Let $\varrho: \mathbb{R} \to \mathbb{R}$ be an activation function such that DNNs of depth $L \in \mathbb N$ with activation $\varrho$ are universal. Then for every set of points $(x_i, y_i)_{i=1}^N \subset \mathbb R^d \times \mathbb R$ with pairwise distinct $x_i$, there exists a neural network $\Phi$ of depth $L$ such that $\Phi(x_i) = y_i$ for all $i = 1, \dots, N$.
Proof. Let $(x_i, y_i)_{i=1}^N \subset K \times \mathbb R$, where $K \subset \mathbb R^d$ is compact. Since the $x_i$ are pairwise distinct, Urysohn's lemma yields $N$ continuous functions $(f_i)_{i=1}^N \subset C(\mathbb R^d, \mathbb R)$ such that $f_i(x_j) = \delta_{ij}$ for all $i,j \in \{ 1, \dots, N\}$.
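For concreteness, one explicit choice (not part of the original argument, just one valid instance): with $r := \tfrac{1}{2} \min_{i \neq j} \|x_i - x_j\| > 0$, the hat functions
$$ f_i(x) := \max\left(0,\ 1 - \frac{\|x - x_i\|}{r}\right) $$
are continuous and satisfy $f_i(x_j) = \delta_{ij}$, since $\|x_j - x_i\| \geq 2r$ whenever $j \neq i$.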
Since the set of invertible matrices is open, there exists an $\epsilon >0$ such that every matrix $(a_{i,j})_{i,j =1}^N$ with $|a_{i,j} - \delta_{i,j}| < \epsilon$ for all $i,j \in \{1, \dots, N\}$ is invertible. (Concretely, $\epsilon := \tfrac{1}{2N}$ suffices: every such matrix is then strictly diagonally dominant, hence invertible.)
Since, by assumption, DNNs with activation function $\varrho$ are universal, there exist neural networks $(\Phi_{i})_{i=1}^N$ of depth $L$ such that
$$ |\Phi_{i}(x_j) - f_i(x_j)| = |\Phi_{i}(x_j) - \delta_{ij}| < \epsilon \quad \text{for all } i,j \in \{1, \dots, N\}. $$

Let $A = (a_{i,j})_{i,j =1}^N \in \mathbb R^{N \times N}$ be defined by $a_{i,j} := \Phi_{i}(x_j)$. By the choice of $\epsilon$, the matrix $A$ is invertible. We define a new network
$$ \Phi := [y_{1}, \dots, y_N] \cdot A^{-1} \left( \begin{array}{c} \Phi_1 \\ \Phi_2 \\ \vdots \\ \Phi_N\end{array} \right). $$
The network $\Phi$ has depth $L$: the $\Phi_i$ can be run in parallel as a single depth-$L$ network, and multiplying by $[y_{1}, \dots, y_N] \cdot A^{-1}$ only modifies its final affine layer, so $\varrho$ is not applied again.

Since the $j$-th column of $A$ is exactly $(\Phi_1(x_j), \dots, \Phi_N(x_j))^\top = A e_j$, we have
$$ A^{-1} \left( \begin{array}{c} \Phi_1(x_j) \\ \Phi_2(x_j) \\ \vdots \\ \Phi_N(x_j)\end{array} \right) = A^{-1} A e_j = e_j, $$
where $e_j$ is the $j$-th standard unit vector. Therefore $\Phi(x_j) = [y_{1}, \dots, y_N] \cdot e_j = y_j$, as desired. $\square$
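As a sanity check, here is a minimal numerical sketch of this construction in NumPy. It is not from the original answer: the Gaussian bump layer `phi` merely stands in for the depth-$L$ universal networks $\Phi_i$, and all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# N pairwise-distinct sample points in R^d with scalar targets.
N, d = 5, 2
x = rng.normal(size=(N, d))   # x_1, ..., x_N
y = rng.normal(size=N)        # y_1, ..., y_N

def phi(z):
    """Stand-in for the approximating networks: the i-th component plays
    the role of Phi_i(z), with phi(x_j) close to the unit vector e_j."""
    return np.exp(-10.0 * np.sum((z - x) ** 2, axis=-1))

# A_{i,j} = Phi_i(x_j); close to the identity, hence invertible.
A = np.stack([phi(x[j]) for j in range(N)], axis=1)

# Final affine readout w = [y_1, ..., y_N] A^{-1}, i.e. solve A^T w = y.
w = np.linalg.solve(A.T, y)

def Phi(z):
    return w @ phi(z)

# Interpolation error is at floating-point precision.
print(max(abs(Phi(x[j]) - y[j]) for j in range(N)))
```

The printed error is on the order of machine precision, illustrating how the final correction by $A^{-1}$ turns approximate one-hot responses into exact interpolation.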
Remark: The result is stated for scalar outputs only, but the same argument yields multivariate outputs by running networks in parallel: for targets $y_i = (y_i^{(1)}, \dots, y_i^{(n)}) \in \mathbb R^n$, interpolate each output coordinate separately and set $\Phi := (\Phi^{(1)}, \dots, \Phi^{(n)})$, which is again a network of depth $L$.