I just started learning multi-resolution analysis. I know that given a scaling function, through dilation and translation, a sequence of spaces can be generated: $\cdots V_{-1} \subset V_0 \subset V_1 \cdots$ and their union is dense in $L^2(\mathbb{R^n})$. Also $V_i$ can be decomposed as $V_{i-1} \oplus W_{i-1}$, where $W_{i-1}$ is generated by wavelet functions. My question is, since we can approximate functions in $L^2(\mathbb{R^n})$ by scaling functions, why do we still need wavelet functions? And why do we need the decomposition $V_i = V_{i-1} \oplus W_{i-1}$?
2026-03-25 02:57:45
How to understand multi-resolution analysis and wavelet transform?
370 views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
There is 1 solution below.
Late to the party :)
This is an excellent question. Once you understand the answer, you'll see why wavelets are such a powerful tool.
It is true that scaling functions provide a way to approximate any function in $L^2(\mathbb R^n)$. The same can be said of wavelets themselves. So if both scaling functions and wavelets can be used to approximate a function, what makes wavelets more interesting?
The key is that wavelets have nice compression properties, whereas scaling functions don't. Compression means that if you decompose a smooth function over wavelets, most of the coefficients will be small. This is referred to as a sparse representation. It has important applications in, well, signal compression (obviously), but also in denoising, machine learning, etc.
The gory details:
Consider a smooth function $f$ (say $f\in \mathcal C^{\infty}$). First, decompose it over scaling functions: $$f(t)=\sum_{n\in\mathbb Z}a_n \phi\left(\frac {t-nT}T\right)$$ In that decomposition, there is no reason to expect most of the $a_n$ to be small. Indeed, think of each translated/rescaled copy of the scaling function as a bump (e.g. a Gaussian). Coefficient $a_n$ is then a rough approximation of $f(nT)$, an approximation that becomes exact as $T\rightarrow 0$. So the smoothness of the original signal has no impact on the size of the coefficients; instead, the coefficients scale with the local magnitude of $f$. $$\boxed{\text{Decomposing }f\text{ using scaling functions does not lead to a sparse representation.}}$$
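To make this concrete, here is a small numerical sketch (my own illustration, not part of the original argument), using the Haar scaling function $\phi = \mathbf 1_{[0,1)}$, for which $a_n$ is simply the average of $f$ over $[nT, (n+1)T)$:

```python
import numpy as np

# Haar scaling function: phi = indicator of [0, 1).  The coefficient of
# phi((t - nT)/T) in the expansion of f is then the average of f over
# [nT, (n+1)T), which is roughly the value of f near that interval.
T = 1.0 / 64
f = lambda t: np.sin(2 * np.pi * t)   # a very smooth function on [0, 1)

a = np.array([
    f(np.linspace(n * T, (n + 1) * T, 200, endpoint=False)).mean()
    for n in range(64)
])

# The coefficients track the local magnitude of f, so almost none are small:
frac_large = np.mean(np.abs(a) > 0.1)
print(f"fraction of coefficients with |a_n| > 0.1: {frac_large:.2f}")
```

Even though $\sin(2\pi t)$ is infinitely smooth, the vast majority of its scaling coefficients stay comparable to $\max|f| = 1$ (only the few intervals near the zeros of the sine fall under the threshold): smoothness buys no sparsity here.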
Now consider using wavelets instead. Wavelets have vanishing (i.e., zero) moments: $$\int_{\mathbb R}\psi(x)x^k\,dx=0 \text{ for }k=0,\dots,K\tag{1}$$ where $K\geq 0$. The exact value of $K$ depends on the wavelet family you're considering.
That vanishing-moment property has a very important consequence. It can be shown (outside the scope of this answer; see textbooks, or Stéphane Mallat's and Yves Meyer's work) that the wavelet coefficients of a smooth function decay rapidly across scales (this can be quantified). In fact, it's easy to see that $(1)$ forces the wavelet coefficients of any polynomial of degree $\leq K$ to equal $0$. To pass from polynomials to smooth functions, use Taylor expansions. So the smoother the function, the smaller most of its wavelet coefficients.
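For the curious, here is a sketch of the standard decay estimate behind that claim (my own summary of the textbook argument, as found e.g. in Mallat's book). Take $f\in\mathcal C^{K+1}$ and an orthonormal wavelet $\psi_{j,n}(x)=2^{j/2}\psi(2^jx-n)$, supported in an interval of length $\sim 2^{-j}$ around a point $x_0$. Taylor-expand $f$ at $x_0$: $$f(x)=\underbrace{\sum_{k=0}^{K}\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k}_{P_K(x)}+R(x),\qquad |R(x)|\leq C\,|x-x_0|^{K+1}$$ By $(1)$, $\langle P_K,\psi_{j,n}\rangle=0$, so only the remainder contributes: $$|\langle f,\psi_{j,n}\rangle|=|\langle R,\psi_{j,n}\rangle|\leq C\,2^{-j(K+1)}\int_{\mathbb R}|\psi_{j,n}(x)|\,dx\leq C'\,2^{-j(K+1)}\,2^{-j/2}$$ So wherever $f$ is smooth, its fine-scale wavelet coefficients decay like $2^{-j(K+3/2)}$: the more vanishing moments, the faster the decay.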
$$\boxed{\text{Decomposing }f\text{ using wavelet functions leads to a sparse representation.}}$$
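A quick numerical check of the vanishing-moment effect (my own sketch, using the simplest case: the Haar wavelet, which has a single vanishing moment, i.e. $K=0$, so it annihilates constants exactly):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform.

    Returns (approximation, detail) coefficients; the detail filter has
    one vanishing moment, so it kills constant (degree-0) inputs exactly.
    """
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # local averages    -> V space
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # local differences -> W space
    return a, d

# Degree-0 polynomial (a constant): every detail coefficient is exactly 0.
_, d_const = haar_step(np.full(1024, 3.7))
print(np.max(np.abs(d_const)))            # → 0.0

# Smooth signal: details are O(T), i.e. already tiny after one level.
t = np.arange(1024) / 1024
_, d_smooth = haar_step(np.sin(2 * np.pi * t))
print(np.max(np.abs(d_smooth)))           # small (on the order of 4e-3)
```

Higher-order families (e.g. Daubechies db$N$, which has $N$ vanishing moments) annihilate polynomials of correspondingly higher degree, giving even faster decay on smooth data.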
The conclusion is that smooth functions are not only well approximated by wavelets; their wavelet representations are also sparse.
Now, you might say: Wait a minute! I already had that with Fourier series! I can take a smooth function in $L^2([0, 1])$ and represent it by its Fourier series. The Fourier coefficients decay rapidly if the function is smooth and periodic (repeated integration by parts gives the quantitative decay; the Riemann–Lebesgue lemma alone only gives decay to zero), again leading to a sparse representation. So why do we need wavelets?
Well, wavelets hold a major advantage over the Fourier basis: they are localized in both time and frequency, whereas the Fourier basis is only localized in frequency. A consequence is that the Fourier representation is very sensitive to local perturbations, while the wavelet one isn't. Concretely, if you take a smooth signal and add just one point of discontinuity, then all the Fourier coefficients will be affected and you'll lose the sparse representation. With wavelets, only a small number of coefficients will be affected. This makes them a great fit for tasks such as image compression (where you have smooth areas separated by a few edges).
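The contrast can be seen numerically (again my own sketch, using the Haar transform): take the most extreme "smooth except at one point" signal, a piecewise-constant one with a single jump. The jump leaks into almost every Fourier coefficient, but at each wavelet scale only the coefficient straddling the jump can be nonzero:

```python
import numpy as np

n = 1024
x = np.zeros(n)
x[n // 3:] = 1.0                      # constant signal with one jump

# Fourier side: the jump leaks into (almost) every frequency, decaying ~ 1/k.
F = np.fft.rfft(x) / n
big_fourier = int(np.sum(np.abs(F) > 1e-3))

# Wavelet side: full Haar decomposition; at each scale, only the detail
# coefficient whose support straddles the jump can be nonzero.
details = []
a = x.copy()
while a.size > 1:
    a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
    details.append(d)
big_wavelet = int(np.sum(np.abs(np.concatenate(details)) > 1e-3))

print(big_fourier, big_wavelet)       # far fewer large wavelet coefficients
```

On this example, hundreds of Fourier coefficients exceed the threshold, versus roughly ten Haar coefficients (about one per scale): the jump destroys Fourier sparsity but barely dents wavelet sparsity.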
Finally, I'll let you ponder over the following (ignore if that doesn't make too much sense): Using Fourier to represent a signal is like using scaling functions to represent the Fourier transform of that signal (that is, the signal in the frequency domain). Neither is particularly efficient. Wavelets strike just the right compromise in terms of compressing in both time and frequency.