I was looking for motivation behind the Hahn–Banach theorem in functional analysis. As a starting point, I considered the case where the normed linear space is $X=\mathbb R^2$ and $X_0=\mathbb R\times\{0\}\simeq \mathbb R$ is a subspace. Suppose I take the functional $f:\mathbb{R}\to\mathbb{R}$ defined by $f(x)=2x$; clearly $\|f\|=2$. I am trying to see whether I can extend it to a functional $g:\mathbb{R}^2\to\mathbb{R}$ such that $g|_{X_0}=f$ and $\|g\|=\|f\|$.

Now $\|g\|=\sup\limits_{\|(x,y)\|=1} |g(x,y)|$. If I let $g(x,y)=2x+ay$, then finding the norm of $g$ amounts to maximizing $|g(x,y)|=|2x+ay|$ subject to the constraint $x^2+y^2=1$. So I have to compute $M(a)=\max\limits_{x^2+y^2=1}|2x+ay|$ and find $a$ such that $M(a)=2$; this would give a norm-preserving extension of $f$. But I do not know how to solve this maximization problem, and the calculations seem tedious.

Is there a systematic way to solve such problems? More generally, is there a simple way to calculate the norm of a linear functional $f(x_1,x_2,\dots,x_n)=a_1x_1+\dots+a_nx_n$? If there is such a method, please illustrate it with an example.
2026-04-28 12:18:40
Maximization problem in order to find the norm of a linear functional.
35 Views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
1 Answer
Here's a quick trick: from general Hilbert space theory, if $H$ is a Hilbert space and $\xi\in H$ is a vector, then $\phi:H\to\mathbb{R}$ given by $\phi(x)=\langle x,\xi\rangle$ is a bounded linear functional with $\|\phi\|=\|\xi\|$ (you can easily show this yourself).
The functional $f:\mathbb{R}^n\to\mathbb{R}$ given by $f(x_1,\dots,x_n)=a_1x_1+\dots+a_nx_n$ is nothing but the functional $f(\vec{x})=\langle\vec{x},\vec{a}\rangle_{\mathbb{R}^n}$, so $$\|f\|=\|\vec{a}\|_{\mathbb{R}^n}=\bigg(\sum_{j=1}^n|a_j|^2\bigg)^{1/2}.$$
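As a quick numerical sanity check, here is a small Python sketch (the helper name `functional_norm_bruteforce` is mine, not from the original post) that approximates $\|g\|$ for a functional $g(x,y)=a_1x+a_2y$ on $\mathbb{R}^2$ by sampling the unit circle, so you can compare against the closed form $\|\vec a\|$:

```python
import math

def functional_norm_bruteforce(a, samples=100000):
    # For a = (a1, a2), approximate ||g|| = max_{x^2+y^2=1} |a1*x + a2*y|
    # by evaluating g at points (cos t, sin t) on the unit circle.
    a1, a2 = a
    best = 0.0
    for k in range(samples):
        t = 2 * math.pi * k / samples
        best = max(best, abs(a1 * math.cos(t) + a2 * math.sin(t)))
    return best

# The question's example: f(x) = 2x extends to g(x, y) = 2x + a*y with
# ||g|| = sqrt(4 + a^2), so the norm is preserved exactly when a = 0.
print(functional_norm_bruteforce((2.0, 0.0)))  # ≈ 2.0 = ||(2, 0)||
print(functional_norm_bruteforce((2.0, 1.0)))  # ≈ sqrt(5) = ||(2, 1)||
```

In particular, this confirms that the only norm-preserving extension of $f(x)=2x$ in the question is $g(x,y)=2x$, i.e. $a=0$.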
Alternative approach
We can also verify this using Lagrange's multiplier method, but it takes a few calculations. Let's do that: assume that $(a_1,\dots,a_n)\ne0$.
We need to find the critical points of $f(\vec{x})$ subject to the constraint $\vec{x}\in\mathbb{S}^{n-1}$, which can be restated as $g(x_1,\dots,x_n)=0$, where $g(x_1,\dots,x_n)=\sum_{j=1}^nx_j^2-1$. We solve the system $$(\nabla f-\lambda\nabla g)(x_1,\dots,x_n)=0,\qquad g(x_1,\dots,x_n)=0$$ for $\lambda\in\mathbb{R}$. The first equation reads $a_j=2\lambda x_j$ for each $j$; since $\vec a\ne0$, the case $\lambda=0$ is impossible, so $x_j=\frac{a_j}{2\lambda}$. Substituting into the constraint gives $\sum_{j=1}^n\frac{a_j^2}{4\lambda^2}=1$, i.e. $\lambda=\pm\frac{1}{2}\big(\sum_{j=1}^na_j^2\big)^{1/2}$. The maximum corresponds to $\lambda=\frac{1}{2}\big(\sum_{j=1}^na_j^2\big)^{1/2}$ (the negative sign gives the minimum), and it equals $$f\Big(\frac{a_1}{2\lambda},\dots,\frac{a_n}{2\lambda}\Big)=\frac{1}{2\lambda}\sum_{j=1}^na_j^2=\Big(\sum_{j=1}^na_j^2\Big)^{1/2}.$$
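The Lagrange computation above can be checked numerically. The sketch below (the helper `lagrange_maximizer` is a name I've introduced for illustration) builds the critical point $\vec x^*=\vec a/\|\vec a\|$ predicted by the multiplier equations and verifies that it lies on the sphere and that random unit vectors never beat it:

```python
import math
import random

def lagrange_maximizer(a):
    # From a_j = 2*lambda*x_j and the constraint, 2*lambda = ||a||,
    # so the maximizing point is x* = a / ||a|| and f(x*) = ||a||.
    norm_a = math.sqrt(sum(t * t for t in a))
    x_star = [t / norm_a for t in a]
    return x_star, norm_a

a = [3.0, -1.0, 2.0]
x_star, f_max = lagrange_maximizer(a)

# x* lies on the unit sphere S^{n-1}.
assert abs(sum(t * t for t in x_star) - 1.0) < 1e-12

# No random unit vector attains a larger value of f (Cauchy-Schwarz).
for _ in range(1000):
    v = [random.gauss(0, 1) for _ in a]
    n = math.sqrt(sum(t * t for t in v))
    u = [t / n for t in v]
    assert sum(ai * ui for ai, ui in zip(a, u)) <= f_max + 1e-12

print(f_max)  # ≈ sqrt(14) = ||a||
```

This agrees with the Hilbert-space formula: the maximum of $\langle\vec x,\vec a\rangle$ over the unit sphere is exactly $\|\vec a\|$, attained at $\vec x^*=\vec a/\|\vec a\|$.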