"Almost" in the kernel


Is there a notion of an "almost kernel" of a matrix? Namely, a vector $v$ is "almost" in the kernel of the matrix if it is mapped "close" to $0$, and the "almost kernel" of the matrix would then be the set (a subspace? this would need to be verified) of all vectors that are mapped close to zero.

I guess one could define something like this, but I am wondering if such a notion already exists.


BEST ANSWER

As mentioned in the comments section, there are two major problems:

  1. Such a notion of "almost kernel" needs a notion of distance, thus a metric is necessary.
  2. The resulting set is not a linear subspace.

So, let $M$ be some $n\times m$ matrix with entries from some field $A$, let $V$ be the vector space $A^{m\times1}$ of all columns of length $m$ with entries from $A$, and let $W:=A^{n\times1}$. Also, let $d$ be some metric on $W$. Then, one may define for every $\epsilon>0$ the $\epsilon$-kernel of $M$ as: $$\ker_\epsilon M:=\{x\in V\mid d(Mx,\mathbf{0})\leq\epsilon\},$$ or, alternatively: $$\ker_\epsilon M:=M^{-1}(\hat{B}(\mathbf{0},\epsilon)),$$ where $\mathbf{0}$ is the zero element of $W$, $\hat{B}(x,r)$ denotes the closed ball with center $x$ and radius $r$, and $M$ is viewed as a linear map $M:V\to W$ (so $M^{-1}$ denotes the preimage, not an inverse matrix).
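As a minimal numerical sketch of this definition (not part of the answer itself), take $A=\mathbb{R}$ and the metric induced by the Euclidean norm; the function name `eps_kernel_contains` and the sample matrix are illustrative:

```python
import numpy as np

def eps_kernel_contains(M, x, eps):
    """Check whether x lies in the epsilon-kernel of M, i.e.
    whether d(Mx, 0) <= eps for the Euclidean metric."""
    return np.linalg.norm(M @ x) <= eps

# A matrix that nearly annihilates the second coordinate.
M = np.array([[1.0, 0.0],
              [0.0, 0.001]])

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# ||M e2|| = 0.001 <= 0.01, so e2 is in the 0.01-kernel;
# ||M e1|| = 1 > 0.01, so e1 is not.
print(eps_kernel_contains(M, e2, 0.01))  # True
print(eps_kernel_contains(M, e1, 0.01))  # False
```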

Now, note that the word "ball" is not used merely as a nice generalization of our intuition; it also implies some kind of symmetry in our case. More specifically, a ball has a non-linear symmetry that makes it impossible for the $\epsilon$-kernel to be a linear subspace, apart from "trivial" situations. By "trivial" we mean metrics that are not derived from norms (one may object that these are interesting cases and, indeed, they are, but we will examine them later on). So, suppose that $d$ is derived from some norm $\lVert\cdot\rVert$.

Let $B=\{u_1,u_2,\ldots,u_m\}$ be some basis of $V$. Then, the vectors $$v_k:=\left\{\begin{array}{ll} \dfrac{\epsilon u_k}{2\lVert Mu_k\rVert} & u_k\not\in\ker M\\ u_k & u_k\in\ker M \end{array}\right.,$$ are linearly independent and: $$\lVert Mv_k\rVert=\frac{\epsilon}{2},$$ for $u_k\not\in\ker M$. Hence $v_k\in\ker_\epsilon M$ for every $k=1,2,\ldots,m$.
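A quick numerical illustration of this rescaling (a hypothetical $2\times2$ example with the Euclidean norm): each $v_k$ lands exactly on the $\epsilon/2$ sphere under $M$, yet a suitable multiple of their sum escapes the $\epsilon$-kernel, which is precisely why the set fails to be a subspace:

```python
import numpy as np

eps = 0.1
M = np.eye(2)  # ker M = {0}, so every basis vector gets rescaled

# Standard basis, rescaled as in the answer: v_k = eps*u_k / (2*||M u_k||)
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
vs = [eps * u / (2 * np.linalg.norm(M @ u)) for u in basis]

for v in vs:
    print(np.linalg.norm(M @ v))  # each equals eps/2 = 0.05

# Each v_k lies in the eps-kernel, but a subspace would also contain
# every multiple of v_1 + v_2; this one is mapped farther than eps away.
w = 3 * (vs[0] + vs[1])
print(np.linalg.norm(M @ w) <= eps)  # False
```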

Supposing that $\ker_\epsilon M$ is a linear subspace of $V$, we have that $\dim\ker_\epsilon M\leq m$. Also, the $v_k$ being linearly independent implies that $\dim\ker_\epsilon M=m$. Since $V$ and $\ker_\epsilon M$ are finite-dimensional of equal dimension with $\ker_\epsilon M\leq V$, we get that $\ker_\epsilon M=V$. This in turn implies that $\ker M=V$: for any $v\in V$ and any $t>0$ we have $tv\in\ker_\epsilon M$, so $\lVert Mv\rVert\leq\epsilon/t\to0$, hence $Mv=0$. Therefore $M$ is the zero matrix.

So, "non-trivial" metrics give trivial results (the only matrix that fulfils our "wishes" is the trivial one).

However, what if we demand that $\ker_\epsilon M$ be a non-trivial linear subspace of $V$, with $M\neq0$? Of course, from the above it is obvious that $d$ cannot be derived from any norm. Before starting, note that one metric that "almost" (no pun intended) does what we wish is the discrete metric: $$\delta(x,y)=\left\{\begin{array}{ll} 1 & x\neq y\\ 0 & x=y \end{array}\right..$$ Indeed, for $\epsilon<1$, $\ker_\epsilon M=\ker M$, and for $\epsilon\geq1$, $\ker_\epsilon M=V$; so we recover only the ordinary kernel and the whole space, and nothing new.
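The discrete-metric case can be checked directly (an illustrative sketch; the matrix and the helper names are made up for the example). Since $\delta(Mx,\mathbf{0})\leq\epsilon<1$ forces $Mx=\mathbf{0}$ exactly, even tiny nonzero images are excluded:

```python
import numpy as np

def discrete_metric(x, y):
    """The discrete metric: 0 if x equals y, else 1."""
    return 0.0 if np.array_equal(x, y) else 1.0

M = np.array([[1.0, 0.0],
              [0.0, 0.0]])  # ker M is spanned by e2
zero = np.zeros(2)

def in_eps_kernel(M, x, eps):
    return discrete_metric(M @ x, zero) <= eps

# For eps < 1, only exact kernel vectors qualify ...
print(in_eps_kernel(M, np.array([0.0, 5.0]), 0.5))   # True: in ker M
print(in_eps_kernel(M, np.array([1e-9, 0.0]), 0.5))  # False: Mx != 0
# ... while for eps >= 1 every vector does.
print(in_eps_kernel(M, np.array([7.0, 7.0]), 1.0))   # True
```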

SECOND ANSWER

You can consider for fixed $\epsilon>0$ the set $$ C:=\{v: \ \|Av\|\le \epsilon \|v\|\}. $$ This has some nice properties: it is a closed cone, and $v\in C$ implies $-v\in C$. (It need not be convex, however, and in general it is not a subspace.)
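A small check of these cone properties (with an illustrative $A$ and $\epsilon$, using the Euclidean norm): membership in $C$ survives scaling and sign flips, but not necessarily addition, so $C$ is a cone rather than a subspace:

```python
import numpy as np

eps = 0.5
A = np.array([[1.0, 0.0],
              [0.0, 0.1]])

def in_cone(v):
    """v is in C iff ||Av|| <= eps * ||v||."""
    return np.linalg.norm(A @ v) <= eps * np.linalg.norm(v)

v = np.array([0.0, 1.0])  # ||Av|| = 0.1 <= 0.5 * 1
print(in_cone(v), in_cone(10 * v), in_cone(-v))  # cone symmetry: all True

# Two members whose sum leaves C, so C is not closed under addition.
u1 = np.array([0.5, 1.0])
u2 = np.array([0.5, -1.0])
print(in_cone(u1), in_cone(u2), in_cone(u1 + u2))  # True True False
```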

Such constructions are useful in constrained optimization, in particular for second-order optimality conditions. Second-order necessary conditions are proven for directions in some critical cone; for sufficient conditions to work, the critical cone has to be enlarged, and constructions similar to the above are used there. You can look in the book by Bonnans and Shapiro, *Perturbation Analysis of Optimization Problems*, for the theory.