I am doing research in Computer Science, and I found that I need to model some problems using $L^p$ spaces, $0<p<1$. More specifically, when the underlying set is discrete and finite (i.e. a finite set $S \subset \mathbb{N}^{d}$, for some $d \geq 2$).
As far as I understand, when $0<p<1$, the $p$-norm is:
- A quasinorm, which means the triangle inequality no longer holds; but what are the practical consequences of this? I suppose that subadditivity is not preserved $\implies$ the metric space is not complete $\implies$ I lose translation invariance. Is this correct? Are there any other consequences?
- Homogeneous, which basically means that scaling is preserved?
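To make the two points above concrete, here is a small numerical sketch (the helper name `pnorm` is mine, not standard): for $p = 1/2$ the triangle inequality visibly fails, while absolute homogeneity $\|c\,\mathbf{x}\|_p = |c|\,\|\mathbf{x}\|_p$ still holds.

```python
import numpy as np

def pnorm(x, p):
    # the usual p-"norm" formula, a quasinorm when 0 < p < 1
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

p = 0.5
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# Triangle inequality fails: ||x + y||_p = 4 > 2 = ||x||_p + ||y||_p
print(pnorm(x + y, p), pnorm(x, p) + pnorm(y, p))

# Homogeneity still holds: ||c x||_p = |c| ||x||_p
c = -3.0
print(pnorm(c * x, p), abs(c) * pnorm(x, p))  # both 3.0
```

So the failure of subadditivity is easy to trigger even in two dimensions; homogeneity is unaffected.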
From an application point of view, I am interested in two operations:
- Computing the gradient of the $p$-norm of the difference between two vectors, i.e. calculating $\frac{\partial}{\partial x_k} \left\| \mathbf{x} \right\| _p = \frac{x_k \left| x_k \right| ^{p-2}} { \left\| \mathbf{x} \right\| _{p}^{p-1}}$, where $\mathbf{x} = \mathbf{y} - \mathbf{z}$, for some $\mathbf{y}, \mathbf{z} \in S$. Would the $p$-norm behave weirdly because of point 1 above?; and
- Normalizing the vectors. I suppose this should not be an issue because of point 2. above? Especially when combining the
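A quick sanity check of both operations, under my reading of the question (assumed helper names `pnorm`/`pnorm_grad`): the gradient formula above can be compared against central finite differences, and normalization works as usual thanks to homogeneity. One caveat worth noting: for $p < 1$ the factor $|x_k|^{p-2}$ blows up as $x_k \to 0$, so the gradient is only well behaved away from zero coordinates.

```python
import numpy as np

def pnorm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

def pnorm_grad(x, p):
    # gradient formula from the question: x_k |x_k|^{p-2} / ||x||_p^{p-1}
    return x * np.abs(x) ** (p - 2) / pnorm(x, p) ** (p - 1)

p = 0.5
x = np.array([1.0, -2.0, 3.0])  # entries kept away from 0: |x_k|^{p-2}
                                # is singular at x_k = 0 when p < 1

# compare the analytic gradient with central finite differences
eps = 1e-6
num = np.array([(pnorm(x + eps * e, p) - pnorm(x - eps * e, p)) / (2 * eps)
                for e in np.eye(len(x))])
print(np.max(np.abs(pnorm_grad(x, p) - num)))  # small: formula matches

# normalization is unproblematic by homogeneity: ||x / ||x||_p||_p = 1
u = x / pnorm(x, p)
print(pnorm(u, p))
```

This suggests the gradient formula itself is fine; the quasinorm issue (point 1) affects geometry (no triangle inequality, non-convex balls), not the pointwise differentiability away from zeros.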
PS: I never took a functional analysis course or had any formal training in this kind of mathematics, so any help/pointers/links/books are highly appreciated.
Thanks.