How to write a formula for a piecewise function that contains a single separate point using Heaviside function?


Here and here I saw how to rewrite a piecewise function using the Heaviside function. Thus, if I have a function that looks like this:

[Figure: a step function that equals $1$ on $[0,1)$ and $2$ on $[1,\infty)$]

I can write it down as: $$y(t) = 1 \cdot [H(t) - H(t-1)] + 2 \cdot [H(t-1)]$$
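As a sanity check, this formula can be evaluated numerically. The sketch below assumes the convention $H(0)=1$; the names `H` and `y` are illustrative only.

```python
# Minimal sketch, assuming the convention H(0) = 1; the names H and y
# are illustrative, not from any particular library.

def H(t):
    """Heaviside step function with H(0) = 1."""
    return 1 if t >= 0 else 0

def y(t):
    """y(t) = 1*[H(t) - H(t-1)] + 2*H(t-1)."""
    return 1 * (H(t) - H(t - 1)) + 2 * H(t - 1)

print(y(-0.5), y(0.5), y(1), y(2))  # -> 0 1 2 2
```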

QUESTION: if I have a piecewise function that looks like this:

[Figure: the same step function, except that at the single point $t=1$ it takes the value $3$]

then how do I write it down using the Heaviside function?


My own reasoning was as follows.

I don't see a way to use the Heaviside function by itself to represent a single point. Therefore, I checked whether there are functions out there that can represent a single point. The closest I could find was the Kronecker delta. I decided the Kronecker delta wasn't a good fit because, first, it takes exactly two arguments and, second, it is normally used with integer arguments only. Therefore, I decided that I had to introduce a function similar to the Heaviside function but that looks like this:

[Figure: a function $S(t)$ that equals $1$ on $(-\epsilon, \epsilon)$ and $0$ elsewhere]

where $\epsilon \rightarrow 0$.

Then I write down my piecewise function as:

$$y(t) = 1 \cdot [H(t) - H(t-1)] + 2 \cdot [H(t-1)] + 3 \cdot [S(t-1)]$$

Then I want to take the derivative of that expression. I know that the distributional derivative of the Heaviside function is the Dirac delta function, so I have to find the distributional derivative of my $S(t)$ function. To do that, I adapt the standard argument showing that the Dirac delta is the distributional derivative of the Heaviside function. That argument uses the fact that the Heaviside function $H(t)$ is constant on the interval $[0, \infty)$, so it can be taken out of the integral. But my function $S(t)$ is likewise constant on the interval $(-\epsilon, \epsilon)$, and I can take it out of the integral as well. Therefore, I conclude that the distributional derivative of my $S(t)$ function is the Dirac delta function too, and I'm good to go. That works, of course, for test functions only.

Is that correct?

Disclaimer: honestly speaking, it feels to me like I'm doing some inappropriate things to mathematics here ...


ANSWER

Thank you very much to Thomas Andrews and Paul Garrett for the solutions they provided!

Briefly: one can use either multiplication or addition/subtraction of Heaviside functions to build a function that is nonzero at exactly one point. Let's say we use Paul Garrett's solution: $H(x) \cdot H(-x)$. For any non-zero $x$ we get zero: for instance, if $x=5$, then $H(5) \cdot H(-5) = 1 \cdot 0 = 0$. But if I pick $x=0$ and adopt the convention $H(0) = 1$, then $H(x) \cdot H(-x) = H(0) \cdot H(0) = 1 \cdot 1 = 1$. The same holds for Thomas Andrews' solution. For $x=0$ we have $H(x) + H(-x) - 1 = H(0) + H(0) - 1 = 1 + 1 - 1 = 1$, and for any non-zero $x$, say $x=5$, we have $H(x) + H(-x) - 1 = H(5) + H(-5) - 1 = 1 + 0 - 1 = 0$.
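Both point-picking formulas can be checked numerically. The sketch below assumes the convention $H(0)=1$; the function names are illustrative only.

```python
# Sketch, assuming H(0) = 1; function names are illustrative.

def H(x):
    return 1 if x >= 0 else 0

def product_form(x):
    """Paul Garrett's form: H(x) * H(-x)."""
    return H(x) * H(-x)

def sum_form(x):
    """Thomas Andrews' form: H(x) + H(-x) - 1."""
    return H(x) + H(-x) - 1

for x in (-5, -0.1, 0.1, 5):
    assert product_form(x) == 0 and sum_form(x) == 0
assert product_form(0) == 1 and sum_form(0) == 1
print("both forms are 1 at x = 0 and 0 elsewhere")
```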

Also, definitely check Steven Clark's solution below. That solution covers the entire function, not just the single point $y=3$.


There are 3 best solutions below

BEST ANSWER

Assuming $H(0)=1$, you can use $$P(x)=H(x)+H(-x)-1=\begin{cases}1&x=0\\0&x\neq0\end{cases}$$ to adjust individual points.

So defining $g(x)=f(x)+aP(x-b)$ gives $g(x)=f(x)$ when $x\neq b,$ and $g(b)=f(b)+a.$
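A short sketch of this adjustment trick, applied to the step function from the question (assuming $H(0)=1$; all names are illustrative):

```python
# Sketch of g(x) = f(x) + a*P(x - b), applied to the question's step
# function; assumes H(0) = 1, and all names are illustrative.

def H(x):
    return 1 if x >= 0 else 0

def P(x):
    """P(x) = H(x) + H(-x) - 1: equals 1 at x = 0 and 0 elsewhere."""
    return H(x) + H(-x) - 1

def f(t):
    """The step from the question: 1 on [0, 1), 2 on [1, inf)."""
    return 1 * (H(t) - H(t - 1)) + 2 * H(t - 1)

def g(t):
    """Raise the single point t = 1 from 2 to 3 (a = 1, b = 1)."""
    return f(t) + 1 * P(t - 1)

print(g(0.5), g(1), g(1.5))  # -> 1 3 2
```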

What you can’t get with linear combinations of Heaviside functions is a constant function. The functions $c(x)=1$, $f_a(x)=H(x-a)$, and $g_{a}(x)=H(a-x)$, $a\in \mathbb R,$ are linearly independent.

ANSWER

Distributionally, your choice of value at one point does not change the (generalized) function at all: as you observe, integrating against a test function (or a continuous function) cannot distinguish between them.

Therefore, "it doesn't matter".
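A rough numerical illustration of why it doesn't matter: pairing a bump that is $1$ on $(-\epsilon,\epsilon)$ and $0$ elsewhere with a smooth test function gives roughly $2\epsilon\,\varphi(0)$, which vanishes as $\epsilon\to 0$. The test function $\varphi(t)=e^{-t^2}$ and the plain trapezoid rule are arbitrary choices for this sketch.

```python
# Rough sketch: the pairing of the bump S (1 on (-eps, eps), 0 elsewhere)
# with a test function phi shrinks to 0 with eps. phi(t) = exp(-t^2) and
# trapezoid integration are arbitrary choices for this illustration.
import math

def pair_with_S(eps, n=10000):
    """Trapezoid approximation of the integral of phi over (-eps, eps)."""
    phi = lambda t: math.exp(-t * t)
    h = 2 * eps / n
    total = 0.5 * (phi(-eps) + phi(eps))
    for k in range(1, n):
        total += phi(-eps + k * h)
    return total * h

for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, pair_with_S(eps))  # roughly 2*eps*phi(0) for small eps
```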

For that matter, I myself do not attribute any particular value to the Heaviside function at the step, because it doesn't matter. Perhaps some people do, but it's purely a matter of convention ... because it doesn't matter. :)

But, still, sure, if we adopt the convention that the Heaviside function $H$ is $1$ at $0$, then $H(x)\cdot H(-x)$ is $1$ at $0$ and $0$ everywhere else, so it can be used to accomplish these things. And $1-H(-x)$ is a step function that's $0$ at $0$, instead of $1$, and so on. But... it doesn't really matter.

EDIT: edited the last paragraph, after @Thomas Andrews observed some errors. :)

ANSWER

There are several conventions for how the Heaviside step function $\theta(x)$ is defined at $x=0$, including $\theta(0)=\text{undefined}$, $\theta(0)=1$, and $\theta(0)=\frac{1}{2}$.


Assuming the Heaviside step function is defined as

$$\theta(x)=\left\{\begin{array}{cc} 0 & x<0 \\ 1 & 0\leq x \\ \end{array}\right.\tag{1}$$

your function can be evaluated as

$$f(x)=\theta(x)+\theta(x-1)+\delta_{x,1}\tag{2}$$

where

$$\delta _{i,j}=\left\{\begin{array}{cc} 0 & i\ne j \\ 1 & i=j \\ \end{array} \right.\tag{3}$$

is the Kronecker delta function.
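A minimal sketch evaluating formula (2) under the formula (1) convention $\theta(0)=1$ (function names are illustrative):

```python
# Sketch of formula (2): f(x) = theta(x) + theta(x-1) + delta_{x,1},
# with the theta(0) = 1 convention of formula (1). Names are illustrative.

def theta(x):
    """Heaviside step with theta(0) = 1, as in formula (1)."""
    return 1 if x >= 0 else 0

def kdelta(i, j):
    """Kronecker delta, as in formula (3)."""
    return 1 if i == j else 0

def f(x):
    return theta(x) + theta(x - 1) + kdelta(x, 1)

print([f(x) for x in (-1, 0, 0.5, 1, 2)])  # -> [0, 1, 1, 3, 2]
```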


Figure (1) below illustrates formula (2) for $f(x)$ above where formula (2) is evaluated using the formula (1) definition for $\theta(x)$. The red discrete evaluation points illustrate the evaluation of formula (2) at $x=0$ and $x=1$.



Figure (1): Illustration of formula (2) for $f(x)$


Assuming the Heaviside step function is defined as

$$\theta(x)=\left\{\begin{array}{cc} 0 & x<0 \\ \frac{1}{2} & x=0 \\ 1 & 0<x \\ \end{array}\right.\tag{4}$$

your function can be evaluated as

$$f(x)=\theta (x)+\theta (x-1)+\frac{3}{2}\,\delta_{x,1}\tag{5}$$
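The same kind of sketch for formula (5), under the formula (4) convention $\theta(0)=\frac{1}{2}$ (names illustrative):

```python
# Sketch of formula (5) under the theta(0) = 1/2 convention of formula (4).

def theta_half(x):
    """Heaviside step with theta(0) = 1/2, as in formula (4)."""
    if x < 0:
        return 0.0
    if x == 0:
        return 0.5
    return 1.0

def kdelta(i, j):
    """Kronecker delta, as in formula (3)."""
    return 1 if i == j else 0

def f5(x):
    return theta_half(x) + theta_half(x - 1) + 1.5 * kdelta(x, 1)

print([f5(x) for x in (-1, 0, 0.5, 1, 2)])  # -> [0.0, 0.5, 1.0, 3.0, 2.0]
```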


Figure (2) below illustrates formula (5) for $f(x)$ above where formula (5) is evaluated using the formula (4) definition for $\theta(x)$. The red discrete evaluation points illustrate the evaluation of formula (5) at $x=0$ and $x=1$.



Figure (2): Illustration of formula (5) for $f(x)$


Now consider the following analytic representations of the Heaviside step function $\theta(x)$ and the Kronecker delta function $\delta _{i,j}$ where the evaluation frequency $f$ is assumed to be a positive integer.

$$\theta(x)=\underset{f\to\infty}{\text{lim}}\ \left(\frac{1}{2}+\frac{\text{Si}(2 \pi f x)}{\pi}\right)\tag{6}$$

$$\delta _{i,j}=\underset{f\to\infty}{\text{lim}}\ \text{sinc}(2 \pi f (i-j))\tag{7}$$
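A rough numerical sketch of formulas (6) and (7): here $\text{Si}$ is approximated by plain trapezoid integration (a library routine such as SciPy's `sici` would serve equally well), and even a moderate evaluation frequency $f=10$ already lands close to the limiting step and delta values.

```python
# Rough sketch of the analytic representations (6) and (7). Si is
# approximated by trapezoid integration; sinc(x) = sin(x)/x, sinc(0) = 1.
import math

def Si(x, n=200000):
    """Trapezoid approximation of Si(x) = integral of sin(t)/t on [0, x]."""
    if x == 0:
        return 0.0
    h = x / n
    total = 0.5 * (1.0 + math.sin(x) / x)  # sin(t)/t -> 1 as t -> 0
    for k in range(1, n):
        t = k * h
        total += math.sin(t) / t
    return total * h

def theta_approx(x, f=10):
    """Formula (6) at finite evaluation frequency f."""
    return 0.5 + Si(2 * math.pi * f * x) / math.pi

def sinc(x):
    return 1.0 if x == 0 else math.sin(x) / x

def kdelta_approx(i, j, f=10):
    """Formula (7) at finite evaluation frequency f."""
    return sinc(2 * math.pi * f * (i - j))

# Approximately 1, 0, 1, 0 respectively:
print(theta_approx(1), theta_approx(-1), kdelta_approx(1, 1), kdelta_approx(2, 1))
```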


Figure (3) below illustrates formula (5) for $f(x)$ above where formula (5) is evaluated using the analytic representations of $\theta(x)$ and $\delta _{i,j}$ defined in formulas (6) and (7) above where both formulas are evaluated at $f=1000$. The red discrete evaluation points illustrate the evaluation of formula (5) at $x=0$ and $x=1$.



Figure (3): Illustration of formula (5) for $f(x)$ using analytic representations


Note that formulas (2) and (5) for $f(x)$ evaluate differently at $x=0$, where the OP didn't specify a desired value. Formula (2) can be made to agree with formula (5) by adding a $-\frac{1}{2}\,\delta_{x,0}$ term to formula (2), and formula (5) can be made to agree with formula (2) by adding a $\frac{1}{2}\,\delta_{x,0}$ term to formula (5).