Partially reconstruct a function convolved with a boxcar kernel


The function f that I want to partially reconstruct could look like this: [figure: original function f]

The following properties are known:

  • It consists only of alternating plateaus (high/low).

  • So its first derivative is zero on the plateaus and undefined at the edges.

The function was convolved with a kernel fulfilling the following conditions:

  • It is a boxcar function

  • Its center is at x=0

  • Its integral is 1.

I want to reconstruct only the positions of the edges of the original function f from the convolution result c. So just these positions are of interest to me: [figure: interesting edge positions in f]

If the convolution kernel width k is less than the minimum plateau width b of f (b = 40 in the example above), c looks as follows: [figure: boxcar convolution result c with k=31]

In that case it is easy to reconstruct the edge positions: I look for (possibly broad) extrema, and between two neighbouring extrema [e1_x, e1_y] and [e2_x, e2_y] (one of them is a minimum and the other a maximum, of course), I search for the x0 fulfilling c(x0) = (e1_y + e2_y) / 2.
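For the k < b case, this half-height search can be sketched as follows (a minimal sketch of my own: the function name, the extremum detection via slope-sign flips, and the linear-interpolation refinement are illustrative choices, and the input is assumed noise-free):

```python
import numpy as np

def reconstruct_edges(c, x=None):
    """Estimate edge positions from the convolution result c (valid for k < b).

    Between each pair of neighbouring extrema e1, e2 the edge is taken to be
    the x0 where c crosses the mid-height (c(e1) + c(e2)) / 2, refined by
    linear interpolation between the two samples around the crossing.
    """
    if x is None:
        x = np.arange(len(c), dtype=float)
    d = np.diff(c)
    nz = np.nonzero(d)[0]                     # indices of non-flat steps
    signs = np.sign(d[nz])
    # one representative index per (possibly broad) extremum: where the
    # monotone direction flips, plus the two endpoints of the signal
    flips = nz[1:][signs[1:] != signs[:-1]]
    ext = np.concatenate(([0], flips, [len(c) - 1]))
    edges = []
    for e1, e2 in zip(ext[:-1], ext[1:]):
        target = (c[e1] + c[e2]) / 2.0
        seg = c[e1:e2 + 1]
        cross = np.nonzero(np.diff(np.sign(seg - target)))[0]
        if cross.size:
            i0 = e1 + cross[0]
            # linear interpolation between the samples bracketing the crossing
            t = (target - c[i0]) / (c[i0 + 1] - c[i0])
            edges.append(x[i0] + t * (x[i0 + 1] - x[i0]))
    return edges
```

For a plateau between samples 39 and 40 convolved with an odd-width boxcar, the ramp is symmetric around the edge, so the mid-height crossing lands at 39.5 as expected.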

The reconstructed edge positions look like this: [figure: successfully reconstructed edge positions]

But if k > b, my approach fails: [figure: failed edge reconstruction with k=57]

Is there a way to calculate the original edge positions in f when the kernel g (and so k) and c are known, also for the k > b cases?


There are 2 best solutions below

BEST ANSWER

This looks like a perfect match for total-variation deconvolution. In a nutshell, you have a model that your given function is $u^0 = h\ast u^\dagger$ with the box-car kernel $h$ and a piecewise constant function $u^\dagger$. To reconstruct $u^\dagger$ from the knowledge of $u^0$ and $h$ you minimize $$ \|u\ast h - u^0\| + \lambda TV(u) $$ over $u$ for some parameter $\lambda>0$. The first term enforces fidelity to the data, while the second term both regularizes the deconvolution and pushes the solution toward a piecewise constant one. The term $TV$ refers to the total variation; in the discrete, one-dimensional case it is $TV(u) = \sum_i |u_{i+1}-u_i|$, i.e. the sum of the magnitudes of the first differences. The parameter $\lambda$ allows you to balance both effects - since you do not seem to have noise, a very small $\lambda$ should work.
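A minimal sketch of this approach (my own illustration, not part of the answer): it uses `scipy.optimize.minimize` with a squared fidelity term and a smoothed TV term `sqrt(d**2 + eps)` so that a quasi-Newton method applies; a dedicated TV solver would be preferable for serious use.

```python
import numpy as np
from scipy.optimize import minimize

def tv_deconvolve(c, h, lam=1e-4, eps=1e-6):
    """Minimize ||u*h - c||^2 + lam * sum_i sqrt((u_{i+1}-u_i)^2 + eps) over u.

    h is assumed to have odd length so np.convolve(..., mode='same') is
    centred; eps smooths the TV term to make the objective differentiable.
    """
    def objective(u):
        r = np.convolve(u, h, mode='same') - c
        return r @ r + lam * np.sum(np.sqrt(np.diff(u) ** 2 + eps))

    def gradient(u):
        r = np.convolve(u, h, mode='same') - c
        # adjoint of the 'same'-mode convolution is correlation with h
        g = 2.0 * np.convolve(r, h[::-1], mode='same')
        d = np.diff(u)
        w = d / np.sqrt(d ** 2 + eps)
        g[:-1] -= lam * w
        g[1:] += lam * w
        return g

    res = minimize(objective, c.copy(), jac=gradient, method='L-BFGS-B',
                   options={'maxiter': 2000})
    return res.x
```

Starting from u = c is a reasonable initialization since c is already a blurred version of the target; with noise-free data and a small lam, the plateaus are recovered even when the kernel is wider than the narrowest plateau (k > b).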


Just adding what I found as a solution for me:

I already stated that it is easy to reconstruct the edges for cases with k < b:

[figure: convolution with result]

For cases with b < k < 2*b, one can use the commutativity of the convolution operator: f ∗ g = g ∗ f.

Then the wrongly measured distance between the two edges is k, but the interesting value b can simply be read off as the x-length of the slope (the diagonal part of the result). :-)
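This observation can be checked numerically (my own sketch; the widths b = 40 and k = 57 are taken from the example figures): convolving a single plateau of width b with a boxcar of width k > b yields a trapezoid whose rising and falling ramps each have x-length min(b, k) = b.

```python
import numpy as np

b, k = 40, 57                       # plateau width and kernel width, k > b
f = np.zeros(200)
f[80:80 + b] = 1.0                  # a single plateau of width b
h = np.ones(k) / k                  # normalized boxcar kernel
c = np.convolve(f, h, mode='same')

d = np.diff(c)
ramp_len = int(np.count_nonzero(d > 1e-12))  # samples in the rising ramp
# ramp_len equals b, so b can be read off the slope even though k > b
```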

I also found out that there are already good implementations of deconvolution with completely known kernels. This one, for example, works like a charm in my case: http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.deconvolve.html
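A usage sketch of the linked function (my own example data): `scipy.signal.deconvolve` performs polynomial long division, so it recovers f essentially exactly when c is the full, uncropped, noise-free convolution of f with the kernel.

```python
import numpy as np
from scipy.signal import deconvolve

f = np.zeros(120)
f[40:80] = 1.0                      # original plateaus, b = 40
h = np.ones(57) / 57                # boxcar kernel, k = 57 > b
c = np.convolve(f, h)               # full convolution, length 120 + 57 - 1

recovered, remainder = deconvolve(c, h)
# recovered matches f up to floating-point error; remainder is ~0
```

Note that with a cropped ('same'-mode) or noisy c, plain polynomial division breaks down, and the total-variation approach from the accepted answer is the safer route.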