Solving a simple inverse problem related to an elliptic PDE


Suppose that I have the elliptic PDE

$\nabla \cdot (A(x)\nabla U(x)) = 0$, where $x \in [0,l_1]\times [0,l_2]$, with boundary conditions $U(0,x_2) = 0$, $U(l_1,x_2)=1$ and $U_{x_2}(x_1,0)=0$, $U_{x_2}(x_1,l_2)=0$.
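For concreteness, here is a minimal finite-difference sketch of solving this PDE under the simplifying assumption $A(x) = \mathrm{Id}$ (which reduces it to Laplace's equation); the grid sizes and iteration count are arbitrary choices:

```python
import numpy as np

# Finite-difference sketch for the PDE under the simplifying assumption
# A(x) = Id, which reduces it to Laplace's equation.  Dirichlet data U = 0
# at x1 = 0 and U = 1 at x1 = l1; homogeneous Neumann at x2 = 0 and x2 = l2.
# Grid sizes and iteration count are arbitrary choices.
n1, n2 = 41, 21            # grid points along x1 and x2
U = np.zeros((n1, n2))
U[-1, :] = 1.0             # Dirichlet: U(l1, x2) = 1

for _ in range(5000):      # Jacobi iteration on interior points
    Unew = U.copy()
    Unew[1:-1, 1:-1] = 0.25 * (U[2:, 1:-1] + U[:-2, 1:-1]
                               + U[1:-1, 2:] + U[1:-1, :-2])
    # Zero normal derivative at x2 = 0 and x2 = l2: copy the adjacent
    # interior column
    Unew[1:-1, 0] = Unew[1:-1, 1]
    Unew[1:-1, -1] = Unew[1:-1, -2]
    U = Unew

print(U.min(), U.max())    # the computed solution stays within [0, 1]
```

In this special case the discrete solution converges to the linear profile $U = x_1/l_1$, and its values indeed remain in $[0,1]$.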

I am trying to test out a basic inverse problem whereby I pick $n$ points in the domain, evaluate $U(x)$ at those points, and compute $Y_i = U(x_i)+ \epsilon_i$ where $\epsilon_i \sim N(0,\sigma^2)$ and $\epsilon_i$ are iid.
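A hypothetical sketch of generating such data, again assuming $A = \mathrm{Id}$ so that $U(x) = x_1/l_1$ satisfies the PDE and all four boundary conditions in closed form; the point count, noise level, and seed are arbitrary:

```python
import numpy as np

# Hypothetical data-generation sketch.  Assuming A = Id, the function
# U(x) = x1 / l1 satisfies the PDE and all four boundary conditions, so it
# can be evaluated in closed form.  n, sigma, and the seed are arbitrary.
rng = np.random.default_rng(0)
l1, l2, n, sigma = 1.0, 1.0, 10, 0.1
pts = rng.uniform([0.0, 0.0], [l1, l2], size=(n, 2))  # n sample points
U_true = pts[:, 0] / l1                               # U(x_i)
Y = U_true + rng.normal(0.0, sigma, size=n)           # Y_i = U(x_i) + eps_i
```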

Now I pretend that I only have $Y_i, i=1,\dots,n$ at my disposal and I will attempt to recover $U(x_i)$ using the Bayesian approach.

Computing the density function of $\vec{Y}$ given $\vec{U(x)}$ (both of which are vectors of length $n$) is straightforward. But I am stuck on the issue of choosing a prior due to the following:

1) I am not sure how the maximum/minimum principle for elliptic PDEs applies here, since I do not know the values of $U$ on two sides of the rectangle. However, numerical simulations show that $U(x)$ always lies within $[0,1]$. (Side question: can some maximum/minimum principle be used to prove these bounds on $U$?)

2) In choosing a prior for the Bayesian framework, my initial intuition was that the joint density of $U(x_i)$, $i=1,\dots,n$, should be the uniform density on $[0,1]^n$, because of point 1 above. So here is my main question: why does the maximum a posteriori method perform very poorly, almost uselessly, when I choose the uniform density as a prior?

As an example, take $n=2$. The posterior density is then proportional to $\mathbf{1}_{[0,1]^2}(U(x_1),U(x_2))\,\exp\{-\frac{1}{2\sigma^2}((Y_1-U(x_1))^2+(Y_2-U(x_2))^2)\}$.

If we suppose that the true values are $U(x_1)= 0.4$, $U(x_2) = 0.2$ and that $\epsilon_1 = 1.1$, $\epsilon_2 = 0.2$, so that $Y_1 = 1.5$ and $Y_2 = 0.4$, then the maximum a posteriori estimates are $U(x_1)=1$ and $U(x_2) = 0.4$, which are not close to the true values. Of course, I used $\sigma^2 = 2$ in this scenario, and lowering $\sigma$ would surely give better estimates, but does this simply mean that I should choose a more sophisticated prior?
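For what it's worth, with the flat prior on $[0,1]^n$ the MAP problem decouples coordinate-wise: maximizing the posterior is the same as minimizing each $(Y_i - U(x_i))^2$ over $[0,1]$ separately, so the MAP estimate is simply $Y$ clipped to $[0,1]$, which reproduces the estimates above:

```python
import numpy as np

# With the flat prior on [0,1]^n, maximizing the posterior is the same as
# minimizing each (Y_i - U(x_i))^2 over [0,1] separately, so the MAP
# estimate is just Y clipped to [0,1].
Y = np.array([1.5, 0.4])        # the observations from the example above
U_map = np.clip(Y, 0.0, 1.0)
print(U_map)                    # [1.  0.4], the poor estimates noted above
```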

I am asking because in this test case I have the actual PDE at hand, but in practice I would not, so I presumed that choosing a uniform prior would be a good first step. (I'm new to inverse problems...)

Insights greatly appreciated.


Do I understand your problem correctly if I state it as follows: you want to solve (assuming $A(x)=\mathrm{Id}$) $$\hat U = \arg \min_U \left\{ \| U - Y \|^2_2 + \lambda \mathcal R( U) \right\},$$ where the data $Y$ are given on an $N \times N$ grid and $\mathcal R(\cdot)$ is a convex regularization functional? Or are your data given only at certain points on the grid?
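If it helps, here is a small hypothetical 1D sketch of that regularized formulation with a quadratic smoothness penalty $\mathcal R(U) = \|DU\|_2^2$, $D$ being a forward-difference matrix; the minimizer then solves the linear system $(I + \lambda D^\top D)\,U = Y$. The grid size, noise level, and $\lambda$ are arbitrary choices:

```python
import numpy as np

# Hypothetical 1D sketch of the regularized formulation with the quadratic
# smoothness penalty R(U) = ||D U||_2^2, D being a forward-difference matrix.
# The minimizer of ||U - Y||_2^2 + lam * ||D U||_2^2 solves the linear
# system (I + lam * D^T D) U = Y.  N, the noise level, and lam are arbitrary.
N, lam = 50, 5.0
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, N)
Y = x + rng.normal(0.0, 0.2, size=N)       # noisy samples of a linear U
D = np.diff(np.eye(N), axis=0)             # (N-1) x N forward differences
U_hat = np.linalg.solve(np.eye(N) + lam * (D.T @ D), Y)

# Compare the reconstruction error to the raw data error
err_Y = np.abs(Y - x).mean()
err_U = np.abs(U_hat - x).mean()
```

For point data rather than full-grid data, the fidelity term $\|U - Y\|_2^2$ would be replaced by a sum over the observation points only.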