I have the following equation: $$F(x)=\int^{a}_{b}f(x')K(x,x')\,dx'$$ where $f(x)$ and $F(x)$ are known functions and I need to find only $K(x,x')$. Both $F$ and $f$ are smooth, and the integration limits are finite. I believe this is, in some sense, an inverse problem for an integral equation. Can it be solved analytically?
mathreadler's answer inspired an interesting idea: define the proper solution as the one which minimizes the norm in a Hilbert space $$K(x,x'):\min_{K}\left\|F(x)-\int^{a}_{b}f(x')K(x,x')\,dx'\right\|_{L_2}$$ Viewing this as a functional $J[K(x,x')]$, $$J=\sqrt{\int\left(F(x)-\int^{a}_{b}f(x')K(x,x')\,dx'\right)^{2}dx}$$ minimizing it with respect to $K$ should give some equation for $K$ (like an Euler-Lagrange equation): $$\frac{\delta J[K]}{\delta K(x,x')}=0$$ But I have no idea how to define the functional derivative when the function has two arguments. Is it possible?
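A sketch of how this might work (treating the pair $(x,x')$ as a single point of a two-dimensional domain, which is all that is needed to define the functional derivative, and minimizing $J^{2}$ instead of $J$, since they have the same minimizers where $J\neq 0$):

$$\frac{\delta J^{2}[K]}{\delta K(y,y')}=-2\,f(y')\left(F(y)-\int^{a}_{b}f(x')K(y,x')\,dx'\right)=0$$

So wherever $f(y')\neq 0$, the stationarity condition just reproduces the original equation, and the variational formulation by itself does not single out a unique $K$.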
If $F$ and $f$ are well-behaved enough, you can discretize this into a linear least-squares problem:
$$K = \arg\min_{K}\|F-S(K\circ (f1^T))\|_2$$ where $S$ is a discrete sum approximating the integral and $\circ$ is the Hadamard (elementwise) product.
$$S = \left[\begin{array}{cccccc}1&1&\cdots&1&1&1\end{array}\right]$$
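As an illustrative sketch of the discretization (the grid, quadrature weights, and test functions here are my own assumptions, not part of the answer's notation), the unregularized problem can be solved with a minimum-norm least-squares solve:

```python
import numpy as np

n = 10                                   # grid points on the interval
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0])              # quadrature weights (rectangle rule)
f = np.sin(2 * np.pi * x)                # an assumed known f(x')
K_true = np.exp(-(x[:, None] - x[None, :]) ** 2)  # some smooth test kernel
F = K_true @ (w * f)                     # F_i = sum_j K(x_i, x'_j) f(x'_j) w_j

# One equation per sample of F but n*n unknowns: heavily underdetermined,
# so lstsq returns the minimum-norm solution.
A = np.kron(np.eye(n), (w * f)[None, :])  # acts on K.ravel() (row-major vec)
vK, *_ = np.linalg.lstsq(A, F, rcond=None)
K_est = vK.reshape(n, n)
print(np.abs(K_est @ (w * f) - F).max())  # ~0: fits F, but K_est != K_true
```

This makes the non-uniqueness concrete: the recovered `K_est` reproduces `F` essentially exactly while differing from `K_true`, which is why the regularization terms below are needed.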
We may also need to add extra terms for any regularization required.
Typical regularization terms to encourage smoothness could be $$\cases{\|D_x\text{vec}(K)\|_2\\\|D_y\text{vec}(K)\|_2}$$
where $D_x,D_y$ are matrices applying a discrete derivative with respect to $x$ and $y$ (along the rows and columns of $K$). Some popular choices are $[1,-1]$, $[1,-1]^T$, $[1,0,-1]$, $[1,0,-1]^T$ and the famous Sobel filters $\left[\begin{array}{ccc}1&0&-1\\2&0&-2\\1&0&-1\end{array}\right]$, $\left[\begin{array}{ccc}1&2&1\\0&0&0\\-1&-2&-1\end{array}\right]$, although at least for the Sobel filters you would probably need an even smaller regularization filter as well.
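A minimal sketch of the regularized solve, using the simplest $[1,-1]$ difference filter built via Kronecker products (the grid, the target $F$, and the regularization weight `lam` are my own assumptions):

```python
import numpy as np

n = 10
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0])              # rectangle-rule quadrature weights
f = np.sin(2 * np.pi * x)                # an assumed known f(x')
F = np.exp(-(x - 0.5) ** 2)              # some smooth target F(x)

A = np.kron(np.eye(n), (w * f)[None, :])  # data term, acts on K.ravel()

D = np.diff(np.eye(n), axis=0)           # (n-1) x n first-difference matrix
D_x = np.kron(np.eye(n), D)              # differences along rows of K
D_y = np.kron(D, np.eye(n))              # differences along columns of K

lam = 1e-2                               # regularization weight (tunable)
M = np.vstack([A, lam * D_x, lam * D_y])
rhs = np.concatenate([F, np.zeros(2 * n * (n - 1))])
vK, *_ = np.linalg.lstsq(M, rhs, rcond=None)
K = vK.reshape(n, n)
print(np.abs(A @ vK - F).max())          # small data residual, smooth K
```

Stacking the difference matrices under the data term turns $\min\|A v_K - F\|_2^2 + \lambda^2(\|D_x v_K\|_2^2 + \|D_y v_K\|_2^2)$ into one ordinary least-squares problem.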
To enforce shift (time) invariance of the kernel we can add the cost term $\|H\text{vec}(K)\|_2$, where $H$ applies the filter $\left[\begin{array}{cc}0&-1\\1&0\end{array}\right]$, so that neighbouring diagonal elements are encouraged to be close to each other.
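Assuming the intent is that $K$ be constant along each diagonal of the matrix (a Toeplitz structure, $K_{ij}=k(i-j)$, i.e. $K(x,x')$ depends only on $x-x'$), a quick check that such a kernel incurs zero penalty when neighbouring diagonal elements are compared:

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)
c = rng.normal(size=2 * n - 1)
# Toeplitz kernel: K[i, j] depends only on i - j (shift-invariant)
K = np.array([[c[i - j + n - 1] for j in range(n)] for i in range(n)])

# Penalty comparing neighbouring elements along each diagonal:
# P[i, j] = K[i + 1, j + 1] - K[i, j]
P = K[1:, 1:] - K[:-1, :-1]
print(np.abs(P).max())  # 0.0 for a Toeplitz kernel
```

Rows of the full penalty matrix $H$ acting on $\text{vec}(K)$ would each encode one such pairwise difference.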
Update: since $K$ is a matrix, we can benefit from vectorizing it. If we do, both of the operations above, $(\cdot \circ f1^T)$ as well as multiplication by $S$ from the left, can be written as matrix multiplications acting on the vectorization of $K$. The Wikipedia entry on Kronecker products explains this in its section on matrix equations.
The new problem to solve becomes: $$\min_{v_K}\|F-M_SM_fv_K\|_2$$ where $v_K$ is the vectorization of $K$ (a vector); we just need to figure out what the new matrices $M_S$ and $M_f$ that perform the operations above look like.
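Using the identity $\text{vec}(AXB)=(B^T\otimes A)\text{vec}(X)$, one concrete choice is $M_f = I\otimes\text{diag}(f)$ and $M_S = I\otimes S$ (with $K$'s rows indexed by $x'$, column-major vectorization, and quadrature weights absorbed into $S$; the random test data here is my own assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 8                              # grid sizes for x' and x
f = rng.normal(size=m)                   # samples of f(x')
K_true = rng.normal(size=(m, n))         # rows indexed by x', columns by x
F = (f[:, None] * K_true).sum(axis=0)    # S (K o (f 1^T)), one entry per x_i

vK_true = K_true.reshape(-1, order="F")  # column-major vectorization
M_f = np.kron(np.eye(n), np.diag(f))     # applies (. o f 1^T) to vec(K)
M_S = np.kron(np.eye(n), np.ones((1, m)))  # applies S column by column
assert np.allclose(M_S @ M_f @ vK_true, F)

# minimum-norm least-squares solution (the system is underdetermined)
vK, *_ = np.linalg.lstsq(M_S @ M_f, F, rcond=None)
print(np.abs(M_S @ M_f @ vK - F).max())  # residual ~0
```

The regularization matrices from above would simply be stacked beneath $M_SM_f$ before solving.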