I am trying to solve an inverse problem, which ultimately amounts to solving a non-convex optimization problem (a phase retrieval problem). The loss is $\lVert\, |f(x)|^2 - I \,\rVert_2^2 + \lVert x \rVert_1 + \mathrm{TV}(x)$.
Here $f(x)$ is a non-convex function and $x$ is a 3D array: the 3D volume I want to reconstruct (for example, a matrix of dimension 512×512×100). $\mathrm{TV}$ is total variation, and $I$ is a 2D matrix (an image measurement). I have found that when $x$ is not very large, say 512×512×5, it can be optimized very well. However, when $x$ becomes very large, like 512×512×100, the optimization seems to fall into a local minimum.
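For concreteness, here is a minimal PyTorch sketch of this loss. The forward model `forward` is a placeholder (I use a 2D FFT of the depth-summed volume just as a stand-in, not my actual $f$), and the unit regularization weights are illustrative:

```python
import torch

def tv_3d(x):
    # Anisotropic total variation: sum of absolute finite
    # differences along each of the three axes of the volume.
    return (x.diff(dim=0).abs().sum()
            + x.diff(dim=1).abs().sum()
            + x.diff(dim=2).abs().sum())

def phase_retrieval_loss(x, I, forward, l1_weight=1.0, tv_weight=1.0):
    # Data term || |f(x)|^2 - I ||_2^2 plus L1 and TV regularization.
    residual = forward(x).abs() ** 2 - I
    return ((residual ** 2).sum()
            + l1_weight * x.abs().sum()
            + tv_weight * tv_3d(x))
```

Swapping in the real forward operator does not change the structure of the loss; only the data term's `forward(x)` differs.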
I understand that as the number of variables grows, the system becomes more and more underdetermined: there are more possible solutions, so it is harder to find the best one. But can someone explain this from an optimization perspective? When the number of degrees of freedom grows, what exactly makes the optimization more difficult?
For example, are there more local minima? Or is there some other reason, such as the gradient direction becoming harder to estimate as the dimension grows?
Some details follow; they probably don't matter, but I include them for reference. When I solve this problem with FISTA, it works very well. However, when I solve it in PyTorch with an autograd-based optimizer like Adam (I only have one image measurement $I$, so I guess Adam degrades to gradient descent with momentum), the behaviour depends on size: when $x$ is not very large, e.g. 512×512×4, it is solved very well, but when $x$ becomes very large, like 512×512×100, it appears to fall into a local minimum.
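The Adam setup I am describing is essentially the following sketch (again, `forward` and the step count/learning rate are placeholders, not my actual model or tuned values):

```python
import torch

def reconstruct(I, forward, shape, steps=500, lr=1e-2):
    # Minimize || |f(x)|^2 - I ||_2^2 + ||x||_1 + TV(x) with Adam.
    # Small random init: at x = 0 the gradient of the data term
    # vanishes, so the iterate would never move.
    x = (0.01 * torch.randn(shape)).requires_grad_()
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        residual = forward(x).abs() ** 2 - I
        data_term = (residual ** 2).sum()
        l1 = x.abs().sum()
        tv = (x.diff(dim=0).abs().sum()
              + x.diff(dim=1).abs().sum()
              + x.diff(dim=2).abs().sum())
        (data_term + l1 + tv).backward()
        opt.step()
    return x.detach()
```

With a single measurement $I$ there is no stochasticity in the gradient, which is why I say Adam here behaves like full-batch gradient descent with momentum and adaptive step sizes.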