I have an understanding of how the solution of the following convex optimization problem behaves for particular data: $\min_x \|Ax-b\|_2 + \|x\|_1$, where $A$ is a known matrix and $b$ is a known vector, and we solve for $x$. I want to analyse the case when both $A$ and $b$ are transformed by a function $f$, so the new problem is $\min_x \|f(A)x-f(b)\|_2 + \|x\|_1$.
This question may be vague since I have not specified the function $f$. Is there a class of functions for which the solution would not deviate much from the original? Or is there an $f$ that is actually used in practice to obtain a better or more useful optimization problem compared to the original one? Specifically, if $f$ is non-linear, is there any application, or does the optimization problem become more attractive than the original for some non-linear $f$?
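For concreteness, here is a minimal numpy-only sketch of solving a closely related problem. Note the assumption: I use the common lasso variant with the *squared* data term, $\min_x \frac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$, rather than the unsquared norm in the post, and solve it with ISTA (proximal gradient); neither the solver choice nor the squaring is part of the original question. Applying a transform $f$ would just mean passing `f(A)` and `f(b)` in place of `A` and `b`.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=1.0, n_iter=500):
    """ISTA for min_x 0.5*||Ax-b||_2^2 + lam*||x||_1.

    Assumption: squared data-fit term (standard lasso), not the
    unsquared ||Ax-b||_2 written in the question.
    """
    # Step size 1/L, where L = ||A||_2^2 bounds the gradient's
    # Lipschitz constant for the smooth part.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy example: with A = I the minimizer is soft_threshold(b, lam),
# so small entries of b are zeroed out (sparsity from the l1 term).
A = np.eye(3)
b = np.array([3.0, 0.5, -2.0])
x = ista(A, b, lam=1.0)
print(x)  # → [ 2.  0. -1.]
```

To study a transformed instance, one would compare `ista(A, b)` against `ista(f(A), f(b))` for a chosen $f$ and examine how the support and magnitudes of the solution change.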