I have a vector of observations $\vec x_{\text{obs}}$ that have been measured with known uncertainties $\vec \sigma_{x}$.
I have a model $f$ that takes parameters $\vec \theta$ and produces values $f(\vec \theta) = \vec x_{\text{mod}}$ that can be compared with observations.
I want to find the parameters $\hat \theta$ that, assuming my model is correct, best reproduce the observed values $\vec x_{\text{obs}}$.
This is an optimization problem that I can solve with, for example, a genetic algorithm. However, in the end I obtain only a point estimate of $\vec \theta$.
Given that I have known uncertainties $\vec \sigma_{x}$ on my observations, and also given that the solution I find probably has some errors on it (i.e. most likely $||f(\vec \theta) - \vec x_{\text{obs}}|| \neq 0$), how can I calculate uncertainties $\vec \sigma_\theta$ on my model parameters $\vec \theta$?
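To make the setup concrete, here is a minimal sketch of the kind of fit I mean. The model, parameter values, and noise level are all hypothetical; the genetic-algorithm step is played by SciPy's `differential_evolution`, which likewise returns only a point estimate:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical setup: a model linear in two parameters theta = (a, b).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
theta_true = np.array([2.0, -1.0])

def f(theta):
    """Model prediction x_mod for given parameters."""
    a, b = theta
    return a * t + b * t**2

sigma_x = np.full(t.size, 0.05)                 # known observational uncertainties
x_obs = f(theta_true) + rng.normal(0.0, sigma_x)

def chi2(theta):
    """Uncertainty-weighted misfit between model and observations."""
    return np.sum(((f(theta) - x_obs) / sigma_x) ** 2)

# A stochastic global optimizer stands in for the genetic algorithm.
result = differential_evolution(chi2, bounds=[(-5, 5), (-5, 5)], seed=0)
theta_hat = result.x                            # point estimate; no sigma_theta yet
```

As the question notes, `theta_hat` comes with no uncertainty attached, even though `sigma_x` is known and the residual $||f(\hat\theta) - \vec x_{\text{obs}}||$ is nonzero.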
In general, the framework for estimating a parameter and assessing the quality of the estimate is the following: we have a parameter $\vec \theta$ to be estimated and a set of observations $x$, and we look for a statistic $\hat\theta=\hat\theta(x)$ that estimates $\vec \theta$. The core of the "quality control" lies in the fact that we know the distribution of some quantities (for example, the $x$) and we exploit it to obtain information (such as the standard deviation) about the statistic $\hat\theta$. This is why I highlighted the fact that $\hat\theta=\hat\theta(x)$ is a function of $x$.
In order to achieve your goal, you can try to emulate, for example, the steps used to construct a confidence interval (whose endpoints are actually two estimators) for the mean of a normally distributed sample.
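For reference, those steps can be sketched as follows for the textbook case. The sample here is simulated with made-up values; the interval endpoints are themselves functions of the data, which is exactly the point being made above:

```python
import numpy as np
from scipy import stats

# Simulated normal sample (true mean and scale are illustrative choices).
rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=50)

n = x.size
xbar = x.mean()                       # estimator of the mean
s = x.std(ddof=1)                     # sample standard deviation
tcrit = stats.t.ppf(0.975, df=n - 1)  # Student-t critical value for 95%

# The endpoints are statistics: they depend on the data through xbar and s,
# and their known sampling distribution is what justifies the 95% coverage.
lower = xbar - tcrit * s / np.sqrt(n)
upper = xbar + tcrit * s / np.sqrt(n)
```

The analogous program for the fitting problem is to find (or approximate, e.g. by bootstrap or by linearizing $f$ around $\hat\theta$) the sampling distribution of $\hat\theta(x)$ induced by the known $\vec\sigma_x$.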