Literature on mathematically representing "no knowledge" or "partial knowledge"


I'm looking for relevant literature on various ways to encode "no knowledge" or "partial knowledge".

Let's say a variable $e$ is known to take a value from a set $E$ (the "environment"), but we do not know which particular value it takes. Assume further that you want to maximise some function $f(x,e)$ (the "utility") that depends on both $x$ and $e$, where you have control over $x\in X$ (the "action") for some set $X$.

In this scenario, I'm looking for ways to model the unknown variable $e\in E$. One way I know of is to treat $e$ as a random variable over $E$ and endow it with a suitable prior distribution $\mathcal{D}$. Perhaps the Jeffreys prior could be used to represent "no information". Then you can maximise the marginalised utility

$\underset{x\in X}{\max}\,\,\underset{e\sim\mathcal{D}}{\mathbb{E}}\,[f(x,e)]$
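As a minimal sketch of this first approach, here is a Monte Carlo estimate of the marginalised utility, assuming a toy utility $f(x,e)=-(x-e)^2$ and a uniform prior on $E=[-1,1]$ (both made up for illustration; the question does not fix a particular $f$ or $\mathcal{D}$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): actions form a finite grid,
# and f(x, e) = -(x - e)^2, so the best action tracks the unknown e.
X = np.linspace(-1.0, 1.0, 21)           # candidate actions

def f(x, e):
    return -(x - e) ** 2                 # utility

# Treat e as a random variable with prior D; here a uniform prior on
# [-1, 1] stands in for a "no knowledge" prior over E.
e_samples = rng.uniform(-1.0, 1.0, size=10_000)

# Maximise the marginalised (expected) utility over actions:
# max_x E_{e~D}[f(x, e)], with the expectation estimated by averaging.
expected_utility = np.array([f(x, e_samples).mean() for x in X])
x_bayes = X[np.argmax(expected_utility)]
```

Under this uniform prior the expected utility is $-(x^2 + \mathrm{Var}(e))$, so the Bayes-optimal action sits at the prior mean $x=0$.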

A second method, used in game theory, is to look at the worst-case scenario: we assume there is an adversary choosing the worst possible environment against us, and we pick the best possible action in that case:

$\underset{x\in X}{\max}\,\,\underset{e\in E}{\min}\,\,[f(x,e)]$
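The maximin computation can be sketched on finite grids, reusing the same toy utility $f(x,e)=-(x-e)^2$ (an assumption for illustration, not part of the question):

```python
import numpy as np

# Finite grids for actions and for the environments the adversary may pick.
X = np.linspace(-1.0, 1.0, 21)
E = np.linspace(-1.0, 1.0, 21)

# Payoff matrix F[i, j] = f(X[i], E[j]) for the toy utility -(x - e)^2.
F = -(X[:, None] - E[None, :]) ** 2

# max over x of min over e: the adversary best-responds to each action,
# and we pick the action with the best worst case.
worst_case = F.min(axis=1)
x_robust = X[np.argmax(worst_case)]
```

For this symmetric toy problem the maximin action is the midpoint $x=0$, with guaranteed utility $-1$ no matter which $e$ the adversary picks.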

The prior is good at representing partial knowledge: the distribution can become more peaked, i.e. have lower entropy, as knowledge is gained. But this method relies on having a good prior, which may be a philosophical challenge in itself. The worst-case bound is good for deriving guarantees independent of the particular instance of $e\in E$, but the bound may be too pessimistic, in particular if there is some partial knowledge of $e$.

I'm wondering, in general, whether there is any literature on combining the notion of the worst-case bound from game theory with the Bayesian "lack of information" interpretation of probability to represent uncertainty.

There are 2 answers below.

BEST ANSWER

There is a robust control theory developed by economists Lars Peter Hansen and Thomas Sargent that takes into account the potential misspecification of the model of uncertainty. In your context, the uncertainty is about $e$, and the model of this uncertainty is either a suitable prior (your method one) or a pessimistic prior that puts probability 1 on $\underline{e}=\arg\min_{e\in E}f(x,e)$.

They have an article that introduces the theory, as well as a book that gives a more comprehensive treatment with various applications.

ANSWER

I followed the pointers @Herr K. provided and came across two fields of study that are highly related to my question: robust optimization (RO) and stochastic optimization (SO).

Both study optimization problems in which the data are uncertain. The difference is that RO constrains the uncertainty set in the data space, while SO constrains a set of probability measures over the data space. More precisely, in my scenario

$ \underset{x\in X}{\max}\,\,\underset{e\in E}{\min}\,\,[f(x,e)] $

is an instance of RO and

$ \underset{x\in X}{\max}\,\,\underset{e\sim\mathcal{D}}{\mathbb{E}}\,[f(x,e)] $

is an instance of SO. In fact, SO advocates a more general formulation of the above:

$ \underset{x\in X}{\max}\,\,\underset{\mathcal{D}\in \mathbb{D}}{\min}\,\,\underset{e\sim\mathcal{D}}{\mathbb{E}}\,[f(x,e)] $

where $\mathbb{D}$ is a set of measures over the environment space $E$. Note that this formulation covers RO as well: taking $\mathbb{D}$ to be the set of all point-mass (Dirac delta) measures over $E$, we recover the RO problem. So this formulation gives a way to interpolate between the first and second problems. This answers my question fully. For those eager to read more, I found this book helpful:
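The interpolation can be sketched numerically: with a finite grid for $E$ and a finite family $\mathbb{D}$ of candidate distributions, the inner minimisation is just a minimum over rows. The utility and grids below are made-up toy choices, not from the answer itself:

```python
import numpy as np

# Toy problem (illustrative): finite action and environment grids,
# payoff matrix F[i, j] = f(X[i], E[j]) with f(x, e) = -(x - e)^2.
X = np.linspace(-1.0, 1.0, 21)
E = np.linspace(-1.0, 1.0, 21)
F = -(X[:, None] - E[None, :]) ** 2

def dro_action(DD):
    """max_x min_{D in DD} E_{e~D}[f(x, e)].

    DD: array of shape (k, len(E)); each row is a distribution over E.
    """
    exp_util = F @ DD.T                  # (len(X), k): expected utility per D
    return X[np.argmax(exp_util.min(axis=1))]

# DD = {uniform prior} recovers the stochastic (Bayesian) problem ...
uniform = np.full((1, len(E)), 1.0 / len(E))
# ... while DD = all point masses on E recovers the robust worst case.
diracs = np.eye(len(E))

x_so = dro_action(uniform)
x_ro = dro_action(diracs)
```

Families $\mathbb{D}$ between these two extremes (e.g. all distributions within some divergence ball around a nominal prior) then trade off the two formulations.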

Ben-Tal, El Ghaoui, and Nemirovski. Robust Optimization. Princeton University Press, 2009.

but the field is huge and there are also many other good books on this topic.