Can we use Shannon entropy as a utility function?


Shannon entropy can explicitly be written as $$\mathrm {H} (X)=-\sum _{i=1}^{n}{\mathrm {P} (x_{i})\log _{2}\mathrm {P} (x_{i})}$$ where the discrete random variable ${\textstyle X}$ has possible values ${\textstyle \left\{x_{1},\ldots ,x_{n}\right\}}$ and probability mass function ${\textstyle \mathrm {P} (X)}$. The function $$h^+(p_1,\dots,p_n)=-\sum _{i=1}^{n} p_i \log _{2}p_i$$ is concave on the set $[0,1]\times\dots \times [0,1]$, while its negative $$h^-(p_1,\dots,p_n)=\sum _{i=1}^{n} p_i \log _{2}p_i$$ is convex on the same set. These properties got me thinking about the utility (and cost) functions used in game theory. Can these functions be used as utility or cost functions? And, if so, can you suggest some references in which this is done?
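To make the definitions concrete, here is a minimal Python sketch (the function name `shannon_entropy` is my own) that computes $\mathrm{H}(X)$ and spot-checks the concavity of $h^+$ at the midpoint of two distributions:

```python
import math

def shannon_entropy(p):
    """H(X) = -sum_i p_i log2 p_i, treating 0 * log2(0) as 0."""
    return sum(-pi * math.log2(pi) for pi in p if pi > 0)

# Entropy is maximal for the uniform distribution:
print(shannon_entropy([0.5, 0.5]))   # 1.0 (bit)
print(shannon_entropy([0.25] * 4))   # 2.0 (bits)

# Concavity of h^+ checked at the midpoint of two distributions:
p, q = [0.9, 0.1], [0.2, 0.8]
mid = [(a + b) / 2 for a, b in zip(p, q)]
assert shannon_entropy(mid) >= (shannon_entropy(p) + shannon_entropy(q)) / 2
```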

I searched on Google, but the only paper I found is an arXiv paper, "Utility function estimation: the entropy approach". However, it presents a way to estimate the utility function of an agent when only partial information about the decision maker's preferences is available, so it doesn't answer my question.


1 Answer

Best answer

Take a look at the information design literature, particularly the Bayesian persuasion framework (the seminal paper is "Bayesian Persuasion" by Kamenica and Gentzkow, 2011). It is basically a sender-receiver game in which an informed sender ("she") designs a communication protocol to induce the uninformed receiver ("he") to take the action that she values most. For instance, in "Costly Persuasion", Kamenica and Gentzkow (2014) assume that the sender faces a cost to reduce uncertainty with her communication protocol, and this cost is represented by Shannon entropy. Kamenica, Frankel, and Ely (2013), in "Suspense and Surprise", use the same approach to model uncertainty reduction, with Shannon entropy as one particular case of their model specification. The most recent discussion of entropy as a measure of uncertainty in decision problems is "Quantifying Information and Uncertainty" by Frankel and Kamenica (2018).
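As a rough illustration of the entropy-cost idea (this is my own toy sketch, not code from any of these papers), take a binary state and a hypothetical noisy signal; the entropy-based cost of the signal is the expected reduction in uncertainty from the prior to the posterior:

```python
import math

def entropy(p):
    """Shannon entropy in bits, treating 0 * log2(0) as 0."""
    return sum(-x * math.log2(x) for x in p if x > 0)

# Binary state with prior P(state=1) = 0.5.
prior = [0.5, 0.5]

# A hypothetical signal with realizations s0, s1.
# P(s1 | state=0), P(s1 | state=1): informative but noisy.
p_s1_given = [0.2, 0.8]

# Marginal probability of each signal realization.
p_s1 = sum(pr * ps for pr, ps in zip(prior, p_s1_given))
p_s0 = 1 - p_s1

# Posterior beliefs by Bayes' rule.
post_s1 = [pr * ps / p_s1 for pr, ps in zip(prior, p_s1_given)]
post_s0 = [pr * (1 - ps) / p_s0 for pr, ps in zip(prior, p_s1_given)]

# Entropy cost of the signal: prior entropy minus expected posterior
# entropy (equivalently, the mutual information between state and signal).
cost = entropy(prior) - (p_s0 * entropy(post_s0) + p_s1 * entropy(post_s1))
print(round(cost, 4))   # 0.2781
```

A fully revealing signal would cost the full prior entropy (here 1 bit), while an uninformative one would cost 0, which is why this functional is a natural cost of persuasion.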

You might also want to take a look at the rational inattention literature (the seminal paper is "Implications of Rational Inattention" by Sims, 2003). It drops the assumption that agents can perfectly process all the information they receive in a game or in a single-player decision problem. Information processing faces a physical constraint - called a Shannon channel - and must be performed efficiently. This literature is fundamentally based on information theory and entropy and is becoming increasingly relevant in macroeconomic theory.
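For intuition about the Shannon-channel constraint, here is a toy sketch (the joint distribution is made up) computing the mutual information $I(X;S)$ between a state and a signal via $I(X;S)=\mathrm{H}(X)+\mathrm{H}(S)-\mathrm{H}(X,S)$; this is the quantity that rational-inattention models bound or price:

```python
import math

def entropy(p):
    """Shannon entropy in bits, treating 0 * log2(0) as 0."""
    return sum(-x * math.log2(x) for x in p if x > 0)

# Hypothetical joint distribution P(state, signal) for a noisy channel:
# rows index the state, columns index the signal realization.
joint = [[0.45, 0.05],
         [0.15, 0.35]]

p_state  = [sum(row) for row in joint]        # marginal over states
p_signal = [sum(col) for col in zip(*joint)]  # marginal over signals
flat     = [x for row in joint for x in row]  # joint as a flat list

# Mutual information I(X; S) = H(X) + H(S) - H(X, S): the amount of
# information the agent actually processes about the state.
info = entropy(p_state) + entropy(p_signal) - entropy(flat)
print(round(info, 4))   # 0.2958
```

A rational-inattention problem then either caps this quantity (a channel-capacity constraint) or subtracts it from the payoff at a constant marginal cost per bit.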