Consider an $n$-player game in which each player exerts effort to increase their probability of winning. Let $x_i$ denote player $i$'s probability of winning and $e_i$ player $i$'s effort:
$$x_i=\frac{e_i}{\sum_{j\in N}e_j}$$
$i$'s payoff is
$$u_i=v_i\frac{e_i}{\sum_{j\in N}e_j}-e_i$$
Then there exists a vector $\mathbf{e} \in \mathbb{R}_+^n$ whose induced winning probabilities $x_i$ are the unique solution of the following convex optimization problem:
$$\text{maximize} \quad \sum_{i \in N}\hat{u}_i(x_i)$$ $$\text{subject to} \quad \sum_{i \in N}x_i\leq 1$$ $$\text{and} \quad x_i\geq 0$$
where $$\hat{u}_i(x_i)=v_ix_i\bigg(1-\frac{1}{2}x_i\bigg)$$
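As a sanity check on the claim, here is a short numerical sketch (the valuations $v_i$ are made up): it computes the Nash equilibrium of the contest by damped best-response iteration (the best response to aggregate opposing effort $S_{-i}$ follows from the first-order condition of $v_ie/(e+S_{-i})-e$) and compares the resulting winning probabilities with the numerical solution of the convex program above. The two do agree:

```python
import numpy as np
from scipy.optimize import minimize

v = np.array([2.0, 1.5, 1.0])  # made-up prize valuations v_i
n = len(v)

# --- Nash equilibrium of the contest via damped best-response iteration ---
e = np.full(n, 0.1)
for _ in range(5000):
    S_minus = e.sum() - e                                 # others' total effort
    br = np.maximum(0.0, np.sqrt(v * S_minus) - S_minus)  # FOC of v_i e/(e+S_-i) - e
    e = 0.5 * e + 0.5 * br                                # damping for stability
x_eq = e / e.sum()                                        # equilibrium winning probabilities

# --- Solve the convex program: max sum_i v_i x_i (1 - x_i/2) s.t. sum x_i <= 1, x >= 0 ---
obj = lambda x: -np.sum(v * x * (1 - 0.5 * x))
cons = [{"type": "ineq", "fun": lambda x: 1 - x.sum()}]
res = minimize(obj, np.full(n, 1.0 / n), bounds=[(0, None)] * n, constraints=cons)
x_opt = res.x

print(x_eq)   # matches x_opt to numerical tolerance
print(x_opt)
```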
Where did this $\hat{u}_i$ term come from?
Using $e_i = x_i\sum_{j\in N}e_j$, the payoff $u_i$ can be rewritten as $$v_ix_i-x_i\sum_{j\in N}e_j$$
But I can't see how $\hat{u}_i$ was arrived at. Any ideas?
Later, the following definition is given:
$$\hat{u}_i(x_i)=(1-x_i)u_i(x_i)+x_i\bigg(\frac{1}{x_i}\int_0^{x_i} u_i(z_i)\,dz_i\bigg)$$
Maybe this is useful for understanding the expression $\hat{u}_i$? But I can't figure out how this relates to the utility maximization problem.
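For what it's worth, if $u_i$ in that definition is read as the gross utility $u_i(z_i)=v_iz_i$ (my assumption; the effort cost is left out), the formula does reproduce $v_ix_i\big(1-\frac{1}{2}x_i\big)$ algebraically, as this symbolic check confirms. What I still don't see is why the resulting maximization problem characterizes the equilibrium.

```python
import sympy as sp

v, x, z = sp.symbols('v x z', positive=True)
u = v * z  # assumption: u_i here is the gross utility v_i z_i, without the effort cost

# (1 - x) u(x) + x * (1/x) * integral_0^x u(z) dz
u_hat = (1 - x) * u.subs(z, x) + x * (sp.integrate(u, (z, 0, x)) / x)

print(sp.simplify(u_hat - v * x * (1 - x / 2)))  # prints 0
```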