$f_1,\dots,f_N:\mathbb{R}^+\rightarrow\mathbb{R}^+$ are strictly increasing, bounded functions whose derivatives monotonically decrease to $0$ as their argument increases. (Picture the shape of the function $x/(x+1)$.)
We define: $$f(x_1,\dots,x_N):=\sum_{n=1}^N f_n(x_n).$$
Then $f$ is bounded and concave in ${\mathbb{R}^+}^N$, and $\nabla f$ has all positive components everywhere in the domain.
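As a quick sanity check, these properties can be probed numerically. The concrete instance $f_n(x)=x/(x+n)$ below is my own choice for illustration (it is strictly increasing, bounded by $1$, with derivative $n/(x+n)^2$ decreasing to $0$, matching the hypotheses); the snippet verifies midpoint concavity and positive partial derivatives at random points.

```python
# Sanity check of the stated properties of f for an assumed
# instance f_n(x) = x / (x + n) (my choice, not from the question).
import random

N = 3

def f(xs):
    # f(x_1,...,x_N) = sum_n x_n / (x_n + n); each summand is strictly
    # increasing, bounded by 1, with derivative n/(x_n + n)^2 -> 0.
    return sum(x / (x + n) for n, x in zip(range(1, N + 1), xs))

random.seed(0)
for _ in range(100):
    a = [random.uniform(0, 10) for _ in range(N)]
    b = [random.uniform(0, 10) for _ in range(N)]
    mid = [(ai + bi) / 2 for ai, bi in zip(a, b)]
    # midpoint concavity: f((a+b)/2) >= (f(a) + f(b)) / 2
    assert f(mid) >= (f(a) + f(b)) / 2 - 1e-12
    # all partial derivatives positive (forward differences)
    h = 1e-6
    for i in range(N):
        bumped = list(a)
        bumped[i] += h
        assert f(bumped) > f(a)
print("concavity and monotonicity checks passed")
```

Since $f$ is a sum of concave one-variable functions, midpoint concavity holds exactly; the tolerance only absorbs floating-point noise.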
For $P>0$, define the simplex $S_P\subset \mathbb{R}^N$ to be the set of all vectors whose components are nonnegative and sum to at most $P$: $$S_P=\left\{(r_1,\dots,r_N)\left| \sum_{n=1}^N r_n \leq P, \ r_n\geq 0 \ \forall n=1,\dots,N\right.\right\}.$$
If we find the max of the function over the simplex: $$\vec{x}_{\max} := \displaystyle \arg \max_{\vec{x}\in S_P} f(\vec{x}),$$ how do we show that all components of $\vec{x}_{\max} $ tend to infinity as $P\rightarrow \infty$?
This is a seemingly obvious fact, and I think there should be a very short proof. However, I am not very experienced in convex optimization and cannot find one.
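For intuition, here is a small numerical sketch of the problem for $N=2$, using the assumed symmetric instance $f_1(x)=f_2(x)=x/(x+1)$ (my choice, not from the question). Since the $f_n$ are increasing, the maximum is attained where $x_1+x_2=P$, and concavity makes the one-dimensional restriction unimodal, so a ternary search locates the maximizer.

```python
# Numerical sketch of the question's setup for N = 2.
# Assumption (mine): f_1(x) = f_2(x) = x / (x + 1), which is strictly
# increasing, bounded, with derivative decreasing to 0.

def f1(x):
    return x / (x + 1.0)

def f2(x):
    return x / (x + 1.0)

def argmax_on_simplex(P, iters=200):
    """Maximize g(t) = f1(t) + f2(P - t) over t in [0, P].

    The maximum over the simplex lies on the face x1 + x2 = P because
    the f_n are increasing; g is concave, so ternary search converges
    to its maximizer."""
    lo, hi = 0.0, P
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f1(m1) + f2(P - m1) < f1(m2) + f2(P - m2):
            lo = m1
        else:
            hi = m2
    t = 0.5 * (lo + hi)
    return t, P - t

for P in (1.0, 10.0, 100.0, 1000.0):
    x1, x2 = argmax_on_simplex(P)
    print(f"P = {P:7.1f}  ->  x_max = ({x1:.3f}, {x2:.3f})")
```

For this symmetric choice the maximizer is $(P/2, P/2)$, so both components grow linearly with $P$, consistent with the conjecture in this symmetric case.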
It's true that some of the $x_i$ will tend to $\infty$ as $P$ does; however, it is not always true that all of them will.
Intuitively, because the functions are increasing, the solutions must always lie on the outer face of the simplex
$$\Sigma_P:=\left\{x:\sum_i x_i=P\right\}.$$
A quick argument: suppose a solution $y$ does not, i.e.
$$\sum_i y_i = P-\varepsilon$$
for some $\varepsilon>0$. Then $\tilde{y}:=(y_1+\varepsilon,y_2,\dots,y_N)\in \Sigma_P\subset S_P$ and $f(\tilde{y})>f(y)$ since $f_1$ is strictly increasing, which contradicts the optimality of $y$.
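The displacement step can be checked numerically; the instance $f_n(x)=x/(x+1)$ and the point $y$ below are assumptions of mine for illustration:

```python
# Numerical check of the slack argument: if y leaves eps of the budget
# unused, moving that eps into the first coordinate increases f.
# Assumed instance (mine): f_n(x) = x / (x + 1) for every n.
def f(xs):
    return sum(x / (x + 1.0) for x in xs)

P = 10.0
y = (2.0, 3.0)                  # uses only 5 of the budget P = 10
eps = P - sum(y)                # leftover slack, eps = 5 > 0
y_tilde = (y[0] + eps, y[1])    # feasible: components sum to exactly P
assert f(y_tilde) > f(y)        # y_tilde beats y, so y is not optimal
```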
So if $y^P$ is a solution of the problem over $S_P$, then $\sum_i y^P_i=P$. Letting $P$ tend to infinity, for any number $C\geq 0$ we can always find a $P$ and an $i$ such that $y^P_i\geq C$: indeed, for $P\geq NC$ the largest coordinate is at least $P/N\geq C$. (Note that which coordinate this is may change with $P$, or at least we have not proved otherwise yet.)
For the counterexample, take $N=2$ and $f_1(x)=2f_2(x)=\frac{2x}{1+x}$, i.e. $f_1(x)=\frac{2x}{1+x}$ and $f_2(x)=\frac{x}{1+x}$. From before we have that $x^P_1+x_2^P=P$ for any optimal point. Thus
$$f(x^P)=f_1(x_1^P)+f_2(P-x_1^P)=\frac{2x_1^P}{1+x_1^P}+\frac{P-x_1^P}{1+P-x_1^P},$$
the right-hand side of which is a strictly increasing function of $x_1^P$ (just check that the first derivative is greater than zero everywhere). Hence $x^P=(P,0)$.
To wrap up: even though it is true that some of the coordinates must tend to infinity as $P$ does, it is not true that all of them must. What can go wrong is that some of the coordinates are zero (or tend to zero, or to some other constant). This depends on how "big" the different $f_i$ are in comparison with each other. For example, if the $i$th one is smaller than all the others everywhere, then any optimal solution will have its $i$th coordinate equal to zero.