Problem: Given $c \in \mathbb{R}_+^n$, $a \in \mathbb{R}_+^n$ and $\gamma \in \mathbb{R}_+$, design an algorithm which, in $O(n \log n)$ operations, computes the optimal solution $x^*$ to the following linear program:
$$\begin{array}{cl} \max & c^T x \\ \text{s.t.} & a^T x \leq \gamma \\ & 0 \leq x_i \leq 1, \quad \forall i \in [n] \end{array}$$
You may assume that a set of $n$ real numbers can be sorted in time $O ( n \log n )$ and that each arithmetic operation takes constant time.
I tried multiple things without much success.
Start with a feasible solution, for example $0^T$, and at iteration $k$ add $(1/2)^k$ to each coordinate until the constraints are no longer satisfied. I'm not sure this approach would run in $O(n \log n)$.
Compute the dual of the LP; by strong duality, the optimal objective value of the dual equals the optimal objective value of the primal. However, I'm not sure how solving the dual would be any easier.
Lastly, I tried to see whether the simplex method combined with an adequate sorting would work, but the simplex method comes with no $O(n \log n)$ running-time guarantee.
Reply to query by OP (too long for a comment)
If you assume not merely that $c^T x^* < c^T y^*$, but that $y^*$ is also an optimal solution (which must exist, because the objective function is continuous and the set of feasible points is compact), you can get your contradiction by showing that the value of the objective can be improved by modifying $y^*$. Taking your $y_{j_0}^* > 0$, for instance, with $j_0 \in \left\{i_{k+2}, \dots, i_n\right\}$, we must have $x_{j_0}^* = 0$, and therefore $y_{i_j}^* < x_{i_j}^* \le 1$ for some $j = 1, 2, \dots, k+1$ (because $\sum\limits_{i=1}^n a_i y_i^* \le \min\left(\gamma, \sum\limits_{i=1}^n a_i\right) = \sum\limits_{i=1}^n a_i x_i^*$). So now, if we decrease $y_{j_0}^*$ by $\frac{\delta}{a_{j_0}}$ and increase $y_{i_j}^*$ by $\frac{\delta}{a_{i_j}}$, where $\delta = \min\left(a_{j_0} y_{j_0}^*, a_{i_j}\left(1 - y_{i_j}^*\right)\right)$, then all the constraints will still be satisfied, and the objective function will increase by $\left(\frac{c_{i_j}}{a_{i_j}} - \frac{c_{j_0}}{a_{j_0}}\right)\delta$, which is positive from the way the ratios $\frac{c_{i_j}}{a_{i_j}}$ are ordered. This contradicts the supposed optimality of $y^*$.
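The exchange step can be checked numerically. Here is a small sketch of it on a made-up instance (the numbers, and the choices $j_0 = 2$, $i_j = 1$, are mine, purely for illustration; they are not from the problem):

```python
# Hypothetical instance: ratios c_i/a_i are 3, 2, 1, so the
# sorted order i_1, i_2, i_3 is simply 0, 1, 2.
c = [6.0, 4.0, 3.0]
a = [2.0, 2.0, 3.0]
gamma = 3.0

obj = lambda v: sum(ci * vi for ci, vi in zip(c, v))
weight = lambda v: sum(ai * vi for ai, vi in zip(a, v))

# A feasible candidate y that puts weight on the worst ratio (j_0 = 2).
y = [1.0, 0.0, 1.0 / 3.0]
assert weight(y) <= gamma

# The exchange step: shift weight from j_0 to some i_j with y_{i_j} < 1.
j0, ij = 2, 1
delta = min(a[j0] * y[j0], a[ij] * (1.0 - y[ij]))
before = obj(y)
y[j0] -= delta / a[j0]
y[ij] += delta / a[ij]

# Feasibility is preserved, and the objective strictly increases by
# (c_{i_j}/a_{i_j} - c_{j_0}/a_{j_0}) * delta, as in the argument above.
assert weight(y) <= gamma and all(0.0 <= v <= 1.0 for v in y)
assert obj(y) > before
assert abs((obj(y) - before) - (c[ij] / a[ij] - c[j0] / a[j0]) * delta) < 1e-9
```

On this instance the exchange turns $y$ into the greedy solution itself, raising the objective from $7$ to $8$.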
There's still a good deal of i-dotting and t-crossing necessary to complete the proof, and it's probably easier to prove it by showing that the dual: $$\begin{array}{cll} \text{Minimise} & \lambda_0 \gamma + \sum\limits_{i=1}^n \lambda_i & \\ \text{subject to} & \lambda_0 a^T + \lambda^T \ge c^T & \\ \text{and} & \lambda_i \ge 0 & \text{for } i = 0, 1, \dots, n \end{array}$$ has a feasible solution: $$\begin{array}{ccll} \lambda_0^* &=& \frac{c_{i_{k+1}}}{a_{i_{k+1}}} & \\ \lambda_{i_j}^* &=& c_{i_j} - \frac{a_{i_j} c_{i_{k+1}}}{a_{i_{k+1}}} & \text{for } j = 1, 2, \dots, k \\ \lambda_{i_j}^* &=& 0 & \text{for } j = k+1, k+2, \dots, n \end{array}$$ with $\lambda_0^* \gamma + \sum\limits_{i=1}^n \lambda_i^* = c^T x^*$, which implies that both $\lambda^*$ and $x^*$ are optimal for their respective programs.
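For completeness, the $O(n \log n)$ greedy construction of $x^*$ and the dual certificate above can be sketched as follows (a minimal Python sketch; all function and variable names are mine, and it assumes the budget constraint binds, i.e. $\gamma \le \sum_i a_i$, as in the setting above):

```python
def solve_lp(c, a, gamma):
    """Greedy: set x_i = 1 in decreasing order of c_i/a_i while capacity
    remains; the first index that does not fit gets a fractional value.
    Dominated by the sort, hence O(n log n)."""
    n = len(c)
    order = sorted(range(n), key=lambda i: c[i] / a[i], reverse=True)
    x = [0.0] * n
    remaining = gamma
    for i in order:
        if a[i] <= remaining:
            x[i] = 1.0                 # coordinates i_1, ..., i_k
            remaining -= a[i]
        else:
            x[i] = remaining / a[i]    # the fractional coordinate i_{k+1}
            break
    return x, order

def dual_certificate(c, a, x, order):
    """The dual solution from the answer: lambda_0 is the critical ratio
    c_{i_{k+1}}/a_{i_{k+1}}, and lambda_i = max(c_i - lambda_0 * a_i, 0)."""
    crit = next((i for i in order if x[i] < 1.0), order[-1])
    lam0 = c[crit] / a[crit]
    lam = [max(c[i] - lam0 * a[i], 0.0) for i in range(len(c))]
    return lam0, lam

# Example: the primal and dual objective values agree (strong duality).
c, a, gamma = [6.0, 4.0, 3.0], [2.0, 2.0, 3.0], 3.0
x, order = solve_lp(c, a, gamma)
lam0, lam = dual_certificate(c, a, x, order)
primal = sum(ci * xi for ci, xi in zip(c, x))
dual = lam0 * gamma + sum(lam)
```

On this example $x = (1, \tfrac12, 0)$ and both objective values equal $8$, matching the equality $\lambda_0^* \gamma + \sum_i \lambda_i^* = c^T x^*$ claimed above.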