I am writing code to solve an optimization problem defined as
$$\begin{array}{cl} \text{maximize} & f(x) \\ \text{subject to} & x\in\mathcal{X}, \end{array}$$
where $f:\mathbb{R}^n\to\mathbb{R}$ is concave with respect to $x\in\mathbb{R}^n$, and $\mathcal{X}\subseteq\mathbb{R}^n$ is a simplex set, e.g.,
$$\mathcal{X}=\left\{ x\in\mathbb{R}^n : x_i \ge 0,\ \sum_i x_i \le c\right\}.$$
To solve it, I wrote code using the Frank-Wolfe method (a.k.a. the conditional gradient method). However, many papers dealing with convex problems state something like: "Since the above problem is convex, it can be solved by any convex programming tool, e.g., an interior-point method."
Why do many authors mention the interior-point method rather than the conditional gradient method? I think both methods can solve constrained convex problems, and the main difference between them is whether the algorithm is based on gradients or on Hessians.
Is there a special reason that many authors mention only the interior-point method? If the interior-point method is better than the Frank-Wolfe method, I will rewrite my code to use an interior-point method instead.
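For concreteness, here is a minimal sketch of what a Frank-Wolfe loop for this problem might look like. The objective $f(x)=-\|x-a\|^2$, the vector `a`, the budget `c`, and the open-loop step size $\gamma_k = 2/(k+2)$ are all illustrative choices, not part of the question:

```python
import numpy as np

def frank_wolfe(grad_f, n, c, iters=5000):
    """Maximize a concave f over {x >= 0, sum(x) <= c} by conditional gradient."""
    x = np.zeros(n)  # feasible starting point
    for k in range(iters):
        g = grad_f(x)
        # Linear maximization oracle over this simplex: the maximizer of <g, s>
        # is c * e_i for the largest positive gradient entry, or 0 if none.
        i = np.argmax(g)
        s = np.zeros(n)
        if g[i] > 0:
            s[i] = c
        gamma = 2.0 / (k + 2)    # standard open-loop step size
        x = x + gamma * (s - x)  # convex combination, so x stays feasible
    return x

# Toy example: f(x) = -||x - a||^2, so grad f(x) = 2 (a - x).
a = np.array([0.5, 1.0, 2.0])
x_star = frank_wolfe(lambda x: 2 * (a - x), n=3, c=1.0)
```

Note that every iterate is a convex combination of feasible points, so no projection is ever needed; this is exactly the property the question is asking about.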
In my humble opinion, the Frank-Wolfe method is preferred when the projection onto the feasible set is expensive or difficult to compute (see, e.g., this slide: Frank-Wolfe Method). However, the projection onto a simplex can often be computed directly (e.g., Orthogonal Projection onto the Unit Simplex), so the projected gradient method is also a reasonable choice. In general, the convergence rates of the projected gradient method and the Frank-Wolfe method are both $O(1/k)$, so we cannot really say which one is better.
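As an illustration, the projection onto $\{x \ge 0, \sum_i x_i \le c\}$ can be computed in $O(n\log n)$ with the standard sorting-based procedure. This is a sketch of that well-known algorithm, not code from the linked answer:

```python
import numpy as np

def project_simplex(y, c=1.0):
    """Euclidean projection of y onto {x >= 0, sum(x) <= c}."""
    x = np.maximum(y, 0.0)
    if x.sum() <= c:
        return x  # sum constraint inactive; clipping at zero suffices
    # Otherwise project onto {x >= 0, sum(x) = c} by finding the shift theta
    # such that sum(max(y - theta, 0)) = c (sorting-based procedure).
    u = np.sort(y)[::-1]                       # entries in decreasing order
    css = np.cumsum(u)
    idx = np.arange(1, len(y) + 1)
    rho = np.nonzero(u * idx > css - c)[0][-1]
    theta = (css[rho] - c) / (rho + 1.0)
    return np.maximum(y - theta, 0.0)

# One projected-gradient ascent step for a concave f would then be:
#   x = project_simplex(x + eta * grad_f(x), c)
```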
The convergence rate of the interior-point method is superlinear, but it requires second-order information (the Hessian) of $f$. If the dimension of the problem is not very high and the Hessian of $f$ is easy to compute, the interior-point method is the recommended choice.
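For example, if the problem is small and derivatives are available, SciPy's `trust-constr` solver (a trust-region method closely related to interior-point solvers) handles this problem directly. The objective below is the same illustrative $f(x)=-\|x-a\|^2$, not your actual $f$:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

n, c = 3, 1.0
a = np.array([0.5, 1.0, 2.0])

# SciPy minimizes, so pass -f (which is convex) and its gradient.
neg_f = lambda x: np.sum((x - a) ** 2)
neg_grad = lambda x: 2.0 * (x - a)

res = minimize(
    neg_f, x0=np.zeros(n), jac=neg_grad, method="trust-constr",
    bounds=[(0.0, None)] * n,                              # x_i >= 0
    constraints=LinearConstraint(np.ones(n), -np.inf, c),  # sum(x) <= c
)
```

Passing the Hessian as well (via the `hess` argument) would let the solver exploit second-order information fully, which is exactly the trade-off discussed above.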