Let $n\in\mathbb{N}$. Let $\sigma_k$ ($0\leq k\leq n$) be the elementary symmetric polynomials: \begin{align} \sigma_0(x_1,\ldots,x_n):&=1 \\ \sigma_k(x_1,\ldots,x_n):&=\sum_{1\leq i_1<\cdots<i_k\leq n}x_{i_1}\cdots x_{i_k},\qquad\forall 1\leq k\leq n \end{align} I am interested in showing that each of the quotients \begin{align} f=\frac{\sigma_k}{\sigma_{k-1}},\qquad 1\leq k\leq n \end{align} is monotone, in the sense that \begin{align} \frac{\partial f}{\partial x_i}>0,\qquad\forall 1\leq i\leq n \end{align} (Surely, I do not intend for this to hold everywhere in $\mathbb{R}^n$. Let us thus assume that the domain of $f$ is the open set $\mathcal{C}:=\left\{x\in\mathbb{R}^n:\sigma_k(x)>0\right\}$. By Maclaurin's inequality this will imply that $\sigma_{k-1}>0$, so $f$ is well-defined on $\mathcal{C}$.)
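Before trying to prove this, here is a quick numerical sanity check of the claimed monotonicity, written as a Python sketch (not part of any argument; `sigma` and `f` are helpers I introduce here, and I use the safer domain where both $\sigma_k>0$ and $\sigma_{k-1}>0$). Since $\sigma_k$ and $\sigma_{k-1}$ are affine in $x_i$, if both endpoints of a unit step in the $x_i$ direction lie in that domain, so does the whole segment.

```python
# Numerical sanity check of the claimed monotonicity, on the domain
# where both sigma_k > 0 and sigma_{k-1} > 0 (an assumption of this sketch).
import random
from fractions import Fraction
from itertools import combinations
from math import prod

def sigma(k, xs):
    # elementary symmetric polynomial sigma_k; sigma_0 = 1 (empty product)
    return sum(prod(c) for c in combinations(xs, k))

def f(k, xs):
    # the quotient sigma_k / sigma_{k-1}, as an exact rational number
    return Fraction(sigma(k, xs), sigma(k - 1, xs))

random.seed(0)
n, k = 5, 3
checked = 0
while checked < 200:
    x = [random.randint(-4, 9) for _ in range(n)]
    if sigma(k, x) <= 0 or sigma(k - 1, x) <= 0:
        continue  # outside the domain
    i = random.randrange(n)
    y = x[:]
    y[i] += 1  # move in the positive x_i direction
    if sigma(k, y) <= 0 or sigma(k - 1, y) <= 0:
        continue  # keep the other endpoint inside the domain too
    # sigma_k and sigma_{k-1} are affine in x_i, so the whole segment
    # from x to y stays in the domain; monotonicity predicts f(y) > f(x)
    assert f(k, y) > f(k, x), (x, i)
    checked += 1
print("monotonicity held on", checked, "sample pairs")
```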
By the quotient rule, we have \begin{align} \frac{\partial f}{\partial x_i}&=\frac{1}{\sigma_{k-1}^2}\left(\sigma_{k-1}\frac{\partial\sigma_k}{\partial x_i}-\sigma_k\frac{\partial\sigma_{k-1}}{\partial x_i}\right) \\ &=\frac{1}{\sigma_{k-1}^2}\left[\sigma_{k-1}\sigma_{k-1}(x_1,\ldots,\hat{x_i},\ldots,x_n)-\sigma_k\sigma_{k-2}(x_1,\ldots,\hat{x_i},\ldots,x_n)\right] \end{align} where the hat symbol denotes the absence of that component. To show the monotonicity, it remains to show that \begin{align} \sigma_{k-1}\sigma_{k-1}(x_1,\ldots,\hat{x_i},\ldots,x_n)>\sigma_k\sigma_{k-2}(x_1,\ldots,\hat{x_i},\ldots,x_n) & & (*) \end{align} If all the $x_j$'s are nonnegative, then (*) can be shown easily by a simple counting argument. However, our domain is the larger set $\mathcal{C}$, which may contain elements $(x_1,\ldots,x_n)$ in which some $x_j$'s are negative. Thus I am looking for an approach to tackle this problem.
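The formula $\partial\sigma_k/\partial x_i=\sigma_{k-1}(x_1,\ldots,\hat{x_i},\ldots,x_n)$ used above can itself be spot-checked exactly: since $\sigma_k$ is affine in $x_i$, its partial derivative equals a unit finite difference. A sketch (with `sigma` a helper I define here):

```python
# Check: d(sigma_k)/dx_i equals sigma_{k-1} with x_i removed.  Because
# sigma_k is affine in x_i, the exact derivative is a unit finite difference.
import random
from itertools import combinations
from math import prod

def sigma(k, xs):
    # elementary symmetric polynomial sigma_k; sigma_0 = 1
    return sum(prod(c) for c in combinations(xs, k))

random.seed(1)
n = 6
for _ in range(100):
    x = [random.randint(-5, 5) for _ in range(n)]
    for i in range(n):
        hat = x[:i] + x[i + 1:]  # x with the i-th entry removed
        for k in range(1, n + 1):
            bumped = x[:]
            bumped[i] += 1
            slope = sigma(k, bumped) - sigma(k, x)  # exact partial derivative
            assert slope == sigma(k - 1, hat)
print("derivative identity verified")
```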
Any comment, hint, suggestion, or answer is greatly appreciated.
Edit: Since Maclaurin's inequality can only be used if we assume that all $x_j$'s are already positive, let us modify the domain of $f$ to be $\{x\in\mathbb{R}^n:\sigma_k(x)>0\text{ and }\sigma_{k-1}(x)>0\}$. This is an intersection of two open sets, hence it is still open.
For simplicity, for each $1\leq i\leq n$ let us write $\sigma_k(\hat{i}):=\sigma_k(x_1,\ldots,\hat{x_i},\ldots,x_n)$. Then for each $1\leq k\leq n-1$, one has the identity \begin{align} \sigma_k=\sigma_{k-1}(\hat{i})x_i+\sigma_k(\hat{i}) \end{align} (Just split the sum in the definition of $\sigma_k$ into two parts, according to whether a term has $x_i$ as a factor.) For $k=n$ one has an even simpler identity: \begin{align} \sigma_n=\sigma_{n-1}(\hat{i})x_i \end{align}
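These splitting identities are easy to confirm numerically; note that the $k=n$ case is really the same identity, since $\sigma_n(\hat{i})$, a degree-$n$ elementary symmetric polynomial in only $n-1$ variables, is an empty sum. A sketch (with `sigma` a helper I define here):

```python
# Check the splitting identity sigma_k = sigma_{k-1}(i-hat)*x_i + sigma_k(i-hat),
# which for k = n reduces to sigma_n = sigma_{n-1}(i-hat)*x_i.
import random
from itertools import combinations
from math import prod

def sigma(k, xs):
    # elementary symmetric polynomial; sigma_0 = 1, and sigma_k = 0
    # when k exceeds the number of variables (empty sum)
    return sum(prod(c) for c in combinations(xs, k))

random.seed(2)
n = 6
for _ in range(100):
    x = [random.randint(-5, 5) for _ in range(n)]
    for i in range(n):
        hat = x[:i] + x[i + 1:]
        for k in range(1, n + 1):
            assert sigma(k, x) == sigma(k - 1, hat) * x[i] + sigma(k, hat)
        assert sigma(n, x) == sigma(n - 1, hat) * x[i]  # the k = n case
print("splitting identities verified")
```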
For $2\leq k\leq n-1$, the numerator in the expression for $\partial f/\partial x_i$ is then \begin{align} \sigma_{k-1}\sigma_{k-1}(\hat{i})-\sigma_k\sigma_{k-2}(\hat{i}) &=\big[\sigma_{k-2}(\hat{i})x_i+\sigma_{k-1}(\hat{i})\big]\sigma_{k-1}(\hat{i}) -\big[\sigma_{k-1}(\hat{i})x_i+\sigma_k(\hat{i})\big]\sigma_{k-2}(\hat{i}) \\ &=\sigma_{k-1}(\hat{i})^2-\sigma_k(\hat{i})\sigma_{k-2}(\hat{i}) +x_i\big[\sigma_{k-2}(\hat{i})\sigma_{k-1}(\hat{i})-\sigma_{k-1}(\hat{i})\sigma_{k-2}(\hat{i})\big] \\ &=\sigma_{k-1}(\hat{i})^2-\sigma_k(\hat{i})\sigma_{k-2}(\hat{i}) \end{align} Note that the last expression is a polynomial which does not involve $x_i$ at all. Thus let us rename $x_1,\ldots,\hat{x_i},\ldots,x_n$ as $y_1,\ldots, y_{n-1}$, so that \begin{align} \sigma_k(\hat{i})=\sigma_k(y_1,\ldots,y_{n-1}) \end{align} and in what follows I will drop the '$(\hat{i})$'. Next, denote the normalization of $\sigma_k$ by \begin{align} S_k:=\frac{\sigma_k}{C_{n-1,k}}\qquad\text{where }C_{n-1,k}:=\frac{(n-1)!}{k!(n-1-k)!} \end{align} Understood as a polynomial in $y_1,\ldots,y_{n-1}$, the numerator above is now \begin{align} \sigma_{k-1}^2-\sigma_k\sigma_{k-2} &=\big(C_{n-1,k-1}S_{k-1}\big)^2-\big(C_{n-1,k}S_k\big)\big(C_{n-1,k-2}S_{k-2}\big) \\ &=\underbrace{\frac{[(n-1)!]^2}{k!(n-1-k)!(k-2)!(n-k+1)!}}_{:=M}\left[\frac{k}{k-1}\frac{n-k+1}{n-k}S_{k-1}^2-S_kS_{k-2}\right] \end{align} In our domain of $f$, we have $\sigma_{k-1}>0$ (to be more explicit, \begin{align} \sigma_{k-1}(y_1,\ldots,y_{n-1})=\sigma_{k-1}(x_1,\ldots,x_n)\bigg|_{x_i=0}>0 \end{align} as long as $(x_1,\ldots,0,\ldots,x_n)\in \mathcal{C}$), hence $S_{k-1}>0$ as well. Since $\frac{k}{k-1}\frac{n-k+1}{n-k}>1$, it follows that \begin{align} \text{numerator}>M\big(S_{k-1}^2-S_kS_{k-2}\big) \end{align} and this last expression is nonnegative by Newton's inequalities. This proves that $\displaystyle\frac{\partial f}{\partial x_i}>0$.
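Both steps above, the cancellation of the $x_i$ terms and the appeal to Newton's inequalities (which hold for arbitrary real inputs), can be spot-checked in exact arithmetic. A sketch (with `sigma` and the normalized `S` as helpers I define here):

```python
# Check (i) that the numerator sigma_{k-1}*sigma_{k-1}(i-hat)
#   - sigma_k*sigma_{k-2}(i-hat) no longer involves x_i, and
# (ii) Newton's inequality S_{k-1}^2 >= S_k * S_{k-2} in n-1 variables.
import random
from fractions import Fraction
from itertools import combinations
from math import comb, prod

def sigma(k, xs):
    # elementary symmetric polynomial sigma_k; sigma_0 = 1
    return sum(prod(c) for c in combinations(xs, k))

def S(k, ys):
    # normalized elementary symmetric polynomial sigma_k / C(len(ys), k)
    return Fraction(sigma(k, ys)) / comb(len(ys), k)

random.seed(3)
n = 6
for _ in range(100):
    x = [random.randint(-5, 5) for _ in range(n)]
    i = random.randrange(n)
    y = x[:i] + x[i + 1:]  # the n-1 variables y_1, ..., y_{n-1}
    for k in range(2, n):
        # (i) the numerator reduces to an expression free of x_i
        lhs = sigma(k - 1, x) * sigma(k - 1, y) - sigma(k, x) * sigma(k - 2, y)
        assert lhs == sigma(k - 1, y) ** 2 - sigma(k, y) * sigma(k - 2, y)
        # (ii) Newton's inequality, valid for arbitrary real inputs
        assert S(k - 1, y) ** 2 >= S(k, y) * S(k - 2, y)
print("reduction and Newton's inequality verified")
```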
For $k=n$, things are even simpler. The numerator is \begin{align} \sigma_{n-1}\sigma_{n-1}(\hat{i})-\sigma_n\sigma_{n-2}(\hat{i}) &=\big[\sigma_{n-2}(\hat{i})x_i+\sigma_{n-1}(\hat{i})\big]\sigma_{n-1}(\hat{i}) -\big[\sigma_{n-1}(\hat{i})x_i\big]\sigma_{n-2}(\hat{i}) \\ &=\sigma_{n-1}(\hat{i})^2 +x_i\big[\sigma_{n-2}(\hat{i})\sigma_{n-1}(\hat{i})-\sigma_{n-1}(\hat{i})\sigma_{n-2}(\hat{i})\big] \\ &=\sigma_{n-1}(\hat{i})^2 \\ &>0 \end{align} so again, we get $\displaystyle\frac{\partial f}{\partial x_i}>0$.
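This collapse of the $k=n$ numerator to a perfect square is likewise easy to confirm numerically. A sketch (with `sigma` a helper I define here):

```python
# Check: sigma_{n-1} * sigma_{n-1}(i-hat) - sigma_n * sigma_{n-2}(i-hat)
#        == sigma_{n-1}(i-hat)^2
import random
from itertools import combinations
from math import prod

def sigma(k, xs):
    # elementary symmetric polynomial sigma_k; sigma_0 = 1
    return sum(prod(c) for c in combinations(xs, k))

random.seed(4)
n = 6
for _ in range(200):
    x = [random.randint(-5, 5) for _ in range(n)]
    i = random.randrange(n)
    y = x[:i] + x[i + 1:]
    lhs = sigma(n - 1, x) * sigma(n - 1, y) - sigma(n, x) * sigma(n - 2, y)
    assert lhs == sigma(n - 1, y) ** 2  # a perfect square, hence >= 0
print("k = n case verified")
```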
Finally, the case $k=1$ is easy: now $f=\sigma_1/\sigma_0=x_1+\cdots+x_n$, so $\displaystyle\frac{\partial f}{\partial x_i}=1>0$.