After showing that the set $K_{1} \subset \mathbb{R}^{n}$ defined as $$ K_{1} = \{ x = (x_{1}, \cdots , x_{n})^{T} \in \mathbb{R}^{n} : x_{1} \leq x_{2} \leq x_{3} \leq \cdots \leq x_{n} \}$$ is a convex cone, which I've done, I need to determine its polar cone.
In other words, I want to find the set of $y \in \mathbb{R}^{n}$ such that $$\langle y, x \rangle = \begin{pmatrix}y_{1}& y_{2} & \cdots & y_{n} \end{pmatrix} \begin{pmatrix}x_{1}\\ x_{2} \\ \vdots \\ x_{n} \end{pmatrix} = y_{1}x_{1}+y_{2}x_{2}+\cdots + y_{n}x_{n} \leq 0, \quad \forall x \in K_{1}.$$
All $y$ that satisfy this inequality belong to the polar cone $K_{1}^{\circ}$.
Farkas' Lemma states that: Let $A$ be an $m \times n$ matrix and let $$ K=\{x \in \mathbb{R}^{n}: Ax \leq 0 \}. $$ Then, $$ K^{\circ} = \{ y \in \mathbb{R}^{n}: y = A^{T}\lambda, \, \lambda \in \mathbb{R}^{m}, \, \lambda \geq 0\}. $$
To that end, I consider the following $(n-1) \times n$ matrix $A$: $$ A = \begin{pmatrix} 1 & -1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & -1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & -1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots& & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & -1 \end{pmatrix}.$$
Then, $Ax = \begin{pmatrix} 1 & -1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & -1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & -1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots& & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & -1 \end{pmatrix}\begin{pmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \vdots \\ x_{n}\end{pmatrix} = \begin{pmatrix} x_{1}-x_{2} \\ x_{2}-x_{3} \\ x_{3}-x_{4} \\ \vdots \\ x_{n-1}-x_{n} \end{pmatrix} \leq 0$ because $x_{i}\leq x_{i+1}$ $\forall i \in [1,n-1]$.
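As a quick numerical sanity check (a sketch, not part of the argument; the sample vector is arbitrary), one can build this difference matrix in Python and verify that $Ax \leq 0$ componentwise for a nondecreasing $x$:

```python
# Sanity check: build the (n-1) x n difference matrix A and verify
# that A x <= 0 componentwise for a nondecreasing sample vector x.
n = 5

# Row i of A has a 1 in column i and a -1 in column i+1, zeros elsewhere.
A = [[1 if j == i else -1 if j == i + 1 else 0 for j in range(n)]
     for i in range(n - 1)]

x = [-2.0, -2.0, 0.5, 1.0, 3.0]  # x_1 <= x_2 <= ... <= x_n, so x lies in K_1

# Each component of A x is a consecutive difference x_i - x_{i+1}.
Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n - 1)]
print(Ax)                       # [0.0, -2.5, -0.5, -2.0]
print(all(v <= 0 for v in Ax))  # True
```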
Now, by Farkas' Lemma applied with $m = n-1$ (the number of rows of $A$), the polar cone is $$K_{1}^{\circ} = \{ y \in \mathbb{R}^{n}: y = A^{T}\lambda, \, \lambda \in \mathbb{R}^{n-1}, \, \lambda \geq 0\}.$$
$A^{T} = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & \cdots & 0 \\ 0 & 0 & -1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0& \cdots & 1 \\ 0 & 0 & 0 & \cdots & -1 \end{pmatrix}$ is $n \times (n-1)$, so $$ y = A^{T}\lambda = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & \cdots & 0 \\ 0 & 0 & -1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0& \cdots & 1 \\ 0 & 0 & 0 & \cdots & -1 \end{pmatrix} \begin{pmatrix} \lambda_{1} \\ \lambda_{2} \\ \lambda_{3} \\ \vdots \\ \lambda_{n-2} \\ \lambda_{n-1} \end{pmatrix} = \begin{pmatrix} \lambda_{1} \\ -\lambda_{1}+\lambda_{2} \\ -\lambda_{2} + \lambda_{3} \\ \vdots \\ -\lambda_{n-2}+\lambda_{n-1} \\ -\lambda_{n-1} \end{pmatrix}.$$
Now, checking that for this $y$ we have $\langle y,x \rangle \leq 0$ for all $x \in K_{1}$: $$\left\langle \begin{pmatrix} \lambda_{1} \\ -\lambda_{1}+\lambda_{2} \\ -\lambda_{2} + \lambda_{3} \\ \vdots \\ -\lambda_{n-2}+\lambda_{n-1} \\ -\lambda_{n-1} \end{pmatrix}, \begin{pmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \vdots \\ x_{n}\end{pmatrix} \right\rangle = \lambda_{1}x_{1} + (-\lambda_{1}+\lambda_{2})x_{2} + (-\lambda_{2}+\lambda_{3})x_{3} + \cdots + (-\lambda_{n-2}+\lambda_{n-1})x_{n-1} + (-\lambda_{n-1})x_{n} \\ = \lambda_{1}(x_{1}-x_{2}) + \lambda_{2}(x_{2}-x_{3}) + \lambda_{3}(x_{3}-x_{4}) + \cdots + \lambda_{n-1}(x_{n-1} - x_{n}) \leq 0,$$
since $\lambda \geq 0$ means $\lambda_{i} \geq 0$ for all $i$, and since $x_{i} \leq x_{i+1}$ for all $i \in [1,n-1]$, each term $\lambda_{i}(x_{i}-x_{i+1})$ is nonpositive.
Therefore, this $y$ works.
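The telescoping computation above can also be spot-checked numerically. The sketch below (written with $\lambda \in \mathbb{R}^{n-1}$, matching the $n-1$ rows of $A$) draws random nonnegative $\lambda$ and nondecreasing $x$, and confirms both the identity $\langle A^{T}\lambda, x\rangle = \sum_{k} \lambda_{k}(x_{k}-x_{k+1})$ and its nonpositivity:

```python
import random

# Spot-check: for random lambda >= 0 in R^(n-1) and random nondecreasing x,
# y = A^T lambda has entries (lam_1, -lam_1 + lam_2, ..., -lam_{n-1}) and
# <y, x> telescopes to sum_k lam_k (x_k - x_{k+1}) <= 0.
random.seed(0)
n = 6
for _ in range(100):
    lam = [random.uniform(0, 5) for _ in range(n - 1)]
    x = sorted(random.uniform(-5, 5) for _ in range(n))  # x lies in K_1

    # y = A^T lambda, written out componentwise.
    y = [lam[0]] + [lam[k] - lam[k - 1] for k in range(1, n - 1)] + [-lam[n - 2]]

    dot = sum(yi * xi for yi, xi in zip(y, x))
    telescoped = sum(lam[k] * (x[k] - x[k + 1]) for k in range(n - 1))
    assert abs(dot - telescoped) < 1e-9  # the two expressions agree
    assert dot <= 1e-9                   # and are nonpositive

print("all random checks passed")
```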
Could someone please help me finish this? I'm not sure if there is anything else I need to do here. I thank you ahead of time for your time and patience!
Note: This question has been edited since I received the answer below. That answer is based on a method I originally thought would be the best way to solve this problem, but now know is not, so it is no longer useful to me. I have therefore put up a bounty to find an answer that is useful (or to have someone tell me that what I have done here in the question body is sufficient). Thank you.
I will use the notation $A^°$ for the polar cone of $A$, as you do, though I have realized that this symbol is also used for something quite different, the polar set: the polar cone requires a nonpositive dot product, whereas the polar set requires a dot product at most $1$, as described in (https://en.wikipedia.org/wiki/Polar_set).
In the case $n=2$ (see figure below), it is clear that there is a very narrow margin of maneuver for a vector to have a nonpositive dot product (i.e., an angle $\geq \tfrac{\pi}{2}$) with all vectors of $K_1$: vectors belonging to $K_1^°$ have to be of the form $a(1,-1)^T$ with $a \geq 0$ ($(1,-1)^T$ being the "outward pointing normal").
Let us now consider the case $n=3$.
The double inequality $x_{1} \leq x_{2} \leq x_{3}$ is translated into set language as the intersection of the two sets $A_1=\{x \in \mathbb{R}^3 : x_{1} \leq x_{2}\}$ and $A_2=\{x \in \mathbb{R}^3 : x_{2} \leq x_{3}\}$, i.e., the intersection of two half-spaces, giving what is sometimes called a wedge.
In this case, as the boundaries of $A_1$ and $A_2$ are planes with respective normal vectors:
$$V_1=\left(\begin{array}{r} 1\\-1\\0 \end{array}\right) \ \ V_2=\left(\begin{array}{r} 0\\1\\-1 \end{array}\right),$$
the polar cone of the set defined by $x_{1} \leq x_{2} \leq x_{3}$ is generated by $V_1$ and $V_2$, i.e., it is the set of all vectors $V$ of the form:
$$\tag{0}V=aV_1+bV_2=\left(\begin{array}{c} a\\ -a+b\\ -b \end{array}\right) \ \ \text{for any } \ \ a \geq 0 \ \text{and any} \ b \geq 0.$$
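Formula (0) can be illustrated numerically as well; the short sketch below (random sampling, purely illustrative) checks that every $V = aV_1 + bV_2$ with $a, b \geq 0$ has a nonpositive dot product with every sampled $x$ satisfying $x_1 \leq x_2 \leq x_3$:

```python
import random

# Illustration of formula (0) for n = 3: V = a*V1 + b*V2 with a, b >= 0
# has nonpositive dot product with every x such that x1 <= x2 <= x3,
# since <V, x> = a*(x1 - x2) + b*(x2 - x3).
random.seed(1)
V1 = (1, -1, 0)
V2 = (0, 1, -1)
for _ in range(200):
    a, b = random.uniform(0, 4), random.uniform(0, 4)
    V = tuple(a * u + b * v for u, v in zip(V1, V2))      # (a, -a+b, -b)
    x = sorted(random.uniform(-3, 3) for _ in range(3))   # x1 <= x2 <= x3
    assert sum(vi * xi for vi, xi in zip(V, x)) <= 1e-9

print("formula (0) holds on all samples")
```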
More generally, the polar cone $(C)$ of the set defined by $x_{1} \leq x_{2} \leq x_{3} \leq \cdots \leq x_n$ is obtained in the following way. Consider the half-spaces $(H_k)$ defined by $x_{k} \leq x_{k+1}$, whose boundaries are the hyperplanes $(\partial H_k)$ with equation $x_{k} = x_{k+1}$ and "outward pointing normal vectors" $V_k=(0,\dots,0,1,-1,0,\dots,0)^T$ (the coefficient $1$ being in the $k$th position).
Then $(C)$ is the convex cone generated by $V_{1}, V_{2}, \dots, V_{n-1}$, i.e., the set of all vectors $V$ of the form:
$$\tag{1}V=\sum_{k=1}^{n-1} a_kV_k \ \ \text{with} \ \ a_k \geq 0.$$
For example, in the case $n=4$, it is the three-dimensional volume that can be described as the solid infinite tetrahedron spanned by the three vectors $V_1, V_2, V_3$.
Why is that? The mathematical proof relies on property (3) below, which says that the polar cone of an intersection of two sets is the convex cone generated by the union of their polar cones; this extends immediately to $n$ sets instead of $2$.

Appendix: properties of the polar-cone operator (stated for closed convex cones):

$\tag{0} (A^°)^° =A \ \ \text{duality property} $

$\tag{1} A \subset B \implies B^° \subset A^°. $

$$\tag{2}(A \cup B)^° = A^° \cap B^°$$

$$\tag{3}(A \cap B)^° = ((A^° \cup B^°)^°)^°$$
Let us prove that (2) holds as a consequence of (1). Using (1) twice, we can derive:
$$\begin{cases}A \subset A \cup B \implies (A \cup B)^° \subset A^° \\B \subset A \cup B \implies (A \cup B)^° \subset B^°\end{cases} \ \implies \ \ (A \cup B)^° \subset A^° \cap B^°$$
Thus (2) will be established if we prove the reverse inclusion
$$A^° \cap B^° \overset{?}{\subset} (A \cup B)^°$$
It suffices to show that any $x \in A^° \cap B^°$ satisfies $x \in (A \cup B)^°$. Indeed, if $x$ has a nonpositive dot product with every $a \in A$ and every $b \in B$, then it has a nonpositive dot product with every $c \in A \cup B$, ending the proof of (2).
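For finitely generated cones, property (2) can be made concrete: $y$ belongs to the polar cone of $\operatorname{cone}(G)$ exactly when $\langle y, g\rangle \leq 0$ for every generator $g \in G$. The sketch below uses two hypothetical generator sets in $\mathbb{R}^2$ (chosen only for illustration) and scans a grid of test vectors:

```python
# Illustration of property (2), (A ∪ B)° = A° ∩ B°, for two finitely
# generated cones in R^2. Membership in the polar cone of cone(G)
# reduces to <y, g> <= 0 for every generator g in G.
def in_polar(y, gens):
    return all(y[0] * g[0] + y[1] * g[1] <= 1e-9 for g in gens)

GA = [(1.0, 0.0), (1.0, 1.0)]   # generators of a sample cone A
GB = [(0.0, 1.0), (-1.0, 1.0)]  # generators of a sample cone B

# y = (1, 0) lies in B° but not in A°, hence not in (A ∪ B)°.
print(in_polar((1.0, 0.0), GA))  # False
print(in_polar((1.0, 0.0), GB))  # True

# On a grid of test vectors: y ∈ (A ∪ B)° iff y ∈ A° and y ∈ B°.
for i in range(-5, 6):
    for j in range(-5, 6):
        y = (i / 2.0, j / 2.0)
        assert in_polar(y, GA + GB) == (in_polar(y, GA) and in_polar(y, GB))

print("property (2) verified on the grid")
```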
(3) is obtained by taking $A^°$ and $B^°$ in place of $A$ and $B$ in (2), which together with (0) gives $(A^° \cup B^°)^° = A \cap B$, and then taking $(\cdot)^°$ of both sides.
Remark: Note the similarity of the properties of the polar-cone operator with the complement set operator. (0), (1) and (2) are in full correspondence with properties:
$$(A^c)^c=A, \ \ A \subset B \implies B^c \subset A^c, \ \ \text{and} \ \ (A \cup B)^c = A^c \cap B^c. $$
Edit: Your use of Farkas' lemma is interesting, but note that $\lambda$ must lie in $\mathbb{R}^{n-1}$ (the matrix $A$ has $n-1$ rows), so that $$y = A^{T}\lambda = \begin{pmatrix} \lambda_{1} \\ -\lambda_{1}+\lambda_{2} \\ -\lambda_{2} + \lambda_{3} \\ \vdots \\ -\lambda_{n-2}+\lambda_{n-1} \\ -\lambda_{n-1} \end{pmatrix} \in \mathbb{R}^{n},$$ which is exactly formula (1) in my answer...