Find $\max_{\sigma \in S_n}\sum_{i=1}^n|\sigma(i)-i|$ where $S_n$ is the group of permutations on $n$ letters (Greedy algorithm shows up?)


Find $\max_{\sigma \in S_n}\sum_{i=1}^n|\sigma(i)-i|$, where $S_n$ is the symmetric group of permutations of $n$ symbols.

So, the story goes like this:

When I first saw the problem, I thought it might be a difficult one. After playing around with $S_3$, I thought that maybe the best way to attack the problem is to construct a permutation $\sigma$ such that each distance $|\sigma(i)-i|$, $i\in\{1,2,\cdots,n\}$, is maximized: since the terms $|\sigma(i)-i|$ are nonnegative, maximizing each one would maximize their sum.

So I started like this: begin with $|\sigma(1)-1|$. To maximize it, $\sigma(1)$ must be $n$. To maximize $|\sigma(2)-2|$, $\sigma(2)$ must be $n-1$, and so forth.

After thinking about this for some time, I realized that this isn't a good idea, because I found some counterexamples.

For example, if $n=7$, the algorithm I just described produces several permutations (ties mean it doesn't determine $\sigma$ uniquely), all consistent with the method, but $\sigma_1$ fails to maximize $\sum_{i=1}^n|\sigma(i)-i|$. I'll write two of the permutations constructed this way and ignore the rest:

$\sigma_1=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 7 & 6 & 1 & 2 & 3 & 4 & 5 \end{pmatrix}$, $\sigma_2 = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 7 & 6 & 5 & 1 & 2 & 3 & 4 \end{pmatrix}$

If we use $\sigma_1$ then $\sum_{i=1}^n |\sigma_1(i)-i| = 20$ while if we use $\sigma_2$ the sum is $\sum_{i=1}^n |\sigma_2(i)-i|=24$.
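Both sums are easy to double-check by brute force. A small Python sketch (my own, not part of the question; permutations are written as tuples whose $i$-th entry is $\sigma(i)$, 1-indexed):

```python
from itertools import permutations

def displacement_sum(sigma):
    # sigma[i-1] is the image of i under the permutation
    return sum(abs(v - i) for i, v in enumerate(sigma, start=1))

sigma1 = (7, 6, 1, 2, 3, 4, 5)
sigma2 = (7, 6, 5, 1, 2, 3, 4)
print(displacement_sum(sigma1))  # 20
print(displacement_sum(sigma2))  # 24

# Exhaustive search over all 7! = 5040 permutations confirms 24 is the maximum.
print(max(displacement_sum(s) for s in permutations(range(1, 8))))  # 24
```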

I guess for any $\sigma \in S_n$, the maximum sum is obtained when $$\sigma = \begin{pmatrix} 1 & 2 & 3 & \cdots & n-2 & n-1 & n \\ n & n-1 & n-2 & \cdots & 3 & 2 & 1 \end{pmatrix}$$
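For what it's worth, this conjecture checks out by exhaustive search for small $n$, and the reversal's sum works out to $\lfloor n^2/2\rfloor$ (a verification sketch in Python; the closed form is my own arithmetic, not stated in the question):

```python
from itertools import permutations

def displacement_sum(sigma):
    return sum(abs(v - i) for i, v in enumerate(sigma, start=1))

for n in range(1, 8):
    reversal = tuple(range(n, 0, -1))  # the map i -> n + 1 - i
    brute_max = max(displacement_sum(s) for s in permutations(range(1, n + 1)))
    # the reversal attains the brute-force maximum, which equals floor(n^2 / 2)
    assert displacement_sum(reversal) == brute_max == n * n // 2
print("verified for n = 1..7")
```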

Based on this assumption, I solved the problem, and my answer happens to be correct. But there are some serious issues that I can't explain well, and I hope that people on MSE can clarify them for me:

  1. I'm not very surprised that my algorithm failed. In fact, I would've been surprised if it had worked, because my proposed algorithm doesn't construct a unique permutation and is based on the assumption that we are free to go from $k$ to $k+1$ without any restrictions. But why does this algorithm fail?
  2. I haven't taken a course in discrete mathematics, but by searching the internet I read about the greedy algorithm. It seems like the greedy approach is the most naive one that can be taken for a problem like this, which makes it appealing. But are there axioms or assumptions that, when satisfied, guarantee that the greedy algorithm works?
  3. What are the correct method and mindset for solving a problem like this, and this one in particular? This is my least important question of the three; in fact, answers to the first and second questions could teach me more than this one, but I'd like to see a proof.

Thanks


There are 2 answers below.


I'll write a start of a proof. You'll have to fill in some details yourself.
Suppose $1\leq i< j\leq n$, and suppose that $\sigma(i)\geq j$ and $\sigma(j)\geq j$. Then exchanging $\sigma(i)$ and $\sigma(j)$ doesn't change the sum. The same holds when $\sigma(i),\sigma(j)\leq i$. When $\sigma(i)\leq i$ and $\sigma(j)\geq j$, we can improve the permutation by exchanging $\sigma(i)$ and $\sigma(j)$. Thus, for any permutation that is maximal in the sense of the given sum, we know that if $i<j$, then $\sigma(i)$ and $\sigma(j)$ are both at least $j$, both at most $i$, or $\sigma(j)\leq i<j\leq \sigma(i)$. Now, let $k=\min_{1\leq i\leq \frac n2} \sigma(i)$. It should be possible to bound the sum from above for each $k$, and the maximum should occur at $k=\frac n2$. That means that the lower half of $\{1,2,\dots,n-1,n\}$ is mapped to the upper half and vice versa.
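The improving exchange in this sketch, swapping $\sigma(i)$ and $\sigma(j)$ when $i<j$, $\sigma(i)\leq i$, and $\sigma(j)\geq j$, in fact increases the sum by exactly $2(j-i)$, which one can confirm numerically (a Python sketch of my own, not part of the original answer):

```python
import random

def displacement_sum(sigma):
    return sum(abs(v - i) for i, v in enumerate(sigma, start=1))

random.seed(0)
for _ in range(500):
    n = random.randint(2, 10)
    sigma = list(range(1, n + 1))
    random.shuffle(sigma)
    for i in range(1, n + 1):          # 1-indexed positions
        for j in range(i + 1, n + 1):
            if sigma[i - 1] <= i and sigma[j - 1] >= j:
                before = displacement_sum(sigma)
                sigma[i - 1], sigma[j - 1] = sigma[j - 1], sigma[i - 1]
                # the exchange improves the sum by exactly 2*(j - i)
                assert displacement_sum(sigma) - before == 2 * (j - i)
                sigma[i - 1], sigma[j - 1] = sigma[j - 1], sigma[i - 1]  # undo
print("exchange inequality verified")
```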


Oftentimes, when you are dealing with combinatorial optimization, graph-theoretic models are quite helpful. I read through this in the car on the way home from dinner, so I'll start with how I thought about it, addressing your questions (2) and (3). Any time we want to maximize or minimize weights/distances, some sort of greedy graph algorithm is usually the trick. The reason greedy graph algorithms work where they do is that we have exchange arguments, as Ragnar described; these come from an underlying matroid structure, which generalizes the notion of independence from linear algebra.

Are you familiar with matroids? A matroid is a construct $\mathcal{M}(G, I)$, where $G$ is the ground set and $I \subseteq 2^{G}$ is the collection of independent sets. For graphs, $G$ is the edge set and $I$ consists of the subsets of edges that don't form cycles. We have greedy basis algorithms that sort the elements of $G$ and poll them one at a time, keeping the growing independent set and throwing away any element that would create a cycle. Maximal independent sets are called bases, as you may be familiar with from linear algebra. And much like in linear algebra, we can swap out a basis element for a new element, so long as independence is preserved. So a vector matroid is based on a set of vectors, with the linearly independent subsets of $G$ as its independent sets. If you are familiar with the proofs of Dijkstra's, Prim's, or Kruskal's algorithms, they prove optimality in exactly this way: we swap out edges.
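As a concrete instance of that greedy-basis pattern, here is Kruskal's algorithm on the graphic matroid (a sketch of my own, included purely to illustrate the pattern, not as a solution to the original problem):

```python
def kruskal_mst(n, edges):
    """Minimum spanning forest via the matroid greedy pattern:
    sort edges by weight, keep an edge iff the chosen set stays
    independent (i.e. acyclic), tracked with union-find."""
    parent = list(range(n))

    def find(x):
        # find the root of x's component, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    chosen = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:              # adding (u, v) keeps the edge set acyclic
            parent[ru] = rv
            chosen.append((w, u, v))
    return chosen

# tiny example on 4 vertices
mst = kruskal_mst(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)])
print(sum(w for w, _, _ in mst))  # 7: edges of weight 1, 2, 4 (weight 3 closes a cycle)
```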

So I started thinking about a spanning tree algorithm on a simple graph, and quickly saw connectivity and acyclicity fail due to the symmetry of the problem. By this, I mean that if I pick (1, 7) as a greedy choice, then I also want (7, 1) as my greedy choice.

After thinking about the problem some more, I set up two copies of $\{1, ..., 7\}$ as the sides of a weighted $K_{7, 7}$, with edge weights $|v_{1} - v_{2}|$. So if I map $1 \to 7$, the edge weight is $6$; similarly, mapping $3 \to 4$ yields an edge weight of $1$. From there, I thought about a matching algorithm, and quickly came up with the final solution you got.
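This matching view can be made concrete: maximizing $\sum_{i}|\sigma(i)-i|$ is a maximum-weight perfect matching (assignment) problem on that $K_{7,7}$. Assuming SciPy is available, its Hungarian-style assignment solver handles it directly (a sketch of my own, not the answerer's actual code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

n = 7
idx = np.arange(1, n + 1)
weights = np.abs(np.subtract.outer(idx, idx))  # weight of edge (i, j) is |i - j|
rows, cols = linear_sum_assignment(weights, maximize=True)
print(weights[rows, cols].sum())  # 24, matching the reversal permutation's sum
```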

Does this help clarify things?