There are lists of reasons to believe that P is not equal to NP, but they are somewhat "metaphysical". Are there more mathematically rigorous reasons?
Reasons to believe that P is not equal to NP.
2k views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 2 best solutions below.
There are several reasons to believe $P \neq NP$.
1) This is usually the answer given when first introducing P and NP. At present, there are thousands of NP-Complete problems across virtually every technical discipline impacted by computing: combinatorics, number theory, geometry, economics and finance, biology, operations research, etc. Many of these problems bear no obvious relation to each other, besides the fact that they are NP-Complete. Since 1971, nobody has been able to find an NP-Complete problem that is in P. Note that a polynomial-time solution to one NP-Complete problem yields a polynomial-time solution to all NP-Complete problems. Given the vast number of unrelated NP-Complete problems, this is strong evidence (not proof) that P and NP are different.
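That transfer of polynomial-time solutions happens through polynomial-time reductions. As a minimal sketch (the graph representation and function names are mine, and the brute-force solver merely stands in for a hypothetical polynomial-time one), here is the classic reduction from Clique to Independent Set via the complement graph:

```python
# Illustrative sketch: a polynomial-time reduction between two classic
# NP-Complete problems. If either had a polynomial-time algorithm, the
# reduction would transfer it to the other. Graphs are dicts mapping
# each vertex to its set of neighbours (symmetric, i.e. undirected).

from itertools import combinations

def complement(graph):
    """Complement graph: swap edges and non-edges (polynomial time)."""
    verts = set(graph)
    return {v: (verts - {v}) - graph[v] for v in verts}

def has_independent_set(graph, k):
    """Brute-force oracle (exponential); a stand-in for a hypothetical
    polynomial-time Independent Set solver."""
    return any(all(u not in graph[v] for u, v in combinations(s, 2))
               for s in combinations(graph, k))

def has_clique(graph, k):
    """Clique(G, k) reduces to IndependentSet(complement(G), k)."""
    return has_independent_set(complement(graph), k)

# Triangle a-b-c plus an isolated vertex d.
G = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}, 'd': set()}
print(has_clique(G, 3))  # True
```

The reduction itself (taking the complement) runs in polynomial time; only the stand-in oracle is exponential.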
2) Ladner's Theorem states that if P and NP are different, then there exists an NP-Intermediate problem (that is, a problem in NP that is not in P and not NP-complete). Actually, Ladner's Theorem gives us infinitely many such problems. The Wikipedia page lists several examples of candidate NP-Intermediate problems (https://en.wikipedia.org/wiki/NP-intermediate). The factoring problem is a popular example. Group Isomorphism is another good example. Nobody has shown that Group Isomorphism is NP-Complete. However, except in specific cases (e.g., groups with Abelian Sylow towers), the $O(n^{\log(n)})$ bound is still (to the best of my knowledge) about the best we know how to do.
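The $O(n^{\log n})$-type bound for Group Isomorphism comes from the fact that every group of order $n$ has a generating set of size at most $\log_2 n$, so one can pick a generating set and try all candidate images of it. Below is a hedged brute-force sketch of that idea, assuming groups are given as $n \times n$ Cayley tables with the identity at index 0; all function names are mine:

```python
import math
from itertools import combinations, product

def is_generating(table, gens, n):
    """Close gens under the group operation (identity assumed at index 0).
    In a finite group, multiplicative closure yields the generated subgroup."""
    seen = {0} | set(gens)
    frontier = list(seen)
    while frontier:
        new = []
        for a in frontier:
            for b in list(seen):
                for c in (table[a][b], table[b][a]):
                    if c not in seen:
                        seen.add(c)
                        new.append(c)
        frontier = new
    return len(seen) == n

def try_extend(tg, th, gens, imgs, n):
    """Extend gens -> imgs to a full map by closing under products;
    succeed iff the result is a bijective homomorphism."""
    f = {0: 0}
    f.update(zip(gens, imgs))
    changed = True
    while changed:
        changed = False
        for a in list(f):
            for b in list(f):
                c, fc = tg[a][b], th[f[a]][f[b]]
                if c in f:
                    if f[c] != fc:
                        return False
                else:
                    f[c] = fc
                    changed = True
    return len(f) == n and len(set(f.values())) == n

def isomorphic(tg, th, n):
    """n^(log n) flavour: find one generating set of size <= log2(n),
    then try every tuple of images in the other group."""
    k = max(1, math.ceil(math.log2(n)))
    for r in range(1, k + 1):
        for gens in combinations(range(1, n), r):
            if is_generating(tg, gens, n):
                return any(try_extend(tg, th, gens, imgs, n)
                           for imgs in product(range(n), repeat=r))
    return n == 1
```

For example, the cyclic group $\mathbb{Z}_4$ and the Klein four-group both have order 4, and this test correctly distinguishes them. The outer loop dominates the cost: up to $n^{\log_2 n}$ image tuples.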
3) If P = NP, then the polynomial time hierarchy collapses. An open conjecture states that each class in the polynomial time hierarchy is distinct, and most complexity theorists believe this to be true. Consider the Independent Set problem, which we know to be NP-Complete. Now the Maximum Independent Set problem is of the form: $$\{ \langle G, k \rangle : \text{ the largest independent set in G has k vertices } \}$$
The Maximum Independent Set problem is in $\Sigma_{2}^{p}$. Given a largest independent set, we can verify by examining all subsets of vertices of $G$ that there is no other independent set with more than $k$ vertices. It is conceivable that the Maximum Independent Set problem is in NP, but there is no known reason to believe that it is. Could you find a single certificate of reasonable length (length polynomial in $|G|$) that verifies both that $G$ contains an independent set of size $k$ and that no independent set is larger? There could be exponentially many independent sets in $G$.
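The two-layered nature of the verification can be sketched concretely (representation and names are mine): checking the certificate is cheap, but ruling out a larger independent set below enumerates exponentially many subsets.

```python
from itertools import combinations

def is_independent(graph, s):
    """True iff no edge lies inside s (graph: vertex -> set of neighbours)."""
    return all(v not in graph[u] for u, v in combinations(s, 2))

def verify_maximum(graph, cert, k):
    """Sigma_2-style check. The 'exists' part (cert is an independent
    set of size k) is verified in polynomial time; the 'for all' part
    (no independent set of size k+1) enumerates all (k+1)-subsets,
    which is exponential in general."""
    if len(cert) != k or not is_independent(graph, cert):
        return False
    return not any(is_independent(graph, s)
                   for s in combinations(graph, k + 1))

# Path a-b-c: {a, c} is a maximum independent set of size 2.
G = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(verify_maximum(G, {'a', 'c'}, 2))  # True
```

Nothing in this sketch compresses the "for all" half into a single short certificate, which is exactly the obstacle to placing the problem in NP.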
Yes, here is one.
Let's assume that the Subset Sum problem admits some form of "divide and conquer" approach. Below is an example of a sequence of sorted subset sums and the corresponding subsets:
$0 < a_1 < a_2 < a_1 + a_2 < a_3 < a_1 + a_3 < a_2 + a_3 < a_1 + a_2 + a_3$ (assume all sums are distinct)
[], [1], [2], [1, 2], [3], [1, 3], [2, 3], [1, 2, 3]
000, 100, 010, 110, 001, 101, 011, 111
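The enumeration above can be reproduced with a short sketch (a Python illustration of the example; the function name is mine, and distinct sums are assumed as stated):

```python
from itertools import combinations

def sum_sequence(nums):
    """Enumerate all subsets of nums, sort them by subset sum, and emit
    each subset as a bit string (bit i set when nums[i] is included).
    Assumes all subset sums are distinct."""
    subsets = []
    for r in range(len(nums) + 1):
        for idx in combinations(range(len(nums)), r):
            total = sum(nums[i] for i in idx)
            bits = ''.join('1' if i in idx else '0'
                           for i in range(len(nums)))
            subsets.append((total, bits))
    subsets.sort()
    return [bits for _, bits in subsets]

# With a1=1, a2=2, a3=4 the ordering matches the example above:
print(sum_sequence([1, 2, 4]))
# ['000', '100', '010', '110', '001', '101', '011', '111']
```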
Each sequence of numbers can be transformed in this way into a sequence of bit strings. If we can compute any bit of this sequence in polynomial time, then Subset Sum is solvable in polynomial time.

Let's restate this problem in geometric language. Each sorted sequence of sums corresponds to a chamber in the hyperplane arrangement $HA3$, whose hyperplanes have normal vectors with entries -1, 0, and 1. We should somehow subdivide $HA3$ to obtain a region of space containing only those chambers and parts of chambers that share the same value of the desired bit. This problem is computationally demanding even for 6 numbers, so I considered another problem: how many cutting hyperplanes do we need to isolate every chamber of $HA2$ (whose normal vectors consist of -1 and 1)? More precisely, in the region of $HA2$ without symmetries; see the get_base_simplex() function. The number we are interested in is the minimum depth over all possible subdivision trees. For example, if the number of chambers is 14, then the minimum depth cannot be less than 4 (a binary tree of depth $d$ has at most $2^d$ leaves), but it will probably be more, because the hyperplanes are not orthogonal and the chambers are arranged in an intricate configuration. First I used only the hyperplanes of $HA2$ to subdivide $HA2$, then the hyperplanes of $HA3$, but the resulting tree depths are the same. And, alas, they are exponential:
n = 3: 1; n = 4: 2; n = 5: 3; n = 6: 5; n = 7: 8.
It seems that this sequence eventually takes the same values as the sequence of facet counts of the "most complex chamber" of $HA2$ (see get_mcc2_point()). These facet counts are:
n = 3: 4; n = 4: 4; n = 5: 5; n = 6: 6; n = 7: 8; n = 8: 13; n = 9: 23; n = 10: 40.
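The depth lower bound used earlier (14 chambers force depth at least 4) is the standard information-theoretic bound for binary subdivision trees; a quick sketch (function name is mine):

```python
import math

def min_depth_lower_bound(chambers):
    """A subdivision tree of depth d distinguishes at most 2**d regions,
    so isolating every chamber needs depth >= ceil(log2(chambers))."""
    return math.ceil(math.log2(chambers))

print(min_depth_lower_bound(14))  # 4
```

The observed depths above exceed this bound precisely because the cutting hyperplanes are constrained to the arrangement's own directions rather than chosen freely.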
But because of computational difficulties I cannot state for sure that the tree depth for $HA3$ is exponential. Below is the Python code for computing the tree depth for $HA2$.
EDIT: Added the divide_ha2_by_vectors_3_breadth_first() function.
EDIT 2: Added separate_mcp2(), which separates the most complex region and searches only through part of the tree.