Explain the proof by Vinay Deolalikar that P != NP


Math Problem Overview


Recently there has been a paper floating around by Vinay Deolalikar at HP Labs which claims to have proved that P != NP.

Could someone explain how this proof works for us less mathematically inclined people?

Math Solutions


Solution 1 - Math

I've only scanned through the paper, but here's a rough summary of how it all hangs together.

From page 86 of the paper:

> ... polynomial time algorithms succeed by successively "breaking up" the problem into smaller subproblems that are joined to each other through conditional independence. Consequently, polynomial time algorithms cannot solve problems in regimes where blocks whose order is the same as the underlying problem instance require simultaneous resolution.

Other parts of the paper show that certain NP problems cannot be broken up in this manner. Thus NP ≠ P.

Much of the paper is spent defining conditional independence and proving these two points.

Solution 2 - Math

Dick Lipton has a nice blog entry about the paper and his first impressions of it. Unfortunately, it too is technical. From what I can understand, Deolalikar's main innovation seems to be to use some concepts from statistical physics and finite model theory and tie them to the problem.

I'm with Rex M on this one: some results, mostly mathematical ones, cannot be expressed to people who lack the technical mastery.

Solution 3 - Math

I liked this ( http://www.newscientist.com/article/dn19287-p--np-its-bad-news-for-the-power-of-computing.html ):

> His argument revolves around a particular task, the Boolean satisfiability problem, which asks whether a collection of logical statements can all be simultaneously true or whether they contradict each other. This is known to be an NP problem.
>
> Deolalikar claims to have shown that there is no program which can complete it quickly from scratch, and that it is therefore not a P problem. His argument involves the ingenious use of statistical physics, as he uses a mathematical structure that follows many of the same rules as a random physical system.

The effects of the above can be quite significant:

> If the result stands, it would prove that the two classes P and NP are not identical, and impose severe limits on what computers can accomplish – implying that many tasks may be fundamentally, irreducibly complex.
>
> For some problems – including factorisation – the result does not clearly say whether they can be solved quickly. But a huge sub-class of problems called "NP-complete" would be doomed. A famous example is the travelling salesman problem – finding the shortest route between a set of cities. Such problems can be checked quickly, but if P ≠ NP then there is no computer program that can complete them quickly from scratch.
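To make the "checked quickly, but not solved quickly" distinction concrete, here is a minimal sketch in Python (the three-clause formula and its list-of-signed-indices representation are a toy example of mine, not anything from the paper): verifying a candidate assignment takes time polynomial in the formula size, while the obvious way to solve from scratch tries all 2^n assignments.

```python
from itertools import product

# Toy CNF formula: (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3).
# Each clause is a list of signed variable indices, DIMACS-style.
formula = [[1, -2], [2, 3], [-1, -3]]

def check(formula, assignment):
    """Verify a candidate assignment in time polynomial in the formula
    size -- this quick check is what places SAT in NP."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def solve(formula, n_vars):
    """Solve 'from scratch' by brute force over all 2**n_vars
    assignments -- exponential time, with no known polynomial shortcut."""
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if check(formula, assignment):
            return assignment
    return None

print(solve(formula, 3))  # finds e.g. {1: False, 2: False, 3: True}
```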

Solution 4 - Math

This is my understanding of the proof technique: he uses first order logic to characterize all polynomial time algorithms, and then shows that, for large SAT instances with certain properties, no polynomial time algorithm can determine their satisfiability.
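For context, the standard bridge between first order logic and polynomial time is the Immerman–Vardi theorem, which (as I read it) is what the paper leans on for this characterization. Roughly:

```latex
% Immerman–Vardi theorem: over ordered finite structures, a property
% is decidable in polynomial time if and only if it is definable in
% first-order logic extended with a least-fixed-point operator.
\mathrm{P} \;=\; \mathrm{FO}(\mathrm{LFP})
\qquad \text{on ordered finite structures}
```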

Solution 5 - Math

One other way of thinking about it, which may be entirely wrong, but is my first impression on a first pass: we think of assigning/clearing terms in circuit satisfaction as forming and breaking clusters of 'ordered structure'. He then uses statistical physics to show that polynomially many operations aren't enough to carry this out in a particular "phase space" of operations, because these "clusters" end up being too far apart.

Solution 6 - Math

Such a proof would have to cover all classes of algorithms, including continuous global optimization.

For example, in the 3-SAT problem we have to assign values to the variables so that every clause (an alternative of a triple of these variables or their negations) is fulfilled. Note that x OR y can be turned into minimizing

((x-1)^2+y^2)((x-1)^2+(y-1)^2)(x^2+(y-1)^2)

and analogously a product of seven factors for an alternative of three variables (one factor per satisfying assignment).

Finding the global minimum of the sum of such polynomials over all clauses would solve our problem: the minimum is zero exactly when the formula is satisfiable. (source)
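Here is a minimal sketch of this encoding in Python (the function names and the naive numerical gradient descent are my own illustration of the "gradient methods" idea, not code from the source):

```python
# Penalty polynomial for the clause (x OR y): a product with one
# factor per satisfying assignment, so it vanishes exactly where the
# clause is satisfied at a Boolean corner.
def clause_penalty(x, y):
    return (((x - 1)**2 + y**2)          # zero at (1, 0)
            * ((x - 1)**2 + (y - 1)**2)  # zero at (1, 1)
            * (x**2 + (y - 1)**2))       # zero at (0, 1)

# Sanity check at the four Boolean corners: only (0, 0) is penalized.
for x in (0, 1):
    for y in (0, 1):
        print((x, y), clause_penalty(x, y))

# Naive gradient descent with a numerical gradient, standing in for
# continuous "gradient methods".
def gradient_descent(f, x, y, lr=0.01, steps=5000, eps=1e-6):
    for _ in range(steps):
        gx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
        gy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
        x, y = x - lr * gx, y - lr * gy
    return x, y

# Starting inside the unit square, descent should drift toward one of
# the zeros, i.e. a satisfying assignment of the clause.
print(gradient_descent(clause_penalty, 0.3, 0.2))
```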

This steps outside standard combinatorial techniques into the continuous world: gradient methods, methods for escaping local minima, evolutionary algorithms. It's a completely different kingdom - numerical analysis - and I don't believe such a proof could really cover it (?)

Solution 7 - Math

It's worth noting that with proofs, "the devil is in the detail". The high level overview is obviously something like:

> Given some sort of relationship between items, show that this relationship implies X and that implies Y, and thus my argument is shown.

I mean, it may be via induction or any other form of proving things, but what I'm saying is that the high level overview is useless. There is no point explaining it. Although the question itself relates to computer science, it is best left to mathematicians (though it is certainly incredibly interesting).

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
|---|---|---|
| Question | Nixuz | View Question on Stackoverflow |
| Solution 1 - Math | Michael Anderson | View Answer on Stackoverflow |
| Solution 2 - Math | recipriversexclusion | View Answer on Stackoverflow |
| Solution 3 - Math | Tony Breyal | View Answer on Stackoverflow |
| Solution 4 - Math | smith | View Answer on Stackoverflow |
| Solution 5 - Math | John with waffle | View Answer on Stackoverflow |
| Solution 6 - Math | skeptic | View Answer on Stackoverflow |
| Solution 7 - Math | Noon Silk | View Answer on Stackoverflow |