Computing rides a curve of seemingly unstoppable progress. The intervening decades took us from vacuum tubes to microchips, from dial-up to high-speed Internet, and from Clippy, the Office Assistant, to ChatGPT. Yet across science and industry, thousands of everyday problems remain unsolved by today’s fleet of AI-powered supercomputers.
These hard problems, known as “NP-complete,” carry a million-dollar prize, offered by the nonprofit Clay Mathematics Institute, for anyone who finds a quick solution method or proves that none exists. A surprising insight from the 1970s makes the challenge all the more tantalizing: these thousands of problems are, in a deep sense, one and the same. Solve one and you solve them all. This concept, now fundamental in theoretical computer science, shows that whole families of computational problems form a unified network. Discover a fast algorithm that solves Sudoku puzzles of any size, and you could use it to break the encryption schemes that protect our digital economy. Find a shortcut for arranging a flight tour within a budget, and you could use it to attack almost every famous open math problem.
Finding fast algorithms for these NP-complete problems (or proving that no such algorithms exist) would settle the “P versus NP” question, the most important open mystery in computer science. P refers to the set of computational problems that computers can solve efficiently. NP, by contrast, contains problems whose proposed solutions can be verified efficiently but that cannot necessarily be solved quickly. NP includes everything in P (finding a solution is a perfectly good way to check one), but it also contains harder problems for which we know no efficient way to find solutions; we can only check them once they are found. The P versus NP question asks whether this apparent asymmetry between finding and verifying a solution is real or illusory. Maybe P = NP and the two are the same set of problems. In other words, maybe the NP problems that we don’t know how to solve efficiently only appear harder than P problems because we haven’t found the right approaches yet.
For example, is there an algorithm (a recipe of simple instructions) that, given a large list of cities, the flights connecting them and a budget, efficiently decides whether you can visit every city within that budget? We don’t know. We do know one correct algorithm: check every possible sequence of flights that visits all the cities, add up the cost of each and compare the totals with the budget. But as the number of cities grows, the number of routes to check explodes exponentially, quickly overwhelming even the fastest computers. There may or may not be some clever shortcut that avoids that exhaustive search, but computer scientists haven’t found one. Given a proposed solution, however (in this case, a proposed sequence of flights), you can quickly verify whether the route reaches every city and stays under budget. If P equals NP, then the flight scenario (an instance of what is called the traveling salesman problem) has a fast solution. We don’t know yet.
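To make the contrast between finding and checking concrete, here is a minimal Python sketch using invented data (the cities, ticket prices and budget below are made up purely for illustration). The brute-force search tries every ordering of the cities, while the verifier checks one proposed route in time proportional to its length.

```python
from itertools import permutations

# Hypothetical ticket prices between city pairs (symmetric); the cities,
# prices and budget here are invented purely for illustration.
PRICES = {
    ("A", "B"): 120, ("A", "C"): 95, ("A", "D"): 210,
    ("B", "C"): 60,  ("B", "D"): 180, ("C", "D"): 75,
}

def price(u, v):
    return PRICES.get((u, v)) or PRICES[(v, u)]

def tour_cost(route):
    # Total price of flying the route in the given order (no return leg).
    return sum(price(a, b) for a, b in zip(route, route[1:]))

def can_tour_within_budget(cities, budget):
    # "Finding": brute force over every ordering; the count grows factorially.
    return any(tour_cost(r) <= budget for r in permutations(cities))

def verify(route, cities, budget):
    # "Checking": one proposed route is verified in time linear in its length.
    return set(route) == set(cities) and tour_cost(route) <= budget

cities = ["A", "B", "C", "D"]
print(can_tour_within_budget(cities, 300))          # searches 4! = 24 orderings
print(verify(["A", "B", "C", "D"], cities, 300))    # checks a single certificate
```

With four cities the brute force checks only 24 orderings; with a few dozen cities the number of orderings already exceeds what any computer could enumerate, which is why the exhaustive approach breaks down while the check of a single proposed route stays fast.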
The traveling salesman problem is joined in NP by many other natural computational problems. These include logistical challenges (such as packing boxes into trucks), social-network questions (finding cliques of mutual friends), biology (predicting how proteins will fold) and games such as Sudoku, Pokémon and Candy Crush. We can even cast mathematics itself as an NP problem, because proofs can be checked efficiently. It may seem strange to call these “hard” problems when people pack trucks and solve Sudokus every day. But we consider an algorithm to have truly solved a problem only if it handles every instance efficiently, including very large ones. A computer can, of course, solve a 9×9 Sudoku faster than a million-by-million one, so the strict definition of “efficient” concerns how the time required to solve a problem scales with the size of the input.
The P versus NP question involves countless computational problems and how they relate to one another, so it might seem that resolving it would require investigating each of those problems separately. Say you found an efficient algorithm for the traveling salesman problem. That would be heroic progress, but would it tell you anything about your ability to solve giant Sudokus or some other stubborn NP problem? Surprisingly, that single algorithm would completely resolve P versus NP. In 1972 computer scientist Richard Karp published a seminal paper proving that 21 classic NP problems share a remarkable property: an efficient algorithm for any one of them could be used not only to solve the other 20 but to solve every problem in NP. He called these 21 problems NP-complete. Over the years the list has grown, as researchers have discovered that many other NP problems (including the traveling salesman problem) share this magical property.
We can view NP-completeness optimistically or pessimistically. On the optimistic side, the fortress of fiendishly difficult problems, and the untold technological promise behind it, now looks like a house of cards. Tip a single problem into the realm of feasibility and the entire edifice of NP collapses, and a scientific revolution rises from the rubble: effortless travel planning, fast drug discovery through protein folding and a new era of mathematics. On the pessimistic side, NP-completeness suggests that these problems have no efficient algorithms at all: if all it took to prove otherwise was conquering a single problem, why hasn’t anyone succeeded? Most experts lean toward the latter interpretation and suspect that NP-complete problems lack fast algorithms.
Whether one sees the glass as half full or half empty, the concept of complete problems changed the way researchers look at computing. Karp showed that an algorithm for one NP-complete problem could be used to solve the others by proving that seemingly unrelated problems can be translated into one another’s language through a process called reduction. A reduction takes any instance of one problem (say, a list of cities with flights between them and a budget) and turns it into an instance of another problem (say, a giant Sudoku puzzle) in such a way that the Sudoku has a valid solution if it is possible to visit all the cities within the budget (and has no valid solution otherwise). So if you found an efficient algorithm for Sudoku, you could also use it to solve the traveling salesman problem by converting instances of the latter into Sudoku puzzles. (See the bottom of this story for an example of a reduction worked out in full detail.)
This ability to encode one problem in another’s language is not a quirk of the example; it is a feature of computation itself. A web of reductions links all NP-complete problems: solve any one of them and you can solve every other problem in NP. The implications boggle the mind. Recall that we can frame proving mathematical theorems as an NP problem. Pick any famous unsolved math question. The theory of NP-completeness tells us that some level of Candy Crush exists that perfectly encapsulates it: if a certain number of moves in that level can achieve a certain score, then your math problem has a proof of a certain length; otherwise it does not. NP-completeness also assures us that certain advances in protein folding (or box packing or Sudoku solving) would upend the digital economy. That is because the encryption protecting our sensitive data hides it behind what we believe are intractable computational problems. (Note that although solving an NP-complete problem would allow encryption to be broken, the reverse is not true; the presumably intractable problems underlying most encryption schemes are not themselves known to be NP-complete.)
With so much riding on NP-complete problems, a million dollars seems like a bargain for their solution. And that might provide some extra motivation the next time you struggle to plan a vacation itinerary or solve a Sudoku puzzle.
How does the reduction work?
For anyone who wants to dig deeper into how reductions work, let’s reduce one NP-complete problem, the “map three-coloring problem,” to another, the “clique problem.” The map three-coloring problem asks: Given a map, can you assign each region one of three colors so that neighboring regions never get the same color? The clique problem asks: Given a social network, does it contain a group of some desired size in which everyone is friends with everyone else? Both problems are NP-complete; we know no efficient algorithm for either. On the surface they have little in common. But given a map, we will show how to transform it into a social network so that the answer to the clique question on the network gives us the answer to the coloring question on the map.
Picture a map of the U.S. To build a social network from it, we designate three “people” for each state, one of each of three colors: blue, green and red. Then we make every pair of people friends, except when:
- They represent the same state. (The green Wisconsin representative will not be friends with the blue Wisconsin representative.)
- They have the same color and represent adjacent states. (North Dakota and South Dakota share a border, so their red representatives will not be friends; North Dakota and Florida do not, so their red representatives will be friends.)
I claim that this social network of 150 people contains a clique of 50 mutual friends if and only if the U.S. map has a valid three-coloring. If we find 50 mutual friends in the network, they must all represent different states, because by design we never made friends of two people representing the same state. Moreover, the coloring corresponding to the clique never assigns the same color to neighboring states; we explicitly banned such friendships. So a clique of 50 people corresponds to a valid three-coloring. Conversely, if the map has a valid three-coloring, then choosing the person of the corresponding color for each state gives 50 people who are all mutual friends; and if no 50-person clique exists in the network, then no three-coloring exists for the map.
We have just reduced the map three-coloring problem to the clique problem. This means that if someone found a fast algorithm for the clique problem, they could use it to solve any instance of the map three-coloring problem. Crucially, the first step, turning the map into a network, is itself fast: creating the right people and friendships requires no exhaustive search or other infeasible overhead. Reductions show that even when our problems feel unique, they may be more universal than they appear.
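For readers who like to see the construction spelled out, here is a minimal Python sketch of the same reduction on a tiny invented map of four “states” (the borders below are made up for the example, not real geography). It builds the friendship network exactly as described above and then checks, by brute force, whether a clique containing one person per state exists, which happens precisely when the toy map can be three-colored.

```python
from itertools import combinations

# A tiny invented "map": which states border which (toy data, not real geography).
STATES = ["ND", "SD", "WI", "FL"]
BORDERS = {("ND", "SD"), ("SD", "WI"), ("ND", "WI")}
COLORS = ["red", "green", "blue"]

def adjacent(a, b):
    return (a, b) in BORDERS or (b, a) in BORDERS

# The reduction: one person per (state, color); two people become friends unless
# they represent the same state, or they share a color and their states border.
people = [(state, color) for state in STATES for color in COLORS]
friendships = {
    frozenset((p, q))
    for p, q in combinations(people, 2)
    if p[0] != q[0] and not (p[1] == q[1] and adjacent(p[0], q[0]))
}

def is_clique(group):
    return all(frozenset((p, q)) in friendships for p, q in combinations(group, 2))

def has_clique_of_size(size):
    # Brute-force clique search: fine for this toy network, exponential in general.
    return any(is_clique(group) for group in combinations(people, size))

# A clique with one person per state exists exactly when the map is 3-colorable.
print(has_clique_of_size(len(STATES)))  # True, e.g. ND=red, SD=green, WI=blue, FL=red
```

Note that building the network takes only a quick loop over pairs of people; only the clique search itself is potentially expensive, which is exactly the point of the reduction: a fast clique solver would immediately give a fast map-coloring solver.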
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.