Why is Turing completeness right?

Problem Detail: I am using a digital computer to write this message. Such a machine has a property which, if you think about it, is actually quite remarkable: it is one machine which, if programmed appropriately, can perform any possible computation. Of course, calculating machines of one kind or another go back to antiquity. People have built machines for performing addition and subtraction (e.g., the abacus), multiplication and division (e.g., the slide rule), and more domain-specific machines such as calculators for the positions of the planets. The striking thing about a computer is that it can perform any computation. Any computation at all. And all without having to rewire the machine. Today everybody takes this idea for granted, but if you stop and think about it, it’s kind of amazing that such a device is possible. I have two actual questions:

  1. When did mankind figure out that such a machine was possible? Has there ever been any serious doubt about whether it can be done? When was this settled? (In particular, was it settled before or after the first actual implementation?)
  2. How did mathematicians prove that a Turing-complete machine really can compute everything?

That second one is the fiddly part. Every formalism seems to have some things that cannot be computed. Currently, “computable function” is defined as “anything a Turing machine can compute”. But how do we know there isn’t some slightly more powerful machine that can compute more? How do we know that Turing machines are the correct abstraction?

Asked By : MathematicalOrchid

Answered By : Artem Kaznatcheev

Mankind formalized computation and developed two systems for it in 1936, with the seminal papers of Alonzo Church on the $\lambda$-calculus and of Alan Turing (who today, June 23rd 2012, would have turned 100 years old if not for the despicable circumstances leading to his early passing) on what became known as Turing machines. Both mathematicians were solving the Entscheidungsproblem. Although Church’s paper was published slightly earlier, Turing was unaware of it when he developed his ideas, and Turing’s approach proved more useful for the design of real-world machines. This was because he showed how to design a Universal Turing Machine that could be programmed to run any computation. This universal machine, with a concrete architecture based on the work of John von Neumann, is the basic idea behind the machine on which you are reading my answer.

As you noted, “computable” is defined as “computable on a Turing machine”, and all other reasonable models of computation have proven to be equivalent in their power. The belief that all reasonable models of computation are equivalent in what decision problems they can solve is known as the Church-Turing thesis. In its original form, it is almost universally accepted by the learned community. In fact, it is not entirely clear what it would even mean to prove or disprove the Church-Turing thesis; in many ways it becomes an empirical question.

However, there is still the extended Church-Turing thesis, which asks the slightly more subtle question: what can be computed *efficiently*? Many classical models, such as the $\lambda$-calculus, Turing machines, tag-based systems, cellular automata, etc., are equivalent under the extended thesis as well. However, the recent development of quantum computing casts doubt on the extended thesis. Although most people who work on quantum computers (including me) believe they are more efficient than classical ones, the matter is subject to scholarly debate.

Note that in terms of the coarse notion of what is computable (as opposed to efficiently computable), quantum computing is still equivalent to Turing’s model.
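To make the universal-machine idea concrete, here is a minimal sketch in Python (my own illustrative code, not from Turing's paper): a machine is just a transition table, i.e. data, and one fixed `run` function executes *any* such table. The state names and the example increment machine are invented for illustration.

```python
# A minimal table-driven Turing machine simulator. The machine description
# is plain data -- the same run() function executes any machine, which is
# the essence of Turing's universal-machine idea.

BLANK = "_"

def run(transitions, tape_str, start, halt, max_steps=10_000):
    """Run a Turing machine given as a transition table.

    transitions: {(state, symbol): (new_state, write_symbol, move)}
                 where move is -1 (left) or +1 (right).
    """
    tape = {i: s for i, s in enumerate(tape_str)}  # sparse tape, blank elsewhere
    state, pos = start, 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(pos, BLANK)
        state, tape[pos], move = transitions[(state, symbol)]
        pos += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, BLANK) for i in range(lo, hi + 1)).strip(BLANK)

# Example machine: binary increment (head starts at the leftmost bit).
increment = {
    ("right", "0"): ("right", "0", +1),   # scan right to the end of the input
    ("right", "1"): ("right", "1", +1),
    ("right", BLANK): ("carry", BLANK, -1),
    ("carry", "1"): ("carry", "0", -1),   # propagate the carry leftwards
    ("carry", "0"): ("done", "1", -1),    # absorb the carry and halt
    ("carry", BLANK): ("done", "1", -1),  # all bits were 1: prepend a new 1
}

print(run(increment, "1011", start="right", halt="done"))  # 1011 + 1 = 1100
```

Swapping in a different table (say, for unary addition) reuses the same `run` unchanged; that separation of fixed interpreter from programmable description is what a stored-program computer gives you in hardware.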

Question Source : http://cs.stackexchange.com/questions/2462
