The halting problem is one of the foundational results in computer science. It gives us a tool for explaining the fundamental limitations of computers: what kinds of problems can we solve with computers? What problems are impossible?
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. We say that the halting problem is undecidable over Turing machines.
No, it is impossible: no such general algorithm can exist.
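Turing's argument can be sketched in Ruby. The sketch below assumes a hypothetical halts?(program, input) oracle (which, by Turing's proof, cannot actually be implemented) and shows how its existence would lead to a contradiction; the names halts? and paradox are made up for this illustration.

```ruby
# Hypothetical oracle: would return true if running `program` on `input` halts.
# Turing proved no general implementation of this can exist, so it only raises.
def halts?(program, input)
  raise NotImplementedError, "no general halting decider can exist"
end

# The diagonal construction: if halts? existed, we could build a program
# that halts exactly when halts? says it doesn't.
def paradox(program)
  if halts?(program, program)
    loop { } # halts? said it halts, so loop forever
  end
  # halts? said it loops forever, so halt immediately
end

# Asking whether paradox halts on itself yields a contradiction:
# paradox(paradox) halts if and only if it does not halt.
```

The contradiction means the assumption was wrong: halts? cannot exist as a general algorithm.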
#!/usr/bin/env ruby

def f1
end

def f2(n)
  if n < 0
    f2(n + 2)
  elsif n > 0
    f2(n - 2)
  end
end

def f3(n)
  if n < 2
    return n
  else
    return f3(n - 1) + f3(n - 2)
  end
end

# This one is complicated - don't try it for m or n greater than 2.
# If you can't figure it out - don't worry!
def f4(m, n)
  if m == 0
    return n + 1
  elsif n == 0
    return f4(m - 1, 1)
  else
    return f4(m - 1, f4(m, n - 1))
  end
end
f1: Halts
f2: Halts with even input
f3: Halts
f4: Halts

See the annotated examples below:
#!/usr/bin/env ruby

# This function returns immediately - it does nothing. Therefore, it halts.
def f1
end

# This function converges to 0 if the input is even. When n is 0, the function halts.
# If n is odd, this function does not halt.
def f2(n)
  puts n
  sleep 0.5
  if n < 0
    f2(n + 2)
  elsif n > 0
    f2(n - 2)
  end
end

# This function produces the nth Fibonacci number.
# This function halts, as it converges to the base case where n < 2.
def f3(n)
  if n < 2
    return n
  else
    return f3(n - 1) + f3(n - 2)
  end
end

# (0...10).each { |n| puts "f3(#{n}) => " + f3(n).to_s }

# This function is the Ackermann function.
# It is an example of non-primitive recursion.
def f4(m, n)
  if m == 0
    return n + 1
  elsif n == 0
    return f4(m - 1, 1)
  else
    return f4(m - 1, f4(m, n - 1))
  end
end

# puts "f4(3, 2) => " + f4(3, 2).to_s
For the function f4 above, what do you notice about the number of steps it takes to run as m increases? Does this make it difficult to find out whether it halts or not?
The function f4 is the Ackermann function, whose running time grows explosively rather than linearly: increasing m by even 1 can multiply the number of recursive steps required many times over. This makes it difficult to assess whether the function halts by simple testing or informal reasoning. Proving that this function halts requires more advanced mathematics and logic (the standard proof uses a well-founded ordering on the pairs (m, n)). In general, the complexity of a function like this makes it difficult to say definitively whether it halts without in-depth analysis.
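One way to see the explosive growth is to count recursive calls. The sketch below reuses the same logic as f4 above (renamed ack and given an explicit counter argument for this illustration) and prints how many calls each small evaluation makes.

```ruby
# Same recursion as f4 above, with a counter hash threaded through so we
# can count how many recursive calls each evaluation makes.
def ack(m, n, counter)
  counter[:calls] += 1
  if m == 0
    n + 1
  elsif n == 0
    ack(m - 1, 1, counter)
  else
    ack(m - 1, ack(m, n - 1, counter), counter)
  end
end

[[1, 2], [2, 2], [3, 2]].each do |m, n|
  counter = { calls: 0 }
  result = ack(m, n, counter)
  puts "ack(#{m}, #{n}) = #{result}  (#{counter[:calls]} recursive calls)"
end
```

Even at these tiny inputs the call counts climb steeply as m grows, and ack(4, 2) is already far beyond practical computation, which is why testing alone cannot settle whether such a function halts.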
Computers are, of course, still useful devices capable of performing valuable calculations. However, while they increase the speed at which we can solve problems, they don't give us any fundamentally new mechanisms for solving difficult problems.
The Traveling Salesman Problem is one such difficult problem. While a computer can solve it far faster than someone working manually (e.g. with pen and paper), it can't reduce the inherent complexity of the problem or of devising an algorithm to solve it.
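A brute-force sketch makes the point concrete: trying every route does solve the problem, but the number of routes grows factorially with the number of cities, so a faster computer only pushes the practical limit slightly. The city names and distances below are invented for this example.

```ruby
# Symmetric distances between four made-up cities; each key is a sorted pair.
distances = {
  ["A", "B"] => 2, ["A", "C"] => 9, ["A", "D"] => 10,
  ["B", "C"] => 6, ["B", "D"] => 4, ["C", "D"] => 8
}
dist = ->(x, y) { distances[[x, y].sort] }

# Brute force: start and end at "A", try every ordering of the other cities.
# With k cities to visit this examines k! routes - factorial growth.
cities = ["B", "C", "D"]
best = cities.permutation.map do |route|
  stops = ["A"] + route + ["A"]
  stops.each_cons(2).sum { |x, y| dist.call(x, y) }
end.min

puts "Shortest tour length: #{best}"
```

Adding one more city multiplies the number of routes to check, which is why raw speed alone does not make the problem easy.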
It is generally accepted that computers will never go beyond the capabilities of Turing-complete machines. However, some people have theorized that it may be possible to solve some of these problems using the concept of infinity.
These so-called "infinity machines" (hypercomputers) would work by solving problems with unlimited resources. Many problems are difficult only because there are limits on the time and resources we can spend on them; if we could spend an infinite amount of time and resources, some of those problems would potentially become solvable.
There may be other approaches that people have not thought of yet. Some people believe that quantum computing may provide computational capabilities beyond existing computers, though quantum computers as currently understood can compute only what Turing machines can, and so cannot solve the halting problem either.