Can the halting problem be solved in practice?

14

The halting problem can be stated as: given a program and an input for it, will the program finish executing and return an answer, or will it run forever in an infinite loop?

Turing proved this undecidable; that is beyond question. The thing is: I came up with a simple method that seems to solve the problem.

If I take a complete dump of memory and registers at every step of the program's execution, I will have the program's state at every moment. From there it is enough to check whether any state has repeated. Since all the program can do is transition to the next state based solely on the previous state, if a state repeats then it follows immediately that the program is in an infinite loop.
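The detection scheme described above can be sketched for a toy machine whose entire state fits in a single `int` (the machine and its transition function here are made up purely for illustration; a real machine's state would be the full memory dump):

```java
import java.util.HashSet;
import java.util.Set;

public class LoopDetector {
    // Hypothetical toy "machine": the whole state is one int, and each
    // step is a pure function of the previous state (16 possible states).
    static int step(int state) {
        return (state * 5 + 3) % 16;
    }

    // Runs the machine, remembering every state seen so far.
    // Returns true as soon as a state repeats (i.e. the machine loops forever).
    static boolean loopsForever(int initial, int maxSteps) {
        Set<Integer> seen = new HashSet<>();
        int s = initial;
        for (int i = 0; i < maxSteps; i++) {
            if (!seen.add(s)) return true; // state repeated -> infinite loop
            s = step(s);
        }
        return false; // no repeat within maxSteps (inconclusive)
    }

    public static void main(String[] args) {
        // With only 16 states, a repeat is guaranteed within 17 steps.
        System.out.println(loopsForever(0, 1000)); // prints true
    }
}
```

The catch, as the question itself notes, is that `seen` must hold entire memory dumps, not single ints.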

Memory is finite, so there is a finite number of states. Any program that does not halt will eventually repeat a state. I'm not saying this is feasible; after all, with four gigabytes of memory there are at least 2^4,000,000,000 states. I don't even know how many digits that number has.
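As an aside, the digit count can be estimated without ever materializing the number, since 2^n has ⌊n · log₁₀ 2⌋ + 1 decimal digits:

```java
public class DigitCount {
    public static void main(String[] args) {
        long n = 4_000_000_000L;
        // digits of 2^n = floor(n * log10(2)) + 1
        long digits = (long) (n * Math.log10(2)) + 1;
        System.out.println(digits); // roughly 1.2 billion digits
    }
}
```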

My question is: what is wrong here? The problem is clearly undecidable, so where does this method fail?

asked by anonymous 29.08.2014 / 02:22

3 answers

8

The idea is interesting, but the halting problem says: given *any* program and an input for it... If you limit your program to running in a finite-memory environment (even a 4 GB one), then you are necessarily restricting the scope of the original problem. For the program to be truly arbitrary you need to remove that restriction, which means your finite solution (impractical anyway, since that state count has more than a billion digits) does not apply to the general case.

Now, your question is whether, for the specific case of one concrete program, it can be solved "in practice". In practice, you have nowhere to store all the memory dumps your solution needs, and you do not have the (almost) infinite time it requires either. :)

29.08.2014 / 02:28
6

In fact, your solution fails to catch even simple cases. Naturally, any program that keeps changing its state without ever returning to an earlier one is never detected as being in an infinite loop.

Imagine the following Java program:

import java.util.LinkedList;

public static void main(String[] args) {
    LinkedList<Integer> l = new LinkedList<>();
    while (true) {
        l.add(123); // the list grows forever: every iteration is a brand-new state
    }
}

Each time a dump is taken, the program will be in a brand-new state. Being always in a new state, your algorithm for checking whether it has entered a previously known state goes down the drain. Nothing doing. Removing the memory limitation and allowing arbitrarily large (effectively infinite) addressing, the linked list would grow to infinity without a single state ever repeating.

I believe your heuristic would get further by following up with static analysis of the code. If instead you supply only the program as a black box, perhaps nothing more than an automaton in assembly-like form, then we have a bigger problem: the more basic the representation of the program (the closer it is to a Turing machine), the harder static analysis becomes.

Another heuristic (perhaps even more efficient) that computing service providers use is to impose a timeout on execution. If you are dealing with a day-to-day problem, you are most likely dealing with a problem in class P, or with heuristics that give a satisfactory though imperfect answer, so it should run fast.
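A minimal sketch of this timeout heuristic in Java, using `ExecutorService` and `Future.get` with a deadline (the runaway task here is just a placeholder that spins until interrupted):

```java
import java.util.concurrent.*;

public class TimeoutHeuristic {
    // Returns true if the task finished within the deadline,
    // false if we gave up and treated it as "possibly infinite".
    static boolean finishedInTime(Runnable task, long seconds) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> f = pool.submit(task);
        try {
            f.get(seconds, TimeUnit.SECONDS);
            return true;
        } catch (ExecutionException e) {
            return true; // it finished, albeit by throwing
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt the (possibly endless) task
            return false;
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // A loop that never ends on its own but does respond to interruption:
        Runnable endless = () -> { while (!Thread.currentThread().isInterrupted()) { } };
        System.out.println(finishedInTime(endless, 1)); // prints false
    }
}
```

Note that this can only ever give a heuristic answer: a slow-but-terminating task is indistinguishable from a truly infinite one within the deadline.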

More complex problems (such as graph isomorphism, or predicting the metastable structure of a protein from its amino-acid sequence) genuinely require greater computational power, and the timeout heuristic could give false positives for "eternal execution". This safety measure is provided in high-performance computing environments, such as the SLURM job scheduler, to prevent someone from inadvertently submitting an intractable case and monopolizing all the computing power of a research cluster. The same heuristic is also applied in communication between two distinct machines: since hypothetically you could supply an input that sends the other machine into infinite execution, the socket's read and write timeouts guard against it.

16.02.2018 / 04:22
5

If, at each processor instruction, you dump the 4 GB of memory (and also the caches, the disk and the registers), then any program that keeps running will eventually repeat a state, and you can prove this with the dumps. The halting problem applies to machines with infinite memory, but this case is one of finite memory.

The halting problem is (theoretically) solvable when memory is finite, by doing exactly what you propose. However, you need to treat this "theoretically" with plenty of attention.

In the worst case, you would have to go through all 2^n states of memory until one repeats. If n is small enough (some device with only a few bytes of total memory), this might even be feasible. For a typical computer with a few gigabytes of memory, it is totally infeasible, because thermodynamics will impose barriers before you can finish, and then one of these things happens:

  1. The computer is destroyed by forces external to it.

  2. The Universe ends before you go through all the states. I do not know whether it would be by Big Crunch, Big Rip, Big Freeze, Big Bounce, Heat Death or something else; whatever the case, it would not be good for you. This implies hypothesis 1 above.

  3. You consume all the energy in the Universe just to create and store enough memory dumps (or even just to keep your computer running). This implies hypothesis 2 above, which in turn implies hypothesis 1.

  4. Each additional bit of memory doubles the number of possible states and therefore doubles the time and energy needed to traverse them. So it is not hard to see that, with enough memory, one of the three things listed above would occur (and would occur very early!) in the process.
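To get a feel for the doubling argument, here is a rough back-of-the-envelope calculation (the rate of one billion states enumerated per second is an arbitrary assumption):

```java
public class StateCount {
    // Years needed to enumerate all 2^bits states at the given rate.
    static double yearsToEnumerate(int bits, double statesPerSecond) {
        return Math.pow(2, bits) / statesPerSecond / (365.25 * 24 * 3600);
    }

    public static void main(String[] args) {
        double rate = 1e9; // assumed: one billion states per second
        for (int bits : new int[] {32, 64, 128, 256}) {
            // 32 bits is seconds; 64 bits is already centuries;
            // 128 bits dwarfs the age of the Universe.
            System.out.printf("%d bits: ~%.3g years%n", bits, yearsToEnumerate(bits, rate));
        }
    }
}
```

And a 4 GB machine has not 256 but 32 billion bits of state, so each extra bit doubling the total puts it unimaginably beyond even the 256-bit row.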

    16.02.2018 / 16:57