How does optimization -O3 break some programs?


When compiling my own programs, I am free to change the compiler flags and turn on -O3 optimization.

This optimization can break some programs. I understand that the compiler makes a number of assumptions at this optimization level, but I cannot understand how that can break a program.

I have already asked this question about -fomit-frame-pointer, and it is clear that that flag can easily break a program on some architectures. My understanding in that case is that the compiler knows when to enable the flag and when not to.

  • How does optimization -O3 break a program?

Note: For this question, consider the gcc and clang compilers.

    
asked by anonymous 20.02.2017 / 12:40

2 answers


There are two main causes for programs breaking under aggressive optimization (such as -O3):

  • compiler bugs
  • programmer errors

Compiler bugs happen because compiler developers make mistakes, and there is not much the user can do about it (other than filing a report and waiting for the problem to be fixed). This can occur, for example, when an optimization algorithm is implemented without considering every possible input/output case.

Programmer errors usually stem from bad programming practices, so they are more common, and I will give an example I have actually found in real code:

Let's say you have a constant variable:

int const a = 5;

And a function that receives an int* pointer:

void f(int* x)

You can call the function by passing a pointer to a:

f((int*)&a);

Now consider the following code snippet:

int const a = 5;
f((int*)&a);
if(a != 5)
{
  // do something
} else
{
  // do something else
}

It seems plausible for an aggressive optimization to eliminate the if branch: since a was declared constant, its value should still be 5. It turns out, however, that the function f can modify the value of a through the pointer.

Without optimizations, the compiler would emit instructions to reload the value of a and evaluate the condition, making the program work as expected, so it would seem that the optimization is what broke the program. In fact, a programmer error occurred here: an unsafe cast of a constant variable (and modifying it afterwards results in undefined behavior according to the standard).
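For reference, here is a minimal, self-contained version of the snippet above. The body of f is just an illustration (it was not part of the original snippet); all it needs to do is write through the pointer:

#include <stdio.h>

// Writes through the pointer, ignoring that the caller's object is const.
void f(int* x)
{
  *x = 42;
}

int main(void)
{
  int const a = 5;
  f((int*)&a);               // casts away const; modifying a is undefined behavior

  if(a != 5)
  {
    puts("a changed");       // what the programmer expected to see
  } else
  {
    puts("a is still 5");    // what -O3 may print, since a was declared const
  }
  return 0;
}

Built with gcc -O0 this typically prints "a changed"; with gcc -O3 it may print "a is still 5" (or do anything else, since the behavior is undefined).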

Compiler bugs come and go: on the one hand, better ways of testing compilers keep reducing the number of bugs; on the other hand, compilers compete aggressively for ever more impressive optimization techniques, and sometimes new bugs are introduced. My feeling is that compilers are getting more robust and that the -O3 flag (or its equivalent) is safe. (I notice that new optimization techniques are usually released behind a specific flag and only later incorporated into -O3.)

In my personal opinion, the biggest problem is that programmers still use techniques that rely on direct memory manipulation, tricks that fall outside the language's abstraction and break the assumptions on which the compiler bases its optimizations.
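A typical example of such a trick (my own illustration, assuming unsigned int and float have the same size on an IEEE-754 target) is type punning through a pointer cast, which violates the strict aliasing rule:

#include <stdio.h>
#include <string.h>

// Reinterprets the bits of a float through an unsigned int pointer.
// This violates strict aliasing: gcc and clang may assume that an
// unsigned int* and a float* never refer to the same object, so at
// -O2/-O3 loads and stores can be reordered or cached in surprising ways.
unsigned int bits_of(float f)
{
  return *(unsigned int*)&f;     // undefined behavior
}

// A well-defined alternative: copy the bytes instead of casting the pointer.
unsigned int bits_of_ok(float f)
{
  unsigned int u;
  memcpy(&u, &f, sizeof u);
  return u;
}

int main(void)
{
  printf("%08x\n", bits_of_ok(1.0f));   // prints 3f800000 on IEEE-754 targets
  return 0;
}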

    
20.02.2017 / 15:57

Some compilers end up defining a specific behavior for cases that the language specifies as undefined behavior.

There is some controversy here. My understanding is that the compiler is allowed to do this, breaking only code that chose to rely on behavior that cannot be optimized. The whole point of having undefined behavior is precisely to make optimizations easier.
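A classic illustration (my example, not from the standard's text) is signed integer overflow: because it is undefined, the compiler may assume it never happens and optimize accordingly.

#include <limits.h>
#include <stdio.h>

// Since signed overflow is undefined behavior, gcc and clang at -O2/-O3
// are allowed to assume that x + 1 > x always holds for a signed int,
// and may fold this function to simply return 1.
int always_true(int x)
{
  return x + 1 > x;
}

int main(void)
{
  // Without optimizations this typically prints 0, because INT_MAX + 1
  // wraps around on most hardware; with -O3 it may print 1.
  printf("%d\n", always_true(INT_MAX));
  return 0;
}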

It becomes a problem if you want portability: one compiler breaks the code in one way and another breaks it in a different way. You often need to write different code for each implementation, or simply not enable this optimization.

-O3 is the most aggressive optimization level. There are other levels to choose from if you do not want one that can modify the behavior in an undesirable way. Even the lower levels can cause problems in some code. Some compilers have independent levels or optimizations that are even riskier than -O3.

This is because they do not adhere strictly to the language specification and rely on specific assumptions.
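As a sketch of one of these riskier modes (my own example, assuming gcc or clang): -Ofast enables -ffast-math, which tells the compiler to assume that NaNs and infinities never occur, so even a correct isnan check can change behavior.

#include <math.h>
#include <stdio.h>

int main(void)
{
  volatile double zero = 0.0;    // volatile keeps the division at run time
  double x = zero / zero;        // produces a NaN

  // Under -ffast-math (part of -Ofast) the compiler may assume x is never
  // NaN and optimize this check away, even though x really is NaN here.
  if(isnan(x))
    puts("x is NaN");            // printed at -O0, possibly not at -Ofast
  else
    puts("x is a number");
  return 0;
}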

    
20.02.2017 / 12:58