What do I do when I get a bad_alloc error?

6

Before getting to the question itself, let's assume a scenario:

  • I have a program written with GTK+;
  • The program uses operator new to allocate resources (not to be confused with operator new(std::nothrow));

At some point in the program I need to allocate a resource, and I use the new operator. If new cannot allocate memory on the heap, it throws an exception of type std::bad_alloc.

If there is no more memory available on the heap, I will not be able to draw any GTK+ window to inform the user of the problem. I also cannot allocate any other resource (the new operator has already signalled the memory shortage with std::bad_alloc).
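
Just to make the scenario concrete, a minimal sketch of the two forms of new mentioned above (the absurd size is only there to force a failure):

    #include <cstddef>
    #include <iostream>
    #include <new>

    int main() {
        const std::size_t huge = static_cast<std::size_t>(-1) / 2; // absurd size, just to force a failure

        try {
            char *p = new char[huge];            // plain new: throws std::bad_alloc on failure
            delete[] p;
        } catch (const std::bad_alloc &e) {
            std::cerr << "new failed: " << e.what() << '\n';
        }

        char *q = new (std::nothrow) char[huge]; // nothrow form: returns nullptr instead of throwing
        if (q == nullptr)
            std::cerr << "new (std::nothrow) returned nullptr\n";
        delete[] q;                              // deleting a null pointer is a no-op
    }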

  • What is the best course of action at this point?
  • Should I quietly quit the program and just log the error to std::cerr?
  • What other solutions are possible for this type of problem?
  • asked by anonymous 06.11.2014 / 18:27

    3 answers

    4

    My list of options is as follows:

  • In everyday (non-critical) situations, aborting the program due to lack of memory is pretty much all you can do (preferably recording the fact in a log). On cell phones that is exactly what happens. If the operating system does overcommitting (it says the allocation succeeded but only really hands out the memory when it is used), testing the return of malloc (in C) or catching the exception from new (in C++) will not help you, so you are really screwed.

  • If your application has a main "waiting for user commands" loop (since it is a GTK+ application) and some well-defined operations, with a clear start and end, that can allocate a lot of memory, you could catch the exception in the main loop and warn the user that the operation failed (there is a sketch of this at the end of this answer). By the time you are back in the main loop some memory should already have been released (the destructors have run). You can also warn in the message that the program may have become unstable and suggest that the user restart it.

  • Catching bad_alloc in the main loop can also be useful when the allocation failure was caused by another error, such as requesting -1 bytes of memory, which when converted to unsigned becomes a gigantic number. In that case the allocation error does not indicate that the system is out of memory, but rather that some parameter in your code was not validated, since the program should never request such a colossal amount of memory.

  • If your program needs to run continuously, consider creating a supervisor process that restarts the program whenever it detects that it terminated unexpectedly (there is a sketch of this at the end of this answer). The newly created process will start again in a much more predictable state than a program that has just tried to recover from a bad_alloc!

  • So that bad_alloc does not come as a surprise in the middle of an operation, you could pre-allocate enough memory up front and work only inside it. But be careful: the system allocator may provide read/write protections around the allocated areas that you will lose if you try to implement everything by hand (cough, OpenSSL, cough, Heartbleed, cough).

  • To handle bad_alloc, you can also pre-allocate a block of memory and leave it unused, releasing it only when an allocation error occurs, so that the error-handling path still has memory to work with. I have heard this called the "parachute" technique (it appears in the sketch at the end of this answer).

  • Note that all the options (except the first) are labor-intensive and depend on how your program works (for example, "batch" operations can be restarted and resume from where they stopped). You will need to simulate out-of-memory situations to make sure the handling actually works, and deal with quirks of specific operating systems, such as overcommitting.

    If you are implementing a language, a database, or some kind of server, these safety measures are super important. If you are implementing a Flappy Bird clone, not so much...
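
    To make options 2 and 6 concrete, here is a minimal sketch of a main-loop callback protected by a pre-allocated "parachute". The names do_heavy_operation, show_error_to_user and the 1 MiB parachute size are made up for the example; a real GTK+ program would open a dialog instead of writing to std::cerr:

        #include <cstddef>
        #include <iostream>
        #include <memory>
        #include <new>

        // "Parachute": memory reserved up front and released only when an
        // allocation fails, so the error-reporting path still has room to work.
        static std::unique_ptr<char[]> g_parachute(new char[1024 * 1024]); // 1 MiB, arbitrary

        // Stand-in for real work; the absurd size just forces a std::bad_alloc.
        void do_heavy_operation() {
            char *p = new char[static_cast<std::size_t>(-1) / 2];
            delete[] p;
        }

        // Stand-in for a GTK+ error dialog; a real program would show a window here.
        void show_error_to_user(const char *msg) {
            std::cerr << msg << '\n';
        }

        // Callback run from the main loop instead of calling the operation directly.
        void on_user_command() {
            try {
                do_heavy_operation();
            } catch (const std::bad_alloc &) {
                g_parachute.reset();   // release the parachute so the error path can allocate
                show_error_to_user("Out of memory: the operation was cancelled.\n"
                                   "The program may be unstable; consider restarting it.");
            }
        }

        int main() {
            on_user_command();         // simulates one iteration of the main loop
        }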
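
    And a minimal sketch of the supervisor process from option 4, assuming a POSIX system; ./worker is a made-up name for the real program:

        #include <cstdio>
        #include <sys/wait.h>
        #include <unistd.h>

        int main() {
            for (;;) {
                pid_t pid = fork();
                if (pid < 0) {
                    std::perror("fork");
                    return 1;
                }
                if (pid == 0) {
                    // Child: run the real program, which may die on an unhandled bad_alloc.
                    execl("./worker", "./worker", static_cast<char *>(nullptr));
                    std::perror("execl");   // only reached if exec itself fails
                    _exit(127);
                }
                int status = 0;
                waitpid(pid, &status, 0);
                if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
                    break;                  // clean exit: stop supervising
                std::fprintf(stderr, "worker terminated unexpectedly, restarting...\n");
                sleep(1);                   // avoid a tight restart loop
            }
            return 0;
        }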

        
    10.11.2014 / 20:39
    0
  • In general, some critical procedure that is currently in progress has to be shut down. The problem is that during shutdown more errors can appear, if finishing something requires allocating more memory. These errors range from segfaults to silent crashes of the application, and they are especially tricky to handle when they are thrown by a library you do not have the source code for.

    Perhaps your application caches something (the contents of a file, for example); that cache can then be released, allowing the application to keep running. But cases like this are rare and vary from application to application.

  • If you cannot free memory and system resources (file descriptors, etc.), this may be the only solution. Bear in mind, though, that at the moment of logging the operating system may try to allocate memory and run into the same bad_alloc situation, in which case your application will be terminated forcibly.

  • As discussed in item 1, if your application creates caches or pre-allocates resources, it may be possible to release those resources and continue running the application normally (the sketch at the end of this answer shows one way to do this with std::set_new_handler).

    This cache example I gave is used transparently by the operating system itself.

    When, for example, a file is opened and read, its contents are kept in memory for an indefinite period. Your changes, of course, are propagated to secondary storage after some time (see the commit parameter of ext4 for more information).

    The file data is kept in memory until memory fills up (see the note below). The operating system then deallocates memory previously occupied by the file cache until there is enough for the requesting application. If you want to observe this behavior, the top program and the /proc/meminfo file are the simplest ways to see it (htop does not display these numbers, so it is not very helpful here). Windows also displays this data, although I find it a little less intuitive, and Windows is a bit more aggressive about the subject I cover in the note below.

    Note: releasing memory only when a program needs it is a costly operation, which can degrade application performance. So the operating system actually keeps an area of memory free at all times, allowing memory to be handed out quickly to the application that needs it. After handing it out, the OS may or may not evict data from the cache, always keeping some amount of memory free and ready for allocation. It is worth mentioning that paging to disk is the last resort used by the OS (although, again, Windows likes to stray from this rule a little).

    Just to illustrate, my machine (Gentoo) has 16 GB of RAM and 8 GB of swap. I am using 2.8 GB in applications, 5 GB in buffers and 6.8 GB in caches (and 0 GB of swap). That is, only 1.4 GB of my RAM is effectively free.
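
    One way to implement the "release the cache and keep running" idea from item 3 is std::set_new_handler; the handler below and the g_cache container are made up for the example (a sketch, not a full recipe):

        #include <iostream>
        #include <new>
        #include <vector>

        // Hypothetical application cache that can be sacrificed under memory pressure.
        static std::vector<std::vector<char>> g_cache;

        // Installed with std::set_new_handler: called by operator new when an
        // allocation fails, before it gives up and throws.
        void drop_cache_on_oom() {
            if (!g_cache.empty()) {
                g_cache.clear();
                g_cache.shrink_to_fit();    // actually give the memory back
                std::cerr << "allocation failed: cache dropped, retrying\n";
                return;                     // operator new will retry the allocation
            }
            // Nothing left to free: restore the default behavior so std::bad_alloc is thrown.
            std::set_new_handler(nullptr);
        }

        int main() {
            std::set_new_handler(drop_cache_on_oom);
            // ... fill g_cache and run the application normally from here on ...
        }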

        
    06.11.2014 / 19:47
    0

    If you assume that heap memory may run out, you should prepare your application to request more heap space at startup; that is one way to mitigate the risk of running out of memory later. But the best alternative in case of failure really depends on the purpose of your application. If it is a performance-critical application, you can define rules and initial parameters that guarantee a given task can be started, and alert the user when the available resources do not fully meet the need. By constantly evaluating resources you can pause execution and thereby avoid critical errors, informing the user of the situation and of ways to work around the problem (a rough sketch of such a check follows below). I do not know exactly what the purpose and mode of operation of your project is, but the use of a cache is undoubtedly an excellent alternative for managing finite resources.
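
    As an illustration of that "constant evaluation of resources", a Linux-specific sketch using sysinfo(); the 512 MiB threshold is arbitrary and the check ignores reclaimable page cache, so it is only an approximation:

        #include <cstdio>
        #include <sys/sysinfo.h>   // Linux-specific

        // Returns true if the system reports at least `needed` bytes of free RAM.
        bool enough_memory(unsigned long long needed) {
            struct sysinfo si;
            if (sysinfo(&si) != 0)
                return true;                // cannot tell; assume it is fine
            unsigned long long free_bytes =
                static_cast<unsigned long long>(si.freeram) * si.mem_unit;
            return free_bytes >= needed;
        }

        int main() {
            const unsigned long long threshold = 512ULL * 1024 * 1024; // arbitrary 512 MiB
            if (!enough_memory(threshold)) {
                std::fprintf(stderr, "Low memory: postponing the heavy task\n");
                return 1;
            }
            // ... start the memory-hungry task here ...
            return 0;
        }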

        
    10.11.2014 / 15:18