How to implement journaling in Python?


I need to do a series of operations involving binary files (I cannot use a database), and I need to ensure that they complete successfully even if the program dies in the middle of an operation. To do this, I see no way out but to implement a journaling system by hand. I started to write some code, but I'm not sure it is correct (i.e. whether there are cases where a failure in the middle of a task can leave it in an inconsistent state, including a failure while writing the journal itself).

Is there anything ready-made for this? And if there isn't, is there any problem with my attempted solution?

def nova_tarefa(args):
    refazer_journal() # Redo whatever was left unfinished last time (if anything)
    with open('args.json', 'w') as f:
        json.dump(args, f) # Prepare the new tasks
    with open('journal.txt', 'w') as f:
        f.write('comecar\n') # Put the new tasks in the journal
    refazer_journal() # Perform the new tasks

def refazer_journal():
    if not os.path.exists('journal.txt'):
        passos = []
    else:
        with open('journal.txt', 'r') as f:
            passos = [x.strip() for x in f.readlines() if x.strip()]
    if not passos: # If there is nothing unfinished, stop
        return
    with open('args.json', 'r') as f:
        args = json.load(f)

    # Perform the tasks that have not yet been marked as completed
    for indice, tarefa in enumerate(args['tarefas']):
        if len(passos) <= indice + 1:
            realizar_tarefa_indepotente(tarefa)
            with open('journal.txt', 'a') as f:
                f.write('\ntarefa concluida\n')

    with open('journal.txt', 'w') as f:
        pass # Everything succeeded: empty the journal

(Note: the exact text written in the journal ( comecar , tarefa concluida ) is not important; if even a single character is written on a line, that step is already considered to have succeeded.)
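For reference, a minimal sketch of writing the journal atomically via a temporary file plus rename (the helper name `write_journal_atomic` is hypothetical, not part of the code above); this closes the window in which `journal.txt` itself could be left half-written:

```python
import os
import tempfile

def write_journal_atomic(path, text):
    # Write to a temporary file in the same directory, flush and fsync it,
    # then rename it over the target. On POSIX the rename is atomic, so a
    # reader sees either the old journal or the new one, never a partial file.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'w') as f:
            f.write(text)
            f.flush()
            os.fsync(f.fileno())  # force the data out of the OS buffers
        os.replace(tmp, path)  # atomic replace (Python 3.3+)
    except BaseException:
        os.remove(tmp)  # clean up the temporary file on failure
        raise
```

With this helper, a crash during the write leaves the old `journal.txt` untouched instead of truncated.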

Test code (with me always worked - including editing journal.txt to introduce a series of errors):

def realizar_tarefa_indepotente(tarefa):
    if 'falhar' in tarefa:
        raise Exception('Tarefa falhou!')
    print('Realizando tarefa: ' + tarefa['nome'])

tarefa_ok = {'tarefas':[{'nome':'foo'}, {'nome':'bar'}, {'nome':'baz'}]}
falha_3 = {'tarefas':[{'nome':'foo'}, {'nome':'bar'}, {'nome':'baz','falhar':True}]}
falha_2 = {'tarefas':[{'nome':'foo'}, {'nome':'bar','falhar':True}, {'nome':'baz'}]}
falha_1 = {'tarefas':[{'nome':'foo','falhar':True}, {'nome':'bar'}, {'nome':'baz'}]}
falha_1_3 = {'tarefas':[{'nome':'foo','falhar':True}, {'nome':'bar'}, {'nome':'baz','falhar':True}]}
asked by anonymous 14.02.2014 / 06:53

4 answers


I wrote an article describing how to do this: link — and someone even did a Python implementation based on the article: link

The basic technique is to write at least two files, first one, then the other, in synchronous mode, as described in other answers.

The "seasoning" of the scheme is to add a timestamp and a hash to both files, so that you know which file is the most recent (by timestamp) and can check whether the file is complete (by hash).

The file system should implement journaling for at least the metadata (this is usually the case; journaling the data costs performance). This ensures that at least the folder and the file are readable. If the data is corrupted, you can detect it via the hash.
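A minimal sketch of the two-file scheme described above (the file names and record layout are my own choices, not from the article): each record carries a timestamp and a SHA-256 hash, the writer alternates between two files, and the reader picks the newest record whose hash checks out.

```python
import hashlib
import json
import os
import time

FILES = ('state_a.json', 'state_b.json')

def save(data, turn):
    payload = json.dumps(data, sort_keys=True)
    record = {'timestamp': time.time(),
              'hash': hashlib.sha256(payload.encode()).hexdigest(),
              'payload': payload}
    path = FILES[turn % 2]  # alternate files so one valid copy always survives
    with open(path, 'w') as f:
        json.dump(record, f)
        f.flush()
        os.fsync(f.fileno())  # force the record out of the OS buffers

def load():
    best = None
    for path in FILES:
        try:
            with open(path) as f:
                record = json.load(f)
        except (IOError, OSError, ValueError):
            continue  # file missing or not even parseable: skip it
        if hashlib.sha256(record['payload'].encode()).hexdigest() != record['hash']:
            continue  # payload corrupted (hash mismatch): skip it
        if best is None or record['timestamp'] > best['timestamp']:
            best = record  # keep the newest valid record
    return json.loads(best['payload']) if best else None
```

Even if a crash corrupts the file being written, `load()` falls back to the other file, which still holds the previous consistent state.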

15.02.2014 / 00:09
  • An easily visible problem in your implementation is the way you check whether the journal.txt file exists: the check happens only after you have already tried to read the file, and existence checks are subject to race conditions anyway; see How to check if a file exists using Python .
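The race-free alternative is the EAFP style: just attempt the read and treat "file missing" as "nothing to redo". A sketch (the helper name is mine):

```python
def ler_passos(path='journal.txt'):
    # Try to read the journal; a missing file simply means no pending steps.
    try:
        with open(path) as f:
            return [x.strip() for x in f if x.strip()]
    except IOError:  # FileNotFoundError in Python 3 (IOError is its alias/parent)
        return []
```

This removes the window between the existence check and the open() call in which another process could create or delete the file.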

  • In addition, it is best not to catch "naked" exceptions, but rather to specify the exact type of exception you expect, e.g. IOError , OSError , etc. Likewise, when you raise an exception, it is good to use one with a specific type related to the kind of error.

  • To iterate over the tasks, as an alternative, instead of iterating over the result of enumerate() , you can simply iterate over the list:

    passos = [{'nome':'foo','falhar':True}, {'nome':'bar'}, {'nome':'baz'}]
    for tarefa in passos:
  • The way you check the 'falhar' value works, but it's not very explicit. I'd recommend switching to if 'falhar' in tarefa if you're simply checking whether the key exists, or if tarefa.get('falhar') if you want to make sure that tarefa['falhar'] is truthy.
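A tiny sketch of the difference between the two checks mentioned above (the dictionary contents are illustrative):

```python
tarefa = {'nome': 'foo', 'falhar': False}

# Membership only asks whether the key exists, ignoring its value:
tem_chave = 'falhar' in tarefa            # True even though the value is False

# .get() returns the value (or None when absent), so falsy values count as "no":
deve_falhar = bool(tarefa.get('falhar'))  # False: key present but value falsy
```

The original code treats the mere presence of the key as "fail", which is why the first form matches its behavior.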

14.02.2014 / 13:42

Ensuring files actually reach the disk

My first thought regarding guaranteeing that files are really written was the buffer .

See what the file.flush() documentation says:


Note: flush() does not necessarily write the file's data to disk. Use flush() followed by os.fsync() to ensure this behavior. (free translation)

I assume the tasks can involve a much larger volume of data than journal.txt . If a power failure occurs at the end of the refazer_journal() method, the journal may already have been written to the disk while the task data is still sitting in a buffer.

I understand that the journal data is sent afterwards , but I do not think there is any guarantee that the buffers will be flushed in that order by the operating system and the hardware.
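The pattern the quoted documentation recommends can be sketched as follows (the helper name is mine): calling os.fsync() blocks until the kernel reports the data has reached the disk, so writing the task data this way before appending to the journal preserves the ordering the journaling scheme depends on.

```python
import os

def escrever_sincronizado(path, data):
    with open(path, 'w') as f:
        f.write(data)
        f.flush()             # push Python's internal buffer to the OS
        os.fsync(f.fileno())  # push the OS buffer to the disk before returning
```

Note that fsync() is expensive (it defeats write-back caching), so it should be reserved for the points where durability ordering actually matters.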

14.02.2014 / 14:16

To create file-based journaling on the filesystem, I suggest using an epoch-based naming scheme. Writing and deleting files one at a time is always safer than manipulating the same file a thousand times. The journal itself can become corrupted, since atomicity is hard to guarantee in Python for long-running processes.
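A minimal sketch of that idea (all names are hypothetical): each completed step becomes its own write-once file, named with the step index and an epoch timestamp, so no file is ever modified in place.

```python
import os
import time

JOURNAL_DIR = 'journal_entries'

def marcar_concluida(indice):
    # One write-once file per completed step; the epoch timestamp keeps
    # entries ordered in time, the index identifies the step.
    os.makedirs(JOURNAL_DIR, exist_ok=True)
    nome = '%d_%017.6f' % (indice, time.time())
    with open(os.path.join(JOURNAL_DIR, nome), 'w') as f:
        f.flush()
        os.fsync(f.fileno())  # make the (empty) marker file durable

def passos_concluidos():
    # The set of completed step indices is just the directory listing.
    try:
        return sorted(int(n.split('_')[0]) for n in os.listdir(JOURNAL_DIR))
    except OSError:
        return []  # directory missing: nothing completed yet

def limpar_journal():
    # Deleting the markers one by one replaces truncating a shared file.
    for n in os.listdir(JOURNAL_DIR):
        os.remove(os.path.join(JOURNAL_DIR, n))
```

Because each marker is created once and never rewritten, a crash can at worst lose the very last marker, never corrupt the record of earlier steps.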

14.02.2014 / 15:19