Difference between ways to perform a backup (disk space, buffer, etc.)

1

Suppose I have a machine with 10GB of free disk space and a backup script for a PostgreSQL database which, when run, backs up locally and then copies the backup to a remote server.

The problem is that when the backup reaches 10GB or more, the machine runs out of disk space and soon everything else running on it starts failing.

Question: If I use pg_dump and, instead of backing up locally, point it at another machine with the -h option, will the buffering be enough to keep the machine from locking up for lack of space, even if the total backup size is greater than 10GB?

asked by anonymous 12.02.2014 / 11:18

3 answers

1

I recommend you take a look at the documentation to get some ideas on how to approach backups. Basically, there are three options for backing up (a rough sketch of each follows the list):

  • dump (via the pg_dump utility)
  • file system (a copy of the data files made with rsync / scp / etc.)
  • PITR (done with a custom script, pg_rman, or even pgbarman)
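As a rough illustration (the host names, paths and database name are placeholders, not something given in the question or this answer), the three approaches look roughly like this:

# 1) dump: logical backup streamed straight to a backup host, no local copy
pg_dump -h db_server dbname | ssh backup_host 'cat > /backups/dbname.sql'

# 2) file system: copy the data directory (server stopped, or from a consistent snapshot)
rsync -a /var/lib/postgresql/9.3/main/ backup_host:/backups/pgdata/

# 3) PITR: base backup plus continuous archiving of WAL segments
pg_basebackup -h db_server -D /backups/base -X fetch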

I also recommend Fábio Telles' article about backup: link

19.02.2014 / 15:26
0

You can back up from a remote machine that has postgresql-client installed:

pg_dump -h postgres_server dbname > pg_dump.bkp
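
Along the same lines (a minimal sketch; the host names, user and paths are assumptions, not part of the answer), you can keep the dump off the database server's disk entirely, either by running pg_dump on the remote machine or by streaming it over SSH from the database server, compressing in transit:

# run on the remote machine: the dump only ever touches its disk
pg_dump -h postgres_server -U backup_user dbname | gzip > /backups/dbname.sql.gz

# or run on the database server: stream over SSH without writing locally
pg_dump dbname | gzip | ssh backup_host 'cat > /backups/dbname.sql.gz'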
14.02.2014 / 23:39
0

You can set up an NFS mount to the remote machine and back up to it normally, just as you would to a local directory.

How long the backup takes will depend on the speed of the link between your local server and the remote server.

If that link is slower than your local file system (for example, if you can write twice as fast to the local disk as over the network), remember to enable maximum compression with the -Z 9 parameter. Only skip compression if the link is fast enough that the processor time spent compressing is not worth it.
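
As a minimal sketch (the mount point /mnt/backup and the database name are assumptions, not from the answer), a custom-format dump with maximum compression written straight to the NFS mount would look like:

# /mnt/backup is the NFS mount pointing at the remote server
pg_dump -Fc -Z 9 -f /mnt/backup/dbname.dump dbname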

21.03.2014 / 19:25