How to back up a MySQL database daily?

12

Is there any free tool, or even a script (.bat), that I can use to make an automatic backup of my database?

I would like this backup to be generated as a single .sql file.

    
asked by anonymous 31.01.2014 / 14:39

7 answers

10

I have a script that does just that: it backs up MariaDB/MySQL and the entire PostgreSQL cluster, then rsyncs the files to another location.

#!/bin/bash

BASE_DIR=/tmp/backups
TMP_DIR="$BASE_DIR/$(date +%Y%m%d-%H%M%S)-$RANDOM"
DEST_DIR=/mnt/backups/database

if [ -d "$TMP_DIR" ] ; then
        rm -rf "$TMP_DIR"
fi

mkdir -p "$TMP_DIR"
cd "$TMP_DIR" || exit 1

pg_dumpall -U postgres | gzip > postgresql.sql.gz
mysqldump --all-databases -u root | gzip > mysql.sql.gz

if [ -d "$DEST_DIR" ] ; then
        rsync -a "$BASE_DIR/" "$DEST_DIR"
fi

And this script is invoked once a day by cron:

$ crontab -l
@daily /usr/local/bin/backup
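One thing the script above never does is delete anything, so the destination directory grows forever. A minimal pruning sketch, assuming GNU find and the same DEST_DIR as the script (both are adjustable):

```shell
#!/bin/sh
# Delete gzipped dumps older than 7 days from the backup destination.
# DEST_DIR mirrors the script above; override it via the environment.
DEST_DIR="${DEST_DIR:-/mnt/backups/database}"

if [ -d "$DEST_DIR" ]; then
    find "$DEST_DIR" -type f -name '*.sql.gz' -mtime +7 -delete
fi
```

Appending this to the end of the backup script keeps only the last week of dumps.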
    
31.01.2014 / 14:45
2

There are a thousand ways. Take a look at this open-source project, which may solve your problem:

link

Or read this blog post, which suggests 10 different ways:

link

    
31.01.2014 / 14:45
2

I'm not a MySQL expert, but here is my answer; it is a bit more simplistic, and I think more correct as well.

To back up the database while it is online, you must have binary logging enabled. Otherwise you will have integrity problems between tables in the restore process (transactions that were in progress during the dump).

$ mysqldump --all-databases -F > /tmp/backup.mysql-full.sql
$ mysqladmin flush-logs

Then you should copy the binary logs generated during this process. To restore the backup, apply the dump and then the logs:

$ mysql < backup.mysql-full.sql
$ mysqlbinlog mysql.bin.XX1 > dump.XX1.sql
$ mysql -f < dump.XX1.sql

$ mysqlbinlog mysql.bin.XX2 > dump.XX2.sql
$ mysql -f < dump.XX2.sql
# and so on... applying each binary log.
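Applying each binary log by hand gets tedious, so the replay can be scripted. A sketch (binlog names assumed to match the pattern above) that only prints the commands, since a real run needs a live server:

```shell
#!/bin/sh
# Sketch: restore the full dump, then replay every binary log in order
# (pathname expansion sorts the glob, so the logs come out in sequence).
# replay() only echoes; swap the echo for eval "$1" to execute for real.
replay() { echo "+ $1"; }

replay 'mysql < backup.mysql-full.sql'
for log in mysql.bin.*; do
    [ -e "$log" ] || continue          # glob did not match: no binlogs here
    replay "mysqlbinlog $log | mysql -f"
done
```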

Important

I do not remember why, but if I am not mistaken, the user grants are not saved in this dump, so you need to generate them manually too! I use this shell function, which I picked up from some other Stack question.

mygrants()
{
  mysql -B -N $@ -e "SELECT DISTINCT CONCAT(
    'SHOW GRANTS FOR ''', user, '''@''', host, ''';'
    ) AS query FROM mysql.user" | \
  mysql $@ | \
  sed 's/\(GRANT .*\)/\1;/;s/^\(Grants for .*\)/## \1 ##/;/##/{x;p;x;}'
}

Reminder

The dump does not back up your configuration file my.cnf, so be sure to copy it manually!

Alternative

I use the Percona XtraBackup tool (free) to back up the database while it is online.
It does not produce a .sql backup, though; it is essentially a copy of the datafiles + binary logs. I ran backup and restore tests and it worked perfectly. It also performs much better than a dump, since it is a "raw" copy, but the disadvantage is that the backup is not in text format.

For reference, I back up a database used by the Zabbix monitoring tool, with 7 GB of data (the size of the mysql folder where the data resides). The backup takes at most 3 minutes. Considering it is a virtual machine (VMware ESXi) on a desktop, I think that time is wonderful.

Alternative 2

In my scripts I was using xtrabackup, but as my database grew I started to see negative performance effects on the environment during backups. I researched alternative methods and started using snapshots, copying a full image of the database.

The whole process of locking the database, generating the snapshot, and releasing the lock again takes no more than 5 seconds. For this I use the LVM snapshot feature and perform the following steps (very simplified version):

  • In MySQL, I force a flush of data to disk and lock any writes to the datafiles:
    FLUSH TABLES WITH READ LOCK
  • I generate a snapshot of the mount point of the MySQL datafiles:
    lvcreate --snapshot myvg/lvmysql -n lv_snap1 -L 10G
  • I release writes to MySQL again: UNLOCK TABLES
  • I mount the snapshot filesystem:
    mount /dev/myvg/lv_snap1 /mnt/bkp
  • I rsync the files to my backup server.
  • I unmount the filesystem and remove the created snapshot.
  • On the target server I still start MySQL and run a check before sending the copy to the final backup.

Some care is needed with snapshots: a snapshot has a useful "life", which is the space allotted to it. That lifetime is the window you have to copy its contents elsewhere and then destroy the snapshot.
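For reference, the steps above can be strung together in one script. A hedged sketch: the volume names (myvg/lvmysql), mount point, and rsync target are the illustrative ones from the list, and every command is only echoed by default (DRY_RUN=1), since a real run needs root, LVM and a live MySQL. Note that FLUSH TABLES WITH READ LOCK only holds while the client session is open, so the lvcreate is issued from inside the mysql session via the client's system command:

```shell
#!/bin/sh
# Sketch of the LVM snapshot backup steps above. DRY_RUN=1 (the default)
# only prints the commands; set DRY_RUN=0 to really run them.
DRY_RUN="${DRY_RUN:-1}"
run() {
    echo "+ $*"
    [ "$DRY_RUN" = 1 ] || "$@"
}

# 1-3. Flush and lock, snapshot while the lock is held, unlock.
#      (the lock dies with the session, hence the client-side 'system' call)
run mysql -u root -e "FLUSH TABLES WITH READ LOCK; system lvcreate --snapshot myvg/lvmysql -n lv_snap1 -L 10G; UNLOCK TABLES;"
# 4. Mount the snapshot.
run mount /dev/myvg/lv_snap1 /mnt/bkp
# 5. Copy to the backup server (target host/path are placeholders).
run rsync -a /mnt/bkp/ backupserver:/srv/mysql-snap/
# 6. Unmount and destroy the snapshot before its space runs out.
run umount /mnt/bkp
run lvremove -f myvg/lv_snap1
```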

    
31.01.2014 / 18:54
2

There is a (very good, I recommend it) class that does this:

MySqlBackup.Net

It's very simple to understand and use, and you can configure countless things, such as which tables you want to back up, whether you want to back up the settings, and so on.

    
31.01.2014 / 19:24
1

In linux you can use crontab to perform this task daily.

Type:

crontab -e

Add the following line, replacing the values with those of your server:

0 23 * * * mysqldump -h localhost -u usuario -psenha meuBanco > backup_meuBanco_$(date +\%s).sql 2>&1

Save (ESC, :wq, ENTER) and you're done: every day at 23:00 cron will dump your database.
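One caveat with the crontab line: cron treats an unescaped % as a newline, so date formats must be written as \% there. A variation of the entry above that also produces a human-readable, compressed file name (path and credentials are placeholders):

```shell
#!/bin/sh
# Build a dated file name like backup_meuBanco_2014-01-31.sql.gz
DB=meuBanco
FILE="backup_${DB}_$(date +%F).sql.gz"
echo "$FILE"

# Equivalent crontab entry (note the escaped \% and the gzip pipe):
# 0 23 * * * mysqldump -h localhost -u usuario -psenha meuBanco | gzip > /backups/backup_meuBanco_$(date +\%F).sql.gz
```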

    
31.01.2014 / 14:51
1

On Amazon server instances running Linux, I installed the AWS CLI and set up the security keys. From there I created the script below to perform the backup. If you wish, you can also configure retention on the S3 bucket so that old backups are purged automatically.

#!/bin/bash
##########################################
# Back up a MySQL database and send it to S3
# Created by Andre Mesquita
##########################################

#VARIABLES
DATAHORA=$(date +%Y%m%d-%H%M)
FILESDIR=/sites/tmp
TAR=/bin/tar
RM=/bin/rm

IPBANCO="127.0.0.1"
USERBANCO='your user'
SENHABANCO='your password'
NOMEDOBANCO='database name'
NOMEDOBUCKET='meusbackups'

#Enter the backup directory
cd "$FILESDIR" || exit 1

#Dump the database
echo 'Backing up the database...'
mysqldump --host=$IPBANCO --user=$USERBANCO --password=$SENHABANCO --databases $NOMEDOBANCO > ./backup_$NOMEDOBANCO.sql

#Compress the backup
echo 'Compressing the dump...'
$TAR zcf ${NOMEDOBANCO}_database.bkp_$DATAHORA.tar.gz ./backup_$NOMEDOBANCO.sql

echo 'Sending to the backup repository...'
aws s3 cp ./${NOMEDOBANCO}_database.bkp_$DATAHORA.tar.gz s3://$NOMEDOBUCKET/${NOMEDOBANCO}_database.bkp_$DATAHORA.tar.gz

echo 'Removing the local files...'
$RM ${NOMEDOBANCO}_database.bkp_$DATAHORA.tar.gz
$RM backup_$NOMEDOBANCO.sql

echo ' '
echo ' Operation finished. '
echo ' '
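The automatic purge mentioned above can be handled on the S3 side with a lifecycle rule, so the script never has to delete anything remotely. A hypothetical rule (the 30-day expiry and empty prefix are assumptions), applied to the bucket with `aws s3api put-bucket-lifecycle-configuration`:

```json
{
  "Rules": [
    {
      "ID": "expire-old-db-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 30 }
    }
  ]
}
```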

I hope it helps.

    
03.12.2015 / 21:43
0

I do not know if this is exactly what you need, but here is complete code to back up your database: just save it in a PHP file, create a folder called "db_bkp", and do not forget to fill in the database access data. Inside your hosting control panel there will be an option called "cron task" or something similar; just go there, enter the address of this file, and you're done.

<?php
ini_set('display_errors',1); ini_set('display_startup_errors',1); error_reporting(E_ALL);//force PHP to show any error message

backup_tables(DB_HOST,DB_USER,DB_PASS,DB_NAME);//don't forget to fill with your own database access informations

function backup_tables($host,$user,$pass,$name)
{
    $link = mysqli_connect($host,$user,$pass);
    mysqli_select_db($link, $name);
        $tables = array();
        $result = mysqli_query($link, 'SHOW TABLES');
        $i=0;
        while($row = mysqli_fetch_row($result))
        {
            $tables[$i] = $row[0];
            $i++;
        }
    $return = "";
    foreach($tables as $table)
    {
        $result = mysqli_query($link, 'SELECT * FROM '.$table);
        $num_fields = mysqli_num_fields($result);
        $return .= 'DROP TABLE IF EXISTS '.$table.';';
        $row2 = mysqli_fetch_row(mysqli_query($link, 'SHOW CREATE TABLE '.$table));
        $return.= "\n\n".$row2[1].";\n\n";
        // the outer for loop in the original re-ran an already-exhausted
        // result set; a single while over the rows is enough
        while($row = mysqli_fetch_row($result))
        {
            $return.= 'INSERT INTO '.$table.' VALUES(';
            for($j=0; $j < $num_fields; $j++)
            {
                // only escape values that are actually set
                if (isset($row[$j])) { $return.= '"'.addslashes($row[$j]).'"' ; } else { $return.= '""'; }
                if ($j < ($num_fields-1)) { $return.= ','; }
            }
            $return.= ");\n";
        }
        $return.="\n\n\n";
    }
    //save file
    $handle = fopen('db_bkp/db-backup-'.time().'-'.(md5(implode(',',$tables))).'.sql','w+');//Don't forget to create the "db_bkp" folder first
    fwrite($handle, $return);
    fclose($handle);
    echo "Backup completed successfully";//success message
}
?>
    
11.08.2016 / 01:32