MySQL database update

6

I'm working with one database on localhost and another in production. From time to time I need to update my local database to match the one on the server.

I wrote a .sh script that copies all the database files (/var/lib/mysql) so I can do whatever I want locally. However, the process is very time-consuming because of the size of the table files.

I would like more professional suggestions for making this copy; ideally, something that syncs only the modifications.

Note: five other developers work the same way, against the same server's database.

asked by anonymous 18.06.2014 / 13:38

1 answer

4

Actually, the process should be the reverse, and should involve only the structure of the tables. The ideal would be to use some kind of migration system.

However, to get 'hot' data for testing and the like in a development and/or staging environment, I believe the best option is periodic dumps (which tend to become less frequent as the software evolves). The flow would be:

  • Local change to the database: generate a migration
  • After a backup, the migration is run on the production server, with any necessary fixes
  • After the migration, run mysqldump
  • Run the dump on the local server

A mysqldump, even done the way you are working now, is much 'lighter':

    In production: $ mysqldump -uuser -p databasename > dumpYYYYMMDD.sql

    Locally: $ mysql -uuser -p databasenamelocal < dumpYYYYMMDD.sql
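If the dump has to travel over the network, compressing it in transit also helps. A minimal sketch of the two commands above with gzip added; the user, host, and database names are placeholders for your environment, and the MySQL calls are commented out so nothing runs against a real server by accident:

```shell
# Dated dump file name, matching the dumpYYYYMMDD.sql convention above.
DUMP="dump$(date +%Y%m%d).sql.gz"

# In production: dump and compress in one step (uncomment to run):
# mysqldump -uproduser -p databasename | gzip > "$DUMP"

# Locally, after copying the file over: decompress and load:
# gunzip < "$DUMP" | mysql -ulocaluser -p databasenamelocal

echo "$DUMP"
```

With ssh access you could even pipe production's mysqldump straight into the local mysql client and skip the intermediate file entirely.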

    There are other options (many more, I would say) using dedicated tools, including MySQL Workbench and MySQLdiff. But this approach is usually simple and fast, and if you have a really big database in production, I think it is best to synchronize only the structure and write a script that generates random data for proper testing.
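    Structure-only synchronization is what mysqldump's `--no-data` flag is for: it dumps the CREATE statements and skips all rows. A sketch combining it with a trivial seed-data generator; the user and database names, and the `users` table, are hypothetical:

```shell
# Structure only: --no-data dumps CREATE statements, no rows (uncomment to run):
# mysqldump -uproduser -p --no-data databasename > schema.sql
# mysql -ulocaluser -p databasenamelocal < schema.sql

# Generate a few INSERTs with throwaway data for a hypothetical users table:
for i in 1 2 3; do
  echo "INSERT INTO users (id, name) VALUES ($i, 'user_$i');"
done > seed.sql
# mysql -ulocaluser -p databasenamelocal < seed.sql
wc -l < seed.sql
```

    For anything beyond a handful of rows, a small script in your language of choice (or a data-generation library) would replace the loop, but the shape of the flow stays the same: empty schema first, generated data second.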

    20.06.2014 / 15:49