Evolution of the code-controlled data model


In a scenario where the application is deployed to production through a pipeline executed by a CI server, the server performs the following tasks (a sketch of this flow follows the list):

  • Installs the front-end and back-end dependencies and performs other tasks defined in the task runner, such as transforming source files and compressing images and videos.
  • Runs the automated tests.
  • Runs the migrations on the secondary database.
  • Dumps the primary production database.
  • Restores the primary database dump onto the secondary, taking the new data model into account.
  • Updates the files on the application servers.
  • Promotes the secondary database to primary and updates the old primary.

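To make the flow concrete, here is a rough sketch of what the CI job would run, assuming a Python/PostgreSQL stack; every command, host name and playbook below is a placeholder for illustration, not my actual setup:

```python
#!/usr/bin/env python3
"""Sketch of the pipeline above; all commands and hosts are placeholders."""
import subprocess

def run(cmd):
    """Run a shell command and abort the pipeline if it fails."""
    print(f"==> {cmd}")
    subprocess.run(cmd, shell=True, check=True)

# 1. Install dependencies and run the task runner (asset build, compression, etc.)
run("npm ci && npm run build")
run("pip install -r requirements.txt")

# 2. Automated tests gate the rest of the pipeline
run("pytest")

# 3. Apply the migrations to the secondary database (placeholder migration runner)
run("python manage.py migrate --database=secondary")

# 4. Dump the current primary and restore the data onto the migrated secondary
run("pg_dump -Fc -h primary.db.internal app_db -f app_db.dump")
run("pg_restore --data-only -h secondary.db.internal -d app_db app_db.dump")

# 5. Release the new application code, then promote the secondary to primary
run("ansible-playbook deploy_app.yml")         # placeholder deploy step
run("ansible-playbook promote_secondary.yml")  # placeholder failover step
```
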
My question is:

Considering that I am using a relational database, is it true that the database cannot be in operation while the migrations are executed? (That is why I figured I should upgrade a secondary and then promote it to primary.) If I run the migrations on a database that is live and receiving writes, will the database stop executing my write operations, or will it handle them in an orderly manner through its concurrency control?
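
To illustrate the concern, here is a minimal sketch of running a single DDL statement against a live database, assuming PostgreSQL and psycopg2 (the `orders` table and connection string are made up). As far as I understand, the DDL competes for locks with the concurrent writes rather than discarding them, so the practical question is how long writes queue behind it:

```python
"""Sketch (assuming PostgreSQL + psycopg2): DDL on a live database does not
drop concurrent writes; it acquires locks, and writes queue behind it."""
import psycopg2

conn = psycopg2.connect("dbname=app_db host=primary.db.internal")  # placeholder DSN
conn.autocommit = True
with conn.cursor() as cur:
    # Give up quickly instead of making queued writes wait behind us.
    cur.execute("SET lock_timeout = '2s'")
    try:
        # Adding a nullable column is a metadata-only change in PostgreSQL,
        # so the exclusive lock on the table is held only briefly.
        cur.execute("ALTER TABLE orders ADD COLUMN tracking_code text")
    except psycopg2.errors.LockNotAvailable:
        # Ordinary concurrency control: we waited, timed out, and can retry
        # later; the in-flight writes were never lost.
        print("Table busy; retry the migration during a quieter window.")
conn.close()
```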

I'm trying to find a safe but decentralized methodology to evolve the database continuously along with the application, without relying on a team of DBAs: the automated control stays directly on the CI/deploy server, and the developers themselves evolve the schema through migration classes inside the application (sketched below).
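
By "migration classes inside the application" I mean something along these lines; a minimal sketch, assuming PostgreSQL and psycopg2, with an illustrative `schema_migrations` bookkeeping table rather than any specific framework:

```python
"""Sketch of application-owned migration classes that a CI/deploy job could
run unattended; class and table names are illustrative only."""
import psycopg2

class AddTrackingCodeToOrders:
    """One migration = one class; the id identifies it in the bookkeeping table."""
    id = "20161006_add_tracking_code"

    def up(self, cur):
        cur.execute("ALTER TABLE orders ADD COLUMN tracking_code text")

MIGRATIONS = [AddTrackingCodeToOrders()]

def migrate(dsn):
    conn = psycopg2.connect(dsn)
    with conn, conn.cursor() as cur:
        cur.execute("""CREATE TABLE IF NOT EXISTS schema_migrations
                       (id text PRIMARY KEY, applied_at timestamptz DEFAULT now())""")
        for m in MIGRATIONS:
            cur.execute("SELECT 1 FROM schema_migrations WHERE id = %s", (m.id,))
            if cur.fetchone():
                continue  # already applied on this database
            # PostgreSQL DDL is transactional, so the migration and its
            # bookkeeping row commit together.
            m.up(cur)
            cur.execute("INSERT INTO schema_migrations (id) VALUES (%s)", (m.id,))
    conn.close()

if __name__ == "__main__":
    migrate("dbname=app_db host=secondary.db.internal")  # placeholder DSN
```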

I need tips based on the experience of peers who have more hands-on experience with automated continuous evolution.

asked by anonymous 06.10.2016 / 19:46

0 answers