Procedure to store in backup tables?

I have two tables, venda and vendaProduto. Because of the volume of data in these tables, read/write operations are slow. I need to create a procedure that copies the rows from venda and vendaProduto for a given period into backup tables (bkpVenda and bkpVendaProduto). The period will be supplied by the user. How should I write this procedure?

create table venda (
ven_ID   int not null identity (1,1) constraint PKVenda primary key,
cli_ID   int not null,
fun_ID   int not null, 
ven_Data date not null,
constraint fkVenCli foreign key (cli_ID) references cliente(cli_ID),
constraint fkVenFun foreign key (fun_ID) references funcionario(fun_ID))

;

create table vendaProduto (
vpr_ID          int identity (1,1) constraint PKVendaProduto primary key,
ven_ID          int not null,
liv_ID          int not null, 
vpr_Quantidade  int,
vpr_ValorUnit   decimal(15,2),
constraint fkItvVen foreign key (ven_ID) references venda(ven_ID),
constraint fkItvliv foreign key (liv_ID) references livro(liv_ID))

;

asked by anonymous 23.11.2016 / 22:06

2 answers

You can do this as follows:

if object_id('realizar_backup', 'P') is null
begin
  exec('create procedure realizar_backup as');
end;
go

alter procedure realizar_backup @data_inicio datetime,
                                @data_fim    datetime
as
begin
  set nocount on;

  insert into bkpVenda(ven_ID,
                       cli_ID,
                       fun_ID, 
                       ven_Data)
  select v.ven_ID,
         v.cli_ID,
         v.fun_ID, 
         v.ven_Data
    from venda v with (readpast)
   where v.ven_Data between @data_inicio and @data_fim;

  insert into bkpVendaProduto(vpr_ID,
                              ven_ID,
                              liv_ID,
                              vpr_Quantidade,
                              vpr_ValorUnit)
  select vp.vpr_ID,
         vp.ven_ID,
         vp.liv_ID,
         vp.vpr_Quantidade,
         vp.vpr_ValorUnit
    from venda v with (readpast)
         inner join vendaProduto vp with (readpast) on vp.ven_ID = v.ven_ID
   where v.ven_Data between @data_inicio and @data_fim;
end;
go
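Once created, the procedure can be invoked for a given period (the dates below are illustrative):

```sql
-- Copy all sales from November 2016 into the backup tables
exec realizar_backup @data_inicio = '2016-11-01',
                     @data_fim    = '2016-11-30';
```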

However, I would advise writing to log tables at the moment the transactions occur (insert, update, delete) and then querying only by date. But if the goal is just a one-off backup, this procedure solves your problem.
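A minimal sketch of that log-table idea, using an insert trigger on venda that feeds the same bkpVenda table (the trigger name trg_venda_log is an assumption; analogous triggers would be needed for update/delete and for vendaProduto):

```sql
-- Hypothetical trigger: copies each new sale into the log/backup
-- table as soon as it is inserted, instead of backfilling by date.
create trigger trg_venda_log
on venda
after insert
as
begin
  set nocount on;
  insert into bkpVenda (ven_ID, cli_ID, fun_ID, ven_Data)
  select i.ven_ID, i.cli_ID, i.fun_ID, i.ven_Data
    from inserted i;
end;
go
```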

23.11.2016 / 22:28
Before modifying the database structure, with the impact that has on the applications, I suggest investigating what is actually causing the slowness. It may be out-of-date statistics, a query that needs to be optimized, fragmented or missing indexes, and so on. In short, the typical database administration tasks.

After that review, you may conclude that additional indexes are needed. For example, if the venda ("sale") table is frequently queried by period, an index on the ven_Data column is likely to speed those queries up. Besides ven_Data itself, evaluate which covering columns should be included in that index.
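As an illustration only (the index name and the included columns are assumptions, to be chosen from the actual queries), such an index could look like:

```sql
-- Nonclustered index on the sale date; the INCLUDE columns let
-- typical period queries be answered from the index alone.
create nonclustered index idxVendaData
on venda (ven_Data)
include (cli_ID, fun_ID);
```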

In the product-sale table it seems to me that the clustered index should be reevaluated. The vpr_ID column is, as a clustered index key, of little use. It could still be kept as the primary key, but backed by a nonclustered index. Or, better yet, simply declare the column as unique.

Among the candidate keys in the vendaProduto ("product sale") table, the ven_ID column is perhaps the ideal choice for the clustered index key. It can be a simple key, accepting duplicates, or a unique composite key (ven_ID + vpr_ID), without repetitions.
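One possible way to implement that suggestion (the index name is illustrative): declare the primary key as nonclustered and cluster on the composite key instead.

```sql
-- Keep vpr_ID as a nonclustered primary key and cluster on
-- (ven_ID, vpr_ID) so the items of one sale are stored together.
create table vendaProduto (
vpr_ID          int identity (1,1)
                constraint PKVendaProduto primary key nonclustered,
ven_ID          int not null,
liv_ID          int not null,
vpr_Quantidade  int,
vpr_ValorUnit   decimal(15,2),
constraint fkItvVen foreign key (ven_ID) references venda(ven_ID),
constraint fkItvliv foreign key (liv_ID) references livro(liv_ID));

create unique clustered index idxVendaProduto
on vendaProduto (ven_ID, vpr_ID);
```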

The suggestion is that, before thinking about modifying the database structure, find the current bottlenecks.

Regarding the physical space occupied, analyze how much each column takes. For example, the vpr_Quantidade column of the "product sale" table is declared as int: it occupies 4 bytes and accepts values up to a little over 2 billion. Has any book ever sold more than 32,000 copies in a single transaction? If the quantity of each item sold is always below 32,767, you can declare the column as smallint, and it will occupy half the space (2 bytes).

The same reasoning applies to the fun_ID column of the "sale" table, which is also declared as int. Are there more than 32,000 employees in this bookstore?! Of course not. The column could be declared as smallint (up to about 32,000 employees) or even tinyint (up to 255 employees).
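Assuming the existing values fit in the smaller type, the change could be sketched as follows (note that a foreign-key column such as fun_ID must have the same type on both sides of the relationship, so the referenced table would need the same change):

```sql
-- Shrink a column whose value range allows it; verify the existing
-- data first, e.g. with: select max(vpr_Quantidade) from vendaProduto;
alter table vendaProduto alter column vpr_Quantidade smallint;
```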

The "sale" table records the date of the sale, but not the time. If the time of the sale must also be recorded, consider the smalldatetime type. The date type occupies 3 bytes and smalldatetime occupies 4 bytes.

These are details that, in the end, make a difference in performance.

24.11.2016 / 19:42