Delete thousands of records from a table

31

I have a table, for example, pessoa, and I would like to delete all people over 40. With the script below I do this:

DELETE FROM pessoa WHERE pessoa.idade > 40;

The problem is that the table has more than 2 million records of people over the age of 40, and since the database is in production, running this delete will lock things up for all users.

Is there a more efficient way to solve this problem?

    
asked by David on 14.12.2016 at 19:42

4 answers

18

You should use parallel operations; if you have a good processor you can run it as follows:

ALTER SESSION ENABLE PARALLEL DML;
DELETE /*+ parallel(pessoa, 20) */
  FROM  pessoa
  WHERE idade > 40;
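
One detail worth adding to this answer: after a parallel DML statement, Oracle requires a COMMIT (or ROLLBACK) before the same session can read the modified table again; otherwise you get ORA-12838. So the session would typically finish like this:

```sql
-- a parallel DML statement must be committed before the same
-- session can query the modified table again (ORA-12838)
COMMIT;

-- optionally return the session to its default behavior
ALTER SESSION DISABLE PARALLEL DML;
```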

The Oracle site has documentation on parallel execution, which is a good reference, along with material on how to delete large amounts of data.

For what it's worth, I work with very large databases, with more than 100 million records. A delete like the one you want to do should not take more than 2 or 3 minutes if your server is good.

    
answered by 19.12.2016 / 01:40
7

Instead of running a single delete over all the records, you can write a PL/SQL block that deletes them little by little, committing every x records; this prevents the delete from "locking" other users.

declare
  cursor c is select p.rowid r from pessoa p where p.idade > 40;
  c_commit number := 0;
  l_commit number := 10000;
begin
  for t in c
  loop
    c_commit := c_commit + 1;
    delete from pessoa p where p.rowid = t.r;

    -- commit every l_commit (10000) deleted rows
    if mod( c_commit, l_commit ) = 0 then
      commit;
    end if;
  end loop;
  commit;
end;
/
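
A variation on the same idea, without an explicit cursor, is to delete in chunks with ROWNUM and loop until nothing is left. This is a sketch; the batch size of 10000 is arbitrary:

```sql
begin
  loop
    delete from pessoa
     where idade > 40
       and rownum <= 10000;       -- delete at most 10000 rows per pass
    exit when sql%rowcount = 0;   -- stop when no rows matched
    commit;
  end loop;
  commit;
end;
/
```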
    
answered by 21.12.2016 в 14:01
7

I'm not an Oracle expert, but you can also create a table with the records you want to keep.

Something like:

create table pessoa_new as select * from pessoa where idade <= 40;

After that, perform a truncate on the original table:

truncate table pessoa;
  

Note that truncate is not logged in Oracle; that is, you will not be able to roll it back. Once done, it cannot be undone.

After that, copy the data back into the original table:

set transaction use rollback segment HUGE_RBS;

insert into pessoa select * from pessoa_new;

Or if you prefer, change the old and new table name:

alter table pessoa rename to pessoa_old;
alter table pessoa_new rename to pessoa;

In any case, remember to rebuild the indexes, and if you choose to rename the tables, consider recreating constraints, triggers, etc.
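
For example, rebuilding an index after the bulk load might look like this (the index name idx_pessoa_idade is hypothetical; use your own):

```sql
-- idx_pessoa_idade is a hypothetical index name
alter index idx_pessoa_idade rebuild;
```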

    
answered by 21.12.2016 в 13:00
1

The Aron Linhares solution works fine; I would just add the use of BULK COLLECT to improve performance.

DECLARE
  CURSOR C_CURSOR IS SELECT P.ROWID AS CHAVE FROM PESSOA P WHERE P.IDADE > 40;
  TYPE TYPE_CURSOR IS TABLE OF C_CURSOR%ROWTYPE INDEX BY BINARY_INTEGER;
  R_CURSOR TYPE_CURSOR;
BEGIN
  OPEN C_CURSOR;
  LOOP
    -- fill memory 1000 records at a time
    FETCH C_CURSOR BULK COLLECT INTO R_CURSOR LIMIT 1000;
    EXIT WHEN R_CURSOR.COUNT = 0;
    FOR I IN 1 .. R_CURSOR.COUNT
    LOOP
      DELETE FROM PESSOA P WHERE P.ROWID = R_CURSOR(I).CHAVE;
    END LOOP;
    -- commit every 1000 records
    COMMIT;
  END LOOP;
  CLOSE C_CURSOR;
END;
/
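Going one step further, the row-by-row DELETE inside the inner FOR loop can itself be replaced by a single FORALL statement per batch, which sends all 1000 deletes to the SQL engine in one round trip. A sketch following the same structure as above:

```sql
DECLARE
  CURSOR C_CURSOR IS SELECT P.ROWID AS CHAVE FROM PESSOA P WHERE P.IDADE > 40;
  TYPE TYPE_CURSOR IS TABLE OF C_CURSOR%ROWTYPE INDEX BY BINARY_INTEGER;
  R_CURSOR TYPE_CURSOR;
BEGIN
  OPEN C_CURSOR;
  LOOP
    FETCH C_CURSOR BULK COLLECT INTO R_CURSOR LIMIT 1000;
    EXIT WHEN R_CURSOR.COUNT = 0;
    -- one bulk-bound DELETE instead of 1000 individual ones
    FORALL I IN 1 .. R_CURSOR.COUNT
      DELETE FROM PESSOA P WHERE P.ROWID = R_CURSOR(I).CHAVE;
    COMMIT;
  END LOOP;
  CLOSE C_CURSOR;
END;
/
```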

In the Oracle documentation on bulk binding you will find more details on how to work with large volumes of data in PL/SQL.

    
answered by 23.01.2017 в 13:23