PostgreSQL autoincrement skips a value when the insert is made by a function


I have the following function in PostgreSQL:

CREATE OR REPLACE FUNCTION ve.geraHistorico()
RETURNS VOID AS 
$$
DECLARE
    i INTEGER := 1;
    id INTEGER;
    dataCad TEXT;
    dataFec TEXT;
    qtd INTEGER := (SELECT COUNT(id) FROM ve.vendas);
BEGIN
    WHILE i <= qtd LOOP
        SELECT id, datacadastro, datafechamento INTO id, dataCad, dataFec FROM ve.vendas WHERE id = i;

        IF dataFec IS NULL THEN
            INSERT INTO ve.historico(id, data)
            VALUES(id, GENERATE_SERIES(dataCad::DATE, CURRENT_DATE::DATE, '1 day'::interval));
        ELSE
            INSERT INTO ve.historico(id, data)
            VALUES(id, GENERATE_SERIES(dataCad::DATE, dataFec::DATE, '1 day'::interval));
        END IF;
        i = i + 1;
    END LOOP;
END
$$ 
LANGUAGE plpgsql;

The ve.historico table consists of:

CREATE TABLE ve.historico
(
    idAI SERIAL PRIMARY KEY,
    id INTEGER,
    data DATE
);

The problem happens when I run the ve.geraHistorico() function. The autoincrement should create a unique, sequential id for each inserted row, but whenever the sale id changes inside the function, the autoincrement skips one value, as follows:

The first column (gray) is the pgAdmin row counter, the second column is the auto-increment id, the third is the sale id, and the fourth column is the dates I am generating. If you look closely, you will see that on row 31 the auto-increment value should also be 31, but it jumped to 32, exactly where the sale id changes.

Why does this happen? Is it a problem in the function?

    
asked by anonymous 04.07.2018 / 19:42

1 answer


This is completely normal.

When you create a column of type SERIAL or BIGSERIAL in a table, a SEQUENCE is implicitly created to handle the auto-increment control of that column.

Columns of type SERIAL and BIGSERIAL get a DEFAULT value that consists of a call to the function nextval() .
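For illustration, a SERIAL column is roughly equivalent to the following sketch (PostgreSQL actually generates the sequence name itself, typically `<table>_<column>_seq`, and marks it as owned by the column):

```sql
-- Roughly what "idAI SERIAL PRIMARY KEY" expands to behind the scenes:
CREATE SEQUENCE ve.historico_idai_seq;

CREATE TABLE ve.historico
(
    idAI INTEGER NOT NULL DEFAULT nextval('ve.historico_idai_seq'),
    id   INTEGER,
    data DATE,
    PRIMARY KEY (idAI)
);

ALTER SEQUENCE ve.historico_idai_seq OWNED BY ve.historico.idAI;
```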

In a real scenario, several concurrent transactions may be calling nextval() on the same SEQUENCE simultaneously; some will be committed and others will be discarded.

Even the discarded transactions (those that suffered a ROLLBACK ) do not decrement the SEQUENCE , so that sequential number is lost forever.

The gaps you are seeing were actually produced by transactions that called nextval() but suffered a ROLLBACK at some point.

The SEQUENCE is never decremented , not even if the transaction that incremented it was rolled back (and making this decrement manually is not a good idea!).

The PostgreSQL documentation, in the section on sequence manipulation functions, explains this behavior under nextval , see:

  

nextval

     

Advance the sequence object to its next value and return that value.   This is done atomically: even if multiple sessions execute    nextval concurrently, each will safely receive a distinct sequence   value.

     

[...]

     

Important

     

To avoid blocking concurrent transactions that obtain numbers from the   same sequence, a nextval operation is never rolled back; that is,   once a value has been fetched it is considered used and will not be   returned again. This is true even if the surrounding transaction later   aborts, or if the calling query ends up not using the value. For   example an INSERT with an ON CONFLICT clause will compute the   to-be-inserted tuple, including doing any required nextval calls,   before detecting any conflict that would cause it to follow the ON CONFLICT    rule instead. Such cases will leave unused "holes" in the   sequence of assigned values. Thus, PostgreSQL sequence objects cannot   be used to obtain "gapless" sequences.
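That last point from the documentation can be sketched with a hypothetical table that has a unique column (the table name and values here are illustrative, not from the question):

```sql
-- Hypothetical table, just to illustrate the ON CONFLICT case described above.
CREATE TABLE exemplo
(
  id   BIGSERIAL PRIMARY KEY,
  code TEXT UNIQUE
);

INSERT INTO exemplo (code) VALUES ('a');                         -- gets id 1
INSERT INTO exemplo (code) VALUES ('a') ON CONFLICT DO NOTHING;  -- no row inserted, but nextval() was still called
INSERT INTO exemplo (code) VALUES ('b');                         -- gets id 3: the value 2 became a "hole"
```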

Have you thought about re-evaluating the real need for a gapless sequence column?

Consider the following:

CREATE TABLE tb_historico
(
  id BIGSERIAL PRIMARY KEY,
  valor INTEGER,
  datahora TIMESTAMP DEFAULT NOW()
);


BEGIN; INSERT INTO tb_historico ( valor ) VALUES ( 100 ); ROLLBACK;
BEGIN; INSERT INTO tb_historico ( valor ) VALUES ( 100 ); ROLLBACK;
BEGIN; INSERT INTO tb_historico ( valor ) VALUES ( 100 ); COMMIT;
BEGIN; INSERT INTO tb_historico ( valor ) VALUES ( 100 ); ROLLBACK;
BEGIN; INSERT INTO tb_historico ( valor ) VALUES ( 100 ); COMMIT;
BEGIN; INSERT INTO tb_historico ( valor ) VALUES ( 100 ); ROLLBACK;
BEGIN; INSERT INTO tb_historico ( valor ) VALUES ( 100 ); COMMIT;
BEGIN; INSERT INTO tb_historico ( valor ) VALUES ( 100 ); ROLLBACK;
BEGIN; INSERT INTO tb_historico ( valor ) VALUES ( 100 ); COMMIT;
BEGIN; INSERT INTO tb_historico ( valor ) VALUES ( 100 ); ROLLBACK;
BEGIN; INSERT INTO tb_historico ( valor ) VALUES ( 100 ); COMMIT;

You can use the row_number() window function to enumerate your records, see:

SELECT
    row_number() OVER (ORDER BY id) AS id_gapless,
    id,
    valor,
    datahora
FROM
    tb_historico;

Output:

| id_gapless | id | valor |                    datahora |
|------------|----|-------|-----------------------------|
|          1 |  3 |   100 | 2018-07-04T19:07:11.651281Z |
|          2 |  5 |   100 | 2018-07-04T19:07:11.654413Z |
|          3 |  7 |   100 | 2018-07-04T19:07:11.657451Z |
|          4 |  9 |   100 | 2018-07-04T19:07:11.660584Z |
|          5 | 11 |   100 |  2018-07-04T19:07:11.66475Z |

See working at Ideone.com
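As a side note, the row-by-row loop in the question's function can also be written as a single set-based INSERT , which is faster and avoids the counter variable entirely. A sketch, assuming the ve.vendas(id, datacadastro, datafechamento) columns shown in the question:

```sql
-- Set-based equivalent of ve.geraHistorico(): one row per sale per day,
-- using the closing date when present, otherwise the current date.
INSERT INTO ve.historico (id, data)
SELECT
    v.id,
    d::DATE
FROM
    ve.vendas v,
    GENERATE_SERIES(v.datacadastro::DATE,
                    COALESCE(v.datafechamento::DATE, CURRENT_DATE),
                    '1 day'::interval) AS d;
```

The implicit LATERAL join lets GENERATE_SERIES reference the columns of each ve.vendas row.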

    
04.07.2018 / 20:41