It may be. It depends! It is not the same in all database management systems.
In a sense it is, but that does not mean it has to physically work that way. It is an implementation detail.
In databases that use MVCC it is like this (a lot of DBs work this way or use a similar mechanism): the engine never overwrites what exists, it always creates a new version, which greatly simplifies concurrency and isolation.
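As a concrete illustration (assuming PostgreSQL, one of the MVCC databases; ctid is the tuple's physical location and xmin the transaction that created it, both real PostgreSQL system columns), an UPDATE creates a new row version instead of overwriting the old one:

    CREATE TABLE t (id int PRIMARY KEY, name text);
    INSERT INTO t VALUES (1, 'ana');

    SELECT ctid, xmin, * FROM t;            -- e.g. ctid (0,1)
    UPDATE t SET name = 'bia' WHERE id = 1;
    SELECT ctid, xmin, * FROM t;            -- ctid changed, e.g. (0,2): a new
                                            -- version was written; the old
                                            -- tuple is left for VACUUM to clean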
And that may be better suited to handle changes in the key. After all, the data is usually organized in trees of keys and values (the value is usually a tuple, the term officially used in the field), and if the key changes, so does the location in the tree where the tuple belongs, so it is deleted from where it is and inserted in the new place. That may happen in the primary key tree or in a secondary key tree.
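A minimal sketch of that idea, using a hypothetical people table (this shows the logical effect, not the internals of any particular engine):

    -- the primary key index is a tree of key -> tuple entries
    CREATE TABLE people (id int PRIMARY KEY, name text);
    INSERT INTO people VALUES (7, 'ana');

    -- the key changes, so the tuple no longer belongs where it sits in the
    -- tree: the entry under key 7 is removed and one is inserted under key 42
    UPDATE people SET id = 42 WHERE id = 7;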
But there are schemes where this is not quite true. I know some databases that do not work like that, but they have other problems. The deletion itself is complicated and often does not even happen online: the record to be deleted is just marked as invalid, and at some point in the future a batch cleanup sweeps everything that is marked, a kind of garbage collection.
But in secondary indexes I think it is always like this. I do not see how it could be otherwise under normal conditions. Not that it is impossible, just that it would be weird and does not seem like a very realistic solution to me. Understand that what everyone calls the normal table is usually the primary key itself; contrary to what many people think, it too is usually an index.
But in general it is an INSERT followed by a DELETE, if only to make atomicity easier to handle. Only when the insertion is complete and has everything it needs can the old version be considered deleted.
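That ordering is what keeps failure handling simple: if the transaction does not complete, the new version is simply discarded and the old one was never really deleted. A sketch, again assuming PostgreSQL:

    CREATE TABLE t2 (id int PRIMARY KEY, name text);
    INSERT INTO t2 VALUES (1, 'ana');

    BEGIN;
    UPDATE t2 SET id = 2 WHERE id = 1;  -- new version inserted
    ROLLBACK;                           -- insertion abandoned, so the old
                                        -- version is never considered deleted
    SELECT * FROM t2;                   -- still returns id = 1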
In the case of a trigger, the new row is inserted first, and only after the operation ends is the old one deleted, so both are visible. This is one more reason for them to be separate operations.
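A sketch of that visibility (PostgreSQL syntax assumed; in a row-level UPDATE trigger both the OLD and the NEW tuple are exposed):

    CREATE TABLE t3 (id int PRIMARY KEY, name text);
    INSERT INTO t3 VALUES (1, 'ana');

    CREATE FUNCTION show_both() RETURNS trigger AS $$
    BEGIN
        -- both versions exist at this point and both are visible here
        RAISE NOTICE 'old: %, new: %', OLD, NEW;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER t3_upd AFTER UPDATE ON t3
        FOR EACH ROW EXECUTE FUNCTION show_both();

    UPDATE t3 SET name = 'bia' WHERE id = 1;  -- the NOTICE prints both tuples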
See more about indexes: