How to Delete Data From a DB2 Table and Release Disk Space


8 Responses

  1. Norm Wong says:

    I wrote this little sp for deleting based on committing every x rows. This avoids problems with active log full conditions and lock escalations when millions of rows are being deleted.

    CREATE PROCEDURE DELETE_TABLE_COMMIT_COUNT (
        IN TABSCHEMA VARCHAR(128)
        ,IN TABNAME VARCHAR(128)
        ,IN DELETE_PREDICATES VARCHAR(256)
        ,IN COMMIT_COUNT BIGINT
    )
    DYNAMIC RESULT SETS 1
    P1: BEGIN
        DECLARE AT_END INTEGER DEFAULT 0;
        DECLARE del_stmt VARCHAR(512);

        -- Fires when a delete pass touches no rows (SQLSTATE 02000)
        DECLARE CONTINUE HANDLER FOR NOT FOUND
            SET AT_END = 1;

        SET del_stmt = 'DELETE FROM ( SELECT 1 FROM "'||TABSCHEMA||'"."'||TABNAME||'" WHERE '||
            DELETE_PREDICATES||' FETCH FIRST '||CHAR(COMMIT_COUNT)||' ROWS ONLY)';

        PREPARE prepared_delete FROM del_stmt;

        delete_loop:
        LOOP
            EXECUTE prepared_delete;
            COMMIT;
            IF AT_END = 1 THEN
                LEAVE delete_loop;
            END IF;
        END LOOP delete_loop;

    END P1
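
    The commit-every-N-rows pattern above can be sketched outside DB2 as well. Below is an illustration only, using Python's sqlite3 as a stand-in; the table, predicate, and batch size are made up for the sketch, and a rowid subselect plays the role of DB2's FETCH FIRST n ROWS ONLY on the delete target.

```python
import sqlite3

# Hypothetical data: sqlite3 stands in for DB2. The idea matches the
# stored procedure above: delete at most COMMIT_COUNT rows per
# transaction, commit, and loop until a pass deletes nothing.
COMMIT_COUNT = 3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("closed",)] * 10 + [("open",)] * 2)
conn.commit()

deleted_total = 0
while True:
    # Delete one batch of at most COMMIT_COUNT matching rows.
    cur = conn.execute(
        "DELETE FROM orders WHERE rowid IN "
        "(SELECT rowid FROM orders WHERE status = 'closed' LIMIT ?)",
        (COMMIT_COUNT,))
    conn.commit()  # release log space / locks after each batch
    if cur.rowcount == 0:  # nothing left (DB2 raises SQLSTATE 02000 here)
        break
    deleted_total += cur.rowcount

print(deleted_total)  # 10 'closed' rows deleted; 2 'open' rows remain
```

    The small commit interval keeps each unit of work short, which is exactly what avoids the log-full and lock-escalation problems described above.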

    • sanchayan sen says:

      Hi

      We are using a similar query to delete data from a DB2 table; we delete about half a million records from it every day. What would be the ideal commit point in such cases?

      While this delete runs, there are 7-8 other sessions running in parallel.

      • Ember Crooks says:

        I think that you simply have to experiment to find the point that is ideal for you in terms of speed. I don’t think there is a solid answer to your question.

  2. Frederik Engelen says:

    Hello Ember,

    Nice post. Regarding the deletes, I advise you to look up Serge Rielau’s SQL On Fire presentation; there’s some good info there.

    Regarding the runstats, why do you perform one _before_ the reorg?

    Kind regards

    • Ember Crooks says:

      I guess I’m in the habit of the before-runstats. For normal regular maintenance, the sequence would be: runstats, reorgchk, reorg only the tables that need it, runstats on the reorged tables, then rebind. But if you take the reorgchk out of the mix (as in this scenario), you might not need the before runstats either.
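
      For reference, that maintenance sequence might look like the following from the DB2 command line. This is a sketch only, not taken from the post; the database, schema, and table names are placeholders, and the runstats options are one common choice among many.

```shell
# Placeholders: MYDB, MYSCHEMA.MYTABLE. Sequence sketched above:
# runstats, reorgchk, reorg, post-reorg runstats, then rebind.
db2 connect to MYDB
db2 "runstats on table MYSCHEMA.MYTABLE with distribution and detailed indexes all"
db2 "reorgchk current statistics on table MYSCHEMA.MYTABLE"
db2 "reorg table MYSCHEMA.MYTABLE"
db2 "runstats on table MYSCHEMA.MYTABLE with distribution and detailed indexes all"
db2rbind MYDB -l /tmp/rbind.log all
```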

  3. Ian Bjorhovde says:

    To save you some time from Frederik’s post, Serge suggested doing:

    delete from (select * from YOUR_TABLE fetch first 10000 rows only)

    Add predicates to the select statement and change the fetch first clause to taste. It’s pretty easy to wrap this into a loop that repeats until you get SQLSTATE 02000.

    • Ember Crooks says:

      If anyone wants to know why I blog, this is one reason. It makes me question my own long-standing practices and find new and better ways. So much faster this way – thanks, Ian!

  4. Kevin Ovenden says:

    I have refactored DBClean. I use session.InStorageTables to store the primary keys of the selected ‘eXpired data’, then spin through the session table of PKs with a prepared statement, committing per LUW. Nice and fast, with central logging and limited concurrency issues.
