Loading Data into DB2 BLU Tables


14 Responses

  1. Brenda Boshoff says:

    I can only agree with your findings. In our case we have settled on load from cursor for BLU, as we load about 200 GB from z/OS DB2 v10.1 row-based tables to BLU 10.5.4 every night. Not much, but nevertheless. The loads to BLU v10.5.4 are on average a third faster than the load times (also using load from cursor) to the old row-based v9.7.5 database pre-BLU. The utility heap size was the biggest challenge, as you found as well. Ours is now set to a “trial and horror” minimum with automatic on. I fully expected the load from cursor to take longer loading into BLU with the extra phase, but this has not been our experience at all. All the load jobs run faster. We have kept the primary keys.
    I need a t-shirt that says “I ❤️ BLU”

  2. Tomas says:

    Load from cursor from remote database without federation? Mind = blown. Funny how the syntax diagrams for DECLARE CURSOR make no mention of this functionality… I’ve gone through every infocenter from 9.1 thru 10.5. Not a peep. This is some seriously useful functionality, why hide it?

    “I learned something today.” My favorite kind of day – thanks Ember!
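    For reference, the undocumented remote form looks roughly like this — a sketch with made-up database, credential, and table names; the source database still has to be cataloged on the instance running the load:

    ```shell
    # Hypothetical sketch: declare a cursor against a remote database
    # (no federation setup) and load from it. SRCDB, myuser, mypassword,
    # and the table names are all placeholders.
    db2 "DECLARE C1 CURSOR DATABASE SRCDB USER myuser USING mypassword \
         FOR SELECT * FROM SRCSCHEMA.MYTABLE WITH UR"
    db2 "LOAD FROM C1 OF CURSOR MESSAGES mytable.msg \
         REPLACE INTO TGTSCHEMA.MYTABLE NONRECOVERABLE"
    ```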

  3. Dmitry says:

    Hi, Ember

    Thanks for sharing your BLU experience.
    Did you try IMPORT/INSERT into column-organized tables?

    On my 10.5.4 environment (IBM says INSERT performance is much improved in this release), I see that IMPORT is 15 times slower than LOAD regardless of the COMMITCOUNT value (this was a 17M-row table import/load test from a file). Also, compression after INSERTs into a column-organized table is poor (8 times worse compared to LOAD).

    Best regards, Dmitry

    • Ember Crooks says:

      I did not try IMPORT/INSERT for two reasons. First, I know them to be slower than nonrecoverable LOAD in just about every situation. Second, the ONLY way you can build a compression dictionary for column-organized tables is through the LOAD command – you cannot do it with reorg. So if you must use IMPORT/INSERT, you must first attempt to generate a representative compression dictionary with a LOAD command using the RESETDICTIONARYONLY option to not actually load any data into the table.
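      A sketch of what that dictionary-priming LOAD might look like — the table and file names here are made up:

      ```shell
      # Build the column compression dictionary from representative data
      # without actually keeping any rows in the table.
      db2 "LOAD FROM /tmp/sales_sample.del OF DEL \
           REPLACE RESETDICTIONARYONLY \
           INTO MYSCHEMA.SALES NONRECOVERABLE"

      # Subsequent IMPORT/INSERT activity then compresses against that
      # dictionary instead of an empty one.
      db2 "IMPORT FROM /tmp/sales_full.del OF DEL \
           COMMITCOUNT 10000 INSERT INTO MYSCHEMA.SALES"
      ```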

  4. Norman Wong says:

    You can load with keepdictionary to avoid the analyze phase. If you have a very large table to load, load a representative sample first, then reload with the keepdictionary option. This can cut a huge amount of time.

    Check your percent column compressed to see if the sample is representative.
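    Norman's two-pass approach might look roughly like this (file and table names are hypothetical; PCTENCODED in SYSCAT.COLUMNS is one place to check how well column values were encoded):

    ```shell
    # Pass 1: load a representative sample so the ANALYZE phase builds
    # the compression dictionary from it.
    db2 "LOAD FROM /tmp/bigtable_sample.del OF DEL \
         REPLACE INTO MYSCHEMA.BIGTABLE NONRECOVERABLE"

    # Pass 2: reload the full data, keeping the dictionary from pass 1
    # and skipping the ANALYZE phase.
    db2 "LOAD FROM /tmp/bigtable_full.del OF DEL \
         REPLACE KEEPDICTIONARY INTO MYSCHEMA.BIGTABLE NONRECOVERABLE"

    # Check whether the sample was representative.
    db2 "SELECT COLNAME, PCTENCODED FROM SYSCAT.COLUMNS \
         WHERE TABSCHEMA='MYSCHEMA' AND TABNAME='BIGTABLE'"
    ```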

    • Ember Crooks says:

      The scenarios in which I would use this method are:

      1. The data for a table is over 128 GB
      2. For some reason I really cannot take the time for a full analyze

      I really prefer to use the full data whenever possible, because getting the compression right is so critical to future performance.

  5. Kulvant says:

    Hi Ember,

    We are offloading data into files, but during offloading we are facing the issue below.

    The data stored in the database:

    The data offloaded to a file:

    The column is of datatype DOUBLE.

    Could you please help here?

    • Ember Crooks says:

      I don’t know the answer, nor can I find it with quick searching. I would try a different export format (IXF if you’re using DEL, for example) and see if that changes anything. I would also verify that the query doesn’t constrain the value returned in any way. It does seem like the sort of thing a PMR would be able to resolve rather easily.
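      To illustrate, comparing the two export formats on the same column might look like this (table, column, and file names are hypothetical):

      ```shell
      # Export the same DOUBLE column in DEL and IXF formats to see
      # whether the file format is what alters the values.
      db2 "EXPORT TO /tmp/mytable.del OF DEL \
           SELECT dbl_col FROM MYSCHEMA.MYTABLE"
      db2 "EXPORT TO /tmp/mytable.ixf OF IXF \
           SELECT dbl_col FROM MYSCHEMA.MYTABLE"
      ```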

  6. Luis Reina says:

    Hi Ember,
    Great post, thanks! There is one more possibility that is even faster than the LOAD from cursor: you can use db2hpu to get the data out on the source machine and, instead of sending the output to a file (if BLU is on a different machine), use ssh to send it over the network to a pipe and then LOAD from the pipe on the BLU machine.
    I will write a post with an example and, if you want, I can share it here.
    By the way, I got to know you at IOD last year — you were so kind to come to the presentation I did with Raul Baron.
    Luis Reina

    • Ember Crooks says:

      True. HPU is an additionally licensed feature, so I don’t tend to think of it. I found traditional load from pipe slower than load from cursor. I would be happy to see what you write up on it.
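      For comparison, the traditional pipe-based load looks roughly like this — hostnames and object names are made up, and in Luis's faster variant db2hpu would replace the EXPORT step:

      ```shell
      # On the target (BLU) machine: create a named pipe and start a
      # load that reads from it, in the background.
      mkfifo /tmp/mytable.pipe
      db2 "LOAD FROM /tmp/mytable.pipe OF DEL \
           REPLACE INTO TGTSCHEMA.MYTABLE NONRECOVERABLE" &

      # On the source machine: unload into a local pipe and push it
      # over ssh into the pipe on the target.
      mkfifo /tmp/unload.pipe
      db2 "EXPORT TO /tmp/unload.pipe OF DEL \
           SELECT * FROM SRCSCHEMA.MYTABLE" &
      cat /tmp/unload.pipe | ssh bluhost 'cat > /tmp/mytable.pipe'
      ```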

  7. Ram says:

    Hi Ember,

    Thanks for sharing insights in a very easy and understandable way. Your articles have been very useful for me. I am working on a script to automate DECLARE and LOAD FROM CURSOR, since the load has to be done frequently. I have created the part of the script that takes the list of both source and target tables, but I am unable to proceed to the DECLARE CURSOR and LOAD FROM CURSOR part. This is what I have done so far:

    # cursor_load.ksh
    # parm1: Database name required
    # parm2: Source schema name required
    # parm3: Target schema name required
    . ~hipdvins/sqllib/db2profile


    if [[ $# -lt 3 ]]; then
        echo "Usage: cursor_load.ksh <dbname> <source schema> <target schema>"
        exit 1
    fi

    DBNAME=$1
    SRCNAME=$2
    TARSNAME=$3

    db2 connect to $DBNAME
    db2 -x "select rtrim(tabschema) || '.' || tabname from syscat.tables where tabschema = '$SRCNAME' and type = 'T' and OWNER = '$OWN'" > $SRCNAME.tables
    db2 -x "select tabname from syscat.tables where tabschema = '$TARSNAME' and type = 'T'" > $TARSNAME.tables

    Could you please guide me on how to generate a DECLARE CURSOR statement followed by a LOAD FROM CURSOR statement?

    • Ember Crooks says:

      Is the issue associating the source name with the target name? Is the target table name ever going to be different than the source table name? Is it just the schema name that changes? If the answer is that you’re just trying to copy all tables from one schema to another, and the source and target table names are the same, just with different schemas, I think this would work:
      while read tab
      do
        db2 "DECLARE C1 CURSOR DATABASE $DBNAME USER <user> USING <password> FOR select * from ${SRCNAME}.${tab} with ur"
        db2 "LOAD FROM C1 OF CURSOR MESSAGES $tab.msg REPLACE INTO ${TARSNAME}.$tab NONRECOVERABLE"
      done < $TARSNAME.tables

      The changes to this sample code would be replacing <user> and <password> with variables or values that make sense (if it’s a remote database for the source). You might also have to use “copy yes to <directory>” if this is an HADR environment (where <directory> is on a shared filesystem).

      Does that help?
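      One way to get unstuck is to generate the statements into a CLP file first and review it before running anything. A minimal sketch under that assumption — schema and table names are made up, and for a remote source you would add the DATABASE/USER/USING clause to the DECLARE as in the loop above:

      ```shell
      # Hypothetical sketch: build a CLP script of DECLARE/LOAD pairs
      # from the table list, rather than executing each immediately.
      DBNAME=SAMPLEDB
      SRCNAME=SRCSCHEMA
      TARSNAME=TGTSCHEMA

      # Stand-in for the list produced by the syscat.tables query.
      printf 'CUSTOMER\nORDERS\n' > "$TARSNAME.tables"

      : > load_tables.clp
      while read -r tab; do
        echo "DECLARE C1 CURSOR FOR SELECT * FROM $SRCNAME.$tab WITH UR;" >> load_tables.clp
        echo "LOAD FROM C1 OF CURSOR MESSAGES $tab.msg REPLACE INTO $TARSNAME.$tab NONRECOVERABLE;" >> load_tables.clp
      done < "$TARSNAME.tables"

      cat load_tables.clp
      ```

      The generated file can then be run in one session with `db2 connect to $DBNAME` followed by `db2 -tvf load_tables.clp`.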

  1. January 27, 2015

    […] Loading Data into DB2 BLU Tables […]
