Attack of the Blob! (Blobs in a Transaction Processing environment)

12 Responses

  1. Steve Ling says:

    I’m a WebSphere user; WAS connects to a DB2 database and accesses a session table:
    CREATE TABLE "SESSIONS" (
    "ID" VARCHAR(128) NOT NULL ,
    "PROPID" VARCHAR(128) NOT NULL ,
    "APPNAME" VARCHAR(128) NOT NULL ,
    "LISTENERCNT" SMALLINT ,
    "LASTACCESS" BIGINT ,
    "CREATIONTIME" BIGINT ,
    "MAXINACTIVETIME" INTEGER ,
    "USERNAME" VARCHAR(256) ,
    "SMALL" VARCHAR(31794) FOR BIT DATA ,
    "MEDIUM" LONG VARCHAR FOR BIT DATA ,
    "LARGE" BLOB(2097152) LOGGED NOT COMPACT )

    Can I change the LARGE column to NOT LOGGED and avoid the log writes?
    Also, is it good practice to create this table using DMS tablespaces for data, index, and long data?

    • mkrafick says:

      When you change to NOT LOGGED, the BLOB data will avoid a log write, but the drawback is that the BLOB data becomes non-recoverable. If you had to restore and roll forward, your regular data would be recovered, but the BLOB data would be restored as binary zeros. I went into depth on this in my DB2 Night Show presentation; I have a few slides on how to implement it and the effect of turning it on. Check it out – DB2 Night Show Replay
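      As a minimal sketch, assuming DB2 LUW syntax and the SESSIONS table above (verify the exact ALTER options against your DB2 version):

      ```sql
      -- Assumption: DB2 LUW, where LOB logging options can be altered in place.
      -- Caution: with NOT LOGGED, the BLOB content cannot be recovered by
      -- roll-forward; it is restored as binary zeros.
      ALTER TABLE SESSIONS
        ALTER COLUMN "LARGE" SET NOT LOGGED;
      COMMIT;
      ```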

      Also, be careful with LONG VARCHAR FOR BIT DATA: it has been deprecated since v9.5, and it is treated as a LOB – so it can’t be pulled into memory and causes you to spin disk. That means your MEDIUM and LARGE columns are both bypassing memory and going to disk (slow). If you convert LARGE to an INLINE BLOB, you can pull those rows into memory but still spin disk on MEDIUM (better, but still slow). If you convert both MEDIUM and LARGE to INLINE BLOBs, you go to memory more often (fastest).
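      A sketch of inlining the LARGE column, assuming DB2 9.7 or later (the inline length of 5000 is illustrative; the inlined row must still fit within the page size, and MEDIUM would first need its deprecated LONG VARCHAR FOR BIT DATA type converted to a BLOB before it could be inlined the same way):

      ```sql
      -- Assumption: DB2 9.7+; inline length plus other columns must fit the page.
      ALTER TABLE SESSIONS
        ALTER COLUMN "LARGE" SET INLINE LENGTH 5000;
      -- Existing rows are only inlined once a reorg touches the LOB data:
      REORG TABLE SESSIONS LONGLOBDATA;
      ```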

      As for the last piece – yes, the best practice of DMS table spaces for data and indexes should be applied if you are able.

  2. Anonymous ISV DBA says:

    Meh – just put it on SSD, Krafick. Great article.

  3. Sandy says:

    Hi Michael Krafick,
    In our environment we recently created a table with a 10MB BLOB column. When we upload a file (approximately 20KB) through our application, the DB2 container size increases by up to 30MB for every 2 files.
    Why is the DB2 container taking more space, and how can we reclaim the container space? Can you help me?

    • mkrafick says:

      This is an interesting situation, and to be honest I can’t explain the container size increasing 30MB for every two files. I spent some time bouncing this off Ember as well, and short of the generic “I wonder how much whitespace may be attached to that BLOB” or “What else is being loaded in at the same time” – I can’t offer a real explanation. This may be a good one for DB2 Support, and if you do crack it, please let us know here.

      As for reclaiming space: you may want to watch my 4-minute presentation at http://www.dbisoftware.com/blog/db2nightshow.php?id=484 – go to the 35-minute mark.

      In short, it would be beneficial to reorg the table using the LONGLOBDATA option. This will rearrange extents and give you space to reclaim. Then you can use a feature like REDUCE or RECLAIM to release space back to the file system. If you use DB2DART like in my presentation, you can see exactly what is holding the extent.
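      As a sketch, with MYSCHEMA.MYTABLE and LOBSPACE standing in for your actual table and tablespace names (REDUCE MAX assumes a reclaimable-storage automatic storage tablespace, available in DB2 10.1 and later):

      ```sql
      -- Rearrange the LOB extents so free space becomes reclaimable:
      REORG TABLE MYSCHEMA.MYTABLE LONGLOBDATA;
      -- Then release the freed extents back to the file system:
      ALTER TABLESPACE LOBSPACE REDUCE MAX;
      ```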

      Also, if this tablespace was created in 9.7, or was built in 9.7 and migrated to a later version such as 10.1 or 10.5 – do not use the reorg command with the USE clause, such as “REORG TABLE A USE TEMPSPACE”. It won’t reorg as efficiently as it could. Allow it to reorg in its own tablespace naturally. If the tablespace was created in v10.1 or later, this isn’t an issue.

  4. Felix says:

    Hi Michael
    We have our tablespaces in storage groups which are defined across multiple filesystems.
    Does it make sense to define a tablespace for BLOBs in a storage group that spans various filesystems? I would guess not, because I don’t think the BLOBs will be “split” across the filesystems, but I’m not sure.
    Thank you in advance.

  5. Brian Fairchild says:

    I’m a little late here, but I wanted to mention that I think this is an excellent article. Thank you very much, Mr. Krafick!

  6. dbplatz says:

    Hi Michael,

    Regarding tablespaces and LOBs, wouldn’t it be a best practice to set filesystem caching OFF for those tablespaces containing LOB tables, so that the LOBs would use the OS cache instead, since they can’t be brought up into memory (bufferpool)?

    What are your thoughts?

    Thanks.

    Regards,

    • mkrafick says:

      I see what you are asking: why not let the OS-level cache take the load instead of filesystem caching? My answer would be that the best practice specifically mentions filesystem caching, and I can only theorize that you want to be caching closer to where the bottleneck lies. You could cache higher up at the OS level, but I would think you are still working awfully hard where the expensive operation is.
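      For reference, the caching behavior is set per tablespace; here is a sketch assuming a tablespace dedicated to LOB data (LOBSPACE is a hypothetical name):

      ```sql
      -- Because LOBs bypass the bufferpool, a LOB-heavy tablespace is the
      -- common case where filesystem caching is left enabled:
      CREATE TABLESPACE LOBSPACE
        MANAGED BY AUTOMATIC STORAGE
        FILE SYSTEM CACHING;
      -- Or change an existing tablespace (takes effect on database reactivation):
      ALTER TABLESPACE LOBSPACE FILE SYSTEM CACHING;
      ```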

  1. February 24, 2015

    […] a skill builder, not a trip to Vegas. Why won’t you just die?! (Cleaning DB2 Process in Memory) Attack of the Blob: Blobs in a Transaction Processing Environment Automatic Storage Tablespaces (AST): Compare and Contrast to DMS DB2 v10.1 Column Masking Automatic […]
