IDUG NA 2013 Brain Dump


5 Responses

  1. kshitij kohli says:

    Thanks for the kind words above, Ember, and let me take the honor of explaining what you showed in your presentation.
    Ember’s presentation was what a seasoned pro’s presentation should be: very thorough, meticulous, and entertaining. It was about the 10 commandments of supporting e-commerce databases, though I would say they are 10 commandments for supporting any DB. So here are the commandments:
    1) We shall back up
    2) We shall collect stats
    3) We shall reorg
    4) We shall prune
    5) We shall test our recovery strategy
    6) We shall collect performance data
    7) We shall poke around (DBAs need to be proactive; that’s what poking means :) )
    8) We shall know our data model
    9) We shall index thoughtfully
    10) We shall be nosy

    She didn’t just tell us these commandments; she also explained how to go about each of them, like a best practice guide.
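
    As an illustration (not part of the presentation itself), here is a minimal sketch of automating two of the commandments, stats collection and backup, using the Python ibm_db driver and the SYSPROC.ADMIN_CMD procedure. The connection string, database name, and backup path are placeholders to adjust for your environment.

        import ibm_db

        # Placeholder connection details; adjust for your environment.
        conn = ibm_db.connect(
            "DATABASE=SAMPLE;HOSTNAME=localhost;PORT=50000;"
            "PROTOCOL=TCPIP;UID=db2inst1;PWD=********;", "", "")

        # Commandment 2: collect stats on every user table.
        rs = ibm_db.exec_immediate(
            conn,
            "SELECT TABSCHEMA || '.' || TABNAME FROM SYSCAT.TABLES "
            "WHERE TYPE = 'T' AND TABSCHEMA NOT LIKE 'SYS%'")
        tables = []
        row = ibm_db.fetch_tuple(rs)
        while row:
            tables.append(row[0])
            row = ibm_db.fetch_tuple(rs)
        for table in tables:
            ibm_db.exec_immediate(
                conn,
                "CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE " + table +
                " WITH DISTRIBUTION AND DETAILED INDEXES ALL')")

        # Commandment 1: take an online backup (path is a placeholder).
        ibm_db.exec_immediate(
            conn,
            "CALL SYSPROC.ADMIN_CMD('BACKUP DATABASE SAMPLE ONLINE "
            "TO /db2backups COMPRESS INCLUDE LOGS')")

        ibm_db.close(conn)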

  2. Scott says:

    Ember, you are an excellent note taker! Thanks for a nice summary. To be clear, with respect to my session C09, I used 11 different business queries, referencing literal values with both high and low cardinalities, across 5+ different index design solutions (including compression tests), and DB2 9.7.7 consistently had an Index Logical Read cost that was 25-50% LOWER than DB2 10.1.2. Since logical reads consume CPU, the attendees (and the DB2 community at large) were encouraged to “look before they leap” and carefully measure the performance impact of upgrading.

    Later on, in the DB2 LUW panel, I asked a question that gave attendees some insight into the higher LRead costs in DB2 10.1: IBM said DB2 10.1 more aggressively prefetches or examines additional index pages. I don’t know that I agree this was a good idea, and I wish there were a registry variable to disable it, for I fear DB2 users, particularly OLTP shops, are going to see their CPU costs/consumption increase proportionally to the increased LReads. Of course, IBM would be delighted to sell you more CPUs.
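
    For anyone who wants to run a similar before-and-after comparison, here is a rough sketch (an illustration, not the exact test harness from session C09) of pulling per-statement index and data logical reads out of the package cache with MON_GET_PKG_CACHE_STMT, which exists in both DB2 9.7 and 10.1. The connection string is a placeholder and output handling is kept minimal; capture the same numbers for the same workload on each release and compare.

        import ibm_db

        # Placeholder connection details; adjust for your environment.
        conn = ibm_db.connect(
            "DATABASE=SAMPLE;HOSTNAME=localhost;PORT=50000;"
            "PROTOCOL=TCPIP;UID=db2inst1;PWD=********;", "", "")

        # Top statements by index logical reads, from the package cache.
        stmt = ibm_db.exec_immediate(conn, """
            SELECT POOL_INDEX_L_READS,
                   POOL_DATA_L_READS,
                   NUM_EXECUTIONS,
                   CAST(SUBSTR(STMT_TEXT, 1, 80) AS VARCHAR(80)) AS STMT
            FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS T
            WHERE NUM_EXECUTIONS > 0
            ORDER BY POOL_INDEX_L_READS DESC
            FETCH FIRST 20 ROWS ONLY
        """)

        row = ibm_db.fetch_assoc(stmt)
        while row:
            print(row["POOL_INDEX_L_READS"], row["POOL_DATA_L_READS"],
                  row["NUM_EXECUTIONS"], row["STMT"])
            row = ibm_db.fetch_assoc(stmt)

        ibm_db.close(conn)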

    I’ll be giving this IDUG presentation again on The DB2Night Show on 21 June at 10am CDT:
    https://www2.gotomeeting.com/register/192388514

    With higher LRead and CPU costs on the road ahead with DB2 10.1, tuning to minimize costs is more important than ever. A typical DBI customer reduces CPU consumption by 25-60% in the first week. DBI also makes it very easy to compare database and SQL workload performance across two different timeframes; this will help assess the cost impact of a DB2 10.1 upgrade. Learn more at:
    http://www.DBISoftware.com

    Oh, and for those who missed Ember’s excellent IDUG presentation, Ember was a guest on The DB2Night Show and offered a similar presentation. See Top 10 tips for e-Commerce databases at:
    http://www.dbisoftware.com/blog/db2nightshow.php?id=309

    Best regards,
    Scott

  3. David M says:

    Allow me to slightly dispute one of your points:

    If using TRACKMOD=ON, then any change to a LOB will cause the entire tablespace to be backed up in full, which may lead to larger backups.

    The way it works is:
    * If the tablespace is totally clean (i.e., no changes since the previous backup), the incremental backup utility will skip it entirely.
    * If the tablespace is not clean, incremental backup will store all LOB/LF pages and scan all DAT/INX/etc pages and store any that are dirty. (This applies to both SMS and DMS tablespaces).

    So any change to any data (not just changes to LOBs) in the tablespace will cause the entire tablespace to be scanned, but only LOB/LFs will be backed up in full (other data will be backed up incrementally).

    The advice to take away from this is that if you can put your historical, unchanging data into its own tablespaces, incremental backup can do a smarter job of backing it up.
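
    To make that concrete, here is a minimal sketch of the setup described above: enable TRACKMOD, take a full backup as the baseline, then take incremental backups, which can skip clean tablespaces entirely. It uses the Python ibm_db driver and SYSPROC.ADMIN_CMD (the same operations can be run from the DB2 CLP); the database name and backup path are placeholders.

        import ibm_db

        # Placeholder connection details; adjust for your environment.
        conn = ibm_db.connect(
            "DATABASE=SAMPLE;HOSTNAME=localhost;PORT=50000;"
            "PROTOCOL=TCPIP;UID=db2inst1;PWD=********;", "", "")

        # Track page-level changes between backups (this change may require
        # the database to be deactivated and reactivated to take effect).
        ibm_db.exec_immediate(
            conn,
            "CALL SYSPROC.ADMIN_CMD('UPDATE DATABASE CONFIGURATION "
            "USING TRACKMOD ON')")

        # A full backup is required as the baseline before incrementals.
        ibm_db.exec_immediate(
            conn,
            "CALL SYSPROC.ADMIN_CMD('BACKUP DATABASE SAMPLE ONLINE "
            "TO /db2backups')")

        # Later: a cumulative incremental backup. Tablespaces with no changes
        # since the last backup (e.g., ones holding only historical data)
        # are skipped entirely.
        ibm_db.exec_immediate(
            conn,
            "CALL SYSPROC.ADMIN_CMD('BACKUP DATABASE SAMPLE ONLINE "
            "INCREMENTAL TO /db2backups')")

        ibm_db.close(conn)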
