STMM Analysis Tool


6 Responses

  1. Somraj Chakrabarty says:

    Hi,

    I am a regular follower of your blog. I am getting some warnings related to STMM in my environment (DB2 V9.7 FP 9 running on Windows 2008 R2), shown below. Could you please provide some information on this?

    APPHDL : 0-38581 APPID: 10.248.204.91.57366.140807093552
    AUTHID : MXDB2USER
    EDUID : 12540 EDUNAME: db2agent (MAXDB71) 0
    FUNCTION: DB2 UDB, Self tuning memory manager, stmmSimulateSortMerge, probe:3388
    MESSAGE : ZRC=0x87AE0165=-2018639515=STMM_SEVERE_ERROR
    "Severe error encountered in self tuning memory manager"
    DATA #1 : String, 91 bytes
    ERROR simulation of real configuration has failed - not reporting benefit for this operator
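
    For anyone else hitting the same message, a minimal first step (not a fix, and assuming shell access to the DB2 server; MAXDB71 is the database named in the EDUNAME line above) is to decode the return code and check which memory areas STMM is actually tuning:

      # Decode the ZRC shown in the MESSAGE line
      db2diag -rc 0x87AE0165

      # Pull the STMM-related records out of db2diag.log for context
      # (on Windows, findstr /i stmm can stand in for grep)
      db2diag | grep -i stmm

      # stmmSimulateSortMerge is part of the sort-memory simulation, so check
      # whether SORTHEAP / SHEAPTHRES_SHR are set to AUTOMATIC under STMM
      db2 get db cfg for MAXDB71 | grep -i -E "self_tuning|sortheap|sheapthres"

    The db2diag.log records surrounding the error are usually the most useful context if this needs to go to IBM support.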

  2. Isaac Munoz says:

    Hi Ember

    Great article, and thanks very much for the info about the STMM log parser. I will give it a try, since we have large DBs using STMM for all of the auto-managed DB config parms.

    However, I have a question about your statement that “… for well-run OLTP databases, that much package cache does not help performance.” Could you please elaborate a bit more on what performance cons a big package cache has? If you prefer, we can discuss it offline via email.

    Thanks!

    • Ember Crooks says:

      The performance focus for OLTP databases is small, fast queries. New queries are not often introduced – the SQL run is known and tested. This also means there is a smaller set of distinct queries. Thus, for the most part, the package cache can be on the small side. Most of the large queries are either non-critical for performance or extremely rarely run (monthly and the like). I need a package cache that is just large enough to hold those known and tested small queries. The memory is more useful for bufferpools or even sort areas and such.

      If I have an oversized system (I often have systems sized for Black Friday loads, but only get that kind of load once or twice a year), DB2 STMM seems to give me enough package cache to hold every query I ever execute, even once. There are plenty of queries that I execute only once and whose access plans I don’t care to keep around – they won’t be used again, or won’t be used again for another month, or their performance is non-critical. Even during peak periods, my performance focus is on that small, known set of (often singleton-row) queries.

      Furthermore, because the extra SQL that ends up stored in the package cache is so infrequently executed, I can still get over 99% hit ratios with a smaller package cache.

      The other con of a large package cache, besides using memory I could put to better use elsewhere, is that it makes analysis more difficult. I find it much harder to store and work with 4 GB snapshot files, or that much data stored in tables (especially when I take hourly snapshots), than with more manageable sizes. I end up keeping a shorter period of data, or have seen my clients disable my hourly snapshots altogether. Depending on whether I have tools to deal with the data, I may also have to transfer files from the server, import them into Excel, and then deal with Excel’s slowness on that much data that I don’t really need.

      Obviously, it’s a personal choice and not a performance imperative, but I like a smaller package cache that still allows me to get over 99% hit ratios.
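
      As a rough sketch of how that hit ratio can be checked (SAMPLE below is a placeholder database name, and the MON_GET_WORKLOAD table function is assumed to be available, as it is from DB2 9.7 onward):

        db2 connect to SAMPLE
        # Package cache hit ratio = (lookups - inserts) / lookups
        db2 "SELECT SUM(PKG_CACHE_LOOKUPS) AS lookups,
                    SUM(PKG_CACHE_INSERTS) AS inserts,
                    DEC(100.0 * (SUM(PKG_CACHE_LOOKUPS) - SUM(PKG_CACHE_INSERTS))
                        / NULLIF(SUM(PKG_CACHE_LOOKUPS), 0), 5, 2) AS hit_ratio_pct
             FROM TABLE(MON_GET_WORKLOAD(NULL, -2)) AS T"

      If that ratio stays above the target (99% in this discussion) with a fixed, smaller PKGCACHESZ, that is the kind of evidence described above for keeping the package cache small rather than leaving it entirely to STMM.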

  3. Chaitanya says:

    Hi Ember, in our OLTP environment we have been seeing the message “Time out based commit point” very frequently these days when the application team runs their jobs. Can you please let me know how to deal with this issue so that it is solved permanently?

    Thanks so much,
    Chaitanya

