DB2 LUW – What is a Page?


16 Responses

  1. Brian says:

    Interesting article. So often, I have just used page sizes that were used in previous projects without giving them sufficient thought.

  2. Shivraj says:

    Very helpful and informative stuff.

  3. Anik says:

    Thanks again teacher 🙂

  4. Ted says:

    Hello, just wanted to take a quick moment to tell you how much I appreciate your website. I have been a db2 dba for 20 years and I still find myself learning something new on your website and refer to it quite often 🙂 Thanks!

  5. Aditya says:

    It’s really informative, and most important is the way you express things.
    Thanks a lot 🙂

  6. Pouria says:

    Thanks for your informative post. It clarifies a lot of details about the physical storage in DB2.
    I just have a quick question/confusion:
    (Assuming I am using DB2 10.1 – with *regular* table spaces) I have read in the IBM documentation that there cannot be more than 255 rows on a page (for example here: http://www.ibm.com/developerworks/data/library/techarticle/0212wieser/ ). I am just trying to compare this with your formula in “Choosing a page size”, which seems to consider the whole space in a page (minus 100 bytes of overhead). I am just wondering if that formula is only correct for large tablespaces, or if there is something about regular tablespaces that I am missing here.

    • Ember Crooks says:

      I would never use regular tablespaces anymore – only large ones.

      Large tablespaces have varying maximum numbers of rows per page, depending on the page size:

      4K pages: 287 rows
      8K pages: 580 rows
      16K pages: 1,165 rows
      32K pages: 2,335 rows
      There is a maximum number of rows per page, and for particularly narrow tables, that has to be considered.
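The interplay between the article’s space formula and the per-page row caps can be sketched in a few lines of Python. This is illustrative only: the ~100-byte overhead figure comes from the article, the 255-row cap for regular tablespaces from the IBM documentation the commenter cites, and the large-tablespace caps are the documented per-page-size limits (verify them for your DB2 version).

```python
# Rows-per-page estimate: usable space (page size minus ~100 bytes of
# page overhead, per the article) divided by row size, then capped at
# the documented maximum rows per page for the tablespace type.

# Assumed caps for large tablespaces (from IBM docs; verify per version)
LARGE_TS_MAX_ROWS = {4096: 287, 8192: 580, 16384: 1165, 32768: 2335}
PAGE_OVERHEAD = 100  # approximate per-page overhead, per the article

def rows_per_page(page_size, row_size, large_tablespace=True):
    """Estimate how many rows fit on one data page."""
    by_space = (page_size - PAGE_OVERHEAD) // row_size
    cap = LARGE_TS_MAX_ROWS[page_size] if large_tablespace else 255
    return min(by_space, cap)

# A 50-byte row on a 4K page: space allows 79 rows, under either cap
print(rows_per_page(4096, 50, large_tablespace=False))  # 79

# A 10-byte row: space allows 399, but the cap wins (255 vs. 287)
print(rows_per_page(4096, 10, large_tablespace=False))  # 255
print(rows_per_page(4096, 10, large_tablespace=True))   # 287
```

For narrow rows, the cap (not the space formula) is the binding constraint, which is the point of the reply above.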

  7. Hi Ember! This article you wrote is still hot after 3 years. I have an additional question about performance when using larger page sizes. Although I administer a DB2 10.5 OLTP database with small tables that fit into a 4K page size tablespace, when I transferred them to a 32K tablespace prior to a compress and ran some performance tests, all DML operations performed a lot faster. It was not a problem with the old tablespace, because I had already recreated it keeping the 4K page size, only changing from regular to large in order to allow more rows – it had reached the regular maximum capacity. I have tested this approach in other databases, all of them OLTP, and observed a reduction in I/O and faster SQL responses. All tablespaces were created with the default EXTENTSIZE and PREFETCHSIZE, and all bufferpools are AUTOMATIC with STMM. I really cannot understand this, because it goes against all the DB2 documentation. Is there any reason to see this increase in performance? BTW, all tables run a lot of INSERTs, UPDATEs, and DELETEs, and the tests considered them without compression.

    • Ember Crooks says:

      Without asking a lot of questions, I could not speculate as to why. There certainly could be reasons that things would work that way, but you’re right that generally, for OLTP, smaller page sizes are thought to perform better. EVERY single recommendation depends on so many factors that when you can do testing like that, it is excellent to do so. It could be something as simple as a 32K page size matching up better with your storage stripe, strip, or LUN sizes, or any one of a dozen other things.

      • Thanks for the quick response. I take care of 4 different customers running the same ERP software on DB2 9.7 and DB2 10.5. They’re all different in hardware specs, and I performed a test with the same table in all environments, migrating it to a 32K large tablespace without compression. In one case, a very complex ETL SQL SELECT that took more than 1 hour to execute finished in 3 minutes after the change (and I had migrated it to a new 4K tablespace before the tests). I have access to a test environment pretty close to production and am trying different combinations of EXTENT and PREFETCH sizes to find out what’s going on.

        • Ember Crooks says:

          I am not surprised that a complex ETL SQL would perform better. However, the number one performance priority would usually be the transactions, and I’m curious as to the performance difference, if any, for them.

          Queries dealing with larger amounts of data will generally benefit from a larger page size, particularly if the table is not very contiguous on disk.

  8. Aamir Malik says:

    I am in the process of a database migration from DB2/VSE 7.5.0 to DB2 LUW 10.5. How can I optimize the selection of page size, extent size, and prefetch size for the migrated database?

    • Ember Crooks says:

      If the database being moved is a Data Warehouse or Analytics environment, then 32K is generally the page size to go with. If it’s OLTP, you may want to consider a smaller page size (4K or maybe 8K) for the tables that are particularly likely to see singleton-row updates, inserts, and deletes. If you have the opportunity, it is best to test and compare configurations to see which is actually faster. Extent and prefetch size are going to be more dependent on your storage characteristics. If you can actually get details on your storage, such as the strip size for a RAID array, then matching your extent size to that is very useful. Prefetch size is often smaller for OLTP, and larger for DW/Analytics.
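One common rule of thumb for prefetch size (not spelled out in the reply above, so treat it as an assumption) is to size a prefetch request so it touches every container in the tablespace: prefetch size = extent size × number of containers. A minimal sketch:

```python
# Rule-of-thumb prefetch sizing: one prefetch request reads one extent
# from each container, so all disks behind the tablespace work in
# parallel. Values here are illustrative, not from the original post.

def suggested_prefetch_size(extent_size_pages, num_containers):
    """Prefetch size (in pages) = extent size * number of containers."""
    return extent_size_pages * num_containers

# e.g. a tablespace with 32-page extents striped across 4 containers
print(suggested_prefetch_size(32, 4))  # 128 pages per prefetch request
```

As the reply notes, the right extent size itself depends on the storage layer (e.g. RAID strip size), so this heuristic is a starting point to test against, not a rule.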

  9. Bobby Thomas says:

    All your posts are simple, informative and it leads to further investigation and research for me. Thanks a lot.

    In this post you have the statement “It is also important to think of the geometry of the table when choosing a page size. If you have a row size of 2010 bytes, that means that only one row will fit on every 4K page, and therefore nearly 50% of the space allocated to your table will be empty and wasted.” I assume that you meant to say 2100 instead of 2010 as the row size. A 2100-byte row makes the 4K page too small to accommodate 2 such records, and that is why around 50% of storage is wasted. Is my understanding correct?
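A quick sanity check of that arithmetic, assuming the ~100 bytes of per-page overhead the article mentions (actual overhead varies): even at 2010 bytes, two rows (4020 bytes) already exceed the roughly 3996 usable bytes on a 4K page, so the original figure works as written.

```python
# Page-geometry check for the 2010-byte row example, assuming the
# article's ~100-byte per-page overhead figure.
PAGE_SIZE = 4096
OVERHEAD = 100
ROW_SIZE = 2010

usable = PAGE_SIZE - OVERHEAD          # 3996 bytes available for rows
rows = usable // ROW_SIZE              # 2 * 2010 = 4020 > 3996, so only 1 fits
wasted = PAGE_SIZE - rows * ROW_SIZE   # 2086 bytes unused
print(rows, round(100 * wasted / PAGE_SIZE))  # 1 row, ~51% of the page wasted
```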
