What does a Commit mean in DB2?

15 Responses

  1. Roland says:

    Hi Ember,
    I recently had a customer who wanted to explicitly turn off autocommit for his sessions, as he prefers the explicit commit when needed. 😉

    There are three ways to configure this:
    – Start a db2 command shell with “db2 +c” or append a “+c” to all db2 invocations
    – In a db2 command script you can use “update command options using c off” to switch it off on demand.
    – Or you could define an environment variable DB2OPTIONS=”+c” in the user's environment.
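
    For example, all three might look like this in a shell session (a sketch; SAMPLE is a placeholder database name):

        db2 connect to sample

        # 1. Autocommit off for a single invocation
        db2 +c "insert into staff (id) values (999)"
        db2 commit

        # 2. Toggle it off on demand within a CLP session or script
        db2 update command options using c off

        # 3. Off for every db2 invocation in this shell
        export DB2OPTIONS="+c"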

    Cheers
    Roland

  2. Toben Nelson says:

    Very clear, succinct job reviewing the main interactions of a commit. I just thought I would add a note for anyone making broad use of “currently committed” data access, which a lot of folks seem to be liking these days. If you do, and you’re in a highly volatile OLTP environment where you likely have infinite logging enabled, then you’ll run into problems unless you use an extremely large (read: unreasonably large) log buffer, given the following restriction on the currently committed access level:

    “A transaction that has made an uncommitted modification to a row forces the currently committed reader to access appropriate log records to determine the currently committed version of the row. Although log records that are no longer in the log buffer can be physically read, currently committed semantics do not support the retrieval of log files from the log archive. This only affects databases that are configured to use infinite logging.”

    Basically, just note that with currently committed, you need to be as sure as you can be that the rows you need to affect/undo/redo/whatever are found in the log buffer and not in the log archive on disk. More explication in supposedly well-understood areas like this is needed, so keep it up. Great post, Ember.
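
    For anyone who wants to check the relevant settings, something like this should work (a sketch; SAMPLE is a placeholder database name, and the LOGBUFSZ value is illustrative, not a recommendation):

        # Is currently committed access enabled?
        db2 get db cfg for sample | grep -i cur_commit

        # Log buffer size, in 4 KB pages
        db2 get db cfg for sample | grep -i logbufsz

        # A larger log buffer makes it more likely that the uncommitted
        # version of a row is still in memory when a reader needs it
        db2 update db cfg for sample using LOGBUFSZ 16384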

  3. Isaac Munoz says:

    Great article Ember, thanks very much!

    I’d like to ask if anybody has run into applications staying in the “Commit Active” state for seconds, and how they solved or root-caused it. I’ve run into 2 different customers having this strange issue; eventually the issue disappeared, but during the time it lasted (days) we never identified the root cause.

    Regards

    • Roland Schock says:

      Hi Isaac,

      Without having had this problem myself, I'd say you should look at log buffer sizing, MINCOMMIT, and disk contention on your log device.
      Also, quite small transaction log files can lead to a lot of archiving activity, which sometimes slows down commit processing.
      But beware: changing MINCOMMIT requires a lot of knowledge about your environment. It is a kind of running gag among the lab/PMR folks that people change MINCOMMIT to a value higher than 1 and then have problems with commit processing.
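
      If you hit it again, the monitoring table functions expose log write activity; a starting point might be something like this (a sketch, assuming DB2 10.1 or later, where MON_GET_TRANSACTION_LOG is available):

          # High LOG_WRITE_TIME relative to NUM_LOG_WRITE_IO suggests a slow
          # log device; NUM_LOG_BUFFER_FULL suggests LOGBUFSZ is too small
          db2 "SELECT log_writes, num_log_write_io, log_write_time, num_log_buffer_full FROM TABLE(MON_GET_TRANSACTION_LOG(-1)) AS t"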

      Kind regards
      Roland

  4. Suvradeep Sensarma says:

    Hi Ember,
    If I set MINCOMMIT to 20 and the database server crashes just after the application performs 19 COMMIT operations, the committed data has not yet been written to disk from the log buffer. Will CRASH RECOVERY be able to recover the 19 COMMITs that were performed prior to the crash, or are all of them lost?

    • Ember Crooks says:

      Yes, they would likely all be lost. MINCOMMIT controls when log records are externalized to disk, so those commit records would not be on disk, and everything in memory is lost in a crash. Note, though, that a commit does not return success to the application until its log records are flushed, so those 19 applications would not yet believe their work was committed. I’ve heard the rule of thumb is to increase MINCOMMIT very slowly, and that it generally shouldn’t be set to more than three.
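
      To check and adjust it (a sketch; SAMPLE is a placeholder database name):

          # Current group-commit setting
          db2 get db cfg for sample | grep -i mincommit

          # Back down to the default if commit processing is stalling
          db2 update db cfg for sample using MINCOMMIT 1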

  5. AVManoj says:

    Hi Ember,
    I always prefer your articles when I don’t know a concept, because your articles build from basic to advanced. I have a small doubt: if log files are deleted and I hadn’t set the log mirroring parameter, how can I get those deleted logs back? Also, can transaction log space fill up with circular logging?

    • Ember Crooks says:

      Log space can absolutely be full with circular logging. A single transaction must fit into the active log space, and all concurrently active transactions together must also fit. If they do not, you will get a log full error (SQL0964C).

      If transaction log files are deleted, then your only option is generally to restore them from an OS-level backup, if you happen to have one. If active log files are deleted, you will likely corrupt your database. In a few rare circumstances, DB2 may be able to recreate the log files, but you shouldn’t count on it.
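
      To protect against exactly this, DB2 can keep a second copy of the active logs on a separate device, and you can check how much active log space you have; a sketch, assuming a database named SAMPLE and a mirror path that already exists:

          # Mirror active logs to a second filesystem/device
          db2 update db cfg for sample using MIRRORLOGPATH /db2mirror/logs

          # Active log space is roughly LOGFILSIZ (in 4 KB pages) x (LOGPRIMARY + LOGSECOND)
          db2 get db cfg for sample | grep -iE "logfilsiz|logprimary|logsecond"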

      Does that answer your questions?

  6. Elsje says:

    Hi Ember

    Thank you for your post, but perhaps you can answer a question for me. I have two separate modules running and updating the same table at more or less the same time. Would a commit by one module commit changes made by the other? There is no way they can be updating the same record at the same time.

    Module 1 will update certain fields on Table A, and based on these updated values, Module 2 will process the record and update whatever values it needs to.
    For example:

    Module 1 retrieves and updates Row 1 on the table & commits the change
    Module 2 retrieves Row 1 & based on the values will process the row, but will not commit immediately
    Module 1 retrieves & updates Row 2 on the table and commits the change

    Will this last commit from Module 1 commit the changes from Module 2?

    • Ember Crooks says:

      This is why we have row locking in DB2. While it can be a pain sometimes, module 2 would not be able to update the row until module 1 had committed or rolled back, releasing its exclusive row lock. Depending on the isolation levels, module 2 may not even be able to read the row while module 1 is working on it. Also, a commit applies only to the committing connection’s own unit of work, so module 1’s commit would not commit module 2’s changes. What you describe is part of ACID processing – Isolation. Every database platform that ensures ACID should separate the transactions; how that occurs depends on the platform.
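
      You can see the behavior with two CLP sessions; a sketch, assuming a table TABLEA with ID, STATUS, and PROCESSED columns (hypothetical names):

          # Session 1 (module 1): update a row and hold the transaction open
          db2 connect to sample
          db2 +c "update tablea set status = 'READY' where id = 1"

          # Session 2 (module 2): this UPDATE waits on session 1's exclusive row lock
          db2 connect to sample
          db2 "update tablea set processed = 'Y' where id = 1"

          # Session 1: commit releases the lock and commits ONLY session 1's changes
          db2 commit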

      Let me know if that does not answer your question.

  7. Rag says:

    Hi Ember,

    Thanks for the detailed explanation. I just have one question. We are trying to load millions of records, and I only see a throughput of 5k records/sec running insert statements. I know the Load utility can run much faster, but 5k seems very low to me, and the only wait I see is log disk write wait. We tried all the parameters related to logging, but nothing changed. Also, the avg write time is just 1 ms. Is 5k the optimal throughput? Is there anything I’m missing?

    thanks
    Rag

    • Ember Crooks says:

      Did you increase LOGBUFSZ? The default is far too small for most situations.

      Insert is also affected by things like triggers and indexes on the table – consider if those might be impacting insert speed.

      You’ll get better throughput with import over insert, because you can use the commitcount parameter to commit a group of records at a time.

      If insert performance is a top priority, placing your active transaction logs on the fastest possible disk can also be really useful.
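
      As a sketch of the commitcount approach (file, schema, and table names are placeholders):

          # A larger log buffer helps heavy insert workloads (value illustrative)
          db2 update db cfg for sample using LOGBUFSZ 4096

          # Commit every 10,000 rows instead of once per row
          db2 "import from mydata.del of del commitcount 10000 insert into myschema.mytable"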
