Backup Compression Consideration – DB2 COMPRESS vs. GZIP

5 Responses

  1. Chris Aldrich says:

    Interesting. I wonder if they saw the GZIP route as “faster” since the regular backup completed in 42 minutes, as opposed to 2 hours 39 minutes for a DB2 compressed backup, even if the overall process was slower.

    And to add to the discussion of compressed backups over compressed tables… I think the best answer is to try it yourself. The “WAREHOUS” database that IBM’s Tivoli Monitoring uses can get rather large, and so can several of its tables. We have several tables compressed within the database, and I still compress the backup. In this particular case I tested it both ways, and the compressed backup still won out on disk space.

    But that may not be true in all cases. As has often been my experience with compression in other areas, compressing already-compressed data tends to increase the size rather than shrink it. But it may depend on what is compressed, with what algorithm, etc.
    So again, experimentation is the key to discovering what is right for you.
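
    As a minimal sketch of that experiment, assuming a database named WAREHOUS (as above), a hypothetical target directory /backup with enough free space, and an instance environment that is already sourced, the two approaches could be compared like this:

        #!/bin/bash
        # Sketch only: the database name and paths are illustrative.

        # 1) Backup using DB2's built-in compression
        time db2 "BACKUP DATABASE WAREHOUS TO /backup/db2compress COMPRESS"

        # 2) Plain backup, then compress the resulting image file(s) with gzip
        time db2 "BACKUP DATABASE WAREHOUS TO /backup/plain"
        time gzip /backup/plain/WAREHOUS.*

        # Compare the resulting sizes on disk
        du -sh /backup/db2compress /backup/plain

    Running both against the same database with compressed tables shows directly which image ends up smaller and where the elapsed time goes.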

  2. Interesting. You must take into account where the ‘pain’ of time and CPU is suffered. Does the gzip process take place on the database server itself, consuming CPU cycles that should be used to handle all those nice SQL statements?

    Or is the backup image stored on a file server, with that file server handling the gzip workload? In that case DB2 has finished the backup and does not care about back-end processes taking place somewhere else.
    In case of a restore: just copy the zipped image to the database server and unzip it there. All that CPU power is idle because you do not have a database running at that time 🙂
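
    As a rough sketch of that split, with the gzip work pushed off the database server (the host name “fileserver”, the database name SAMPLE, and the paths are all placeholders):

        # On the database server: plain backup, then ship the image off
        db2 "BACKUP DATABASE SAMPLE TO /backup"
        scp /backup/SAMPLE.* fileserver:/archive/

        # On the file server: compress there, without touching the database server's CPU
        ssh fileserver 'gzip /archive/SAMPLE.*'

        # At restore time: copy the image back and unzip it on the database server,
        # where the CPU is idle because the database is not running yet
        scp fileserver:/archive/SAMPLE.*.gz /backup/
        gunzip /backup/SAMPLE.*.gz
        db2 "RESTORE DATABASE SAMPLE FROM /backup"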

  3. Hi Michael,
    Great post. I like the detail. I was looking into this stuff some time back, but didn’t have enough resources (disk space and a big enough DB). I went through several forums and realised that a backup with compression is better if you plan to use it for a restore in the future. The logic is that, as in your case, the backup with DB2 compression is 84.8 GB and without it is 538 GB; when we try to restore these backups, in the first case DB2 will have to read only 84.8 GB of data from the disk as compared to 538 GB in the second case, thereby saving some costly I/O cycles. Wondering if it intrigues you to go and test it yourself… 🙂 I request you to please check it if you have the capacity. Would love to look at the results.

    • Yogesh says:

      Hi Saurabh

      Did you ever get your answer?

      I have also been doing some research and am wondering how to reduce the time to take an offline backup of a 600 GB DB2 database, and then to restore it faster as and when needed.

      Yogesh
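
      One way to get an answer is simply to time a restore from each kind of image. A rough sketch, assuming one backup of the database was taken with COMPRESS and one without, and that the directories below are placeholders:

          # Restore from the DB2-compressed image (far less data to read from disk)
          time db2 "RESTORE DATABASE WAREHOUS FROM /backup/db2compress REPLACE EXISTING"

          # Restore from the uncompressed image, for comparison
          time db2 "RESTORE DATABASE WAREHOUS FROM /backup/plain REPLACE EXISTING"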

  4. Andrew McLauchlan says:

    We’ve started using pigz as opposed to gzip. It does consume massive CPU, but the elapsed time is a fraction of what gzip takes. Of course, the unzip still takes the same time.
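
    For example (the image path and thread count are just illustrative), assuming pigz is installed on the server:

        # Compress the backup image with, say, 8 parallel threads
        pigz -p 8 /backup/plain/WAREHOUS.*

        # Decompression in pigz is essentially single-threaded, which is why
        # the unzip takes about as long as a plain gunzip
        pigz -d /backup/plain/WAREHOUS.*.gz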
