Robert's Blog


Thursday, March 25, 2010

A Closer Look at DB2 9 for z/OS Index Compression

In May of last year I posted a blog entry that included some information about the index compression capability introduced with DB2 9 for z/OS. It's a good time, I think, to add to that information, and I'll do that by way of this post.

How does DB2 do it? In that entry from last year, I noted that DB2 9 index compression is not dictionary-based, as DB2 data compression is (with dictionary-based compression, commonly occurring strings of data values are replaced with shorter strings, and the replacement is reversed when the data is accessed). For a tablespace defined with COMPRESS YES, DB2 places as many compressed data rows as it can into a 4K page in memory (or an 8K, 16K, or 32K page, depending on the buffer pool to which the tablespace is assigned), and the page size in memory is the same as the page size on disk. Index compression works differently: it reduces space requirements on disk but not in memory, as a leaf page of a compressed index is smaller on disk than in memory (only leaf pages are compressed, but the vast majority of most indexes' pages are leaf pages). Specifically, index compression is based on getting the contents of an 8K, 16K, or 32K index leaf page in memory into a 4K page on disk, without using a dictionary; that is why an index has to be assigned to an 8K, 16K, or 32K buffer pool in order to be compressed (the DDL sketch following the list below shows what this looks like). To squeeze a large in-memory page into a 4K page on disk, DB2 uses a combination of three compression mechanisms:
  • Prefix compression: Suppose you had a 3-column key on state, city, and telephone number. You might then have a LOT of duplicates of some combinations of column 1 and column 2 values (e.g., state = 'Texas' and city = 'Houston'). If compression is used for this index, DB2 will not repeatedly store those duplicate “prefix” values in leaf pages on disk; instead, DB2 will store a given key prefix once in a compressed leaf page on disk, along with the portion of each full key value that differs from one entry to the next (e.g., phone number = '713-111-2222', phone number = '713-222-3333', etc.). Note that while this example (for the sake of simplicity) presents a prefix that breaks along key-column lines, this is not a restriction. In other words, a prefix, in the context of prefix compression, can include just a portion of a key column value (for example, 'ROBERT' could be a prefix for the last names 'ROBERTS' and 'ROBERTSON').
  • RID list compression: If a given index key value has many duplicates, several of these duplicate values could be in rows that are located in the same page of a table. If the index is not compressed, the full RID (row ID) of each of these rows will be stored following the key value in a leaf page on disk, even though only one byte of that four- or five-byte RID (the byte that indicates the row's position in the data page) differs from one value to the next in the RID chain (the page number, which occupies 4 bytes for a partitioned tablespace with a DSSIZE of 4 GB or larger, and 3 bytes otherwise, stays the same). If that index were compressed, DB2 would save space on disk by storing the multi-byte page number once in the RID chain, followed by the single-byte row location indicators, until the page number (or the key value) changes. To put some numbers on that: ten duplicate entries pointing to rows in the same table page, with 5-byte RIDs, would take 50 bytes of RID space uncompressed, versus 14 bytes compressed (the 4-byte page number stored once, plus ten 1-byte row location indicators). This compression technique is particularly effective for indexes that have relatively low cardinality and are either clustering or have a high degree of correlation with the table's clustering key.
  • In-memory-only key map: An uncompressed index leaf page contains a key map, which itself contains a 2-byte entry for each distinct key value stored in the page. If the index is compressed, this map will not be stored on disk (it will be reconstructed, at relatively low cost, when the leaf page is read into memory). This compression technique nicely complements the RID list compression mechanism, as it is most effective for high-cardinality indexes (especially those with short keys, as the more distinct key values a page holds, the more space the key map occupies).
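To make the DDL side of this concrete, here is a sketch of what enabling compression for an index might look like. The table, index, and column names are made up for illustration, and BP8K0 stands in for whichever 8K (or 16K, or 32K) buffer pool you'd actually use:

  -- Hypothetical index on the state/city/phone example used above.
  -- COMPRESS YES requires an 8K, 16K, or 32K buffer pool, because
  -- the larger in-memory leaf pages are compressed to 4K pages on disk.
  CREATE INDEX IX_CUST_PHONE
    ON CUSTOMER (STATE, CITY, PHONE_NUM)
    BUFFERPOOL BP8K0
    COMPRESS YES;

  -- An existing index can be altered to use compression, too. Keep in
  -- mind that this places the index in rebuild-pending status, so a
  -- REBUILD INDEX will be needed before the index can be used again.
  ALTER INDEX IX_CUST_PHONE COMPRESS YES;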
These compression techniques often deliver impressive results, with plenty of DB2 9 for z/OS users reporting disk savings of 50-70% after enabling compression for an index. Still, they have their limits, and when DB2 determines that a leaf page in memory already holds as much as can be compressed onto a 4K page on disk, it will stop placing entries in that page, even if that means letting a good bit of space go unused in the in-memory page. This is why you want to run the DSN1COMP utility for an index prior to compressing it (a sample job is sketched below). DSN1COMP provides estimates of the disk space savings on the one hand, and the in-memory page space wastage on the other, that you could expect to see based on your choice of an 8K, 16K, or 32K page size for the to-be-compressed index. The right index page size is the one that maximizes disk space savings while minimizing in-memory page space wastage.
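Here's a rough sketch of what a DSN1COMP job for an index might look like. Treat the specifics as placeholders: the load library, the LEAFLIM value (which limits how many leaf pages the utility examines), and certainly the name of the index's underlying VSAM dataset will all be different on your system:

  //DSN1COMP JOB (ACCT),'INDEX COMP ESTIMATE'
  //* Estimate index compression results for 8K, 16K, and 32K pages.
  //* SYSUT1 points to the (made-up) dataset behind the target index.
  //STEP1    EXEC PGM=DSN1COMP,PARM='LEAFLIM(10000)'
  //STEPLIB  DD DISP=SHR,DSN=DSN910.SDSNLOAD
  //SYSPRINT DD SYSOUT=*
  //SYSUT1   DD DISP=SHR,DSN=DBCAT.DSNDBD.MYDBASE.IXCUSTPH.I0001.A001

The SYSPRINT output shows, for each candidate page size, the estimated disk savings and the estimated amount of in-memory page space that would go unused.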

Index compression overhead: it's about I/Os, not access. The differences in the way that data and indexes are compressed in a DB2 9 for z/OS environment lead to differences in the associated CPU overhead. First of all, data compression is hardware-assisted (it takes advantage of a microcode assist built into the System z server line), while index compression is not. Second, in the case of data compression, the overhead cost is paid when data is accessed in a buffer pool in memory, as rows are not decompressed until they are retrieved by DB2 on behalf of an application process (similarly, new or changed rows are compressed as they are placed in pages in memory as part of insert and update operations). For a compressed index, the overhead cost is incurred at I/O time, since pages are decompressed when read into memory and compressed when written to disk. So, once a leaf page of a compressed index is in memory, repeated accesses of that page will not involve additional overhead due to compression, whereas data compression overhead is incurred every time a row is retrieved from, or placed into, a page in memory. With respect to the I/O-related cost of compression, the situation is reversed: there is no additional overhead associated with reading a compressed data page into memory from disk, or writing such a page to disk, while for a compressed index the CPU cost of reading a leaf page from disk, or writing a changed leaf page to disk, will be higher than it would be for a non-compressed index. One take-away from this is that large buffer pools are a good match for compressed indexes, as fewer disk I/Os mean lower compression overhead (a couple of illustrative commands follow this paragraph).
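If you decide to give a compressed index more breathing room in memory, that's a buffer pool sizing exercise. The commands below are just an illustration (the pool name and the VPSIZE value are made up), and you'd want to look at the pool's read I/O activity and your available real storage before growing anything:

  -DISPLAY BUFFERPOOL(BP8K0) DETAIL
  -ALTER BUFFERPOOL(BP8K0) VPSIZE(100000)

VPSIZE is expressed in buffers, so 100,000 buffers in an 8K pool works out to roughly 800 MB of virtual storage. Again, that's an example, not a recommendation.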

This "pay at I/O time" aspect of the CPU cost of index compression has implications for where that cost shows up. If the I/O is of the prefetch read variety, or a database write, the leaf page compression cost will be charged to the DB2 database services address space (aka DBM1). If it's a synchronous read I/O, index compression overhead will affect the class 2 CPU time of the application process for which the on-demand read is being performed. Thus, for an application that accesses index leaf pages that are read in from disk via prefetch reads (as might be the case for a batch job or a data warehouse query), the cost of index compression may appear to be close to zero because it's being paid by the DB2 database services address space.

So, what kind of overhead should you expect to see, in terms of the in-DB2 CPU cost of applications that access compressed indexes? Because multiple variables come into play, mileage will vary, but my expectation is that you'd see in-DB2 CPU consumption that is some single-digit percentage higher than it would be in the non-compressed case. Remember: keep the I/Os down to keep that cost down.

1 Comment:

Anonymous said...

I have not been able to identify any way to get compression statistics for DB2 Index Compression, short of creating 2 indexes which are identical except for compression and comparing High Used RBA values.

I/Os use CPU time. Since 1990, I have found that sequential CPU use drops when I activate Hardware Data Compression for QSAM, IMS, and DB2.

March 12, 2014 at 5:07 PM  
