Oracle Pushes Compression to Scale Databases

Advanced Compression: Not a Cure-All

Oracle acknowledges that Advanced Compression isn't a cure-all. For instance, while large table scans "are a whole lot faster, compression doesn't make random-access reads that much faster," Loaiza said. Also, data that has already been compressed, such as a JPEG image, can't be compressed further, according to Kumar.

Oracle's claim of 4:1 compression also isn't the highest level in the database industry. Database analyst Curt Monash pointed out in an online post this week that analytic database start-up Vertica Inc. claims compression ratios from 5:1 to as much as 60:1, depending on the type of data.

Kumar declined to comment about Vertica. But during his OpenWorld presentation, he claimed that Oracle's variable-length, block-level compression is more efficient than what is offered in IBM's rival DB2 9 database, not to mention faster. "Because DB2 is so inefficient to begin with, Oracle is the winner any day," Kumar said. He also called the compression offered by data warehousing database vendor Teradata Corp. "very primitive."

But users haven't flocked to Advanced Compression yet. One reason is that it's a paid add-on. A license costs $11,500 per processor, with updates and support adding an additional $2,530 per CPU. Also, it's available only to users of 11g Enterprise Edition, and Oracle hasn't seen much adoption of 11g thus far. According to Andrew Mendelsohn, Oracle's senior vice president of server technologies, 75% of its customers are running 10g, and another 20% are still running 9i.

Take what is likely Oracle's biggest customer, LGR Telecommunications, which develops data warehousing systems for telecommunications companies. LGR has built two 300TB data warehouses for AT&T Inc. for use in storing and managing the carrier's caller data records, according to Paul Hartley, general manager of LGR's North American operations in Atlanta. The databases, which run concurrently with one another, can scale up to a total of 1.2PB, Hartley said during a presentation at OpenWorld.

But the two data warehouses are based on Oracle 10g, so they can't take advantage of Advanced Compression. LGR does "use compression to some extent today, but we plan to use it extensively in the future," Hannes van Rooven, a manager at LGR, said during the same presentation.

Another Oracle customer, Intermap Technologies Corp., is using the spatial-data version of 11g for its 11TB database of digital mapping and imagery data, which is expected to grow to 40TB by the first quarter of 2010, according to Sue Merrigan, senior director of information management at the Englewood, Colo., company. Intermap isn't in the compression camp now. "We don't compress the data because we are concerned it would lose its accuracy," Merrigan said.

That isn't true, responded Kumar, who said that Advanced Compression uses a lossless compression scheme: the original data is reconstructed exactly, bit for bit, when it is read back.
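The distinction is easy to demonstrate. In a lossless scheme, decompressing always yields the exact original bytes, so values such as map coordinates cannot lose precision. A minimal sketch, again using zlib purely as an example of a lossless compressor (not Oracle's implementation):

```python
import zlib

# Hypothetical spatial data: coordinate strings like Intermap's mapping records.
original = b"51.5074,-0.1278;40.7128,-74.0060;" * 5_000
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original             # byte-for-byte identical: no accuracy loss
assert len(compressed) < len(original)  # and the data still takes less space
```

Lossy schemes such as JPEG trade exactness for smaller output; lossless schemes, by definition, do not.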

Rivals such as John Bantleman, CEO of archiving software vendor Clearpace Software Ltd., argue that moving old data into archives will continue to boost database performance more than compressing it, and that doing so isn't much more complicated. Using tools such as Clearpace, users can search and extract data archived outside the database as quickly and conveniently as if the information were stored in it, according to Bantleman.

"A telco might need to maintain its caller data records for years," Bantleman said. "But does it really make sense to keep all of that in your database if regulations only require you to keep access to it for 90 days?" He added that it might seem better "emotionally" to maintain a single data storage environment. "But I think you want to segment the live part of your data for OLTP performance from your highly compressed historical data. These two schemas don't meld well in the same box."
