Oracle Pushes Compression to Scale Databases

Oracle's powerful new HP Oracle Database Machine comes with 168TB of storage, a new method of retrieving data more quickly and intelligently, and -- wait for it -- a $2.33 million price tag.

It's the turbocharged option for the database administrator with money to burn and a need for speed.

But most DBAs don't get to drive in the fast lane -- especially not with IT budgets the way they are. So as a less lavish option for enterprise users, Oracle is touting another approach.

That one involves data compression, which has long been a popular way to save storage space and money. Traditionally, though, the cost has been steep: compressing data and writing it to disk takes gobs of memory and processing power, and even more is needed when the information is later extracted.
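To make that trade-off concrete, here is a minimal sketch in Python using the generic zlib codec -- not Oracle's algorithm, and with invented sample data -- showing how higher compression levels save more space but burn more CPU time on both the write (compress) and read (decompress) paths:

```python
# Generic illustration of the compress-vs-CPU trade-off; the data and
# levels are arbitrary and have nothing to do with Oracle's codec.
import time
import zlib

# Repetitive, table-like bytes stand in for database rows.
data = b"customer_id,order_id,status,amount\n" * 200_000

for level in (1, 6, 9):  # fastest, default, smallest
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    compress_s = time.perf_counter() - start

    start = time.perf_counter()
    zlib.decompress(compressed)
    decompress_s = time.perf_counter() - start

    print(f"level {level}: {len(data) / len(compressed):4.1f}x smaller, "
          f"compress {compress_s * 1000:5.1f} ms, "
          f"decompress {decompress_s * 1000:5.1f} ms")
```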

Now Oracle claims to have solved this thorny problem with a feature it first introduced in its Oracle 11g database, which was released last year.

By using the Advanced Compression option in 11g, Oracle says, DBAs can shrink their databases by as much as three-fourths and boost read/write speeds by three to four times, whether they're running a data warehouse or a transaction-processing database -- all while incurring little in the way of processor utilization penalties.
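As a back-of-the-envelope check on those numbers (the figures below are hypothetical, chosen only to match the claimed ratios), shrinking a database by three-fourths means each physical disk read returns roughly four times as much row data, which is the basic mechanics behind a three-to-four-times read speedup:

```python
# Hypothetical figures illustrating the article's claimed ratios;
# this is arithmetic, not a benchmark of Oracle Advanced Compression.

uncompressed_tb = 100.0   # assumed database size before compression
reduction = 0.75          # "as much as three-fourths" smaller

compressed_tb = uncompressed_tb * (1 - reduction)
io_gain = uncompressed_tb / compressed_tb  # data returned per disk read

print(f"{uncompressed_tb:.0f} TB shrinks to {compressed_tb:.0f} TB on disk")
print(f"each physical read now covers ~{io_gain:.0f}x as much row data")
```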

Oracle claims the storage and speed gains are so dramatic that companies using Advanced Compression will no longer need to move old, seldom-used or unused data to archives. Instead, they can keep it all in the same production database, even as the amount of data stored there grows into the hundreds of terabytes or even the petabyte range.

"This works completely transparently to your applications," Juan Loaiza, Oracle's senior vice president of systems technologies, said during a session at the company's OpenWorld conference in San Francisco last week. "It increases CPU usage by just 5%, while cutting your [database] table sizes by half."

Oracle says it's responding to the demands of enterprise customers with fast-growing databases. "The envelope is always being pushed," Loaiza said. "Unstructured data is growing very quickly. We expect someone to be running a one-petabyte, 1,000-CPU-core database by 2010."

It's also responding to the fact that storage technology, one of the keys to database performance, has made little progress from a speed standpoint, according to Loaiza. "Disks are getting bigger, but they're not getting a whole lot faster," he said.
