Oracle CEO Steps Down Amid Titanic Shift

I usually don’t write about news, or in this case olds, but the technology here is amazing enough to merit mention. As most of the tech world knows, Oracle CEO Larry Ellison stepped down from his post earlier this week, after a 37-year career leading the company to become one of the dominant players in the IT industry. With all of the publicity and commotion this move generated, it has completely overshadowed an even more titanic shift happening in the database world: Oracle’s June launch of its In-Memory Database for 12c.

Although in-memory databases have been around for a long time – databases have used large stores of memory for speed optimization since at least the ’90s – the scale of the new In-Memory database is groundbreaking. When running on SPARC M6-32 machines, the database can utilize 32 TB of memory. That’s 32 terabytes – the equivalent of the memory in roughly 8,000 standard PCs. This scale of memory enables enterprise database administrators to keep the entire database in memory and avoid caching data to disk, the largest bottleneck in relational databases. No tables or data are stored on disk; instead, the disk is used only for transaction logs and offline copies in case of a power outage or machine reboot.
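To give a sense of how this looks in practice, here is a minimal sketch of turning on the 12c In-Memory column store from Python. The connection details, sizing, and SALES table are hypothetical, and the statements follow Oracle’s documented INMEMORY syntax; treat it as an illustration rather than a complete setup guide.

```python
# Minimal sketch: enabling Oracle 12c In-Memory from Python via cx_Oracle.
# Assumptions: Oracle 12.1.0.2+ with the In-Memory option licensed, the cx_Oracle
# driver installed, and hypothetical connection details and SALES table.
import cx_Oracle

conn = cx_Oracle.connect("admin", "secret", "dbhost/orcl")
cur = conn.cursor()

# Reserve memory for the In-Memory column store (takes effect after a restart).
cur.execute("ALTER SYSTEM SET inmemory_size = 500G SCOPE=SPFILE")

# Flag a table for population into memory; the disk copy remains for durability.
cur.execute("ALTER TABLE sales INMEMORY PRIORITY HIGH")

# Check which segments have actually been populated into memory.
cur.execute("SELECT segment_name, populate_status FROM v$im_segments")
for segment, status in cur:
    print(segment, status)
```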

The power of the new In-Memory database has increased the speed of some applications by over 1000x. According to tests run by Oracle, an order management job in the complex JD Edwards EnterpriseOne application that previously took 22.5 minutes now completes in less than one second. This thousand-fold query performance increase suddenly makes database analytics easy to run, even on production systems. Business analysts could perform analytics on live production data instead of requiring a separate data warehouse for many database operations.
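As an illustration of that shift, the sketch below times the kind of ad-hoc aggregation that becomes practical to run directly against a production schema once the data is memory-resident. The ORDERS table, its columns, and the connection details are assumptions for the example, not anything from Oracle’s tests.

```python
# Illustrative sketch: timing a hypothetical analytic query run directly
# against a production Oracle schema. Table, columns, and credentials are
# made up for the example.
import time
import cx_Oracle

conn = cx_Oracle.connect("analyst", "secret", "dbhost/orcl")
cur = conn.cursor()

query = """
    SELECT customer_id, SUM(order_total)
    FROM orders
    WHERE order_date >= DATE '2014-01-01'
    GROUP BY customer_id
"""

start = time.time()
cur.execute(query)
rows = cur.fetchall()
print(f"{len(rows)} customer rows aggregated in {time.time() - start:.2f} s")
```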

The speed of in-memory databases could also have far-reaching impacts on other parts of the database world. Traditional big-data NoSQL databases might find themselves once again pushed to the sidelines, as relational databases could soon rival or exceed their performance in large-scale web applications. Systems such as Facebook, which currently require large clusters of machines to provide their services, could someday soon be handled by even a single machine.

Although a 32 TB computer might seem revolutionary, looking back on computer history, it should not be surprising. Thanks to Moore’s Law, computers have been regularly doubling in processing power since Intel’s first chips, and memory has kept pace, with lower cost and faster speeds each year.
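As a back-of-the-envelope illustration (the baseline and doubling interval below are assumptions, not figures from the article), a short calculation shows how quickly exponential growth covers that distance:

```python
import math

# Back-of-the-envelope sketch with illustrative figures: how many doublings
# separate a hypothetical 1 GB mid-'90s server from 32 TB, and how long that
# takes if memory capacity doubles roughly every 18 months.
start_gb = 1
target_gb = 32 * 1024                          # 32 TB expressed in GB
doublings = math.log2(target_gb / start_gb)    # 15 doublings
years = doublings * 1.5
print(f"{doublings:.0f} doublings, roughly {years:.0f} years")
```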

The choice between NoSQL and relational databases, on the other hand, has largely been driven by hardware constraints. Whenever user needs exceeded hardware capabilities, developers moved to NoSQL databases. As hardware improved and grew to meet the challenge, developers moved back to relational databases for their flexibility and rich feature set.

With all the excitement about the new In-Memory databases, it will still be some time before the technology becomes truly mainstream. Although the sales literature describes it as “low-cost”, a 32 TB system still comes with a $2,500,000+ price tag. Until the technology matures to more moderate pricing, NoSQL databases are here to stay.

Written by Andrew Palczewski

About the Author
Andrew Palczewski is CEO of apHarmony, a Chicago software development company. He holds a Master's degree in Computer Engineering from the University of Illinois at Urbana-Champaign and has over ten years' experience managing software development projects.