Oracle’s Timeline, Copious Benchmarks And Internal Deployments Prove Exadata Is The World’s First (Best?) OLTP Machine – Part I

I recently took a peek at this online, interactive history of Oracle Corporation. When I got to the year 2008, I was surprised to see no mention of the production release of Exadata–the HP Oracle Database Machine. The first release of Exadata occurred in September 2008.

Once I advanced to 2009, however, I found mention of Exadata but I also found a couple of errors:

  • The text says “Sun Oracle Database Machine” yet the photograph is that of an HP Oracle Database Machine (minor, if not comical, error)
  • The text says Exadata “wins benchmarks against key competitors” (major error, unbelievably so)

What’s First, Bad News or Good News?

Bad News

The only benchmark Exadata has ever been used in was this 1TB-scale TPC-H in 2009 with HP blades. Be aware, as I pointed out in this blog post, that that particular TPC-H was an in-memory Parallel Query benchmark. Exadata features were not used; Exadata was a simple block storage device. The table and index scans were conducted against cached blocks in the Oracle SGAs configured in the 64 nodes of the cluster. Exadata served as nothing more than persistent storage for the benchmark.

Don’t get me wrong, I’m not saying there was no physical I/O. The database was loaded as a timed test (per the TPC-H specification), which took 142 minutes, and the first few moments of query processing required physical I/O so the data could be pulled up into the aggregate SGAs. The benchmark also requires updates. However, these ancillary I/O operations did not lean on Exadata features, nor are they comparable to a TPC-H that centers on physical I/O.

So, could using Exadata in an in-memory Parallel Query benchmark be classified as winning “benchmarks against key competitors?” Surely not, but I’m willing to hear from dissenters on that.

Now that the bad news is out of the way I’ll get to what I’m actually blogging about: the good news.
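To put the load-phase I/O in perspective, here is a back-of-envelope sketch. The 1TB scale, 142-minute load time, and 64-node count come from the result discussed above; the decimal-megabyte convention is my assumption:

```python
# Back-of-envelope: aggregate I/O rate implied by loading 1TB in
# 142 minutes across a 64-node cluster (figures from the TPC-H above).
size_mb = 1_000_000          # 1TB in decimal megabytes (my assumption)
load_seconds = 142 * 60

aggregate_mb_per_s = size_mb / load_seconds
per_node_mb_per_s = aggregate_mb_per_s / 64

print(f"aggregate load rate: {aggregate_mb_per_s:.0f} MB/s")  # ~117 MB/s
print(f"per node:            {per_node_mb_per_s:.1f} MB/s")   # ~1.8 MB/s
```

Roughly 117MB/s aggregate, under 2MB/s per node. Rates that small are hardly the stuff of a storage-centric benchmark, which is the point.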

Good News

The good news I’d like to point out from the screenshot (below) of Oracle’s interactive history is that it spares us the torment of referring to the Sun Oracle Exadata Machine as the First Database Machine for OLTP, as touted in this press release from that time frame. A system that offers 60-fold more capacity for random reads than random writes cannot possibly be mistaken for a purpose-built OLTP machine. I’m delighted that the screenshot below honestly represents the design center for Exadata, which is DW/BI. For that reason, Exadata features have nothing at all to do with OLTP. That’s a good reason the term OLTP is not seen in that screenshot. That is good news.

OLTP does not trigger Smart Scans, thus no offload processing (filtration, projection, storage indexes, etc). Moreover, Hybrid Columnar Compression has nothing to do with OLTP, except, perhaps, in an information lifecycle management hierarchy. So, there’s the good news: Exadata wasn’t an OLTP machine in Oracle’s timeline and it still is not an OLTP machine. No, Oracle was quite right not to put the “First OLTP Machine” lark into that timeline. After all, 2009 is quite close to 40 years after the true first OLTP machine, which was CICS/IMS. I don’t understand the compulsion to make outlandish claims.

Bad News

Yes, more bad news. Oracle has never published an Exadata benchmark result, not even with their own benchmarks. That’s right: Oracle has a long history of publishing Oracle Applications Standard Benchmark results–but no Exadata results.

I’ve gone on the record as siding with Oracle for not publishing TPC benchmarks with Exadata, for many reasons. However, I cannot think of any acceptable excuse for why Oracle would pitch Exadata to you as best for Oracle Applications when a) there are no OLTP features in Exadata*, b) Oracle Corporation does not use Exadata for their own ERP, and c) there is no benchmark proof of Exadata OLTP/ERP capabilities.

Closing Thoughts

Given all I’ve just said, why is it (common knowledge) that the majority of Exadata units shipping to customers are quarter-racks for non-DW/BI use cases? Has Exadata simply become the modern replacement for “[...] nobody ever got fired for buying [...]?” Is that how platforms are chosen these days? How did we get to this point of lowered expectations?

Enjoy the screenshot of memory lane, wrong photo, bad, good and all:

* I am aware of the Exadata Smart Flash Log feature.

20 Responses to “Oracle’s Timeline, Copious Benchmarks And Internal Deployments Prove Exadata Is The World’s First (Best?) OLTP Machine – Part I”


  1. Noons May 2, 2012 at 5:01 pm

    Welcome to history re-writing, comrade…

  2. Oracle/Exadata Expert May 3, 2012 at 2:29 pm

    You give reference to a TPC-H that used Exadata and then say no Exadata features were used. How can you say such crazy things? You obviously don’t know what you are talking about.

    • kevinclosson May 3, 2012 at 3:20 pm

      So you want me to prove there were no Exadata features being used in the TPC-H I referred to, is that correct? Also, am I tasked as such because you know for a fact that said TPC-H did use/benefit from Exadata features? If yes, which features do you think were most beneficial to an in-memory Parallel Query result?

      BTW, I already have your answer. I just need to make sure you know what question you are asking.

    • Clock$peedy June 14, 2012 at 6:15 am

      Dear Mr. “Oracle/Exadata Expert”, don’t you think you need to have some generalized database expertise as a prerequisite to claiming to be an Oracle expert? Let me share some with you.

      When you keep all of your database in DRAM, then there’s no need to do disk IOPS (except for checkpointing DRAM contents). Remember Gene Amdahl said “the best IO is the one you don’t have to do”?

      This is what IMDB (in-memory database) is all about. Go ask Larry, he’ll tell you it’s all the rage now.

      What Kevin said should have been easily comprehensible to you. Since it wasn’t, allow me to paraphrase.

      When you have a 1TB dataset, and you have 2TB of DRAM, even SQL Server is smart enough to load the ENTIRE DATASET into DRAM. When the entire dataset is in DRAM, you are not running queries against data on disk, you are running queries on data in DRAM. Hence, no disk IO, hence no need (nor even any opportunity) to “accelerate” the disk querying process. Hence, no Exadata features were used.

      Got it now?

      Since I’ve done you the courtesy of educating you, would you mind now commenting on why Oracle refuses to benchmark Exadata?

  3. John McCann May 4, 2012 at 7:34 am

    Hi,

    We have very heavy batch processing on our system. Would that change your opinion about whether Exadata is a good choice on which to implement our EBS?

    • kevinclosson May 4, 2012 at 8:05 am

      Hi John,

      That is an excellent question.

      As we know, there is no such thing as a pure OLTP system. These systems generally have batch work and reporting, as is the case with EBS. This aspect of the topic deserves attention. I’m aware that Oracle sales are calling out the non-OLTP aspects of ERP systems in their Exadata sales motions. It’s probably good for folks to hear both sides of that story.

      I actually have the material to address your question prepared but will need to make it a follow-on blog post. To that end, I just renamed this “Part I.” I will post Part II, and (hopefully) address your question at that time. After reading Part II, please feel free to drill in for more clarification if needed.

      Thanks for visiting my blog.

  4. Amir Hameed May 9, 2012 at 7:18 am

    Hi Kevin,
    It is my understanding that Exadata is more suited for DW/BI types of activities, which are read-intensive. However, these data warehouses get loaded with tons of data in their fact tables on a daily basis with write-intensive operations. So, if Exadata is susceptible to I/O saturation with writes, then wouldn’t it have this issue during data loads?

    Thanks
    Amir

    • kevinclosson May 9, 2012 at 7:37 am

      Hi Amir,

      DW/BI I/O patterns are not patently “read intensive.” They are large operations in general and not latency-sensitive. That is the opposite of OLTP/ERP. But, on to your question.

      Bulk data loading operations generate large streaming writes. A single 6Gb SAS drive can sustain about 1.4TB/hour with such an I/O profile.

      Remember, terabytes per hour converts to megabytes per second: 1TB/h is roughly 278MB/s. Even with normal redundancy ASM mirroring, the write payload is just under 600MB/s at 1TB/h. Since the vast majority of Exadata sales are quarter-rack, I’ll put 1TB/h into perspective from that viewpoint. A quarter-rack X2 Exadata Database Machine sustains only about 16MB/s per disk while ingesting bulk-loaded data at a rate of 1TB/h.

      That’s noise.
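The arithmetic in that reply can be sketched as follows. The 36-disk count for a quarter-rack (3 storage cells x 12 drives) is my assumption; the post only states the per-disk result:

```python
# Sketch of the 1TB/h bulk-load arithmetic from the reply above.
tb_per_hour = 1.0
ingest_mb_per_s = tb_per_hour * 1_000_000 / 3600   # ~278 MB/s of new data

mirrored_mb_per_s = ingest_mb_per_s * 2            # ASM normal redundancy
                                                   # doubles the write payload
disks = 36                                         # assumed quarter-rack drive count
per_disk_mb_per_s = mirrored_mb_per_s / disks

print(f"write payload: {mirrored_mb_per_s:.0f} MB/s")  # just under 600 MB/s
print(f"per disk:      {per_disk_mb_per_s:.1f} MB/s")  # ~15 MB/s
```

Roughly 556MB/s across the rack and about 15MB/s per drive, consistent with the figures quoted in the reply.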

  5. rsiz December 19, 2013 at 5:28 am

    Kevin, as you know, I’m a big fan. Sometimes I get just a trifle lost in the details and humor of your irony (which always seems technically accurate and actually funny [perhaps only to a few of us]) so that I miss the true point of the overall post.

    So please do correct my synopsis if I’m wrong:

    1) Oracle’s Exadata machines (probably all of them) have WAY MORE (about 60x for the model discussed) read capability than write capability.
    2) Even though the write capability is sufficient to handle very many OLTP workloads, it seems willfully misleading to call a machine built with an Exadata architecture an “OLTP Machine” because:
    a) The benchmark cited has pretty much nothing to do with the claim
    b) You inherently have to pay for a whole slew of read performance features bundled into the box that are not needed to achieve OLTP functionality.

    So make no claim that an Exadata won’t accomplish the requirements of an OLTP system, but rather that it seems like an inappropriate application of the (expensive) features to a target task.

    Did I get that about right?

    Now, time moves on and Oracle is using Exadata and a few other “engineered” systems for the in house ERP. The power of the combined systems so far exceeds the requirement that they don’t even need to use application affinity to overcome the RACTAX ™ any more. So that bit of your post about not using it for their own ERP has become time inaccurate (but was true at the time of the post).

    Whether a system assembled similar to Oracle’s would ever be cost effective without free software (I don’t know whether Oracle shuffles money from one account to another for this – and I suppose they and some others effectively have a worldwide site license, so differential license costs do not apply) and the value to marketing as a reference site, I don’t know.

    I tend to think it would be difficult to justify the expenditure versus other plausible and reliable ways to achieve the goal. I do think putting Oracle’s ERP load on this engineered system complex has removed any question that an impressive OLTP load can be executed on Exadata.

    That does not contradict what I *think* is the point of your post.

    • kevinclosson December 20, 2013 at 12:41 pm

      >So make no claim that an Exadata won’t accomplish the requirements of an OLTP system, but rather that it seems like an inappropriate application of the (expensive) features to a target task.

      You’ve pretty much got it. Exadata is full of features that require a full table scan. I don’t deny there are full table scans in OLTP but how many are large (more than a single multiblock read), how many are transactional? Smart Scan is not aimed at OLTP. Oracle will tell you it is and you are more than welcome to believe them.

      So, yes, Oracle finally got rid of a 7 year old EMC Symmetrix and deployed SPARC stuff for their ERP. That was years after they started telling you that Exadata is for OLTP. Study the chronology. A bit of a bait and switch in my book. Good for gander, not goose.

      Yes, the later Exadata models have closed the gap on the disparity between read and write bandwidth, but they still have a massive cache-miss cliff in the X4 (2.4M read IOPS from flash but only 50K read IOPS from HDD). Performance-minded people care about such things.
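To illustrate that cliff, here is a toy model of my own construction using the X4 figures quoted above: whichever path (flash hits or HDD misses) saturates first caps sustainable read throughput.

```python
# Toy model of the cache-miss cliff: total read IOPS are capped by
# whichever path (flash hits or HDD misses) saturates first.
FLASH_RIOPS = 2_400_000   # X4 flash read IOPS (quoted above)
HDD_RIOPS = 50_000        # X4 HDD read IOPS (quoted above)

def max_riops(hit_ratio):
    # At hit ratio h, throughput T must satisfy both
    # T*h <= FLASH_RIOPS and T*(1-h) <= HDD_RIOPS.
    return min(FLASH_RIOPS / hit_ratio, HDD_RIOPS / (1 - hit_ratio))

for h in (0.999, 0.99, 0.95, 0.90):
    print(f"flash hit ratio {h:.3f}: ~{max_riops(h):,.0f} IOPS")
```

Between a 99% and 95% flash hit ratio the ceiling drops from roughly 2.4M to 1M IOPS, and at 90% it is down to 500K. That is the cliff.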

      I’ve grown to accept the fact that Exadata customers do not buy Exadata for technical reasons. We are 4 generations into Intel QPI, which is the core technology that obviates the historical need to scale out or offload caused by front-side bus bottlenecks. Even 2S Sandy Bridge servers can handily process 6GB/s from any storage, be it FCP or IB (e.g., iSER). There are 4S EP and EX servers, and there are soon to be Ivy-EX servers that scale very nicely and have 12TB DRAM. What we know in IT has a shelf life. I was well aware of the era when Exadata solved problems, and I know now that it is still solving the same problems–in spite of the fact that those problems no longer exist. That’s my opinion based on my knowledge, it’s my blog, and that’s what I say.

      To close, I’ll share a quote from an internal email. EMC is in nearly all (if not all) accounts where Oracle pushes Exadata. I hear about the gore. This is a snippet of honesty:

      [cust] is doing an on-site Exadata POC, after three weeks Oracle has not been successful in standing up the Exadata. Also, the on-site Oracle consultant just informed [cust] that HCC, which Oracle touted as the competitive tip of the spear, will provide no performance benefit in their environment.


  1. Log Buffer #270, A Carnival of the Vanities for DBAs | The Pythian Blog Trackback on May 3, 2012 at 11:01 pm
  2. Oracle’s Timeline, Copious Benchmarks And Internal Deployments Prove Exadata Is The World’s First (Best?) OLTP Machine – Part 1.5 « Kevin Closson's Blog: Platforms, Databases and Storage Trackback on May 6, 2012 at 8:43 pm

