BLOG UPDATE 2013.05.03: NOTICE! SLOB 2 is now available. Please follow this link.
BLOG UPDATE 2013.03.21: See flashdba’s SLOB testing how-to/tips page here.
BLOG UPDATE 2013.02.15: See Brian Pardy’s SLOB usage notes here.
BLOG UPDATE 2012.05.12: For convenience I’ve made the SLOB README available in PDF form here. The README adds more introductory information to what I’ve written in this original blog entry.
BLOG UPDATE 2012.06.29: Karl Arao has been putting SLOB to good use. I urge you to visit: Karl’s set up “cheat sheet.”
BLOG UPDATE 2012.06.11: A Simple init.ora sufficient for SLOB physical I/O model testing: init.ora
BLOG UPDATE 2012.08.15: I’m behind on folding in improvements to the SLOB Kit. Just no time for that these days. That’s sad. However, Pythian’s Yury has done a really good job at documenting his testing and some kit improvements you should investigate (improvements are just that, not fixes). See: Yury’s Page On the Matter
We’ve all been there. You’re facing the need to assess Oracle random physical I/O capability on a given platform in preparation for OLTP/ERP style workloads. Perhaps the storage team has assured you of ample bandwidth for both high-throughput and high I/O operations per second (IOPS). But you want to be sure and measure for yourself so off you go looking for the right test kit.
There is no shortage of transactional benchmark kits such as Hammerora, Dominic Giles’ SwingBench, and cost options such as Benchmark Factory. These are all good kits; I’ve used each of them more than once over the years. The problem is that they do not fit the need posed in the previous paragraph. These kits are transactional, so the question becomes: do you want to prove Oracle scales those applications on your hardware, or do you want to test IOPS capacity? You want to test IOPS. So now what?
What About Orion?
The Orion tool has long been a standard for testing Oracle block-sized I/O via the same I/O libraries linked into the Oracle server. Orion is a helpful tool, but it can lead to a false sense of security. Allow me to explain. Orion uses no measurable processor cycles to do its work. It simply shovels I/O requests into the kernel, and the kernel (driver) clobbers the same in-memory I/O buffers with the read requests again and again. Orion does not care about the contents of its I/O buffers, and therein lies its weakness.
At one end of the spectrum we have fully transactional, application-like test kits (e.g., SwingBench); at the other, low-level I/O generators like Orion. What’s really needed is something right in the middle, and I propose that something is SLOB: the Silly Little Oracle Benchmark.
What’s In A Name?
SLOB stands for Silly Little Oracle Benchmark. SLOB, however, is neither a benchmark nor silly. It is rather small and simple though. I need to point out that by force of habit I’ll refer to SLOB with terms like benchmark and workload interchangeably. SLOB aims to fill the gap between Orion and full function transactional benchmarks. SLOB possesses the following characteristics:
- SLOB supports testing Oracle logical read (SGA buffer gets) scaling
- SLOB supports testing physical random single-block reads (db file sequential read)
- SLOB supports testing random single block writes (DBWR flushing capacity)
- SLOB supports testing extreme REDO logging I/O
- SLOB consists of simple PL/SQL
- SLOB is entirely free of all application contention
Yes, SLOB is free of application contention yet it is an SGA-intensive workload kit. You might ask why this is important. If you want to test your I/O subsystem with genuine Oracle SGA-buffered physical I/O it is best to not combine that with application contention.
SLOB is also great for logical read scalability testing which is very important, for one simple reason: It is difficult to scale physical I/O if the platform can’t scale logical I/O. Oracle SGA physical I/O is prefaced by a cache miss and, quite honestly, not all platforms can scale cache misses. Additionally, cache misses cross paths with cache hits. So, it is helpful to use SLOB to test your platform’s ability to scale Oracle Database logical I/O.
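One simple way to watch logical versus physical I/O while SLOB runs is to sample the cumulative counters in v$sysstat before and after a run and divide the deltas by elapsed seconds. The statistic names below are standard Oracle names; the sampling approach itself is just a sketch of mine, not part of the kit.

```sql
-- Illustrative only: capture these counters before and after a SLOB run,
-- then compute (after - before) / elapsed_seconds for per-second rates.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('session logical reads', 'physical reads');
```

An AWR (or Statspack) report pair around the run gives you the same numbers with less fuss.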
What’s In The Kit?
There are no benchmark results included. The kit does, however, include:
- README files. A lot of README files. I recommend starting with ~/README-FIRST.
- A simple database creation kit. SLOB requires very little by way of database resources. I think the best approach to testing SLOB is to use the simple database creation kit under ~/misc/create_database_kit; the directory contains a README to help you on your way. I generally recommend using this kit to create a small database because it uses Oracle Managed Files: you simply point it at the ASM diskgroup or file system you want to test. The entire database will need no more than 10 gigabytes.
- An IPC semaphore based trigger kit. I don’t really need to point out much about this simple IPC trigger kit other than to draw your attention to the fact that the kit does require permissions to create a semaphore set with a single semaphore. The README-FIRST file details what you need to do to have a functional trigger.
- The workload scripts. The setup script is aptly named setup.sh and to run the workload you will use runit.sh. These scripts are covered in README-FIRST.
- Init.ora files. You’ll find test.ora under ~/misc/sample_data. The purpose of this init.ora is to show just how little tweaking Oracle Database requires to scale physical I/O and logical reads. The directory is named sample_data because I originally intended to offer pairs of init.ora and AWR reports so folks could see what different systems I’ve tested, what performance numbers I’ve seen and the recipe I used (the combination of connected pseudo users and init.ora parameters). The name of the directory remains but I pulled the content so as to not excite Oracle’s lawyers.
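To give a feel for the flow, a session looks roughly like the following. The argument counts here are placeholders and the authoritative syntax is in README-FIRST; treat this as an illustrative transcript, not documentation.

```
$ ./setup.sh IOPS 32        # load 32 pseudo-user schemas into tablespace IOPS
$ ./runit.sh 0 32           # run with 0 write sessions and 32 read sessions
```

Varying the two runit.sh session counts is how you move between the read, write, and mixed models described below.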
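On Linux you can sanity-check up front that the kernel will grant the trigger kit its semaphore set. This check is my suggestion, not part of the kit, and the /proc path is Linux-specific.

```shell
# /proc/sys/kernel/sem holds four fields: SEMMSL SEMMNS SEMOPM SEMMNI
# (max semaphores per set, system-wide max, max ops per semop call, max sets).
read SEMMSL SEMMNS SEMOPM SEMMNI < /proc/sys/kernel/sem
echo "max semaphores per set: $SEMMSL  max semaphore sets: $SEMMNI"

# List semaphore sets already allocated, if the ipcs utility is installed.
command -v ipcs >/dev/null && ipcs -s
```

SLOB needs only a single set with a single semaphore, so any sane default configuration will do; this is purely a permissions and limits sanity check.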
The size of the SGA buffer pool is the single knob to twist to select which workload profile you’ll generate. For instance, if you wish to have nothing but random single-block reads, simply run with the smallest db_cache_size your system will allow you to configure (see README-FIRST for more on this matter). The opposite is what’s needed for logical I/O testing: set db_cache_size to about 4GB, perform a warm-up run, and from that point on there will be no physical I/O. Drive up the number of connected pseudo users and you’ll observe logical I/O scale up, bounded only by how scalable your platform is. The other models involve writes. If you want to drive a tremendous amount of REDO writes, again configure a large db_cache_size and execute runit.sh with only write sessions. From there you can reduce db_cache_size while maintaining the write sessions, which will drive DBWR into a frenzy.
Who Has Used The Kit?
Several folks from the OakTable Network and other friends. Perhaps they’ll chime in on their findings and what they have learned about their platform as a result of testing with SLOB.
What You Should Expect From SLOB
I/O, lots of it! If you happen to be an Exadata user you’ll see roughly 190,000 physical read IOPS (from Exadata Smart Flash Cache) generated by each RAC instance in your configuration. Oracle does not misrepresent the truth in its datasheets regarding Exadata random cache reads. If you have a full-rack Exadata you too can now study the system’s characteristics under an approximated 1.5 million read IOPS workload. Testing Exadata with the write-intensive SLOB models will reveal DBWR and LGWR flushing capacities. If you have conventional storage you’ll drive the maximum it will sustain.
Where Is The Kit?
NOTICE! SLOB 2 is now available. Please follow this link.
For historical purposes I’ll leave the version uploaded to the OakTable.net website at the following URL. Simply extract the gzipped tar archive into a working directory and see README-FIRST.
Simple init.ora parameters fit for high-end read-intensive SLOB (e.g., runit.sh 0 N)
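A minimal sketch of what such an init.ora might contain follows. The values are illustrative assumptions on my part; the kit’s sample test.ora under ~/misc/sample_data is the authoritative example.

```
db_name=SLOB
db_block_size=8192
db_cache_size=64M              # tiny cache forces random single-block reads
processes=500                  # headroom for the connected pseudo users
filesystemio_options=setall    # direct and asynchronous I/O where supported
```

The point, as with the sample file in the kit, is how little tuning is required: a tiny buffer cache and sane I/O options are essentially the whole recipe.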