Oracle Database 10g Automatic Memory Management (AMM) might have been this smart, but I don’t know. I’m playing a bit with Oracle Database 11g AMM and find that so far it is a pretty smart cookie, at least in the sense I’m blogging about in this entry.
Automatic So Long As You Do Everything Correctly
One thing I always hate is a feature that causes the server to fail outright, or run in some degraded state, if the configuration is not perfect for the feature. In my mind, if something is called automatic it should be automatic. Like I say, don’t make me kneel on peach pits to benefit from an automatic feature. So how about a quick test?
I’ve got 11g x86 on a ProLiant DL380 fitted with 4GB of RAM. As I mention in this blog entry, Oracle uses memory-mapped files in /dev/shm for the SGA when you use 11g AMM. On a 4GB system, the default /dev/shm size is about half of physical memory. I want to set up a larger, custom size and see whether 11g AMM will try to cram 10lbs of rocks into a 5lb bag.
# umount /dev/shm
# mount -t tmpfs shmfs -o size=3584m /dev/shm
# df /dev/shm
Filesystem           1K-blocks      Used Available Use% Mounted on
shmfs                  3670016         0   3670016   0% /dev/shm
There, I now have 3.5GB of space for /dev/shm. I don’t actually want to use that much, because leaving Linux only 0.5GB for the kernel would likely cause some chaos. I want to see if 11g AMM is smart enough to allocate just enough to fill the virtual address space of the Oracle processes. So I set AMM larger than I know will fit in the address space of a 32-bit Linux process:
SQL> !grep MEMORY amm.ora
MEMORY_TARGET=3500M
MEMORY_MAX_TARGET=3500M
So what happened? Well, I haven’t relocated the SGA, so I shouldn’t expect more than about 2GB, and I wouldn’t expect more than about 1.7GB for buffers. Did AMM try to over-allocate? Did it get nervous and under-allocate? Did it tell me to help it be more automatic through some configuration task I need to perform? Let’s see:
SQL> startup pfile=./amm.ora
ORACLE instance started.

Total System Global Area 2276634624 bytes
Fixed Size                  1300068 bytes
Variable Size             570427804 bytes
Database Buffers         1694498816 bytes
Redo Buffers               10407936 bytes
Database mounted.
Database opened.
Nice, AMM was smart enough to pile in about 1.6GB of buffers and the appropriate amount of variable region to go with it. A look at DBWR’s address space shows that the first 16MB /dev/shm granule file (my term) was mapped in at virtual address 512MB. The last 16MB granule fit in at 2688MB, so the top of that last granule is at 2704MB. If I subtract the sum of 2276634624 bytes (show sga) plus 512MB (the attach address) from that 2704MB, I’m left with a little over 20MB, which is most likely Oracle rounding up for page alignment and other purposes.
# pmap `pgrep -f dbw` | grep 'dev.shm' | head -1
20000000     4K r-xs-  /dev/shm/ora_bench1_1441803_0
# pmap `pgrep -f dbw` | grep 'dev.shm' | tail -1
a8000000 16384K rwxs-  /dev/shm/ora_bench1_1507341_7
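As a quick sanity check of that arithmetic, here is a sketch using the values above (0x20000000 = 512MB is the attach address from the pmap output, and 0xa8000000 plus the 16MB granule puts the top at 2704MB):

```shell
# Back-of-envelope check: address-space span of the granules vs. reported SGA.
ATTACH_MB=512          # 0x20000000, where the first granule file is mapped
TOP_MB=2704            # 0xa8000000 + 16MB, top of the last granule
SGA_BYTES=2276634624   # Total System Global Area from the startup banner

SPAN_MB=$(( TOP_MB - ATTACH_MB ))
SGA_MB=$(( SGA_BYTES / 1048576 ))
echo "granule span: ${SPAN_MB} MB, SGA: ${SGA_MB} MB, slack: $(( SPAN_MB - SGA_MB )) MB"
# prints: granule span: 2192 MB, SGA: 2171 MB, slack: 21 MB
```

The 21MB of slack is the "little over 20MB" attributed to page-alignment rounding.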
Summary
I don’t really expect folks to be running Oracle in production on 32-bit Linux servers in modern times, but I was pleasantly surprised to see 11g AMM is smart enough to poke around until it fills the address space. I asked for more than it could give me (MEMORY_TARGET=3500M), and instead of failing and suggesting I request less automatic memory, it did the right thing. I like that.
This was a huge help!
I’m trying to give Oracle 16GB of memory.
I had to allocate 16.5GB in /dev/shm for it to work.
Thanks!
It is always interesting to see someone playing with Oracle. Thank you for posting on this type of topic.
Has anyone seen swapping when using AMM on 64-bit Linux? I have 8GB total memory, 7GB assigned to /dev/shm, and MEMORY_TARGET and MEMORY_MAX_TARGET set to 5.3GB, but I see swapping even though there is lots of memory free.
Has anyone faced the same thing?
Hi Krishnan,
We need more info. What is the hardware platform? How many connections to the database? What is the workload (PQ?)? What do /proc/meminfo and /proc/slabinfo show during the swapping incidents?
Without knowing how much PGA is going to get used (and other such memory usage), I think it might be a bit of a squeeze to allocate 66% of physical memory to the SGA. You have to run a kernel, and there are page tables, process-private memory, and stack (in aggregate). I’ve done 75% of physical memory before, but only under very well-understood lab conditions. That’s just my experience.
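A rough sketch of that sizing sanity check, reading physical RAM from /proc/meminfo (the 66% ratio here is just the figure under discussion, not a recommendation):

```shell
# Compare a proposed MEMORY_TARGET against a ceiling derived from MemTotal,
# leaving headroom for the kernel, page tables, and process-private memory.
MEM_KB=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
CEILING_MB=$(( MEM_KB * 66 / 100 / 1024 ))
echo "MemTotal: $(( MEM_KB / 1024 )) MB; 66% ceiling: ${CEILING_MB} MB"
```

On an 8GB box this yields a ceiling of roughly 5.4GB, close to the 5.3GB MEMORY_TARGET in the question above.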
I have the same problem with swapping.
Details:
Oracle RAC 11g version 11.1.0.7
Linux CentOS x86_64
8GB RAM per instance
allocated 6GB for /dev/shm
swapping grows up to 3GB; shutting down the instance releases the swap
I opened a ticket with Oracle; they insist it’s the OS.
One way to solve the problem is to move away from AMM and set
the attribute LOCK_SGA=TRUE.
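That workaround, sketched as hedged init parameters (the 6G values simply mirror this comment’s /dev/shm allocation; note that LOCK_SGA also requires the oracle OS user to have a memlock limit at least as large as the SGA):

```
# Abandon AMM (no MEMORY_TARGET); use ASMM with the SGA pinned in RAM.
SGA_TARGET=6G
SGA_MAX_SIZE=6G
LOCK_SGA=TRUE
```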
I also found a bug report at Red Hat:
https://bugzilla.redhat.com/show_bug.cgi?id=160033
The swapping issue on Linux is very common (I still haven’t seen an 11g with AMM that is not swapping some of the SGA).
I speculate it is a problem related to the LRU mechanism that swaps the contents of tmpfs in and out on Linux, but I don’t have enough low-level systems knowledge to make it a hypothesis; it’s just speculation on my part.
A workaround is to use ramfs (which doesn’t allow swapping) instead of tmpfs, but I don’t know whether Oracle will support it as the filesystem for the /dev/shm mount point with AMM.
I’ve got an SR opened with Oracle for that issue. If I get some clarifications from it, I promise I’ll reply to this thread 😉
Cheers
As I thought, MetaLink note 749851.1 says ramfs is *not* supported for AMM.
Regards
Production 11.2.0.2.1 machine, plenty of memory and plenty of swap.
25GB given to /dev/shm, of which only 12GB is used by Oracle. Swap usage is zero to begin with.
We have an Oracle backup via RMAN running to a local ext3 filesystem, with filesystemio_options=none, so the backups are definitely using the FS buffer cache. Below is what I observed:
FS buffer cache fills up most of the free memory, which is quite normal on Linux.
Reading through several KB articles on Red Hat, it appears Red Hat does not always free the FS buffer cache when there is memory pressure. Several tunables determine the tendency to free the FS buffer cache; vm.swappiness is one of them, and its default is 60.
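For reference, that tunable can be inspected and adjusted like this (a sketch; changing it requires root and only biases reclaim, it is not a hard limit):

```shell
# Show the current tendency to swap (default 60; lower values favor keeping
# anonymous/process pages resident over filesystem cache).
cat /proc/sys/vm/swappiness

# To change it (as root) -- shown as comments since it alters the host:
#   sysctl -w vm.swappiness=10
#   echo 'vm.swappiness = 10' >> /etc/sysctl.conf   # persist across reboots
```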
I observe that even though nothing on the system other than Oracle is using RAM, about 10% of swap is in use. df -h /dev/shm still shows about 12GB of 25GB used, which points to the fact that the Oracle SGA + PGA has not drastically increased. Also, with the RHEL 5.5 kernel you can look at /proc/<pid>/smaps | grep Swap to find the amount of disk swap in a process’s address space. This was zero for all processes, which also indicates that no process memory pages had been swapped out to disk.

So the bottom line appears to be that when you run heavy filesystem activity on the box, the FS buffer cache fills up, and parts of the Oracle SGA and PGA can be swapped out to disk when there is demand for FS buffer cache. This is bad, because the Oracle SGA and PGA should be given preference over the FS buffer cache, but currently Linux does not entirely do that. You can adjust tunables like vm.swappiness to favor the FS cache less. But since /dev/shm is mapped as memory-mapped pages in the Oracle address space, I strongly believe those settings will negatively affect Oracle as well, because Linux treats the files under /dev/shm just like any other fs-mapped file and hence just like buffer cache. Overall, my finding is that if your box does heavy filesystem operations, you will certainly see the buffer cache pushing /dev/shm pages into disk swap, which means parts of the PGA/SGA can be swapped out to disk.
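The per-process check described above can be scripted; this sketch sums the Swap lines from smaps for one background process (the ora_dbw0 pattern is an example; adjust it for your instance name):

```shell
# Total swapped-out KB in one process's address space (RHEL 5.5+ kernels
# expose per-mapping "Swap:" lines in /proc/<pid>/smaps).
PID=$(pgrep -f ora_dbw0 | head -1)
awk '/^Swap:/ { kb += $2 } END { printf "%d kB swapped\n", kb }' "/proc/${PID}/smaps"
```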
The page table size is also large, about 1.2GB on a 32GB box. I think it is better to go with ASMM rather than AMM if an administrator does not want oversimplified memory management.
thanks
Martin Francis K
martin.francis@ferguson.com
Hi Martin,
Use Direct I/O.
Kevin, yes, using filesystemio_options=directio is in mind. RMAN will use direct I/O when reading from an ext3 filesystem, but when writing the backup to ext3, unless we test it, I am not sure it will use direct I/O. The point I wanted to make is that if only Oracle is running on a box, then AMM works all right, but when there are different types of apps/workloads, some heavily using the filesystem, then AMM can suffer. Would you agree?
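The setting under discussion, as an init-parameter sketch (whether RMAN honors it for backup writes is exactly what this comment says needs testing):

```
# Direct I/O for filesystem files, bypassing the OS page cache:
filesystemio_options = DIRECTIO   # or SETALL to enable async I/O as well
                                  # (other values: NONE, ASYNCH)
```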
I do recall RMAN issues where it did not take filesystemio_options into consideration. Neither do external table readers; the latter, I know, uses different I/O modules (OSDs) than the core server. Unfortunately for this thread, I’m not focusing day to day on Oracle and, of course, I no longer have source code access. So I’ll answer by saying that you are quite right to test. As for AMM, I see no use for it any time, anywhere. It is just fundamentally broken.