Tuesday, October 15, 2013
What is the ZFS ARC in Solaris 10?

The ZFS ARC (Adaptive Replacement Cache) holds ZFS data that is on disk but that the caching algorithms have determined is likely to be read again. All reads pass through the ARC, and data is served to the requesting process from there. The larger the ARC, the better the performance of ZFS filesystems.
The ARC is where ZFS caches data from all active storage pools. The ARC grows and consumes memory on the principle that there is no need to return memory to the system while plenty of free memory remains. When the ARC has grown and outside memory pressure appears, for example when a new application starts up, the ARC releases its hold on memory. ZFS is not designed to steal memory from applications. A few bumps appeared along the way, but the established mechanism works reasonably well in many situations and does not commonly warrant tuning.
Different situations for different environments:
Review the following situations:
  • If a future memory requirement is significantly large and well defined, then it can be advantageous to prevent ZFS from growing the ARC into it. For example, if we know that a future application requires 20% of memory, it makes sense to cap the ARC such that it does not consume more than the remaining 80% of memory.
  • Some applications include free-memory checks and refuse to start if not enough RAM is available, even though the ARC would release its memory in response to applications' requests to the OS kernel. Sometimes the ARC is too slow to release memory, so even better-behaved applications (those without such preliminary checks) can see longer delays when requesting memory.
  • If the application is a known consumer of large memory pages, then again limiting the ARC prevents ZFS from breaking up the pages and fragmenting the memory. Limiting the ARC preserves the availability of large pages.
  • If dynamic reconfiguration of a memory board is needed (supported on certain platforms), then it is a requirement to prevent the ARC (and thus the kernel cage) from growing onto all boards.
  • If an application's demand for memory fluctuates, the ZFS ARC caches data during periods of weak demand and then shrinks during periods of strong demand. However, on large-memory systems, ZFS currently does not shrink below the value of arc_c_min, approximately 12% of memory. If an application's peak memory usage requires more than 88% of system memory, tuning arc_c_min is currently required until a better default is selected as part of bug 6855793.
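Before deciding on a limit, it helps to observe the ARC's actual behavior. The ARC publishes its statistics through the arcstats kstat; a sketch of a read-only check (statistic names can vary slightly between releases):

```shell
# Current ARC size and its floor (arc_c_min), in bytes
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_min

# Or dump all ARC statistics, including hit/miss counters
kstat -m zfs -n arcstats
```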

For these cases, you might consider limiting the ARC. Limiting the ARC will, of course, also limit the amount of cached data, and this can have adverse effects on performance. There is no easy way to foretell whether limiting the ARC degrades performance.

As with many other Solaris tunables, ARC size limits can be configured via /etc/system so that they apply at every boot (in newer Solaris and OpenSolaris releases), or reconfigured dynamically on a live system with the mdb debugger.
Several parameters actually control the ARC size, all derived from the single zfs_arc_max limit chosen by the system administrator (or, by default, computed by ZFS from the system's RAM size). When Solaris boots, ARC parameters such as p, c, c_min and c_max are initialized, and subsequent changes to zfs_arc_max have no direct effect.
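As a sketch of the boot-time approach (the 32 GB RAM size and the 80% cap below are assumptions for illustration, not recommendations), the limit can be computed and emitted as an /etc/system line:

```shell
# Hypothetical example: cap the ARC at 80% of a 32 GB system,
# leaving the remaining 20% of memory free of ARC.
RAM_BYTES=$((32 * 1024 * 1024 * 1024))   # assumed physical memory
ARC_CAP=$((RAM_BYTES * 80 / 100))        # 80% of RAM, in bytes

# Print the line to place in /etc/system; a reboot applies it.
printf 'set zfs:zfs_arc_max = 0x%x\n' "$ARC_CAP"
```

For these assumed values the printed line is `set zfs:zfs_arc_max = 0x666666666`; it goes into /etc/system and takes effect at the next boot.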
On a running system you can change the ARC maximum size only with the mdb command. Because the system is already booted, the ARC init routine has already executed and the other ARC size parameters have already been set from the default c_max. Therefore, you should tune the arc.c and arc.p values along with arc.c_max, using the formula:
arc.c = arc.c_max
arc.p = arc.c / 2
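A sketch of such a live change with mdb, assuming a desired 4 GB cap; the addresses shown are only examples and will differ on every system, so always read them from the ::print output before writing (the /Z format writes a full 64-bit value at the given address):

```
# mdb -kw
> arc::print -a p c c_max
ffffffffc00df578 p = 0x...
ffffffffc00df580 c = 0x...
ffffffffc00df590 c_max = 0x...
> ffffffffc00df590/Z 0x100000000
> ffffffffc00df580/Z 0x100000000
> ffffffffc00df578/Z 0x80000000
> $q
```

The three writes set arc.c_max to 4 GB, arc.c to the same value, and arc.p to half of arc.c, following the formula above.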

Also check: How To Change The ZFS ARC Value

