Paging and swapping

Introduction

The issue of swapping and paging is often misunderstood: swapping and paging are two different mechanisms.

Swapping was the first technology used in Unix System V. As physical memory fills up with processes, a problem arises: what happens when the system runs completely out of RAM? It "grinds to a halt"!

The conservation and correct management of RAM is very important, because the CPU can only work with data that is in RAM, after the kernel has loaded it from the hard disk. What happens when the combined number and size of processes exceeds physical memory? In principle, because only one process can execute at any one time (on a uniprocessor system), only that process really needs to be in RAM. However, organising memory that way would be extremely resource intensive, since multiple running processes are scheduled onto the processor very frequently (see the section called “Scheduler”).

To address these issues, the kernel presents applications with an abstract view of memory: it advertises a virtual address space to them that can far exceed physical memory. An application may simply request more memory, and the kernel may grant it.

A single process may have allocated 100 MB of memory even though there may only be 64 MB of RAM in the system. The process will not need to access the whole 100 MB at the same time; this is where virtual memory comes in.
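
A minimal sketch of this behaviour, assuming a Linux system with a /proc filesystem (the program itself is not part of the original text): it requests 100 MB of virtual memory and reads its own VmSize (virtual size) and VmRSS (resident set size) from /proc/self/status. The virtual size grows by the full allocation, while the resident size barely changes until the pages are actually written.

/* Sketch: virtual memory can be granted without physical pages being used.
 * Linux-specific: relies on /proc/self/status. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void print_vm_lines(const char *label)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];

    if (f == NULL)
        return;
    printf("%s\n", label);
    while (fgets(line, sizeof(line), f) != NULL) {
        if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
            printf("  %s", line);
    }
    fclose(f);
}

int main(void)
{
    print_vm_lines("before allocation:");

    /* Ask for 100 MB of virtual address space; physical pages are only
     * assigned when the memory is actually touched. */
    char *big = malloc(100 * 1024 * 1024);
    if (big == NULL) {
        perror("malloc");
        return 1;
    }
    print_vm_lines("after malloc(100 MB):");

    /* Touch only the first megabyte: VmRSS grows by roughly that much. */
    memset(big, 0, 1024 * 1024);
    print_vm_lines("after touching 1 MB:");

    free(big);
    return 0;
}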

Swap Space

Swap space is a portion of disk space that has been set aside for use by the kernel's virtual memory manager (VMM). The VMM is to memory management what the scheduler is to process management: it is the kernel's memory management service for the system.
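
As a small illustration, assuming a Linux system (not part of the original text): the kernel lists the swap areas it is using in /proc/swaps, and the sketch below simply prints that file, which is roughly what the swapon -s command reports.

/* Sketch: list the swap areas known to the kernel (Linux-specific). */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/swaps", "r");
    char line[512];

    if (f == NULL) {
        perror("/proc/swaps");
        return 1;
    }
    /* Each line names a swap device or file with its size and current usage. */
    while (fgets(line, sizeof(line), f) != NULL)
        fputs(line, stdout);
    fclose(f);
    return 0;
}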

Swapping

Some systems are pure swapping systems, some systems are pure paging systems and others are mixed mode systems.

Originally, Unix System V was a pure swapping system.

To swap a process means to move that entire process out of main memory and into the swap area on the hard disk, whereby all pages of that process are moved at the same time.

This carried a significant performance penalty. When a swapped-out process becomes active and moves from the sleep queue to the run queue, the kernel has to load the entire process (perhaps many pages of memory) back into RAM from the swap space. With large processes this is understandably slow. Enter paging.

Paging

Paging was introduced as a solution to the inefficiency of swapping entire processes in and out of memory at once.

With paging, when the kernel requires more main memory for an active process, only the least recently used pages of processes are moved to the swap space.

Therefore, when a process that has paged-out memory becomes active again, it is likely that it will not need the pages that have been paged out to the swap space; and if it does, only a few pages need to be transferred between disk and RAM.
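
A small sketch of pages being faulted in on demand, assuming a Linux or other POSIX system that maintains per-process fault counters (not part of the original text): getrusage() reports minor faults (a physical page is assigned without disk I/O) and major faults (a page must be read back from disk or swap). Touching freshly allocated memory shows the minor-fault count climbing.

/* Sketch: watch the page-fault counters of the current process. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

static void report(const char *label)
{
    struct rusage ru;

    getrusage(RUSAGE_SELF, &ru);
    printf("%-22s minor faults: %ld  major faults: %ld\n",
           label, ru.ru_minflt, ru.ru_majflt);
}

int main(void)
{
    size_t size = 8 * 1024 * 1024;   /* 8 MB */
    char *buf;

    report("start:");

    buf = malloc(size);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    report("after malloc:");

    /* Writing to every page forces the kernel to fault each one in. */
    memset(buf, 1, size);
    report("after touching pages:");

    free(buf);
    return 0;
}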

Paging was first implemented in System V[?] in 19??

The working set

For efficient paging, the kernel needs to keep regular statistics on the memory activity of processes: it keeps track of which pages a process has most recently used. These pages are known as the working set.

When the kernel needs memory, it prefers to keep the pages in the working sets of processes in RAM for as long as possible, and to page out the other, less recently used pages instead, since those have statistically been accessed less often and are therefore unlikely to be accessed again in the near future.
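
A rough, process-level illustration of the same idea, assuming a Linux system that provides mincore() (not part of the original text): map a small anonymous region, touch only a few of its pages, and ask the kernel which pages are currently resident in RAM. Only the pages that were actually used show up as resident.

/* Sketch: which pages of a mapping are resident in RAM right now? */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t npages = 16;
    size_t len = (size_t)page * npages;
    unsigned char vec[16];
    char *mem;
    size_t i;

    mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Touch only the first four pages. */
    for (i = 0; i < 4; i++)
        mem[i * (size_t)page] = 1;

    if (mincore(mem, len, vec) != 0) {
        perror("mincore");
        return 1;
    }

    /* The low bit of each byte is set if the page is resident in RAM. */
    for (i = 0; i < npages; i++)
        printf("page %2zu: %s\n", i, (vec[i] & 1) ? "resident" : "not resident");

    munmap(mem, len);
    return 0;
}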

Implementation of swapping and paging in different systems

Current Unix systems use the following methods of memory management:

  • SVR3 and newer based systems are mixed swapping and paging systems, as is FreeBSD. Paging is normally used, but if memory usage grows extremely heavy, too quickly for the kernel's pager to page out enough pages of memory, then the system will revert to swapping. This technique is also known as desperation swapping.

  • Linux is a pure paging system: it never swaps out entire processes, neither under normal usage nor as desperation swapping under heavy usage.

  • When the FreeBSD VM system is critically low on RAM or swap, it will lock the largest process, and then flush all dirty vnode-backed pages - and will move active pages into the inactive queue, allowing them to be reclaimed. If, after all of that, there is still not enough memory available for the locked process, only then will the process be killed.

  • Under emergency memory situations, when Linux runs out of memory (both physical[5] and swap combined), the kernel starts killing processes. It uses an algorithm to work out which process to kill first: it tries to kill offending memory hogs that have been running for only a short time before it touches less active processes that have been running for a long time, which are most likely important system services. This functionality is known as the out of memory (OOM) killer (see the sketch after this list).
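
As a small, Linux-specific illustration (an assumption, not part of the original text): the kernel exposes the badness score the OOM killer would use for each process in /proc/<pid>/oom_score, and administrators can bias it through /proc/<pid>/oom_score_adj. The sketch below prints the score of the process running it.

/* Sketch: print this process's OOM-killer badness score (Linux-specific). */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/oom_score", "r");
    char buf[64];

    if (f == NULL) {
        perror("/proc/self/oom_score");
        return 1;
    }
    /* Higher scores make the process a more likely OOM-killer victim. */
    if (fgets(buf, sizeof(buf), f) != NULL)
        printf("current OOM score: %s", buf);
    fclose(f);
    return 0;
}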

Virtual memory

Virtual memory can mean two different things, depending on context. Firstly, it can refer to swap space alone; secondly, it can refer to the combination of RAM and swap space together.
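
A short sketch of the second sense, assuming a Linux system (not part of the original text): add MemTotal and SwapTotal from /proc/meminfo to get the "RAM plus swap" figure.

/* Sketch: virtual memory in the sense of RAM + swap (Linux-specific). */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long mem_kb = 0, swap_kb = 0;

    if (f == NULL) {
        perror("/proc/meminfo");
        return 1;
    }
    while (fgets(line, sizeof(line), f) != NULL) {
        if (strncmp(line, "MemTotal:", 9) == 0)
            sscanf(line + 9, "%ld", &mem_kb);
        else if (strncmp(line, "SwapTotal:", 10) == 0)
            sscanf(line + 10, "%ld", &swap_kb);
    }
    fclose(f);

    printf("RAM:        %ld kB\n", mem_kb);
    printf("Swap:       %ld kB\n", swap_kb);
    printf("RAM + swap: %ld kB\n", mem_kb + swap_kb);
    return 0;
}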



[5] RAM = main memory = physical memory