Department of Computer Science
Course: CS 3725


Virtual Memory Management

Because main memory (i.e., transistor memory) is much more expensive per bit than disk memory (presently, approximately 10 to 50 times more expensive), it is usually economical to provide most of the memory requirements of a computer system as disk memory. Disk memory is also ``permanent'' and not (very) susceptible to such things as power failure. Data and executable programs are brought into main memory, or swapped in, as they are needed by the CPU, in much the same way as instructions and data are brought into the cache. Most large systems today implement this ``memory management'' using a hardware memory controller in combination with the operating system software.

One of the most primitive forms of ``memory management'' is often implemented on systems with a small amount of main memory. This method leaves the responsibility of memory management entirely to the programmer: if a program requires more memory than is available, the program must be broken up into separate, independent sections, and one section is ``overlaid'' on top of another when that particular section is to be executed. This type of memory management, which is completely under the control of the programmer, is sometimes the only type available for small microcomputer systems. Modern memory management schemes, usually implemented on mini- to mainframe computers, employ an automatic, user-transparent scheme, usually called ``virtual memory''.
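
As a rough illustration, the following sketch shows the idea of an overlay scheme in C-like terms. The file names, the overlay size, and the two program ``phases'' are all invented for the example; a real overlay system would load executable code (not just data) into the shared region, usually with linker support.

/* A minimal sketch of programmer-controlled overlays (illustrative only).
 * Assumptions: the two phases of the program have been prepared as the
 * (hypothetical) files "pass1.ovl" and "pass2.ovl", and each fits in the
 * single region reserved for whichever phase is currently needed. */
#include <stdio.h>
#include <stdlib.h>

#define OVERLAY_SIZE (16 * 1024)        /* one shared region of main memory */

static unsigned char overlay_region[OVERLAY_SIZE];

/* Load one overlay from disk on top of whatever occupied the region before. */
static size_t load_overlay(const char *filename)
{
    FILE *f = fopen(filename, "rb");
    if (f == NULL) {
        perror(filename);
        exit(EXIT_FAILURE);
    }
    size_t n = fread(overlay_region, 1, OVERLAY_SIZE, f);
    fclose(f);
    return n;                           /* number of bytes now resident */
}

int main(void)
{
    load_overlay("pass1.ovl");          /* phase 1 occupies the region ...  */
    /* ... run phase 1 ... */
    load_overlay("pass2.ovl");          /* ... then phase 2 overlays it     */
    /* ... run phase 2 ... */
    return 0;
}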

In a computer system which supports virtual memory management, the computer appears to the programmer to have its address space limited only by the addressing range of the computer, not by the amount of memory which is physically connected to the computer as main memory. In fact, each process appears to have available the full memory resources of the system. Processes can occupy the same virtual memory but be mapped into completely different physical memory locations. Of course, the parts of a program and data which are actually being executed must lie in main memory, and there must be some way in which the ``virtual address'' is translated into the actual physical address in which the instructions and data are placed in main memory. The process of translating, or mapping, a virtual address into a physical address is called virtual address translation. The figure below shows the relationship between a named variable and its physical location in the system.

Figure: The name space to physical address mapping
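
As a simple illustration of this mapping, the sketch below shows how the same virtual address in two different processes can be translated to two different physical locations. The page size and the page-table contents are invented for the example.

/* Simplified illustration: two processes use the same virtual address,
 * but their (invented) page tables map it to different physical frames. */
#include <stdio.h>

#define PAGE_SIZE 4096u

/* One page table per process; entry i holds the physical frame number for
 * virtual page i.  The contents below are made up for the example. */
static unsigned page_table_A[4] = { 7, 2, 9, 5 };
static unsigned page_table_B[4] = { 3, 8, 1, 6 };

static unsigned translate(const unsigned *page_table, unsigned vaddr)
{
    unsigned vpn    = vaddr / PAGE_SIZE;    /* virtual page number */
    unsigned offset = vaddr % PAGE_SIZE;    /* offset within page  */
    return page_table[vpn] * PAGE_SIZE + offset;
}

int main(void)
{
    unsigned vaddr = 0x1A2C;                /* same virtual address in both */
    printf("process A: 0x%X -> 0x%X\n", vaddr, translate(page_table_A, vaddr));
    printf("process B: 0x%X -> 0x%X\n", vaddr, translate(page_table_B, vaddr));
    return 0;
}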

This mapping can be accomplished in ways similar to those discussed for mapping main memory into the cache memory. In the case of virtual address mapping, however, the relative speed of main memory to disk memory (a factor of approximately 10,000 to 100,000) means that the cost of a ``miss'' in main memory is very high compared to a cache miss, so more elaborate replacement algorithms may be worthwhile. In fact, in most processors, a direct mapping scheme is supported by the system hardware, in which a page map is maintained in physical memory. This means that each memory reference requires both an access to the page table and an operand fetch; in effect, all memory references are indirect. The figure below shows a typical virtual-to-physical address mapping.

Figure: A direct mapped virtual to physical address translation
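
The following sketch makes the cost of this scheme explicit by keeping the page table in a (simulated) physical memory, so that each operand reference requires one memory access for the page-table entry and a second for the operand itself. The sizes and the mapping are invented, and for simplicity all addresses here are word addresses.

/* Sketch of direct-mapped translation with the page table held in
 * (simulated) physical memory: every operand reference costs one memory
 * access for the page-table entry plus one for the operand. */
#include <stdio.h>

#define PAGE_SIZE 256u                  /* words per page (example value)   */
#define MEM_WORDS 4096u

static unsigned memory[MEM_WORDS];      /* simulated physical memory        */
static unsigned page_table_base = 0;    /* page table occupies memory[0..15] */
static unsigned mem_accesses = 0;

static unsigned mem_read(unsigned paddr)
{
    mem_accesses++;
    return memory[paddr];
}

/* Translate a virtual address, then fetch the operand it names. */
static unsigned load(unsigned vaddr)
{
    unsigned vpn    = vaddr / PAGE_SIZE;
    unsigned offset = vaddr % PAGE_SIZE;
    unsigned frame  = mem_read(page_table_base + vpn);  /* 1st access: PTE  */
    return mem_read(frame * PAGE_SIZE + offset);        /* 2nd access: data */
}

int main(void)
{
    memory[3] = 5;                      /* invented mapping: page 3 -> frame 5 */
    memory[5 * PAGE_SIZE + 0x10] = 42;  /* operand stored in that frame        */
    printf("value = %u (memory accesses: %u)\n",
           load(3 * PAGE_SIZE + 0x10), mem_accesses);
    return 0;
}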

This requirement would impose a considerable performance penalty, so most systems which support virtual addressing have a small associative memory (called a translation lookaside buffer, or TLB) which holds the last few virtual addresses and their corresponding physical addresses. In most cases, then, the virtual-to-physical mapping does not require an additional memory access. The figure below shows a typical virtual-to-physical address mapping in a system containing a TLB.

Figure: A virtual to physical address translation mechanism with a TLB
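
A minimal sketch of such a TLB is shown below: a tiny, fully associative table is searched first, and only on a miss is the page table (held in main memory) consulted and the TLB updated. The TLB size, the round-robin replacement policy, and the page-table contents are all assumptions made for the example.

/* Sketch of a small fully associative TLB consulted before the in-memory
 * page table; sizes and contents are invented for the illustration. */
#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE   4096u
#define TLB_ENTRIES 4
#define NUM_VPAGES  64

struct tlb_entry {
    bool     valid;
    unsigned vpn;       /* virtual page number   */
    unsigned pfn;       /* physical frame number */
};

static struct tlb_entry tlb[TLB_ENTRIES];
static unsigned page_table[NUM_VPAGES]; /* full map, held in main memory    */
static unsigned next_victim = 0;        /* trivial round-robin replacement  */

static unsigned translate(unsigned vaddr)
{
    unsigned vpn    = vaddr / PAGE_SIZE;
    unsigned offset = vaddr % PAGE_SIZE;

    for (int i = 0; i < TLB_ENTRIES; i++)            /* associative search  */
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return tlb[i].pfn * PAGE_SIZE + offset;  /* hit: no extra access */

    unsigned pfn = page_table[vpn];                  /* miss: read page table */
    tlb[next_victim] = (struct tlb_entry){ true, vpn, pfn };
    next_victim = (next_victim + 1) % TLB_ENTRIES;
    return pfn * PAGE_SIZE + offset;
}

int main(void)
{
    page_table[2] = 17;                              /* invented mapping    */
    printf("first access : 0x%X\n", translate(2 * PAGE_SIZE + 8));  /* miss */
    printf("second access: 0x%X\n", translate(2 * PAGE_SIZE + 12)); /* hit  */
    return 0;
}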

For many current architectures, including the VAX, INTEL 80486, and MIPS, addresses are 32 bits, so the virtual address space is 2^32 bytes, or 4 Gbytes. A physical memory of about 16-64 Mbytes is typical for these machines, so the virtual address translation must map the 32 bits of the virtual memory address into a corresponding area of physical memory.
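
To get a feeling for the sizes involved, the short calculation below assumes a page size of 4 Kbytes (a typical value, though not stated above) and a physical memory of 64 Mbytes:

/* Back-of-the-envelope sizes for a 32-bit virtual address space, assuming
 * (for this example only) 4 Kbyte pages and 64 Mbytes of physical memory. */
#include <stdio.h>

int main(void)
{
    unsigned long long virtual_space = 1ULL << 32;   /* 4 Gbytes  */
    unsigned long long page_size     = 1ULL << 12;   /* 4 Kbytes  */
    unsigned long long phys_mem      = 64ULL << 20;  /* 64 Mbytes */

    printf("virtual pages in the address space: %llu\n",
           virtual_space / page_size);               /* 2^20 = 1048576 */
    printf("page frames in physical memory    : %llu\n",
           phys_mem / page_size);                    /* 2^14 = 16384   */
    return 0;
}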

Sections of programs and data not currently being executed are normally stored on disk, and are brought into main memory as necessary. If a virtual memory reference occurs to a location not currently in physical memory, the execution of that instruction is aborted, and it can be restarted when the required information has been placed in main memory from the disk by the memory controller. (Note that, when the instruction is aborted, the processor must be left in the same state it would have been in had the instruction not been executed at all.) While the memory controller is fetching the required information from disk, the processor can be executing another program, so the actual time required to find the information on the disk (the disk seek time) is not wasted by the processor. In this sense, the disk seek time usually imposes little (time) overhead on the computation, but the time required to actually place the information in memory may affect the time the user must wait for a result. If many disk seeks are required in a short time, however, the processor may have to wait for information from the disk.
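
The sketch below simulates this behaviour in a very simplified form: pages live on a simulated ``disk'' and are copied into a small physical memory when a page fault occurs, after which the faulting reference is retried. All of the sizes, data structures, and the FIFO replacement policy are invented for the illustration, and the overlap with other processes during the disk seek is not modelled.

/* Simplified simulation of demand paging; everything here is invented for
 * the illustration, not taken from any particular system. */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  64
#define NUM_VPAGES 8    /* virtual pages                                  */
#define NUM_FRAMES 2    /* physical frames: far fewer than virtual pages  */

static char disk[NUM_VPAGES][PAGE_SIZE];      /* backing store            */
static char phys_mem[NUM_FRAMES][PAGE_SIZE];  /* main memory              */
static int  frame_of[NUM_VPAGES];             /* -1 means "not resident"  */
static int  page_in_frame[NUM_FRAMES];
static int  next_victim = 0;                  /* trivial FIFO replacement */

/* Bring a virtual page into memory, evicting whatever was there before. */
static int page_fault(int vpn)
{
    int frame  = next_victim;
    int victim = page_in_frame[frame];
    next_victim = (next_victim + 1) % NUM_FRAMES;

    if (victim >= 0) {                        /* write the victim back... */
        memcpy(disk[victim], phys_mem[frame], PAGE_SIZE);
        frame_of[victim] = -1;
    }
    memcpy(phys_mem[frame], disk[vpn], PAGE_SIZE);  /* ...read new page in */
    frame_of[vpn]        = frame;
    page_in_frame[frame] = vpn;
    return frame;
}

static char read_byte(int vaddr)              /* retried after the fault  */
{
    int vpn = vaddr / PAGE_SIZE, offset = vaddr % PAGE_SIZE;
    int frame = frame_of[vpn];
    if (frame < 0) {
        printf("page fault on virtual page %d\n", vpn);
        frame = page_fault(vpn);
    }
    return phys_mem[frame][offset];
}

int main(void)
{
    memset(frame_of, -1, sizeof frame_of);
    memset(page_in_frame, -1, sizeof page_in_frame);
    disk[5][3] = 'x';                         /* some data out on disk    */
    printf("read: %c\n", read_byte(5 * PAGE_SIZE + 3));  /* faults        */
    printf("read: %c\n", read_byte(5 * PAGE_SIZE + 3));  /* now resident  */
    return 0;
}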

Normally, blocks of information are taken from the disk and placed in the memory of the processor. The two most common ways of determining the sizes of the blocks to be moved into and out of memory are called segmentation and paging, and the terms segmented memory management and paged memory management refer to memory management systems in which the blocks in memory are segments or pages.



Paul Gillard
Mon Nov 24 20:44:06 NST 1997