Memory Management

Secondary Storage
Secondary storage (traditionally hard disks, nowadays solid-state drives) is the long-term store for programs and data, while main memory holds the programs and data currently in use.
Swapping
First major advance in memory management.
Partitions
Splits main memory into a set of static, non-overlapping regions.
Instruction Segments
These segments contain code and are execute-only.
Segment Table
For segmentation, a process is split into pieces of different sizes; conveniently, data can be in one segment and code in another. The operating system maintains a segment table for each process. Each segment table entry contains
– the starting physical address of that segment
– the length of that segment (for protection)
One contrast between segmentation and paging
Segments can start anywhere in memory.
Disadvantages of Segmentation
– Address translation is hard, since segments can start anywhere in memory.
– External fragmentation still occurs.
Paging
Very little internal fragmentation, no external fragmentation.
Segmentation Combined with Paging
The virtual address consists of
– a segment number: used to index the segment table
– a page number: used to index that segment's page table to obtain the corresponding frame number
– an offset: used to locate the word within the frame
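A minimal C sketch of this decomposition, assuming an illustrative split of 8 segment bits, 10 page bits, and 14 offset bits (the widths and names are assumptions for the example, not values from these notes):

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed layout: | 8-bit segment | 10-bit page | 14-bit offset | */
    #define OFFSET_BITS 14
    #define PAGE_BITS   10

    int main(void) {
        uint32_t vaddr   = 0x012A4F3Cu;  /* an example virtual address */
        uint32_t offset  = vaddr & ((1u << OFFSET_BITS) - 1);
        uint32_t page    = (vaddr >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1);
        uint32_t segment = vaddr >> (OFFSET_BITS + PAGE_BITS);

        /* segment indexes the segment table, page indexes that segment's
           page table, and offset locates the word within the frame */
        printf("segment=%u page=%u offset=%u\n",
               (unsigned)segment, (unsigned)page, (unsigned)offset);
        return 0;
    }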
Each page table entry
Has a present bit that is set only if the corresponding page is in main memory
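As a rough sketch, a page table entry can be pictured as a small bit-field struct; the field names and widths below are assumptions for illustration, not any particular architecture's format:

    #include <stdint.h>

    /* Hypothetical page table entry layout (illustrative only) */
    struct pte {
        uint32_t present    : 1;   /* set only if the page is in main memory     */
        uint32_t referenced : 1;   /* R bit: the page has been referenced        */
        uint32_t modified   : 1;   /* M bit: the page has been written to        */
        uint32_t frame      : 20;  /* frame number, meaningful only when present */
    };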
The Resident Set
• The resident set is the portion of the process that is
in main memory
An advantage of partial loading
It is even possible to use more bits for logical addresses than are needed to address the physical memory.
Demand Paging
• Pages (of both text and data) are loaded into main
memory as needed
• A page is placed in a page frame of main memory
• What if all the page frames are being used and a
new page needs to be brought in?
– A page that has not been accessed recently is selected
for replacement
– This page needs to be copied out to disk (unless we
already have a copy there) before the new page is loaded
• The selection of such pages is called the page
replacement algorithm
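A simplified sketch of this flow; pick_victim, write_page_to_disk, and read_page_from_disk are placeholder stubs standing in for the replacement algorithm and the disk I/O paths:

    #include <stdbool.h>

    #define NFRAMES 64

    struct frame { int page; bool in_use; bool modified; };
    static struct frame frames[NFRAMES];

    /* Placeholder helpers for the sketch */
    static void write_page_to_disk(int page)         { (void)page; }
    static void read_page_from_disk(int page, int f) { (void)page; (void)f; }
    static int  pick_victim(void)                    { return 0; }  /* replacement algorithm */

    /* Handle a fault on 'page': use a free frame, or evict a victim first. */
    int handle_page_fault(int page) {
        int f = -1;
        for (int i = 0; i < NFRAMES; i++)        /* look for a free frame */
            if (!frames[i].in_use) { f = i; break; }

        if (f < 0) {                             /* all frames in use: replace */
            f = pick_victim();
            if (frames[f].modified)              /* copy out only if there is no up-to-date copy on disk */
                write_page_to_disk(frames[f].page);
        }
        read_page_from_disk(page, f);            /* bring the new page in */
        frames[f] = (struct frame){ page, true, false };
        return f;                                /* frame now holding 'page' */
    }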
Secondary Storage
Pages selected for replacement need to be written out to secondary storage.
Swap Space
Parts of virtual memory (typically data areas) are stored in a special area of the disk called the swap space.
Pages of a process in swap space are accessed directly by block number, as opposed to going through a hierarchical file system structure.
Text Pages
Nowadays, the OS uses the file system copy of a program's text as the “backing store” for its text pages.
Data/Stack Pages
Are created in main memory and may be swapped out later.
Thrashing
The process spends most of its time swapping pieces in and out of memory rather than executing user instructions.
Page Size
Page size is defined by the hardware.
Large Page Size Advantages
– good since it reduces the number of pages, and hence the size of page tables
– good since disks are designed to efficiently transfer large blocks of data
– fewer pages in main memory; this increases the TLB hit ratio
Advantages/Disadvantages of Small Page Sizes
A disadvantage of a small page size is that the number of pages is larger, and hence page tables are larger. Small pages are also less efficient to transfer to and from secondary storage, and for a given TLB size the hit ratio is lower.
On the other hand, with a small page size each page more closely matches only the code and data that are actually used, better matching the locality of the program. If for each process we bring into memory only the text/data needed, more memory is available for other processes. A small page size also reduces internal fragmentation.
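A small worked example (assuming a 32-bit virtual address space) showing how page size affects the number of page-table entries and the average internal fragmentation of roughly half a page per process:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t vspace  = 1ULL << 32;        /* assumed 32-bit virtual address space */
        uint64_t sizes[] = { 4096, 65536 };   /* 4 KB vs 64 KB pages */

        for (int i = 0; i < 2; i++) {
            uint64_t pages = vspace / sizes[i];   /* pages = page-table entries */
            printf("page size %6llu B: %8llu pages, ~%llu B avg internal fragmentation\n",
                   (unsigned long long)sizes[i],
                   (unsigned long long)pages,
                   (unsigned long long)(sizes[i] / 2));
        }
        return 0;
    }

With 4 KB pages the page table needs about a million entries; with 64 KB pages only 65,536, but each partially used page wastes on average about sixteen times as much space.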
Page Fault Rate
Determined by the number of page frames allocated per process.
Fetch Policy
Determines when a page should be brought into main memory.
Demand Paging
Many page faults occur at first; the fault rate decreases as more pages are loaded into main memory.
Pre-paging
Brings more pages into memory than are immediately needed, anticipating future references.
Replacement Policy
Some pages cannot be selected for replacement:
1) much of the kernel
2) key control structures
3) I/O buffers
These frames are usually locked into memory.
Optimal Policy
Selects for replacement the page
for which the time to the next reference is the longest
• Produces the fewest page faults
Types of Page Replacement Algorithms
– Least recently used (LRU)
– First-in, first-out (FIFO)
– Clock
The Least Recently Used (LRU) Policy
Replaces the page that has not been referenced for the longest
time
LRU
Implementing exact LRU would require expensive hardware and a great deal of overhead.
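A conceptual sketch of exact LRU with a per-frame timestamp; touch() would have to run on every single memory reference, which is precisely the overhead that makes true LRU impractical:

    #include <stdint.h>

    #define NFRAMES 64

    static uint64_t last_used[NFRAMES];   /* time of most recent reference     */
    static uint64_t now;                  /* logical clock, ticks every access */

    /* Would have to execute on every memory reference (the costly part). */
    void touch(int frame) { last_used[frame] = ++now; }

    /* Victim = the frame whose page has gone unreferenced the longest. */
    int lru_victim(void) {
        int victim = 0;
        for (int i = 1; i < NFRAMES; i++)
            if (last_used[i] < last_used[victim])
                victim = i;
        return victim;
    }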
Key Support Bits
• R – Referenced
• Obviously can’t be set on every memory reference…
• M – Modified
• Same issue (for data/stack segments)
• Implications of a bit being set: if R is set, the page was used recently (a poor replacement candidate); if M is set, the page must be written back to disk before its frame is reused
The First-In, First-Out (FIFO) Policy
Treats page frames allocated to a process as
a circular buffer
• When the buffer (resident set) is full, the oldest
page is replaced. Hence: first-in, first-out
FIFO
• Simple to implement
– requires only a pointer that circles through the page
frames allocated to the process
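A minimal sketch of the FIFO bookkeeping; the only state is the pointer that cycles through the process's page frames (array size and names are illustrative):

    #define NFRAMES 8

    static int frames[NFRAMES];   /* page held by each of this process's frames  */
    static int next_out = 0;      /* oldest frame, i.e., the next to be replaced */

    /* Replace the oldest page with 'new_page' and advance the pointer. */
    int fifo_replace(int new_page) {
        int victim = next_out;
        frames[victim] = new_page;
        next_out = (next_out + 1) % NFRAMES;   /* circular buffer of frames */
        return victim;
    }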
The Clock Policy
The set of frames that are candidates for
replacement is considered as a circular buffer
• When a page is replaced, a pointer is set to point to
the next frame in the buffer
• A use bit for each frame is set to 1 whenever
– a page is first loaded into the frame
– the corresponding page is referenced (approximation)
• When it is time to replace a page, the first frame
encountered with the use bit set to 0 is replaced.
– During the search for replacement, each use bit set to 1 is changed to 0
Asterisk
In the example figures, an asterisk indicates that the corresponding use bit is set to 1
The Clock Policy
Protects frequently referenced pages, since the use bit is set to 1 at each reference
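A sketch of the clock sweep over a fixed array of candidate frames; mark_used() models the use bit being set when a page is loaded or referenced:

    #include <stdbool.h>

    #define NFRAMES 8

    static int  frames[NFRAMES];    /* page held by each candidate frame */
    static bool use_bit[NFRAMES];   /* set on load and on reference      */
    static int  hand = 0;           /* the clock "hand"                  */

    /* Called whenever a page in a frame is loaded or referenced. */
    void mark_used(int frame) { use_bit[frame] = true; }

    /* Sweep the buffer: clear use bits that are 1, replace the first
       frame whose use bit is already 0. */
    int clock_replace(int new_page) {
        for (;;) {
            if (!use_bit[hand]) {
                int victim = hand;
                frames[victim] = new_page;
                use_bit[victim] = true;        /* newly loaded page starts with use bit 1 */
                hand = (hand + 1) % NFRAMES;   /* pointer now at the next frame */
                return victim;
            }
            use_bit[hand] = false;             /* second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }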
Clock and LRU
These two replacement algorithms perform similarly in practice.
Demand cleaning
A page is written out only when its frame needs to be
replaced (now!)
Page Buffering
Pages to be replaced are kept in main memory for a while to
guard against poorly performing replacement algorithms
such as FIFO (bet hedging)
Free Page List
for frames that have not been modified
since brought in (no need to swap out)
Modified Page List
for frames that have been modified
(need to write them out)
When a page is selected for replacement…
• A (pointer to the) frame to be replaced is added to the tail of either the free list or the modified list
• The present bit is cleared in the corresponding page table
entry
Pages on the Modified List
Are periodically written to disk
– in batches, for efficiency (elevator algorithm)
– then moved to the free list (still in memory!)
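A sketch of page buffering with the two lists, assuming simple singly linked lists of frames; the actual batched disk write is left as a comment:

    #include <stdbool.h>
    #include <stddef.h>

    struct frame {
        int  page;
        bool modified;
        struct frame *next;   /* link within the free or modified list */
    };

    static struct frame *free_head, *free_tail;
    static struct frame *mod_head,  *mod_tail;

    static void append(struct frame **head, struct frame **tail, struct frame *f) {
        f->next = NULL;
        if (*tail) (*tail)->next = f; else *head = f;
        *tail = f;
    }

    /* A frame chosen for replacement goes to the tail of one of the lists;
       its page stays in memory and can still be reclaimed if referenced soon. */
    void release_frame(struct frame *f) {
        /* the present bit in the page table entry would be cleared here */
        if (f->modified) append(&mod_head, &mod_tail, f);
        else             append(&free_head, &free_tail, f);
    }

    /* Run periodically: write dirty pages out in a batch, then move their
       frames to the free list (the page contents are still in memory). */
    void clean_modified_list(void) {
        while (mod_head) {
            struct frame *f = mod_head;
            mod_head = f->next;
            /* write f->page to disk here, batched (e.g., elevator order) */
            f->modified = false;
            append(&free_head, &free_tail, f);
        }
        mod_tail = NULL;
    }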
Page Faults
If too few frames are allocated to a process, there is a high rate of page faults.
Low Multiprogramming Level
Results if too many frames are allocated to each process.
Fixed-allocation policy
– allocates a fixed number of frames, which remains constant over time
– the number is determined at load time and depends on the
type of the application
– difficult to know the best size in advance.
Variable-allocation policy
– the number of frames may vary over time
• may increase if page fault rate is high
• may decrease if page fault rate is very low
– requires some OS overhead to assess behavior of active
processes
Variable-allocation policy
Most operating systems determine resident set size with this policy.
Scope
Refers to the set of frames to be considered
for replacement when a page fault occurs
Local replacement policy
chooses only among the frames that are allocated to
the process that issued the page fault
Global replacement policy
any unlocked frame is a candidate for replacement
Variable Allocation + Global Scope
Relatively simple to implement; adopted by many operating systems (e.g., various Unix/Linux flavors)
• A list of free frames is maintained
– when a process issues a page fault, a free frame is
allocated to it
– Hence the number of frames allocated to a faulting process increases (up to a per-process limit)
– The choice for the process that will lose a frame is
arbitrary, however (far from optimal)
• But, page buffering can alleviate this problem since a
page may be reclaimed if it is referenced again soon!
Variable Allocation + Local Scope
Used in Windows with FIFO (!) page replacement
• Allocate at load time a certain number of frames to a new
process based on application type
– use either pre-paging or demand paging to fill up the
allocation
• When a page fault occurs, select the page to replace
from the resident set of the process that suffers the fault
• Reevaluate periodically the allocation provided and
increase (up to some limit) or decrease it to improve
overall performance
• This concept has been studied formally since the 1960s