
MEMORY MANAGEMENT-2


Q.1 Explain paged memory allocation. (2012)


Paged memory allocation is based on the concept of dividing each incoming job into pages of equal size. Some operating systems choose a page size that is the same as the memory block size and that is also the same size as the sections of the disk on which the job is stored. The sections of a disk are called sectors, and the sections of main memory are called page frames. The scheme works quite efficiently when the pages, sectors, and page frames are all the same size. The exact size is usually determined by the disk's sector size. Therefore, one sector will hold one page of job instructions and fit into one page frame of memory.

 

Before executing a program, the Memory Manager prepares it by:

1. Determining the number of pages in the program

2. Locating enough empty page frames in main memory

3. Loading all of the program’s pages into them
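As a rough illustration of these three steps, here is a minimal sketch that assumes a page size of 100 bytes and treats a plain list of free frame numbers as a stand-in for the Memory Map Table (the page size, names, and values are illustrative, not from the question):

    PAGE_SIZE = 100                    # assumed page size for illustration

    def load_job(job_bytes, free_frames):
        """Split a job into pages and load each page into an empty page frame."""
        # Step 1: determine the number of pages in the program (ceiling division).
        page_count = -(-len(job_bytes) // PAGE_SIZE)
        # Step 2: locate enough empty page frames in main memory.
        if page_count > len(free_frames):
            raise MemoryError("not enough empty page frames")
        # Step 3: load all of the program's pages into them, recording the PMT.
        pmt = [free_frames.pop(0) for _ in range(page_count)]
        return pmt                     # page number -> page frame number

    print(load_job(b"x" * 350, [8, 10, 5, 11, 12]))   # 4 pages -> frames [8, 10, 5, 11]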

The primary advantage of storing programs in noncontiguous locations is that main memory is used more efficiently because an empty page frame can be used by any page of any job.

However, with every new solution comes a new problem. Because a job’s pages can be located anywhere in main memory, the Memory Manager now needs a mechanism to keep track of them—and that means enlarging the size and complexity of the operating system software, which increases overhead.

 

The Memory Manager uses tables to keep track of them. There are essentially three tables that perform this function: the Job Table, Page Map Table, and Memory Map Table. All three tables reside in the part of main memory that is reserved for the operating system.

The Job Table (JT) contains two values for each active job: the size of the job and the memory location where its Page Map Table is stored. For example, the first job has a job size of 400, and its Page Map Table is stored at location 3096 in memory. The Job Table is a dynamic list that grows as jobs are loaded into the system and shrinks as they are completed.

Each active job has its own Page Map Table (PMT), which contains the vital information for each page: the page number and its corresponding page frame memory address. Actually, the PMT includes only one entry per page. The page numbers are sequential (Page 0, Page 1, Page 2, through the last page), so it isn't necessary to list each page number in the PMT. The first entry in the PMT lists the page frame memory address for Page 0, the second entry is the address for Page 1, and so on.

The Memory Map Table (MMT) has one entry for each page frame listing its location and free/busy status.

 

Job Table:

    Job size    PMT location
    400         3096
    200         3100
    500         3150

Page Map Table (for the first job):

    Page number    Page frame number
    0              8
    1              10
    2              5
    3              11
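The example values above can be pictured as simple in-memory structures. The sketch below is illustrative only (the variable names and the number of page frames are assumptions), not how an operating system actually lays these tables out:

    # Job Table: one entry per active job -> (job size, PMT location)
    job_table = [
        {"job_size": 400, "pmt_location": 3096},
        {"job_size": 200, "pmt_location": 3100},
        {"job_size": 500, "pmt_location": 3150},
    ]

    # Page Map Table for the first job: index = page number, value = page frame number
    pmt_job1 = [8, 10, 5, 11]

    # Memory Map Table: one entry per page frame with its free/busy status
    memory_map_table = {frame: "free" for frame in range(12)}
    for frame in pmt_job1:
        memory_map_table[frame] = "busy"   # frames holding the first job's pages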


 

To find the address of a given program instruction, the byte number is divided by the page size. The resulting quotient is the page number, and the remainder is the displacement (offset) within that page. The physical address is then found by locating that page's frame in the PMT and adding the displacement to the frame's starting address.
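A minimal sketch of this calculation, assuming a page size of 100 bytes and the example Page Map Table above (the page size and the function name are illustrative):

    PAGE_SIZE = 100                      # assumed page size for illustration
    pmt = [8, 10, 5, 11]                 # page number -> page frame (example PMT above)

    def resolve(byte_number):
        """Translate a job-relative byte number into a physical memory address."""
        page_number = byte_number // PAGE_SIZE     # quotient -> page number
        displacement = byte_number % PAGE_SIZE     # remainder -> offset within the page
        frame = pmt[page_number]                   # look up the page frame in the PMT
        return frame * PAGE_SIZE + displacement    # frame start + displacement

    # Byte 214 falls on page 2 at displacement 14; page 2 lives in frame 5,
    # so its physical address is 5 * 100 + 14 = 514.
    print(resolve(214))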

 

Q.2 Explain demand paging. (2012)

Demand paging introduced the concept of loading only a part of the program into memory for processing. It was the first widely used scheme that removed the restriction of having the entire job in memory from the beginning to the end of its processing. With demand paging, jobs are still divided into equally sized pages that initially reside in secondary storage. When the job begins to run, its pages are brought into memory only as they are needed. Demand paging takes advantage of the fact that programs are written sequentially, so while one section, or module, is being processed, all of the other modules are idle. Not all pages are needed at the same time; for example:

o  User-written error handling modules are processed only when a specific error is detected during execution.

o  Many modules are mutually exclusive. For example, if the input module is active then the processing module is inactive.

o  Certain program options are either mutually exclusive or not always accessible. This is easiest to visualize in menu-driven programs.

o  Many tables are assigned a large fixed amount of address space even though only a fraction of the table is actually used.

One of the most important innovations of demand paging was that it made virtual memory feasible. The demand paging scheme allows the user to run jobs with less main memory than would be required to load the entire job.

The operating system relies on tables (such as the Job Table, the Page Map Table, and the Memory Map Table) to implement the algorithm. These tables are basically the same as for paged memory allocation but with the addition of three new fields for each page in the PMT: one to determine if the page being requested is already in memory; a second to determine if the page contents have been modified; and a third to determine if the page has been referenced recently.
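A rough sketch of what a demand-paging PMT entry might hold, with the three extra fields modelled as simple flags (the names PMTEntry, in_memory, modified, and referenced are assumptions for illustration, not the textbook's notation):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PMTEntry:
        """One Page Map Table entry under demand paging (field names are illustrative)."""
        page_frame: Optional[int] = None  # frame holding the page, or None if not resident
        in_memory: bool = False           # is the requested page already in memory?
        modified: bool = False            # have the page's contents been modified?
        referenced: bool = False          # has the page been referenced recently?

    # Page 2 has been brought into frame 5 and read from, but not written to:
    entry = PMTEntry(page_frame=5, in_memory=True, referenced=True)
    print(entry)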

 

Q.3 Explain the First In First Out page replacement policy.

The first-in first-out (FIFO) page replacement policy removes the page that has been in memory the longest.

To see how the FIFO algorithm works, follow a job with four pages (A, B, C, D) as it is processed by a system with only two available page frames, noting how each page is swapped into and out of memory and counting each page interrupt. The job to be processed needs its pages in the following order: A, B, A, C, A, B, D, B, A, C, D. When both page frames are occupied, each new page brought into memory causes an existing one to be swapped out to secondary storage. A page interrupt is generated whenever a new page needs to be loaded into memory, whether or not another page is swapped out. The efficiency of this configuration is dismal: there are 9 page interrupts out of 11 page requests, due to the limited number of page frames available and the need for many new pages. To calculate the failure rate, we divide the number of interrupts by the number of page requests. The failure rate of this system is 9/11, or 82 percent; stated another way, the success rate is 2/11, or 18 percent. A failure rate this high is usually unacceptable.
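This count can be reproduced with a small simulation. The sketch below assumes two page frames and the request order given above (the function name fifo_interrupts is illustrative):

    from collections import deque

    def fifo_interrupts(requests, frame_count):
        """Count page interrupts under FIFO replacement."""
        frames = deque()                      # oldest resident page at the left
        interrupts = 0
        for page in requests:
            if page not in frames:            # page interrupt: the page must be loaded
                interrupts += 1
                if len(frames) == frame_count:
                    frames.popleft()          # evict the page resident the longest
                frames.append(page)
        return interrupts

    requests = list("ABACABDBACD")
    print(fifo_interrupts(requests, 2))       # 9 interrupts out of 11 requests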

 

Q.4 Explain the Least Recently Used page replacement policy.

The least recently used (LRU) page replacement policy swaps out the pages that show the least amount of recent activity, figuring that these pages are the least likely to be used again in the immediate future. Conversely, if a page is used, it is likely to be used again soon; this is based on the theory of locality. To implement the LRU policy, a queue of the requests is kept in FIFO order, a time stamp of each page's most recent use is saved, or a reference mark is made periodically in the job's PMT.

The efficiency of this configuration is only slightly better than with FIFO. Here, there are 8 page interrupts out of 11 page requests, so the failure rate is 8/11, or 73 percent. In this example, an increase in main memory by one page frame would increase the success rate of both FIFO and LRU. However, we can't conclude on the basis of only one example that one policy is better than the others. In fact, LRU is a stack algorithm removal policy, which means that an increase in memory will never cause an increase in the number of page interrupts.

A variation of the LRU page replacement algorithm is known as the clock page replacement policy because it is implemented with a circular queue and uses a pointer to step through the reference bits of the active pages, simulating a clockwise motion.
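The 8-interrupt result can be checked with a similar sketch for LRU (again assuming two page frames and the same request order as in the FIFO example; the helper name lru_interrupts is illustrative):

    def lru_interrupts(requests, frame_count):
        """Count page interrupts under LRU replacement."""
        frames = []                            # least recently used page at the front
        interrupts = 0
        for page in requests:
            if page in frames:
                frames.remove(page)            # refresh recency: move page to the back
            else:
                interrupts += 1                # page interrupt
                if len(frames) == frame_count:
                    frames.pop(0)              # evict the least recently used page
            frames.append(page)
        return interrupts

    print(lru_interrupts(list("ABACABDBACD"), 2))   # 8 interrupts out of 11 requests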

 

Q.5 Explain segmented memory allocation.

The concept of segmentation is based on the common practice by programmers of structuring their programs in modules (logical groupings of code). With segmented memory allocation, each job is divided into several segments of different sizes, one for each module, each containing the pieces that perform related functions. Segmented memory allocation was designed to reduce the page faults that resulted from having a segment's loop split over two or more pages. A subroutine is an example of one such logical group. This is fundamentally different from a paging scheme, which divides the job into several pages, all of the same size, each of which often contains pieces from more than one program module.

 

Q.6 Explain Segmented/Demand Paged Memory Allocation.

It is a combination of segmentation and demand paging, and it offers the logical benefits of segmentation as well as the physical benefits of paging. The logic isn't new: the algorithms used by the demand paging and segmented memory management schemes are applied here with only minor modifications. This allocation scheme doesn't keep each segment as a single contiguous unit but subdivides it into pages of equal size, smaller than most segments and more easily manipulated than whole segments. Therefore, many of the problems of segmentation (compaction, external fragmentation, and secondary storage handling) are removed because the pages are of fixed length.

This scheme requires four tables:

• The Job Table lists every job in process (one for the whole system).

• The Segment Map Table lists details about each segment (one for each job).

• The Page Map Table lists details about every page (one for each segment).

• The Memory Map Table monitors the allocation of the page frames in main memory (one for the whole system).
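As a rough sketch of how an address could be resolved under this scheme, assuming an illustrative page size of 100 and invented segment and page table contents (none of these values come from the question):

    PAGE_SIZE = 100                              # assumed page size for illustration

    # Segment Map Table: segment number -> that segment's Page Map Table.
    # Each PMT maps page number -> page frame (values invented for the example).
    segment_map_table = {
        0: [3, 7],        # segment 0 occupies frames 3 and 7
        1: [2, 9, 4],     # segment 1 occupies frames 2, 9, and 4
    }

    def resolve(segment, byte_number):
        """Resolve (segment number, byte within segment) to a physical address."""
        pmt = segment_map_table[segment]          # SMT lookup: find the segment's PMT
        page, displacement = divmod(byte_number, PAGE_SIZE)
        frame = pmt[page]                         # PMT lookup: find the page frame
        return frame * PAGE_SIZE + displacement

    print(resolve(1, 214))   # byte 214 of segment 1 -> page 2 in frame 4, offset 14 -> 414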

 

Q.7 Explain virtual memory.

Demand paging made it possible for a program to execute even though only a part of the program was loaded into main memory. This capability of moving pages at will between main memory and secondary storage gave way to a new concept appropriately named virtual memory. In effect, virtual memory removed the restriction imposed on maximum program size. Even though only a portion of each program is stored in memory, it gives users the appearance that their programs are being completely loaded into main memory during their entire processing time, a feat that would otherwise require an incredible amount of main memory.

During the second generation, programmers started dividing their programs into sections that resembled working sets (really, segments), originally called roll in/roll out and now called overlays. The program could begin with only the first overlay loaded into memory. As the first section neared completion, it would instruct the system to lay the second section of code over the first section already in memory. Then the second section would be processed. As that section finished, it would call in the third section to be overlaid, and so on until the program was finished. Some programs had multiple overlays in main memory at once.