Operating Systems, 9th Edition: Selected End-of-Chapter Exercise Answers and Analysis

CHAPTER 9 Virtual Memory

Practice Exercises

9.1 Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs.

Answer:
A page fault occurs when an access is made to a page that has not been brought into main memory. The operating system verifies the memory access, aborting the program if it is invalid. If it is valid, a free frame is located and I/O is requested to read the needed page into the free frame. Upon completion of the I/O, the process table and page table are updated and the instruction is restarted.

9.2 Assume that you have a page-reference string for a process with m frames (initially all empty). The page-reference string has length p, and n distinct page numbers occur in it. Answer these questions for any page-replacement algorithm:

a. What is a lower bound on the number of page faults?

b. What is an upper bound on the number of page faults?

Answer:
a. n. Each of the n distinct pages must be brought into memory at least once, so no algorithm can fault fewer than n times.

b. p. In the worst case, every one of the p references causes a fault.

9.3 Consider the page table shown in Figure 9.30 for a system with 12-bit virtual and physical addresses and with 256-byte pages. The list of free page frames is D, E, F (that is, D is at the head of the list, E is second, and F is last).

Convert the following virtual addresses to their equivalent physical addresses in hexadecimal. All numbers are given in hexadecimal. (A dash for a page frame indicates that the page is not in memory.)

• 9EF
• 111
• 700
• 0FF

Answer:
• 9EF - 0EF
• 111 - 211
• 700 - D00
• 0FF - EFF
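
For reference, here is a minimal Java sketch of the translation: the high 4 bits of a 12-bit address select the page and the low 8 bits (256-byte pages) are the offset. The page-table entries and free-frame handling below are assumptions inferred from the answers above (page 1 in frame 2, page 9 in frame 0, pages 7 and 0 not resident and assigned free frames D and E in order); they are not copied from Figure 9.30 itself.

import java.util.*;

public class Translate {
    // inferred resident mappings: page -> frame
    static Map<Integer, Integer> pageTable = new HashMap<>(Map.of(0x1, 0x2, 0x9, 0x0));
    // free-frame list D, E, F (D at the head)
    static Deque<Integer> freeFrames = new ArrayDeque<>(List.of(0xD, 0xE, 0xF));

    static int toPhysical(int virtual) {
        int page = virtual >> 8;          // top 4 bits
        int offset = virtual & 0xFF;      // bottom 8 bits
        Integer frame = pageTable.get(page);
        if (frame == null) {              // page fault: take the next free frame
            frame = freeFrames.removeFirst();
            pageTable.put(page, frame);
        }
        return (frame << 8) | offset;
    }

    public static void main(String[] args) {
        for (int v : new int[]{0x9EF, 0x111, 0x700, 0x0FF})
            System.out.printf("%03X -> %03X%n", v, toPhysical(v));  // matches the answers above
    }
}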

9.4 Consider the following page-replacement algorithms. Rank these algorithms on a five-point scale from "bad" to "perfect" according to their page-fault rate. Separate those algorithms that suffer from Belady's anomaly from those that do not.

a. LRU replacement
b. FIFO replacement
c. Optimal replacement
d. Second-chance replacement

Answer:

Rank   Algorithm       Suffers from Belady's anomaly
1      Optimal         no
2      LRU             no
3      Second-chance   yes
4      FIFO            yes

9.5 Discuss the hardware support required to support demand paging.

Answer:
For every memory-access operation, the page table needs to be consulted to check whether the corresponding page is resident and whether the program has read or write privileges for accessing it. These checks have to be performed in hardware. A TLB can serve as a cache of page-table entries and improve the performance of the lookup operation.

9.6 An operating system supports a paged virtual memory, using a central processor with a cycle time of 1 microsecond. It costs an additional 1 microsecond to access a page other than the current one. Pages have 1000 words, and the paging device is a drum that rotates at 3000 revolutions per minute and transfers 1 million words per second. The following statistical measurements were obtained from the system:

• 1 percent of all instructions executed accessed a page other than the current page.
• Of the instructions that accessed another page, 80 percent accessed a page already in memory.
• When a new page was required, the replaced page was modified 50 percent of the time.

Calculate the effective instruction time on this system, assuming that the system is running one process only and that the processor is idle during drum transfers.

Answer:
effective instruction time = 0.99 × (1 μsec) + 0.008 × (2 μsec)
    + 0.002 × (10,000 μsec + 1,000 μsec)
    + 0.001 × (10,000 μsec + 1,000 μsec)
    = (0.99 + 0.016 + 22.0 + 11.0) μsec
    ≈ 34.0 μsec
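
Where the figures come from (one consistent accounting that matches the totals above; the device times follow directly from the problem statement):

rotational latency = (1/2) × (60 / 3000) sec = 10 ms = 10,000 μsec
transfer time for one 1000-word page = 1000 / 1,000,000 sec = 1 ms = 1,000 μsec

Per instruction: 99 percent stay on the current page (1 μsec); 0.8 percent touch another page that is already resident (2 μsec); the remaining 0.2 percent fault and must read a page in (10,000 + 1,000 μsec), and half of those faults (0.1 percent) must also write back a modified page first (another 10,000 + 1,000 μsec).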

9.7 Consider the two-dimensional array A:

int A[][] = new int[100][100];

where A[0][0] is at location 200 in a paged memory system with pages of size 200. A small process that manipulates the matrix resides in page 0 (locations 0 to 199). Thus, every instruction fetch will be from page 0.

For three page frames, how many page faults are generated by the following array-initialization loops, using LRU replacement and assuming that page frame 1 contains the process and the other two are initially empty?

a. for (int j = 0; j < 100; j++)
       for (int i = 0; i < 100; i++)
           A[i][j] = 0;

b. for (int i = 0; i < 100; i++)
       for (int j = 0; j < 100; j++)
           A[i][j] = 0;

Answer:
a. 5,000
b. 50
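
The counts can be checked with a short simulation. This is a sketch under the usual assumptions (row-major storage, one array element per memory location starting at location 200, 200-location pages, and two frames available for data pages while the third permanently holds page 0):

import java.util.*;

public class LruArrayFaults {
    static int countFaults(boolean innerLoopOverRows) {
        LinkedHashMap<Integer, Boolean> frames =
            new LinkedHashMap<>(4, 0.75f, true);        // access-order = LRU
        int faults = 0;
        for (int x = 0; x < 100; x++)
            for (int y = 0; y < 100; y++) {
                int i = innerLoopOverRows ? y : x, j = innerLoopOverRows ? x : y;
                int page = (200 + 100 * i + j) / 200;   // A[i][j] lives at location 200 + 100*i + j
                if (!frames.containsKey(page)) {
                    faults++;
                    if (frames.size() == 2)             // evict the least recently used data page
                        frames.remove(frames.keySet().iterator().next());
                }
                frames.put(page, true);                 // record the reference
            }
        return faults;
    }

    public static void main(String[] args) {
        System.out.println("a. " + countFaults(true));   // 5000
        System.out.println("b. " + countFaults(false));  // 50
    }
}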

9.8 Consider the following page-reference string:

1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6.

How many page faults would occur for the following replacement algorithms, assuming one, two, three, four, five, six, or seven frames? Remember that all frames are initially empty, so your first unique pages will all cost one fault each.

• LRU replacement
• FIFO replacement
• Optimal replacement

Answer:

Number of frames   LRU   FIFO   Optimal
1                  20    20     20
2                  18    18     15
3                  15    16     11
4                  10    14     8
5                  8     10     7
6                  7     10     7
7                  7     7      7
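
The table can be reproduced with a small simulation of the three policies on the reference string (a sketch; it assumes pure demand paging, with Optimal evicting the resident page whose next use lies farthest in the future):

import java.util.*;

public class PageFaultTable {
    static final int[] REFS = {1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6};

    static int fifo(int frames) {
        Deque<Integer> queue = new ArrayDeque<>();
        Set<Integer> resident = new HashSet<>();
        int faults = 0;
        for (int p : REFS) {
            if (resident.contains(p)) continue;
            faults++;
            if (queue.size() == frames) resident.remove(queue.removeFirst());
            queue.addLast(p);
            resident.add(p);
        }
        return faults;
    }

    static int lru(int frames) {
        LinkedHashSet<Integer> resident = new LinkedHashSet<>();  // oldest entry = least recently used
        int faults = 0;
        for (int p : REFS) {
            if (resident.remove(p)) { resident.add(p); continue; }  // hit: refresh recency
            faults++;
            if (resident.size() == frames)
                resident.remove(resident.iterator().next());        // evict least recently used
            resident.add(p);
        }
        return faults;
    }

    static int optimal(int frames) {
        Set<Integer> resident = new HashSet<>();
        int faults = 0;
        for (int t = 0; t < REFS.length; t++) {
            int p = REFS[t];
            if (resident.contains(p)) continue;
            faults++;
            if (resident.size() == frames) {
                int victim = -1, farthest = -1;
                for (int q : resident) {                 // evict the page used farthest in the future
                    int next = Integer.MAX_VALUE;
                    for (int u = t + 1; u < REFS.length; u++)
                        if (REFS[u] == q) { next = u; break; }
                    if (next > farthest) { farthest = next; victim = q; }
                }
                resident.remove(victim);
            }
            resident.add(p);
        }
        return faults;
    }

    public static void main(String[] args) {
        for (int f = 1; f <= 7; f++)
            System.out.printf("%d frames: LRU=%d FIFO=%d Optimal=%d%n",
                              f, lru(f), fifo(f), optimal(f));
    }
}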

9.9 Suppose that you want to use a paging algorithm that requires a reference bit (such as second-chance replacement or working-set model), but the hardware does not provide one. Sketch how you could simulate a reference bit even if one were not provided by the hardware, or explain why it is not possible to do so. If it is possible, calculate what the cost would be.

Answer:
You can use the valid/invalid bit supported in hardware to simulate the reference bit. Initially set the bit to invalid. On the first reference, a trap to the operating system is generated. The operating system will set a software bit to 1 and reset the valid/invalid bit to valid. The cost is one extra trap for each page on its first reference after the bits are cleared.

9.10 You have devised a new page-replacement algorithm that you think may be optimal. In some contorted test cases, Belady's anomaly occurs. Is the new algorithm optimal? Explain your answer.

Answer:
No. An optimal algorithm will not suffer from Belady's anomaly because—by definition—an optimal algorithm replaces the page that will not be used for the longest time. Belady's anomaly occurs when a page-replacement algorithm evicts a page that will be needed in the immediate future. An optimal algorithm would not have selected such a page.

9.11 Segmentation is similar to paging but uses variable-sized "pages." Define two segment-replacement algorithms based on FIFO and LRU page-replacement schemes. Remember that since segments are not the same size, the segment that is chosen to be replaced may not be big enough to leave enough consecutive locations for the needed segment. Consider strategies for systems where segments cannot be relocated, and those for systems where they can.

Answer:
a. FIFO. Find the first segment large enough to accommodate the incoming segment. If relocation is not possible and no single segment is large enough, select a combination of segments whose memories are contiguous, which are "closest to the first of the list," and which can accommodate the new segment. If relocation is possible, rearrange the memory so that the first N segments large enough for the incoming segment are contiguous in memory. Add any leftover space to the free-space list in both cases.

b. LRU. Select the segment that has not been used for the longest period of time and that is large enough, adding any leftover space to the free-space list. If no single segment is large enough, select a combination of the "oldest" segments that are contiguous in memory (if relocation is not available) and that are large enough. If relocation is available, rearrange the oldest N segments to be contiguous in memory and replace them with the new segment.

9.12 Consider a demand-paged computer system where the degree of multiprogramming is currently fixed at four. The system was recently measured to determine utilization of the CPU and the paging disk. The results are one of the following alternatives. For each case, what is happening? Can the degree of multiprogramming be increased to increase the CPU utilization? Is the paging helping?

a. CPU utilization 13 percent; disk utilization 97 percent
b. CPU utilization 87 percent; disk utilization 3 percent
c. CPU utilization 13 percent; disk utilization 3 percent

Answer:
a. Thrashing is occurring; increasing the degree of multiprogramming would only make it worse.
b. CPU utilization is sufficiently high and the paging disk is nearly idle, so the degree of multiprogramming can be increased.
c. Both the CPU and the paging disk are mostly idle; increase the degree of multiprogramming.

9.13 We have an operating system for a machine that uses base and limit registers, but we have modified the machine to provide a page table. Can the page tables be set up to simulate base and limit registers? How can they be, or why can they not be?

Answer:
The page table can be set up to simulate base and limit registers provided that the memory is allocated in fixed-size segments. In this way, the base of a segment can be entered into the page table and the valid/invalid bit used to indicate that portion of the segment as resident in memory. There will be some problem with internal fragmentation.

Consider a demand-paging system with the following time-measured utilizations:

CPU utilization: 20%
Paging disk: 97.7%
Other I/O devices: 5%

Which (if any) of the following will (probably) improve CPU utilization? Explain your answer.

a. Install a faster CPU.
b. Install a bigger paging disk.
c. Increase the degree of multiprogramming.
d. Decrease the degree of multiprogramming.
e. Install more main memory.
f. Install a faster hard disk or multiple controllers with multiple hard disks.
g. Add prepaging to the page-fetch algorithms.
h. Increase the page size.

Answer:
The system is obviously spending most of its time paging, indicating over-allocation of memory. If the degree of multiprogramming is reduced, resident processes would page-fault less frequently and CPU utilization would improve. Another way to improve performance would be to get more physical memory or a faster paging disk.

a. Install a faster CPU—No.
b. Install a bigger paging disk—No.
c. Increase the degree of multiprogramming—No.
d. Decrease the degree of multiprogramming—Yes.
e. Install more main memory—Likely to improve CPU utilization, as more pages can remain resident and not require paging to or from the disk.
f. Install a faster hard disk, or multiple controllers with multiple hard disks—Also an improvement: as the disk bottleneck is removed by faster response and more throughput to the disks, the CPU will get more data more quickly.
g. Add prepaging to the page-fetch algorithms—Again, the CPU will get more data faster, so it will be more in use. This helps only if the paging pattern is amenable to prefetching (i.e., some of the access is sequential).
h. Increase the page size—Increasing the page size will result in fewer page faults if data is being accessed sequentially. If data access is more or less random, more paging action could ensue because fewer pages can be kept in memory and more data is transferred per page fault. So this change is as likely to decrease CPU utilization as it is to increase it.

CHAPTER 10 Mass-Storage Structure

Practice Exercises

10.1 Is disk scheduling, other than FCFS scheduling, useful in a single-user environment? Explain your answer.

Answer:
In a single-user environment, the I/O queue usually is empty. Requests generally arrive from a single process for one block or for a sequence of consecutive blocks. In these cases, FCFS is an economical method of disk scheduling. But LOOK is nearly as easy to program and will give much better performance when multiple processes are performing concurrent I/O, such as when a Web browser retrieves data in the background while the operating system is paging and another application is active in the foreground.

10.2 Explain why SSTF scheduling tends to favor middle cylinders over the innermost and outermost cylinders.

Answer:
The center of the disk is the location having the smallest average distance to all other cylinders, so the disk head tends to move away from the edges of the disk. Here is another way to think of it: the current location of the head divides the cylinders into two groups. If the head is not in the center of the disk and a new request arrives, the new request is more likely to be in the group that includes the center of the disk; thus, the head is more likely to move in that direction.

10.11 Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The drive is currently serving a request at cylinder 143, and the previous request was at cylinder 125. The queue of pending requests, in FIFO order, is:

86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130

Starting from the current head position, what is the total distance (in cylinders) that the disk arm moves to satisfy all the pending requests for each of the following disk-scheduling algorithms?

a. FCFS
b. SSTF
c. SCAN
d. LOOK
e. C-SCAN

Answer:
a. The FCFS schedule is 143, 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. The total seek distance is 7081.
b. The SSTF schedule is 143, 130, 86, 913, 948, 1022, 1470, 1509, 1750, 1774. The total seek distance is 1745.
c. The SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130, 86. The total seek distance is 9769.
d. The LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 130, 86. The total seek distance is 3319.
e. The C-SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 86, 130. The total seek distance is 9813.
f. (Bonus.) The C-LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 86, 130. The total seek distance is 3363.
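
A quick way to check these totals is to sum the cylinder-to-cylinder distances along each schedule exactly as it is written out above (so, for example, the C-SCAN jump from 4999 down to 86 is counted the same way the answer counts it):

public class SeekDistance {
    // total arm movement along a schedule of cylinder numbers
    static int total(int... schedule) {
        int sum = 0;
        for (int i = 1; i < schedule.length; i++)
            sum += Math.abs(schedule[i] - schedule[i - 1]);
        return sum;
    }

    public static void main(String[] args) {
        System.out.println("FCFS   = " + total(143, 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130));        // 7081
        System.out.println("SSTF   = " + total(143, 130, 86, 913, 948, 1022, 1470, 1509, 1750, 1774));        // 1745
        System.out.println("SCAN   = " + total(143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130, 86));  // 9769
        System.out.println("LOOK   = " + total(143, 913, 948, 1022, 1470, 1509, 1750, 1774, 130, 86));        // 3319
        System.out.println("C-SCAN = " + total(143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 86, 130));  // 9813
        System.out.println("C-LOOK = " + total(143, 913, 948, 1022, 1470, 1509, 1750, 1774, 86, 130));        // 3363
    }
}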

CHAPTER 12 File-System Implementation

Practice Exercises

12.1 Consider a file currently consisting of 100 blocks. Assume that the file-control block (and the index block, in the case of indexed allocation) is already in memory. Calculate how many disk I/O operations are required for contiguous, linked, and indexed (single-level) allocation strategies if, for one block, the following conditions hold. In the contiguous-allocation case, assume that there is no room to grow at the beginning but there is room to grow at the end. Also assume that the block information to be added is stored in memory.

a. The block is added at the beginning.
b. The block is added in the middle.
c. The block is added at the end.
d. The block is removed from the beginning.
e. The block is removed from the middle.
f. The block is removed from the end.

Answer:

     Contiguous   Linked   Indexed
a.   201          1        1
b.   101          52       1
c.   1            3        1
d.   198          1        0
e.   98           52       0
f.   0            100      0
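
One consistent accounting for a few representative entries (illustrative only; other bookkeeping conventions lead to the same totals):

Contiguous, add at the beginning (a): with no room to grow at the front, every existing block shifts one position toward the end, then the new block is written: 100 reads + 100 writes + 1 write = 201.
Contiguous, remove from the beginning (d): the remaining 99 blocks shift up: 99 reads + 99 writes = 198.
Linked, add in the middle (b): follow the chain through 50 blocks to reach the insertion point, rewrite the predecessor's pointer, then write the new block: 50 reads + 1 write + 1 write = 52.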

12.2 What problems could occur if a system allowed a file system to be mounted simultaneously at more than one location?

Answer:
There would be multiple paths to the same file, which could confuse users or encourage mistakes (deleting a file with one path deletes the file in all the other paths).

12.3 Why must the bit map for file allocation be kept on mass storage, rather than in main memory?

Answer:
In case of a system crash (memory failure), the free-space list would not be lost, as it would be if the bit map had been stored only in main memory.

12.4 Consider a system that supports the strategies of contiguous, linked, and indexed allocation. What criteria should be used in deciding which strategy is best utilized for a particular file?

Answer:
• Contiguous—if the file is usually accessed sequentially and is relatively small.
• Linked—if the file is large and usually accessed sequentially.
• Indexed—if the file is large and usually accessed randomly.

12.5 One problem with contiguous allocation is that the user must preallocate enough space for each file. If the file grows to be larger than the space allocated for it, special actions must be taken. One solution to this problem is to define a file structure consisting of an initial contiguous area (of a specified size). If this area is filled, the operating system automatically defines an overflow area that is linked to the initial contiguous area. If the overflow area is filled, another overflow area is allocated. Compare this implementation of a file with the standard contiguous and linked implementations.

Answer:
This method requires more overhead than standard contiguous allocation but less overhead than standard linked allocation.

12.6 How do caches help improve performance? Why do systems not use more or larger caches if they are so useful?

Answer:
Caches allow components of differing speeds to communicate more efficiently by storing data from the slower device, temporarily, in a faster device (the cache). Caches are, almost by definition, more expensive than the device they are caching for, so increasing the number or size of caches would increase system cost.

12.7 Why is it advantageous for the user for an operating system to dynamically allocate its internal tables? What are the penalties to the operating system for doing so?

Answer:
Dynamic tables allow more flexibility as system use grows—tables are never exceeded, avoiding artificial usage limits. Unfortunately, kernel structures and code become more complicated, so there is more potential for bugs. The use of one resource can also take away more system resources (by growing to accommodate the requests) than with static allocation.

12.8 Explain how the VFS layer allows an operating system to support multiple types of file systems easily.

Answer:
VFS introduces a layer of indirection in the file-system implementation. In many ways, it is similar to object-oriented programming techniques. System calls can be made generically (independent of file-system type). Each file-system type provides its function calls and data structures to the VFS layer. A system call is translated into the proper specific functions for the target file system at the VFS layer. The calling program has no file-system-specific code, and the upper levels of the system-call structure likewise are file-system independent. The translation at the VFS layer turns these generic calls into file-system-specific operations.
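
The object-oriented analogy can be made concrete with a small, purely illustrative Java sketch (hypothetical class and method names; a real VFS layer, such as Linux's, is built from C structures of function pointers rather than Java interfaces):

import java.util.*;

// Generic operations that every file-system type must supply to the "VFS".
interface FileSystemOps {
    String read(String path);
}

class Ext4Like implements FileSystemOps {
    public String read(String path) { return "ext4-style read of " + path; }
}

class NfsLike implements FileSystemOps {
    public String read(String path) { return "NFS-style read of " + path; }
}

public class Vfs {
    // mount table: mount point -> concrete file system (longest prefix wins)
    private final Map<String, FileSystemOps> mounts = new TreeMap<>(Comparator.reverseOrder());

    void mount(String mountPoint, FileSystemOps fs) { mounts.put(mountPoint, fs); }

    // The generic "system call": file-system independent; the VFS picks the
    // mounted implementation and forwards the call to it.
    String read(String path) {
        for (Map.Entry<String, FileSystemOps> e : mounts.entrySet())
            if (path.startsWith(e.getKey()))
                return e.getValue().read(path);
        throw new IllegalArgumentException("no file system mounted for " + path);
    }

    public static void main(String[] args) {
        Vfs vfs = new Vfs();
        vfs.mount("/", new Ext4Like());
        vfs.mount("/net", new NfsLike());
        System.out.println(vfs.read("/home/user/notes.txt")); // dispatched to Ext4Like
        System.out.println(vfs.read("/net/share/data.txt"));  // dispatched to NfsLike
    }
}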

