Memory Flashcards

1
Q

What is a logic level?

A

In analogue terms, 1s and 0s are divided by a threshold voltage. So whilst 0 and 1 are convenient abstractions, they do not reflect reality.

2
Q

What are the main criteria for designing a memory system?

A

Rapid data access, while taking up as little space as possible.

3
Q

What assumptions must be made to model digital discrete circuits as networks?

A

The nodes are unidirectional, instantaneous, isopotential nets interconnecting devices.

4
Q

How are digital ‘networks’ analysed?

A

Events (logic transitions) are generated on the outputs at some future time in response to events asserted on the inputs.

As such, analysis is centred on an event queue: a list of events ordered by time.
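A minimal sketch of the event-queue idea (not from the source, purely illustrative): events are kept ordered by time and popped in sequence; a real simulator would evaluate the gates driven by each net and schedule the resulting output events. The Event type and schedule() helper are hypothetical names.

/* Illustrative event-queue core for a discrete-event logic simulator. */
#include <stdio.h>

typedef struct { double time; int net; int value; } Event;

#define MAX_EVENTS 64
static Event queue[MAX_EVENTS];   /* kept sorted by time */
static int n_events = 0;

/* Insert an event in time order (simple sorted-array insertion). */
static void schedule(double time, int net, int value) {
    int i = n_events++;
    while (i > 0 && queue[i - 1].time > time) { queue[i] = queue[i - 1]; i--; }
    queue[i] = (Event){ time, net, value };
}

int main(void) {
    schedule(5.0, 1, 1);   /* input net 1 goes high at t = 5 ns */
    schedule(2.0, 0, 0);   /* input net 0 goes low  at t = 2 ns */

    /* Pop events in time order; gate evaluation would go here. */
    for (int i = 0; i < n_events; i++)
        printf("t=%.1f ns: net %d -> %d\n", queue[i].time, queue[i].net, queue[i].value);
    return 0;
}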

5
Q

How is the system state of a discrete system represented?

A

The state can be summarised as the logic values in the nets and the event queue, i.e. events that will happen.

6
Q

What happens when you assert an input?

A

Logic values don’t propagate instantaneously, so although there will eventually be an output change, for a brief time the event is hidden from the device; it is nevertheless still part of the state.

7
Q

How are logic drivers modelled?

A

Logic drivers can be modelled as a voltage source with an ‘internal’ impedance.

8
Q

What effect does modelling the driven gate as a resistor have?

A

Since the driven gate is modelled as a resistor, the voltage on the interconnect is loaded. This means that if multiple gates read from the same interconnect, they may pull the voltage down, leading to a misrepresentation of the intended logic value.

9
Q

What does fanout measure?

A

Fanout is a measure of the max number of gates an output can drive before the voltage value is pulled down to the point where the voltage may be incorrectly read.

10
Q

What happens when interconnects are driven at high speeds?

A

The interconnect begins to behave more like a transmission line: a signal can be sent, reach the destination and reflect. The reflected signal returns after bouncing off the origin, leading to a spurious second logic event.

11
Q

What is the forbidden zone in circuits?

A

The forbidden zone refers to either

between the thresholds: 0-threshold < V < 1-threshold

or out of range: V < 0 V or V > Vcc

12
Q

How does CMOS deal with voltages in the forbidden zone?

A

CMOS devices appear to behave as simple threshold devices, so it can be assumed that voltages below the ‘0’ threshold are treated as logic ‘0’ whilst voltages above the ‘1’ threshold are treated as logic ‘1’. For voltages in between, manufacturers make no promises.

13
Q

How does TTL deal with the forbidden zone?

A

Holding the voltage at 1 V will lead to a roughly 1 MHz oscillation on the output; the exact value depends on the system.

Thus a maximum transit time may be specified, as a measure of how fast you are expected to transit through the zone.

14
Q

Describe the logic states high and low

A

These are ‘disconnected but still have a value’ states. They are capable of being read, like a 0 or 1, but will not fight if connected to a forcing value. 0, 1 and X are the low-impedance equivalents of L, H and Z.

15
Q

Describe how the effective output impedance relates to logic values?

A

The effective output impedance of the driving gate determines the strength of the logic value. For ‘0’, ‘1’ and ‘X’, Rforce is as low as can practically be made; the corresponding fanout is high because the low value of Rforce allows Vhigh (or Vlow) to be asserted easily on the driven gates. For ‘H’, ‘L’ and ‘Z’, the output impedance (Rweak) is quite high, so that if any other gate asserts a ‘0’, ‘1’ or ‘X’, it easily overcomes the high-impedance version.

16
Q

How are events generated?

A

Output events are generated in response to changes (events) on the inputs.

17
Q

How do you determine the difference between 1 and high?

A

To detect the difference between ‘1’ and ‘H’, you have to have some means of loading the signal and measuring d(signal)/d(load), since measuring the voltage alone is insufficient to tell the difference.

18
Q

Describe high impedance Z in computing

A

The voltage value is usually stored on some parasitic capacitance and will decay with time. This might sound a bit tenuous, but it is the fundamental underlying physical principle of DRAM.

A ‘control’ or bus-driver gate has (conventional) data inputs, plus a ‘control’ input, which may be used to put the output into a high-impedance state.

19
Q

Describe the conflict logic state

A

This is an abstract logic state that unambiguously indicates a problem. It may arise, for example, if two non-high-impedance gate outputs are connected together.

If they both assert the same value, there is no problem: the drive capability is increased. If the two gates attempt to assert different low-impedance states (‘0’ and ‘1’, for example), then the resultant logic value will depend on the relative values of the two output impedances; more importantly, the current flowing through the two output circuits can be arbitrarily high. The physical behaviour goes way outside the modelling intent, and in some logic families device destruction will result. Even if a device can withstand such an insult transiently, its lifetime will be considerably compromised. Ideally, a simulation would tell you this.

20
Q

Describe X (Don’t Know) State

A

This is a low-impedance (forcing) signal whose value we don’t know. Again, it cannot occur in reality: a ‘0’ is a ‘0’ and a ‘1’ is a ‘1’, but an ‘X’ will be either a ‘0’ or a ‘1’.

Its existence may or may not indicate a problem; if it propagates through a system, it is indicative of subtle (i.e. bad) design.

Like the conflict state, you cannot measure it directly, but also like the conflict, you want to know if you have one.

21
Q

Describe U, uninitialised

A

This is again something you cannot measure in reality; it is a way for a simulation to say “nothing that has ever happened has caused me to compute a value for this signal” - which begs the question “why is it here?”

22
Q

Should you have more logic states?

A

Yes - VHDL has nine distinct logic states, for example. The more states, the more information you can extract about your design, for only a modest increase in compute.

23
Q

What is the source of the Z persistence?

A

Most of the ‘unseen’ components that make up a system - nets, pins and so on - have a parasitic capacitance associated with them. This capacitance stores charge, which is the source of the Z persistence.

24
Q

Effect of capacitance on a logic driver

A

To assert a logic value usually means the driver has to
inject or remove that charge. This takes time and requires energy. It is easier, from a practical perspective, to produce a driver that is good at either injection or removal, as opposed to both.

25
Q

What are the goals for memory design

A
  • To make the user believe processes are executing simultaneously.
  • To make it appear to each process that infinite memory is available and can be accessed infinitely quickly.
26
Q

Describe the memory hierarchy

A

The memory hierarchy refers to the order of primacy for memory, i.e. which memory is used first:

Disk → Memory → Cache → Register

27
Q

What is swapping in Memory?

A

• The OS stores a queue of processes waiting to execute.
• As space becomes available, processes are loaded. When a process finishes, it is removed.
• A process may be swapped out if stalled (e.g. waiting for I/O).

28
Q

Describe the idea of optimum swapping

A

Optimum swapping refers to maintaining a balance between too little and too much swapping. Too much, and the processor spends too much time on memory transfers.

Too little, and stalled processes aren’t swapped out when they should be.

29
Q

What is partitioning?

A

Partitioning is simply a method of splitting memory into pieces. There are two types, fixed size and variable size.

30
Q

Describe Fixed Size Partitioning

A

Fixed-size memory partitions are easy to administer (a logarithmic distribution of sizes is usually best).

31
Q

Describe Variable Size Partitions

A

Variable-size memory partitions:

Memory is allocated as required.

Fragmentation makes it harder to load incoming processes, which leads to memory wastage.

32
Q

Define Process Topology

A

Refers to what processes are running at what time.

33
Q

Describe Physical Address

A

Physical addresses refer to absolute locations in memory.

34
Q

Describe Logical Addresses

A

Logical addresses are relative to the address of the beginning of the program.

35
Q

What is paging?

A

Paging is the process of dividing the memory of a process into fixed-size pages, whilst physical memory is in turn divided into frames of the same size.

36
Q

How are pages mapped onto frames

A

Pages can be mapped non-contiguously to frames (i.e. not placed in a consecutive manner), so paging does not result in memory gaps, unlike partitioning.

37
Q

Pages and Logical Addressing

A

Logical addressing is more complicated:

• For a process, a single offset is not enough anymore.
• Each process has a page table.
• Logical addresses now refer to a page, and the page table translates the base address (logical page) to the physical address (frame).

38
Q

Why are page tables useful?

A

They are vital for logical addressing: processes are divided into multiple pages, and each page needs its logical address translated to a physical frame.

39
Q

How is swapping used with pages?

A

Pages, not processes, are swapped in and out.

Thus, we’ve created the illusion of “infinite” memory from the perspective of the process.

However, a paging supervisor is required to control the swap rate.

40
Q

Describe Mapping Logical to physical addressing

A

Logical addresses, which are used by software to reference sections of memory used by a process, function differently in a paging system.

A logical address refers to a page number, which is used as a key into the page table.

The page table identifies the frame, from which a base address can be derived. The base address is combined with the offset to obtain the full physical address.
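A minimal sketch of the translation just described (not from the source): the logical address is split into page number and offset, the page table maps page to frame, and the frame base is recombined with the offset. The 4 KiB page size and the page_table contents are assumptions for illustration.

#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12u                 /* assume 4 KiB pages */
#define PAGE_SIZE (1u << PAGE_BITS)

/* Hypothetical page table: index = page number, value = frame number. */
static const uint32_t page_table[] = { 7, 3, 12, 5 };

static uint32_t to_physical(uint32_t logical) {
    uint32_t page   = logical >> PAGE_BITS;         /* page number         */
    uint32_t offset = logical & (PAGE_SIZE - 1);    /* offset within page  */
    uint32_t frame  = page_table[page];             /* page-table lookup   */
    return (frame << PAGE_BITS) | offset;           /* frame base + offset */
}

int main(void) {
    uint32_t logical = 0x2ABC;   /* page 2, offset 0xABC */
    printf("logical 0x%X -> physical 0x%X\n", logical, to_physical(logical));
    return 0;
}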

41
Q

What is Page Thrashing? Why is this bad?

A

Page thrashing is when page faults happen too often. A page fault occurs when a process wants a page that has been swapped out, and the OS needs to go and fetch it.

This is a problem because the processor spends a lot of time (re)mapping pages to frames, as opposed to computing.

42
Q

Page Table is Swapped Out

A

If the page table is swapped out, each logical memory access can require two physical accesses:
1st: search through memory for the relevant page-table entry;
then: look up the data using the page table.
However, this method is slow, so a Translation Lookaside Buffer (TLB) is used.
The TLB is a cache that stores page table entries.
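A rough sketch of that two-step flow (not from the source): consult the TLB first and only walk the page table on a miss. The tiny direct-mapped TLB, the page_table_walk() stand-in and all field names are assumptions for illustration.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { bool valid; uint32_t page; uint32_t frame; } TlbEntry;

#define TLB_SLOTS 16u
static TlbEntry tlb[TLB_SLOTS];

/* Stand-in for the slow walk of the in-memory page table. */
static uint32_t page_table_walk(uint32_t page) {
    static const uint32_t table[] = { 9, 4, 1, 6 };
    return table[page % 4];
}

static uint32_t translate_page(uint32_t page) {
    TlbEntry *e = &tlb[page % TLB_SLOTS];        /* direct-mapped TLB slot   */
    if (e->valid && e->page == page)
        return e->frame;                         /* TLB hit: fast path       */
    uint32_t frame = page_table_walk(page);      /* TLB miss: walk the table */
    *e = (TlbEntry){ true, page, frame };        /* cache the entry          */
    return frame;
}

int main(void) {
    printf("page 2 -> frame %u (miss)\n", translate_page(2));
    printf("page 2 -> frame %u (hit)\n",  translate_page(2));
    return 0;
}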

43
Q

What is the Translation Lookaside Buffer?

A

A cache which stores recent page table entries, used because pages (and page tables) can be swapped out.

44
Q

Purpose of virtual memory

A

To pre-emptively get the required data into the
most accessible place, without
the user having to care.

45
Q

Where can a page table be?

A
  • The TLB
  • Main memory
  • Disk
46
Q

Where can the data be stored?

A
  • Memory cache
  • Main memory
  • Disk
47
Q

Describe Segmentation

A

Segmentation is another memory-management mechanism.

The application splits memory into segments of variable size at compile time (unlike paging, where the user only sees one contiguous block).
• The compiler produces addresses of the form {segment, offset}.
• The linker reconciles this with the memory subsystem.
• Each segment has different access rights and privileges:
- Processes can be run in isolation, safely (segmentation fault).
- Self-modification can be prevented (Harvard).
- Data/cache can be shared between processes.

48
Q

How common is segmentation

A

Most x86 machines use segmentation with paging.

49
Q

How is segmentation implemented with paging?

A

Segmentation can be combined with paging. The compiler divides the program into segments at compile time, and the paging supervisor divides each segment into pages at run time.

A page table exists for each segment, and entries are stored in the translation lookaside buffer as before.
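A loose sketch of the combined scheme (not from the source): the segment selects its own page table, which then maps the page to a frame as before. The per-segment tables, sizes and names are assumptions for illustration.

#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12u

/* Hypothetical: one small page table per segment. */
static const uint32_t seg_page_tables[2][4] = {
    { 3, 8, 1, 5 },   /* segment 0 (e.g. code) */
    { 9, 2, 7, 0 },   /* segment 1 (e.g. data) */
};

static uint32_t resolve(uint32_t segment, uint32_t logical) {
    uint32_t page   = logical >> PAGE_BITS;
    uint32_t offset = logical & ((1u << PAGE_BITS) - 1);
    uint32_t frame  = seg_page_tables[segment][page];   /* per-segment page table */
    return (frame << PAGE_BITS) | offset;
}

int main(void) {
    printf("{segment 1, 0x1F00} -> 0x%X\n", resolve(1, 0x1F00));
    return 0;
}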

50
Q

Aims of Cache Systems

A

• Provide access speed close to that of the fastest memory.
• Provide capacity close to that of the most capacious memory.
• The cache contains a local copy of parts of the main memory.
• If the CPU wants some memory that is in the cache, it is returned quickly.
• If the CPU wants some memory that is not in the cache, the cache fetches it.

51
Q

What do caches store?

A

The cache contains a local copy of parts of the
main memory.

52
Q

Are there levels to cache?

A

Yes, the levels of caches depend on access speed and memory size.

53
Q

Structure of Memory Vs Cache

A

Memory is divided into blocks

Caches are divided into lines

Blocks and lines both store words.

We assume lines and blocks are the same sizes.

Each line has a tag corresponding to a block.

Memory has more blocks than the caches have lines

54
Q

Can caches be filled and fetched from at the same time?

A

Yes, caches can be filled and fetched from in parallel, as these typically involve different processes.

55
Q

Are cache misses errors?

A

No; at times they must happen, such as the first time a program or process is run - if it has not been run before, its data will not yet be in the cache.

56
Q

Are caches always shared?

A

No, caches are often split into levels based on the speed of access and size of memory.

In addition, these levels may be divided into instruction caches and data caches.

57
Q

Define Block in memory

A

Block: A set of words in main memory.

58
Q

Define Line in Memory

A

Line: A tagged entry in the cache, which holds a set of words.

59
Q

Define tag in memory

A

Tag: An ID associated with the cache line and its contents.

60
Q

Define cache hit

A

Cache hit: Process wants to read from memory, and the cache has
it, resulting in a faster lookup compared to a cache miss.

61
Q

Define Cache Miss

A

Cache miss: Process wants to read from memory, but the cache
does not have it, so the cache has to fetch it, resulting in a slower
lookup than a cache hit.

62
Q

Why are caches separated?

A

In part because different processes require different access patterns, so it is more efficient for different caches to serve different access patterns. Caches can be designed based on the type of access they handle (e.g. instructions vs data).

63
Q

What are tradeoffs to consider for caches

A

Should the cache behave more like a register or like memory, i.e. access speed vs storage size?

64
Q

What are mapping functions?

A

Caches are smaller than memory, so blocks need to be mapped onto cache lines. A mapping function determines which cache line a given memory block is related to.

There are three types:

  • Direct
  • Associative
  • Set Associative
65
Q

What is direct mapping?

A

• Each block of main memory maps onto only one line; a set of blocks is designated to the same line.

66
Q

Benefits and flaws of direct mapping

A

• Quick to calculate which line a given block should be placed in:
Cache line ID = Block number (in main memory) % Number of lines in the cache

• Cache thrashing: a high cache-miss rate - different blocks addressed by the process map to the same line and are alternately loaded and evicted. With direct mapping this can easily happen.
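A tiny sketch of the calculation above plus the tag check a direct-mapped cache would make (not from the source; the cache size and names are assumptions).

#include <stdint.h>
#include <stdbool.h>

#define NUM_LINES 256u

typedef struct { bool valid; uint32_t tag; } Line;
static Line cache[NUM_LINES];

/* Returns true on a cache hit for the given main-memory block number. */
static bool lookup(uint32_t block) {
    uint32_t index = block % NUM_LINES;   /* line ID = block number % number of lines */
    uint32_t tag   = block / NUM_LINES;   /* the remaining bits identify the block    */
    if (cache[index].valid && cache[index].tag == tag)
        return true;                      /* hit                                      */
    cache[index].valid = true;            /* miss: fetch the block, record its tag    */
    cache[index].tag   = tag;
    return false;
}

Alternating lookups of, say, blocks 0 and 256 would miss every time, since both map to line 0 with different tags - exactly the thrashing pattern described in the next card.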

67
Q

How to break direct mapping?

A

To cause thrashing, make alternate references to blocks in the same set (i.e. blocks that map to the same line). The blocks are never stored in the cache at the same time, since each line can only hold one memory block.

68
Q

What is associative mapping?

A

• Any block can go into any cache line.
• The tag is the address of the block that’s currently loaded.
• To check if the cache holds a block, the cache needs to (simultaneously) compare all cache tags with the desired block address.
• Expensive lookup, but flexible for a greater variety of access patterns.

69
Q

How to break associative mapping?

A

Break it with a loop: the number of blocks touched in the loop should be greater than the number of lines in the cache.

70
Q

What is set-associative mapping?

A
  • Compromise between direct and associative.
  • The cache is divided into a set of subcaches.
  • Each memory block can go into only one subcache (as in direct mapping), but it can go into any line of that subcache (as in associative mapping).
  • As subcache size → cache size (i.e. only one subcache), the cache becomes fully associative.
  • As subcache size → 1 (i.e. each subcache holds only one line), the cache becomes fully direct.
71
Q

What are replacement algorithms?

A

Replacement algorithms are used to determine which line is overwritten when a new block is loaded.

Algorithms include:

  • Least recently used
  • First-in, first-out
  • Least frequently used
  • Random
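A small sketch of one of the policies above, least recently used, using a per-line timestamp (not from the source; real caches use approximate hardware schemes, and the names are illustrative).

#include <stdint.h>

#define NUM_LINES 4u

static uint64_t last_used[NUM_LINES];   /* "time" each line was last accessed */
static uint64_t now = 0;

/* Record an access to a line. */
static void touch(uint32_t line) { last_used[line] = ++now; }

/* Choose the least recently used line as the victim to overwrite. */
static uint32_t choose_victim(void) {
    uint32_t victim = 0;
    for (uint32_t i = 1; i < NUM_LINES; i++)
        if (last_used[i] < last_used[victim])
            victim = i;
    return victim;
}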
72
Q

What is cache coherence?

A

The problem of keeping main memory and the
cache synchronised.

74
Q

What problems are associated with cache coherence?

A

Read access: No problem with a single access pathway, but DMA exists.

Write access: If a cache line has been written to (dirty), it must be pushed to the main memory before it’s replaced.

75
Q

What are the solutions to cache coherence?

A

Write Through: Update main memory whenever a cache line is written.

Write Back: Update main memory whenever a cache line is evicted.
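An illustrative sketch of the two policies (not from the source): write-through updates memory on every write, write-back sets a dirty bit and updates memory only when the line is evicted. The structures and names are assumptions.

#include <stdint.h>
#include <stdbool.h>
#include <string.h>

enum { WRITE_THROUGH, WRITE_BACK };

typedef struct { uint32_t tag; uint8_t data[32]; bool dirty; } Line;

/* Stand-in for the slow path to main memory. */
static void memory_write(uint32_t tag, const uint8_t *data) { (void)tag; (void)data; }

static void cache_write(Line *line, const uint8_t *data, int policy) {
    memcpy(line->data, data, sizeof line->data);
    if (policy == WRITE_THROUGH)
        memory_write(line->tag, line->data);   /* update memory on every write      */
    else
        line->dirty = true;                    /* defer: memory updated on eviction */
}

static void evict(Line *line, int policy) {
    if (policy == WRITE_BACK && line->dirty)
        memory_write(line->tag, line->data);   /* push the dirty line before replacing it */
    line->dirty = false;
}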

76
Q

What is write through?

A
  • All write operations are performed on the cache and main memory in parallel.
  • All modules with cache access must monitor all reads to maintain coherency. This leads to substantial memory traffic!
77
Q

What is Write Back?

A
  • When a line is modified, a “dirty” bit is set, and the main memory is only updated when a “dirty” line is replaced.
  • All access through one cache.
78
Q

What are the solutions to cache coherency for multiprocessor machines?

A

Bus watching, hardware transparency, and non-cacheable segments.

79
Q

Describe Bus watching

A

A solution to cache coherency in multiprocessors

In a write-through system, any write to the main
memory causes the immediate removal of cache lines containing the clobbered data.

80
Q

Describe Hardware transparency

A

A solution to cache coherency in multiprocessors

Used for write-through systems, each write to main memory causes an immediate update of all cache lines. Doesn’t delete and re-load entire cache.

81
Q

What are non-cacheable segments

A

A solution to cache coherency in multiprocessors

Prevent caching of certain regions of
(shared) memory.

82
Q

Large Vs Small Line Sizes

A

Smaller blocks/lines better exploit locality, but result in more fetch/replace operations, each transferring less data.

Larger lines mean fewer fetch operations are required; however, fewer lines fit in the cache.

In general, line size is between 8 and 32 bytes.

83
Q

What is cache granularity?

A

Whether and how to use cache levels, i.e. how to stagger the cache levels.

84
Q

Why use multilevel caches?

A

Different Caches are suitable for different processes.

85
Q

Why use a unified/Split Cache?

(Here ‘split’ means split between instructions and data, not by storage size.)

A
  • Unified cache has a better overall hit rate because the code/data imbalances even out.
  • For fast (pipeline) architectures, a Harvard cache eliminates data/code bus contention (in a pipeline, data and code can be requested out of order).
86
Q

How do caches and virtual memory interact?

A

• Cache systems and virtual memory systems operate largely orthogonally to each other - the virtual memory system exists “after” the caching system.

Addresses may be taken from the virtual memory system; however, caches are not aware of pages - they simply see memory blocks.

87
Q

How do caches relate to Translation Lookaside Buffer(TLB)?

A

• The interactions between cache systems and the translation lookaside buffer (TLB) are more complicated; the tags in the cache hold either

  • physical addresses (in which case they need to be translated by the TLB prior to lookup), or
  • logical addresses (in which case cache tags need to be kept in coherence with the state of the TLB).
88
Q

How do disks relate to memory analysis?

A
  • They typically have their own processor and memory.
  • They also have a layered cache system - certain “bits of magnetic space” are easier to get at than others.
  • Spatial locality really matters.
  • They typically have their own RAM caches.
89
Q

How is the response generated?

A

The response is generated as a sequence of events appearing on the output of each gate

90
Q

How has the performance of modern systems affected the analysis of systems

A

Can’t as easily assume voltage signals reach gates at the same time. A difference in path length causes a significant time difference.

91
Q

The voltage at receiver equation

A

Vr = (Vw · Rr) / (Rw + Rr)
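A quick worked example (the numbers are illustrative, not from the source): with Vw = 5 V, Rw = 100 Ω and a single receiver of Rr = 10 kΩ, Vr = 5 × 10000 / 10100 ≈ 4.95 V, comfortably a ‘1’. Ten such receivers in parallel give Rr = 1 kΩ, so Vr = 5 × 1000 / 1100 ≈ 4.55 V - the loading effect that limits fanout.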

92
Q

What happens if Rr → infinity

A

Vr = Vw

Rr >> Rw

93
Q

What happens if Rr → 0?

A

Vr →0, with Rw >> Rr

94
Q

How can the voltage/logic level be corrupted?

A

The voltage can be pulled down towards 0 if Rr → 0.

95
Q

What is TTL?

A

Transistor–transistor logic (TTL) is a logic family built from bipolar junction transistors. Its name signifies that transistors perform both the logic function (the first “transistor”) and the amplifying function (the second “transistor”), as opposed to resistor–transistor logic (RTL) or diode–transistor logic (DTL).

96
Q

Can boolean algebra always apply to a bus?

A

No. When the nodes are driven in different directions, simple ones and zeros don’t describe the outputs of the bus, so we also need to consider the impedances.

97
Q

What should you do if multiple sources are driving a bus?

A

For multiple sources, set all other drivers to weak states (i.e. Rweak, high impedance) so that the source with the low forcing impedance dominates the bus.

98
Q

What is a passive pull-up/pull-down device?

A

Basically a resistor: a device that, without extra work, will pull the voltage of the net up or down.

It is designed so that this voltage is easily overridden by active inputs to the net.

99
Q

What are the benefits of passive pullup/pulldown devices?

A

They boost the efficiency of the pull-up/pull-down behaviour.

They also allow the driver to be smaller and more efficient, as it then only has to remove charge or only add charge.

100
Q

How are pull up/ pull down incorporated into a device?

A

The effect is created simply by connecting a resistor between the net and either ground (for passive pull-down) or the supply rail (for passive pull-up).

Often, rather than a resistor, a MOS transistor with the gate and source shorted is used; this requires no extra processing steps and makes better use of the space.

101
Q

How is the virtual driver designed?

A

1: Two inputs the same, output:=input
2: Forcing 0 and forcing 1 is a conflict
3: Forcing 0 and X might be a conflict
4: Conflicts propagate: anything and a conflict is a conflict
5: Z is always overridden

?: 1 for pull up, 0 for pull-down
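A rough sketch of rules 1-5 as a two-input resolution function (not from the source; the value set is an assumption, rule 3 is treated pessimistically as a conflict, and the pull-up/pull-down ‘?’ rule is omitted).

/* Two-driver resolution over a small logic-value set. */
typedef enum { V0, V1, VX, VZ, VCONFLICT } Logic;

static Logic resolve(Logic a, Logic b) {
    if (a == b) return a;                                    /* rule 1: two inputs the same        */
    if (a == VCONFLICT || b == VCONFLICT) return VCONFLICT;  /* rule 4: conflicts propagate        */
    if (a == VZ) return b;                                   /* rule 5: Z is always overridden     */
    if (b == VZ) return a;
    return VCONFLICT;                                        /* rules 2-3: opposing forcing values */
}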

102
Q

What are the logic families?

Features

A
  • RTL - resistor transistor logic (now obsolete)
  • DTL - diode transistor logic (now obsolete)
  • TTL - transistor transistor logic (extremely common, becoming less so)
  • ECL - emitter coupled logic (fast, heavy on power)
  • MOS - metal oxide semiconductor (low power)
  • CMOS - complementary metal oxide semiconductor (even less power)
  • BiCMOS - Bipolar CMOS (fast, low power, good driving capability)
  • GaAs - Gallium Arsenide (very fast)
  • SiGe - Silicon Germanium heterojunction (currently fastest)
103
Q

What is the fastest current logic family?

A
  • Fastest logic made to date: SiGe bipolar heterojunction stacked current logic
  • Clocks at 80 GHz
  • fT of 450 GHz
  • Gate delay of 6 ps
104
Q

Flaws of SiGe bipolar heterojunction stacked current logic

A

• The logic thresholds are different for each input to each gate.

105
Q

Describe Static RAM Cell

A

● Large (poor packing density)
● Fast
● Power hungry
● Persistent (the data is stable as long as the power is maintained)

Bistable memory.

106
Q

Describe Dynamic RAM Cell

A

Holds Bit Value for roughly 1 millisecond

  • Highest packing density of all memory topologies
  • Three components form the cell
  • Two are parasitic
107
Q

Describe Response of non-inverting amplifier

A

Ideally a linear increase; however, the output saturates after a certain value, as the output voltage reaches the supply voltage.

108
Q

Describe the response of inverting analogue amplifier.

A

A negative linear region: the output saturates near the supply voltage at one extreme and near 0 V at the other.

109
Q

The equation for output of inverting analogue amplifier

A
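
A plausible reconstruction, assuming a single-transistor inverting stage with base resistor Rb and collector resistor Rc (consistent with the Rb = βRc condition in card 111): in the linear region Vout ≈ Vcc − (βRc / Rb)(Vin − VBE), saturating at Vcc for low inputs and near 0 V for high inputs.
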
110
Q

What happens when we combine two amplifiers back to back

A

If we connect them back-to-back so that the input of each is driven by the output of the other, we can superpose the transfer characteristics; if the stage gains are high enough, the characteristics intersect at three points (the situation shown in the original figure), giving two stable operating points and one unstable one.

111
Q

What happens if the stage gain when combining the two amplifiers is not high enough?

A

If the gains are insufficient for the transfer characteristics to intersect like this, we get a
situation where there exists just one stable equilibrium point, at the single crossing point.
(The transition between these two states occurs when the slope of the linear region = 1, i.e.
Rb = βRc)

112
Q

Define Paging

A

Paging is a mechanism to separate data and can be used to implement a virtual memory system. It is architecture-sympathetic, i.e. architecture-side memory management.

113
Q

Define Segmentation

A

Segmentation is a memory management mechanism by which data/instructions can be split into segments to divide programs in a user-sympathetic manner.

114
Q

How are multiple pages implemented?

A

A page directory is implemented: it stores the page tables and is never swapped out. It is stored in the OS block.