Module 1 Theory Flashcards
What are the three main types of devices in the current computing landscape?
- Personal computers – general-purpose machines with relatively low computational power, focused mainly on input/output (I/O) for the user
- Servers and Supercomputers – built for more specific purposes: storage, computation (e.g., Amazon Cloud, IBM BlueGene)
- Embedded Computers – special-purpose computers embedded into a larger system (e.g., smart TV, DVR, anti-lock brakes in a car, network router)
Computer Architecture
Computer Architecture refers to those aspects of the hardware that are visible to the “programmer”, e.g., the instructions the computer is capable of executing, the word size (the native unit of data of the CPU), and the data formats
Computer Organization
Computer Organization (also called microarchitecture) refers to how the physical components of the machine interact to implement the architecture (think hardware)
8 Great Ideas in Architecture
- Moore’s Law
- Use abstraction to simplify design
- Make the common case fast
- Increase performance via parallelism
- Increase performance via pipelining
- Increase performance via prediction
- Implement a hierarchy of memories
- Increase dependability via redundancy
Moore’s Law
Circuit complexity/speed doubles every 18-24 months.
We have reached the limits of Moore’s Law. To “keep up”, manufacturers have found other mechanisms, e.g., multi-core processors.
Explain Abstraction
Abstraction refers to ignoring irrelevant details and focusing on higher-level design/implementation issues.
It is part of software development too.
Explain making the common case fast
Enhance the performance of those operations that occur most frequently
Explain increasing performance via parallelism
Perform operations in parallel (simultaneously) when possible
Explain increasing performance via pipelining
A form of parallelism in which instruction execution is broken into multiple stages, so that several instructions can be in different stages at the same time (their execution overlaps)
Explain increasing performance via prediction
The computer “guesses” which operation will be executed next and starts executing it; if the guess turns out to be wrong, the speculative work is discarded
Explain implementing a hierarchy of memories
The fastest, smallest, and most expensive memory sits at the top of the hierarchy; the slowest, largest, and cheapest sits at the bottom
Explain increasing dependability via redundancy
Include redundant components that can take over when a failure occurs
Of particular importance in cloud computing systems and other server technologies
von Neumann Architecture
• A particular computer hardware design model for a stored-program digital computer (e.g., PCs)
• Named for Hungarian-American mathematician John von Neumann, but others participated in the original design
• Separate central processing unit (CPU) and random-access memory (RAM)
• Both instructions and data are stored in RAM
• Data to be processed is transferred from RAM to the CPU, and results are transferred back to RAM

What is a CPU and its 3 main components?
- CPU (Central Processing Unit): performs the actual processing.
Its 3 main components:
o Control Unit: performs instruction decoding and control
o Arithmetic Logic Unit (ALU): performs basic arithmetic and logical operations
o Registers: a small amount of fast memory inside the CPU used to hold the information currently being processed
What are the levels of abstraction in an electronic computer system?
* In this class, we will focus on the Architecture, Micro-architecture, Logic, and Digital Circuits layers of abstraction

RAM
RAM (Random Access Memory): larger memory used to store both program instructions and
data.
Explain Stored-Program Computer
• The program to be executed is stored in RAM along with the data to be processed
• A program consists of binary instructions stored in RAM
• Each instruction or small piece of data in RAM has an associated memory address to indicate its location
• A program counter (or instruction pointer) register in the CPU stores the memory address of the next instruction to be executed
Explain Each step in the Fetch/Decode/Execute Cycle
The basic cycle of operation of a von Neumann-style computer:
o Fetch: the next instruction is retrieved from RAM
o Decode: the instruction is examined to determine what the CPU should do:
Opcode: field that determines the type of instruction
Operand(s): fields that determine the source and destination of data to be operated on
o Execute: the operation specified by the instruction is performed, which may involve one or more memory references
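To make the cycle concrete, here is a minimal sketch of a toy stored-program machine in C (the 16-bit instruction format, opcodes, and register count are invented for illustration; this is not MIPS or any real ISA). The loop fetches the instruction at the address in the program counter, decodes its opcode and operand fields, then executes it.

    #include <stdint.h>
    #include <stdio.h>

    /* Invented 16-bit format: [15:12] opcode | [11:8] rd | [7:4] rs | [3:0] rt/imm */
    enum { OP_HALT = 0x0, OP_LOADI = 0x1, OP_ADD = 0x2 };

    int main(void) {
        uint16_t ram[16] = {   /* program and data share RAM (stored-program idea) */
            0x1105,            /* LOADI r1, 5      */
            0x1203,            /* LOADI r2, 3      */
            0x2312,            /* ADD   r3, r1, r2 */
            0x0000             /* HALT             */
        };
        uint16_t reg[16] = {0};   /* register file                                */
        uint16_t pc = 0;          /* program counter: address of next instruction */

        for (;;) {
            uint16_t inst = ram[pc++];          /* FETCH: read instruction, advance PC */
            uint16_t op = (inst >> 12) & 0xF;   /* DECODE: extract the opcode...       */
            uint16_t rd = (inst >> 8) & 0xF;    /* ...and the operand fields           */
            uint16_t rs = (inst >> 4) & 0xF;
            uint16_t rt = inst & 0xF;

            if (op == OP_HALT) break;                    /* EXECUTE the operation  */
            else if (op == OP_LOADI) reg[rd] = rt;       /* load a small immediate */
            else if (op == OP_ADD)   reg[rd] = reg[rs] + reg[rt];
        }
        printf("r3 = %d\n", reg[3]);   /* prints 8 */
        return 0;
    }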

What is an Instruction Set?
• The instruction set of a computer is the repertoire of instructions that the CPU can perform
o Determined by the computer architects/designers
o “Hard-wired” as part of the computer design
o Different for each type of CPU
CISC Processors
o Emerged before the early 80s
o Memory in those days was expensive: a bigger program -> more storage -> more money
Hence the need to reduce the number of instructions per program
o The number of instructions is reduced by having multiple operations within a single instruction
o Multiple operations lead to many different kinds of instructions that access memory (addressing modes)
o This in turn makes instruction length variable and fetch-decode-execute time unpredictable – making the processor more complex
o Example: x86 ISA
RISC Processors
o The original idea was to reduce the ISA: provide a minimal set of instructions that could carry out all essential operations
o Instruction complexity is reduced by
1. Having few, simple instructions that are all the same length
2. Allowing memory access only with explicit load and store instructions. Hence each instruction performs less work, but execution time is consistent across different instructions. The complexity that is removed from the ISA is moved into the domain of the assembly programmer/compiler
o Examples: LC3, MIPS, PowerPC (IBM), SPARC (Sun)
What are the major differences between RISC and CISC?
o RISC systems shorten execution time by reducing the clock cycles per instruction (i.e., simple instructions take less time to interpret), at the cost of increasing the number of instructions per program
o CISC systems shorten execution time by reducing the number of instructions per program, at the cost of increasing the number of cycles per instruction
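These flashcards do not state it explicitly, but the trade-off is usually summarized with the classic CPU performance equation:
CPU time = (instructions / program) × (clock cycles / instruction) × (seconds / clock cycle)
RISC lowers the middle factor (cycles per instruction) while raising the first; CISC does the opposite. With made-up numbers: 1,000,000 RISC instructions at 1.2 cycles each and 600,000 CISC instructions at 2.0 cycles each both cost 1,200,000 cycles on the same clock.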

Where are all program information (instruction and data values) stored?
RAM - Random Access Memory
Explain the format/regions of MIPS memory
o The instructions of the program from the .text section are stored in the Text Segment.
o The static variables (data) of the program declared in the .data section are stored in the Data Segment.
o The Stack Segment is a region of memory which your program can use to temporarily hold data. The stack grows downward. We will learn more about the stack later in the semester.
o The Dynamic Data region is for heap allocations (in C++, new). This space grows upward toward the stack; the stack and dynamic data must share the same total space.
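As a rough high-level-language analogy, here is a hypothetical C sketch (the notes mention C++'s new; C's malloc plays the same role, and the actual addresses are chosen by the assembler, linker, and OS):

    #include <stdlib.h>

    int counter = 42;            /* static data -> Data Segment (like .data)          */

    int square(int x) {          /* the compiled instructions for square() and main() */
        int local = x * x;       /*   live in the Text Segment (like .text)           */
        return local;            /* x and local live on the Stack Segment             */
    }

    int main(void) {
        int *heap_val = malloc(sizeof(int));   /* Dynamic Data (heap), like C++ new   */
        *heap_val = square(counter);
        free(heap_val);
        return 0;
    }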

Explain what a word is in MIPS
• A word is the unit of data used natively by a CPU. In MIPS a word is 32 bits.
• Each word is divided into smaller segments called bytes. A byte holds 8 bits, so there are 4 bytes per word.
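A quick C illustration (a sketch; uint32_t has the same size as a MIPS word):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t word = 0;                                  /* one 32-bit word       */
        printf("bytes per word: %zu\n", sizeof(word));      /* prints 4              */
        printf("bits per word:  %zu\n", sizeof(word) * 8);  /* 4 bytes x 8 bits = 32 */
        return 0;
    }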

How many bits of data are in each row of MIPS memory? What kind of architecture is it?
32 bits. MIPS is a 32-bit architecture that is byte-addressable, meaning each byte has its own memory address
Little-endian vs. Big-endian
• Little-endian: byte numbers start at the little (least significant) end – the right side
• Big-endian: byte numbers start at the big (most significant) end – the left side
• The word address is the same in either case
* Today almost all architectures adopt little-endian.
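A small C check (a sketch, not from the notes) shows which convention the machine running it uses, by storing a 32-bit value and inspecting which byte ends up at the lowest address:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        uint32_t word = 0x11223344;    /* 0x11 is the most significant byte            */
        uint8_t bytes[4];
        memcpy(bytes, &word, 4);       /* copy the word's in-memory byte layout        */

        if (bytes[0] == 0x44)          /* least significant byte at the lowest address */
            printf("little-endian (bytes in memory: 44 33 22 11)\n");
        else                           /* most significant byte at the lowest address  */
            printf("big-endian (bytes in memory: 11 22 33 44)\n");
        return 0;
    }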

Why and when does little-endian vs. big endian matter?
o When communicating between machines of the same endianness, there are no problems. However, when you transfer data between machines of different endianness, there can be issues.
o Data transferred byte by byte ends up byte-reversed when the receiver interprets it as a multi-byte value. However, data transferred in larger units at once, such as whole words, will be transferred correctly.
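A sketch of what goes wrong (hypothetical sender/receiver; real network code usually avoids the problem by agreeing on one byte order, e.g., with the standard htonl/ntohl conversions): when raw bytes are copied across unchanged, a machine of the opposite endianness reads the word with its bytes reversed.

    #include <stdint.h>
    #include <stdio.h>

    /* Simulates what a receiver of the opposite endianness would read when the
     * sender's bytes are copied across in memory order without any conversion. */
    static uint32_t byteswap32(uint32_t v) {
        return (v >> 24) | ((v >> 8) & 0x0000FF00u)
             | ((v << 8) & 0x00FF0000u) | (v << 24);
    }

    int main(void) {
        uint32_t sent = 0x12345678u;
        uint32_t misread = byteswap32(sent);   /* value seen on the other machine */
        printf("sent: 0x%08X  misread as: 0x%08X\n",
               (unsigned)sent, (unsigned)misread);   /* 0x12345678 vs 0x78563412  */
        return 0;
    }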