7.0 Interval modeling Flashcards
What are some requirements when doing evaluation?
Validity, reliability and reproducibility
implementation must be correct
Models must be sound (critical components modelled in sufficient detail, other components need to be relatively realistic)
What is validity in evaluation?
How well are you measuring what you want to measure
What is reliability in evaluation?
The result should be stable and consistent
What is reproducibility in evaluation?
Whether you provide enough information so that independent researchers can verify the same findings.
What are the 3 options when evaluating computer architecture?
Analytical modeling
Simulation
Experiments on real hardware
What is analytical modelling?
Suitable for early studies and large design spaces (black- and white-box models)
Often a set of mathematical equations. Use parameters to compute performance
Difficult to make sure we are collecting realistic data
Validated against simulators, if validated at all
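As a sketch of what "a set of mathematical equations with parameters" can look like, here is a minimal analytical model in the form of a simple CPI equation. The equation, parameter names, and numbers are illustrative assumptions, not taken from the card.

```python
# Illustrative analytical model: a simple CPI equation.
# All parameter names and values here are hypothetical.
def cpi(base_cpi, miss_rate, miss_penalty):
    """Average cycles per instruction = base CPI plus memory stall cycles."""
    return base_cpi + miss_rate * miss_penalty

def total_cycles(instructions, base_cpi, miss_rate, miss_penalty):
    """Predicted execution time in cycles for a given instruction count."""
    return instructions * cpi(base_cpi, miss_rate, miss_penalty)

# Example: 1e6 instructions, base CPI 0.5, 2% miss rate, 100-cycle penalty.
print(total_cycles(1_000_000, 0.5, 0.02, 100))  # 2500000.0
```

Plugging parameters into closed-form equations like this is what makes analytical models fast enough for exploring large design spaces.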
What is simulation?
Pros:
- Can model complex interactions between independent units
- Often modelled with clock-cycle accuracy
- Relatively quick simulation
- Easy to change - a change in software
Cons:
- Rarely validated against real hardware (difficult to recreate all the corner cases in hardware)
- Hard to verify validity across the design space
What is evaluation on real hardware?
Run experiments on real hardware
Then we know that this particular instance is correct, because there are no modelling abstractions
Difficult to control for all parameters, so reliability suffers and it is hard to say exactly what we are measuring
Takes a lot of time to change the structure
What is interval modelling?
An analytical performance model for out-of-order processors
A mechanistic model (white box):
- Models the mechanisms in the processor that lead to performance-related behaviour
What is a white box model?
Mechanistic modelling.
Models the mechanisms in the processor that lead to performance-related behaviour.
Often leads to a better understanding of the architecture, compared to empirical models (black box)
What is a black box model?
Empirical model
Get a lot of data that relates program behaviour to performance, and use machine learning to find relationships
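A hedged sketch of the empirical (black-box) approach: fit a statistical model that maps program features to measured performance. The features, measurements, and linear-model choice below are all made-up assumptions for illustration; real empirical models may use far richer features and learners.

```python
# Black-box sketch: fit a linear model from program features to measured IPC.
# All features and measurements are hypothetical.
import numpy as np

# Each row: [branch misses per kilo-instruction, cache misses per kilo-instruction]
features = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [0.5, 0.5]])
ipc = np.array([1.8, 1.9, 1.2, 2.4])  # hypothetical measured IPC values

# Add an intercept column and solve the least-squares fit.
X = np.hstack([np.ones((len(features), 1)), features])
coeffs, *_ = np.linalg.lstsq(X, ipc, rcond=None)

# Predict IPC for a new, unseen program (purely illustrative).
new_program = np.array([1.0, 1.5, 1.5])  # intercept + two features
predicted_ipc = new_program @ coeffs
```

Unlike a mechanistic model, the fitted coefficients here need not correspond to any processor mechanism, which is why black-box models give less architectural insight.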
What is the power law for IPC?
IPC has a power-law dependence on the instruction window size.
A power-law relationship means that if both the instruction window size (x axis) and IPC (y axis) are plotted on logarithmic scales, the curve becomes a straight line.
This means that if you scale up the instruction window, you will get more instruction-level parallelism
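The log-log straight-line property can be checked numerically. The constants a and b below are illustrative assumptions (the card does not give the exponent); the point is only that a power law IPC = a * W**b has constant slope b in log-log space.

```python
# Sketch: a power law IPC = a * W**b is a straight line in log-log space.
# The constants a and b are hypothetical, chosen for illustration.
import math

a, b = 1.0, 0.5  # hypothetical power-law constants
windows = [32, 64, 128, 256]  # instruction window sizes
ipcs = [a * w**b for w in windows]

# Slope between consecutive points in log-log space:
slopes = [
    (math.log(ipcs[i + 1]) - math.log(ipcs[i]))
    / (math.log(windows[i + 1]) - math.log(windows[i]))
    for i in range(len(windows) - 1)
]
print(slopes)  # each slope ≈ b, i.e. the log-log plot is a straight line
```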
Is ILP a limiting factor for performance?
No, there is enough instruction-level parallelism provided that the processor is balanced. This means that the size of the instruction window needs to match the number of instructions that can be executed at once
Maximum IPC can be almost achieved in the absence of miss events
For balanced designs:
- B: Width of processor
- N: Total dynamic instruction count
- C: Execution time in cycles
C = N / B
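The balanced-design formula can be sketched numerically (the instruction count and width below are illustrative):

```python
# Balanced processor with no miss events: C = N / B
# N = total dynamic instruction count, B = processor width.
def cycles(n_instructions, width):
    """Ideal execution time in cycles for a balanced processor."""
    return n_instructions / width

# Example: a 4-wide processor executing 1e9 instructions.
print(cycles(1_000_000_000, 4))  # 250000000.0
```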
How many instructions can a balanced processor execute at once?
Almost the processor width, as long as there are no miss events
What does an interval analysis look like for I-cache misses?
IPC on the y-axis, time on the x-axis.
IPC is at max. Then a miss occurs.
IPC continues at max for an amount of time dx while the instructions in the front-end pipeline are drained into the instruction window. This means that dx is the front-end pipeline depth.
IPC is 0 when dispatch drops to zero.
The cache miss latency spans dx_2 from the time the miss occurred.
IPC stays 0 until dx after the latency has finished; during this dx, the front-end pipeline is refilled with instructions.
IPC is back at max after dispatch resumes
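The timeline above can be sketched as a step function for the dispatch rate. The width, pipeline depth, and miss latency below are illustrative assumptions; the sketch just encodes the card's phases (drain for dx, zero until dx after the latency ends, then resume).

```python
# Sketch of the I-cache miss interval: dispatch rate over time around a miss.
# All numbers (width, pipeline depth, miss latency) are hypothetical.
WIDTH = 4            # B: max dispatch rate (instructions/cycle)
FRONTEND_DEPTH = 5   # dx: front-end pipeline depth, in cycles
MISS_LATENCY = 20    # dx_2: cache miss latency, in cycles

def dispatch_rate(t, miss_time):
    """Instructions dispatched in cycle t, given an I-cache miss at miss_time."""
    if t < miss_time:
        return WIDTH                                    # steady state before the miss
    if t < miss_time + FRONTEND_DEPTH:
        return WIDTH                                    # front-end drains into the window
    if t < miss_time + MISS_LATENCY + FRONTEND_DEPTH:
        return 0                                        # window empty, front-end refilling
    return WIDTH                                        # dispatch resumes at max

# Dispatch is zero for exactly MISS_LATENCY cycles: the drain and refill
# phases (each FRONTEND_DEPTH cycles long) offset each other.
zero_cycles = sum(1 for t in range(100) if dispatch_rate(t, 10) == 0)
print(zero_cycles)  # 20
```

Under these assumptions the total dispatch interruption equals the miss latency dx_2, since the dx cycles of useful drain are balanced by the dx cycles of refill.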