Wk 5 Program Optimization (Bryant Ch 5) Flashcards
What does a compiler always choose: being correct or being optimal? What are some examples of where this shows up?
A compiler will always choose correctness. If it can't guarantee that an optimization preserves correctness, it won't make it. This comes into play a lot with functions and procedures, which it treats as black boxes, and with pointers, where it can't tell whether values have changed, so it re-reads the memory location every time. Memory aliasing is the classic version of this problem.
Why do we have to set compilers up for success? Why can’t we just let them do it?
Poorly written code can actually block the compiler from making its optimizations. We also want to set it up for success so that the optimizations it makes can take advantage of the underlying architecture in terms of reordering, etc. We can't expect code we've written sequentially to actually run sequentially.
At what levels should we optimize? Why does it matter?
We have to optimize at all levels:
1. Algorithm (you can't out-optimize a bad algorithm)
2. Code: data representations, procedures, loops
3. Compiler and execution
4. Hardware
Bottom line: constant factors still matter. We can't change the n^2 in cn^2, but if our c is 100 times bigger than it needs to be, that's a significant performance hit.
What are the three key parts to understanding a system and optimizing performance?
- Being able to understand how code I write gets compiled and executed.
- Being able to measure performance and find bottlenecks.
- Being able to improve code performance without destroying modularity.
What is the biggest tradeoff challenge in optimizing code?
Modularity (readability and maintainability) vs. performance. The key is finding the balance between the two: keeping the code readable and understandable while still optimizing performance.
What is the first step of any optimization process, which should be done regardless of what level of optimization we’re looking for?
Removing unnecessary work from loops (see the sketch after this list):
- procedure calls
- conditional tests
- memory references
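A minimal sketch of this kind of loop cleanup via code motion, loosely based on the lowercase-conversion example in CS:APP (function names here are illustrative):

```c
#include <string.h>
#include <ctype.h>

/* Before: strlen() is called on every iteration, even though
   the string's length never changes inside the loop. */
void lower_slow(char *s) {
    for (size_t i = 0; i < strlen(s); i++)
        s[i] = tolower((unsigned char)s[i]);
}

/* After: the procedure call is hoisted out of the loop,
   turning an O(n^2) loop into O(n). */
void lower_fast(char *s) {
    size_t len = strlen(s);
    for (size_t i = 0; i < len; i++)
        s[i] = tolower((unsigned char)s[i]);
}
```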
What parts of understanding the machine you’re writing for are of critical importance in optimizing code?
How it uses parallelism and out-of-order execution, and how instructions are timed and processed.
What is the primary purpose of using graphical data-flow notation?
To understand data dependencies that allow or block increased parallelism.
What does the process for optimizing code look like in practice? Is it cook-book, cookie cutter?
Absolutely not. It's an art mixed with science, and it requires code profilers along with trial and error. It's an iterative process: make changes, measure and compare the results, then adjust.
Explain Amdahl’s Law and how it relates to the optimization techniques in this chapter. What does it do?
Amdahl's Law says the overall speedup from improving one part of a system is limited by how much of the total time that part accounts for. If a part taking fraction α of the time is sped up by a factor k, the overall speedup is S = 1 / ((1 - α) + α/k). It tells us to focus optimization effort on the parts of the code that dominate execution time, because speeding up anything else barely moves the needle.
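A quick worked example (numbers chosen for illustration): if 60% of the runtime is in one loop (α = 0.6) and we speed that loop up 3x (k = 3), then S = 1 / ((1 - 0.6) + 0.6/3) = 1 / 0.6 ≈ 1.67x overall. Even an infinite speedup of that loop caps out at 1 / 0.4 = 2.5x, which is why Amdahl's Law pushes us toward the biggest time sinks first.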
What are the basic things we are looking at when analyzing assembly code for performance indicators?
How do the inner loops work? How are registers used? How often is memory accessed? What are the critical paths and data dependencies in the code?
Describe the "do only as much work as necessary" philosophy
We should only do as much optimization work by hand as is necessary to let the compiler take over afterwards. Setting the compiler up for success and removing optimization blockers is a good tradeoff between letting the compiler do everything and having the programmer do everything.
Why does memory aliasing block optimization? How do we fix?
A compiler can't know for sure whether two pointers are aliased to the same location, so it can't assume they are independent and must conservatively re-read memory (see the twiddle example below). The fix: accumulate results in local variables instead of repeatedly reading and writing through pointers, so the compiler knows the values can't be affected by other memory writes. (C99's restrict qualifier is another way to promise the compiler that pointers don't alias.)
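The twiddle example the card references (as presented in CS:APP), plus a sketch of the local-accumulator fix pattern:

```c
/* If xp and yp point to the same location (aliasing), twiddle1
   quadruples *xp while twiddle2 only triples it, so the compiler
   cannot safely rewrite twiddle1 as the faster twiddle2. */
void twiddle1(long *xp, long *yp) {
    *xp += *yp;   /* this write may change *yp if xp == yp... */
    *xp += *yp;   /* ...so *yp must be re-read here */
}

void twiddle2(long *xp, long *yp) {
    *xp += 2 * *yp;   /* one read of *yp, one update of *xp */
}

/* Fix pattern: accumulate in a local variable so no memory write
   inside the loop can alias with the values being read. */
void sum_fixed(const long *a, long n, long *dest) {
    long acc = 0;               /* local: compiler can keep it in a register */
    for (long i = 0; i < n; i++)
        acc += a[i];            /* no write through dest inside the loop */
    *dest = acc;                /* single write at the end */
}
```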
Why do function calls block optimization? How do we fix?
Functions can have side effects that modify part of the global program state. A compiler may not be able to prove that a function has no side effects, so it doesn't try: it treats the call as a black box and won't move or eliminate it. By using inline substitution (putting the function's body directly in the loop instead of calling the function), we expose the code to the compiler and let it do its thing.
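The side-effect example from CS:APP that shows why the compiler can't treat calls as interchangeable:

```c
long counter = 0;

long f(void) {
    return counter++;   /* side effect: modifies global state */
}

/* With counter starting at 0, this returns 0 + 1 + 2 + 3 = 6... */
long func1(void) {
    return f() + f() + f() + f();
}

/* ...but this "optimized" version returns 4 * 0 = 0. Because f()
   has a side effect, the two are NOT equivalent, so the compiler
   must leave func1 alone. */
long func2(void) {
    return 4 * f();
}
```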