Wk7L2 - Futures & Matrix Multiplication Flashcards
What is a Future, and how is it used?
A Future is an object used to retrieve the result of a task that runs asynchronously on another thread. It represents a result that will become available at a later time. An ExecutorService returns a Future object for each task submitted to run asynchronously.
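A minimal sketch of this pattern in Java (the class name and the summing task are illustrative placeholders, not from the lecture):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureBasics {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // submit() returns immediately with a Future; the task runs on a pool thread.
        Future<Long> result = pool.submit(() -> {
            long sum = 0;
            for (long i = 1; i <= 1_000_000; i++) sum += i;
            return sum;
        });

        // The main thread is free to do other work, then blocks on get() for the value.
        System.out.println("Sum = " + result.get());
        pool.shutdown();
    }
}
```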
How does the Future API simplify multithreading?
The Future API lets the main thread hand tasks to a thread pool and later collect their return values, with the Future taking care of the synchronization. This simplifies managing tasks that each compute a small piece of a larger solution.
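A sketch of that idea, using hypothetical partial-sum tasks to stand in for the many small pieces:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CollectResults {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Long>> pieces = new ArrayList<>();

        // Split the work into small tasks; each Future holds one partial result.
        for (int chunk = 0; chunk < 8; chunk++) {
            final long lo = chunk * 1_000L, hi = lo + 1_000L;
            pieces.add(pool.submit(() -> {
                long partial = 0;
                for (long i = lo; i < hi; i++) partial += i;
                return partial;
            }));
        }

        // The main thread collects the pieces; get() blocks until each one is done.
        long total = 0;
        for (Future<Long> f : pieces) total += f.get();
        System.out.println("Total = " + total);
        pool.shutdown();
    }
}
```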
What methods are available in the Future API?
Key methods in the Future API include (see the sketch after this list):
- get(): Retrieves the result, blocking if the task is not complete.
- cancel(): Cancels the asynchronous task.
- isDone() / isCancelled(): Check if the task is completed or cancelled.
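A short sketch showing these calls in use (the slow task is just a placeholder):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FutureMethods {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        Future<String> slow = pool.submit(() -> {
            TimeUnit.SECONDS.sleep(10);   // stand-in for a long-running task
            return "done";
        });

        System.out.println("Finished yet? " + slow.isDone());   // almost certainly false

        // cancel(true) also interrupts the task if it has already started running.
        slow.cancel(true);
        System.out.println("Cancelled? " + slow.isCancelled());

        Future<String> fast = pool.submit(() -> "hello");
        System.out.println(fast.get());   // get() blocks until the result is ready

        pool.shutdown();
    }
}
```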
What is Numerical Linear Algebra, and why is it important in scientific computing?
Numerical Linear Algebra is the study of solving large systems of linear equations on computers. Solving complex problems like heat conduction can require these computations to be repeated billions of times, using techniques like Gaussian elimination and matrix multiplication.
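In matrix form, such a system is written as (a generic n x n system, not one specific to the lecture):

```latex
A\mathbf{x} = \mathbf{b}, \qquad A \in \mathbb{R}^{n \times n}, \quad \mathbf{x}, \mathbf{b} \in \mathbb{R}^{n}
```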
How does the heat flow equation relate to Numerical Linear Algebra?
The heat flow equation models how heat moves through a system, with the temperature at each point depending on its neighbors. The problem is solved using matrices, where each matrix element represents a point in the system.
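A concrete way to see the neighbor dependence is the standard finite-difference update, in which the new temperature at a grid point is the average of its four neighbors (this particular stencil is an assumption; the lecture may use a different discretization):

```latex
T^{\text{new}}_{i,j} = \tfrac{1}{4}\left(T_{i-1,j} + T_{i+1,j} + T_{i,j-1} + T_{i,j+1}\right)
```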
What is the time complexity of basic matrix multiplication, and how is it calculated?
The time complexity of basic matrix multiplication is O(n^3): each element of the result matrix is the dot product of a row from the first matrix with a column from the second, which takes n multiplications, and this is repeated for all n x n elements, giving n^2 * n = n^3 operations.
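A sketch of the basic algorithm in Java; the three nested loops of length n make the O(n^3) count visible:

```java
/** Basic O(n^3) matrix multiplication: C = A * B for n x n matrices. */
public class MatMul {
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++) {          // each of the n rows of C
            for (int j = 0; j < n; j++) {      // each of the n columns of C
                double sum = 0;
                for (int k = 0; k < n; k++) {  // n multiply-adds per element
                    sum += a[i][k] * b[k][j];
                }
                c[i][j] = sum;
            }
        }
        return c;
    }
}
```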
What is Strassen’s Algorithm, and how does it improve matrix multiplication?
Strassen's Algorithm is a faster matrix multiplication algorithm discovered in 1969, reducing the time complexity to roughly O(n^2.8) (more precisely, O(n^log2(7)) ≈ O(n^2.81)). It computes each 2 x 2 block product with 7 multiplications instead of 8, applied recursively, which improves efficiency for large matrices.
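For reference, the standard formulation splits each matrix into four blocks and forms seven block products (the lecture's notation may differ):

```latex
\begin{aligned}
M_1 &= (A_{11}+A_{22})(B_{11}+B_{22}) & M_2 &= (A_{21}+A_{22})\,B_{11} \\
M_3 &= A_{11}(B_{12}-B_{22})          & M_4 &= A_{22}(B_{21}-B_{11}) \\
M_5 &= (A_{11}+A_{12})\,B_{22}        & M_6 &= (A_{21}-A_{11})(B_{11}+B_{12}) \\
M_7 &= (A_{12}-A_{22})(B_{21}+B_{22}) & & \\[2pt]
C_{11} &= M_1+M_4-M_5+M_7             & C_{12} &= M_3+M_5 \\
C_{21} &= M_2+M_4                     & C_{22} &= M_1-M_2+M_3+M_6
\end{aligned}
```

Seven recursive multiplications per level give the recurrence T(n) = 7T(n/2) + O(n^2), which solves to O(n^log2(7)) ≈ O(n^2.81).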
What is the most recent advancement in matrix multiplication algorithms, and what is its complexity?
The most recent algorithm (as of 2024) for matrix multiplication runs in about O(n^2.37) time, though it is highly complex and pays off only for extremely large matrices.
What factors should be considered when choosing between parallel and single-processor matrix multiplication algorithms?
Factors include the matrix size (n), the number of processors (P), and the time complexity of the algorithm.
For example, with n = 1000 and P = 32, using the parallel O(n^3) algorithm may be faster. But for n = 1,000,000 and P = 128, the single-processor O(n^2.8) algorithm might be more efficient.
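A rough way to frame the comparison is by operation count alone, assuming perfect speedup and ignoring communication, synchronization, and memory costs (which often decide the outcome in practice):

```latex
T_{\text{parallel cubic}} \approx \frac{n^3}{P}
\qquad \text{vs.} \qquad
T_{\text{single-processor Strassen}} \approx n^{2.8}
```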
Why is cache size important in matrix multiplication performance?
Cache size significantly influences performance. Cache-aware algorithms, which arrange data storage and access so that values already in cache are reused, perform much better than implementations that repeatedly thrash the cache, as the performance comparisons show.
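One common cache-aware approach is blocked (tiled) multiplication, sketched below; the lecture's specific cache-aware algorithm may differ, and the tile size here is an illustrative guess:

```java
/** Blocked (tiled) O(n^3) multiplication: C = A * B, working on cache-sized tiles. */
public class BlockedMatMul {
    // Illustrative tile size; in practice it is tuned to the cache size.
    static final int TILE = 64;

    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int ii = 0; ii < n; ii += TILE) {
            for (int kk = 0; kk < n; kk += TILE) {
                for (int jj = 0; jj < n; jj += TILE) {
                    // Multiply one tile; its data stays resident in cache while it is reused.
                    for (int i = ii; i < Math.min(ii + TILE, n); i++) {
                        for (int k = kk; k < Math.min(kk + TILE, n); k++) {
                            double aik = a[i][k];
                            for (int j = jj; j < Math.min(jj + TILE, n); j++) {
                                c[i][j] += aik * b[k][j];
                            }
                        }
                    }
                }
            }
        }
        return c;
    }
}
```

Each tile of A, B, and C is small enough to stay in cache while it is reused, so far fewer values have to be fetched from main memory.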