12 - Shared data between threads Flashcards
What are the characteristics of multicore CPUs?
- Cores are individual processing units within a CPU.
- Evolution from multi-processor setups with CPUs connected by a bus.
- First dual-core CPU by IBM (POWER4) in 2001; first Intel dual-core in 2005.
- Commodity processors have fewer than 12 cores, specialized processors can have up to 100 cores.
What is the concept of shared memory in multiprocessors?
- All memory locations are accessible to any processor.
- The cost of memory access is constant.
- Bus bandwidth is limited; caches help reduce data transfer.
Explain the concept of false sharing in shared memory multiprocessors.
- False sharing occurs when multiple processors modify shared data, leading to frequent updates within the same cache line.
- This results in the invalidation of entire cache lines, even for logically independent updates.
What is cache coherence in the context of multicore processors?
- Each core has a private cache, making data coherence essential.
- Shared data can lead to cache misses and necessitate data copying across cores.
How can false sharing be reduced in shared memory multiprocessors?
- Use private data to minimize shared data access.
- Employ compiler optimizations to reduce memory loads and stores.
- Pad data structures so each thread’s data resides on a different cache line.
- Modify data structures to reduce data sharing among threads.
What is hyperthreading and its benefits in CPU architecture?
- Hyperthreading allows multiple threads to execute concurrently on the same core.
- It creates virtual cores to increase efficiency.
- First introduced by Intel in 2002 in Xeon and Pentium 4.
- Offers performance gains by utilizing idle resources in the CPU.
What are the considerations for concurrent data access in threads?
- Shared variables (like global variables in C) can be accessed by multiple threads.
- Shared data should be managed carefully to avoid conflicts.
- Accessing and modifying the same variable from different threads requires synchronization.
How does instruction ordering differ between single-threaded and multi-threaded applications?
- In single-threaded applications, instruction order follows the C code sequence.
- In multi-threaded applications, instruction order can vary, leading to different execution combinations.
What is a critical region in the context of thread synchronization?
- A piece of code where resources are shared and should be executed by one task at a time.
- It is delimited by read/write instructions to the shared resource.
- Other tasks trying to enter should be blocked if a task is already inside.
What are the requirements for a critical region to be effective?
- Mutual Exclusion: Only one task can be inside the critical region.
- Progress: A task outside the critical region cannot block others from entering; the choice of the next task to enter cannot be postponed indefinitely.
- Limited Wait (bounded waiting): A task should wait only a bounded time before entering.
What is mutual exclusion and its consequences in concurrent programming?
- Mutual Exclusion ensures that at most one task is inside a critical region.
- Consequences include the risk of starvation, where a task waiting to enter is never scheduled, and deadlock, where tasks wait on each other indefinitely due to programming errors.
What are locks in the context of thread synchronization?
- Locks ensure mutual exclusion in critical sections.
- Spin locks involve busy waiting and can be inefficient.
- They are the simplest mechanism for mutual exclusion.
How are mutexes used in concurrent programming?
- Mutexes are assigned to critical regions for synchronization.
- They require specific calls (mutex_lock and mutex_unlock) for entering and exiting the critical region.
- Proper usage includes minimizing the duration of the locked state.
What are POSIX mutexes and their functionality?
- POSIX mutexes are associated with Pthreads in programming.
- They require initialization and specific functions for locking and unlocking.
- Mutexes are used for mutual exclusion in multi-threaded environments.
pthread_mutex_t mux = PTHREAD_MUTEX_INITIALIZER;
int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr);
int pthread_mutex_destroy(pthread_mutex_t *mutex);
- Mutex locking
int pthread_mutex_lock(pthread_mutex_t *mutex);
- Blocks the calling thread until it can enter the critical region
- Returns when task enters critical region
- Returns 0 in case of success
- Mutexes should be locked for the minimum amount of time
- Mutex unlock
int pthread_mutex_unlock(pthread_mutex_t *mutex);
- Returns 0 in case of success
- Allows another thread to enter the critical region
- Unblocks a thread waiting in pthread_mutex_lock
What are POSIX spin locks and their use cases?
- Spin locks are low-level synchronization mechanisms suitable for short critical regions.
- They involve the thread spinning in a loop until the lock becomes available.
- Careful use is required to avoid excessive CPU consumption and potential deadlocks.