14. Concurrency And Asynchrony || C# 10 Flashcards
What is ‘concurrency’?
In short: more than one thing happening at a time
What are the most common concurrency scenarios and why the concurrency is used?
- **Writing a responsive user interface** - in WPF, mobile, and Windows Forms apps, you must run time-consuming tasks concurrently with the code that runs your user interface to maintain responsiveness;
- **Allowing requests to process simultaneously** - on a server, client requests can arrive concurrently and so must be handled in parallel to maintain scalability (in ASP.NET and Web API this is done automatically, but you must still be aware of it, since it can affect you when using static variables for caching);
- **Parallel programming** - code that performs intensive calculations can execute faster on multicore/multiprocessor computers if the workload is divided between cores;
- **Speculative execution** - on multicore machines, you can sometimes improve performance by predicting something that might need to be done and then doing it ahead of time (e.g., running a number of different algorithms in parallel that all solve the same task; whichever finishes first "wins". This is effective when you can't know ahead of time which algorithm will execute the fastest).
What does ‘multithreading’ mean?
Multithreading - the general mechanism by which a program can simultaneously execute code. Multithreading is supported by both the CLR and the operating system and is a fundamental concept in concurrency.
If both ‘concurrency’ and ‘multithreading’ are about executing code simultaneously - are they actually the same things?
In C#, concurrency and multithreading are related but distinct concepts. Let’s explore the differences between them:
- Concurrency:
Concurrency is a broader concept that refers to the ability of a system to handle multiple tasks or processes simultaneously. It doesn’t necessarily mean that the tasks are executing at the same time, but rather the system can switch between tasks rapidly, giving the illusion of parallel execution. Concurrency can be achieved through various mechanisms, including multithreading, multitasking, and asynchronous programming.
Concurrency is useful for tasks that involve waiting for external resources, I/O operations, or tasks that can be performed independently of each other. It allows a program to efficiently utilize system resources and enhance responsiveness.
- Multithreading:
Multithreading is a specific implementation of concurrency, where multiple threads within a single process execute simultaneously on multiple CPU cores or processor units. Each thread represents an independent sequence of instructions that can perform a separate task or part of the same task. Multithreading is a low-level programming technique that enables parallel execution of code within a single process.
C# supports multithreading through the System.Threading namespace, which provides classes and methods for managing threads and synchronizing their interactions.
Differences:
- Scope:
Concurrency encompasses a broader set of techniques and patterns that allow multiple tasks to be managed effectively. It includes multithreading, but also other methods like multitasking and asynchronous programming.
- Parallelism:
Multithreading is a specific form of concurrency that enables parallelism by utilizing multiple threads to execute tasks simultaneously. Concurrency, on the other hand, doesn't always guarantee parallel execution; it can switch between tasks rapidly, making it appear as if they are executing in parallel.
- Complexity:
Multithreading is generally more complex to manage than simple concurrency. It involves handling synchronization, race conditions, and potential deadlocks, which can be challenging to debug and maintain.
In summary, concurrency is a higher-level concept that encompasses various methods for managing multiple tasks, while multithreading is a specific technique for achieving parallel execution using multiple threads within a single process in C#.
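The distinction can be sketched in a few lines of C#. This is an illustrative sketch (the type and method names are invented for the example, not from the text): one task runs on a second OS thread, while the other is concurrent without occupying a dedicated thread during its wait.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ConcurrencyDemo
{
    // Multithreading: a second OS thread runs code at the same time as the caller.
    public static int RunOnWorkerThread()
    {
        int result = 0;
        var worker = new Thread(() => result = 21 * 2); // executes on its own thread
        worker.Start();
        worker.Join();    // wait for the worker to finish
        return result;
    }

    // Concurrency without a dedicated thread: while the delay is pending,
    // no thread is blocked; execution resumes via a continuation.
    public static async Task<int> RunAsynchronously()
    {
        await Task.Delay(50);   // I/O-style wait; the thread is freed meanwhile
        return 21 * 2;
    }

    static void Main()
    {
        Console.WriteLine(RunOnWorkerThread());                          // 42
        Console.WriteLine(RunAsynchronously().GetAwaiter().GetResult()); // 42
    }
}
```

Both calls produce the same result; the difference is only in *how* the work is scheduled, which is exactly the concurrency-versus-multithreading distinction above.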
What is a ‘thread’?
A thread is an execution path that can proceed independently of others.
What are single-threaded and multithreaded environments?
Each thread runs within an operating system process, which provides an isolated environment in which a program runs. With a single-threaded program, just one thread runs in the process's isolated environment, and so that thread has exclusive access to it. With a multithreaded program, multiple threads run in a single process, sharing the same execution environment (memory, in particular).
Why is multithreading so useful?
One thread can fetch data in the background, for instance, while another thread displays the data as it arrives.
What is a ‘shared state’ in threading?
In the context of threading, a “shared state” refers to data or resources that are accessible and can be modified by multiple threads concurrently. When multiple threads are running simultaneously in a multithreaded application, they share the same memory space and can access and modify the same variables, objects, or other resources.
Let's assume we have two threads, and we want one to wait until the other ends before continuing. How should we approach this?
By calling the Join method, which waits until the thread has finished and only then executes the following logic.
Example:
~~~
Thread t = new Thread(Go);
t.Start();
t.Join();
Console.WriteLine("The thread has finished!");

void Go() { /* Do something */ }
~~~
Usually these two threads would execute at the same time, but now Console.WriteLine() prints the text only after the Go method ends.
Thread.Sleep(0) and Thread.Yield essentially do the same thing - give up the current thread's turn on the CPU. What is the difference?
Thread.Sleep(0) relinquishes the thread's current time slice immediately, voluntarily handing over the CPU to other threads. Thread.Yield() does the same thing, except that it relinquishes only to threads running on the same processor.
What are Thread.Sleep and Thread.Yield additionally useful for?
Sleep(0) or Yield is occasionally useful in production code for advanced performance tweaks. It's also an excellent diagnostic tool for helping to uncover thread-safety issues: if inserting Thread.Yield() anywhere in the code breaks the program, you almost certainly have a bug.
What is a thread ‘blocking’?
A thread is deemed blocked when its execution is paused for some reason, such as when sleeping or waiting for another thread to end via Join. A blocked thread immediately yields its processor time slice, and from then on it consumes no processor time until its blocking condition is satisfied.
Analogy:
Imagine you are at a buffet restaurant with limited seating, and there’s a rule that only one person can go to the food counter at a time. When you arrive at the counter, you notice that someone else is already selecting their food. In this situation, you’re a “blocked thread.”
Here’s the analogy breakdown:
- Blocked Thread: You (the thread) have reached the food counter but cannot proceed with selecting your food because someone else (another thread) is already there, blocking your access.
- Yields Processor Time Slice: Instead of waiting impatiently and hogging the counter, you courteously decide to yield your turn and let the other person finish selecting their food first. You give up your processor time slice, meaning you stop using the counter and let other threads have their turn.
- Consumes No Processor Time: While you wait for the other person to finish and the blocking condition to be satisfied (the food counter becomes available), you do not use any more of your time or attention. You’re in a state of suspension, not consuming any processor time.
- Blocking Condition Satisfied: Once the person in front of you finishes selecting their food and leaves the counter, the blocking condition is satisfied (the food counter becomes available), and you can proceed with selecting your food.
What are the I/O-bound and compute-bound operations?
An operation that spends most of its time waiting for something to happen is called I/O-bound (it typically involves input or output, but that's not a hard requirement; Console.ReadLine is a classic example).
In contrast, an operation that spends most of its time performing CPU-intensive work is called compute-bound.
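A minimal sketch of the two kinds of operation (the type and method names here are invented for illustration): the first method spends its elapsed time waiting, the second keeps the CPU busy.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class BoundDemo
{
    // I/O-bound style: almost all the elapsed time is spent waiting, not computing.
    public static long WaitFor(int ms)
    {
        var sw = Stopwatch.StartNew();
        Thread.Sleep(ms);   // stands in for a network/disk/console wait
        return sw.ElapsedMilliseconds;
    }

    // Compute-bound: the elapsed time is spent doing CPU work.
    public static long SumTo(int n)
    {
        long sum = 0;
        for (int i = 1; i <= n; i++) sum += i;
        return sum;
    }

    static void Main()
    {
        Console.WriteLine($"Waited ~{WaitFor(200)} ms doing no CPU work");
        Console.WriteLine($"Sum = {SumTo(1_000_000)}");
    }
}
```

The distinction matters because I/O-bound work benefits from asynchrony (freeing the waiting thread), while compute-bound work benefits from parallelism (more cores).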
If local variables are kept on the memory stack, does that mean that with multiple threads all local variables are kept on the same stack?
No. The CLR assigns each thread its own memory stack so that local variables are kept separate.
The situation changes when at least two threads use the same variable or object (created outside the method being run): threads share data if they have a common reference.
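The separate-stacks point is easy to demonstrate with a sketch along the lines of the classic example (the names here are illustrative): a local variable declared inside a method gets a fresh copy on each thread's stack.

```csharp
using System;
using System.Text;
using System.Threading;

class StackDemo
{
    public static string Run()
    {
        var output = new StringBuilder();

        void Go()
        {
            // 'cycles' is a local variable: each thread gets its own copy on its
            // own stack, so we see 10 marks in total, 5 per thread.
            for (int cycles = 0; cycles < 5; cycles++)
                lock (output) output.Append('?');
        }

        var t = new Thread(Go);
        t.Start();
        Go();       // run the same method on the main thread too
        t.Join();
        return output.ToString();
    }

    static void Main() => Console.WriteLine(Run()); // prints ??????????
}
```

The StringBuilder, by contrast, *is* shared state (a common reference), which is why appending to it is wrapped in a lock.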
Consider this block of code:
~~~
class ThreadUnsafe
{
    static bool _done;

    static void Main()
    {
        new Thread(Go).Start();
        Go();
    }

    static void Go()
    {
        if (!_done) { Console.WriteLine("This will be printed twice instead of once"); _done = true; }
    }
}
~~~

Because one thread can evaluate the if statement while the other thread is still executing the WriteLine statement (before it has had a chance to set `_done` to true), we can get an unexpected second line of printed text. Can we somehow avoid this?
Yes, by introducing exclusive lock while reading and writing to the shared field.
~~~
class ThreadSafe
{
    static bool _done;
    static readonly object _locker = new object();

    static void Main()
    {
        new Thread(Go).Start();
        Go();
    }

    static void Go()
    {
        lock (_locker)
        {
            if (!_done) { Console.WriteLine("This will be printed only once"); _done = true; }
        }
    }
}
~~~

When two threads simultaneously contend a lock (which can be any reference-type object), one thread waits, or blocks, until the lock becomes available. In this case, the lock ensures that only one thread can enter the guarded code block at a time, and the text will be printed just once, as expected.
What are the dangers (if there are any) of using x++ expression in multithreaded program?
The expression x++ executes on the underlying processor as distinct read-increment-write operations. So, if two threads execute x++ at once outside of lock, the variable can end up getting incremented once rather than twice (or worse - x could be torn, ending up with a bitwise mixture of old and new content, under certain conditions).
The problem lies in the combination of both reading and writing the shared variable x within an expression that is not atomic. Let’s break it down:
Non-Atomic Expression: The expression x++ is not an atomic operation; it consists of three separate steps: reading the current value of x, incrementing it, and writing the updated value back to x. These three steps can be interrupted and interleaved by other threads.
Concurrent Access: When multiple threads execute the x++ expression simultaneously, they might each read the same initial value of x before either has written its increment back, so one of the increments is lost.
Race Condition: The race condition occurs when two or more threads attempt to access and modify shared data concurrently without proper synchronization. This situation leads to unpredictable outcomes because the interleaving of read-increment-write steps can result in lost increments or incorrect final values.
So, the problem is not just about reading the same variable value in both threads. Even if the variables have different initial values, the non-atomic nature of the x++ expression can cause it to malfunction due to interleaving of steps when multiple threads access it simultaneously.
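Besides wrapping x++ in a lock, the BCL offers atomic operations in the Interlocked class. A sketch (the class and method names below are invented for the example) that increments a shared field from several threads without losing updates:

```csharp
using System;
using System.Threading;

class AtomicDemo
{
    static int _count;

    // Performs x++-style increments from several threads, atomically.
    public static int IncrementInParallel(int threads, int perThread)
    {
        _count = 0;
        var workers = new Thread[threads];
        for (int t = 0; t < threads; t++)
        {
            workers[t] = new Thread(() =>
            {
                for (int i = 0; i < perThread; i++)
                    Interlocked.Increment(ref _count); // atomic read-increment-write
            });
            workers[t].Start();
        }
        foreach (var w in workers) w.Join();
        return _count;
    }

    static void Main() =>
        Console.WriteLine(IncrementInParallel(4, 100_000)); // always 400000
}
```

Replacing `Interlocked.Increment(ref _count)` with a plain `_count++` would make the final total nondeterministic, typically less than 400,000, for exactly the read-increment-write reasons described above.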
Locking is not a silver bullet for thread safety - it's easy to forget to lock around access to a field, and locking can create problems of its own (such as deadlocking).
Will this code run as expected?
~~~
for (int i = 0; i < 10; i++)
    new Thread(() => Console.Write(i)).Start();
~~~
No, the result might look something like this:
0223557799
The problem is that the i variable refers to the same memory location throughout the loop’s lifetime. Therefore, each thread calls Console.Write on a variable whose value can change as it is running. The solution is to use a temporary variable:
~~~
for (int i = 0; i < 10; i++)
{
    int temp = i;
    new Thread(() => Console.Write(temp)).Start();
}
~~~
Variable temp is now local to each loop iteration. Therefore, each thread captures a different memory location and there’s no problem.
Will this exception be handled?
~~~
try { new Thread(Go).Start(); }
catch (Exception ex) { Console.Write("Oh no!"); }

void Go() { throw null; }
~~~
No. Here’s why:
1. The try block only surrounds the creation of the new thread, not the code inside the thread's Go method. The exception occurs inside the Go method, which is outside the try block's scope.
2. When an unhandled exception occurs within a thread, it doesn't propagate back to the thread that created it. Instead, it is treated as an unhandled exception within that specific thread.
The solution would be to wrap the body of the Go method in a try/catch block.
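A sketch of that fix (the names here are invented for illustration): the try/catch lives inside the code the thread actually runs, so the exception is caught on the thread that threw it.

```csharp
using System;
using System.Threading;

class SafeThreadDemo
{
    public static string Run()
    {
        string result = "not caught";
        var t = new Thread(() =>
        {
            try { Go(); }                          // catch *inside* the thread
            catch (Exception) { result = "caught"; }
        });
        t.Start();
        t.Join();
        return result;
    }

    static void Go() => throw null; // throws a NullReferenceException

    static void Main() => Console.WriteLine(Run()); // prints caught
}
```

Without the inner try/catch, the NullReferenceException would take down the whole process, regardless of any try/catch around the Thread constructor.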
What is a centralized exception handling?
These are "global" exception-handling events, fired after an unhandled exception via the message loop. This is useful as a backstop for logging and reporting bugs (although it won't fire for unhandled exceptions on worker threads that you create). Handling these events prevents the program from shutting down, although you may choose to restart the app to avoid potential corruption of state that can follow from, or that led to, the unhandled exception.
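The concrete events depend on the UI framework (for example, Application.DispatcherUnhandledException in WPF and Application.ThreadException in Windows Forms). As a console-runnable sketch, here is the process-wide AppDomain backstop; note that unlike the UI-framework events, this one also fires for worker threads but cannot stop the process from terminating (the class and method names are illustrative):

```csharp
using System;

class GlobalHandlerDemo
{
    // Registers a process-wide backstop handler for logging unhandled exceptions.
    public static bool Register()
    {
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            Console.WriteLine($"Logged: {((Exception)e.ExceptionObject).Message}");
        return true;
    }

    static void Main()
    {
        if (Register()) Console.WriteLine("Handler registered");
        // throw new Exception("Boom"); // uncommenting would log the message, then crash
    }
}
```

In a real app, this is where you would write to a log file or a crash-reporting service before the process exits.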
What are 'foreground threads' vs 'background threads'?
Foreground threads keep the application alive as long as any one of them is running, whereas background threads do not. After all foreground threads finish, the application ends, and any background threads that are still running abruptly terminate.
All newly created threads are foreground threads (unless explicitly set otherwise). If we set a thread's IsBackground property to true, the program can stop abruptly, since the application does not wait for background threads to finish:
~~~
static void Main(string[] args)
{
Thread worker = new Thread (() => Console.ReadLine());
if (args.Length > 0) worker.IsBackground = true; // if this is set to true, the app stops immediately
worker.Start();
}
~~~
What is a ‘thread priority’?
A thread's Priority property determines how much execution time it is allotted relative to other active threads in the OS:
~~~
enum ThreadPriority { Lowest, BelowNormal, Normal, AboveNormal, Highest }
~~~
This becomes relevant when multiple threads are simultaneously active. You need to be careful when elevating a thread's priority, since it may starve other threads; if the elevated thread also performs intensive work, it can slow down the entire computer (particularly harmful to threads that run a user interface).
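Setting a priority is a one-liner; a minimal sketch (the class and method names are invented for the example), here lowering rather than raising the priority, which is the safer direction for background work:

```csharp
using System;
using System.Threading;

class PriorityDemo
{
    public static string StartLowPriorityWorker()
    {
        var worker = new Thread(() => { /* some background-ish work */ });
        worker.Priority = ThreadPriority.BelowNormal; // lower priority: safer than raising it
        string chosen = worker.Priority.ToString();
        worker.Start();
        worker.Join();
        return chosen;
    }

    static void Main() =>
        Console.WriteLine(StartLowPriorityWorker()); // BelowNormal
}
```

Lowering the priority of a CPU-hungry worker is a common way to keep it from starving UI or request-handling threads.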
What is “signaling” in threads?
It's when one thread waits until it receives notification(s) from another thread.
The simplest signaling is to call WaitOne on a ManualResetEvent: this blocks the current thread until another thread "opens" the signal by calling Set.
In the example, we start up a thread that waits on a ManualResetEvent. It remains blocked for two seconds until the main thread signals it:
~~~
var signal = new ManualResetEvent(false);

new Thread(() =>
{
    Console.WriteLine("Waiting for signal");
    signal.WaitOne();
    signal.Dispose();
    Console.WriteLine("Got Signal!");
}).Start();

Thread.Sleep(2000);
signal.Set(); // "open" the signal
~~~
Just for the sake of clarity, here is the order in which this code is executed:
1. New thread starts executing the code inside the anonymous method.
2. The new thread reaches the line signal.WaitOne();.
3. Since the signal is in the initial state of false, the WaitOne() call blocks the new thread, putting it in a wait state.
4. The main thread (the one running the Main method) pauses for 2 seconds due to the Thread.Sleep(2000) call.
5. After the 2-second sleep, the main thread wakes up and calls signal.Set() to open the signal (set it to true).
6. The signal is now in an open state (true).
7. The new thread continues execution and returns from the WaitOne() call, unblocking itself.
8. The new thread disposes of the signal using signal.Dispose().
9. The new thread writes “Got Signal!” to the console.
What is a quite common way of making an application more responsive by using threads?
A popular approach is to start a worker thread for time-consuming operations. The code on the worker runs the time-consuming operation and then updates the UI when complete.
However, all rich client applications have a threading model whereby UI elements and controls can be accessed only from the thread that created them (typically the main UI thread). Hence when you want to update the UI from a worker thread you must forward the request to the UI thread (the technical term is marshal).
Another example is a Single Document Interface (SDI) application, such as Microsoft Word. Each SDI window (document) can have its own UI thread; that way, each window remains responsive with respect to the others.
What is a ‘synchronization contexts’?
To understand the concept of synchronization contexts (the SynchronizationContext class in the System.Threading namespace), let's use an analogy:
Imagine you are a conductor leading a large orchestra performance. The orchestra represents multiple threads in a multi-threaded application, and each musician plays a specific instrument (represents a task).
- Musicians (Threads): In a multi-threaded application, you have different threads, each responsible for executing a specific task. These threads can perform various actions concurrently.
- Conductor (Synchronization Context): The synchronization context is like a conductor that coordinates the musicians (threads) during the performance. It ensures that the musicians play in harmony, follow the correct tempo, and maintain synchronization with each other.
- Performance (Application Execution): The application’s execution is like the orchestra performance. It involves the interaction of multiple threads performing their tasks concurrently.
- Sheet Music (Tasks): Each musician follows a piece of sheet music representing their task to be executed. The sheet music contains instructions on what to play and when to play it.
Now, let’s explore what the synchronization context does and why it’s important:
Synchronization contexts are used to manage the execution flow of tasks and their synchronization in multi-threaded environments. They help ensure that tasks are executed in a controlled and coordinated manner, allowing proper communication and synchronization between threads.
- Flow Control: The synchronization context acts as a conductor, managing the flow of execution among threads. It ensures that tasks are executed in a particular order and that they can communicate with each other effectively.
- Context Preservation: Some tasks require certain execution contexts, such as UI thread context in GUI applications. The synchronization context helps preserve the necessary execution context when tasks are scheduled for execution.
- Thread Safety: Synchronization contexts help enforce thread safety when dealing with shared resources and UI updates. They ensure that only one thread at a time can access critical sections of code, preventing race conditions and data corruption.
- Task Scheduling: The synchronization context schedules tasks for execution based on their priority, availability of resources, and the context in which they need to be executed. It helps manage the execution order and avoid thread contention.
Why is it important?
The use of synchronization contexts is crucial in scenarios where you have concurrent tasks that need coordination, like in GUI applications or multi-threaded environments. Without a synchronization context, the threads might interfere with each other, leading to unpredictable and potentially incorrect behavior.
For example, in a GUI application, the synchronization context ensures that UI updates are executed on the main UI thread, preventing cross-thread violations and providing a smooth and responsive user experience.
In summary, synchronization contexts play a vital role in managing the execution flow, preserving execution context, and ensuring thread safety in multi-threaded environments. They are essential for coordinating tasks, maintaining synchronization, and avoiding issues that arise from concurrent execution.
Example of how we can post a message from a worker thread directly to UI controls:
~~~
partial class MyWindow : Window
{
    SynchronizationContext _uiSyncContext;

    public MyWindow()
    {
        InitializeComponent();
        // Capture the synchronization context for the current UI thread:
        _uiSyncContext = SynchronizationContext.Current;
        new Thread(Work).Start();
    }

    void Work()
    {
        Thread.Sleep(5000); // Simulate time-consuming task
        UpdateMessage("The answer");
    }

    void UpdateMessage(string message)
    {
        // Marshal the delegate to the UI thread:
        _uiSyncContext.Post(_ => txtMessage.Text = message, null);
    }
}
~~~