Basics of Threading
Threading is crucial to understand if you want your applications to be responsive to multiple users, to update a user interface while a long-running process runs in the background, or simply to process multiple things at the same time. The concept is also very useful when you are waiting on blocking I/O or want to take advantage of multiple CPU cores to get things done.
Threading is a complex subject, especially at the lower levels. It is a necessary one to understand at some level, because problems from threading issues can occasionally bubble up and interfere with higher level coding. It’s also absolutely critical to understand if you need to write high performance code.
“Basically concurrency is two or more things happening concurrently or at the same time.”
Concurrency is the execution of several computations during overlapping time periods, instead of being done sequentially. Note that this definition says nothing about the architecture of the system. It doesn’t matter if it is across threads, multiple CPU cores, multiple processes, or even multiple computers.
Multitasking is the concurrent execution of multiple tasks. New tasks can interrupt running tasks before they finish, instead of waiting. These tasks share common resources such as CPUs and memory. Note that the definition of multitasking allows two tasks to advance during the same time period; this does not necessarily imply parallel execution. Pre-emptive multitasking is the type used on Windows: it allows a thread to be stopped and another thread to run in its place. Essentially, multitasking divides available processor time amongst a group of processes that need it.
12:35 Interruption of Running Work
The interruption of running work means that the state of the existing process is saved and the state of another process is loaded. This is referred to as a “context switch”. Note that context switches aren’t free. If you have too many of them, you will slow things down. Context switches can happen at any time during the execution of code. Consequently, if you are doing multi-threaded development, you have to be careful about shared state. While context switching makes multiple work threads possible, it also means that weird concurrent problems can crop up due to one thread having stale information.
A process provides the resources needed to execute a program, such as a virtual address space and environment variables, and is usually started with a single thread. That thread can create and control other threads of its own, and all threads in the process share the same virtual address space.
A thread is the entity within a process that can be scheduled for execution. You create a new thread by specifying the memory address of the code that will be executed when the thread runs. If memory in a process is shared between multiple threads, then you need to make sure access to that memory is synchronized properly.
“This is where your whole synchronization processes come into play.”
You can create a thread in a suspended state so that it doesn’t start executing until you tell it to do so. Each new thread gets its own stack space, whose size you can specify. This space is de-allocated when the thread exits normally, but not when it is terminated by another process. A thread is identified by a handle, an opaque value the operating system uses to refer to it.
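As a language-neutral sketch of the model above, here is a minimal Python example (the names `worker` and `worker-1` are just for illustration). In Python, constructing a `Thread` doesn’t run it, which is loosely analogous to creating a thread in a suspended state and starting it later:

```python
import threading

results = []

def worker(name):
    # This function is the thread's entry point, analogous to the
    # start address you specify when creating a thread at the OS level.
    results.append(f"hello from {name}")

# Creating the thread does not run it; nothing executes until start().
t = threading.Thread(name="worker-1", target=worker, args=("worker-1",))
t.start()
t.join()  # wait for the thread to exit; its stack is then reclaimed

print(results)  # ['hello from worker-1']
```

Because `results` lives in the process’s shared address space, both the main thread and the worker can see it, which is exactly why synchronized access matters once more than one thread writes to it.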
19:05 Thread Scheduler
The thread scheduler handles the scheduling of threads for execution. This is done based on priority, which ranges from 0 to 31 on Windows, with higher numbers having higher priority. On Windows, the thread that zeroes out free memory pages is the only one that gets a priority of zero. The thread that controls the GUI is typically set to a higher priority than background threads, so that it can interrupt their execution and keep the app responsive. If your main thread has to wait for a background thread, you have to do special things, like calling a wait function, using Thread.Sleep, or using a critical section.
24:25 Thread Pool
A thread pool is a collection of worker threads that execute asynchronous callbacks on behalf of an application. The primary purpose of the pool is to reduce the number of threads required by an application for healthy functioning.
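A minimal sketch of the idea in Python, using the standard-library `ThreadPoolExecutor` (the `square` function is just a stand-in for real work): the pool owns the worker threads and hands them our callbacks, so we never create or manage threads directly.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

# A small pool of worker threads runs the callbacks on our behalf.
# Reusing pooled threads avoids the cost of creating one per task.
with ThreadPoolExecutor(max_workers=3) as pool:
    squares = list(pool.map(square, range(5)))

print(squares)  # [0, 1, 4, 9, 16]
```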
A fiber is a unit of execution that must be scheduled by the application rather than by the operating system. It differs from a thread in that it runs in the context of the thread that created it and doesn’t exist apart from that thread. This is really handy for porting applications that expect to control their own thread allocation and scheduling.
26:55 Thread Local Storage
Thread Local Storage is a way to provide unique data for a thread that the owning process can access using a global index. Fibers also have Fiber Local Storage, which serves a similar purpose at the fiber level. Having a separate copy of some data per thread minimizes the areas where a race condition can occur.
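Python exposes the same idea through `threading.local()`: each thread that touches the object sees its own private copy of its attributes. A minimal sketch (the `seen` dictionary is only there to collect each thread’s view for inspection):

```python
import threading

local = threading.local()  # each thread gets its own copy of attributes
seen = {}

def worker(value):
    local.value = value  # write to this thread's private slot
    # Record what this thread sees; no other thread's write can clobber it.
    seen[threading.current_thread().name] = local.value

threads = [threading.Thread(name=f"t{i}", target=worker, args=(i,))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(seen)  # {'t0': 0, 't1': 1, 't2': 2}
```

Each thread reads back exactly the value it wrote, even though all three share one `local` object, which is the per-thread copy that shrinks the surface for race conditions.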
28:30 Race Condition
A race condition, or race hazard, is incorrect application behavior that can occur when the output depends on the sequence or timing of events you don’t control. A good example occurs when you make an asynchronous call across the network to retrieve some data, but continue processing without waiting for the data to return, assuming that if you do enough other work in the interim, the data will have arrived by the time you need it. This will work on a development system, but will eventually cause weird behavior on a system having network issues. A race can also occur when two threads access the same object, one reading from it while the other deletes it (among many other scenarios). The timing of these operations is critical, and may not be the same across systems.
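A contrived but reproducible sketch of the classic lost-update race in Python. The `Barrier` and `sleep` are not part of the bug; they only widen the gap between the read and the write so the race shows up reliably instead of once in a thousand runs:

```python
import threading
import time

balance = 0
barrier = threading.Barrier(2)

def deposit():
    global balance
    barrier.wait()      # line both threads up so the race reproduces
    read = balance      # unsynchronized read-modify-write begins here
    time.sleep(0.05)    # widen the window between the read and the write
    balance = read + 1  # both threads write 1: one deposit is lost

threads = [threading.Thread(target=deposit) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 1, not the expected 2 -- a lost update
```

Both threads read the balance as 0 before either writes, so both write 1 and one deposit vanishes. The fix is to make the read-modify-write a critical section, as described below.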
31:35 Critical Section
A critical section is a piece of code that accesses a shared resource and cannot be executed by more than one thread at a time. Using a shared bank account as a metaphor, this might be like setting up the account so that only one spouse has their name on it at a time. That spouse is the only one who can access the account; the other has to wait their turn. This doesn’t require communication between threads, so the overhead is lower. However, it can be a problem when a critical section executes over a long period of time.
A mutex is an object in a concurrent program that serves as a lock, used to negotiate mutual exclusion between threads. The object upon which the lock is placed can be any number of things.
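A minimal sketch of a mutex guarding a critical section in Python, using the shared-account metaphor (`withdraw` and the starting balance are just for illustration). Python’s `threading.Lock` plays the role of the mutex:

```python
import threading

balance = 100
lock = threading.Lock()  # the mutex guarding the shared account

def withdraw(amount):
    global balance
    # Only one thread at a time may enter this critical section,
    # so the check and the decrement happen as one atomic step.
    with lock:
        if balance >= amount:
            balance -= amount

threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 50: every withdrawal applied exactly once
```

With the lock in place, no thread can read the balance while another is between its check and its write, which is precisely the lost-update hazard a mutex exists to prevent.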
A semaphore is a variable used to control access to a common resource by multiple processes. Think of this as the couple’s checking account balance. The balance is essentially thread-safe: even if both spouses purchase at the same time, both purchases will go against the account. This tends more towards increment/decrement than value assignment. It works because a withdrawal is a thread-safe decrement of the account’s value, not setting the value to what it was the last time the thread checked, minus the purchase price. The order doesn’t matter.
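A counting semaphore is also commonly used to cap how many threads can use a resource at once. A minimal Python sketch (the `active`/`peak` counters and their guard lock are only bookkeeping for the demonstration, not part of the semaphore pattern itself):

```python
import threading
import time

# A semaphore initialized to 2 admits at most two threads at a time;
# acquire decrements the count, release increments it, and a thread
# blocks whenever the count would drop below zero.
sem = threading.Semaphore(2)
active = 0
peak = 0
guard = threading.Lock()  # protects only the bookkeeping counters

def worker():
    global active, peak
    with sem:
        with guard:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)  # simulate using the shared resource
        with guard:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds 2
```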
A deadlock is a state in which each member of a group is waiting for some other member to take action, usually by releasing a mutex. Note that a deadlock can often be fixed by coarsening the lock granularity, e.g. locking the whole wallet rather than each card inside it, so there is only one lock to acquire. This, of course, can slow throughput if not all the things in the wallet are needed.
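Another standard remedy is to always acquire locks in one global order. A minimal Python sketch (`worker` and the lock names are just for the example): both threads take `lock_a` before `lock_b`, so neither can end up holding one lock while waiting forever for the other.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

def worker(name):
    # Every thread acquires the locks in the same order: a, then b.
    # If one thread took b first, each could end up holding one lock
    # while waiting on the other -- the classic deadlock.
    with lock_a:
        with lock_b:
            done.append(name)

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(done))  # ['t1', 't2'] -- both finished, no deadlock
```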
“This is even dumber than a deadlock.”
A livelock occurs when multiple processes continually change state in response to one another, but none of them actually moves forward.
38:05 Priority Inversion
A priority inversion occurs when a lower priority thread holds a resource required by a higher priority thread, such that the high priority thread can’t continue execution.
This is a concept from the Android Experiments Objects show in Japan. It is an electronic ink calendar that connects to your Google calendar. It looks like a regular wall hanging calendar but is made of an e-paper screen. This would allow it to be on all day with little power usage. The idea involves using a custom Android app to get real-time data from your Google calendar to update the screen with what you have planned. Unfortunately there aren’t any set plans for production as it was just an early concept of what could be done.
Tricks of the Trade
Algorithms are useful tools for understanding how your daily life can be made better. For instance, we talked about bank accounts as shared resources. This is exactly why my wife and I have separate accounts. I recognized this as a situation where shared state could cause problems, and essentially put the accounts in thread local storage (we each have our own). We still communicate about money, but that communication is not pathological, because one of us doesn’t cause overdrafts for the other. This was especially important in the early days of my business (and while I’m on this podcast), because it allowed me to spin up and control a separate fiber for managing business expenses.