Concurrency vs Parallelism
"Concurrency is about dealing with a lot of things at the same time. Parallelism is about doing a lot of things at the same time."
At a high level, a set of instructions performing some operation can be grouped into a discrete, interruptible task. To execute, a task requires some resource: CPU time, memory, or I/O. While a task waits for such a resource, it can be paused and another task can run - this is concurrency. Parallelism, in contrast, is multiple tasks executing simultaneously, made possible by running code on multiple processors or cores.
Operating system threads
As programmers, we are at the mercy of the OS - it abstracts over our physical machine and allows us (in userland) to interact with it via syscalls (libc, winapi, ...). Threads can be used as a means to achieve both concurrency and parallelism - the OS can run threads across multiple cores. Threads need not map 1:1 to the number of cores on a computer: if you create more threads than cores, the OS will schedule each thread to run, switching between them (a context switch) so that each thread gets some time on the CPU.
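As a rough illustration, here is a minimal sketch (plain Rust, standard library only) that spawns more threads than the machine has cores and lets the OS scheduler time-slice them:

```rust
use std::thread;

fn main() {
    // How many threads can actually run in parallel on this machine.
    let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    println!("logical cores: {cores}");

    // Deliberately oversubscribe: 4x more threads than cores. The OS
    // scheduler context-switches between them so each gets CPU time.
    let handles: Vec<_> = (0..cores * 4)
        .map(|i| {
            thread::spawn(move || {
                let sum: u64 = (0..1_000_000u64).sum();
                println!("thread {i} done (sum = {sum})");
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}
```

The completion order is up to the OS scheduler, which is exactly the point: once spawned, we no longer control when each thread runs.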
An Overview of Async
The goal of an async implementation is to provide an interface for the programmer to write non-sequential programs: it allows one to handle many tasks without blocking on the completion of any single one. This is essential in scenarios such as handling user input, keeping a program responsive, or improving the efficiency of an I/O-bound program.
Async programming comes in a few different forms:
- Cooperative / Non-preemptive multitasking - Here, the developer is responsible for yielding control of the program to the OS.
- Preemptive multitasking - making each programmer responsible for yielding control of their program leads to issues: a simple mistake, or malicious intent, could halt the entire system. Instead, a program requests resources from the operating system or async runtime, and the runtime decides when to halt or run the process.
- Cooperative and preemptive async - async systems can be written to be cooperative or preemptive in nature. For example, Rust and JavaScript tasks are cooperative: the developer writes the task and defines where it should yield (see the sketch after this list). OS threads and goroutines are examples of preemptive async environments - the OS/scheduler is responsible for deciding when a task is executed.
- Coroutines - can be stackful or stackless. A stackful coroutine can suspend execution at any time, because its entire stack is preserved. In contrast, stackless coroutines share the same stack and therefore cannot suspend in the middle of a stack frame; this limits where they can be suspended, but makes them more memory efficient.
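To make the cooperative case concrete, here is a minimal sketch of two tasks interleaving on an async runtime. It assumes the tokio crate (`tokio = { version = "1", features = ["full"] }`); each `.await` is a point where the task voluntarily yields back to the scheduler:

```rust
use tokio::time::{sleep, Duration};

async fn task(name: &str) {
    for i in 0..3 {
        println!("{name}: step {i}");
        // `.await` is the cooperative yield point: the task suspends here
        // and the runtime is free to run another task in the meantime.
        sleep(Duration::from_millis(10)).await;
    }
}

#[tokio::main]
async fn main() {
    // Both tasks make progress by interleaving at their yield points;
    // neither is preempted by the runtime.
    tokio::join!(task("a"), task("b"));
}
```

If a task never reaches a `.await`, it never yields - which is precisely the failure mode cooperative systems accept in exchange for simplicity.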
Threads
A thread of execution is a sequence of instructions that are executed one after another.
OS threads vs userland threads
OS threads map 1:1 to kernel threads. They are costly to create and destroy, and each thread has its own stack. To switch between running tasks, a context switch must occur: the CPU stops running one task and stores its register values, then restores the register values of the next task and runs it. Each process can spawn multiple threads that share the same address space. OS threads give you parallelism for free (aside from the mental cost of designing and synchronising access to shared resources).
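That synchronisation cost shows up as soon as threads touch shared state. A minimal sketch with the standard library, where a mutex guards a counter shared across the process's address space:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Threads in one process share an address space, so shared state
    // must be synchronised explicitly - here with a Mutex.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    // Always 4000: the lock serialises the increments.
    println!("total: {}", counter.lock().unwrap());
}
```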
Fibers/green threads/stackful coroutines follow an M:N model - M tasks run on N OS threads. They emulate OS threads by setting up a stack, saving state, and context switching, all in userland. Within the runtime, a scheduler is responsible for running a different task once the current one yields control; and since these tasks are stackful, the runtime can also preempt them. Each task must set up its own stack, potentially costing more memory than it actually uses - although, unlike an OS thread's stack, it can grow.
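Real fibers swap stacks in userland, which is beyond a short example, but the M:N shape itself can be sketched with a shared task queue drained by a few worker threads (per-task stacks and context switches omitted; the task bodies are invented for illustration):

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // M = 8 tasks multiplexed over N = 2 OS threads.
    let tasks: VecDeque<Box<dyn FnOnce() + Send>> = (0..8)
        .map(|i| Box::new(move || println!("task {i} ran")) as Box<dyn FnOnce() + Send>)
        .collect();
    let queue = Arc::new(Mutex::new(tasks));

    let workers: Vec<_> = (0..2)
        .map(|_| {
            let queue = Arc::clone(&queue);
            thread::spawn(move || loop {
                // Hold the lock only long enough to pop one task.
                let task = queue.lock().unwrap().pop_front();
                match task {
                    Some(task) => task(),
                    None => break,
                }
            })
        })
        .collect();

    for w in workers {
        w.join().unwrap();
    }
}
```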
Why Async language constructs exist
If the OS has us covered when it comes to running tasks asynchronously, why go to the trouble of recreating such an async environment?
As discussed, there is more than one way to handle async computation, and the abstraction offered by the OS might not cover our use case effectively - spawning an OS thread per task, for example, becomes too costly when there are many thousands of small tasks. In such cases, it is necessary to have an alternative runtime.
Within such a runtime, each task is scheduled to run by a scheduler. When a task yields, it is placed on a queue, and another task in the ready state can be scheduled to run.
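As a sketch of that ready-queue idea, here is a toy round-robin scheduler. The tasks, and the convention that each call performs one slice of work before yielding, are invented for illustration:

```rust
use std::collections::VecDeque;

// Each call to a task performs one slice of work, then reports whether
// it yielded (Ready to run again) or finished (Done).
enum State {
    Ready,
    Done,
}

fn main() {
    type Task = Box<dyn FnMut() -> State>;
    let mut queue: VecDeque<Task> = VecDeque::new();

    for id in 0..3 {
        let mut steps = 0;
        queue.push_back(Box::new(move || {
            steps += 1;
            println!("task {id}: step {steps}");
            if steps == 2 { State::Done } else { State::Ready }
        }));
    }

    // Round-robin: run the task at the front of the queue; if it only
    // yielded, push it to the back so the others get a turn.
    while let Some(mut task) = queue.pop_front() {
        if let State::Ready = task() {
            queue.push_back(task);
        }
    }
}
```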
Callbacks
One can achieve async programming via callbacks, where each task consists of a chain of callbacks; this is typical in JavaScript code. It requires writing a program in quite a different way than writing it sequentially, and it is easy to enter "callback hell", where large chains of callbacks become difficult to reason about. Memory usage also grows with the number of callbacks, since each one must capture the state it needs.
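A small sketch of the style in Rust, using hypothetical `fetch_user`/`fetch_orders` functions to show how the nesting deepens with each dependent step:

```rust
// Callback style: instead of returning a result, each operation takes a
// closure to invoke on completion. (Both functions are invented here and
// complete synchronously; a real API would invoke the callback later.)
fn fetch_user(id: u32, on_done: impl FnOnce(String)) {
    on_done(format!("user-{id}"));
}

fn fetch_orders(user: &str, on_done: impl FnOnce(Vec<String>)) {
    on_done(vec![format!("{user}-order-1"), format!("{user}-order-2")]);
}

fn main() {
    // Each dependent step nests one level deeper: "callback hell".
    fetch_user(42, |user| {
        fetch_orders(&user, |orders| {
            println!("{user}: {orders:?}");
        });
    });
}
```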
Coroutines
Coroutines represent each task as a state machine that can be suspended and resumed, and they can be either stackful or stackless (see above).
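Below is a minimal sketch of the stackless flavour, written out as the explicit state machine such a coroutine compiles down to. The `Coroutine` and `Poll` types are simplified stand-ins for illustration, not the real std types:

```rust
// Each variant is a suspension point; live variables are stored in the
// variant itself rather than on a preserved stack.
enum Coroutine {
    Start,
    AwaitingData { partial: u32 },
    Done,
}

enum Poll {
    Pending,
    Ready(u32),
}

impl Coroutine {
    fn resume(&mut self) -> Poll {
        match self {
            Coroutine::Start => {
                // First suspension point: stash live state in the enum.
                *self = Coroutine::AwaitingData { partial: 21 };
                Poll::Pending
            }
            Coroutine::AwaitingData { partial } => {
                let value = *partial * 2;
                *self = Coroutine::Done;
                Poll::Ready(value)
            }
            Coroutine::Done => panic!("resumed after completion"),
        }
    }
}

fn main() {
    let mut co = Coroutine::Start;
    loop {
        match co.resume() {
            Poll::Pending => println!("pending - a scheduler could run another task here"),
            Poll::Ready(v) => {
                println!("ready: {v}");
                break;
            }
        }
    }
}
```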