Async Runtime
The async runtime is the core of Jule’s concurrency model. It is built directly into the language, so you can start writing asynchronous code without relying on any external libraries. Everything you need, including the scheduler, is provided by the Jule runtime.
async / await
One of the most important characteristics of Jule's concurrency model is that the functions managed by the async runtime, along with their possible suspension points, are explicit. To write asynchronous code, you define async functions and await them at every call site.
For example:
async fn myfunc() {
	// Do some work.
}
In this example, myfunc is an asynchronous function. Asynchronous functions are compiled into state machines, and the compiler handles this transformation for you. Non-async functions, on the other hand, belong to the synchronous world and are never state machines under any circumstances.
Asynchronous functions must be awaited when they are called.
Here is an example:
myfunc().await
When an async function is awaited, the execution of the calling function is suspended until the awaited async function completes. In essence, this is the same as calling a typical synchronous function. The only difference is that when the called async function suspends, the entire await chain is put on hold until it can resume from where it left off.
Working with asynchronous functions is based on the following fundamental principles:
- An async function may only be called from another async function.
- The await keyword may only be used inside async functions.
- Every async function call must be awaited.
- Each await point is a potential suspension point.
- The root async function is always a coroutine scheduled by the runtime.
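Here is a minimal sketch that follows these rules (step and process are hypothetical names):
async fn step() {
	// Do some work.
}

async fn process() {
	// step is async, so it may only be called from another async
	// function, and every call must be awaited.
	step().await
	step().await // each await point is a potential suspension point
}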
Using Async Runtime
In the section above, one of the rules stated that async functions can only be called from another async function. This may raise the question: what should you do if your program’s main function is not async?
By default, Jule has a synchronous main function. However, you can also define it as async. A main function defined as async is treated as a coroutine at runtime.
Here is an example of an async main function:
async fn main() {}
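For instance, an async main can call and await other async functions directly; reusing the hypothetical myfunc from above:
async fn myfunc() {
	// Do some work.
}

async fn main() {
	myfunc().await // main is the root coroutine of the program
}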
Scheduler
The scheduler implements a cooperative coroutine execution loop for the Jule runtime. It is designed as a low-level runtime component, not a user-facing abstraction.
The scheduler is designed according to the C:M:P model:
C (Coroutine): A suspendable state machine generated by the compiler. Conceptually, it is an async function, but it is not part of the user's ordinary control flow. It is detached, may execute concurrently at any time, and its scheduling is fully managed by the scheduler. It behaves similarly to a typical thread.
M (Machine): A real operating system thread. It is responsible for executing a coroutine. Only as many M instances may be created as permitted by COMAXPROCS.
P (Processor): A scheduler processor. It owns its own local state. An M must be paired with a P in order to execute a coroutine and perform scheduling. An M can be paired with only one P at a time, and a P can be paired with only one M at a time.
When a coroutine is created, the scheduler automatically enqueues it and schedules it for execution. Jule provides a set of concurrency primitives that are integrated with the scheduler. In other words, the scheduler tries to do as much work as possible to make your job easier: when necessary, it can suspend a coroutine at a safe point (for example, when contending on a mutex) and reschedule it later. However, because the scheduler is cooperative, it may still require additional help from the developer.
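As a sketch of this, the co keyword (used the same way in the Blocking example later on this page) spawns an async call as a new coroutine; worker is a hypothetical name, and runtime::Yield is explained in the next section:
use "std/runtime"

async fn worker() {
	// Runs as a detached coroutine (a C in the C:M:P model),
	// executed by whichever M/P pair the scheduler assigns.
}

async fn main() {
	co worker() // enqueue a new coroutine; the scheduler runs it later
	// Yield to give the scheduler a chance to run worker before main
	// returns; once main ends, the program exits.
	runtime::Yield().await
}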
Yield
The scheduler is cooperative, which means there is no preemption. As a result, long-running or CPU-bound tasks may cause starvation for other coroutines. To avoid this, it is the developer’s responsibility to yield at appropriate points.
A yield operation temporarily suspends the currently running coroutine and gives the scheduler an opportunity to run other coroutines. You can do this with the std/runtime package.
Here is an example of yielding:
runtime::Yield().await
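A fuller sketch: a hypothetical CPU-bound coroutine that yields periodically so it does not starve other coroutines:
use "std/runtime"

async fn crunch() {
	mut i := 0
	for i < 10000000; i++ {
		// ... CPU-bound work ...
		if i % 65536 == 0 {
			// Cooperative suspension point: let other coroutines run.
			runtime::Yield().await
		}
	}
}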
Deadlock Analysis
The Jule runtime performs deadlock analysis and panics when a deadlock occurs. This analysis attempts to detect locked states as thoroughly as possible, but it is not an exhaustive or in-depth analysis.
For example, a coroutine might be trying to receive data from an unbuffered channel that no other coroutine will ever send to. In this case, the coroutine will remain in an infinite wait state. The Jule runtime will not consider this a deadlock on its own, because it assumes that other running coroutines may eventually send data to the channel. If, however, all coroutines become suspended in a way that they cannot wake each other, it is a deadlock.
In other words, deadlock analysis reports a deadlock only when all coroutines are inevitably locked. Otherwise, coroutines may still wait indefinitely. For this reason, as always, concurrency requires careful management.
The runtime also does not necessarily panic whenever there is a potential risk of a deadlock. During one execution, coroutines might not run in an order that leads to a deadlock, but this does not mean a deadlock cannot happen; the same program might encounter a deadlock and trigger a panic during a different execution.
This means it is not a debugging mechanism but rather an auxiliary hint. If a program encounters a definitive deadlock, instead of waiting indefinitely, you get a deadlock panic. This indicates that something is fundamentally wrong with the concurrency in your program.
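As an illustration, here is a sketch of a definitive deadlock built with the WaitGroup primitive used later on this page: main is the only coroutine and waits on a counter that nothing will ever decrement, so instead of hanging forever the runtime can report a deadlock panic:
use "std/sync"

async fn main() {
	mut wg := sync::WaitGroup.New()
	wg.Add(1)
	// No coroutine will ever call wg.Done(), and main is the only
	// coroutine, so every coroutine is inevitably locked: a deadlock.
	wg.Wait().await
}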
Scheduling of the Main Coroutine
The scheduler considers the program finished when the main coroutine returns. Any other coroutines are ignored; once the main coroutine ends, the program exits. Other coroutines may be terminated without ever being executed or while they are still running.
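For example, in the following sketch (logStartup is a hypothetical name), the spawned coroutine may never get a chance to run:
async fn logStartup() {
	// Write a startup log... (may never execute)
}

async fn main() {
	co logStartup()
	// main returns immediately; the program exits, and logStartup may be
	// terminated before it ever runs or while it is still running.
}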
Blocking Tasks
As a developer, it is your responsibility to keep performance optimal and to prevent blocking operations from starving the scheduler and the coroutines it runs. The scheduler assumes that a coroutine will typically yield when appropriate.
The scheduler does not detect blocking operations. If an M attempts to perform a blocking operation, the scheduler does not try to hide or tolerate it. This may be efficient for short-lived blocking operations (for example, I/O on a small file), but for long-running operations it can cause an M to remain blocked for an extended period of time and significantly degrade performance.
To prevent this, the runtime provides an additional multi-threaded environment: a blocking-operation thread pool. This is typically a thread pool with a theoretically unbounded job queue. Worker threads execute jobs one by one, and when a job completes, the corresponding coroutine is resumed. All executed jobs are synchronous and do not interact with the scheduler.
To dispatch a blocking task to the thread pool, you use the Blocking function provided by the runtime.
Example:
runtime::Blocking(myJob).await
In the example above, you can see that the call is awaited. This is required because the Blocking function is asynchronous.
This is necessary for the following reasons:
- Prevents it from being called from synchronous functions
- Ensures that the coroutine yields after enqueuing the job
- Allows it to be used as a separate coroutine if needed
Example program:
use "std/runtime"
use "std/sync"
async fn main() {
	mut wg := sync::WaitGroup.New()
	mut i := 0
	for i < 64; i++ {
		wg.Add(1)
		co runtime::Blocking(fn() {
			// blocking job...
			wg.Done()
		})
	}
	wg.Wait().await
}
The example program above invokes the runtime::Blocking calls as separate coroutines. If each call were awaited instead, a blocking job would only be enqueued after the previous one completed, because awaiting means waiting for the job to finish. Depending on the scenario, either approach may be the better choice.
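For contrast, here is a sketch of the awaited, sequential form, in which each blocking job finishes before the next one is enqueued:
use "std/runtime"

async fn main() {
	mut i := 0
	for i < 64; i++ {
		// Awaiting means waiting for this job to finish before the
		// next one is enqueued; no WaitGroup is needed.
		runtime::Blocking(fn() {
			// blocking job...
		}).await
	}
}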
WARNING
The blocking thread pool can be quite aggressive in creating threads and is based on the assumption that jobs in the queue will block for a while. In other words, it is not optimized for short and fast blocking tasks. Having a very large number of blocking jobs can cause the queue to grow, leading to increased memory consumption.