What is the purpose of async/await in Rust?
Solution 1:
You are conflating a few concepts.
Concurrency is not parallelism, and async and await are tools for concurrency, which may sometimes mean they are also tools for parallelism.
Additionally, whether a future is immediately polled or not is orthogonal to the syntax chosen.
async/await
The keywords async and await exist to make creating and interacting with asynchronous code easier to read, and to make it look more like "normal" synchronous code. This is true in all of the languages that have such keywords, as far as I am aware.
Simpler code
This is code that creates a future which adds two numbers when polled.
before
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

fn long_running_operation(a: u8, b: u8) -> impl Future<Output = u8> {
    struct Value(u8, u8);

    impl Future for Value {
        type Output = u8;

        fn poll(self: Pin<&mut Self>, _ctx: &mut Context) -> Poll<Self::Output> {
            Poll::Ready(self.0 + self.1)
        }
    }

    Value(a, b)
}
after
async fn long_running_operation(a: u8, b: u8) -> u8 {
    a + b
}
Note that the "before" code is basically the implementation of today's poll_fn function.
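For comparison, here is a minimal sketch of the same always-ready future built with poll_fn (available as std::future::poll_fn on recent stable Rust, and as futures::future::poll_fn in the futures crate) instead of a hand-written Future type:

use std::future::{poll_fn, Future};
use std::task::Poll;

fn long_running_operation(a: u8, b: u8) -> impl Future<Output = u8> {
    // The closure is the poll implementation; this future is ready immediately.
    poll_fn(move |_ctx| Poll::Ready(a + b))
}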
See also Peter Hall's answer about how keeping track of many variables can be made nicer.
References
One of the potentially surprising things about async/await is that it enables a specific pattern that wasn't possible before: using references in futures. Here's some code that fills up a buffer with a value in an asynchronous manner:
before
use futures::{Future, FutureExt}; // 0.3.1
use std::io;

fn fill_up<'a>(buf: &'a mut [u8]) -> impl Future<Output = io::Result<usize>> + 'a {
    futures::future::lazy(move |_| {
        for b in buf.iter_mut() {
            *b = 42
        }
        Ok(buf.len())
    })
}

fn foo() -> impl Future<Output = Vec<u8>> {
    let mut data = vec![0; 8];
    fill_up(&mut data).map(|_| data)
}
This fails to compile:
error[E0597]: `data` does not live long enough
  --> src/main.rs:33:13
   |
33 |     fill_up(&mut data).map(|_| data)
   |             ^^^^^^^^^ borrowed value does not live long enough
34 | }
   | - `data` dropped here while still borrowed
   |
   = note: borrowed value must be valid for the static lifetime...

error[E0505]: cannot move out of `data` because it is borrowed
  --> src/main.rs:33:28
   |
33 |     fill_up(&mut data).map(|_| data)
   |             ---------      ^^^ ---- move occurs due to use in closure
   |             |              |
   |             |              move out of `data` occurs here
   |             borrow of `data` occurs here
   |
   = note: borrowed value must be valid for the static lifetime...
after
use std::io;

async fn fill_up(buf: &mut [u8]) -> io::Result<usize> {
    for b in buf.iter_mut() {
        *b = 42
    }
    Ok(buf.len())
}

async fn foo() -> Vec<u8> {
    let mut data = vec![0; 8];
    fill_up(&mut data).await.expect("IO failed");
    data
}
This works!
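A hypothetical driver, just to show the future above being run to completion (the buffer ends up filled with 42s):

fn main() {
    // block_on comes from the futures crate's built-in executor.
    let data = futures::executor::block_on(foo());
    assert_eq!(data, vec![42; 8]);
}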
Calling an async function does not run anything
The implementation and design of a Future, and the entire system around futures, are unrelated to the keywords async and await. Indeed, Rust had a thriving asynchronous ecosystem (such as Tokio) before the async/await keywords ever existed. The same was true for JavaScript.
Why aren't Futures polled immediately on creation?
For the most authoritative answer, check out this comment from withoutboats on the RFC pull request:
A fundamental difference between Rust's futures and those from other languages is that Rust's futures do not do anything unless polled. The whole system is built around this: for example, cancellation is dropping the future for precisely this reason. In contrast, in other languages, calling an async fn spins up a future that starts executing immediately.
A point about this is that async & await in Rust are not inherently concurrent constructions. If you have a program that only uses async & await and no concurrency primitives, the code in your program will execute in a defined, statically known, linear order. Obviously, most programs will use some kind of concurrency to schedule multiple, concurrent tasks on the event loop, but they don't have to. What this means is that you can - trivially - locally guarantee the ordering of certain events, even if there is nonblocking IO performed in between them that you want to be asynchronous with some larger set of nonlocal events (e.g. you can strictly control ordering of events inside of a request handler, while being concurrent with many other request handlers, even on two sides of an await point).
This property gives Rust's async/await syntax the kind of local reasoning & low-level control that makes Rust what it is. Running up to the first await point would not inherently violate that - you'd still know when the code executed, it would just execute in two different places depending on whether it came before or after an await. However, I think the decision made by other languages to start executing immediately largely stems from their systems which immediately schedule a task concurrently when you call an async fn (for example, that's the impression of the underlying problem I got from the Dart 2.0 document).
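As a minimal sketch of "cancellation is dropping the future" from the comment above: the body of the async fn below never executes, because the future it returns is dropped without ever being polled.

async fn never_runs() {
    println!("this line is never reached");
}

fn main() {
    let fut = never_runs(); // calling the async fn runs nothing
    drop(fut);              // dropping the future cancels the (never-started) work
}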
Some of the Dart 2.0 background is covered by this discussion from munificent:
Hi, I'm on the Dart team. Dart's async/await was designed mainly by Erik Meijer, who also worked on async/await for C#. In C#, async/await is synchronous to the first await. For Dart, Erik and others felt that C#'s model was too confusing and instead specified that an async function always yields once before executing any code.
At the time, I and another on my team were tasked with being the guinea pigs to try out the new in-progress syntax and semantics in our package manager. Based on that experience, we felt async functions should run synchronously to the first await. Our arguments were mostly:
Always yielding once incurs a performance penalty for no good reason. In most cases, this doesn't matter, but in some it really does. Even in cases where you can live with it, it's a drag to bleed a little perf everywhere.
Always yielding means certain patterns cannot be implemented using async/await. In particular, it's really common to have code like (pseudo-code here):
getThingFromNetwork():
    if (downloadAlreadyInProgress):
        return cachedFuture
    cachedFuture = startDownload()
    return cachedFuture
In other words, you have an async operation that you can call multiple times before it completes. Later calls use the same previously-created pending future. You want to ensure you don't start the operation multiple times. That means you need to synchronously check the cache before starting the operation.
If async functions are async from the start, the above function can't use async/await.
We pleaded our case, but ultimately the language designers stuck with async-from-the-top. This was several years ago.
That turned out to be the wrong call. The performance cost is real enough that many users developed a mindset that "async functions are slow" and started avoiding using it even in cases where the perf hit was affordable. Worse, we see nasty concurrency bugs where people think they can do some synchronous work at the top of a function and are dismayed to discover they've created race conditions. Overall, it seems users do not naturally assume an async function yields before executing any code.
So, for Dart 2, we are now taking the very painful breaking change to change async functions to be synchronous to the first await and migrating all of our existing code through that transition. I'm glad we're making the change, but I really wish we'd done the right thing on day one.
I don't know if Rust's ownership and performance model place different constraints on you where being async from the top really is better, but from our experience, sync-to-the-first-await is clearly the better trade-off for Dart.
cramert replies (note that some of this syntax is outdated now):
If you need code to execute immediately when a function is called rather than later on when the future is polled, you can write your function like this:
fn foo() -> impl Future<Item = Thing> {
    println!("prints immediately");
    async_block! {
        println!("prints when the future is first polled");
        await!(bar());
        await!(baz())
    }
}
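With today's stable syntax, the same idea looks like this (a sketch; Thing, bar, and baz are stand-ins for the names in the quote):

use std::future::Future;

struct Thing;

async fn bar() {}
async fn baz() -> Thing {
    Thing
}

fn foo() -> impl Future<Output = Thing> {
    println!("prints immediately");
    // The async block replaces async_block!, and .await replaces await!().
    async {
        println!("prints when the future is first polled");
        bar().await;
        baz().await
    }
}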
Code examples
These examples use the async support in Rust 1.39 and the futures crate 0.3.1.
Literal transcription of the C# code
use futures; // 0.3.1

async fn long_running_operation(a: u8, b: u8) -> u8 {
    println!("long_running_operation");
    a + b
}

fn another_operation(c: u8, d: u8) -> u8 {
    println!("another_operation");
    c * d
}

async fn foo() -> u8 {
    println!("foo");

    let sum = long_running_operation(1, 2);

    another_operation(3, 4);

    sum.await
}

fn main() {
    let task = foo();

    futures::executor::block_on(async {
        let v = task.await;
        println!("Result: {}", v);
    });
}
If you called foo, the sequence of events in Rust would be:
- Something implementing Future<Output = u8> is returned.
That's it. No "actual" work is done yet. If you take the result of foo and drive it towards completion (by polling it, in this case via futures::executor::block_on), then the next steps are:
- Something implementing Future<Output = u8> is returned from calling long_running_operation (it does not start work yet).
- another_operation does work, as it is synchronous.
- The .await syntax causes the code in long_running_operation to start. The foo future will continue to return "not ready" until the computation is done.
The output would be:
foo
another_operation
long_running_operation
Result: 3
Note that there are no thread pools here: this is all done on a single thread.
async blocks
You can also use async blocks:
use futures::{future, FutureExt}; // 0.3.1

fn long_running_operation(a: u8, b: u8) -> u8 {
    println!("long_running_operation");
    a + b
}

fn another_operation(c: u8, d: u8) -> u8 {
    println!("another_operation");
    c * d
}

async fn foo() -> u8 {
    println!("foo");

    let sum = async { long_running_operation(1, 2) };
    let oth = async { another_operation(3, 4) };

    let both = future::join(sum, oth).map(|(sum, _)| sum);

    both.await
}
Here we wrap synchronous code in an async block and then wait for both actions to complete before this function will be complete.
Note that wrapping synchronous code like this is not a good idea for anything that will actually take a long time; see What is the best approach to encapsulate blocking I/O in future-rs? for more info.
With a threadpool
// Requires the `thread-pool` feature to be enabled
use futures::{executor::ThreadPool, future, task::SpawnExt, FutureExt};

// Reuses the synchronous long_running_operation and another_operation
// functions from the previous example.
async fn foo(pool: &mut ThreadPool) -> u8 {
    println!("foo");

    let sum = pool
        .spawn_with_handle(async { long_running_operation(1, 2) })
        .unwrap();
    let oth = pool
        .spawn_with_handle(async { another_operation(3, 4) })
        .unwrap();

    let both = future::join(sum, oth).map(|(sum, _)| sum);

    both.await
}
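A hypothetical driver for the threadpool version above (building the pool likewise needs the `thread-pool` feature):

fn main() {
    let mut pool = ThreadPool::new().expect("unable to build the pool");

    let v = futures::executor::block_on(foo(&mut pool));
    println!("Result: {}", v);
}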
Solution 2:
Consider this simple pseudo-JavaScript code that fetches some data, processes it, fetches some more data based on the previous step, summarises it, and then prints a result:
getData(url)
    .then(response -> parseObjects(response.data))
    .then(data -> findAll(data, 'foo'))
    .then(foos -> getWikipediaPagesFor(foos))
    .then(sumPages)
    .then(sum -> console.log("sum is: ", sum));
In async/await form, that's:
async {
    let response = await getData(url);
    let objects = parseObjects(response.data);
    let foos = findAll(objects, 'foo');
    let pages = await getWikipediaPagesFor(foos);
    let sum = sumPages(pages);
    console.log("sum is: ", sum);
}
It introduces a lot of single-use variables and is arguably worse than the original version with promises. So why bother?
Consider this change, where the variables response and objects are needed later on in the computation:
async {
    let response = await getData(url);
    let objects = parseObjects(response.data);
    let foos = findAll(objects, 'foo');
    let pages = await getWikipediaPagesFor(foos);
    let sum = sumPages(pages, objects.length);
    console.log("sum is: ", sum, " and status was: ", response.status);
}
And try to rewrite it in the original form with promises:
getData(url)
    .then(response -> Promise.resolve(parseObjects(response.data))
        .then(objects -> Promise.resolve(findAll(objects, 'foo'))
            .then(foos -> getWikipediaPagesFor(foos))
            .then(pages -> sumPages(pages, objects.length)))
        .then(sum -> console.log("sum is: ", sum, " and status was: ", response.status)));
Each time you need to refer back to a previous result, you need to nest the entire structure one level deeper. This can quickly become very difficult to read and maintain, but the async/await version does not suffer from this problem.
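The same shape carries over directly to Rust. Here is a hedged sketch with stub data and hypothetical helpers (get_data, parse_objects, find_all, get_wikipedia_pages_for, and sum_pages are invented for illustration); the point is only that response and objects stay usable after the later .await without any nesting:

struct Response {
    data: String,
    status: u16,
}

// Hypothetical stand-ins for the network and parsing steps above.
async fn get_data(_url: &str) -> Response {
    Response { data: "foo bar foo".into(), status: 200 }
}

fn parse_objects(data: &str) -> Vec<String> {
    data.split_whitespace().map(String::from).collect()
}

fn find_all(objects: &[String], needle: &str) -> Vec<String> {
    objects.iter().filter(|o| o.as_str() == needle).cloned().collect()
}

async fn get_wikipedia_pages_for(foos: &[String]) -> Vec<String> {
    foos.iter().map(|f| format!("page about {}", f)).collect()
}

fn sum_pages(pages: &[String], extra: usize) -> usize {
    pages.iter().map(|p| p.len()).sum::<usize>() + extra
}

async fn summarize(url: &str) -> usize {
    let response = get_data(url).await;
    let objects = parse_objects(&response.data);
    let foos = find_all(&objects, "foo");
    let pages = get_wikipedia_pages_for(&foos).await;
    // Both `objects` and `response` are still in scope here, after the .await.
    let sum = sum_pages(&pages, objects.len());
    println!("sum is: {} and status was: {}", sum, response.status);
    sum
}

fn main() {
    futures::executor::block_on(summarize("https://example.invalid"));
}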
Solution 3:
The purpose of async/await in Rust is to provide a toolkit for concurrency, the same as in C# and other languages.
In C# and JavaScript, async methods start running immediately, and they're scheduled whether you await the result or not. In Python and Rust, when you call an async method, nothing happens (it isn't even scheduled) until you await it. But it's largely the same programming style either way.
The ability to spawn another task (that runs concurrently with and independently of the current task) is provided by libraries: see async_std::task::spawn and tokio::task::spawn.
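For example, here is a minimal sketch of spawning an independent task with Tokio (assuming the tokio crate with its "full" feature set; async-std's task::spawn is analogous):

use tokio::task;

#[tokio::main]
async fn main() {
    // The spawned task runs concurrently with, and independently of, this one.
    let handle = task::spawn(async { 1 + 2 });

    let sum = handle.await.expect("the task panicked");
    println!("sum = {}", sum);
}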
As for why Rust async is not exactly like C#, well, consider the differences between the two languages:
- Rust discourages global mutable state. In C# and JS, every async method call is implicitly added to a global mutable queue. It's a side effect to some implicit context. For better or worse, that's not Rust's style.
- Rust is not a framework. It makes sense that C# provides a default event loop. It also provides a great garbage collector! Lots of things that come standard in other languages are optional libraries in Rust (see the sketch after this list).
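A minimal sketch of what "bring your own event loop" looks like in practice: nothing runs until you hand the future to an executor you picked yourself (here, the simple single-threaded one bundled with the futures crate; Tokio or async-std would work just as well):

use futures::executor::block_on; // futures 0.3

async fn hello() -> &'static str {
    "hello from a future"
}

fn main() {
    // The future is inert until the executor of your choosing polls it.
    println!("{}", block_on(hello()));
}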