Functions == awesome!
Functional programming
FaaS
Lambdas
Purity
...
...but if this talk were about functions, you would have passed me control when you arrived and you'd have to sit still for the whole talk until I'm done and pass control back to you. Since this talk is about coroutines, I can suspend it so you can go grab a beer, come back when you're done, and let me continue the talk right where I was!
co-routine
(ˈkəʊruːˌtiːn)
n
(Computer Science) computing a section of a computer
program similar to but differing from a subroutine
in that it can be left and re-entered at any point
Coroutine. (n.d.) Collins English Dictionary – Complete and Unabridged, 12th Edition 2014. (1991, 1994, 1998, 2000, 2003, 2006, 2007, 2009, 2011, 2014).
Retrieved October 11, 2021, from The Free Dictionary
co-routine
Coroutines are computer program components that generalize
subroutines for non-preemptive [== cooperative] multitasking,
by allowing execution to be suspended and resumed. Coroutines
are well-suited for implementing familiar program components
such as cooperative tasks, exceptions, event loops,
iterators, infinite lists and pipes.
Wikipedia contributors. (2021, October 11). Coroutine. In Wikipedia, The Free Encyclopedia.
Retrieved 15:32, October 11, 2021, from Wikipedia
Some history
The term coroutine was coined around 1958-1963 by Melvin E. Conway:
Under these conditions each module may be made into a coroutine;
that is, it may be coded as an autonomous program which
communicates with adjacent modules as if they were input or
output subroutines. Thus, coroutines are subroutines all at the
same level, each acting as if it were the master program when
in fact there is no master program.
Melvin E. Conway: Design of a Separable Transition-Diagram Compiler
Some history
During the 1970s, coroutines were actively researched and
experimented with, but support for them disappeared from
later mainstream general-purpose languages for a long time.
Rejoice now, because the Dark Ages of the coroutines
are coming to an end.
The Monad Reader, Issue 19 (October 26, 2011)
Pre-emptive
traditional threads (usually)
OS can switch executing threads at any point
multiple threads can mutate a data structure "at the same time" (see the race sketch below)
need to ensure that only one thread can execute a specific block at a time
need to ensure other threads see the changes made (that is: remove some optimizations)
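A minimal sketch of the hazard, using plain JVM threads in Kotlin (the counter and iteration counts are purely illustrative): two OS threads increment a shared counter with no synchronization, so the scheduler can pre-empt one of them mid-update and increments get lost.

import kotlin.concurrent.thread

// Two OS threads bump the same counter with no locks or atomics.
// The OS may switch threads in the middle of the read-increment-write,
// so updates get lost and visibility is not guaranteed.
fun main() {
    var counter = 0
    val t1 = thread { repeat(1_000_000) { counter++ } }
    val t2 = thread { repeat(1_000_000) { counter++ } }
    t1.join()
    t2.join()
    println(counter)   // non-deterministic, typically less than 2_000_000
}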
Cooperative
green threads or fibers (usually)
thread can switch only at specified points
need to ensure threads don't block (blocking IO or heavy processing)
still need to ensure other threads see the changes made, if implemented on more than one OS thread
a bitcoin-mining function would not let other threads run
Mitigating pre-emption is difficult!
locks
barriers
synchronized blocks
volatiles
atomicity
concurrent data structures…
-> concurrent programming is generally considered extremely difficult
Cooperative multitasking offers an easier path
mutations don't occur "at the same time"
w/ asynchronous IO -> ~no blocking
w/ only 1 OS thread -> no synchronization issues (see the sketch below)
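For contrast, a minimal cooperative sketch (assuming kotlinx.coroutines on the classpath): both tasks share the single thread of runBlocking and can only switch at explicit suspension points, so the same unsynchronized counter stays consistent without any locks.

import kotlinx.coroutines.*

// Two coroutines share one OS thread (runBlocking's event loop).
// Switches happen only at suspension points such as yield(),
// so counter++ is never interrupted mid-update.
fun main() = runBlocking {
    var counter = 0
    val jobs = List(2) {
        launch {
            repeat(1_000_000) { i ->
                counter++
                if (i % 1_000 == 0) yield()   // cooperative switch point
            }
        }
    }
    jobs.forEach { it.join() }
    println(counter)   // always 2_000_000
}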
Synchronous IO
traditional IO
a.k.a. blocking IO
execution doesn't proceed until IO call returns
Asynchronous IO
a.k.a. non-blocking IO
execution can proceed until the value is needed
Symmetric coroutines
control flow is passed forward to another coroutine
no "returning" or "yielding"
almost like goto 😱
Asymmetric coroutines
control flow can be
passed to another coroutine (resume)
passed back to the invoker (yield)
more structured -> easier to comprehend
Constrained coroutines
can only be resumed/suspended in specific places the language supports
may be simpler to use
e.g. async-await
First-class coroutines
can be freely passed around and thus invoked whenever and wherever
more powerful
Non-stackful coroutines
can only be suspended directly in coroutine body
e.g. Python generator can only yield in the body
e.g. JavaScript can only await inside an async function body
# Nope: yield only suspends the generator whose body it appears in,
# so you can't yield to mygen's caller from inside a nested call
def myfun():
    yield 42

def mygen():
    myfun()        # just creates (and discards) a generator object

g = mygen()        # mygen has no yield, so it returns None
print(next(g))     # TypeError: 'NoneType' object is not an iterator
Stackful coroutines
can be suspended wherever, also inside function calls
e.g. Lua
function mygen ()
    coroutine.yield(42)
end

co = coroutine.create(function ()
    mygen()                     -- yielding inside a nested call works in Lua
end)

_, b = coroutine.resume(co)     -- resume returns true, 42
print(b)                        -- 42
Generator
a limited form of coroutine
always yields to the caller
always produces a sequence one value at a time (see the sketch below)
first appeared in CLU (1975)
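For illustration, a minimal generator sketch in Kotlin (the naturals function is made up): the sequence builder suspends at every yield and resumes only when the consumer asks for the next value, which is also what makes infinite sequences cheap.

// Lazily yields 0, 1, 2, ... — the block suspends between values.
fun naturals(): Sequence<Int> = sequence {
    var n = 0
    while (true) {
        yield(n)   // suspend here until the caller requests the next element
        n++
    }
}

fun main() {
    println(naturals().take(5).toList())   // [0, 1, 2, 3, 4]
}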
Continuations/CPS
CPS == Continuation-Passing Style
programming style where "rest of the program" is passed on as an argument
requires tail-call optimization to avoid blowing up the stack
difficult to use, thus not common
can be used to implement coroutines
package main

import "fmt"

func main() {
    hello(func() { world(func() {}) })
}

// hello does its work, then calls its continuation k ("the rest of the program")
func hello(k func()) {
    fmt.Println("hello")
    k()
}

func world(k func()) {
    fmt.Println("world")
    k()
}
Call/cc
Call with Current Continuation
programming style where "rest of the program" is passed kind of implicitly to a given function
originated in Scheme
extremely difficult to use, thus not common
can be used to implement coroutines as well as many other things
Please don't ask me to explain what call/cc is. Let's say I'll leave it as an exercise for the interested audience ;)
Async/await
syntactic sugar to hide Promise/Future/Task/Async
really common nowadays
tends to spread asynchrony up the stack
turns code kind of upside down
normally everything is blocking except when explicitly executed asynchronously
in async/await everything is async except when explicitly awaited (see the sketch below)
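A hedged async/await sketch in Kotlin (kotlinx.coroutines; fetchUrl is a made-up stand-in for a real non-blocking HTTP call): async starts the work and returns a Deferred, and await() suspends the caller without tying up the thread. Note how the suspend/async markers propagate to every caller, which is the "spreading up the stack" mentioned above.

import kotlinx.coroutines.*

// fetchUrl stands in for a non-blocking HTTP call; delay() simulates
// network latency without blocking the thread.
suspend fun fetchUrl(url: String): String {
    delay(100)
    return "content of $url"
}

fun main() = runBlocking {
    // start both fetches concurrently; each async returns a Deferred<String>
    val a = async { fetchUrl("https://example.com/a") }
    val b = async { fetchUrl("https://example.com/b") }
    println(a.await())   // suspends this coroutine, the thread stays free
    println(b.await())
}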
Goroutines
green threads in Go language
partly pre-emptive, multiple OS threads
runtime moves other goroutines to another OS thread on a blocking call
have nothing to do with coroutines
Kotlin
recent, advanced, popular JVM language
language support for suspending functions
compiler transforms code into CPS behind the scenes
enables implementing coroutines as a library
no support needed from JVM
optionally multiple OS threads
user can select a suitable "context" for coroutine execution (e.g. the Swing UI thread); see the sketch below
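A hedged sketch of how that looks in practice (kotlinx.coroutines; fetchBlocking is a made-up blocking call): suspend is the language-level support, the compiler does the CPS rewrite behind the scenes, and the dispatcher chosen at launch decides which thread(s) the coroutine runs on.

import kotlinx.coroutines.*

// `suspend` marks a suspendable function; the compiler rewrites it into
// continuation-passing style, so no JVM-level support is required.
suspend fun fetch(url: String): String =
    withContext(Dispatchers.IO) {   // shift the blocking part to the IO pool
        fetchBlocking(url)          // made-up blocking call
    }

fun fetchBlocking(url: String): String = "content of $url"

fun main() = runBlocking {
    // pick a "context" for execution; a UI app would use its main/Swing dispatcher here
    launch(Dispatchers.Default) {
        println(fetch("https://example.com"))
    }
}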
Haskell (GHC)
built-in asynchronous IO manager
by default only runs in single OS thread
foreign calls (not IO) will block all threads
pre-emptive multitasking, but not a problem due to lack of mutation
pre-empts on memory allocation
tight loops might block other threads
Examples, please?
Example application: URL content processor.
Opinionated and imperfect, feel free to disagree!
traditional blocking
String content1 = fetchBlocking(url1);
process(content1);
String content2 = fetchBlocking(url2);
process(content2);
simple, straightforward
no concurrency issues
blocking
reserves OS thread while waiting for IO
"manually asynchronous"
AtomicReference<String> fetchManualAsync(Url url) {
    AtomicReference<String> ret = new AtomicReference<>();
    Executors.newSingleThreadExecutor().execute(() -> {
        ret.set(fetchBlocking(url));
    });
    return ret;
}
AtomicReference<String> content1 = fetchManualAsync(url1);
AtomicReference<String> content2 = fetchManualAsync(url2);
while (content1.get() == null) {}   // busy-wait until the first fetch completes
process(content1.get());
while (content2.get() == null) {}
process(content2.get());
fetching is non-blocking...
...but reserves OS thread while waiting for IO
processing is blocking
using concurrency primitives -> complex
concurrency issues if primitives not used correctly
horrible example, please don't use!
ExecutorService is... AtomicReference is for...
with callbacks
ExecutorService pool = Executors.newCachedThreadPool();

void fetchCallback(Url url, Consumer<String> callback) {
    pool.submit(() -> callback.accept(fetchBlocking(url)));
}
fetchCallback(url1, content -> process(content));
fetchCallback(url2, content -> process(content));
fetching is non-blocking
processing is non-blocking
processes immediately when first fetch completes
but reserves OS thread while waiting for IO
potential concurrency issues
callback-hell
communicating between tasks is not straightforward
callbacks with async IO
void fetchCallbackAsyncIO(Url url, Consumer<String> callback) {
    fetchUrlAsyncIO(url, content -> callback.accept(content));
}
fetchCallbackAsyncIO(url1, content -> process(content));
fetchCallbackAsyncIO(url2, content -> process(content));
fetching is non-blocking
processing is non-blocking
processes immediately when first fetch completes
potential concurrency issues
callback-hell
communicating between tasks is not straightforward
with Futures
ExecutorService pool = Executors.newCachedThreadPool();

Future<String> fetchFuture(Url url) {
    return pool.submit(() -> fetchBlocking(url));
}
Future<String> content1 = fetchFuture(url1);
Future<String> content2 = fetchFuture(url2);
process(content1.get());
process(content2.get());
fetching is non-blocking
no concurrency issues
but reserves OS thread while waiting for IO
processing is blocking
waits for url1 before processing anything
Executors, Futures…
with Futures, completions
CompletionService<String> comp =
    new ExecutorCompletionService<>(Executors.newCachedThreadPool());
comp.submit(() -> fetchBlocking(url1));
comp.submit(() -> fetchBlocking(url2));
process(comp.take().get());
process(comp.take().get());
fetching is non-blocking
no concurrency issues
processes immediately when first fetch completes
there are many ways to say "execute these futures in parallel", but we must say it somehow
but reserves OS thread while waiting for IO
Executors, Futures, Completions…
I didn't bother to shorten the line...
with async-await
// pseudocode: Java has no async/await, imagine a language that does
ExecutorService pool = Executors.newCachedThreadPool();

async String fetchFuture(Url url) {
    return pool.submit(() -> fetchBlocking(url));
}
async void foo() {
    String content1 = fetchFuture(url1);
    String content2 = fetchFuture(url2);
    process(await content1);
    process(await content2);
}
fetching is non-blocking
no concurrency issues
+- hides Futures, but brings async, await…
waits for url1 before processing anything
processing is blocking
tends to "spread" async up the stack
reserves OS thread while waiting for IO
needs language support or macros
coroutines with blocking IO
ExecutorService pool = Executors.newCachedThreadPool();

Coroutine<String> fetchCoroutine(Url url) {
    return new Coroutine<String>() {
        String run() {
            Future<String> ret = pool.submit(() -> fetchBlocking(url));
            yield(ret.get());
        }
    }.start();
}
Coroutine.start(() -> {
    Coroutine<String> content1 = fetchCoroutine(url1);
    Coroutine<String> content2 = fetchCoroutine(url2);
    process(content1.resume());
    process(content2.resume());
});
fetching is non-blocking
processing is non-blocking if using more threads
communicating between tasks is straightforward
reserves OS thread while waiting for IO
ret.get() blocks other coroutines in the same context
potential concurrency issues if multithread, but no data races
.start() makes the coroutine start immediately; however the coroutine API happens to be structured...
coroutines with async IO
Coroutine<String> fetchCoroutineAsyncIO(Url url) {
    return new Coroutine<String>() {
        String run() {
            fetchUrlAsyncIO(url, content -> yield(content));
            return block(); // point of no return
        }
    }.start();
}
Coroutine.start(() -> {
    Coroutine<String> content1 = fetchCoroutineAsyncIO(url1);
    Coroutine<String> content2 = fetchCoroutineAsyncIO(url2);
    Coroutine.start(() -> process(content1.resume()));
    Coroutine.start(() -> process(content2.resume()));
});
fetching is non-blocking
processing is non-blocking if using more threads
doesn't reserve an OS thread while waiting, if the compiler/runtime provides enough support
communicating between tasks is straightforward
potential concurrency issues if multithread, but no data races
littered with "Coroutines", unless language support
coroutines introduce additional state
Example coroutine framework in Java
utilizes fork-join-pool (Java 7, 2011)
cooperative
only one coroutine allowed to execute at a time
multithreaded
every coroutine reserves a thread, because Java
doesn't have macros
doesn't have continuations
doesn't allow moving execution to another thread
block detection with BlockHound
throws an error if a blocking call attempted
link: naive-java-coroutines @github
Conclusion
coroutines provide an easy and safe way to implement concurrency, when concurrency is needed
pure functions are still the recommended approach whenever possible
coroutines introduce state and thus complexity
coroutines need some language support for both implementation and ease of use
Thank you
Keep on routinely exploring and learning!