Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi everyone, I'm Raghav Roy, and today we'll be talking about coroutines
and Go. A little bit about myself: I'm a software engineer at VMware,
where I work on the ESXi hypervisor. When I use Go, it's generally
on my personal projects or when I'm contributing to open source,
something like Kubernetes, maybe. So, just to set
the expectations of the talk a little bit, we're going to be covering: coroutines
as generalized subroutines, so we'll cover a little bit of the basics there, how it all
started, where and how coroutines came to be; classifying
coroutines, why the concept of full coroutines is important,
and how some languages actually don't provide full coroutines;
implementing coroutines in Go using just pure Go definitions
and Go semantics; and finally, some Go runtime
changes that could be made to support coroutines
natively and far more efficiently. So, to begin
with, brushing up on some basics. What are subroutines?
Subroutines are essentially your regular
function calls, and they can do certain things. They can suspend themselves
and run another function, or they can terminate themselves and
yield control back to their caller. These are the two things that your functions can
do. So in this case, say you have a main subroutine. It
suspends itself and starts another function; this function can execute,
and as soon as it has to call another function, it can suspend itself
and call this third function, which starts executing. After
this innermost function has finished
processing, it can terminate and yield control back to its parent
caller, which can terminate again and yield control back to its own parent.
So this is what a regular subroutine looks like.
Now, subroutines are eager and closed. These definitions differ
depending on where you're looking on the Internet, but for this talk,
eager means that expressions a
function encounters are evaluated as soon as they are encountered,
and closed means that your function only returns
after it has evaluated its expressions. So it can't suspend itself
in the middle of processing and yield control back to its invoker.
That's the important difference that we'll see soon with coroutines.
Now, can we define coroutines
as generalized subroutines? What I mean by that is, a coroutine
can do everything a subroutine can, which means it can suspend itself and
run another function, or terminate itself and yield control back to its invoker,
plus this magic special thing at the end, which is: suspend
itself and yield control back to its invoker.
So it just passes control back to its invoker, pausing itself,
essentially. So what could that look like? Say you have a subroutine running over here,
and you want to run this coroutine on the left. So your subroutine suspends itself,
and say it wants to read some output,
that is, output from this coroutine. So your
coroutine starts executing. Once it has the output, it can suspend itself
and yield control back to your subroutine along with the output. And now
you can see that the coroutine has actually stopped here: it has saved its state,
and it's paused in the middle of its execution. The next
time the subroutine wants to read from it, it can just resume this
coroutine, which starts running from its previously stopped position
and outputs the new value that the subroutine requires, which can be
read by the subroutine, and so on and so forth. So it ends up looking
like this: every time this subroutine reads from the coroutine,
it resumes the coroutine, which starts executing again from where it last
stopped. Now you can probably already start seeing why a paradigm
like this could be pretty useful. So coroutines
are like functions that can return multiple times and keep their state,
which means all the local variables plus the instruction pointer,
everything, and they can resume from where they yielded.
So let's look at a quick sample: comparing binary
trees. For those in the audience who like
doing LeetCode, if you've seen a question like this, how would
you do it? You would have some sort of recursive logic.
You know, you have two binary trees, and you want to compare
them and maybe ascertain whether they are equal. So you
would go through all the nodes, say in an in-order fashion,
and save all the values of the nodes. And after you
have traversed both trees, you compare those values.
But what if you could actually do this in
one pass?
Every time you are at a node, you step through both trees,
compare the values of the current node, go to the next node,
again in in-order fashion, and just
keep comparing the values as you go. Something like we just
saw in our subroutine and coroutine example.
Let's see that with code. So in this case,
your subroutine is cmp, this comparison function, and
your coroutine is this visit function. How this is working over here
is: your comparison function is just instantiating two coroutines.
We don't need to worry about how they're implemented; just assume they're black-box
implementations. It creates a new coroutine,
in this case around the visit function, so you're passing visit as a function
argument. And in this while loop, all you're doing
is you first start your first coroutine, co1, and you're
passing t1, which could be the head of the first tree,
and your t2 is probably the head of the second tree.
So you resume this visit function.
So your visit first goes to t.left; it goes all the way to
the leftmost node in your tree,
the leftmost node in the left subtree, and it
then yields that value back to your
resume call. So the next line, yield,
actually yields control back to the comparison function, and this value is stored in
v1. The comparison function then calls
co2, which is another instance of your visit function, and you pass t2
to it, and it goes through that tree and gets
its leftmost node, which in this case is two. And now you
have both the values yielded back into v1 and v2,
which you can then compare. So essentially, you've
started traversing the trees in sort of
a one-pass sort of way: you're passing
control to visit, and as soon as it has an output, the
visit function yields control back to your comparison function, and you can do your comparison.
So again, in the while loop, you call your
resume function. This time it goes to the right subtree from
two, which is four in this case, for both trees,
because, you know, it resumes both coroutines one by one;
they yield back their values, and now you can compare four, and so on and
so forth, until you know both trees are the same. This sort of
example is actually why coroutines came about in the first place,
or why Conway thought of them.
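To make the one-pass traversal concrete, here is a hedged sketch in Go. Instead of the talk's coroutine API, it uses a plain goroutine plus an unbuffered channel as a poor man's coroutine, so only one side makes progress at a time; all names here (Tree, visit, values, cmp) are illustrative, not the speaker's exact code:

```go
package main

import "fmt"

type Tree struct {
	Left, Right *Tree
	Val         int
}

// visit walks t in order, sending each value on ch. Because ch is
// unbuffered, visit pauses at every send until cmp reads the value:
// the same suspend/resume dance as the coroutine example above.
func visit(t *Tree, ch chan<- int) {
	if t != nil {
		visit(t.Left, ch)
		ch <- t.Val
		visit(t.Right, ch)
	}
}

// values starts the traversal as its own control flow and returns the
// channel to pull node values from, closing it when the walk is done.
func values(t *Tree) <-chan int {
	ch := make(chan int)
	go func() {
		visit(t, ch)
		close(ch)
	}()
	return ch
}

// cmp pulls one value from each tree per iteration and compares them,
// so both trees are traversed together in a single pass.
func cmp(t1, t2 *Tree) bool {
	c1, c2 := values(t1), values(t2)
	for {
		v1, ok1 := <-c1
		v2, ok2 := <-c2
		if ok1 != ok2 || v1 != v2 {
			return false // mismatch, or one tree ran out first
		}
		if !ok1 {
			return true // both trees exhausted together
		}
	}
}

func main() {
	t1 := &Tree{Val: 2, Left: &Tree{Val: 1}, Right: &Tree{Val: 3}}
	t2 := &Tree{Val: 2, Left: &Tree{Val: 1}, Right: &Tree{Val: 3}}
	fmt.Println(cmp(t1, t2)) // true: identical in-order values
}
```

One caveat this glosses over: on an early mismatch, the walker goroutines are left blocked on their sends, which is exactly the kind of cleanup the cancellation mechanism later in the talk deals with.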
It's 1958, and you want to compile your COBOL
program. That's Grace Hopper, by the way,
who is a co-developer of COBOL. Now, the terms over
here, even if you don't understand them, I'll simplify. So,
your basic symbol reducer, you can think of it as your lexer.
And if you don't know what a lexer is, that's fine too, because all
you need to care about over here is that you have two processes, like you
had in your binary tree comparison example, that sort of depend
on each other for their outputs. So in this case, your basic symbol
reducer acts like a lexer. What your main program
does is it has actual physical punched cards.
That's how compilation worked back in 1958.
And you give these punched cards to your basic symbol
reducer, your lexer, which eats the punched cards and spews out a
bunch of tokens. Tokens are the output
of your symbol reducer, or your lexer, right?
Now, these tokens can be read by a parser; the tokens
become the input to your parser, in this case your basic
name reducer, or name lookup as it would be called today,
which, you know, puts its output into the next stage.
So what's happening over here is sort of like our first subroutine example.
Your main program's control goes to your basic
symbol reducer, which comes up with a bunch of
tokens as output and essentially returns
these outputs back to your main program. Your main program
then calls your name reducer, with the output of the basic
symbol reducer as its input.
And, you know, your name reducer can then start parsing your tokens.
So this is kind of what it looks like: your main
program calls the symbol reducer, the symbol reducer writes a bunch of
tapes, and these tapes
are used as input to your name reducer.
You end up with a bunch of extra tapes that you don't need anymore,
and this entire process involved a bunch of extra machinery.
So Conway thought there had to be a better way to
pass the output that you get from your symbol reducer, your lexer, to your
parser, your name lookup, without all this expensive
machinery. We'll start to see how he actually
thought of the coroutine. He realized that subroutines,
which was the previous implementation, were just a special case of the more generalized coroutine.
This is the first thing that we saw in this talk. And we don't need to write
to tape; we can just pass control back and forth between
these two processes, because the input of one process is the
output of the other. So you don't have to return;
you can just yield control, and you bypass all
this machinery. And this is how it ends up looking, which is very similar to
our previous example. So every time your name reducer, your parser,
wants to read a token from your lexer, your symbol reducer,
it suspends itself and resumes this coroutine.
And as soon as your symbol reducer has a token available, it suspends itself and
yields control, along with the output, to your name reducer, your parser,
which can do whatever it wants with that output. And when it wants the next
token, it resumes your coroutine again,
which yields back another token, and so on and so forth, till you reach the
end of your main function. Right? So this way,
raising the level of abstraction actually led to less costly control
structures, leading to a one-pass COBOL compiler. So this
was an example of a negative-cost abstraction, because generally,
when you're trying to abstract away logic, you're increasing
the cost, because something else under the hood is now taking care of this easier
abstraction that you've created.
So, a side
note: this was the paper that actually coined the term coroutine, by Melvin Conway.
I've linked to this paper at the end of the talk, and you should definitely
check it out. Where are coroutines now?
Considering all that we've talked about so far, coroutines should be a
common pattern provided by most languages, right? Seems pretty useful.
But with a few rare exceptions, few languages provide them.
And those that do, and I'm guessing some of you already
have some ideas of other languages calling their constructs
coroutines, actually provide a limited variant
of a coroutine. We'll see what makes them limited and
why this could actually affect the
expressive power of a coroutine. So the problems with coroutines,
the reasons why we don't see them everywhere,
are: one, there's a lack of a uniform
view of the concept. There aren't a lot of formal definitions that
people agree on for what a coroutine means; there's no
precise definition for it. And secondly, and more importantly, the reason
why coroutines aren't provided as a facility in most mainstream languages
is the advent of ALGOL 60. So this language brought
with it block-scoped variables.
So you no longer had parameters and return values stored in global
memory. Instead, everything was stored relative to a stack pointer.
So your functions are stored on a stack, in stack frames,
right, with the return addresses, all your local variables;
everything is on the stack. Now, you have your stack, and you have f's
activation record. So f is a function, and its activation record just
contains all its local variables and return addresses, and the stack
pointer is pointing at the top of the stack. Your function can call
more functions; in this case it calls function g, which gets its own
activation record, which can call function h, and so on.
So how would you try to implement a coroutine in this
sort of a paradigm? Well, say your f,
in this case, wants to call a coroutine g. You could have
g running on a separate side stack, alongside this
thread's own stack.
And your stack pointer now has to move from one stack to
the other, and all the thread context, your return addresses,
saved registers, everything, has to move to this new stack,
which is where your coroutine g runs. And coroutine g
can call more coroutines, like h. Now, what
if there's another function, not f, but some other function, that's interested in the
outputs that these coroutines are producing? Well,
you can think of it as a second stack; you have a thread-two stack.
This is different from f's stack, and you have some function
z that's running over there with its own stack pointer.
Now, when control comes to z, the stack pointer has to move over
into that second stack, along with the other thread context and everything else.
So this almost starts mimicking heavyweight multithreading,
which increases the memory footprint, rather than the cheap
abstraction that coroutines were meant to be in our previous examples.
Right. What about the precise-definition
problem? Well, Marlin's doctoral thesis
is actually widely acknowledged as a reference for this,
and it just summarizes whatever we have discussed so
far, which is: the values local to a coroutine
persist between successive calls (you know, it can pause itself and
save its state there), and the execution of a coroutine
is suspended as control leaves it, only to carry on from where
it left off when control re-enters it. So when a coroutine
yields, control leaves it where it
yielded, and when you resume a coroutine,
it starts processing from where it had stopped. Basically, what we've already discussed.
So now that we have the basics and the history out of the
way, and hopefully have made a case for their usefulness,
let's build up to what a coroutine looks like in Go. But before that,
very quickly, I promise this is relevant:
let's look at what we mean by a full coroutine,
and at how some languages like Python and Kotlin, which actually provide coroutines,
don't actually provide full coroutines. So,
just for the sake of completeness,
let's start classifying coroutines. Well, one kind of
classification is symmetric coroutines, which are different from what we've been seeing so far.
A symmetric coroutine mechanism has only one control-transfer operation,
which allows coroutines to explicitly pass control amongst themselves.
But what we're interested in is asymmetric coroutine mechanisms, which
provide two control-transfer operations; in our case, the resume
and yield operations. So: one for
invoking the coroutine, resume, and one for suspending it, yield,
where the latter returns control to the invoker. Coroutine
mechanisms that support concurrent programming usually provide symmetric coroutines,
while coroutine mechanisms intended for producing
sequences of values usually provide asymmetric coroutines. And you
might think, wait, then why not just use symmetric coroutines? It seems like
they're doing the more important things; you know, concurrent programming
is what Go is all about. But actually,
the good news is that you can use asymmetric coroutines to mimic symmetric
coroutines, and asymmetric coroutines are way easier to write
and maintain. I have linked an article at the end of the
slides where you can see
how symmetric and asymmetric coroutines are implemented, and how you
can implement symmetric coroutines on top of asymmetric ones using your
resume and yield functions. So, with the advantage
that asymmetric coroutines are easier to write, when we implement
our Go API, we'll implement
asymmetric coroutines. Secondly, you would
want your coroutines to be provided as first-class objects
by your language. You know, this has a huge
influence on their expressive power, and coroutines that are constrained
within language bounds and cannot be directly manipulated
by the programmer actually suffer a negative effect
on their expressive power. So what do we mean by a first-class object?
Something we've already seen with the binary tree comparison example: the coroutine
should be able to appear in an expression, you should be able to assign it
to a variable, you should be able to use it as an argument,
return it from a function call, you know, all the good stuff.
Finally, stackfulness. This is the important one. This is what
actually differentiates Python's and Kotlin's coroutines, and those of most
other languages, from full coroutines,
because when you have stackful coroutines, you can actually
transfer control between nested functions. So even if your coroutine
isn't at the topmost stack frame, even if the coroutine is somewhere in the
middle of the stack, you can still yield control from the coroutine back to its
invoker. When you have stackless coroutines, like Python's and Kotlin's, they are
not full coroutines: the only way you can,
you know, pass control between two coroutines, or between a
coroutine and a subroutine, is from
the topmost frame; only the topmost coroutine and the topmost subroutine can
pass control between themselves, which actually limits what your
coroutine can do. Basically, your full coroutine should be
stackful, should be provided as a first-class object, and we
can implement them as asymmetric coroutines.
So full coroutines can be used to implement generators and
iterators, all the way up to cooperative multitasking.
And just providing an asymmetric coroutine mechanism is sufficient,
because we can use it to implement symmetric coroutines, and
asymmetric coroutines are just much easier to maintain and implement.
Here's a nice way to show the limitations of coroutines.
So, cooperative multitasking, right?
In a cooperative multitasking environment, your concurrent tasks
are interleaved: one task runs and
stops itself, then the second task runs. So this is interleaving, and
it needs to be deterministic. But coroutines,
by definition, are not preemptive. So there's a
fairness problem that can arise if you have a bunch of coroutines running
in your kernel, because kernel space doesn't have just
one program that's trying to operate,
that's collaborating within itself. There are actually a bunch of
programs running, waiting for CPU resources. Now,
if a coroutine holding the CPU
panics, hangs, or just takes a
long time to execute, higher-priority tasks could
be left waiting for this coroutine to finish
executing. Right? This sort of fairness problem can arise
because coroutines are not preemptive.
On the other hand, in a user-level multitasking
implementation, your coroutines are generally going
to be part of the same program, collaborating toward a common goal.
So you can still have fairness problems, but since they're restricted
to one collaborative environment, it's much easier to
identify them and reproduce them, which makes coroutines less difficult to implement there.
So you start to see why full-blown coroutines in your
kernel space, for example, might not be a good idea, and why
coroutines aren't directly or
natively provided in Go's concurrency
libraries. There was an interesting talk by Rob Pike, actually, Lexical
Scanning in Go. I won't go into the details, but I've linked to the talk.
What they did for their implementation was use goroutines
connected by a channel. Now, full goroutines
proved to be a bit too much, because goroutines provide
parallelism, right? And the parallelism that
comes with goroutines caused a lot of races. Proper coroutines,
as we'll see, would have avoided the races and would have been way more
efficient than goroutines, because with their concurrency constructs, only one
coroutine can be running at a time. Now,
coroutines, threads, and generators all sort of start sounding the
same, so let's get the definitions out of the way a little bit.
Coroutines provide concurrency without parallelism. That's the big idea
here: when one coroutine is running, the others aren't.
Threads, on the other hand, are definitely more powerful than coroutines,
but with more cost. They require more memory, more CPU allocation,
and parallelism comes with costs of its own: you have
the cost of scheduling these tasks,
you have much more expensive context switches, and the fact that
you need to add preemption for threads.
Goroutines are like threads, just cheaper, so they use less memory,
and all the scheduling is taken care of by Go's own user-space
scheduler. So a goroutine switch is closer to a few hundred nanoseconds,
which is way faster than a thread switch. Generators are like
coroutines, but they're stackless, not stackful.
So you have the problem that we discussed: they can only transfer
control between the topmost stack frames.
So, with all this in mind, let's start building an API
for coroutines in Go, using only existing Go
definitions available today. And this
next part of the talk is heavily borrowed from Russ Cox's research proposal
for implementing coroutines,
which I've linked at the end of the slides.
Do check that out, because I have skipped a lot of the information here.
So, it's very neat that we can do this using existing Go definitions:
your goroutines, channels, the fact that Go supports function values,
and the fact that unbuffered channels can act as blocking mechanisms
between goroutines, which makes the coroutines safe. A coroutine can
do a bunch of things: it can suspend itself and run another
function or coroutine; it can terminate itself and
yield control; or it can suspend itself and yield control, without
needing to terminate. So let's
start with a simple implementation of a package, coro.
So in this case, you have a caller and a callee, and right now we
are looking at the suspend-and-run scenario. So you have
a caller, and these can be two coroutines that
are, say, connected by a channel. Your coroutine, which is
your callee, could be waiting to receive from the first
channel; let's call it cin. So if your caller, your subroutine,
wants to resume your coroutine, your callee, all it needs to do is write
into cin. As soon as it writes into cin, your callee starts running.
And now you need a way for your caller to
stop executing as soon as it resumes the callee. So what it can
do is wait on another channel; it can block itself on,
say, cout. So as soon as
it resumes the coroutine, you know, writes into cin, which your coroutine
was blocked on, your caller blocks on cout.
Then, as soon as your callee, your coroutine, finishes
executing, it can write into cout
and block itself on cin again. So now your coroutine is blocked,
and you have successfully resumed your
caller, your subroutine. So we're already starting to see
a control-transfer mechanism coming into
play. So this function New in our package coro:
all it's doing is instantiating
two unbuffered channels, cin and cout, and defining your resume function.
Let's look at the go statement first. The function this goroutine is running is
essentially your coroutine, and it's
blocked on cin. When you want to resume this coroutine,
you call the resume function, and all it does is write into cin and wait
on cout. As soon as your goroutine
finishes executing your function f, it writes the output into
cout, which unblocks the resume function,
and the goroutine can block on cin again.
You can pause and see how this works.
So the new goroutine blocks on cin. So far,
you have no parallelism, because only one control flow,
one routine, has control at a time.
Now let's add the definition of yield that
we've been talking about, which basically returns a value
to the coroutine that resumed it. Right? So right
now we're looking at suspend-and-yield. Your yield is just the
inversion of your resume: in your yield function,
you write into cout, and then you wait to receive
from cin. So your coroutine writes into cout,
which your subroutine is blocked on.
And as soon as it does that, your coroutine blocks
on cin, and your caller starts running.
Once the caller wants to resume the coroutine
again, it can write into cin again. So that's
great. Now, if you notice, yield is just the inverse of
your resume function: it writes into cout and blocks itself
on cin. And you can actually pass this yield function
into your goroutine, which is
great. Go allows you to pass function values
into other functions, as callback functions.
So note that you have only added
another send-receive pair, and there's still no parallelism happening.
That's the whole point of coroutines.
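Putting the resume and yield plumbing together, here is a hedged sketch of what this coro.New might look like. The generic signature is my own choice, modeled loosely on Russ Cox's proposal, and termination and panic handling are still missing (they come next):

```go
package main

import "fmt"

// New starts f as a coroutine. resume(in) sends in to the coroutine
// and blocks on cout until the coroutine yields (or returns) a value.
// yield(out) is the inverse: it sends out to the caller and blocks on
// cin until resumed. Both channels are unbuffered, so exactly one of
// the two control flows is ever running: concurrency, no parallelism.
func New[In, Out any](f func(in In, yield func(Out) In) Out) (resume func(In) Out) {
	cin := make(chan In)
	cout := make(chan Out)
	resume = func(in In) Out {
		cin <- in     // unblock the coroutine...
		return <-cout // ...and wait for its next yield
	}
	yield := func(out Out) In {
		cout <- out  // hand a value back to the caller...
		return <-cin // ...and wait to be resumed
	}
	go func() {
		// Block on cin until the first resume, run f, and send
		// its final return value as the last output.
		cout <- f(<-cin, yield)
	}()
	return resume
}

func main() {
	// A tiny generator: yields 1 and 2, then returns 3.
	resume := New(func(_ int, yield func(int) int) int {
		yield(1)
		yield(2)
		return 3
	})
	fmt.Println(resume(0), resume(0), resume(0)) // 1 2 3
}
```

Note that one more resume after f returns would deadlock; that's exactly the termination problem the talk tackles next.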
So let's pause for a bit: are these actually coroutines?
They're still full goroutines, and they can do everything that an ordinary goroutine
can. Your coro.New just creates
goroutines with access to resume and yield operations.
But unlike the go statement, we are adding new concurrency to the program
without parallelism.
So if you have go statements in your main function,
say you have ten go statements in your main function,
you now have eleven goroutines running, and they can all be running
at once. So your parallelism has actually gone up to eleven.
But if you have one main goroutine that makes ten
coro.New calls, then there are eleven control flows,
but the parallelism of the program is still what it was before: just one.
Right? So go creates new concurrent, parallel control flows,
whereas coro.New, which is the implementation that we are developing,
creates new concurrent, non-parallel control flows.
So let's get back to implementing our coro API. It was
pretty crude, and we can start improving it a little bit. So first:
what if your callee, your coroutine, wants to terminate instead of yielding
back control? It wants to just stop executing.
So say your coroutine has stopped executing. How does your caller know that it
stopped? Right, it can try to resume this
coroutine, which does not exist anymore, and block itself on
cout, but there is nothing to write into cout, because your coroutine does not
exist anymore. So a simple
solution to this could be to just have a variable
called running, and as soon as a coroutine stops,
set running to false. All your caller needs to do, before it
resumes its coroutine, is just ask: is this coroutine still
running? If not, don't try
to resume it. You might think that having
a shared variable, when you could potentially have a bunch of
coroutines running, is a recipe for data races.
But if you recall, you can have only one coroutine running at a time,
so this sequentially consistent execution
ensures that there are no data races on this running
variable. So, a very simple implementation over here: you have the running variable,
and as soon as your goroutine, which is your coroutine
in this case, finishes executing the function f,
running is set to false. And the next time you want
to resume this coroutine, you check:
is running set to false? If it is set to false, then just return.
If it is not, write into cin and resume the coroutine,
and so on and so forth. Another thing:
sure, if the coroutine terminates properly, then you
have your running properly set to false.
But what if your coroutine actually panics and never gets
a chance to set running to false? How does your caller then know
that the coroutine does not exist anymore, so it doesn't
potentially deadlock itself? Your caller
could be blocked waiting on cout,
but it gets nothing: the callee panicked,
there are no writes to cout, and your caller is blocked forever. What you can do
is simply, whenever your coroutine
panics, propagate this panic up to your caller: just write
into cout, which is what the caller is waiting on,
a message saying, you know, I have panicked and don't
exist anymore, so don't try to resume this particular coroutine.
And this can be implemented using a defer func.
How defer works, if you don't know, is that
it runs at the end of your function. So after
everything else has stopped running, your defer runs, and all it's doing here is
checking whether running is still set to true. Now,
if your control has reached the deferred function and running
was not set to false, that basically means that something went wrong
over here, in this gray-dotted portion, which means that
your callee possibly panicked. So if
it reaches the deferred function and running is still set to true, you know that something
bad happened: set running to false, and into cout,
which your resume is waiting to read from, just write
the panic message. And now your resume
function can handle this panic however it wants to. It can return,
in this case, and you can do whatever you want with this panic.
So, you know, you've solved that deadlock situation.
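Here is a hedged sketch of how the running flag and the panic-propagating defer might fit into the New function from before. The shapes here are illustrative: instead of writing a literal panic message into cout, this version reports ok=false from resume, and it uses recover to actually stop the panicking goroutine (Russ Cox's proposal does this more carefully and re-panics in the caller with the original value):

```go
package main

import "fmt"

// New with termination and panic propagation. resume returns ok=false
// once the coroutine has finished or panicked, so the caller never
// blocks forever on a coroutine that no longer exists.
func New[In, Out any](f func(in In, yield func(Out) In) Out) (resume func(In) (Out, bool)) {
	cin := make(chan In)
	cout := make(chan Out)
	running := true // safe: only one control flow runs at a time
	resume = func(in In) (out Out, ok bool) {
		if !running {
			return // coroutine already finished or panicked
		}
		cin <- in
		return <-cout, running
	}
	yield := func(out Out) In {
		cout <- out
		return <-cin
	}
	go func() {
		defer func() {
			// If f returned normally, running is already false.
			// If we get here with running still true, f panicked:
			// mark it dead and unblock the waiting resume.
			if running {
				recover() // a fuller version would re-deliver this to the caller
				running = false
				var zero Out
				cout <- zero
			}
		}()
		out := f(<-cin, yield)
		running = false
		cout <- out
	}()
	return resume
}

func main() {
	resume := New(func(_ int, yield func(int) int) int {
		yield(1)
		panic("boom")
	})
	v, ok := resume(0)
	fmt.Println(v, ok) // 1 true
	v, ok = resume(0)  // the panic surfaces as ok=false
	fmt.Println(v, ok) // 0 false
	_, ok = resume(0)  // safe: no deadlock on a dead coroutine
	fmt.Println(ok)    // false
}
```

Because the unbuffered channel send happens after running is written, the resume side always reads a consistent value: no locks or atomics needed, exactly the sequential-consistency point made above.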
Finally, we need some way to tell the coroutine,
your callee, that it's no longer needed. And that could happen because
maybe your caller is panicking, or because the caller is simply returning.
Right? And it's the same thing, but in reverse:
your subroutine just needs to tell your callee,
on the channel that the callee is waiting on, that the
parent of this particular coroutine does not exist anymore, and it can
stop working as well. So it's just the reverse of what we implemented.
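The reverse direction might look like the sketch below: a hedged, simplified version in which cancel delivers a sentinel through cin, and yield reacts by panicking with errCancelled so the coroutine unwinds itself and acknowledges on cout. The msg wrapper and the names are my own; Russ Cox's actual proposal threads cancellation through the same machinery as the panic case:

```go
package main

import (
	"errors"
	"fmt"
)

var errCancelled = errors.New("coroutine cancelled")

// msg carries either a normal resume value or a cancellation signal.
type msg[T any] struct {
	val    T
	cancel bool
}

// New returns resume plus a cancel function. cancel injects a sentinel
// into cin; yield sees it and panics with errCancelled, unwinding the
// coroutine, whose defer acknowledges on cout so cancel can return.
func New[In, Out any](f func(in In, yield func(Out) In) Out) (resume func(In) Out, cancel func()) {
	cin := make(chan msg[In])
	cout := make(chan Out)
	resume = func(in In) Out {
		cin <- msg[In]{val: in}
		return <-cout
	}
	cancel = func() {
		cin <- msg[In]{cancel: true}
		<-cout // wait for the coroutine to acknowledge
	}
	yield := func(out Out) In {
		cout <- out
		m := <-cin
		if m.cancel {
			panic(errCancelled) // unwind the coroutine's stack
		}
		return m.val
	}
	go func() {
		defer func() {
			if e := recover(); e == errCancelled {
				var zero Out
				cout <- zero // acknowledge the cancellation
			} else if e != nil {
				panic(e) // a real panic: not ours to swallow
			}
		}()
		m := <-cin
		cout <- f(m.val, yield)
	}()
	return resume, cancel
}

func main() {
	resume, cancel := New(func(_ int, yield func(int) int) int {
		defer fmt.Println("coroutine cleaned up") // defers still run
		yield(1)
		yield(2)
		return 3
	})
	fmt.Println(resume(0)) // 1
	cancel()               // stop the coroutine mid-execution
}
```

A nice property of unwinding via panic is that the coroutine's own deferred cleanup still runs, as the main function above demonstrates.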
So what we can do is write into cin that
the caller is cancelling, propagating
this cancel call to your callee, which, once it
receives the cancel, can write into cout, saying, you know,
I acknowledge that the caller has cancelled. Then
the caller knows it has successfully cancelled its coroutine
before, you know, stopping itself. So you have your cancel
function over here, and all it's doing
is writing a special kind of panic value, which in
this case is this errCancelled, into cin, which your
yield function can then handle. So in
your yield function, if it gets a panic value on cin,
it knows that the caller, the invoker
of this particular coroutine, does not exist anymore, and it can go ahead
and cancel itself. You can pause and look at a bunch of other things
happening over here. So, finally:
what are the runtime changes that can be made?
That's the interesting bit. So far we have
defined coroutines using pure Go. Russ builds on this
with an optimized runtime implementation. What he
did first was collect performance data, and he saw that this
coro.New implementation that we've been describing took
approximately 190 nanoseconds per switch, which lies
in the realm of what a regular goroutine switch takes: a
few hundred nanoseconds. Then he changes one thing: in
the compiler, he marks the resume
and yield send-receive pairs as
a single operation. So instead of two separate operations
that need to be scheduled by the Go scheduler, it's one single atomic operation.
So you are completely bypassing the scheduler, and you can just directly jump
between coroutines instead of waiting for
your resume to be scheduled and your yield to be scheduled,
bypassing the scheduler entirely. This implementation required
just 118 nanoseconds per switch, which is 38% faster than
the channel-based implementation. Then he talks about
adding a direct coroutine switch to the runtime, so you're
avoiding channels entirely. Channels are pretty heavyweight
in the sense that they're general-purpose: they're intended
to do a bunch of things, not just act as a coroutine
switch mechanism. So instead, you can have a direct coroutine
switch in the runtime, and that implementation
took just 20 nanoseconds per switch, which is ten times
faster than the original channel implementation. So, you know, if this ends
up becoming native to Go, it would
be really interesting to see how developers use this
super-fast implementation in
various creative ways, and to see how it changes the
concurrency landscape that Go has sort of spearheaded.
We covered quite a bit, and thanks for making it here. We were
able to show that having a full coroutine facility in Go makes it even
more powerful for implementing very robust, generalized concurrency
patterns. We covered the basics of coroutines,
their history, why they're not as prolific, the problems with them,
and why they're not present in a lot of mainstream languages. We showed
why we care about full coroutines, what full
coroutines are and their different classifications, why we'd
want coroutines in Go, how they differ from goroutines,
and how they would differ from existing implementations in other languages like
Python and Kotlin. We then implemented a coroutine API
using existing Go definitions, and we built on it to make it a little more
robust. We showed what runtime changes can be made
to make this implementation even more efficient. Finally, these are
the references. I highly recommend going through all of these
articles and videos; they're super interesting and quick
reads. And definitely go through Coroutines for Go
by Russ Cox. That's where a lot of this talk was taken from.
Finally, the artwork is by Renée French,
tenntenn, and quasilyte. Check out their artwork, some really interesting
things. And yeah, thank you, thanks a lot.