Abstract
Confidential computing (CC) is a new and much-discussed security paradigm. It enables the always encrypted and verifiable processing of data on potentially untrusted computers, e.g., your cloud provider’s systems or maybe even your local cluster. CC enables many exciting new applications like super-secure bitcoin wallets or end-to-end encrypted and verifiable AI pipelines.
In this talk, we'll give a brief intro to CC and the corresponding hardware technologies. We'll talk about how the technology is particularly relevant for the cloud-native space and why Go and CC make for a great fit. We'll sketch the status quo of Go tooling for CC and give an intro to our open-source EGo framework. Finally, we'll give some hands-on examples of Go CC apps and discuss use cases.
We argue that EGo is the simplest way to leverage CC, in particular for Go programmers :-) We'd love to get feedback from the Go community on our approach.
Structure of the talk (30min)
- Intro (3min)
- What is Confidential Computing? (5min)
- Why Go and Confidential Computing are a perfect match (3min)
- The architecture of EGo (5min)
- How to build your first confidential microservice (7min)
- Use cases (5 min)
- Conclusion (2min)
About Confidential Computing
Confidential computing is an emerging security paradigm. With it, data and code are protected inside secure enclaves at runtime. Enclaves protect against potentially compromised OSes, hypervisors, or even malicious cloud admins with hardware access.
Enclaves are created and enforced by the CPU. An enclave’s contents remain always encrypted in memory at runtime. Yes, correct, data and code remain always encrypted! This is one of the key features that make confidential computing so exciting for many, e.g., for Forbes.
In addition, enclaves have access to unique cryptographic keys, which can be used to store secrets on untrusted storage (“sealing”). It is also possible to verify the integrity of an enclave and to set up secure channels to it (“remote attestation”).
In one sentence: secure enclaves enable the always encrypted and verifiable execution of workloads in the cloud and elsewhere.
The most prominent enclave implementation to date is Intel SGX. SGX is available on many recent Intel-based systems. Several cloud vendors already have corresponding offerings.
Apart from unprecedented security, confidential computing enables new types of data-driven applications. The verification aspect is key here: users can verify precisely how data is processed, who provides the inputs, and who gets access to the results. For instance, this enables zero-trust data sharing, super-secure crypto wallets, and many other exciting things.
Until now, however, developing confidential apps has required arcane knowledge, significant code changes, and cumbersome build steps. With EGo, this changes!
About EGo
EGo is used via a simple CLI. In a nutshell, EGo consists of a modified Go compiler, an in-enclave Go runtime and a Go library that makes CC-specific functionality available to in-enclave code and external consumers. Most notably, the library facilitates the process of remote attestation.
With EGo, you can build and debug your Go code as you are used to. Apps built with EGo run on all systems that normal Go apps run on, even if those systems are not SGX-enabled. Thus, EGo can be nicely integrated with existing development and build processes. The following commands build, sign, and run an app:
ego-go build myapp.go
ego sign myapp
ego run myapp
If you tell ego sign that you want a debuggable enclave, you can debug your app inside the enclave using ego-gdb and GDB-compatible IDEs like Visual Studio Code.
In contrast to enclave SDKs for programming languages like C++ (e.g., Open Enclave) or Rust, EGo does not require you to split your app into enclave and non-enclave code. It simply keeps all of your data and code inside the enclave. We believe this is the most intuitive and practical approach.
Most Go apps run out of the box on EGo. This includes the popular key management app HashiCorp Vault, which is a good example of an app that benefits greatly from CC.
Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hey everyone, this is Moritz from Edgeless Systems. I hope you're enjoying the conference so far. This talk is going to be about how to build cloud-native confidential computing apps with Go. We're Edgeless Systems, a startup from the city of Bochum in Germany. It's a city probably less famous for returning to the top division in soccer this year, and more famous for its rapidly growing, vibrant computer security startup scene. And we're part of it. We're building confidential computing applications with a focus on open-source, cloud-native tooling. We're roughly ten engineers, and we were founded two years back, in 2019.
So before I go into the details of building confidential computing applications, I'd like to take one step back and give you a brief introduction: what is confidential computing, what problem does it solve, and, more importantly, why is this exciting and why am I giving this talk today?
You might have seen this graphic or sticker from the Free Software Foundation at some tech conference or on some t-shirt: there is no cloud, it's just other people's computers. Of course, this is a very provocative statement, but for a couple of years now everybody has been moving their data and their workloads to the cloud. It's not just private people like you and me storing their data in some cloud service. It's also startups like us that are heavily using the cloud. And it's great, because we don't need to build and buy our own infrastructure; we immediately have all the computing power we need at hand, very flexible and scalable. The same goes for larger enterprises. They don't want to maintain their own infrastructure anymore, it's less efficient, so they start moving their workloads to the cloud. Well, we have to remember that the cloud is not just some magic place where milk and honey flow. It's real hardware in a real data center somewhere, maintained by real people. There's a software stack running on this hardware, with real hypervisors and operating systems, also maintained by people. And then there's other customers' code running alongside your code, isolated from your code only through regular isolation primitives. So you can't be certain that they won't interfere with your computation; they might be malicious and try to steal your data through side channels or other means.
On the one hand, for private people, this is the fear of breaches: data breaches, losing your data, your photos, the things you care about. For enterprises it's bad enough, but it's even worse, because they might miss out on business value: because of regulations or other reasons, they can't share their data or move it to the cloud, potentially share it with other enterprises and then do collaborative computations on that data, profit from each other, and create new business value and new insights from merging their respective data. And this is a real problem today, right? You could do so much more with the data you already have if you could share it without giving it away. Because you don't want to blindly trust these other parties, these other companies, you would need some kind of primitive that lets you agree beforehand on how the data may be used and then be sure that this is the way the data is actually handled afterwards.
Hardware providers saw this problem a couple of years back and decided: we need some hardware basis that we can build on to create a secure computation space in the upper layers that people can use to process their data. This is typically called an enclave: a secure space that you can move your data and your code into and compute on it there. And last year, Forbes acknowledged that this is a very fundamental change in how we see the cloud and added confidential computing as one of their digital transformation trends for 2021. That was of course very reassuring.
But it's not just a promise that this is going to be a digital transformation trend; it's already happening. You see the big cloud providers having offerings for confidential computing right now, today, and they are growing rapidly. It's really starting to gain traction. People understand this technology more, they understand what confidential computing gets them, and they are asking for these offerings. So of course the CSPs delivered: you can have confidential computing VMs and clusters in the cloud today. And it's not just an additional product that gives you more security and additional features. Some people even go as far as saying this is a real breakthrough technology that's transforming the future of the cloud, that it will be one of the building blocks for the future of the cloud, and that in approximately five years the cloud will be fully confidential, meaning all the computing power there, all the workloads, will run inside confidential computing. So this is why we're excited about this, and more and more people are jumping on the train and using this technology.
Now for a very high-level technical overview of what I mean by secure enclaves. Before, we had this stack where every layer could access, or potentially access, your code and your data, including people with access to the operating system, for example. Now we introduce a new part in the hardware that bootstraps a secure enclave. You move your code and data inside, and then nothing else in the stack has access to it. We typically define such secure enclaves by four properties. First, they are isolated from everything else; they are on their own there. Second, their memory is protected at runtime, so even if you had direct access to the hardware memory, you wouldn't be able to obtain the plaintext. Third, they have some mechanism to seal their state, meaning to persist their state on some storage, encrypted of course, and only accessible to them later on. And fourth, they have some means of proving their identity to the outside, proving the integrity of the code and data running there to a third party, to the user. That's typically called attestation, or remote attestation. Those four properties are what we expect from secure enclaves, from confidential computing.
I'd like to give you some real, hands-on, maybe slightly out-of-the-box examples of where confidential computing is used. You can let your imagination go there, of course. First of all, a thing that's very interesting for a lot of people: wallets, cryptocurrency wallets. Currently a lot of special hardware is used, but with this technology you could actually use commodity cloud CPUs and cloud machines to build secure wallets that are even privacy-preserving, and there are companies doing this already. The next thing is e-health. In Germany we have electronic health records and the electronic prescription that is going to be rolled out, and it's built with confidential computing. So you have all the patients' data, their diagnoses, their medical history, stored, processed, and accessible in the cloud, protected via confidential computing. Then there's a very popular messenger called Signal. They do what most messengers do: contact discovery. They go through the contacts on your phone and match them with other registered users of the messenger. And of course this is exactly what people often criticize about these messengers: they basically push your whole contact list to the cloud. Signal built a confidential computing application that handles this inside an enclave, so nobody, not Signal, not anybody else, ever gets access to your contacts. They stay with you, and you still get your contact discovery service. Very cool.
Okay, I hope I got you hooked on confidential computing and what you can do with it. Now let's take a look at how we can build confidential computing applications with Go, cloud native, because that's what we want to do. First question: why Go? Where's the match here for us? For most people, confidential computing is a cloud technology. Cloud in most cases means cloud-native software, means microservices, means scaling. And I think you could say that's most likely going to be some Go service running there; a lot of the software in that space is written in Go. So it's quite a good match, without going into the details of the language here. But when we started, there was no Go support for secure enclaves. So we decided: if we think about how we want to build confidential computing applications, we want to build them with Go, and we want to build them cloud native. We need that support, so let's get this nice Go experience into the world of confidential computing.
So, essentially, we built EGo. It's a modified Go compiler that compiles your Go code so that it can run inside an enclave, plus some SGX-specific tooling. You might be familiar with the Go way of building software: go build, go run. You want the same experience with EGo. As you see at the bottom, you can install it from the Snap Store, we have a deb package, or you can build it from source, and then you get more or less the same experience: ego-go build, then an additional step we will see later on to sign your application, and ego run. So it feels very Go-like. And then, of course, we have some libraries that give you the CC-specific features inside and outside of the enclave, like the sealing and remote attestation I showed you before; we will see them in the hands-on part shortly. And it's not only us who like to build software with EGo. We heard that the folks at Microsoft like it as well, and they included it in their documentation for building confidential applications for the Azure cloud. And, of course, EGo works with your favorite tooling for writing your software. We mostly use VS Code, but you can use whatever you like.
So this is the high-level overview of EGo. It's very short because there's not much you have to adapt: if you write Go code, there's not much you need to change. You install it and then you're good to go. Now I'd like to do a little demo to give you a more practical feel for what EGo is like. To that end, let me introduce a little demo application. We want to build a cloud service that runs with EGo in the cloud, and it's basically a key-value store. You can pass in a key and a value, a secret, and the server stores it. Whenever you pass in the same key again, you retrieve the same value, your secret, again. So: passing in a secret and getting it back. Let me switch to the VS Code screen.
Okay, this is our demo application. It's a simple server-client application. The server is an HTTP server handling one endpoint called secret. It expects three parameters, and depending on the first one, the command, it either sets or gets a value. The values are stored inside a map, a simple key-value store. When you retrieve a value, it checks for the key in the map, and if it exists it returns it to you. The map is essentially the state of this application. Whenever a new value is set, the state is saved, and when the server initially boots up, it loads the state. This is implemented using files on the local file system: saving means writing the state to a file, and loading means reading it from that file. When the server boots up, it checks if there is already a state and loads it. So it has a very simple, primitive way of persisting state.
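For reference, here is a minimal sketch of what such a server could look like in plain Go. The endpoint, parameter names, and file name are illustrative and not the exact code from the demo:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
	"sync"
)

var (
	mu    sync.Mutex
	store = map[string]string{} // the in-memory state
)

const stateFile = "secret-store" // illustrative file name

func main() {
	loadState()
	http.HandleFunc("/secret", handleSecret)
	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}

// handleSecret expects ?cmd=set&key=...&value=... or ?cmd=get&key=...
func handleSecret(w http.ResponseWriter, r *http.Request) {
	cmd := r.URL.Query().Get("cmd")
	key := r.URL.Query().Get("key")
	value := r.URL.Query().Get("value")

	mu.Lock()
	defer mu.Unlock()
	switch cmd {
	case "set":
		store[key] = value
		saveState()
		fmt.Fprintln(w, "ok")
	case "get":
		if v, ok := store[key]; ok {
			fmt.Fprintln(w, v)
		} else {
			http.Error(w, "not found", http.StatusNotFound)
		}
	default:
		http.Error(w, "unknown command", http.StatusBadRequest)
	}
}

// saveState persists the map to a file; errors are ignored for brevity.
func saveState() {
	data, _ := json.Marshal(store)
	os.WriteFile(stateFile, data, 0o600)
}

// loadState restores the map from the file if it exists.
func loadState() {
	if data, err := os.ReadFile(stateFile); err == nil {
		json.Unmarshal(data, &store)
	}
}
```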
In typical Go, I could now build this application with go build server.go and run the binary, or I can do go run server.go, and then it's listening and I can connect with my favorite tool, curl or whatever; it's just an HTTP endpoint. For the simplicity of this demo, I wrote a client that expects a command, a key, and a value, builds up the HTTP request, connects to the server, and executes the HTTP GET for the set and get commands. Very simple, nothing magic going on here. Again, I could build my client, or I can run it directly. It expects a command, a key, and a value. So let's say set test test; it says OK. Now let's try to get this value, and there it is: the secret "test" was retrieved.
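A matching client might look like this minimal sketch (again with illustrative names and an assumed localhost address):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
)

func main() {
	if len(os.Args) < 3 {
		fmt.Println("usage: client <set|get> <key> [value]")
		os.Exit(1)
	}
	cmd, key := os.Args[1], os.Args[2]
	value := ""
	if len(os.Args) > 3 {
		value = os.Args[3]
	}

	// Build the request URL for the server's /secret endpoint.
	u := url.URL{
		Scheme:   "http",
		Host:     "localhost:8080",
		Path:     "/secret",
		RawQuery: url.Values{"cmd": {cmd}, "key": {key}, "value": {value}}.Encode(),
	}

	resp, err := http.Get(u.String())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body))
}
```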
Okay, now let's say we want to put the server inside a secure enclave so I can deploy it to the cloud and nobody can steal the secrets at runtime; it's protected. All I have to do for this with EGo, so let's stop the server, is: instead of go build, I say ego-go build server.go, same as before. And, as I mentioned, there's one additional step. I've now built this binary with our modified Go compiler so it can run inside an enclave, and now I can create the enclave by saying ego sign. All enclaves have to be signed, so I need this one additional step.
When we do this, we get three new files: a public/private key pair that was used for signing, and a configuration file. This configuration contains some SGX-specific parameters; it names the application it is for and the signing key, and of course I can modify those values. If I already have a signing key, I can use it here for signing.
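To give you an idea of what this file looks like, here is a rough sketch of an enclave.json. The field values here are illustrative rather than the exact defaults, so check the EGo configuration reference for the authoritative schema:

```json
{
  "exe": "server",
  "key": "private.pem",
  "debug": true,
  "heapSize": 512,
  "productID": 1,
  "securityVersion": 1
}
```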
With the default configuration, we can now just do ego run server. Instead of booting up my Go binary directly, it now boots up an enclave and starts my Go binary inside it. Everything else just works as before. In fact, I should still... oh no, this doesn't work; I will get back to that later. But I can set this value again and, of course, retrieve it.
So, very simple, this configuration. We will see it again in the next iteration of this demo, but you can also go to our documentation on ego.dev. There you find the commands I just showed you, and also a reference for this configuration file that explains all the parameters, what they mean, and what you need to set for your own application. This is how simple it is to create an enclave from a plain Go application. The same thing works if you have a more complex application; a typical example we show is the HashiCorp Vault secret store, which also runs inside an EGo enclave.
The next thing I want to show you is remote attestation. We've just seen how simple it is to create an enclave. Now it's running in the cloud, and it's protected. But how does a user who wants to trust this application and send over their secrets know that this is actually the service we just wrote, and not something a malicious party deployed that's just waiting for user secrets and then runs off with them? How does that work? It works with remote attestation. When a user makes a request, the service creates something like an identity report; in the world of confidential computing, in the world of SGX, this is typically called a quote. The service sends this quote to the user, the user first verifies the identity and the integrity, and when that's successful, sends over the secrets. Let's see how we can get this remote attestation in the simplest way with EGo. So let's open VS Code again and go into our server.
What we want to do now is, instead of creating an HTTP server, create an HTTPS server so the connection is protected, and with the TLS handshake we also want to verify the identity of the service. Going into our documentation, it has a link to the Go library that EGo comes with. We have different packages depending on which part of the enclave process you're in: one for the client side, one for the enclave side, and some specifics. The enclave package has an explanation of how to use remote attestation, and what we want here is the very simplest function we can get, called CreateAttestationServerTLSConfig. It basically creates a Go TLS configuration that embeds the identity proof I just showed you in the TLS certificate and returns that TLS config. So this function is what we want to use.
Going into VS Code, we say tlsConfig and an error that we're probably going to ignore right now, and it comes from the enclave package, so this function here; let's ignore the error for now. Now we can use this TLS config when we create the server: we set TLSConfig to it, and instead of ListenAndServe we say ListenAndServeTLS. Because we already configured a TLS config here, we don't need to set any more values. So now, instead of just doing a normal TLS handshake, this TLS server also includes the proof of identity. And when I run this again... sorry, I have to build it first, of course, and sign it, and then I can run it.
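The change to the server boils down to something like the following sketch. The /secret handler stays exactly as in the plain version (a stub is used here to keep the sketch self-contained), and the exact signature of CreateAttestationServerTLSConfig is best checked against the EGo package docs:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/edgelesssys/ego/enclave"
)

func main() {
	// Stub handler; in the demo this is the same key-value handler as before.
	http.HandleFunc("/secret", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})

	// Create a TLS config whose self-signed certificate embeds the enclave's
	// identity proof (the SGX quote) as a certificate extension.
	tlsConfig, err := enclave.CreateAttestationServerTLSConfig()
	if err != nil {
		log.Fatal(err)
	}

	server := http.Server{Addr: ":8080", TLSConfig: tlsConfig}
	// Certificate and key come from the TLS config, so the file arguments stay empty.
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```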
On the client side, let me show you what this looks like. I'm using OpenSSL here to show you what the TLS certificate returned from the server looks like. This is the certificate chain that OpenSSL parsed, giving you some information about the certificate. And then there's an extension, and this extension looks like a binary blob of garbage here, but it contains the identity proof, the so-called quote, of the service. This can be checked on the client side to verify the identity of the server.
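If you'd rather poke at the certificate from Go than from OpenSSL, a few standard library lines are enough to list the extensions and spot the extra one. Host and port are illustrative, and certificate verification is intentionally skipped because we only want to inspect the self-signed certificate:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Connect without verifying the self-signed certificate; we only want to look at it.
	conn, err := tls.Dial("tcp", "localhost:8080", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	for _, ext := range cert.Extensions {
		// One of these extensions carries the SGX quote as an opaque byte blob.
		fmt.Printf("extension %v: %d bytes\n", ext.Id, len(ext.Value))
	}
}
```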
Of course, EGo also contains a library for the client. On the client side, instead of connecting to an HTTP server, we now want to connect to an HTTPS server. So we set the scheme to https, and where we currently do the connection with http.Get, we create a client and say client.Get instead. Now we need some configuration again, so let's go to our documentation; instead of the enclave package, we now go to the client package, and it has CreateAttestationClientTLSConfig. This is the other side of this API: it creates a TLS config for the client, and it expects a verification function. This verification function is there to verify the identity of the service. The example is right here, so let's copy this part and go back into VS Code.
I'll just paste this in here. It was a lot of code, but it's very simple. We say we want to create the client TLS config; we need the eclient package for this. Then we don't need the plain http.Get anymore, we can just say client.Get. So what does this verifyReport function do? It obtains the attestation report that is passed during the TLS handshake, and now it should verify it. Here you say which values you want to verify from the server, how you want to verify the enclave. Typically, you would say: the security version must not be smaller than two, the product ID should be 1234, and then we want a certain signer ID. The signer ID is set here, and it says you can obtain the signer ID of an enclave using ego signerid.
So let's go into our enclave.json, the configuration of the server enclave. We said the product ID should be 1234 and the security version should be at least two, so let's set it to three. Then we need the signer ID of this enclave. For a second, let's stop the server and run ego signerid server, and here we go. Now we can add this here: the signer ID is this hex string, which we decode, and that probably returns an error as well. Then we can use the signer ID later on. And of course we need to handle the error, or ignore it; ignoring it for now.
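Put together, the client side might look roughly like the following sketch. It follows the pattern of EGo's attested-TLS example; the thresholds mirror the demo values, the signer ID is taken from the command line purely for illustration, and exact function signatures may differ between EGo versions:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"encoding/hex"
	"errors"
	"fmt"
	"io"
	"net/http"
	"os"

	"github.com/edgelesssys/ego/attestation"
	"github.com/edgelesssys/ego/eclient"
)

// signer is the expected SignerID (MRSIGNER), obtained via `ego signerid server`.
var signer []byte

func main() {
	if len(os.Args) < 4 {
		fmt.Println("usage: client <signer-id-hex> <set|get> <key> [value]")
		os.Exit(1)
	}
	var err error
	signer, err = hex.DecodeString(os.Args[1])
	if err != nil {
		panic(err)
	}
	value := ""
	if len(os.Args) > 4 {
		value = os.Args[4]
	}

	// The TLS config calls verifyReport during the handshake to check the quote
	// embedded in the server's certificate.
	tlsConfig := eclient.CreateAttestationClientTLSConfig(verifyReport)
	client := http.Client{Transport: &http.Transport{TLSClientConfig: tlsConfig}}

	url := fmt.Sprintf("https://localhost:8080/secret?cmd=%s&key=%s&value=%s",
		os.Args[2], os.Args[3], value)
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body))
}

// verifyReport checks that the attested enclave is the one we expect.
func verifyReport(report attestation.Report) error {
	if report.SecurityVersion < 2 {
		return errors.New("invalid security version")
	}
	if binary.LittleEndian.Uint16(report.ProductID) != 1234 {
		return errors.New("invalid product")
	}
	if !bytes.Equal(report.SignerID, signer) {
		return errors.New("invalid signer")
	}
	return nil
}
```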
And that should be it. We're basically saying this enclave has to be signed by this signer and has to carry those values, and that's it. So let's boot up the server again. We need to sign it because we changed those values in the configuration, and then we can run it.
And here we can do ego run client... no, sorry, it's just go run client. Let's say we want to set a value: that worked, so it was verified successfully. And let's get this value: there it is. So the client verified the identity of the server, established a secure TLS connection, and everything was end-to-end secure, end-to-end encrypted. Very cool, very easy. You find the details, as I looked them up, in our docs and in our Go package documentation.
Okay, so remote attestation was very straightforward. One thing I admittedly didn't go into detail on yet is writing our map to disk. I had this in the original example and didn't change anything in our enclave version. But we have to be a little more careful here, because essentially we're storing the state in plaintext, and if we do so, we of course leak our state, leak our secrets. So we need some way of saving these secrets to persistent storage, and in confidential computing terms this is typically called sealing. So: we do the identity proof first, the client sends over the secrets, and we now want to encrypt them and then store them to disk. Encrypting them is more or less straightforward; the question is how to decrypt them, to unseal them, later on. We need to make sure this is only possible from inside the enclave, and only from the enclave that initially sealed this data, or an equivalent enclave, depending on the use case. Basically, we need to decide which identity we bind this sealed data to.
EGo also handles this for you, so let's go back into VS Code and see how this works. Currently we just store the data in plaintext, persisted to this file. Let me go into our documentation of the EGo APIs. This time we need the ecrypto package, and it has two seal functions, SealWithProductKey and SealWithUniqueKey, which basically ask: do you want to bind the identity of your sealed data to this product, to this service in general, or do you want to bind it to one specific instance of the service? In our case, let's use the product key. The function expects the plaintext, encrypts it with the corresponding key, and returns the encrypted data.
Going into VS Code, here's where we save the state; this is our plaintext data. So let's call ecrypto.SealWithProductKey on it, there we go. This returns our encrypted state and an error I'm going to ignore for now. Then, instead of storing the bytes directly, we store the encrypted state. The same goes for loading the state. There is a single Unseal function; it works for either of the two sealing functions. So when I load the state, before decoding it, I need to decrypt it. Let's say the decrypted state equals ecrypto.Unseal of our binary blob, and this probably returns an error as well, which we ignore for now. And then, instead of the raw blob, I use this decrypted state.
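The save and load paths then end up looking roughly like this sketch. It keeps the package-level store map from the earlier server sketch; note that newer EGo versions take an optional additional-data argument (passed as nil here), and signatures may differ in older releases:

```go
package main

import (
	"encoding/json"
	"os"

	"github.com/edgelesssys/ego/ecrypto"
)

var store = map[string]string{}

// stateFile lives on a mounted host path so it survives restarts
// (see the mount point configuration discussed below).
const stateFile = "/data/secret-store"

func saveState() error {
	plaintext, err := json.Marshal(store)
	if err != nil {
		return err
	}
	// Sealing binds the ciphertext to this enclave product: only enclaves signed
	// with the same signer key and product ID can unseal it.
	sealed, err := ecrypto.SealWithProductKey(plaintext, nil)
	if err != nil {
		return err
	}
	return os.WriteFile(stateFile, sealed, 0o600)
}

func loadState() error {
	sealed, err := os.ReadFile(stateFile)
	if err != nil {
		return err // e.g., no state yet on first boot
	}
	plaintext, err := ecrypto.Unseal(sealed, nil)
	if err != nil {
		return err
	}
	return json.Unmarshal(plaintext, &store)
}

func main() {
	_ = loadState()
	store["hello"] = "world"
	if err := saveState(); err != nil {
		panic(err)
	}
}
```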
Very cool. One thing I need to mention here: EGo doesn't mount your host file system by default, so that you don't accidentally leak any data. If we store the state in some random file, it will end up in an in-enclave memory file system and not on disk. We can specify mount points in our configuration; you can see this in our documentation, where the configuration file reference explains those mount points. So let's copy this example here: mounting a path from the host file system.
I'm saying OK, I need some mounts, and I want to mount, say, the server's directory on the host into /data inside the enclave; the type should be hostfs and it should not be read-only. A very simple mount point. And now I can say: store the state in /data/secret-store.
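In enclave.json, such a mount entry could look roughly like this (the host source path is made up for illustration; see the EGo configuration reference for the exact schema):

```json
{
  "mounts": [
    {
      "source": "/home/user/server-data",
      "target": "/data",
      "type": "hostfs",
      "readOnly": false
    }
  ]
}
```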
I probably need to remove the current secret store file, because it's unencrypted and won't unseal properly. But that's it, right? We seal our state before storing it and unseal it before loading it again. I need to build the server again and sign it with the new mount points, and now I can do ego run; let me make this a little bigger. In the client I can just do the same thing: I set a value, and we see the secret store file pop up here. It's binary data; you'll have to trust me for now that it's actually encrypted, but I could show you directly. Let's see if I can retrieve it now: just go run client get test... that was the wrong one... and it retrieves the value. Let's restart the server, do the same thing, and it returns the value again. So the state was persisted, and it's now stored in an encrypted file on the host file system. This is how simple sealing is with the EGo libraries.
So that's all for EGo. I hope this hands-on part gave you a good impression of how EGo works. I just want to give you a very brief vision of how we can now deploy, for example, the server application we just built in the cloud, in a Kubernetes cluster, and deal with all the CC-specific tasks you have there. Think of our server now as one service, one instance of a service, one pod in the language of Kubernetes. Now you want to scale things up because a lot of clients want to store their keys in your key-value store, so in this scheme you have four instances. Now we basically need to sync our state between all those different instances. They need to attest each other, because they don't trust each other out of nothing, right? They need to do attestation between themselves, and from the outside you just want to see this as one instance. So you have a lot of CC-specific tasks. We built another tool for this called MarbleRun; we call it the control plane of confidential computing. It basically takes the concepts we just saw for single enclaves to the context of a whole cluster, so you can have end-to-end confidentiality, integrity, and verifiability not only in a single enclave but across your whole deployment in the cluster. And everything is updatable, everything is cloud native, so very easy. We don't reinvent the wheel here: MarbleRun works with the most common service meshes and is designed to work with Kubernetes. It can also work standalone, but together they form what we imagine to be the way of handling DevOps tasks for deployments of confidential microservices in the cloud.
Okay, so, conclusions. I hope I got you excited about confidential computing and about how easy it is to build confidential microservices with EGo. I encourage you to try it out yourself: go to ego.dev, check out our docs, we have some samples you can try. The same goes for MarbleRun at marblerun.sh. They're both open source and you can find them on GitHub; if you like them, leave a star, we would really appreciate it. And yeah, that's it from my side. I hope you enjoyed the talk. Thank you very much for joining in, and enjoy the rest of the conference. Thank you.