Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi everyone, I'm happy to be here today. I'm Shai, the CTO of Livecycle, and today I'm going to talk about self-contained development environments. I'm going to start this presentation by sharing an experience I had a few years ago. Back then I worked at a company called Soluto, where I led the development of an open source solution for feature flagging and remote configuration.
Since it was an open source solution, we wanted other developers in the company to contribute code to it, make it better, and add their own features. To encourage them, we organized a company-wide hackathon, and because the project was called Tweek, we called it a tweekathon. The team was super excited. We created an organized backlog, added documentation in rough areas, printed shirts, and had Tweek swag. Then the event started and, well, at the end it didn't work that well. The problem was that most developers in the company were struggling to build and run the project. The bootstrapping was so difficult that many of them never even got to the part where they start developing features; they were just trying to make everything work together. The project was complex, and the Tweek team went from one station to another helping developers install missing dependencies or get the right configuration. In the end a few features were added, but the experience for most developers wasn't that good.
To put it simply, the onboarding was way too difficult for a hackathon. And that's the topic I'm going to talk about today: how we can make it much easier to run a project and do the onboarding, and make sure that developers always have a great experience when working on a project, new and old developers alike. Before we deep dive into this session, I'll tell you a bit about myself. I'm Shai, the CEO and co-founder of Livecycle. I'm a full stack developer, passionate about cloud development, backend architecture, and functional programming. I am the creator and maintainer of Tweek, the open source cloud native feature management solution that I mentioned, and I really care about simplicity, consistency, and elegance in code.
About Livecycle: we are building the next generation of collaboration tools for development teams. It's based on the idea of consumable playground environments, and it's designed to bridge the gap between coders and non-coders. You are more than welcome to check it out; you can try it at livecycle.io.
So let's start by describing what it feels like to start working on a new, complex code base. First of all, we try to build and run it, but in many cases we don't have the right operating system, or we have a missing or conflicting SDK or programming language runtime; runtimes can be, say, Python 2 versus Python 3. We have package managers that throw, let's say, random errors; they're not necessarily random, but they certainly feel that way. And then, when it doesn't work, we try to read the README, and the README says we need to run some magic scripts, maybe change the hosts file. If it's a really complex project, we need to install tools and dependencies in the environment, such as databases, or maybe install a root CA. I hope you don't have that problem, but if you do, it's certainly not fun. Okay, so we've built it and we run it, but now we try to develop, and in many cases the debugging doesn't work: the IDE has problems attaching. In many cases we have problems with autocomplete or with dependencies in the code. Watch-and-build doesn't work and we need to install some other tool for better watching. Hot module reloading, if we are working on a front-end app, doesn't work because of, I don't know, WebSocket issues or something. Problems with external dependencies, of course; you name it.
And that's not the worst. Then we get to integration tests, which in many cases simply are not running, especially if you have something that depends on WebDriver tests or UI tests. So it can be difficult. And the worst thing about it is that we need to do all this setup over and over again. If we start working on a different project and come back to this one after a few months, there's a good chance we need to do the whole setup again, maybe because the project got updated, or maybe we installed other dependencies that are conflicting. And that's definitely not fun.
So why is it so difficult? I mentioned some of the problems, but to recap: we have lots of fragmentation in operating systems, SDKs, and runtimes. In many cases it works on my machine, so I don't worry about it anymore; I'll only replace my machine if it burns down or something. It works, I'm not touching it, and that's the problem when we try to set up a new machine. There is a vast number of different toolchains and IDE extensions. Our development workflows today are much more difficult to set up, and they can break: debugging, watching, building, hot reloading, maybe mounts. We are not just doing build-and-run anymore; we want good tooling. Developer machines are also polluted and overloaded with tools. And besides that, environments and tools tend to change rapidly in active repositories.
So at the end we have lots of frustration and we waste tons of time. How can we make it better? What's the dream? The way I see it, the dream is development environments that are consistent: we get the same predictable experience every time. They are reproducible, so I can destroy and rebuild them without worrying about how I once managed to make them work. They are isolated, so they don't get affected by other development environments or other projects I'm working on. And they are self-contained: all the dependencies and tools needed for development are defined and packaged inside the environment, whether that's a database, an SDK, or something like that.
If lots of this sounds familiar to you, it's because lots of it has been said about containers; these are exactly the properties containers give us. Two and a half years ago, VS Code released a really amazing feature, in my opinion. VS Code is one of the most popular IDEs; according to surveys, I'm sure most of you are probably using it. They introduced an amazing new feature that allows us to have our local IDE running on our computer, while the environment itself runs inside a container. In terms of experience, we still have the native UI running locally; it's not a full remote solution. But the VS Code server, with all the extensions, the language server, the debugging, the terminal, and everything else, runs inside the container, not inside our local installation of VS Code on the host machine. This allows us to have a great developer experience in very well-defined, instant, self-contained development environments. Let me show a demo.
How does it look? All the examples and slides are available on GitHub. All the tools used in this presentation are open source and free to use; I'm not selling anything. I will say that most of the examples here are not completely bulletproof, and they use some tools that can be considered experimental, but they work, and they work well. You can easily reproduce them. I'm going to go through several projects and show how we can use this amazing new feature, and how we can configure our projects to get this kind of experience.
I'll start with a simple example: a CLI tool that takes an image and turns it into ASCII art. It's an open source project written in Go, and it's written in an old version of Go from before we had Go modules, so it would require some tweaking to make it work if I were running it locally. So let's open our IDE. This is a regular VS Code, but you can see on the indicator in the corner that it says something like "Dev Container: Go". What does that mean? To showcase it, I'll open a terminal here and check whether I have the Go CLI and its tools. We see that I don't have them; they are not installed on my computer. But here, in this terminal, I do have them. Why? Because this terminal is not running directly on my OS; it's running inside a dev container. And I still get the same experience as a regular terminal, which is amazing.
And it's not just the terminal; it's everything in the editor. So how does this magic happen? Simply, we have a .devcontainer folder with a devcontainer.json file, where we define the configuration we want for this development container. We can define which Dockerfile is used to build the environment. We can set settings for plugins; for example, I define some settings for the Go extension. I can also define which IDE extensions I want, so the dev container gets its own dedicated set of VS Code extensions rather than my whole array of extensions. There's some other stuff I'm doing too: because this is an old project, I need a symlink between the repository folder and the working folder, following the GOPATH conventions that were used before modules.
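A minimal devcontainer.json along these lines might look like the following sketch (paths and the post-create script name are illustrative, not the exact ones from the project; this uses the older top-level `settings`/`extensions` schema that VS Code supported at the time):

```json
{
  "name": "Go (legacy GOPATH)",
  "build": { "dockerfile": "Dockerfile" },
  "settings": {
    "go.useLanguageServer": true
  },
  "extensions": ["golang.go"],
  "postCreateCommand": ".devcontainer/link-gopath.sh"
}
```

The `postCreateCommand` is where one-time setup like the GOPATH symlink can live, so nobody has to run it by hand.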
But the good thing is that I don't need to worry about it, because it was done for me. If we look at the Dockerfile that is used to build the container, we can see that I'm using a base Ubuntu image, I'm installing Go, and I'm setting an environment variable so it doesn't use Go modules. I'm installing other tools that are relevant for Go, such as dep, the old dependency manager that this project uses. I'm using the Zsh shell and installing plugins for the shell, so if I use git, we get autocomplete for git and also for go, because we defined the git and go plugins here.
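A sketch of such a Dockerfile might look like this (the base image, Go version, and paths are assumptions for illustration, not the project's actual file):

```dockerfile
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive

# Basic tools plus the Zsh shell
RUN apt-get update && apt-get install -y curl git zsh \
    && rm -rf /var/lib/apt/lists/*

# An old Go toolchain, for a pre-modules project
RUN curl -fsSL https://go.dev/dl/go1.12.17.linux-amd64.tar.gz \
    | tar -C /usr/local -xz
ENV PATH=$PATH:/usr/local/go/bin:/root/go/bin
ENV GO111MODULE=off

# dep, the legacy Go dependency manager the project uses
RUN go get -u github.com/golang/dep/cmd/dep
```

Because this all lives in the repository, every contributor builds the exact same environment.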
Amazing. So let's try to run this project. I'll do go run on the main file, and we can see that it needs to get a file via the -f flag. I'm going to try to run it with this file, and it won't work, because it's a URL and this project doesn't support that. If we go to the code, we can see what we need to add to support URLs; basically it's adding that kind of logic here, in the open-image function. You can see, first of all, that I have full autocomplete and highlighting; everything just works. I'm going to check out a branch where I have this implementation. From the self-contained perspective, everything simply works inside this dev container. And again, everyone who opens this project is going to have the same experience I'm having now. No need to do any setup at all.
Okay, so I checked out the branch, and we can see that in the new branch we have code that downloads the image if the path starts with https or http. Let's try it. Okay, so that's my profile image in ASCII art. Let's try the Python logo, or Docker. Again, it was very simple, and without the dev container I could have wasted hours making it work and getting a good developer experience. You can also see that we have some errors here: if I tried to run the tests, they would fail, because dependencies are missing from the project. So I'll run the dependency manager, which is also installed inside the dev container, then run the tests, and everything works properly.
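The URL-handling change I checked out can be sketched like this in Python (the real project is written in Go; the function name here is hypothetical):

```python
import os
import tempfile
import urllib.request

def resolve_image_path(path: str) -> str:
    """If the argument is a URL, download it to a temporary file first;
    otherwise return the local path unchanged."""
    if path.startswith(("http://", "https://")):
        suffix = os.path.splitext(path)[1]
        tmp = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
        with urllib.request.urlopen(path) as resp:
            tmp.write(resp.read())
        tmp.close()
        return tmp.name
    return path
```

The rest of the tool then only ever sees a local file path, whether the input was a URL or not.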
I'm not sure why it's still showing red here; it says something is missing, but after installing the dependencies it should actually work properly. Maybe I'll just do a go build. You can see that the tests are passing and the application is running; I think VS Code just didn't notice that the missing mock dependency was fixed, and if I reload the IDE it will probably go away. Okay, so that was the first example, and I want to point out what we saw here.
First of all, we saw that we are running inside a development container. The development container has integration with the SCM, which is why I can run git commands and check out a different branch. We have remote code editing, so I can edit directly, and we have a remote terminal. In addition, we saw some configuration of the environment: we set up the runtime, we set an environment variable and the path, we configured our shell, and we defined the extensions for VS Code to use.
Okay, let's continue to our next example: a Python application. Since this is a Python conference, that kind of makes sense. Here we have a simple Flask application that sends email; it is based on the Sendgrid example. Again, all of these projects are open source, so you can try them. We have new challenges here: we need to run and interact with a server, we need to manage secrets, such as the Flask secret and the Sendgrid secret, and we need a good debugging experience.
So let's try that. This is my Python project, the simple email sender example. We can see the dev container here; in this one, for example, I won't have Go or Node or the other stuff I had in the previous dev container, just Python 3.9. In the devcontainer.json we can see the definition: the settings I want to add to the Python extension, and the extensions I want to use. And in the Dockerfile you can see that I'm installing another tool, called sops. Sops is a tool designed for encrypting and decrypting secrets; I'll show you how I'm going to use it.
So basically, here I have the configuration of Sendgrid in a file that is also part of the source control, but it's encrypted; no one can read it. It's encrypted with a GPG key in this case. To showcase how this file is used, I'll show a different file, one called example.encrypt. It also holds a secret. In the real secrets file we have things like the Sendgrid API key and the mail default sender; those key names are metadata, but here we just have some secret number. Let's try to decode it. I'm going to use the sops CLI with -d, and you can see that the secret number here is 30. I can put it in a dotenv file, and the decrypted file is not going to get checked into source control. You can see that I can easily edit it as well: if I do sops edit, I can change the number, and if I run the decrypt again, we see a different number, 40. And we can see in source control that this file has changed.
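The sops commands used in this demo look roughly like this (file names are illustrative; this assumes a GPG key that can decrypt the file is in your keyring):

```sh
# decrypt to stdout
sops -d example.encrypt.json

# open the file decrypted in $EDITOR; sops re-encrypts it on save
sops example.encrypt.json

# decrypt the real secrets into a local .env that stays untracked
sops -d secrets.encrypt.json > .env
```

Because only the values are encrypted, diffs of the committed file still show which keys changed.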
If we look at the init script, we can see that the first thing we do is decrypt the encrypted secrets JSON into a .env file that Flask can use, and then we install all the requirements.
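The decrypt-then-write-.env step could be sketched like this in Python, as a simplified stand-in for what the init script does (the key names are illustrative):

```python
import json

def json_to_dotenv(decrypted_json: str) -> str:
    """Flatten a decrypted secrets JSON document into KEY=VALUE lines
    suitable for a .env file that Flask (via python-dotenv) can load."""
    data = json.loads(decrypted_json)
    return "\n".join(f"{key.upper()}={value}" for key, value in data.items())

print(json_to_dotenv(
    '{"sendgrid_api_key": "SG.xxxx", "mail_default_sender": "me@example.com"}'
))
```

In the real script the decrypted JSON would come from `sops -d` rather than a literal string.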
So let's run the init. Okay, now let's try to run the application. The application is running, and the reason it runs this way is that we have a launch.json configuration that defines how the app is launched. It's basically created automatically the first time you run it, and in this case it's also source controlled, so there's no need to create it every time.
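Such a launch.json might look like this sketch (the module and port are assumptions based on the demo; `app.py` is a hypothetical entry point):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Flask",
      "type": "python",
      "request": "launch",
      "module": "flask",
      "env": { "FLASK_APP": "app.py" },
      "args": ["run", "--no-debugger", "--port", "5001"]
    }
  ]
}
```

Committing it means every contributor gets the same F5 debugging experience out of the box.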
So we have the application, and we see that it's running on port 5001, on the localhost of the development container. But we want to access it on my machine, so how can we do that? First, I'll show you that somehow, magically, it works: that's the application, and I can send a test email. Note that it didn't hit the breakpoint yet; spoiler. Let's go back. So I sent an email, and it arrived just now. It works. But the question is, why does it work? I'm on localhost on my machine, and the reason is that VS Code does port forwarding automatically. We can see that port 5001 on the container is forwarded to my local address on port 5001. It's that simple, and I can forward any port I want from here. I could also open it inside VS Code, like in a browser. Let me send it again, and this time I'll put a breakpoint, like I said before. And again, it simply works: I have the full experience. The IDE is running locally, but the container is running on a different machine, and we still get a great experience. We can watch variables, we can debug, we can set breakpoints, we can do everything, and it simply works. I don't need to do any setup for that: I just open the project, VS Code recognizes the dev container, it runs the application inside the container, and I get a great experience.
Okay, so that was another set of challenges; let's go over what we saw. Basically, we saw secret encryption: we used Mozilla SOPS for encrypting the secrets. I'm using a GPG key, but SOPS is actually much more powerful; it can also connect to encryption-as-a-service solutions such as AWS KMS or HashiCorp Vault. The metadata, such as the key names, is saved unencrypted, which makes it very easy to do diffing and check history, as we saw in the git diff. This practice is not actually used for dev environments all that often; it's more popular in a GitOps context, when deploying things to production that need secrets. There are other solutions as well, such as git-secret, git-crypt, and others. On the IDE-settings side, we saw that I'm using a launch.json for configuring how the project is launched, and I'm using port forwarding to forward ports to the localhost. That was pretty straightforward. The next project I'm going to show is a step up in complexity, and it's a personal project: it's Tweek, the open source feature flag management solution that we used in that hackathon.
In Tweek we have lots of challenges: several microservices, several databases, a messaging system, cross-service communication, and different languages in each of the microservices. The architecture looks something like this, and it's very complex; in terms of dependencies between services, it can be really challenging. To solve it, I'm going to show how we use not just a dev container, but also Docker Compose and Tilt. So let's see the example.
This is the dev container of Tweek. We can see that we are mounting a volume for Docker-in-Docker, and I'm installing the Docker extension. So first of all, you can see that I have Docker inside the container; it's a nested Docker, and we'll see in the Dockerfile how we make that happen. My extensions include Docker, so I can see the containers that are running; they include .NET, because one of the projects is in C#; they include Golang; and they include Prettier, for formatting the JavaScript. The post-create command installs all the dependencies, with npm and the .NET CLI; we can see it here: dotnet restore, yarn for everything, and all the other stuff. And then there's the Dockerfile itself, which is where it gets interesting.
We have some code here that is designed to run Docker-in-Docker; everything here is actually taken from the VS Code examples. We install CLI extensions and shell plugins for Docker, Git, Golang, and .NET; we're installing .NET, we're installing Golang, we're installing Node.js and Yarn, and we're installing Tilt. So basically we have everything we need to run, and I'll talk a bit about Tilt.
So first of all, we can see that we have all the services, and I can debug them. I get a good experience for every kind of file, because all the right extensions are installed and in place. But when we run them, we want to run the full installation of Tweek, with all its dependencies and complexity, and for that we have a Docker Compose file.
As many of you probably know, a Docker Compose file is a file designed to describe a setup of several containers running together: for each container, what its environment variables are; it can also define the build context and how to build the image, or where to get the Dockerfile for building it.
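As an illustration, a Compose file for two such services might look like this (the service names, paths, and ports here are hypothetical, not Tweek's actual ones):

```yaml
version: "3.8"
services:
  api:
    build:
      context: ./services/api        # where this service's Dockerfile lives
    environment:
      - NATS_URL=nats://nats:4222    # cross-service communication over the compose network
    ports:
      - "4003:80"
  nats:
    image: nats:2
```

Services reach each other by name on the Compose network, which is what makes the cross-service wiring declarative.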
So here I have the Docker Compose files, and the important one is the one called tilt.yml. It's also a Docker Compose file, and it includes all the services we run, all the environment variables, all the configuration. And here's the reason we use Tilt: Docker Compose allows us to run the whole application, but Tilt allows us to have a good development feedback loop. Every time we change code, it rebuilds the image or tries to do a live update. For most of the services, on every code change we simply rebuild the image and then rerun it.
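A Tiltfile for this kind of setup might be sketched like this (Tiltfiles are written in Starlark, a Python dialect; the image names and paths are illustrative):

```python
# Load the services defined in the Compose file
docker_compose('deployments/dev/tilt.yml')

# Full image rebuild + restart on any code change
docker_build('tweek-api', './services/api')

# For the editor, sync changed source files into the running container
# so the front end can hot-reload instead of doing a full rebuild
docker_build('tweek-editor', './services/editor',
             live_update=[sync('./services/editor/src', '/app/src')])
```

The split between plain `docker_build` and `live_update` is exactly the rebuild-versus-hot-reload distinction described above.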
But for the editor we do hot module reloading. So let's run Tilt, using the command tilt up; it's a bit like docker compose up. Here we have the Tilt UI. Let's see it running: it runs on this port, which is also forwarded here, and we can see all the services of Tweek, the API and every other service, with the log for each of them, which is very convenient. We can see the application running on port 8081; that's the editor, the UI of Tweek. I'm going to show that we can easily make a code change.
Let's go to the login page, and let's open it here as well. Instead of "welcome to", I'll say "Python 42". So I changed it, and it updated with hot code reloading, without refreshing. I can also change, I think, the welcome message span here, so let's change that as well and put something a bit more welcoming. And it simply works, and we get a great developer experience. Again, this is something that was very difficult for developers when they tried to work on this project, on Tweek; just running it was difficult. Now, if they open this project, they can easily run it, debug it, make changes in the UI and get the hot-reloading experience, make changes in the services and get image rebuilds and reruns, and everything simply works. I mean, it's really amazing. I wish we had this kind of technology and these capabilities six years ago, I think, or five years ago. So it's really amazing.
Okay, before the last example, let's talk a bit about this one. We saw nested containers: we are using the Docker-in-Docker approach. There is also an approach called Docker-from-Docker, which reuses the host's Docker connection, but I think Docker-in-Docker is much more stable, and if it works well for you in terms of performance, you should consider using it. We saw watching and rebuilding on every code change, and remote debugging; we can have hot code reloading where possible. Things can get slower, but it's definitely worth it. We also mock cloud dependencies. The reason the application works is that we have Docker images for the databases: we have an image of Redis running, and an image of NATS. We also use wire-compatible alternatives for other dependencies: Tweek uses Amazon S3, but here we use MinIO, which is S3-compatible, and we use an OIDC (OpenID Connect) mock server, which is compatible with Google SSO. That's how we can have a great local development experience.
The last project I'm going to show, and I'll try to keep it short, is the Kubecost cost model, a tool to manage Kubernetes cost. The reason it's interesting is that Kubernetes deployments can be really complex. There are new challenges: we need a Kubernetes API server, we need a metrics server, and we need Prometheus, which is a time-series database. Kubernetes, if you're familiar with it, is a container orchestration solution; it's a full platform, and it can be very difficult to use. Running Kubernetes for local development today is genuinely difficult: we have fragmentation across different Kubernetes distributions, and each of them differs in how you use it. Versioning is difficult, upgrading is difficult. So by using a dev container, we can pin a single Kubernetes distribution and version, and that can make life much easier.
In this project we are going to use a tool called k3d. Let me just turn off Tilt here. So, this is the Kubecost cost model. It also uses Docker-in-Docker. We have the VS Code YAML extension for editing Kubernetes manifests; we have the Kubernetes Tools extension, so we can see the cluster I have; and we have Golang, because the project is written in Go. We can see that I have a Kubernetes cluster running. If we look at the Dockerfile, we see that we are installing Docker-in-Docker, we're installing autocomplete for kubectl, we're installing k3d, which is a tool designed to create k3s Kubernetes clusters, and we're installing Helm, which is a package manager for Kubernetes, plus Golang, Node.js, and, again, Tilt.
If we look at the init script we have here, we install all the dependencies, but the interesting thing is that we create a Kubernetes cluster, and we can create it together with a registry, so we can build and push images and deploy to dev.
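The cluster-with-registry step might look roughly like this (the cluster and registry names are illustrative; the flags are from k3d v5):

```sh
# create a k3s-in-Docker cluster with a built-in image registry
k3d cluster create dev --registry-create dev-registry

# check that the node is up and the kube context was set
kubectl get nodes
```

Built images can then be pushed to that registry and pulled by the cluster without ever leaving the dev container.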
We also apply the manifests in the dev container, which includes the installation of Prometheus. Prometheus is a time-series database, and here we use a Helm chart for it; we have the definition here. Again, everything happens declaratively and automatically, so when I run this project, everything comes up and I get a good experience, with a Kubernetes cluster running locally. And again we use Tilt: if I do tilt up, Tilt basically connects the images that I build locally to the Kubernetes cluster that is running inside the dev container.
So let's do a refresh here, and we can see that we have the cost model, which is the API that calculates the cost, and the UI application. If we look at the Tiltfile, we can see that we have a Docker build of the cost model, we push it, we define the Kubernetes resource and load the Kubernetes YAMLs that we have here, and we have a local resource, the UI, which runs with hot reloading and all the things that are necessary. So there's the application server, we can see it here and see that it works, and here's the UI.
And the interesting thing is that this is an open source solution, so if you look at the contributing file, you can see that in many cases running this application can be very complex: there are separate steps for building and running, you need to do port forwarding, and so on. In our case, everything just happens by running the application. So that's one example of how we can create a much better development experience.
One more thing I want to show in this example: I can delete this cluster, called, I think, kubecost. Let's see... okay, so let's delete it, run the init function again, and open an additional terminal here. We can see Kubernetes coming up; it's still not running, let's see. Okay, we have a cluster, and it set the kube context. Let's do kubectl get nodes: we can see that we have a cluster again within a few seconds. And if we do kubectl get pods across all namespaces, we see the whole setup: Prometheus, the metrics server, everything is running (Traefik is the reverse proxy). Basically everything is done automatically, and in less than a minute we have a working Kubernetes environment that includes all the dependencies we need for running this application, which actually has very complex dependencies. So again, it's an example of how dev containers can be used in so many development scenarios, even when the applications are very complex.
Okay, so that was the last demo. For running Kubernetes inside the dev container, we are using k3d, which is based on k3s, a minimal Kubernetes distribution, and we run it in Docker. We use Helm for installing charts, as we saw with Prometheus, applied declaratively, and we use Tilt for facilitating the building, pushing, running, and updating of images. To put it simply, what we had in this example is a Docker host that contains the dev container, which contains the IDE server and Tilt; those connect to a nested Docker-in-Docker daemon, which has a registry and a Kubernetes node running inside containers, which in turn runs the application containers. So it's a very nested thing, but the good thing is that it's all automatic. To put it visually, it's something like this.
As I mentioned, I think these examples showed that instant, self-contained development environments are something totally amazing. The good thing about them is that they are also source controlled, so they correspond to the application code, which makes it easy to run an old version of the application. The developer machines stay clean. They scale well to multiple environments without conflicts; as you saw, I ran many environments and everything worked smoothly. And they can run locally or remotely, which is again very convenient if you want a much more powerful machine.
About our setup at Livecycle: we actually have tens of microservices, even more today, in Golang and TypeScript. We have a front end with hot module reloading, our own Kubernetes controllers and custom resources, lots of external dependencies, a full-blown CI engine, a GraphQL engine, lots of stuff happening there, as well as CLIs and SDKs. And even though this setup is so complex, the time to tear it all down and build it completely from scratch is less than 15 minutes. The time to build, run, and test code changes is less than 10 seconds, even less for UI changes. The time to onboard new developers so far has been less than 3 hours, and that includes setting up Docker and provisioning a machine on AWS, because we work remotely. The time to introduce a new tool to the project is less than five minutes, if it's needed at all. There are no "works on my machine" issues and there's no state on developer machines. Secrets are encrypted in the repository, so developers need to deal with them less: they don't need to copy .env files from place to place or use Slack to send secrets, or anything like that. We use data seeding, so developers have initial data to work with. And our setup works on both Apple silicon devices and Intel Macs, so it's really nice to see that it simply works. In the future I hope to optimize it more: to have a shared build cache, to have snapshots to reduce those 15 minutes on the first run, and maybe to use a cloud provider with machines better optimized for this than AWS, something much more cost effective.
There are some drawbacks here. Creating the initial setup can take some time, and lots of the tools are bleeding edge. There's additional code to manage: the code of the environment itself. I think the big problem is that dev environments are not standardized yet, so we have coupling to VS Code, Docker, Git, and Linux. The Docker, Git, and Linux part I don't think is that bad, but with VS Code, we don't want to dictate which IDE to use. And there are also some performance issues.
Do we have alternatives to VS Code for dev containers? It's possible to use terminal-based code editors. Gitpod and Theia have similar features as well, with the .gitpod.yml file. But the thing I'm most excited about is that JetBrains is building good support for this, both in their Gateway project, together with Space, and in Fleet, which is their new IDE. So yeah, I think in the future we'll have more alternatives. The experience in VS Code is amazing, but it's not everyone's favorite editor.
I think that putting the development environment configuration in the repository is part of a larger trend of putting more things in there. We can see it with documentation, with linting configuration, with tests and security tests, with design systems, with infrastructure as code, with notebooks, and more. The way we see it, in the future every repository will be self-contained: all the code, tools, knowledge, and definitions related to a project will reside in the repository. Everything will be source controlled, with history, and the code will be so much more accessible.
We are lowering the barrier to entry, because no one needs to know things that live outside the repository or start researching how to make everything work. Applications become portable. It's really amazing that, as a project creator, I can control the developer experience of the people who are going to use my project and make sure they have a good experience with these kinds of technologies. I think we are seeing an emerging tool ecosystem: cloud IDEs, and dedicated PR environments, which is something we actually do at Livecycle, in our product. And we also see an emerging ecosystem around GitOps, where we have a repository and we want to publish the application from it.
I've shown lots of tools. Here is a small cheat sheet of each challenge, with solutions and example tools. Some of the entries relate to things I didn't show in a concrete example, but they are very similar challenges you can tackle if you're trying to use dev containers. And thank you! Everything is in my repository, and I'll also post links on my Twitter account. I hope it was fun and interesting, and thank you for your time.