Transcript
This transcript was autogenerated. To make changes, submit a PR.
Are you an SRE, a developer, or a quality
engineer who wants to tackle the challenge of improving reliability in
your DevOps? You can enable your DevOps for reliability
with ChaosNative. Create your free account
at ChaosNative Litmus Cloud. Hi
everyone, I'm super excited to be here today.
I'm Ishai. I'm a developer, and CTO and
co-founder of Livecycle. Today I'm going to talk about how
to create an amazing developer experience when onboarding
a new code base. I'm going to
start by sharing a story or an
experience I had a few years ago. So back
then I led a team that developed an open
source solution for feature management and
for feature flagging and configuration management.
Because it was an open source project that was
developed inside the company, we wanted everyone in the
company, every developer, to contribute
code to this project. To achieve that, we organized a hackathon,
and because the project was called Tweak, we naturally called
it a Tweakathon. We were excited: me and the team created a dedicated
backlog for this event, we added lots of documentation to the project,
we promoted the event, we printed t-shirts, and at the end, well, it didn't
work that well. We didn't have many contributions, and
the main reason was that most developers were struggling
to run the project, let alone develop new features
or test them. The problem was that
the project was complex and they needed lots of help
to run it properly and add code to it. And even
when they did start working on it after a few hours,
the development experience was not that great in terms of debugging
or code IntelliSense. And today
I'm going to talk about the challenges we have when we onboard
a new code base and how we can make it much easier.
A few words about myself. I've been a full stack developer
for the last decade and a bit more.
I'm passionate about cloud development, backend architecture,
user experience, developer experience, and I really love
functional programming. I'm the creator and maintainer
of Tweak, an open source cloud native
feature management solution, the one I talked about before.
And I also care deeply about
consistency and elegance in code.
A few words about Livecycle: we
are building the next generation of collaboration tools for development teams.
It's based on the idea of continuous playground environments, and
our mission is to bridge the gap between coders and non-coders.
Our project is currently private, but the beta
is coming soon and you are welcome to check it out.
So let's start by describing how it feels to start working on a new, complex
codebase. The first thing I'm going to do is try to build and run the application,
but I don't necessarily have the right operating system, so I need to see what
that means. Maybe the instructions are written for a different OS.
I might have missing or conflicting SDKs or programming language runtimes,
it can be the wrong version of Python or Ruby or Node,
and in many cases I'm starting with installing
the dependencies and the package manager throws random errors
that can be difficult to debug. They are not necessarily random, but they
sure feel that way. And after that, the build is working,
I managed to install the dependencies,
but at the next step I'm going to reread the
readme and try to make it work, because it didn't run properly.
Apparently I need to run some magic scripts.
Some of them I'm going to watch fail. Maybe I need to
change the hosts file. Apparently this project
requires a database, so I need to install the database.
And if I'm out of luck, I might need to install something very
unintuitive like a root CA or something.
But say I managed to build and run it and everything works. Well,
I'm here for developing, and suddenly the developer experience
might not work properly. Debugging doesn't work,
the IDE doesn't stop on my breakpoints,
autocomplete or dependency management doesn't work properly in
the IDE, the code watch and build flows
don't work and apparently I need to install watchman or some other tool.
Hot module reloading doesn't work because of
websocket issues or something else, I don't know.
And there are also these external dependencies that apparently
I need to manage differently for them to
work. And maybe I have some code issue or something else.
So it's kind of a difficult experience. And after I
manage to run everything in development, I get
to the integration tests, and here it's a
bunch of tools and they are flaky and they are not working
properly. And yeah, my head explodes.
But the worst part of it all is
that if I'm going to leave this project for a few months,
most probably the next time I'm going to work on this project,
I'm going to do all this stuff over and over again, because
my machine changed and because the project changed.
So yeah, that's difficult.
So why is it so
difficult? One of the reasons
is that we have so many tools and
so many SDKs and runtimes, this huge fragmentation.
Also, usually when stuff works on our machine,
we are not touching it. It's like, it works, I'm not going to set up
a new machine ever, unless my machine gets viruses or
gets burned down, or my hard disk is failing or
something like that.
We have a vast amount of different toolchains,
IDEs, and complex development workflows that
can be really difficult to set up and break
easily. Our own machines are usually
polluted and overloaded with tools, which is really
terrible. And besides the environment
itself, the codebases always introduce new tools
and changes, especially in active repositories. And at the end,
we waste tons of time and lots of frustration to get
this stuff working. So what's
the dream, what's the experience we want to
have? The way I see it, we want development environments that
are consistent, that provide the same predictable
experience, so I'm not going to
get these random errors, because the environment is
consistent. I want them to be reproducible:
it's possible to destroy and rebuild them, so I can move
to a different machine, or if I did some damage, I can just
destroy and rebuild them, like we do today with servers.
I want them to be isolated: if I'm running several
development environments, I don't want them to conflict with each other,
so I can have this project requiring this
version of Node and that one
another, and this one requiring this specific CLI, and they don't
conflict with each other. They should
be self contained, meaning that all the tools
I need to work in an optimal way on
this repository and codebase should
be defined inside. So it should
be easy: I just start working and I have all the tools
and dependencies and packages.
And at the end, we want these environments to not
break easily, so we won't struggle over and over to
get things working again. And the feeling
is that we used to have some
of these challenges around applications, and
then we suddenly ran them in containers, which have very similar
attributes to the ones I mentioned here, especially when we talk
about Docker containers. We have this tooling
of Docker that allows us to build and run
these environments, which are self contained, isolated to
some degree, and reproducible,
which is really nice. Now, a few years ago,
two years ago actually, VS Code, which is one of the
most popular IDEs around today, introduced
a feature to develop inside of a container. The
idea is simple. The IDE itself has
two components. The front end runs on our computer,
providing a good and native experience.
But the back end of the IDE, which is responsible for file editing,
the terminal, language servers, extensions,
et cetera, is running inside the container alongside
our codebase. And that way we have our
repository and development environment running in an isolated
container, while we still have a great developer experience
on our host machine. So how does
it look? I'm going to show several projects
and examples that are available on GitHub.
All the tools that I'm going to use in this presentation are open source
and free to use, so you can do it yourself.
You don't need any other tool for that or
anything that costs money.
Most examples are far from bulletproof
and use tools that are experimental,
but they showcase the value we can get
from putting the development environment inside a container.
So let's start with a quick example.
The first one I'm going to show is a project that does
a translation from an image to ASCII art.
The project is written in Go, from before we had Go modules,
so we face several challenges.
Let's start to develop it. The first thing I'm going
to show here is that we
are running inside the dev container. You can see it here in the
VS Code status bar, and you'll notice something very unique.
If I run a command to check whether I have the tools
for Go, you see that I don't have them, because they are not installed on
my machine, which is actually a Windows machine. But here is a container
that runs Linux and has Go
installed. So that's pretty awesome.
Now, how is it achieved? Basically, if VS Code
finds a .devcontainer folder, it looks for
instructions on how to build the environment for
this project. So we can see that we have a definition
here of what Dockerfile to use, some settings
related to Go, extensions that should be installed for the repository, like the
Go extension. We can see here that I already have the Go extension
installed because it's part of the dev environment, and
a postCreateCommand if we want to do some initial script.
Now if we look at the Dockerfile here, we can see
that we start with a basic Ubuntu.
We are going to install Go, put
it in our path, and set environment variables
that tell Go not to use Go modules, because the project,
as I mentioned before, is an old Go project. We install
several additional Go tools, the dependency manager
of Go from back at the time, called dep, that
this project is using, and some configuration
for our shell, so I have the autocompletion
for Go because I configured my shell
to have extensions for Git and Go. And
basically everyone that is going to open this project
in VS Code will get the same experience.
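To make that concrete, here is a rough sketch of what such a setup could look like. This is not the exact configuration from the repository; the file contents, Go version, extension list, and the dep install step are illustrative assumptions.

```bash
# Hypothetical sketch of a .devcontainer setup for an old (pre-modules) Go project.
mkdir -p .devcontainer

# devcontainer.json: which Dockerfile to use, IDE extensions, and a post-create script.
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "image-to-ascii",
  "build": { "dockerfile": "Dockerfile" },
  "extensions": ["golang.go"],
  "postCreateCommand": "dep ensure"
}
EOF

# Dockerfile: Ubuntu base, Go on the PATH, modules disabled, dep installed.
cat > .devcontainer/Dockerfile <<'EOF'
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y curl git ca-certificates
RUN curl -fsSL https://go.dev/dl/go1.13.15.linux-amd64.tar.gz | tar -C /usr/local -xz
ENV PATH=$PATH:/usr/local/go/bin:/root/go/bin \
    GO111MODULE=off \
    GOPATH=/root/go
RUN curl -fsSL https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
EOF
```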
So this project renders ASCII art. I'm going to
run it against an image,
and I'm going to tell you ahead of time that it's not
going to work, because this project doesn't support URLs,
and I want to add this feature. Basically, I
need to change the open image file here.
I'm not going to do it right now because I already have a branch for that,
so I'm going to move to that branch.
But you can notice that I have a good experience here in terms
of editing; everything here is autocompleted, fast,
very nice. Now I'm going to run it
again with my branch and let's see.
Pretty nice, we have the Docker logo, and I can
even print my user profile
picture, and yeah, that works properly.
I can run the tests and everything works,
and I can also run our dependency manager to
see that everything works. Notice another thing:
I'm working inside a special directory, because Go
in the past required you to develop in a specific folder for
stuff to work properly. And I
have here the definition of the workspace, identifying that I
work inside this folder. So that's basically it.
Usually, for example, if I was working on it locally,
I would need to install Go in that version, install the dependency manager,
create this GOPATH folder with go/src
and stuff like that, which is not that
fun and might collide with other projects,
and I'd also have the environment variables and other tools.
So that's pretty much the idea of a dev container.
It's integrated with the SCM, so you saw that I can move
between branches. I have remote code editing capability,
not just editing the code but also seeing the autocomplete, and
everything works. We have remote terminals,
so it's different from the one I have on my host machine;
it's an internal terminal for that container.
And we can configure our environments the way we want: we
can set the runtime and SDKs and CLIs we want,
set environment variables as I did with the Go modules environment
variable, configure our shell,
add some plugins, and define the IDE extensions, like the Go extension.
So that's pretty cool. But can it work for a more complex project?
And I'm going to show it. So the next project is
a server app, still a simple one:
it's a Flask app that sends email via SendGrid. In this example
we have several new challenges for a good coding experience.
First of all, running and interacting with the server, because the server is
running inside a container. The second one is managing secrets, because we
need an API key for SendGrid. And then there's debugging.
So let's start with this project. I'm going
to start by showing the dev container. We have a
dev container that is based on Python: we have the Python extension,
and we have the Dockerfile that
is based on an image that Microsoft provides for Python applications.
But additionally, I'm going to install sops. Sops is a
project by Mozilla that is designed to deal with encrypted secrets
and adding encrypted secrets inside the repository.
That way I can add
the secrets, for example my API key, and keep
it in the repository, but keep it safe, because we know
that secrets should not live in the repository unless they
are encrypted. So here we
can see that we have a SendGrid API key. It's a
JSON file, but it's encrypted. We have the SendGrid
API key and the mail-from sender, we see the data is encrypted,
and I define what keys I want the
encryption and decryption to use. And we can see that I have a key
installed here, it's taken from my machine.
The idea is that I can use sops
to encrypt or decrypt values. I'll show an example with a fake
additional file, and we see that I have some secret
number, and I'm going to
decrypt it. So how does it work? I use sops -d
and we can see that we have the value 42, a very secret number,
the meaning of the universe. And the idea is
that using sops we can put secrets
inside the repository, but they are safe because they are encrypted,
or safe to some degree, I mean there are some tradeoffs.
My init script here basically
takes the decrypted secrets and
inserts them into an env
file, so I'm going to have an env
file for the application. I'm not going to show it here, obviously.
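As a rough sketch of that flow, assuming the encrypted file is called secrets.enc.json and the key names shown are placeholders (both assumptions, not the actual repository contents), an init script could look roughly like this:

```bash
#!/usr/bin/env bash
# Hypothetical init script: decrypt sops-managed secrets into a local .env file.
# Assumes sops is installed and the matching GPG key is available in the keyring,
# and that jq is available for pulling values out of the decrypted JSON.
set -euo pipefail

SENDGRID_API_KEY=$(sops -d secrets.enc.json | jq -r '.sendgrid_api_key')
MAIL_FROM=$(sops -d secrets.enc.json | jq -r '.mail_from')

# Write an .env file that the Flask app can load (for example via python-dotenv).
cat > .env <<EOF
SENDGRID_API_KEY=${SENDGRID_API_KEY}
MAIL_FROM=${MAIL_FROM}
EOF
```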
And basically I have a Python app that can run.
Additionally, I want to show you that I have the launch.json
for VS Code that defines how I'm going to
run the application itself.
So if I go here and I click, I'm going
to run the application with a debugger and
everything should just work. So let's
see it. I have here a
Mailinator inbox, so let's try to send an email to it.
Okay, we got an email and that worked. And
I'm going to show an additional cool thing:
I can use the app here, I'm going to put a breakpoint here,
and let's send an additional email.
And we see that we stop here at the breakpoint,
we see the data of the message object, and
everything looks just awesome.
Now, you've noticed here that I'm accessing localhost,
but the application is running inside a container. So how
does it work? Basically, we
do port forwarding here: we define that
port 5000 is going to be forwarded to my machine.
The IDE is going to do it automatically, but I can also define
it inside the dev container
here with forwardPorts 5000.
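A minimal sketch of the two pieces mentioned here, the debug configuration and the forwarded port, might look like this; the file contents are illustrative, not the exact ones from the demo.

```bash
# Hypothetical .vscode/launch.json: run the Flask app under the Python debugger.
mkdir -p .vscode
cat > .vscode/launch.json <<'EOF'
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Flask: debug",
      "type": "python",
      "request": "launch",
      "module": "flask",
      "env": { "FLASK_APP": "app.py" },
      "args": ["run", "--port", "5000"]
    }
  ]
}
EOF

# Hypothetical devcontainer.json fragment: forward port 5000 from the container to the host.
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "flask-sendgrid",
  "build": { "dockerfile": "Dockerfile" },
  "extensions": ["ms-python.python"],
  "forwardPorts": [5000]
}
EOF
```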
Okay, so that's it.
In this example, we saw how
we use sops for encrypting secrets. This practice
is actually common when dealing with GitOps,
when doing deployment to production and we want
the production configuration files to be
source controlled. There are other solutions, git-secret,
git-crypt and some others. The good thing about sops,
however, is that it's really flexible. I
showed an example with GPG keys, which is nice to start with,
but it can be difficult to store these keys safely.
The good thing about sops is
that it can integrate with cloud encryption and key service solutions such
as AWS KMS or Key Vault, and basically these services
change the problem from storing
private keys to having the right access controls
for the keys, which is really nice.
That way we can have secure access control with SSO
and everything that we need, and the private keys never leave
the cloud provider. The metadata
is also saved in the file, so you saw that the JSON contains
the names of the keys we want to use for
encryption. So it's nice, because we can do diffing and check history
easily. So that's pretty cool.
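For reference, a sketch of configuring sops to use a cloud key instead of GPG could look like the following; the KMS ARN and file paths are placeholders, not real values from the talk.

```bash
# Hypothetical .sops.yaml: tell sops which cloud key to use for files matching a pattern.
cat > .sops.yaml <<'EOF'
creation_rules:
  - path_regex: secrets/.*\.json$
    kms: arn:aws:kms:us-east-1:111122223333:key/00000000-0000-0000-0000-000000000000
EOF

# Encrypt and decrypt as before; sops picks the key from .sops.yaml.
sops -e secrets/config.json > secrets/config.enc.json
sops -d secrets/config.enc.json
```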
I also showed an example of using the IDE settings:
defining a launch.json file for
the debug configuration and doing port forwarding.
So let's move to the next project.
I'm just going to close that.
Okay, so the next
project is actually a real, big
project. It's called Habitica.
It's a big application that I
think has been around for eight
or nine years or something like that; I remember using
it in the past. It's a task management
solution like Trello,
but more sophisticated, with habits, and it's
designed for organizing your own life. And it's completely
gamified, like an RPG. It's really a cool
project. It's open source
and also a website. And we have a
new challenge: it's a huge project, we have a front end, a back end
and a database here.
And I'm going to show how we are going to run it.
Okay,
so here is the project, and let's start again by checking
out our dev container. Because the project is kind
of heavy, I already ran it, to save us some time.
So here is the project. Now, you see that I'm not just using a
Dockerfile, I'm using a Docker Compose file that defines
the environment and not just the IDE, because we want to
have a database here. So here we
have the Docker Compose file.
This is the dev container itself,
and we can see that we inject some environment variables
and define the workspace. You can
ignore these labels for now, I will explain them later.
We have the DB here, which is a MongoDB. We have
Mongo Express, which is a tool I've added that can give us visibility
into what's going on inside the Mongo. We have
Traefik, which is a reverse proxy that I'm going to use, because
the services have lots of ports and I'm going to run
everything on a single port.
And we have the Dockerfile that includes
the Mongo CLI tools and the Node.js
version, basically.
So you can see that if I open a terminal, I have
the Mongo shell.
Yeah, sorry. So that's it. And I
also have extensions like the MongoDB VS Code
extension. Okay, so the
first thing I'm going to show here: I'm going to run the whole
project here. So we have the client and Storybook
running, and I'm going to run the server as well.
Now, this project has many
applications that are running. We have the
UI application, which is basically a Storybook to
see the design system of every component.
We have the docs, which have the REST API
and everything. You can see that everything is currently running locally.
We have the application itself, and
I've also added Mongo Express, which is connected
to the database. And I'm going to log
into the application, so I'm going to need a
user. Luckily, the application already
has a user here, the test user,
which I'm going to use to log in and test.
Okay, that's weird. Let's see.
Oh, the server is not running for some reason.
Let's see what's going on here;
maybe I've exhausted the resources here.
Okay, so yeah,
now it looks like it's working,
or at least it's loading. In the meantime,
how did I get this user? Basically, I
added some data, initial
data that I'm using for data seeding. When the dev
container is created, I'm also going to do
a mongoimport and add the data
to the database, so everyone that is going to run this project is
going to have the initial data. And we see
that the login works, and there I have my test user
with tasks like cardio or process email and stuff
like that. So this is a complex project.
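A rough sketch of that seeding step could look like this; the database, collection, and file names are placeholders, and it assumes the seeding runs from the dev container's post-create hook.

```bash
# Hypothetical post-create seeding script: import initial data into the dev database.
# Assumes the MongoDB service from docker-compose is reachable under the hostname "mongo".
mongoimport \
  --host mongo \
  --db habitica-dev \
  --collection users \
  --jsonArray \
  --file data/users.json
```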
We see that we have several applications
and servers running, and to achieve
that everything runs on the same port, we see that everything is on port
8000. I've added the reverse proxy,
which is defined in the Docker Compose here, it's called Traefik,
and the idea is that it's listening on port 8000,
but based on the labels the other services have
in the Docker Compose, it's going to redirect
traffic, for example from mongo.localtest.me to
the Mongo Express server on port 9000.
And the same goes for the application,
the docs and the UI on localtest.me. So that's
basically this application.
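To give an idea of how that label-based routing could be wired up, here is a rough docker-compose sketch; the service names, hostnames, versions, and ports are illustrative rather than the actual Habitica setup.

```bash
# Hypothetical docker-compose.yml fragment: Traefik routes *.localtest.me hostnames
# (which all resolve to 127.0.0.1) to the right service, everything through port 8000.
cat > docker-compose.yml <<'EOF'
services:
  traefik:
    image: traefik:v2.5
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:8000
    ports:
      - "8000:8000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  mongo:
    image: mongo:4.4

  mongo-express:
    image: mongo-express
    environment:
      ME_CONFIG_MONGODB_SERVER: mongo
    labels:
      - traefik.http.routers.mongoexpress.rule=Host(`mongo.localtest.me`)
      - traefik.http.routers.mongoexpress.entrypoints=web
      - traefik.http.services.mongoexpress.loadbalancer.server.port=8081
EOF
```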
I'll stop it because it seems a bit
heavy, and let's go back
to the slides. So we saw here a full stack application that used Docker
Compose with a DB image of Mongo and additional tools like Mongo
Express. I did some data seeding with basic scripts;
alternatively, we can clone data from staging or production if
needed. I showed the example of using a reverse proxy:
instead of using ports, which are shared and
which we don't want to exhaust, and also because it's more
convenient to use subdomains than numbers, I'm using
a wildcard localhost DNS. Basically, localtest.me,
or other domains like xip.io, are domains
where every subdomain points to our localhost. A very
cool trick, and you can also create one yourself
for security reasons, instead of using the public one.
And we use Traefik, which is a very simple
and developer friendly reverse proxy. The nice thing is that it reads the Docker
Compose definition, so it's very easy to use, and it also integrates
well not just with Docker Compose but also with Kubernetes
and other tools. So let's go to the next example.
And the next one is personal: that's Tweak,
the project I talked about before. It's a cloud native, open source
feature flag and configuration management solution.
It's got lots of microservices, several DBs
and messaging systems, cross communication, a polyglot environment;
we use TypeScript, .NET and Go, a complex
architecture. The services talk with each other, and
you don't need to understand this picture to see that it's a complex thing.
And we are going to run it as well inside the dev container
and provide a great experience.
So that's the Tweak project. The first thing
I'm going to show is that in the Dockerfile
I'm going to install Docker
in Docker. The idea is that instead of using the
Docker host we have on my machine, I'm going to use an
internal Docker, a nested one. So you can
see that my docker ps here is
empty, and if I run the same command on
my computer, naturally I'll see all the dev containers.
So we see it's a dedicated container
for this project,
a dedicated Docker daemon for this project. I'm installing .NET
5, Golang, Node.js and yarn, so I have all the tools
I need for development, and I'm also installing Tilt.
Now, Tilt is a very nice solution
that is designed to solve the problem that in
Tweak, the environment itself is developed inside containers;
that's the easiest way to develop Tweak.
So we have this YAML that defines all
our services, and
what Tilt does is basically provide us a tool for
editing the files and changing them,
replacing code inside the container, or rebuilding our images
in an automatic way. So how does it look?
I'm going to open an additional terminal here,
and we see that all the services of Tweak are running inside the nested
container. And Tilt
also has a UI to see the application here.
So let's see that we
have Tilt here, let's see that it's running on the right port.
Yeah. Okay, so that's the
Tweak application. We can see all the services that are running in Tilt:
some of these services are services of Tweak itself,
others are tools for mimicking the
cloud environment. So we have, for example, MinIO, which is
a tool for object storage
like S3; we have NATS, which is a message broker for passing
messages; Redis, which is a database that in the cloud you can use a
hosted version of; and we have our
OIDC server mock, which is like an OpenID Connect
provider (in production we can use Google or something like
that). And we have the other services of Tweak. And the
idea is that every time I make a change to the code, it's
going to either rebuild the project or try to do
auto reloading. In this example I'm going to show an auto
reloading example. So here is the login page of Tweak.
I'll just refresh it to make sure we are working
on our latest version.
And we see we have the page here, and
I'm going to change the title here,
make it a bit bigger, and
that's basically it. And yeah, we see that it
works instantly, and it's pretty amazing.
I mean, I wish we had that kind of developer experience a
few years ago when we did that hackathon; it would have been a total
game changer. So that's really amazing.
So let's go back again to the slides.
So we saw the example of Tweak. In Tweak we are using nested containers,
so we have Docker in Docker running inside. There are different
ways to run it, but if we are using nested containers
with Docker Compose, Tilt is great for watching and rebuilding on
every code change, or doing code reloading or remote debugging.
Everything works. It can be a bit slower, because we are running
containers that also run inside a container.
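As a rough idea of what such a Tilt setup could look like in Docker Compose mode (service and image names here are hypothetical, not Tweak's actual ones):

```bash
# Hypothetical Tiltfile: let Tilt drive a docker-compose based dev environment,
# rebuilding images and updating containers whenever source files change.
cat > Tiltfile <<'EOF'
# Load all services defined in the compose file.
docker_compose('docker-compose.yml')

# Tell Tilt how to (re)build the image used by one of the compose services.
docker_build('tweak/editor', './services/editor')
EOF

# Start the dev loop and the Tilt UI (usually on http://localhost:10350).
tilt up
```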
Tweak also uses the practice of mocking cloud dependencies to work properly,
so we have Docker images of the databases, and we have wire-compatible solutions like MinIO
or an OIDC mock server. Other tricks you
can use to mock cloud dependencies are manual mocks or full frameworks
like LocalStack.
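For example, a wire-compatible S3 substitute can be spun up with a single container; the credentials below are throwaway dev values, not anything from the talk.

```bash
# Hypothetical sketch: run MinIO locally as an S3-compatible object store for development.
docker run -d --name dev-minio \
  -p 9000:9000 \
  -e MINIO_ROOT_USER=devuser \
  -e MINIO_ROOT_PASSWORD=devpassword \
  minio/minio server /data
```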
And the last example I'm going to show is Kubecost. This is actually an
example where we run our dev container
inside a complex platform; in this case the platform is
Kubernetes, and we need to install it.
Kubecost is a tool for managing Kubernetes costs, so we
need a Kubernetes cluster, we need a metrics server, and we need Prometheus,
which is a monitoring tool.
So, what can we do with Kubernetes? The first thing to remember is
that Kubernetes local development is difficult today.
We have fragmentation, we have different versioning, we have different distributions,
we have minikube, Docker for Desktop, MicroK8s, kind,
k3s, and everything is a bit different.
And you notice this differentiation especially when you develop
a project that uses the Kubernetes
API. So using a single Kubernetes
distro and version can make life easy.
And I'm going to start with our example,
and that's the last one, as I mentioned before.
So inside this dev container you can see
that I have Kubernetes running.
That's awesome. I'm also just going
to shut down Tweak, because
again, it's quite a heavy project. So we
have the devcontainer.json here, we define the extensions
we want to have: YAML, Golang, Kubernetes tools. And
the Dockerfile here installs not just Go and Node.js,
we also install here
Docker in Docker, so we have a nested Docker daemon,
and inside we are using a tool called k3d
for provisioning
k3s clusters. So we can see
that I have a cluster here that is running, and the
idea is that k3s is a very minimal distribution
of Kubernetes, so it can run very fast and it's
also like a single process. So it's
a very awesome project.
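Assuming the tool is indeed k3d (my reading of the audio), creating and tearing down such a cluster, together with a local registry, could look roughly like this; the cluster and registry names are placeholders.

```bash
# Hypothetical sketch: create a lightweight k3s cluster inside the nested Docker daemon,
# together with a local image registry that Tilt can push to.
k3d cluster create dev --registry-create dev-registry

# Verify the node is up (k3s runs the whole control plane as a single process).
kubectl get nodes

# Tearing it down and recreating it takes well under a minute.
k3d cluster delete dev
```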
Now I'm going to run Tilt as usual.
And if you look at the contribution guide here,
basically when you want to build, they tell
you to docker build the project, edit the
deployment YAML file, set the environment variable to
the Prometheus server, create a namespace, and apply.
The good thing is that if
we are using these tools, we don't need to do that, because everything happens
automatically. Tilt is also integrated with Kubernetes,
and in this case we actually have a registry;
you can see here that we have a server and a Kubecost
registry here. So every time we make a change to the Golang
code, Tilt is going to rebuild the project, push the image
and replace it in the Kubernetes deployment. So that's pretty amazing.
Let's see that our project works here.
So this project also has a UI, and
we see that we have the data here:
we can see it from today,
we can see it by pod for example,
so the different Kubernetes pods that are running,
and we can see that the API is running.
And basically, in the Tiltfile definition here we
define how we build the image and which Kubernetes YAML we
are using. I also define the resource for the UI.
For the UI we still have auto reloading,
and the UI itself is not running in Kubernetes in this case,
it's a local resource.
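A sketch of that kind of Tiltfile, with placeholder image names, manifest paths, and UI command (all assumptions, not Kubecost's real setup):

```bash
# Hypothetical Tiltfile: build the server image, deploy it to the k3s cluster,
# and run the UI outside Kubernetes as a local resource with its own reload loop.
cat > Tiltfile <<'EOF'
# Rebuild and push the image whenever the Go sources change.
docker_build('kubecost/cost-model', './cost-model')

# Apply the Kubernetes manifests; Tilt swaps in the freshly built image.
k8s_yaml('kubernetes/deployment.yaml')

# The UI runs locally (not in the cluster), served by its own dev server.
local_resource('ui', serve_cmd='cd ui && npm start')
EOF

tilt up
```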
Just to showcase how fast k3s
is, I'm going to delete the cluster.
Okay. And let's run it again.
And you'll notice that I have a running
Kubernetes server in
less than 20 seconds.
So that's pretty amazing.
You can see, get nodes works,
so that's really amazing.
If you use other Kubernetes distributions
like minikube or MicroK8s, or even kind,
you will see that it usually takes some time. The good thing
about k3s, together with k3d, is that it's
very fast, we also have a dedicated registry, and the cluster is stable.
Also, k3s has a special integration for Helm, so we can
install Helm charts declaratively; that's what's actually happening with Prometheus,
we have a Prometheus definition file there in the repository.
And we have Tilt that facilitates building, publishing and running. If
we put it in terms of what's happening in this example:
we have our machine with our Docker host; inside, we have a dev container;
inside, we have our IDE.
In the dev container we also have a Docker in Docker daemon that has
a registry and a k3s node, which runs
containerd, which runs our application. Or, if we try
to put it more visually, it's something like that.
So no more demos, and thank you for your patience,
I hope you enjoyed it. I'll summarize.
We use containers as development environments.
The cool thing about it is that the development environment configuration
is also source controlled, the developer machines
stay clean, it can scale well to multiple environments without conflicts, as you
saw in this presentation, and it can run locally or
remotely. Our setup in
Livecycle is actually composed of lots of microservices
and a front end with hot module reloading.
We have our own Kubernetes custom resources and controllers,
a GraphQL engine, a full blown CI system, stuff related
to SSL certificates and dynamic DNS,
and a lot of CLI tools for code generation. At the
end, the time to tear down and build the whole cluster and the
dependencies locally for development is less than 15 minutes.
The time to build, run and test code changes is like 10 seconds.
The time to onboard a new developer, including
provisioning a host in AWS, is less than 3
hours. Because we are working remotely, you can work either remotely
or locally on your Docker. We don't have "works on my
machine" occurrences, it's very easy to introduce new tools,
we don't put strain on our developer machines, and our
team can work with both M1 and Intel Macs. In
the future we hope to optimize it more: to have
shared build caches and snapshots, maybe use a cloud provider
that will provide us the best dev machines.
There are some drawbacks, however. The initial setup can take some time
to get working; we need to codify everything,
and we use many tools, some of them bleeding edge.
Basically, using dev containers actually makes you feel like, yeah,
I'll add additional tools, because it's easy, because there's no installation,
but we need to be careful with that. There's additional code
to manage, obviously the code of the Dockerfile and the environment
definition. These are not standardized yet, so in our example
we are using the
definitions defined by VS Code, so naturally we are pretty coupled
to VS Code. There are some performance issues, and
there can be security challenges between the development and production concepts,
especially if you use encrypted secrets.
There are alternatives to VS Code, but I haven't tried
them. It can be possible to use terminal based code editors and
work in a remote container. There's Gitpod.io and Theia that
have similar features with a gitpod.yml; I haven't played with it a lot,
I played with it in the past but not on
complex environments. JetBrains has
a solution for a remote
environment using JetBrains Projector. I haven't tried it,
but supposedly you work on a remote
IDE and it's projected to you, or something like that.
And we can run a local IDE with Docker mounts,
but I don't recommend that much.
I haven't shown an example of serverless, but it should be possible.
If you can run it locally, you can probably run it in a dev container,
and the same rules apply in regard to mocking the cloud.
So use cloud mocking frameworks or wire-compatible
solutions like MinIO, and if necessary you can
maybe throw an infrastructure as code tool into the mix to do
dynamic provisioning. Native mobile is
a different story; on this I'm not that optimistic.
It might be possible to stream the application,
but mobile emulators are heavy, the
container ecosystem is optimized for Linux, and the
IDEs are very tailored for mobile development.
It might be easier with cross platform frameworks like React
Native or Flutter. I will say, however, that this problem
is really difficult; I remember having epic battles with my
IDE and the tools when I worked on Android or iOS development.
So I hope it will be better in the future.
And the nice thing about it is that we're seeing a
trend of putting more stuff in the repository.
We see that in the last decade we added more stuff:
it's not just the code, we have our design system there, the OpenAPI
specification, documentation, infrastructure as code, secrets,
notebooks, and the workflows are based on
the repository, like PR workflows.
So it's really nice, and I think
that in the future it will be more so. The idea is that every
repository will be self contained: all the code, tools, knowledge and
definitions are in the repository, Git acts as a single source of truth,
code is more accessible, the barrier of entry is lowered,
the applications are portable, which is nice, and we
can control the developer experience, which is very empowering:
if I own a repository, I can also control
how to create an amazing developer experience for
developers when they start using my project.
And I believe that these trends will
create an emerging ecosystem of tools. We already
see it with remote IDEs and
PR environment solutions, and even tools like
Livecycle that take the repository and
create a live version for other team members to
collaborate on. So I'm very
excited about it.
I've shown tools and patterns in
this demo, so here is a patterns and
cheat sheet table that you can use,
but everything is going to be on my repository and I'm going
to post everything on my Twitter account as well.
So thank you very much, it's been a pleasure.