Transcript
Hello and welcome to my talk on environments as a service.
I'm really happy to have you here, and over the next 30 to 40 minutes I hope to give you both a theoretical and a practical introduction to environments as a service and ephemeral environments.
Environments as a service can definitely help increase productivity and improve the developer experience, solving current issues in environment management.
Environments as a service is about empowering developers
to create production like environments to be used
for both development and staging purposes.
And the purpose is actually to allow developers to
concentrate on direct,
productive activities, such as coding or
application architecture, or to invest more time
in automated tests.
So let's dive in, shall we? In this
talk, we'll discuss the current challenges engineering teams face with the environments they're using. Then we'll take a look at possible solutions and the benefits they bring to the environments equation, we'll see how all of this actually works inside Bunnyshell, and then we'll
wrap it up. So, the challenges faced by engineering teams are different for each role. Software engineers face constraints which limit their possibilities or consume time in a recurring manner, like working with simulated dependencies. This means uncovering integration
issues pretty late in the process, which in turn
generates significant rework: a ticket needs to be raised, the whole context needs to be documented, and only after that does the entire process basically rewind. The ticket reaches the developer, a fix is produced, the test is redone, but in the meantime,
some time has passed, the developer has lost focus
on this task, and so on and so forth. I'm sure you're familiar
with what I'm talking about. Another issue is that
limited laptop resources mean juggling ad hoc, partial environments. They do fit on the local machine, but they deliver subpar performance. Sometimes it's
outright frustrating, to say the least,
and this is a real turn off. Unfortunately,
while you're in the zone, in the focus area of the day, the work of keeping your environment running often goes unnoticed. But it's a lot of effort, and that time is spent away from a developer's main activities, the direct, productive activities I referred to earlier. QA engineers face other challenges,
which in turn create bottlenecks and also consume time.
It all boils down to time, basically,
right? It's the most precious resource
that we have. So for QA,
having a small number of environments, and often that small number is actually one, or in many teams less than one because multiple QA engineers share a single environment, greatly limits their exploratory testing in terms of both time and creativity. It adds unnecessary stress and obviously rushes the process of validating or actually testing
changes. And I personally know more than
a few QA engineers who basically shifted their working schedule only so they could have an environment to work on uninterrupted, during off hours for developers or for other QA engineers.
And that's really a shame because we
really have the tools to fix this today.
Another issue is that testing a
feature together with other related but actually
separate features creates additional complexity.
And while trying to isolate a scenario
or maybe reverse engineer something weird
that happened, you definitely don't need this additional complexity
in your process. And unfortunately this
is the status quo. It's something that has
become the status quo over the years. Actually,
for far too long this status quo was perceived as
the cost of doing business. But it really doesn't
need to be like this. We now have all the
tools available to make things better. We have containerization,
we have container orchestration that really works
at scale. And fortunately we see that environments as a service is starting to become a thing. Moving towards the solution, self service is really the only way forward. Decoupling the dependency on DevOps engineers for pre-production systems, and allowing them to focus on production systems, is truly a must. And I personally
don't know of any company who wouldn't love to achieve
this status. So on
demand environments and preview environments,
they unlock the ability to test features in isolation
and greatly enhance the collaboration between stakeholders,
the product team and the engineering team, and with other teams from the company as well. They can also support trainings and many other scenarios, but I'm not going to get into those at this time.
Another very important aspect is
remote development. This allows developers to
start coding on a project within minutes. Obviously, this is once an environment configuration has been defined, but after that it can be replicated and started within minutes. The code will
basically run in a Kubernetes container and the environment will
reflect changes in real time. With debugging
capabilities, the development experience does
not change at all. You can use your favorite IDE and the same debugging tools you're used to, so everything should already be familiar. And then there are costs: any viable solution must, however, take costs into consideration, and I'm really sure I don't need to go deeper on this topic. Before seeing the benefits,
I would also like to graphically
present what shortening the feedback loop actually
means. Instead of merging and then deploying the code
into an integration branch,
it means you have real time feedback while developing.
This is what it's all about when it comes to development
environments, and real time feedback also
has a performance component attached to it. So the
performance of the cloud machines is far superior
to even the best of local computing power.
And we've felt that firsthand here at Bunnyshell.
And this is beyond any doubt. Zooming in a bit on the process of development, let's compare what traditional development with testing against real services looks like for most companies today. By services I
mean cloud native services or other kinds
of external third party dependencies.
I'm sure that you get it. And this is what it could look like today, or tomorrow, or pretty soon; either way, it really depends on you. You can have instant feedback
instead of needing to continuously commit and push,
then wait for a build and deploy. And if it doesn't work the first time, which is usually the case, it's a rinse and repeat procedure. I know you feel like life is passing you by while you wait for a pipeline to finish with your fingers crossed. And that's what we are referring to as idle time here at Bunnyshell: time that's left totally unused.
So, the benefits. Some of them we've already touched on until now, but mainly there is a productivity increase, which also brings an inherent quality increase, and far less context switching, which allows engineers to truly focus. Let me ask you a question: when was the last time you, as an engineer, got two or maybe three hours of uninterrupted work? And yeah, that's quite a challenge today, unfortunately. Moving forward, processes are streamlined as bottlenecks are
removed along the way. Releases can also flow better
as one single issue will not block multiple
changes going live. The quality of the
reviews grows exponentially when the reviewer can also test
the code. And I don't refer here just to UIs. Interacting with the application allows for a faster, deeper understanding of how a system works and how the user interacts with it. Also, stakeholder feedback can come in much earlier,
shortening the overall feedback loop.
Self service obviously helps DevOps engineers, who are already swamped with work. It allows them to focus on what is most important, and by
this I mean the production systems.
Last but definitely not least, onboarding becomes bliss for new people joining a team. It really makes a huge difference to learn an application while it's up and running, as opposed to figuring things out while still trying to fix your local setup at the same time, isn't it?
So, enough talk, let's see some action too.
It's time for the demo. So to
demonstrate the value of environment as a service,
I will next deploy an application starting from
a Docker compose file, which many teams already
have today. But you can definitely use Kubernetes manifests or Helm charts if you have them, and you can obviously use combinations of them; you're not restricted to a single type of component within an environment. Afterwards I will start a development session to demonstrate how it actually works from a user's perspective, and ultimately I will perform some
debugging while in the remote development setup.
Due to time constraints, we'll be working with a simple application, simple scenarios,
but nevertheless real world
scenarios. So having more components
in an environment or having many more lines of
code doesn't really change the principles demonstrated here.
Enough talk, let's dive into the Bunnyshell platform. So I
switched over to Bunnyshell and
I'm going to create an environment, and I'm going to name it Conf42 demo. I can create it from Docker Compose, Helm charts, or Kubernetes manifests. As I already said, you can also use Terraform modules, but we definitely won't go into that in this demo. And you can use existing Bunnyshell templates, but those are not in the scope of this demo either.
So I'm going to select a GitHub account; this is an account that I integrated earlier, and I'm going to use our demo books application on the master branch. This is a GitHub repository containing a Docker Compose file and a pretty simple setup composed of a backend, a Postgres database, and a React frontend application.
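Just to give a rough idea of the starting point, a Compose file for a setup like this might look something like the sketch below. The service names, ports, and paths are illustrative assumptions, not the demo repository's actual file.

```yaml
# Hypothetical docker-compose.yml for a backend + Postgres + React frontend setup.
# Names, ports, and paths are assumptions for illustration only.
version: "3.8"
services:
  backend:
    build: ./backend                     # Node.js API, built from the monorepo folder
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://books:books@db:5432/books
    depends_on:
      - db
  db:
    image: postgres:14                   # standard Postgres image
    environment:
      POSTGRES_USER: books
      POSTGRES_PASSWORD: books
      POSTGRES_DB: books
    volumes:
      - db-data:/var/lib/postgresql/data # persistent volume for the database
  frontend:
    build:
      context: ./frontend
      args:
        REACT_APP_API_URL: http://localhost:3000   # backend URL passed as a build argument
    ports:
      - "8080:80"
volumes:
  db-data:
```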
The backend is written in Node, and after parsing the repository Bunnyshell has generated a YAML file for me, which is basically the definition of the environment in Bunnyshell.
We'll look into it a bit later
down the line. In order to save some time, I'm going to deploy the environment
first. So this is the environment details
screen which I'm on. I'm going to click deploy the environment and select the Kubernetes cluster, which is the dev cluster for me. It's a cluster that we have connected to our organization, but you can also use the Bunnyshell-managed cluster to try things out or see how it goes. I'm going to create a custom URL, a personalized URL for this environment, named conf42, and I'm going to hit deploy.
What happens now is that the environment has been queued, and as soon as a worker frees up it will pick up the environment and perform the build and the deployment. And it has already started; the environment is already deploying, because I deployed it earlier and I had the builds cached. This is something that definitely helps you save time in case things haven't changed in an environment, or across multiple environments, when multiple colleagues deploy the same, I don't know, master branch for the frontend, for example. This is a pipeline that was
generated for this environment,
and depending on what your environment looks like, your pipeline might look different.
As I said, going back to the configuration, let's see what it's all about. So I have here a backend application component, which points to this repository on the master branch and is located in the backend folder; it's a monorepo. Under the Docker Compose section you have, well, Docker Compose syntax, which you're probably already familiar with. We also have a host exposed here, which has a backend prefix, and then the environment-based domain is interpolated; you'll see what the environment looks like in a bit. There's also a database, which as I already said is Postgres. It uses the standard Postgres image, exposes some ports internally, exposes no hosts externally, and it also has a persistent volume attached, which is defined at the bottom, very similar to Docker Compose. There's also a frontend application, which receives the backend URL as a build argument. So this is it.
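To make that walkthrough a bit more concrete, here is a rough, illustrative sketch of what such an environment definition can look like. The field names and values below are approximations based on what I've just described, not the authoritative Bunnyshell schema; in practice the platform generates this file for you from the Docker Compose file.

```yaml
# Illustrative sketch only: the real environment YAML is generated by Bunnyshell,
# and the exact schema may differ. Field names and values here are assumptions.
kind: Environment
name: conf42-demo
components:
  - kind: Application
    name: backend
    gitRepo: 'https://github.com/example-org/demo-books.git'   # hypothetical repository URL
    gitBranch: master
    gitApplicationPath: backend               # monorepo folder for this component
    dockerCompose:                            # plain Docker Compose syntax for the service
      build:
        context: ./backend
      ports:
        - '3000:3000'
    hosts:
      - hostname: 'backend-{{ env.base_domain }}'   # environment-based domain interpolation
        path: /
        servicePort: 3000
  - kind: Database
    name: db
    dockerCompose:
      image: postgres:14
      ports:
        - '5432:5432'                         # exposed internally only, no external host
      volumes:
        - 'db-data:/var/lib/postgresql/data'  # persistent volume, defined at the bottom
  - kind: Application
    name: frontend
    dockerCompose:
      build:
        context: ./frontend
        args:
          REACT_APP_API_URL: 'https://backend-{{ env.base_domain }}'   # backend URL as build argument
volumes:
  - name: db-data
```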
Let's wait for the pipeline to finish.
And it's still currently
deploying. I'm going to go a bit into the settings to also show you the availability rules I mentioned, part of the cost consciousness of the platform, of any environments-as-a-service platform. I can select here, I don't know, let's say I want to have the environment running from 09:00 a.m. to 06:00 p.m., Monday to Friday. I can have that; I'm actually going to save it, and the environment will follow these rules.
Obviously you can define environment variables, and you can also define application variables, which are basically environment variables at the application layer. We have some defined for the backend application, for example the database credentials and also the frontend URL for CORS.
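As a quick illustration of what such application-level variables might cover (the variable names below are hypothetical, not the demo's actual ones), expressed in plain Docker Compose terms they would correspond to something like this:

```yaml
# Hypothetical application-level variables for the backend component, shown in
# Docker Compose "environment" form; names and values are illustrative only.
services:
  backend:
    environment:
      POSTGRES_HOST: db
      POSTGRES_USER: books
      POSTGRES_PASSWORD: books                            # in practice, manage secrets outside plain YAML
      POSTGRES_DB: books
      FRONTEND_URL: https://frontend-conf42.example.com   # allowed origin for CORS
```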
So I'm going to
go a bit into the pipeline and we can see that the deployment
is also almost finished. It's finished
successfully. As a side note, under the component we have the Kubernetes resources, and I can click the deployment and it will show me the actual live container logs, the container output for my backend service. I can also see the Kubernetes events attached to this deployment, and the actual manifest which was deployed. So let's see what the application looks like.
There's a small issue with the frontend application; let's debug it and see what the cause is. So I'm going to go to the frontend deployment, and I can see that the application has actually started now, so it was just a bit lazy to start. That's fine. So I'm going to add a book, a Conf42 book; it called the API and stored it in the database.
Refresh the page just to prove that we
have persistence. Going back into Bunnyshell, I said that remote development is the next thing. So let's copy this command, and I'm going to switch over to my WebStorm, since this is a JavaScript application, but obviously you can use your own IDE, any IDE you prefer.
So I already have the demo books repository cloned. It's in the
playground demo books path and
I'm going to open a terminal. I can see that
I have here the backend application. I'm going
to go into the folder and run the command here. Obviously the bns command requires you to install our CLI and then authenticate with it; I have done this beforehand.
Remote development can be done in one of two ways,
with local files, which is the method I'm demonstrating
now, or without local files, which doesn't fit
into the scope of this demo. So what I
need to provide now is the local path, which is the current folder I'm in, and also the remote path, which is the path I know from the container: the container's workdir, the path inside the container in which the application is located. So what happens now is that Bunnyshell changes my pod definition so that my local files are synced into the container. I'm not going to go into too much detail, but on the surface I can tell you that we're using Mutagen to synchronize the files from your local machine into your container.
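Purely to illustrate the idea of that sync (Bunnyshell wires this up against the Kubernetes pod for you, so you never write it yourself), a standalone Mutagen project file pairing a local folder with a path inside a container could look roughly like this; the session name, container name, and paths are assumptions:

```yaml
# Illustrative Mutagen project file (mutagen.yml); not needed with Bunnyshell, which
# manages the sync for you. Names and paths below are assumptions.
sync:
  defaults:
    ignore:
      vcs: true                                         # don't sync the .git directory
  backend:
    alpha: "./backend"                                  # local folder (where you edit)
    beta: "docker://demo-backend/usr/src/app/backend"   # path inside the container
    mode: "two-way-resolved"                            # on conflicts, the local side wins
```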
The files are synchronized now and the pod definition
is changed and it has succeeded.
Now I'm actually in the container.
So what I can see now is the application
folder structure within the container. Now I
need to start my application, and I'm going to start it by running npm run start:dev; I can obviously see that script in my package.json file here. So the application has started,
the server is running. Let's switch back to the browser
and let's open the application,
the back end application directly. So this is the
welcome route; it displays a very simple welcome message, and I would like to modify something on this route. I already know how to perform a small change, so I'm going to go here and add 'Conf42', and once I save the file, you can see here in the terminal that the application was restarted because the files were re-synchronized.
And sorry for that, I forgot to switch to the IDE. What I did was open the server.js file and modify the welcome message here. I'm going to hit save again, and you can see that the application was actually reloaded due to the file sync. I'm going to go back to the browser now, and if I hit refresh here I can see that 'Conf42' appeared, so the sync is actually working.
Any change I make now in my IDE, with my local files, will get almost instantly synced into the container, and I will be able to see the changes live. So the last bit of the demo is the remote debugging part, and for this I'm going to switch again to the IDE.
There's one more thing I need to do. In order for the debugger to be able to attach to the running process, I need to run the same command, but with a port forward this time. The debugger is running on port 9229, and this essentially tells Bunnyshell that I need to forward my local port 9229 into the container, to the same port number. So Bunnyshell is
again asking me for my paths for the file sync. It changes the pod definition and it will quickly check whether the files need to be synced. And this is it. I'm going to run npm run start:dev again, which, by the way, includes the inspect flag, which ultimately allows me to attach to the session from my local machine.
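For context, the inspect flag enables Node.js's built-in inspector, which listens on port 9229 by default. Outside of Bunnyshell, the same idea expressed as a plain Docker Compose fragment might look roughly like this; the service name and start command are assumptions about the demo project, and in the demo the Bunnyshell port forward plays the role of the ports mapping:

```yaml
# Illustrative only: run the backend with the Node.js inspector enabled and publish
# port 9229 so a local IDE can attach. The command is an assumed dev script.
services:
  backend:
    command: npx nodemon --inspect=0.0.0.0:9229 server.js   # restart on changes, expose the inspector
    ports:
      - "9229:9229"                                         # Node.js debug port for the IDE to attach to
```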
I also have a breakpoint added here. What I'm going to do is go into the browser and hit refresh. Okay, nothing happened, because I forgot to attach the debugger; I need to run this. I have made a minor configuration here: I mapped the local backend folder to the path in the container, /usr/src/app/backend, which is basically the path I've synchronized the files into. I'll show you in the documentation how I did that; it's a three step process, very easy.
And now I'm going to refresh again. I refreshed the backend page in the browser, causing a request to be made, and I can see that I have the call stack here. It's the standard debugging procedure: I have access to variables, I'm able to execute functions, I can step over or step into functions and whatnot, and I can also let the request finish. So this is remote development in Bunnyshell, with debugging. I'm going to go quickly to the documentation and also show
you how the port forwarding was set up. We have
a bunch of examples here for Node.js. I started the remote development session with port 9229 forwarded; you already saw that. Then, from the run configurations in WebStorm, I added an 'Attach to Node.js' configuration, and this is what I did: I mapped the backend folder onto the container's folder. So this was it, and then I hit debug.
So this was it. I hope you found the demo clarifying and that things are much clearer for you now, and I hope you found it easy to follow as well. I want to say that the environments-as-a-service concept can truly revolutionize how engineering teams work and how they interact with each other, towards delivering a better product and delivering it faster, and, very importantly, being happier while doing so as engineers. Just to recap,
self service is crucial for moving forward with any kind of change we want to bring to how engineering teams work. And with environments as a service, you can have real production-like environments for development, for staging, and for UAT, by which I mean getting feedback from stakeholders in real time, and also for end-to-end testing.
Remember, this is not a replacement for
your current CI, but it definitely can integrate with it and
help you implement end to end testing in
a much more sane way, with completely isolated environments
which only live as long as they're needed: for the duration of that pipeline, or for part of that pipeline's duration, to be more precise.
And the concept is also cost friendly.
So this was it. I really hope you've enjoyed it
as much as I did, and I'd be really happy to connect
with you and have you share your thoughts with me
on the environments-as-a-service topic, but not only that: anything engineering related. Have a
great day ahead. Thank you very much for attending and
all the best.