Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello, everyone.
We all know that developers hate bugs, right?
But there is something developers hate more than bugs, and that is slow
development cycles that make it difficult to quickly test out new features,
or to debug issues whenever they arise. That can become a huge headache for
a lot of developers, which is why today we're going to discuss how we can all
say goodbye to this slow feedback loop and improve the overall developer experience.
My name is Anita Ihuman, and I am a developer advocate at MetalBear, where
we're working on an awesome open source project called mirrord that aims to
improve the entire developer experience.
I like to call myself an open source fangirl and that is because most of
my career started off with open source projects and in open source communities.
I love advocating for inclusivity within tech communities, and thanks
to open source communities like NumFOCUS, SustainOSS, and the SustainOSS
DEI working group, I get to do some of this work.
I am also on the board of directors for CHAOSS, where we develop open
source community health analytics.
When I'm not doing all of these, I am probably organizing local
community meetups with KCD Nigeria and CNCF Abuja.
So now that we have that out of the way, let us dig in.
During this session, we're going to first look at what a development
workflow looks like in conventional development environments.
Then we're going to look at the challenges with some of these approaches,
and at how we can rethink them by moving to a remote-to-local development approach.
We're going to look at how mirrord saves the day, and then finally we'll
look at a quick demo of how mirrord works in real time.
So let's dive in. To begin with, I would like to discuss something that
we're all too familiar with, and that is the development workflow.
Ideally, as developers, we want a smooth, swift process where we just have
to write our code, test it out, see that it works fine, push it to GitHub,
and move on with our lives.
But that is not always the case, because when we're dealing with
cloud-based applications, there are extra steps to take into consideration.
After writing your code, you have to make sure the application builds and
works perfectly.
Then you move on to containerize it. After that, you run it in a cluster
and confirm it works fine there.
Then you make sure it plays well with every other component involved in
the application, and finally you deploy.
Even at that point, you also have to make sure that whatever changes you
implemented do not break the application in production.
This entire process can take up a huge amount of time, especially since
you now have to spend so much of it in the inner development loop, which
has become longer than developers actually want it to be.
And after that is done, you also have to wait and make sure that the outer
development loop stays closely aligned with the production environment,
and that nothing goes wrong at that particular point in time.
And I know many people might be saying: it's not all that bad.
A few minutes of waiting in that push-and-wait cycle, I can use to scroll
through TikTok, pass the time, and then come back to it.
It doesn't necessarily affect productivity or delivery, right?
But when we're dealing with microservice applications that involve tens to
hundreds of services, it gets trickier, because now you have to think about
the additional development loops involved, the interacting services, and
the dependencies between those services.
You also have to think about the resources and their complexities, coupled
with multiple developers individually working on the services.
There's so much at play within a microservice environment that things get
out of control, and suddenly you're not just writing your code anymore.
You have to repeat this process over and over again across these services
until there are no more errors.
So imagine how much time, how much wasted time, gets involved at the end
of the day.
It turns this entire process, which is supposed to be short and swift, into
a complex maze that developers now have to deal with, introducing
challenges like lengthy rebuild and deployment processes, where developers
face long feedback loops that eventually result in decreased productivity.
You also have to handle suboptimal testing conditions.
I know many people are already saying: I can just use mocks, it's not that
much of a problem.
But the thing with mocks is that they give you an implicit assumption of
what the application would look like in a production state, which means
they're not giving you exactly what the application would look like in
production.
At that point, there are likely going to be issues in terms of
compatibility, performance, security vulnerabilities, and many other
problems that will affect the application once it is put into production.
Developers also become increasingly dependent on DevOps teams, because in a
typical traditional dev cycle there is a centralized staging environment
where everyone deploys their untested code.
And because it is a centralized space, whenever an issue arises there, the
DevOps teams have to take charge and figure out how to fix it, which takes
up additional time.
And that is a whole new ballgame on its own.
Now, many people are already saying there are local development
environments that can solve some of these problems.
Yes, there are.
There are several solutions that organizations are opting for today that
address these issues.
The good thing with local development environments is that you enjoy their
speed.
We like the iteration process: you don't have to wait, because the moment
you make your changes you can see what they look like locally and address
any issues from that point.
You also get access to all of the local debugging tools your developers
are familiar with, so there's not much struggle with onboarding onto new
tools.
Smooth and fine.
However, like every other process or tool, there are added challenges.
Obviously there are cons, and some of the challenges developers now have to
deal with in local development environments are these.
First, you have to handle limited resources compared to the cloud
environment.
While it is very easy to get started with, a local setup doesn't give you
the entirety of what the application would look like in production, or the
resources it would need there, so there's a chance you won't see exactly
what the production state of the application would look like.
Then you also have to deal with high maintenance, where your developers
have to constantly manage and update API scripts and mock data to ensure
that everyone is on the same page whenever they're working on a particular
service in the application.
They also have to deal with system compatibility.
The thing with local development environments is that they always introduce
the problem of "it works on my system, but it doesn't work on Mr. A's
system or Mr. B's system."
That is because some of the libraries and dependencies in use are often
only compatible with specific processor architectures and not with others.
This often leads to runtime errors, slowdowns in application performance,
and many other issues along the way.
And yes, I know many other people are probably saying that's not a big deal
either: we have remote development environments, right?
Yes, we do.
We have remote development tools that are doing an awesome job, and many
organizations have opted for this kind of dev environment as well.
It also comes with many advantages.
Your team is going to enjoy less workload on their systems, because every
single thing in a remote development environment is done remotely.
You don't have to deal with the services, the databases, or the clusters,
because it all lives remotely, and that makes things a lot easier for your
system.
Developers can also handle maintenance issues by moving the core of the
workflow logic into continuous integration and continuous delivery
pipelines, so everyone can just push their code and a set of procedures
completes the rest.
So you don't have to deal with differences in environments and all of that.
Great, works perfectly.
But like the local development environment, it also has its cons, and some
of these include the long feedback loop.
When you're working with a remote development environment, whenever you
make a change you have to containerize the application, push to a registry,
and deploy to the cluster before seeing the effects of the change.
That introduces additional time into the development process as a whole, so
you now have long feedback loops, which eventually affect the engineering
team's efforts.
You also have to deal with a frustrating debugging experience, because
imagine debugging an application where you have to go through all of these
steps before you see the outcome of whatever change you want to make.
That is additional time spent on the application that was never planned
for, making it a lot more stressful.
And then you also have to deal with high cloud costs from setting up these
environments, because you have to set up a cloud development environment
for each developer.
Depending on how many developers exist within your organization, you have
to take an environment for each developer into consideration and cater for
all of that, which increases the cloud costs.
At the end of the day you're paying so much and spending so much time,
which are exactly the issues we were trying to avoid by adopting this in
the first place.
So at the end of the day, your developers actually need an environment that
does not require so much from them or take up a huge chunk of their time,
and that can closely mirror the production environment, so that you're
spending less time, saving more money, and getting the best from your
developers.
They also give you their best, because they're not stressed by whatever dev
environment you choose, right?
That is where remote-to-local development comes in as a golden path.
I often call this "remocal" because it's kind of a merge of remote and
local development.
The magic that happens in a remocal development environment is that it is a
hybrid strategy: you take the best parts of the local environment and merge
them with the best parts of the remote development environment.
The benefit is that you get to run the particular microservice you actually
want to work on locally, while everything else runs remotely.
You don't have to set everything up locally, but you're also not limited to
your computer's resources.
Basically, everything else your application needs is accessible remotely:
your services, your databases, all of the dependencies, literally
everything is available remotely.
It's just the specific microservice you're working on that you access on
your local computer.
You also don't have to deal with the downsides of an all-remote or an
all-local workflow, like we looked at earlier.
Other benefits your developers get to enjoy in a remocal environment
include consistency: you can code, test, and debug in an environment that
is closely aligned with the production state.
So you don't have to deal with "it looks good here, but when it's pushed to
production it breaks something or doesn't work like it's supposed to."
Your developers also get the benefit of ease of use, because they have
access to their favorite IDEs for tasks like debugging, authoring code, or
running unit tests, which makes it a lot more approachable and easy for
them to get started.
They also get access to a fast feedback loop, because unlike the remote
setup, where you have to wait for the CI/CD pipeline to complete, do the
containerization, check things in the cluster, and so on, you don't have to
do that here: you're getting the state of the application in the cloud
while actually running everything locally.
And that makes the whole process a lot easier.
You'll also end up saving a lot of money, because your developers can use a
shared environment instead of individual environments for each developer,
so you pay less to your cloud provider at the end of the day instead of
spending a whole lot of money on cloud dev environments.
You're getting the best of both worlds, which is an added perk, and which
is why a tool like mirrord is, or should be, your go-to and saves the day.
So if you're wondering how exactly, or what exactly, mirrord is, don't
worry, I'm going to answer all of that.
mirrord is simply an open source project that makes it possible, and
actually very easy, for developers to debug and test applications on
Kubernetes.
It comes as a CLI tool and also as an IDE plugin.
When using mirrord, developers can run local processes in the context of
their cloud environment.
With mirrord, your developers get access to the cluster's services as if
they were running locally, and traffic gets rerouted to your local process.
At the end of the day, it becomes very easy for you to test your code
against a cloud environment without going through the drama of
containerization or CI/CD deployment, and you also don't have to worry
about disrupting the environment by deploying untested code at any point
in time.
How mirrord works is very simple.
It comes with two main components: the mirrord layer, which exists in the
memory of your local process, and the mirrord agent, which runs as a pod in
your cloud environment.
When you initiate mirrord, it starts the mirrord agent, which operates
within the same network namespace as the particular pod you're targeting in
the remote environment.
This agent has access to literally everything your application would need
in the cloud environment: the network, the file system, databases, and all
of that.
So on your local machine you don't have to deal with the heavy lifting of
handling all of this, because the mirrord agent gets you access to it.
The mirrord layer, on the other hand, integrates into your local process
and intercepts and redirects its low-level functions to the mirrord agent.
This allows it to interact with all of those resources just as if your
application were running in a live cluster.
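That intercept-and-reroute idea can be illustrated, very loosely, in pure Python. The sketch below wraps the built-in `open()` so reads of a pretend "remote" path are served from an in-memory table. mirrord itself does this at the libc level with an injected layer talking to a real agent pod, so treat the names and the mechanism here purely as an analogy, not as mirrord's implementation:

```python
import builtins
import io

# Stand-in for files that only exist in the cluster (hypothetical content).
REMOTE_FILES = {"/remote/config.txt": "weather_api_key=abc123\n"}

real_open = builtins.open

def layered_open(path, mode="r", *args, **kwargs):
    # "Layer": intercept the low-level call and reroute reads of remote
    # paths to the "agent" (here, just a dict lookup).
    if path in REMOTE_FILES and "r" in mode and "b" not in mode:
        return io.StringIO(REMOTE_FILES[path])
    return real_open(path, mode, *args, **kwargs)

builtins.open = layered_open
try:
    # Application code is unchanged: it calls open() exactly as usual.
    data = open("/remote/config.txt").read()
finally:
    builtins.open = real_open  # restore, like the agent cleaning up after itself

print(data)
```

The point of the analogy is that the application code itself never changes; the interception layer decides which calls are served locally and which are served from the remote side.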
So at the end of the day, you may wonder whether you're in the cloud or
still testing your application locally.
Don't worry, we'll see it once I get started.
But just know that at the end of the day, you actually get to check out
your code with real data, working against an actual production-like
environment.
Some other benefits you get with mirrord: it's not just about the name, it
actually gives you a mirrored state of your cloud environment.
When you're using mirrord, it's like a bridge between your local and your
cloud environments, so you can configure exactly what functionality you
want to happen remotely and what you want to happen locally.
The sweet thing about using mirrord is that you don't need root access to
use it.
It is very easy to get started with the CLI or the IDE plugin, and you
don't need root access on your local computer to do so.
mirrord is also not invasive to your remote cluster: the mirrord agent only
exists while the whole process is going on, and once you're done with the
debugging session and end it, the mirrord agent ends itself too.
You're going to see that as well.
It takes only a couple of seconds to get started with mirrord, so it
doesn't waste a lot of time, and you can run multiple processes all at
once, each connected to a different remote pod, so you're not limited to
just one task at a time.
mirrord is also very versatile: it doesn't care which setup your cluster
has.
Whether you're using a service mesh or a VPN, whatever setup it is, mirrord
doesn't take any of that into consideration.
It just works.
And so, at the end of the day, let's get started.
All right.
For this quick demo, I'm going to be using a very simple Python application
to show you how it works.
So let's get into it.
This is what my Python app looks like.
Basically, what it does is give you the weather update for whatever city
you enter, by calling an API that provides that information.
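The demo's actual app.py isn't shown in the transcript, but a minimal stand-in along these lines captures the shape of it: a page with a heading and a form, and a handler that looks the city up. The weather lookup is stubbed here instead of calling a real API, so every name in this sketch is hypothetical:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

def render_page(city=None, report=None):
    # Build the HTML: a heading, the input form, and optionally a report.
    body = "<h1>Today's weather app</h1>"
    body += '<form><input name="city"><button>What is the weather?</button></form>'
    if city and report:
        body += f"<p>Weather in {city}: {report}</p>"
    return body

class WeatherHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        city = parse_qs(urlparse(self.path).query).get("city", [None])[0]
        # The real app would call an external weather API here; we stub it
        # so the sketch stays self-contained.
        report = f"conditions for {city}" if city else None
        html = render_page(city, report).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(html)

# To serve locally: HTTPServer(("", 8000), WeatherHandler).serve_forever()
```

Changing the heading or the prompt text, as in the demo below, would be a one-line edit inside `render_page`.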
Now, I don't want it to say this header anymore. I want to change the
heading, and probably the button text too.
So let's go into the actual code base.
Instead of "Today's weather app", I want the header to say "Anita's
weather application", and I want the prompt to say "Tell me something,
what is the weather?"
And so that is in place.
And if you notice, I have my mirrord.json file here. What this file does is
hold the configuration I need to tell mirrord what I want it to do and how
I want it to do it.
I've already set the target that I want to use, which is the weather-app
deployment.
Imagine you're working on a microservice application where there are
multiple pods: you can specify a different target depending on what you're
working on, but I'm going to specify the weather-app deployment.
I also want it to steal the traffic from the remote cluster for me.
So: I want it to steal the traffic, and my mirrord.json file looks good.
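A mirrord.json along these lines would express that setup. The `target.path` and `feature.network.incoming` fields follow mirrord's published configuration schema, but the deployment name and namespace here are guesses at the ones used in the demo, so treat this as a sketch rather than the exact file on screen:

```json
{
  "target": {
    "path": "deployment/weather-app",
    "namespace": "default"
  },
  "feature": {
    "network": {
      "incoming": "steal"
    }
  }
}
```

With `incoming` set to `steal`, requests that would have reached the remote pod are handed to the local process instead; the default `mirror` mode only copies the traffic to it.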
My app.py file looks good too.
So I'm going to go ahead and first initiate mirrord down here at the
bottom, make sure that looks okay as well, and then hit debug.
This is going to take a few seconds, but you're going to see it running
shortly.
What you'll see is mirrord start to go through the whole process: it's
going to create the mirrord agent, which attaches itself to my remote
cluster, and then the mirrord layer, which is available here, will redirect
and reroute the traffic.
So let us see what that looks like.
This has started, and it's working fine, so let's see if anything changed.
I'm not going to reload the actual page.
I'll just open a new web browser and try to run this, and we can see that
"Anita's weather application" now exists here.
If you type in "tell me something", it gives you the weather update for
Texas, and if you type in a city in Nigeria, you also get the update there.
So we are getting the feedback of how the application would actually
perform if it were in production right now.
And if you try this URL, this domain name, on your end, you'll also notice
the changes that I have made: even though I have not pushed them, you can
see what the changes look like on your end as well.
Now, how do I know that mirrord is actually working?
I told you earlier that mirrord injects itself into your remote cluster;
that's the mirrord agent.
So let's see if the mirrord agent actually exists by doing kubectl get
pods.
Now we see that there is a mirrord agent running alongside the weather-app
deployment, so they're both running at the same time.
That tells you this is how mirrord would work even if you're running a very
large microservice application: you can indicate the particular
microservice you want to work on, the target, the particular pod you want
to specify, and mirrord will still do the same thing it's doing here.
And at the end of the day, let's say I'm actually done with this whole
process and done debugging my application, and now I want to stop.
If I stop the debugging process, you'll see that the mirrord agent
automatically terminates itself; it shows as completed.
And then, yep, now we're back to just the deployments that exist here
running.
So basically, that is how mirrord works.
You don't have to deal with all of the drama, and once you're done with
your changes, you can push your application and move forward from that
point in time.
mirrord takes your feedback loop from what it used to be, what we looked at
earlier, to this.
Once you code, you can test your application on staging using mirrord, so
you're seeing what your application will look like and making your changes.
Once you're sure everything looks good, you put up your pull request, and
then your CI/CD processes and tests all go on from that point.
You also don't have to panic at any point about the application breaking,
because you already know you tested against a state that is very closely
aligned with the production state of the application while using mirrord.
That's basically what I've been trying to explain all along.
You spend less time with mocks and simulations, you get an immediate
response from the cloud conditions of your application, and using mirrord,
there are low chances of your application failing at the later stages once
you're done with all of this.
That is all that we've been trying to say about mirrord, and now we're
done.
If you have any questions at this point, I'll be more than happy to walk
through them with you.
And if you want to find out more about mirrord, contribute to it, or use
it, you can check out mirrord.dev.
We have excellent documentation that makes it easy for you to jump on and
get started with mirrord.
If you have any additional questions, we have an amazing Discord community
with great contributors and developers on standby 24/7 to assist you with
any challenge at all that you have using mirrord, and you can also file any
issues that you come across while using mirrord.
That is all for my presentation.
If you want to reach out to me, you can find me at Anita Ihuman on all of
my social media platforms, or you can reach out via my email listed here.
So thank you so much, and I hope you enjoyed this.