Transcript
This transcript was autogenerated. To make changes, submit a PR.
Thank you to Conf42 for inviting us along today
for our session, Monolith to Microservices: Accelerate
Your Modernization with AWS App2Container and Amazon Q.
My name is Richard Lees and I lead our application modernization go-to-market
strategy in EMEA, and I'm delighted to be here today.
I'm also joined by my colleague Jorge Alvarez.
Jorge, would you like to introduce yourself please?
Yes, thanks so much Richard. I am delighted to be here today.
My name is Jorge Alvarez, and I am a senior specialist solutions architect
in migration and modernization. Yeah, Richard,
go ahead. Thanks Jorge. So our
agenda for today. Firstly, I'm going to quickly touch on
why customers modernize and why they modernize on AWS.
We'll then talk about monoliths and microservices and the
benefits and concerns of both. And then I'll
pass over to Jorge and we'll dive into today's scenario, and that's
split into two parts. The first is around accelerating your
modernization journey with AWS App2Container.
And then the second, eagerly anticipated part is refactoring
using GenAI with Amazon Q. So GenAI,
the buzzword on everyone's lips at the moment. And I know we're all super
keen to understand how we can use GenAI to accelerate our
application modernization efforts. And then finally,
at the end, we'll wrap up with some next steps and provide you
with some follow-ups, places you can go to learn more
about the topics we've touched on and discussed today.
So why do customers modernize?
Across the customers we work with in EMEA, we find
that customers typically modernize their applications for three
reasons. The first is speed and agility.
Speed matters in business, and in
the majority of cases I get involved in, customers
are focused on modernizing their strategically important
apps to improve their customer experience and get
to market faster. The second is security and
operational resilience.
Customers are particularly keen to embrace AWS
managed services, improve their risk posture
as a result, and ensure they are building and deploying
secure and resilient applications from the get-go,
to AWS standards. And then finally,
cost: modern applications are typically more
elastic, take advantage of the full benefits
of cloud, and can scale up and down, therefore providing
a more optimal cost base for your
applications. And when customers have got really clear
about why they want to modernize, they typically then
think about how they can scale their modernization across their organization.
And that's when we find customers leaning into the
modernization pathways that we've developed here at AWS.
The modernization pathways are designed to
give organizations a set of guardrails,
patterns and tools that they can use to go and scale modernization
across their organization. Now, we typically see
customers lean into four modernization pathways in particular,
time and time again. Those are move to cloud native,
move to managed, move to managed databases, and move
to open source. So what are those? So move
to cloud native. This requires the
highest level of time investment from you as a customer,
and often the highest level of investment to achieve. And it's
typically reserved for apps that are linked to large business goals
or challenges you're looking to overcome as an organization, and will involve
refactoring a significant amount of code to take
the most advantage of AWS native products and services.
Move to managed: so many
customers in their modernization journey will opt to move
a large proportion of their IT estate to managed services.
Move to containers in particular is a very popular modernization
pathway, and AWS offers the most secure,
reliable and scalable container environments,
with nearly 80% of containers in the cloud today running on
AWS. We then have move to managed databases.
So on AWS, we have 11+ fully managed,
purpose-built databases, including Amazon RDS and
DynamoDB, which means customers can choose
the best database for the job. And then
finally, move to open source: a particularly
common modernization pathway, tying
into the overall trend of customers looking to adopt open source
technologies, be that modernizing from
.NET Framework to .NET Core, or from Windows to Linux. Okay,
and today we will be focusing on two modernization
pathways. So as part of this
exercise, we will be moving to a cloud native
architecture and we'll be leveraging containers.
So, monoliths and microservices.
First, the monolith: a quick recap of what a monolith
is. A monolithic application can be thought
of as a single block whereby all components and
functionality are inside that single block, inside a
single code base. Those components are typically
coupled together and reliant upon one another, and changing
one typically has an impact on another.
Finally, this is the traditional way that applications were built
and written, and the monolith has
its advantages. Monolithic
approaches to application development are well understood by developers,
and when customers are starting to develop a new application,
it's often just simpler and faster to get started with
a monolithic architecture whilst you're testing new ideas
and seeking product-market fit. The second
advantage is that you have a single executable code base,
and that generally makes the deployment process quite straightforward,
especially if you only have a small engineering team focused on it.
And then finally testing is easier, so all
functionality is coupled together in a centralized unit,
and testing is typically faster than it would be in
a distributed application architecture.
But despite all of the benefits and the quick start that can
come from the monolith, the monolith has its limitations.
And when we start to think about your strategically important business
applications, the ones that change frequently and differentiate you in the
market, and you're
growing those applications and growing the team supporting them,
the monolith presents some inherent challenges that become more prevalent
as you scale. So firstly, those components
that are tightly coupled together are often interconnected
and reliant on one another inside the code base. So lots
of hard dependencies. As a result, when you're
updating a small piece of the monolithic architecture,
often the entire application has to be tested and redeployed,
and this can result in a high level of change effort
for what could be a relatively straightforward,
small functional update.
Redeploying the entire app each time
you go through a development lifecycle can also
slow you down, start to reduce your release
frequency, and really make it difficult to ship changes
to market faster. Reliability can
also become a concern: a bug in a particular module
of the application, a memory leak for example, can potentially
bring down the entire application.
Scalability: an
inherent limitation of the monolith is the fact that
individual components cannot scale independently,
thus inhibiting many of the
innate elastic benefits of running workloads on AWS.
And then finally, adopting new technologies can often
be difficult; in a more distributed architecture, there's no reason that
each part of that architecture can't, if it needs to,
adopt a different technology stack. Microservices,
and this is the scenario we're going through today. So, from monolith to microservices.
So microservices architectures can
solve a lot of the problems that we've just discussed.
So, in terms of what a microservices architecture is: whereas
a monolith is a large unit that does everything and contains
all functionality for the application, in a microservices
architecture that monolith is split into smaller services. Each service
does one specific thing and delivers one piece of
business functionality, and those smaller services
communicate via APIs.
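To make that boundary concrete, here is a minimal, hypothetical sketch in Python. The service names and interface are invented for illustration (they are not from the talk), and plain method calls stand in for the HTTP/gRPC API calls two separately deployed services would actually make:

```python
# Hypothetical sketch: two "services" separated by a narrow, API-like interface.
# In a real system each would run in its own container and the calls below
# would be network API calls; direct calls keep the example self-contained.

class InventoryService:
    """Owns one piece of business functionality: stock levels."""
    def __init__(self):
        self._stock = {"sku-1": 3}

    def reserve(self, sku: str) -> bool:
        """Reserve one unit if available; this is the service's public API."""
        if self._stock.get(sku, 0) > 0:
            self._stock[sku] -= 1
            return True
        return False


class OrderService:
    """Owns orders; depends on InventoryService only through its public API."""
    def __init__(self, inventory: InventoryService):
        self._inventory = inventory

    def place_order(self, sku: str) -> str:
        try:
            ok = self._inventory.reserve(sku)
        except Exception:
            # A failure in one service degrades, rather than crashes, the other.
            return "inventory unavailable - try again later"
        return "confirmed" if ok else "out of stock"


if __name__ == "__main__":
    orders = OrderService(InventoryService())
    print(orders.place_order("sku-1"))    # confirmed
    print(orders.place_order("sku-404"))  # out of stock
```

Because OrderService only touches InventoryService's public method, either side can be redeployed or scaled on its own, which is exactly the property discussed next.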
And this can result in a number of benefits. Firstly,
your less tightly coupled architecture means that each
service can be developed and deployed independently, without having
to deploy the entire app each time.
This can increase deployment velocity and also
give your engineering teams more autonomy within their particular swim lane.
Reliability can improve: services can be made
independent, and failures inside one service won't necessarily
propagate out into other areas of the application, assuming it's
well architected.
Scalability is more efficient:
you only have to scale the particular services that
your users are calling, rather than scaling the entire
monolithic application. Adopting new technologies
is easier: you are not bound by the choices made
by previous engineers in the wider application,
for example being forced to use an existing data
structure inside the monolith. And then finally, smaller
services with smaller code bases mean that it's easier for new
developers to get on board and ramp up to speed.
Okay, so with that, we're now going to head into today's
scenario. So our scenario today.
So today we'll be modernizing from a
monolithic architecture to a microservices architecture,
in an effort to get more business agility,
be more cost efficient, and create a more portable code
base. So, our scenario in detail.
Today we're looking at an application that was developed
years ago with zero proper documentation,
a common problem we find with many
of our customers. The team on the ground is
not comfortable with containerization and AWS
deployment best practices.
You want to kickstart your modernization journey as soon as
possible. But in addition to
solving for these challenges, there are also some
new business requirements, and you need to make incremental functionality
changes following the containerization.
So today we will be applying a two-pronged
approach. So step one will be containerization.
So we're going to quickly take this monolithic application and
get it into a container on AWS. And the reason we're
doing this is fourfold. First, to achieve some cost savings.
Second, to achieve some initial productivity benefits of
removing the need to manage and scale our own underlying infrastructure.
Thirdly, to improve resilience by running it on AWS cloud.
And then finally some agility benefits through moving
towards a more standardized deployment process.
Once we have the workload containerized and running on AWS,
we are then going to refactor it.
Ultimately, applications are easier to refactor once they're already
on AWS. But crucially, this is where we'll be diving
into Amazon Q, Amazon's GenAI
tooling, to see how we can use GenAI to accelerate
our application modernization efforts.
Okay. With that, I will now hand you over to my colleague
Jorge Alvarez, who will take us through our modernization
approach. Jorge, over to you. Okay,
thank you Richard. Thanks for the explanation. So as Richard mentioned
beforehand, there are different steps that you have to follow in
terms of modernizing your application on AWS. As he mentioned, we are going
to start with something related to how to move
or modernize your application in containers on AWS,
and later on we will use Amazon Q to help with the refactoring
and how your application can be converted. But I don't want to jump the gun
a little bit. So let's start with how we can use an AWS
product called AWS App2Container to
help us move towards modernization and
to move things from our EC2 instances
or from our on-premises solutions to the cloud.
But if we think about how we developers do things,
and I include myself, because I was a developer back in the day,
it is a very manual task. And when someone comes
to us and says, hey, I need to containerize my application,
we always think of four different steps.
The first one is discovery. So when we are going
to move an application from on premises, we need to know what
the impacts are. We need to know exactly where the third-party
libraries are, where the dependencies are, even the network dependencies,
or if there are other applications interacting with
that application. You never want to move an
application that has a downstream or upstream dependency
and have that application stop working,
right? So the first part would be to understand
where the boundaries of the application are.
The second part is to get ready, to
prepare what you are going to do. So you start thinking, okay,
I need to see where my source code is, I need to manage different
aspects, like making sure my source code doesn't have any security
constraints or any connection strings.
And if I am going to package my
solution, if I'm going to containerize my solution, it has to
be secure enough. The last part
before the launch is going to be building the solution.
So you create your Dockerfile, because you are going to a container,
you put all the information that you need there, and then you
take all of that and deploy it on an infrastructure.
In this case we are talking, for example, about our new cloud
provider, which could be AWS. It's
still a monolithic application, but you move it to containers to get some benefits,
right? If you think about all
the steps that I mentioned, it's a lot of work, and I
remember doing this in my previous life, and it
takes so long. But what are the challenges that we are normally
facing? The first challenge is about
technology; we are speaking about tech-stack challenges.
Your application might be running on a legacy platform.
Your application is on a server that people don't even remember
what is inside, or that server in
one corner of the data center where you don't even patch
the operating system anymore; it's just something that is there.
It's crucial for the company, but there is no
time to get it to the next level, and there is
of course no time to do all the steps that I mentioned beforehand.
Also, if you are part of a team, you have to
think, well, I have to deliver what my business is telling
me to deliver, but at the same time I have to decide
what to do with this application. Somehow
you always go in the direction of the new features;
somehow you want to help your company get to market.
So you don't have time, or,
if you are the manager of the team or the lead architect, you don't have staff
who can help you modernize that application
and move it to containers, because of all the steps
that need to be done. But what is the impact? Well, the impact
is on your operations. If you have an application that nobody is
taking care of, in terms of modernizing it or moving it forward or
doing something with it, your team has to keep the
lights on, has to keep it alive in KTLO mode.
And that is going to impact
your operations.
These are the challenges that App2Container can help
you with. But what exactly is App2Container?
Well, App2Container is a command line tool that supports
Java and .NET applications. Basically, it
works deployed on an EC2 instance or
on premises on your server, and it's going to do all the steps
that we explained beforehand. It's going to do the discovery
for you, but it's also going to do the containerization
with minimal effort. So App2Container is going to
remove that burden, specifically from the developers,
and it's going to create everything the container needs,
from the Dockerfile to the image and
all the different artifacts to containerize your application.
One important caveat is that App2Container doesn't need the source
code. If you picture yourself doing manually the
job that we saw beforehand, you would probably need
access to your source code, because you would probably need to know exactly where
the data is or what is going on; you need to understand the application itself.
What App2Container does is scan
inside your server. It's going to capture where the dependencies are,
where the running applications are, and it's going to capture
what is needed to containerize the solution.
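As a rough illustration of what that output can look like for a Tomcat workload, a generated Dockerfile might resemble the sketch below. This is illustrative only: App2Container generates the real file from what it discovers on your server, and the base image, package names, and WAR name here are placeholders, not output from this demo.

```dockerfile
# Illustrative sketch only - App2Container generates the real Dockerfile
# from its scan of the source server. Names and paths are placeholders.
FROM ubuntu:20.04

# Runtime dependencies captured from the source server
RUN apt-get update && apt-get install -y --no-install-recommends \
        default-jre tomcat9 \
    && rm -rf /var/lib/apt/lists/*

# Application files discovered under the Tomcat web root
COPY webapps/myapp.war /var/lib/tomcat9/webapps/

EXPOSE 8080
CMD ["/usr/share/tomcat9/bin/catalina.sh", "run"]
```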
Thanks to that, App2Container will help you have a
portable solution, because it doesn't matter whether you are deploying things
on premises or on an EC2 instance in
AWS; App2Container can be installed there, and that
is not an issue. But it can also help you containerize
at scale, because if you think about doing this for multiple
applications, the last thing that you want to do is repeat
all the steps that we saw beforehand, manually, for
every application. With App2Container, the CLI
is going to guide you through the whole process, and step by step,
you as a developer, or your developers if you are the manager of a
team, will know how to do it. They will get used to it,
and it is going to help you scale. Last but
not least, App2Container is going to help you create the
CI/CD pipelines,
and it's going to help you move to AWS
best practices. It will allow you, for example,
to deploy the solution, if you are comfortable with
Kubernetes, on Amazon EKS, or, if you want
to use more managed services or serverless solutions,
on AWS Fargate. That is your decision.
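Before the demo, it may help to see the whole CLI flow in one place. The following small Python sketch just prints the App2Container steps in order. The command names follow the AWS App2Container documentation; the application ID is a placeholder you would get from the inventory step, and `dry_run=True` only prints the commands, since the real CLI has to run on the application server itself:

```python
import shlex
import subprocess

# Placeholder ID; the real one comes from `app2container inventory`.
APP_ID = "java-tomcat-1234abcd"

# The end-to-end flow shown in this talk, per the App2Container docs.
STEPS = [
    "app2container init",                                     # one-time setup (S3 bucket, metrics, ...)
    "app2container inventory",                                # discover running Java/.NET applications
    f"app2container analyze --application-id {APP_ID}",       # produce analysis.json
    f"app2container containerize --application-id {APP_ID}",  # build the Docker image
    f"app2container generate app-deployment --application-id {APP_ID}",  # deployment artifacts for ECS/EKS
    # Optional follow-up mentioned at the end of the demo:
    # f"app2container generate pipeline --application-id {APP_ID}"
]


def run_steps(dry_run: bool = True) -> list:
    """Print (dry run) or execute each App2Container step in order."""
    for cmd in STEPS:
        if dry_run:
            print(cmd)
        else:
            # On the real server the CLI needs elevated permissions.
            subprocess.run(shlex.split("sudo " + cmd), check=True)
    return STEPS


if __name__ == "__main__":
    run_steps(dry_run=True)
```

The demo that follows walks through exactly these commands, one at a time, on the server.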
The tool is there to help you. How does it work, and
what are the different steps? App2Container will
start with the discovery phase: it will go and discover
and analyze the solution and the dependencies around it, and that is something
we will see later on in the demo.
It will then guide you through all the steps, and it will help you
extract all the information for creating your Docker container.
It will extract what the dependencies are, and it will create a file
that you will be able to modify and
tweak, depending on whether, for example, you want to use a different container image
or a newer version of the
operating system inside the container image.
Then, during the creation of the deployment artifacts, as
I said, you can select exactly what the outcome is going to be.
Apart from EKS or ECS, you can
also use App Runner, for example, and it's going to help you
deploy on AWS, help you create
the CI/CD pipeline, and create the entire infrastructure
that is needed for a resilient and modern
application. So if we look at the typical
application that we have in our data center or on EC2,
it's running everything on one server.
In this case, let's imagine it's a Java application that
is running on Apache Tomcat. I'm giving you some
hints on what we are going to see: it's connected to a database.
And using App2Container, we can end up with a solution like this
one. We will have a solution that is deployed on
ECS, connected to the internal
network created in our VPC, and it's going
to have all the benefits of running ECS on AWS Fargate.
So it's going to be serverless, and it's going to have
less operational impact, for example.
And that is going to give your developers the option
to be more focused, later on, on refactoring
the application using, for example, Amazon Q and GenAI,
rather than focusing on operations.
Let's go and see the demo. Okay, so now
we are going to see how App2Container can help us on this journey.
For the demo, I have here my EC2
instances, and inside one of the EC2
instances I have a Java application
that is running on Tomcat.
And you will see how App2Container can help me with all
the steps, to get to a point where I have my application deployed on
AWS. There are certain things that I'm going to show in
this demo that are provided in the App2Container documentation,
because, in the interest of time, you know that containerizing an image
is going to take a while. But I think it's important that you understand
how it works and how much burden it's going to remove from
your side. So the first thing that I'm going to do is connect
to the server using Session Manager.
So Session Manager is logging in,
and now we are going to start doing the different steps
on this server, which is working as if it were your on-premises server,
or your EC2 instance in AWS because you
did a lift and shift. I have already installed App2Container,
but before that, I am going to grant permissions to myself. So I
am going to use sudo to get elevated
permissions. And the first thing that I'm going to do is
show the different
options that the App2Container CLI
has. If you remember the steps that we did beforehand and
were reviewing in the slides, we were thinking
about discovery, about analyzing, about containerizing.
As you can see here at a glance, App2Container is providing
you all the different steps and explaining
exactly when to use each of those. So the first thing that we
are going to do is to init the solution,
since I have it installed,
and that is the first step. So I'm going to
say app2container init.
Automatically, the CLI is going
to start showing me values that I
have already, which is my case. In your case, some of these values
might not be present and you will need to create them. So, for example,
there is the workspace directory, which is related to the installation,
and the EC2 instance profile. The EC2 instance profile
is the IAM role that is going to be used to
connect to S3. Because yes, App2Container
is a CLI, but it needs an S3 bucket, and I'm going to show you
for a second which one I created.
So here you can see that I have an S3 bucket, which
is called app2container for Conf42, and this
is where App2Container is going to put the artifacts
it needs for containerization and the artifacts it needs for deploying to
AWS. And even if you have an error, it's going to provide information
inside, about logs and things that you can look at
later on. So I am going to say yes, I am going to use
this profile, and I am going to use the exact same
region where I have my EC2 instance,
which in my case is going to be us-west-2.
The name of the S3 bucket comes by default because I configured it
beforehand. And yes, I want to send
usage metrics to AWS, to give more insights
from my side in terms of
how the solution is working. I am going to tell
App2Container to send all the errors and
all the possible problems to my S3 bucket. And the
last one is up to you: whether or not you want your Docker
images to be signed with
Docker Content Trust. In my case, I am going
to say yes. Okay, so everything has been configured in the CLI.
So now let's start thinking: okay, what is the
first step that I need to do? Well, I need to discover,
I need to get an inventory of what is happening on my server.
So I'm going to type app2container inventory.
Automatically, the CLI has detected that there is a Java application,
and by the way, it is assigning a name to it. This
Java application has a process ID, it has
a web root, and then it has other
information around Apache Tomcat.
There is an identifier, and remember,
I did not have to use any source code for that. So it
is important that you save that ID for later on.
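For reference, the inventory output is JSON keyed by the generated application ID, shaped roughly like the example below. The field names and values are illustrative placeholders (not captured from this demo), but the top-level key is the application ID we need to save:

```json
{
  "java-tomcat-1234abcd": {
    "processId": 2537,
    "applicationType": "java-tomcat",
    "webApp": "myapp"
  }
}
```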
Because this information is going to be the one that we need for the rest
of the commands. So let's say that we want to know something
about the application itself. So we want to start
the process of analyzing the application. So we go
and we say app2container, then we
have to put the action that we want to
perform, analyze, and then we
include the application ID that we are going to
analyze. So we say --application-id,
and we copy and paste exactly the
ID that we have from the inventory.
Okay.
And what is happening is that App2Container
went to get the information about the application, scanned the
dependencies, and provided all that data. If you look
at the output, it says it created artifacts in
a specific folder. But what is important for us now
is that it generated an analysis.json. So what
I'm going to do now is open that analysis.json and
guide you through it. Okay, I'm going to use nano in this case,
and I'm going to copy exactly this path.
Okay. So now we are going to see the information inside
analysis.json. So here is the information that was
created by App2Container
when it did the analysis of the application. The first part
is going to be the container parameters. This first part is the part
that I would recommend you change; it is the part that is
going to be used when you create your container. In this
scenario, you can see that we have here a container base image on
Ubuntu 18. So what I'm going to do is modify this
and say to use version 20, because
it's more up to date, it has more features,
and I don't want the 18 version. If I keep going down,
I can see other information about the analysis results.
I can see, for example, that it's a Java Tomcat solution,
and I can go to the properties
to see that it's Tomcat 8, or that it is,
for example, using the configuration and logging properties
inside the webapps folder, or even the environment
variables associated with the operating system.
So all of that is important information that
we can have before we build the container, information we can
use to understand where we are heading.
So after I make the change, I am going
to exit and say yes, I want to save the
file, and go back to our terminal.
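To summarize the edit we just made in nano, the relevant part of analysis.json looks roughly like this trimmed, illustrative excerpt (field names follow the App2Container documentation; the values are placeholders), with containerBaseImage changed from the Ubuntu 18 default to Ubuntu 20:

```json
{
  "containerParameters": {
    "imageRepository": "java-tomcat-1234abcd",
    "imageTag": "latest",
    "containerBaseImage": "ubuntu:20.04"
  },
  "analysisInfo": {
    "appType": "java-tomcat",
    "appServerName": "tomcat"
  }
}
```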
So, okay, I modified the information about
the container, I saw the analysis, and now
the next step is going to be to containerize the application.
So I'm going to say app2container, and
then the action is going to be containerize.
And if you see here, it tells you what
the next step is. If you look at the next steps, you see: edit, and then
start the process using app2container containerize with the
java-tomcat application ID. So you don't even
have to know this by heart; the solution is giving
you all the steps. So I'm going to copy and paste this, and
I'm going to start the containerization.
So what App2Container is doing is taking all the
information around the changes that I made,
the analysis of the application, the analysis.json, and
it's creating the Dockerfile, creating the Docker image, and validating
everything around it. One thing that you might spot is
App2Container sending me a warning because I modified
the container base image; I moved it
from 18 to 20. Cool. So App2Container
did all the steps: it created the Dockerfile and the Docker image,
generated the deployment data, and did the pre-validation.
But what exactly is this deployment information? Let's see for
a second. I go here, I copy the
entire path, and I go and say nano.
So the deployment.json is the
file that is going to be used to deploy to AWS.
We have our image created, we have the
deployment.json, and in the deployment.json we can configure exactly
how we want to do this. If you look here, you might spot
that there is an ecsParameters section, and it says createEcsArtifacts:
true. That is because in this App2Container
demo we are going to show you how to do
it with ECS. But if, for example, you are especially
passionate about Kubernetes, you can come to this section and,
towards the end, set createEksArtifacts to true,
moving towards the Kubernetes solution.
But for the demo, we are going to use ECS,
and, as you can see here, we are going to use Fargate as the deploy target.
So one thing that is important: App2Container
knows about your application, but it doesn't know
where you are heading. And if we look here,
I need to provide App2Container my VPC ID; I need
to tell the CLI where we are
going to deploy the solution. So I go back to my console,
I go to VPC,
and then I select my target
VPC.
Then I copy my VPC ID and
paste it here. That's important because, as
I mentioned, App2Container needs to know exactly where we are going to deploy
our solution. So with this, I'm
going to say yes and then save the data.
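Pulling together the settings we just discussed, the deployment.json ends up looking roughly like this trimmed, illustrative excerpt (field names follow the App2Container documentation; the IDs are placeholders), with ECS artifacts on, Fargate as the deploy target, and EKS artifacts off:

```json
{
  "applicationId": "java-tomcat-1234abcd",
  "ecsParameters": {
    "createEcsArtifacts": true,
    "deployTarget": "fargate",
    "vpcId": "vpc-0123456789abcdef0"
  },
  "eksParameters": {
    "createEksArtifacts": false
  }
}
```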
So I have my deployment configuration created,
and I have my container created: it has been containerized,
it has been validated. In fact,
I want to show you that the images exist.
It's not that I am doing this and they don't exist; they are there,
they have been created. And I have everything to
literally start the deployment to AWS.
As I mentioned beforehand, I might remember what I have to
use, or what the next
step is, or maybe not. But if you follow the next steps you have
here, the next step for you is app2container generate
app-deployment with the Java application ID.
So I'm going to copy that and
let App2Container run. This is going to
take some time in your case;
in my case it's going much faster because I have things already
created. But what did it basically do? If I go here
to the S3 bucket, I'm going to show you the information.
What it basically did for ECS is create the
specific CloudFormation template that is going to be
deployed afterwards to create your ECS cluster.
So if I go here to ECS,
you can see that App2Container created the cluster, it created
the solution, and we can see it
up and running. If I go to the tasks,
I can see that there is a specific public IP,
and I can also see how the cluster is
working with the task. Then I am
going to go to the load balancers
section, and
I have here my load balancer, created for the cluster.
And if I open the public DNS,
the application is up and running. So across
all the different steps that we did, App2Container
containerized the application, it moved
the application forward to AWS, and it helped
me get, step by step, what I needed to do.
And there are other steps that you can keep going with in
App2Container: you can create your pipeline, you can create
the different deployments. For demo purposes we
are going to stop at this point, but you can keep going,
following exactly the steps that the solution gives you,
and repeat, and repeat again and again, and go at scale.
Thanks Jorge. And welcome back to the presentation
part of the video. Absolutely phenomenal to see you containerize
something and deploy it on ECS in what must be
less than 45 minutes, even when you're doing it for the first time. Back over
to you. Thank you so much, Richard. And as we mentioned at the start,
containerizing and moving your application to AWS,
onto modern infrastructure, is the first step.
But then we can use our GenAI solution,
Amazon Q, to help us with the refactoring, and that
would be your stage two. But what do I mean by
that? Well, Amazon Q is a new
generative AI assistant that is designed to
help you in your day-to-day journey. So basically, Amazon
Q will help you to answer quickly,
through natural language interactions, informing
you about your system and your data.
And of course, it's going to provide you solutions that are
fully secure and that can help you in terms of privacy,
even with your business. So Amazon Q is not going
to access anything that is
not allowed by yourself.
Also, we have Amazon Q
as an enterprise customer solution, which basically
relies on customers' requirements
from day one. But what are the different aspects of Amazon Q? I
would like to give a brief overview.
Nevertheless, today we are going to focus only on Amazon Q as
your expert, which is basically the first one
that I'm going to talk about. And Amazon Q as an
expert can help you with the Well-Architected Framework.
It can give you answers to your questions
inside your AWS console, or even in your
preferred IDE. In this case, we are
going to use VS Code,
where Amazon Q is going to help us.
Amazon Q can also work in
QuickSight, enhancing your business analysts' and business
users' productivity using generative AI capabilities.
Amazon Q can work in Connect too, so it can help
you leverage real-time conversations with customers and
automatically surface the relevant company content
to your support agents.
So it's going to accelerate that whole journey
between your customers and your staff.
Last but not least, and coming soon: Amazon Q can help
you in the AWS Supply Chain service,
where it can help supply and demand planners
understand what is happening and
what actions they need to take.
As I mentioned, we are going to focus today on
Amazon Q as your AWS expert, because that part
is the one that is going to help you move forward in your modernization
journey. Amazon Q can
help you, for example, inside your AWS console,
while you are working in
the console. But it can also help you fix bugs
if you're using it, for example, with Lambda; or it can help you research
inside documentation, know exactly which
instance is right for you, or
upgrade specific runtime versions. One of
the examples we are going to see today is Amazon Q Code Transformation,
which helps you move a Java application, for example,
from Java 8 or 11 to Java 17.
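To give a concrete feel for this kind of upgrade, here is a small, hypothetical before-and-after (the class and method are invented for illustration, not taken from the demo project): pattern matching for instanceof, standardized by Java 17, removes the explicit cast that Java 11 code needs.

```java
// Illustrative sketch of a Java 11 -> 17 modernization;
// names here are made up for this example.
public class ShapeDescriber {

    // Java 11 style would be:
    //   if (obj instanceof String) { String s = (String) obj; ... }
    // Java 17 pattern matching for instanceof binds the variable directly:
    static String describe(Object obj) {
        if (obj instanceof String s) {
            return "String of length " + s.length();
        }
        return "something else";
    }

    public static void main(String[] args) {
        System.out.println(describe("hello")); // String of length 5
        System.out.println(describe(42));      // something else
    }
}
```

A transformation tool's main job is updating dependencies and deprecated APIs, but moving the language level to 17 is what makes idioms like this available.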
Amazon Q can also help you troubleshoot
your own code. Or if someone left the company and
you don't know what their code does, Amazon Q can give you,
in the IDE via Amazon CodeWhisperer, information
about what the code looks like and what
it's doing. You can even ramp up on a new code base
in no time, because it can help you understand
the context of the application, doing things that normally
take days in a matter of minutes.
But where, in terms of refactoring, is the best place
to help our developers?
Well, 73% of the
time, our developers are running and maintaining
applications. So if you remember all the challenges we had
at the start with the monolithic application, when we are speaking about
operations, and about the time people take
to get that critical application
up and running, that is where developers are spending
their effort. So when you
transition to more managed services, when you
move to more modern solutions, that helps
you move the needle towards
developers doing what they want to do, which nowadays is only around
27% of their time. But how can Amazon Q help in that
journey? Well, developers face multiple
challenges. For example, how to
manage resources, how to
write code, how to understand old code, or even how to
upgrade it. All of these are like brick walls that developers have
to get past every day. And thanks to Amazon
Q, those walls are starting to come
down, because the solution solves
these problems in one location. It's one Gen AI
powered assistant: it's going to help you with code scanning for
security threats, it's going to help you in the IDE, it's
going to help you in the console, and it's also going to help you with
upgrades, as we are going to see in the next demo,
in this case from Java 11 to Java 17.
Okay, here we have a Java 11 solution.
As you can see, I have the pom file open, and
here is the Java version, and I have
the solution open in VS Code. In my configuration
I have my AWS Toolkit, where I have
Amazon Q, and I have configured the Professional edition.
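For readers following along, the Java version in a Maven project typically lives in pom.xml properties like these (a generic fragment using the standard maven.compiler properties; the demo's actual pom may declare it differently):

```xml
<!-- Standard Maven way to pin the Java language level; after the
     transformation these values would read 17 instead of 11. -->
<properties>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
</properties>
```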
So what I am going to
show you today is that this application is on Java 11, and I'm going to
ask Amazon Q to do a transformation to move it to
Java 17. So I go to my AWS
Toolkit, I go into Amazon Q, and I say I want to
transform. First of all, it asks
me what the project is, and it detected my pom file. At the
time of recording, Amazon Q works only with Maven.
So I click on it, and I am going to tell it
the current version. So it's
11. Yes. And now it's asking
me where my JDK is. I have here the path
to exactly where it is on my machine: copy,
paste. Okay.
So now what Amazon Q is going to do is start the Transformation
Hub. It takes all the information,
understands the code, and then starts working on it
to transform it from version 11 to
version 17. But while Amazon Q
is working, I am going to show you other
benefits it has. For example, if I go here and
I open OrderManager, and I
open my Amazon Q chat, I can
see that there is a lot of code here that I'm not really sure what it does.
I don't know exactly what the different pieces are.
So I can start a conversation with Amazon Q and I can
say: can you explain the
OrderManager
Java class to me?
What Amazon Q does is basically go
to the project, go to the class, and start
telling me exactly what it does.
So it tells me: okay, the OrderManager class is responsible for
managing orders placed by the customers, and the key things
are... and it gives me information
about what's inside. For example, the method getOrder that is visible here,
or the method listOrders.
So it gives me context on exactly what
this code is doing, which is something I might not know beforehand,
maybe because the person left, or because it's
source code I downloaded from a public code repository.
Either way, it gives me an idea.
But I can also be more specific. Amazon Q
comes with prompt shortcuts that we can send
code to, and it will provide information back.
We saw 'explain', for example, but I can also ask
whether it can refactor. So it
takes the information, and automatically
it picks up the different
steps inside that method that can be refactored and
made to follow best practices on that journey.
I can even go and say: okay, that was for my getOrder,
but what about this whole listOrders
API Gateway handler? Sorry,
that was Amazon Q creating the transformation plan for
the transformation. As you can see, we are doing concurrent
work; I don't have to sit waiting for it. Okay, I am going
to select all of this and I'm going to say: okay,
Q, can you optimize this
whole big method and tell me how I can
optimize it?
So if I go here, it tells me: well, one of the
things you can do, for example, is take the authorization logic and
put it in its own method; then it will
consider parallel streams; and you can consider, for
example, validations and how to handle the different errors.
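A minimal sketch of that first suggestion, extracting the authorization logic into its own method (the identifiers here are invented for illustration; they are not the demo project's real code):

```java
// Hypothetical refactoring sketch: pull an inline authorization check
// out of a large getOrder method into a small, testable helper.
public class OrderAuthorizer {

    // Returns true only when the caller is the owner of the order.
    static boolean isAuthorized(String callerId, String orderOwnerId) {
        return callerId != null && callerId.equals(orderOwnerId);
    }

    public static void main(String[] args) {
        System.out.println(isAuthorized("alice", "alice")); // true
        System.out.println(isAuthorized("bob", "alice"));   // false
    }
}
```

Once the check lives in one place like this, it can be unit-tested and reused across the other order methods.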
So thanks to the solution I can understand
the context, and not only with the information that
is here: I can also go and ask, how
can I add Cognito to
the project?
Asking questions that are not literally related to
this project gives me the information I need, related
to the documentation, to the different
steps, and to what
I really need at that moment. So I
don't have to go outside of my IDE,
and I don't have to worry about whether
there is anything out there I still need to figure
out. But Amazon Q comes with multiple
options. This is the chat option, but there is another one, which is
the dev option. When we are going to work with
the entire project, for example to scan it
for security concerns or to
get information about it, it's better to
go to the dev tab. So I type, and I
just press enter.
The first thing that I'm going to do is open
my pom file. I know my pom file has a security concern;
it's there on purpose for the demo. And what I'm going
to do is ask: can you list the
security issues in
the pom? What it's doing
now is getting the information:
it's taking the information from the pom file. It knows
exactly that I am talking about this project we have here, and
it's going to give me back the different security concerns that
exist inside the pom file.
Okay, so now Amazon Q has finished here: it downloaded
what is going to be the transformation from Java 11
to Java 17, and I can click here to see the proposed
changes. So it's giving me the
different changes I need to make. It's also giving me
a log that I can consume,
so I can open the log and see the different steps
it was following, the different considerations
it was taking, and whether there is anything I have
to be worried about or pay attention to. What it's saying
in this case is that I have to take care with the
new version of Maven changing the
build. And that helps me
start taking care of the different
aspects happening here in this build.
And after I have the project converted to Java 17
and I have followed the steps, I can even go to the
dev tab, which is basically going to help
me work on other aspects.
So for example, if I say to Q:
can you list the security concerns
in OrderManager.java?
What it's going
to do is go to OrderManager.java,
see what the code is,
review whether there are any security concerns and what
the changes we should make look like,
give that back, and scan across
our whole application. So it's taking
some time to get the answer uploaded, and
it's also going to create a plan for me. So you can see
here, it detected that, okay, in
OrderManager.java you have to look for places with untrusted data,
add validation to path parameters, add authorization to getOrder,
and remove sensitive data. But if I want to know exactly how
it works, I say: okay, create the code for me.
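As a sketch of what the "add validation to path parameters" fix could look like, here is a hypothetical validator (the names and the exact rule are illustrative assumptions, not the code the tool generated):

```java
// Hypothetical sketch: reject untrusted path input before it reaches
// the order lookup, allowing only short alphanumeric IDs.
public class OrderIdValidator {

    static boolean isValidOrderId(String id) {
        return id != null && id.matches("[A-Za-z0-9]{1,32}");
    }

    public static void main(String[] args) {
        System.out.println(isValidOrderId("order123"));      // true
        System.out.println(isValidOrderId("../etc/passwd")); // false
    }
}
```

An allow-list like this is generally safer than trying to block known-bad characters, because anything unexpected is rejected by default.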
So it's not only that the solution has
transformed the application from Java 11 to Java 17;
it's also that I can move further: towards having
something connected to Cognito, like I showed before,
or enhancing the solution in its
security aspects, or
understanding the different new components
I want, in the direction of my refactoring journey.
And it also generates the code
for how to get these things sorted, on top of
the explanations, the optimizations, and
the different steps, all from one location:
I did not leave my IDE at all, and I did not go anywhere
else to get the information I needed.
So it's creating the code. Basically,
what it's doing is taking all the steps that
were listed above, and
it's going to create a specific diff file to compare my
old code with my new code and see how we can
change it. Okay, so you can see here
that it created a test, and then it
created the new solution. So I am going to
click on it, and I can see here the differences between
one and the other, and the different steps
that are going to be provided by the solution. So I'm going
to say okay here, also to
start a unit test, and I can say: fine,
insert code. And it says that the code
has been updated. So thanks to this solution
I can stay in one place and keep going
with the refactoring, since we started with an on-prem
solution and now I have new code inserted.
Going back to you, Richard. Thanks Jorge. That was
a really fantastic demo, and it's amazing to see us take that workload
from on premises, move it onto AWS
with App2Container to get it on a modern compute platform, and
then use Amazon Q to update
the Java runtime from version 11 to
17. And now that application team can really
go on that modernization journey and use Amazon Q
to start refactoring code and delivering more business value
for their customers. So thank
you, Jorge, and thank you to all of our listeners for joining
this webinar today.
If App2Container is something that you think can benefit your business
and your customers, and Amazon Q is something you're
looking to explore,
be that in the confines of application modernization or more
broadly, then we have a couple of recommended next steps for you.
The first is to check out our Modernize with AWS
App2Container workshop. That will allow you to work through
an example like the one we took you through today, but at your own
pace, and get hands-on experience as you go.
And secondly, check out our Use Amazon Q
Code Transformation to Upgrade the
Code Language Version workshop too. Similar to the App2Container
workshop, it will allow you to go at your own pace, upgrade
that Java version as in the example today,
and modernize that code base.
Jorge, any final comments from you before we wrap up the webinar?
No Richard, it has been a pleasure to be here.
Just keep going and keep trying
these different solutions that help you modernize.
Of course. Thanks Jorge, and thanks to all of our listeners.
If any of you want to get in contact with us, our contact details
are on the screen. It's been an absolute pleasure. Thank you very much.
Bye. Thank you,