Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, welcome to my presentation on unleashing the power of
serverless for building scalable and cost-effective solutions using AWS
Lambda and gqlgen. In this presentation we are going to cover two
powerful technologies: one is serverless computing, and the other is
gqlgen, one of Go's popular libraries for generating GraphQL
APIs. We'll start by exploring what serverless computing is.
Then we'll take a deeper dive into how we can implement it using
AWS Lambda, and then we'll cover what GraphQL APIs are and
how we can implement them using gqlgen. Before going further, I just want to give a
small introduction about myself. My name is Durgaprasad Budhwani. I'm working as a chief
technologist at Tech9. Throughout my career I have always been fortunate
to work with the latest technologies and trends, and with various clients
across multiple domains, and to provide them cutting-edge
solutions that fulfill their needs. And this is what we do at
Tech9: we always work with the latest technologies. Let's talk about serverless
computing. In serverless computing, which is also known as function as
a service, the cloud providers such as AWS, Azure, Google, or Vercel
manage the infrastructure and automatically allocate resources for
running and executing the code in response to a request or
an event. In traditional computing, the developers or a DevOps team
have to manage the servers. With serverless computing, the developer only needs to write the
code for the specific function they want to run, and the cloud provider
handles the infrastructure, scalability, and availability
of those functions. This allows the developer to focus on writing code and
building the application without worrying about the underlying infrastructure.
There are a lot of advantages to serverless architecture, or serverless computing.
One of the primary advantages of serverless computing is
cost reduction: since users only pay for the actual amount
of compute time consumed by their functions, they can save money by avoiding
the cost of managing and maintaining servers. Serverless computing can
automatically scale to handle changes in demand. As the
number of requests or events increases, the cloud provider can allocate more
resources to handle the workload, ensuring that the application remains responsive.
Serverless computing also allows developers to focus on writing code and
building applications rather than managing infrastructure, which simplifies the
development process: the developer does not have to worry about managing servers,
load balancers, or other infrastructure components. With serverless computing,
developers can quickly and easily deploy code changes without worrying about the
underlying infrastructure. This can speed up the development process and improve
time to market for new features and applications. Finally, serverless computing can improve
application reliability by automatically handling tasks such as
server maintenance, load balancing, and scaling. This can help reduce the risk
of application downtime and ensure that applications remain available to users.
AWS Lambda is a serverless compute service
provided by Amazon Web Services that allows developers to run
code in response to an event or request. With AWS Lambda,
developers can write code in a variety of programming languages, including Java,
Python, and Node.js, and then upload their code to
run as a function. In the AWS environment, the service automatically
handles compute resource scaling and the availability of functions, so that developers
can focus on writing code and building applications rather than managing servers.
AWS Lambda supports a variety of programming languages. A
couple of languages are provided natively, including Java and
.NET. If a programming language is not supported natively by AWS
Lambda, you can create a Docker image, or you
can create your own custom runtime, and then run your
code on AWS Lambda. That is basically what happens when you are
choosing the PHP language: you can create your Docker image, or
you can use a custom runtime, as in the case of the Rust language.
AWS Lambda is often used for event-driven applications, where code
is triggered in response to events such as changes in Amazon S3,
a new message pushed into Amazon SQS,
Amazon's queue service, or a
message published to Amazon Simple Notification Service. It can
also be used to create APIs, run batch jobs, and perform data processing
tasks. So we are going to see how to write a simple
Go application for AWS Lambda. But before going further into how
we write this for AWS Lambda, first we'll take a simple
example of a Go application which listens for HTTP requests.
For that we can see a simple example where
main is the starting point. This program
uses the Gin framework; Gin is one of the popular Go web frameworks.
Here it is creating a router, and the
router is then listening for an HTTP GET
request. When someone hits this specific route, it
is going to return "hello world", and then at the end the web
framework will start listening for HTTP requests on port
8080. That means when someone hits http://localhost:8080,
this will print "hello world". A very basic example of a Go
application. Now, when we want to port a
similar application to AWS Lambda, we need to make
a couple more changes. We still use
the Gin framework for the routing, but we need to
import a few more libraries, such as aws-lambda-go events and
aws-lambda-go lambda. And then we are also going
to use the lambda-go-api-proxy library. This proxy has support
for multiple frameworks; it supports Fiber, it supports Gorilla.
It's a library provided by AWS Labs so that it
can use the existing router, the router which has been created,
and essentially wrap it for AWS Lambda.
So we're doing the same thing at the global level: we are just
initializing this gin lambda adapter in the initialization part.
One small drawback of AWS Lambda is that, because it spins
functions up on demand, function initialization takes some
time. That is called the cold start time. We can reduce the
overall cold start for the successive calls: code that requires
one-time initialization is recommended to go into the
initialization block, so that on the next
successive call this block is not reinitialized.
So we are just doing the same thing here in the initialization part.
We are just initializing the router. We are also saying
that if someone hits the forward-slash endpoint,
just return "hello serverless". And then we are initializing
our gin lambda, which is a proxy,
and we are passing it the router. This handler will be the starting
point for AWS Lambda. It accepts a few things. One is the context,
which has a lot of information: it has information about
X-Ray, if you are enabling X-Ray, and information about
where the call was initiated from;
there is a lot of information in the context. The other thing it provides
is the API Gateway proxy request, and this proxy request
has information about the exact path that is being
called. Then we are just
wrapping both of these inside the gin lambda
proxy with context, and the
starting point will be lambda.Start. So it
will wait for the request, then it will call the handler.
We already have a router which is already
initialized, so we are good to go. And when someone is going to hit this URL,
it's going to call hello serverless. So we have seen that
most of the code, for example the router, is the same as
when you are writing a basic Go application.
Only a small wrapper is required when we are running this code
on AWS Lambda. So let me quickly
open my editor and just walk you through how we can
make sure that we are reusing the same code.
And I will also walk you through the practices which AWS recommends when
we are going for AWS Lambda. Now I'm going to show you how
we can use code sharing. That means the code which has been written for
serving AWS Lambda, we can also run locally.
One of the main challenges of running any serverless code
locally is how to test everything locally.
There are a lot of tools available; one of
the great tools, especially for AWS, is called LocalStack.
Using LocalStack, you can also run AWS Lambda locally.
Internally, it just deploys into the LocalStack environment,
which is roughly equivalent to the AWS Lambda environment,
and by that mechanism we can test it out. But again,
deploying that code is a time-consuming
process. So this is a simplified process which I
figured out, and I hope that you are also going to get some
benefit from it. So the solution
is pretty simple. We need to identify what the common
code is. One recommendation from AWS Lambda is
to keep your initialization code separate
from your business logic code. In our case, consider that
our business logic is to serve HTTP requests. So we are just moving that code,
especially the routing code, into a separate file.
Let's call it router.go. Then in
our local.go, which is going to run on the local system, we're just mentioning:
call the router and initialize
the router. It's the same code which I showed a few seconds back in
the presentation: we can run a basic Lambda setup by
just initializing the Gin router and running the code.
So this is good; we can
quickly test whether this is working or not. I'm going to
run the same code with go run, pointing at this
local folder, and it has
started listening on port 8080.
I'm using the GoLand IDE for all of this
development work, and it's
a good tool, especially for the Go language. You can also follow the
same principles I'm showing here in VS Code;
it will work there too. And I have also requested the organizers to
share the GitHub link along
with the presentation. So now the server is running on port 8080.
You can see I can directly make a GET HTTP
request, and we get the data,
"hello world", which is coming from this router.
The router has been initialized, and here we get the hello
world response. Similarly, if I want to write
the code for Lambda, I need to
follow the same principle, that the initialization code stays separate and
the business logic code is different. Then this
is how it happens over here. Again, main
is the starting point for us. Then we are initializing everything:
we take the router from
our Go code, and we are just initializing the router.
Then we are also using the AWS Lambda proxy.
For the proxy, we actually created a global variable:
here we are just creating a
global adapter object with New and passing it the
router. And then this is the starting point of
AWS Lambda, specifically for the HTTP request: we get the context,
we get the request, and this will be the response.
And since we are using this AWS Labs
gin proxy, it just wraps everything, and the same
code will work for both. Now this
is how we are able to share the code between our local setup
and Lambda, and we can run this entire code too.
There are various options for the deployment, but for this presentation I'm
going to use the AWS Serverless Application Model, AWS SAM, for the deployment.
The other options could be: we can deploy using Terraform,
we can deploy using CloudFormation, we can deploy using Pulumi, or we
can deploy using the AWS CDK. But if you are going
to do some sort of PoC, then SAM will be
much easier. For the deployment, we need to install the SAM CLI, which is
analogous to our AWS CLI. After installing the SAM CLI,
we need to run the command sam init. This is going
to ask you a couple of things. It has boilerplate
projects for Go, for the Rust
language, for Node.js, multiple languages, and we can also
choose a custom project. So I already ran
sam init, and it has created a project for me.
So after running sam init, it has created
a sample project for me, and I took from it the pieces which are
required to run this application. That includes a Makefile,
where we are going to run the sam build command, and a template.yaml
file, which is mostly equivalent to a CloudFormation template but
configured the SAM way, so that we can deploy our Lambda.
And a couple of modifications have been made so far.
And this is one of the important things for us, because
this template tells SAM what we need to do.
It starts with a folder location.
When I run my build command, which is the sam
build command, it's going to generate the artifact. But this
build command will use this same template. It's going to
see where the code path is; in our case that is the lambda server
folder, where the code lives. Then there
is the name of the handler, which is a logical name
for the handler, and then which runtime we need
to use. Lambda supports multiple runtimes, Node.js, Python, and so on;
right now it's supporting the go1.x
version. Then you can select an architecture. AWS Lambda supports two
kinds of architecture: one is called
x86_64 and the other one is arm64. For this I'm just
selecting arm64. Then it asks when
we are going to invoke this Lambda, when this Lambda is going to be called.
Here we are saying that this event will have a logical name,
CatchAll; it's just a logical name. And the type of
event will be Api, which means an API request from your
API Gateway, and it's going to create
an API Gateway for us. And when someone sends a GET request to a
specific endpoint, then this is going to invoke the function. As
the output of this, when I run the sam build command and
the sam deploy command, it's going to generate the entire
stack for me. And once the stack is ready,
then we are going to see a couple of URLs at the end.
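A minimal template.yaml along those lines might look like this (a sketch; the CodeUri path and names are assumptions, and the file generated by sam init will differ in details):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: lambda/      # folder containing the Go code (assumed path)
      Handler: main         # logical name of the handler
      Runtime: go1.x
      # Note: the managed go1.x runtime is x86_64-only; arm64 Go
      # functions are typically deployed on the provided.al2 runtime.
      Architectures:
        - arm64
      Events:
        CatchAll:
          Type: Api         # creates an API Gateway for us
          Properties:
            Path: /
            Method: GET

Outputs:
  HelloWorldAPI:
    Description: API Gateway endpoint URL
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
```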
So what I'm saying to this SAM framework,
or I would say internally it's actually calling a
CloudFormation template, so what I'm saying to it is: just create an
API Gateway for me, and also share the API Gateway
URL at the end of it. After deploying the SAM template,
it's going to generate a URL. The format of the URL will be something like this:
it will contain the name of the API Gateway, and execute-api means it is going
to make an execute call to this API. We have this Prod segment,
which is the default stage environment. And when
I'm going to hit this URL, again with a GET request,
it's going to make a call to that Lambda, and it's going to print
hello world. So this hello world is the same one we saw on the
local system when the code was running
locally. It is the same thing over here, and this information is
coming from the router. So we have seen the
code sharing part of this: we can
write the same code, which can be used by both Lambda and local.
Now this is mostly all about AWS Lambda.
Now let's come to the next section, which is GraphQL
and gqlgen. I'm going to talk mostly about
the GraphQL part first. GraphQL is a query language
for APIs that was developed in 2012 by Facebook and made
open source in 2015. It was mostly designed to
improve the efficiency and flexibility of APIs by
fetching only the data which is required, and by retrieving
multiple sets of data in a single request. GraphQL has
a strongly typed schema that defines the structure of the data, and a
query language that allows clients to specify the data they want
to retrieve. It can be used with any programming language and
backend technology. When we are talking about GraphQL,
we only need to think about the schema. This schema will define
everything: it defines what kind of models we want,
how we want to fetch those models, and how we want to update them.
So the schema is the core of GraphQL. And after
that there are two important concepts, which are called query and
mutation. In a traditional REST application, we have
multiple verbs to update the data. So for example,
we have PUT requests, POST requests, DELETE requests.
These are all the operations which are used to update the data.
In GraphQL, that is called a mutation. So whenever you want to perform
any sort of update, which also includes the deletion of an object,
it's called a mutation. And when we want to fetch the
data, that is called a query. So a query is nothing but getting the data,
and mutations just modify the data.
Let me show you a quick example of a GraphQL schema. So we
have a query for fetching the data, we have a mutation for updating the
data, and we have different models. So now let's
consider a simple example: we want to create a todo app. The
todo app requires an id, title, description, and completed flag to
get the data from the API. We are also going to have
an input: when we say we want to
create a todo, then we need to pass a title
and a description. Here the description is optional. If you see an exclamation
mark, that means it is a required property. Similarly, when you want to update
any of the todos, we need to pass the title, the description, and the status
of whether it's completed or not. Okay, in the case of the query,
here we say that we are going to get the todos,
and todos is going to return a list. So if
you look, this is the array notation, the square brackets, and inside we are
passing a model. The model is required, and the return value
is also required; even if it's going to return an empty list,
that is perfectly fine. Similarly, for mutation, the operations
we can perform are create todo,
update todo, and delete todo. Now this
is all about the GraphQL schema. Now, what happens when
someone says: okay, I want to get the data, or I
want to update the data? In our typical RESTful application, we
used to create a controller, we used to create routing and everything. In the case
of GraphQL, instead of creating a controller and router, we need to
define resolvers.
Now here is a simple example: let's say I want to get
the todo, which has id, title, and completed.
This request will be linked to a particular
function which is going to execute. Now, it may
be possible that the id, title, and completed fields are going to
come from different functions, so we can
also have nested resolvers. In our gqlgen
walkthrough we will see how these things get linked, so please bear with
me for another five minutes. Now, what is gqlgen?
gqlgen is a Go library for building GraphQL APIs.
It generates type-safe server code based on the GraphQL schema and resolver functions
that you define. So we have talked about the schema.
The resolver is nothing but the function which gets executed and is going to
either update the data or get the data.
So with gqlgen, you define your GraphQL schema
in the GraphQL schema language, and gqlgen generates Go code for your
server that handles the queries and mutations. This eliminates
the need for manually parsing incoming requests and serializing outgoing responses.
gqlgen is built with performance in mind and uses code generation
to produce efficient, type-safe code. To get
started with gqlgen, we need to run a simple command. This command
is nothing but initializing gqlgen: we need to run
the command go run github.com/99designs/gqlgen init.
It's going to initialize the setup and create a starter project
for you. And once the setup is done, we also need to resolve the dependencies.
I already did this, so let me quickly walk you through the code
that has been generated, and then we'll go into more detail
about how we are going to integrate it with AWS Lambda.
After running the gqlgen init command, it's going to create a folder structure
which is equivalent to this one. The starting point for
gqlgen is to understand the gqlgen.yml file. And this
YAML file has a lot of information, so let's go through it bit
by bit. It starts
with the schema: where the schema is available. Here
it's mentioned that the schema is available in the graphql folder. So it will go to
the graphql folder, it will check for files with this extension,
.graphqls, and there it's going to find the schema.
And based on this schema, it's going to create the resolver and the model,
and then it's going to check what the file paths will be.
If you want to change a file path, we can change the file path.
Here it uses this generated.go file,
which is an auto-generated file
created after the initialization.
Similarly, it's going to create models for
us. If we can go back to our schema, we'll see that we have this Todo,
which is one model, and NewTodo,
which is another model. So it's going to
create the models NewTodo and Todo, based
on the file path which you are going to provide. So here it's
mentioned that models_gen.go is the place where
it's going to create the models. Now, if you already
have an existing model and you don't want to create those models with
this code generation by gqlgen, what you can do is
use one more option, which is called autobind, and here
you can mention the path of your models. For example, in one
of the examples I'm going to show you that I already have a path where
I define my models, and then I'm just telling
gqlgen: use the existing models, don't create new
models in the models_gen.go file.
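As a sketch, the relevant parts of gqlgen.yml might look like this (paths and the autobind module path are placeholders, not the demo's actual values):

```yaml
# gqlgen.yml (excerpt)
schema:
  - graph/*.graphqls              # where gqlgen looks for schema files

exec:
  filename: graph/generated.go    # auto-generated executor code

model:
  filename: graph/model/models_gen.go  # generated models go here

# Reuse existing hand-written models instead of generating new ones:
autobind:
  - "github.com/example/todo-app/graph/model"
```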
So these queries and mutations have different
kinds of functionality. For example, we require one
resolver; a resolver is nothing but the function that runs when someone
makes a request for that query. And then we require another one for create
todo. I just wanted to keep it as simple as possible, so I
just put in a createTodo where I'm going to pass the todo information
and it's going to create it. So the
base type for the resolvers is in resolver.go. It has almost nothing,
just a simple struct, and it can be
used for dependency injection; I'm going to cover that part as
well. And then it's going to create a schema resolver.
So if you want to have a separate resolver file per schema file,
that is also possible; that is something we can configure here,
what kind of layout we want. And here it's mentioned: just follow
the schema, and based on the schema, create the resolver.
So we have this schema resolver, and it
has multiple functions which are actually not implemented.
For example, we are looking for todos to
serve the query, and it's mentioned here that
this todos function is not implemented. This is where we need to put our
code. Similarly, it has created a resolver for createTodo,
which is mentioned here. So we require one more resolver for
create todo, and it has created a resolver that is also
not implemented, so we need to put the logic over there to create
it. Now I just want to show you the entire end-to-end
flow, where I can perform basic CRUD operations.
And I'm very fond of AWS. So what I've done so
far is I created one AWS sample where
it's going to get the data from
DynamoDB and update the data in DynamoDB.
But before going into more detail on how this is going to be implemented,
I first created a very simple sample where we can see
how we can perform basic CRUD operations on DynamoDB.
The main thing with DynamoDB is that the code is a bit complex to
understand, but there are a lot of libraries which are
wrappers around DynamoDB
that make our lives easier, and one of them is dynamo. Now, with
dynamo, we need to mention the schema.
So it's like we have an object on which we just need to
do CRUD, the basic create, update, and
delete operations. And we need to say: okay,
this id field is linked to this particular id attribute in
the DynamoDB table. Similarly, user id will link
to the user id attribute, and likewise we have other properties
which link to specific attributes in the DynamoDB
table. Consider that, because DynamoDB is a key-value
database, we can't strictly use the term column, but loosely
we have columns here: id,
user id, text, and done. These are the
attributes, or we can say properties, of a particular document
in DynamoDB. And then
we're just initializing a new AWS session. This new
session is needed whenever we want to do anything related
to AWS; we need to initialize this AWS session. Here we can
also provide which region we want, and we can also
provide the credentials. I'm going to pass the
credentials and region using environment variables, which I'm going to show you
in a few minutes. And then I'm just initializing my
dynamo, which is a wrapper over the DynamoDB
library provided by AWS. And then I'm just selecting a database
table; I already created a table to save
us all some time. And then I'm just
creating a struct value for the table and
running a Put command, which is going to add this information.
And then, after I change the
schema a bit, I'm passing a user id, and
at the end I'm just making another call to the
DynamoDB table. I'm saying: okay, give me the information based on
the user id, and it's just returning me this result, and
this result will be visible. A very simple example.
So the main purpose
of showing this example is that for our actual
application we are going to use DynamoDB, where we are going to see the things
end to end. Okay, now for the
gqlgen code which has been generated so far: I modified that
code. So this is the updated
code so far; it's a bit cleaner
compared to what was generated. So let me quickly walk you through
that code. First, for the AWS
client, I just provide the session information. Okay, this is the session information
for us. We have the generated GraphQL code, which is the same auto-generated
output. Now we have the Todo
model, which I showed you before. We have this id,
user id, text, and done, with the dynamo tags on them.
Now I'm just telling gqlgen:
do not create a new model for us.
So it's not created over here,
and this is possible by providing a path where it
can use autobind. So my todo models,
this model folder, is part of this autobind configuration.
So gqlgen will say: okay, this model already exists, so I don't need to create
another one. And that's why it has just skipped that model.
Then I created a service, because the code which was
in the dynamo sample was just for the PoC, not for the
actual code. So I just created a todo service, and this todo
service is actually doing two things.
First, it's adding a todo.
In that case, it's just generating a unique id,
it's making a DynamoDB table call, it's making a Put request,
storing the data, and then we are good to go. The new
item will be added when someone calls this service.
And similarly, we need to fetch all the
todo information for a user.
So here I'm passing a user id; it is
scanning with that user id, and then we
are getting the results, which are already mapped to our model. So this is
the simple service that we have. Now coming
back to the server.go code. So this code has been modified a
bit so that we use the Gin framework. So far, the auto-generated
code from gqlgen does not have this Gin integration.
They have documentation on how we can integrate with Gin,
and I followed the same documentation, and this is the code we have.
Okay, so again the code starts from a very simple
main. Here we are mentioning the port number; if the port number is not mentioned,
we are going with the default port number. We are initializing the Gin router.
We are actually registering two endpoints: one is /query, where
we are going to perform the GraphQL operations, and the other is the playground. So
gqlgen supports a playground. We can use that same playground,
or we can use something else; I'm going to show you one more Chrome
extension that you can use to play around with it.
So in this GraphQL handler,
I'm just initializing the new dynamo client,
initializing the new todo service, and I'm passing this
information to the default handler server.
This handler server is actually provided
by gqlgen. I'm just initializing this handler server, and then,
when the server is initialized, I'm passing this information through
the Gin context. For
the resolver, the one which has been generated, here
I'm passing the todo service as a dependency, so that the
resolver has all the information about the todo service.
So this is the basic configuration that we have done so far,
and now gqlgen
knows about the todo service and
has an instance of the todo service.
Now, going back to my schema again:
I need to fetch the data for the todos, and I need to create
a new todo. For that, gqlgen has already
created resolver stubs for us, and here we
are filling in that information. So for createTodo, which
is the mutation operation, we are just adding a
todo, which includes the text, the user id, and the done status, the current
status, which is false. And for fetching all
the todos, we are just passing the user id, and it will return
us the information about all the todos based on that user
id. To quickly see this in
action, I'm going to run it here.
So let me stop my existing server,
let me go to this specific folder, and
here, like I said, I'm going to run this
against the AWS cloud, and I'm going to use my
existing profile. So you can pass your AWS credentials here, or
you can also configure an AWS profile for the same purpose; I'm
just configuring the AWS profile. And thanks to Tech9,
they are allowing me to use their AWS cloud for this purpose.
And I'm mentioning the region information. So these two pieces of information are
required for DynamoDB, and then
I can run the same command to start the server with go run.
Now I can quickly show you
the entire thing. Now
it has started the server, and we can see that it has two endpoints:
one is a GET endpoint which is pointing to the playground,
and we have the POST endpoint, which is
actually handling the resolvers, the schema, and everything.
So when we open port 8080, where the gqlgen
server is running, it is going to show us the playground. This
is the default playground provided by
gqlgen. And we can also see the information
about the schema. If you look at the documents,
it has query and mutation. The query has this todos, where we
need to pass the user id, and the mutation is where we
can create a new todo. So I'm
quickly going to show you a simple example. We have query to do if
you can see to do, it's asking for user id and
then text and user id. So this is the
information when we are saying that, okay, this is the information we are passing you.
And at the written just return me text and user id.
If I'm going to run this thing. We can see we are getting text and
user id. If I'm going to bypass any of the field, let's say
I only need a text because I already know the user id this
information is going to become. Now we can see that the information which
is coming from backend is
something that we are asking for. So this is one of the kind of a
major advantage of graphQl.
Now, apart from queries, we can also run a mutation to create new data. So here we are saying: create a todo, pass some random values for the text and the user ID, and then return me only the ID. Then, if I run the same query again, this time for the second user, you can see it returns just the data for user two, and this is for user one. So that is all about the playground.
Now let's see how we can deploy the same thing on AWS Lambda. For Lambda, what I have done is extract the common code, which is mostly the router code, into a router.go file. We can quickly see this router: it holds the GraphQL handler information, it initializes the router, and it returns it. Then I created two different folders: a local folder, which is for running in offline mode, and a lambda folder with its own main.go, which is for running the application on Lambda. The local main.go is nothing but initializing the router and passing it the port number, and the Lambda one has the same configuration, initializing the router as you have seen on the code-sharing slide.
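The split described above can be sketched like this (folder and function names are assumptions; the Lambda side would rely on an API Gateway proxy adapter such as awslabs/aws-lambda-go-api-proxy, shown only as a comment so the sketch stays self-contained):

```go
package main

import (
	"fmt"
	"net/http"
)

// newRouter lives in the shared router.go: it builds the full GraphQL
// handler once, for both entrypoints.
func newRouter() http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/query", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "graphql response")
	})
	return mux
}

// runLocal is the local/main.go entrypoint: plain HTTP on a port.
func runLocal(addr string) error {
	return http.ListenAndServe(addr, newRouter())
}

func main() {
	// lambda/main.go would instead wrap the very same router in a proxy
	// and hand it to the Lambda runtime, roughly:
	//   adapter := httpadapter.New(newRouter()) // aws-lambda-go-api-proxy
	//   lambda.Start(adapter.ProxyWithContext)  // aws-lambda-go
	fmt.Println(newRouter() != nil) // prints true
}
```

The point of the pattern is that only the thin entrypoints differ; the router, schema, and resolvers are shared untouched between local and Lambda.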
And then we are just creating a proxy and starting the Lambda, and that's it. The deployment is almost the same; nothing has changed. The only thing that has changed is that, because this application requires DynamoDB access, we added DynamoDB access here: we are passing a customized role which says the function can perform get, put, query, and update operations on the DynamoDB table. The function definition is otherwise the same: the code path is the lambda folder, we have a customized handler name, it uses the go1.x runtime, and it uses the specific role created above, the one with the DynamoDB access. The rest of the things are the same. Now there is one more very small difference. If you look at the events, there are only two: a GET event for the playground, and a POST event for the todo query or mutation. In GraphQL, everything is mostly a POST operation: if you want to update something, you use the HTTP verb POST, and if you want to fetch data, you again go for a POST operation. So the entire configuration here just says we need to open two endpoints, one GET and one POST; POST is for the query and GET is for the playground, and that's it.
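To illustrate the "everything is a POST" point, here is a hedged sketch of how a client frames a read and a write as the same kind of request (the URL and the schema's field names are assumptions):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// graphqlRequest builds the one request shape GraphQL uses for reads and
// writes alike: an HTTP POST whose JSON body carries the operation text.
func graphqlRequest(url, operation string, vars map[string]any) (*http.Request, error) {
	body, err := json.Marshal(map[string]any{
		"query":     operation, // queries AND mutations both go in this field
		"variables": vars,
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	// Fetching data is still a POST; only the playground page itself is a GET.
	req, err := graphqlRequest("https://example.com/query",
		`query($userId: String!) { todos(userId: $userId) { text userId } }`,
		map[string]any{"userId": "1"})
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method) // prints POST
}
```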
Then again we run the same build command to build the binary, and the same deploy command. During the deploy it is going to ask you a few things, such as the CloudFormation stack name, your bucket name, and other information. Once the deployment is done, you get a URL. This time I'm going to show you a different Chrome extension for the playground, and we'll see how we can play around with the same thing while the code is deployed on AWS Lambda.
So this is a different variation of the playground. We have seen the playground provided by gqlgen; this one is a Chrome extension, which is a fairly standard playground for GraphQL operations. We can see the schema over here, and then we can perform the same operations. We can again run the query, pass the user information, and it returns the data. We can quickly check that a different user returns different data; I can check for a specific one. The important thing is the URL you are seeing here: it is actually pointing to an API Gateway, and this is the endpoint we are hitting. Similarly, we can also do the mutation here: I can put some more content in, create it, and then run the same query again for the same user. And now you can see the value I just put in is available. So this is all about how we can use the gqlgen generator for creating the resolvers and how we can deploy this on AWS Lambda. Thank you so much for watching this presentation; I hope you have learned a lot of things from it. Before I sign off, I just want to give some information about my current company. I'm working at Tech9, one of the fastest-growing companies in the world. We have offices in the US, India, and Latin America; you can reach out to us on LinkedIn, or you can call us on the mentioned mobile number. So thank you so much.