Transcript
This transcript was autogenerated. To make changes, submit a PR.
So hello everyone, my name is Rick Spurgeon. I'm a developer advocate
here at Kong. I provided some links here for you to reach
out to me if you have any questions about the topic today. Today I
want to talk about API Ops just a little
bit in the beginning, and then I want to do something really fun, which is
use some AI to build a new API and an API Ops workflow and showcase how
API Ops helps you deliver APIs to production.
So let's get started by just talking about API Ops first.
API first is a methodology for defining API
specifications as the source of truth for APIs.
What that means is that a textual representation, or specification, of an API defines its behaviors, and it can be used to generate documentation and code libraries for clients and servers, and it really provides a great source of truth for your API at runtime. API gateways provide a key abstraction for API behaviors. So things like security and traffic control, things like that, are critical capabilities we get from an API gateway. It prevents us from having to write that functionality into API service code on the back end. So how do we bridge these two topics?
With Kong, we can use a declarative tool called decK to bridge these technologies. The way you would do that is you might have your API specification, typically for REST APIs in the form of an OpenAPI specification. You can use decK to generate from that specification an API gateway, or Kong gateway, declarative configuration. That configuration can then be synchronized, or applied, to a Kong gateway to enact those behaviors onto the gateway.
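As a rough sketch, the core of that bridge is just two decK commands. The exact subcommands and flags vary across decK versions, and the file names here are placeholders:

```bash
# Convert an OpenAPI spec into Kong declarative configuration
deck file openapi2kong --spec openapi.yaml --output-file kong.yaml

# Synchronize that configuration to a running Kong gateway or control plane
deck sync --state kong.yaml
```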
So this is all nice and simple,
but as teams have grown to adopt microservices and
domain driven design, API Ops has needed to grow with it.
It's gotten more complicated. So practical API Ops
requires more complex behaviors to enable what we like to call federated
API ops. You want to empower development
teams to be able to build and deliver their APIs,
and you want to give them the ability to self-service, to control their own destiny and their API delivery, while at the same time a platform team might still need to enable governance capabilities.
And we want to do these two things together. So federated means
enabling these teams to do as much of this as they can on
their own, while enabling a centralized team
to maintain this centralized control and potentially own
and deliver the API to the actual infrastructure.
So we can also do this with Kong, using our Kong declarative tool decK. Beyond the generation capabilities, decK provides things like transformations, merging, and verification or linting, and then of course the ability to deliver it to the production, or running, Kong gateway. And we can build these capabilities into pipelines or workflows that match what you typically see in CI/CD systems. So for instance, GitHub Actions allows you to declare workflows, and you can imagine these pipelines match those really well. The teams could have different repositories, and the code and the declarative configurations could be sourced and copied across them. And as we're going to see here today, we can use common development workflows like PRs, reviews, and automated systems to deliver these things using very well known GitOps-style approaches. Right, so I
want to get started because I have a long demo here. I want to try
to get to it. So let's get started with a coding demo.
The fun thing we're going to do today is we're going to use AI to help us deliver some of this. And we're just going to start from the very beginning. I have a totally empty repository coding folder here, and what I've done is I've found a really fun tool called Chatblade. Chatblade allows you to interact with the ChatGPT API directly on the command line. I've aliased that to the command house, so that I can just make this fun, interactive, and quick to type. You can see that I've just aliased that and used my credentials to run it. So the Chatblade command allows us to interact with ChatGPT directly, using something akin to the web browser you might use with ChatGPT.
So we could say something like, ask, you know, what is an API gateway? And we stream back a response from ChatGPT just like you would in a web browser. But what I really love about Chatblade is it can interact using common Linux principles, using standard in and standard out. So we can, say, pipe a question to it: briefly describe API Ops. I'm passing that directly into the Chatblade command and it will again stream back a response for us. We're going to use this to build code. How can we do that?
So the way I've started to use this is I ask it a very specific question: write me a bash script that initializes this folder as a Node.js project. You'll see that I pass that through the Chatblade command and then again through the tee command, which allows me to pipe it out to a file but also vet what we get back from ChatGPT. You of course wouldn't want to run anything that you haven't first vetted from the AI.
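The pattern looks roughly like this. The prompt, flags, and file name are illustrative; the point is that the output lands in a file you can read before you ever execute it:

```bash
# Pipe a prompt into Chatblade and tee the streamed answer into a file.
# (Flags vary by Chatblade version; -e asks it to extract just the code block.)
echo "Write me a bash script that initializes this folder as a Node.js project" \
  | chatblade -e | tee init.sh

# Review the generated script before running it.
less init.sh
bash init.sh
```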
So here it's going to build a script for us to run. You can see here it's a bash script. It'll initialize a git repository, it's setting up a package.json file, and it's setting up a .gitignore, installing dependencies, et cetera, et cetera. So I've added that quickly, and I've run it a few times. We can go ahead and do this and it will run this for us.
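For reference, the generated bootstrap script does roughly the following. This is an illustrative reconstruction, not the exact output, and the choice of Express as the dependency is an assumption:

```bash
# Illustrative reconstruction of the generated bootstrap script
git init                      # initialize a git repository
npm init -y                   # create a default package.json
echo "node_modules/" > .gitignore
npm install express           # install dependencies (Express is an assumption)
```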
We now have a functioning Node.js repository here locally, just by asking ChatGPT a question. API first, as I said at the beginning, is really important. We want to specify APIs using a specification. So I'm going to ask ChatGPT to write us an OpenAPI spec file. I want to call attention to this little -l flag that I'm passing to the Chatblade command here. That means use the last session for ChatGPT. It basically says to ChatGPT: continue the conversation. This will allow me to ask questions in sequence and continue a conversation with the AI as I go. I don't have to rephrase or re-prompt every time; with -l, it's just using the previous context for the conversation. So here I'm asking it to write an OpenAPI spec for a very simple API, just a hello world API, and we're going to tee that off to look at it and also write it to an OpenAPI spec YAML file so you can see what that looks like there.
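The generated spec ends up looking something like this. This is an illustrative sketch rather than the exact output:

```yaml
openapi: 3.0.3
info:
  title: Hello World API
  version: 1.0.0
paths:
  /hello:
    get:
      summary: Return a hello world greeting
      responses:
        "200":
          description: A greeting message
```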
Great, now I want to implement it. So how would I implement it? ChatGPT, please write us a JavaScript web service that implements the previous OpenAPI spec. Going back to that context of the conversation, ChatGPT can just read back the OpenAPI spec that it had previously generated and create a very simple Node.js application for us here. So that's now in the server.js file. We have an OpenAPI spec file there as well. How would we run that? Well, everyone loves Docker. We use Docker, so let's go ahead and do that: build a Dockerfile for the JavaScript service. Again, the context allows us to ask very simple questions. Great, it's created a Dockerfile for us, and we can then ask ChatGPT: let's go ahead and create a script that will build the image and run it. I'm also going to ask ChatGPT to run the Docker container on a specific network that will allow us to bridge it later to the API gateway that we're going to deploy.
So here we've asked it to create us a little script. It will build the container image, create a network, and then run it. I think that might be a little bit of a problem if the container already exists, but let's go ahead and do that.
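The generated script does roughly the following. This is an illustrative reconstruction, and the image, container, and network names are assumptions:

```bash
#!/usr/bin/env bash
# Build the service image, create a shared Docker network, and run the container.
docker build -t hello-world .

# Create the network if it doesn't already exist; the gateway will join it later.
docker network create kong-net 2>/dev/null || true

# Remove any previous container with the same name, then run the service.
docker rm -f hello-world 2>/dev/null || true
docker run -d --name hello-world --network kong-net -p 3000:3000 hello-world
```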
So let's go ahead and run it and see if that
works for us. So we
have a hello world service running.
Can we ask it a question? Yeah, hello world. So our service
is running here on Docker and we're up and
going. So now we want to put an API gateway in
front of it and start to build out API ops. So how will we do
that? Today I'm going to show you Kong Konnect. Kong Konnect is Kong's SaaS product that provides, among many other things, a hosted control plane. The hosted control plane allows us to treat API gateways as a single unit, to scale obviously up and down. And what we can do is we can configure this hosted control plane and it will manage the runtime data planes, or the actual API gateways, for us. Konnect provides a bunch of other features that I'm not going to go into today: analytics,
dev portals, et cetera. To get started, I want to use APIs, because I'm a believer in APIs and automation. I'm going to create a personal access token here that will allow me to work with Konnect programmatically. So I'm going to go in here and save that token that I just created, so that I can use it directly on the command line. And I can do some things like use the Konnect API to create one of these control planes, called a runtime group. Here I'm going to give it a name, hello world. I'm passing in my credential and we're creating a runtime group on the fly here using an API.
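For illustration, that API call looks something like the following. The endpoint, region, and payload are assumptions based on the Konnect runtime group API, and KONNECT_TOKEN stands in for the personal access token:

```bash
# Create a Konnect runtime group (hosted control plane) via the Konnect API.
curl -s -X POST "https://us.api.konghq.com/v2/runtime-groups" \
  -H "Authorization: Bearer $KONNECT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "hello-world", "description": "Demo runtime group"}'
```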
So we have a hello world runtime
group. Now this is our control plane and
what we want to do is deploy a Kong gateway and connect it to this hosted control plane.
The easiest way to do that in a development environment is just to run one
locally on your machine. Since I have Docker,
I'm already doing that. I'm just going to do that here. So what I'm showing here is that Konnect gives me these nice little helper functions to deploy a Kong gateway. And all of the
secrets and all of the things you see above are used to connect
the running local API gateway back up to the hosted
control plane. Before I run this though,
I need to make sure that I run the gateway on the same network as
our running service. That way they can communicate. We're using the API gateway to
proxy traffic to our service. So that connection needs
to be valid. So what we have now are two containers
running a Kong gateway and a hello world service.
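For reference, the gateway data plane container ends up being started roughly like this. The Konnect quickstart generates the full command, including the cluster certificate and control plane endpoint settings; the only change here is joining the same Docker network as the service, and all names and the image tag below are illustrative:

```bash
# Run a Kong gateway data plane on the same Docker network as the service.
# (Plus the Konnect-provided KONG_CLUSTER_* certificate/endpoint variables, omitted here.)
docker run -d --name kong-dp \
  --network kong-net \
  -p 8000:8000 \
  -e "KONG_ROLE=data_plane" \
  -e "KONG_DATABASE=off" \
  kong/kong-gateway:3.4
```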
Okay, so what we're going to do now is let's
go ahead and build out the API ops workflows. So how can we do that?
Well, we're going to use GitHub Actions, if you're familiar with it. This is a command I've set up to create some necessary folders and things that we're going to work out of in order to enable API Ops and automated workflows. We're creating a .github/workflows folder, which is the well known place that GitHub uses for its GitHub Actions workflows. We're going to create a folder called connect, and inside of this connect folder we're going to store the actionable Kong declarative configuration files, and that's what's going to drive our automation. I'm using the GitHub CLI here to create a repository and make it public, and that will actually reach out to GitHub and create that new repository. So if I go up to here, you can see we have this new repository called my API. There's a couple of manual steps I've got to do here quickly.
To make this work, GitHub Actions needs special permissions, because what we're going to use GitHub Actions for is to create PR-based workflows. When files are modified in the repository, we're going to create PRs that will then automate the driving of changes to the Konnect system. And so we have to give it the proper permissions. We also need to give it a secret, which is that same personal access token that we created at the beginning. Konnect supports service accounts and all sorts of RBAC and other security, but for the purposes of a demo, we're just going to do this here. So I'm going to copy in that same personal access token and use it for this well known variable, and this will be fed into the actions, into the CI/CD system, as it does things.
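The secret step has a CLI equivalent, roughly like this. The secret name is illustrative, since the workflow files define which variable they expect, and the workflow-permission change itself is a repository setting done in the GitHub UI:

```bash
# Store the Konnect personal access token as a GitHub Actions secret
# so the decK steps in the workflows can authenticate to Konnect.
gh secret set KONNECT_TOKEN --body "$KONNECT_TOKEN"
```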
So here we go. We actually now should have the
necessary security settings.
And before I go on, I'll note that you can ask the AI systems for help in building these GitHub Actions workflows. In my experience they're not well trained on these tools, and it's also quite difficult to express the question clearly enough to have them generate these for you. So in the interest of time, I've bootstrapped this for us and I'm just going to manually copy in some files. There are three GitHub Actions workflow files that I'm going to show you, but I'm going to show them to you within GitHub. I want to go ahead and just commit these in, because we can get the whole thing started just by doing this. Once the workflows are in the repository, they will start to be evaluated by the GitHub Actions CI/CD system. So I'm adding the files, I'm doing a commit, and I'm just going to go ahead and push them up to our repository. We'll go through the files in a second. As you now come over to GitHub, you can see all of our files are up here, and if we go into Actions we have running workflows. So we're going to go through these one at a time.
The first one I created was called convert OAS to Kong, and it lives in this file right here. What it does is it says: anytime a push happens to the main branch for this particular file, and you'll recall we asked ChatGPT to create this file for us, we're going to run this set of jobs. This set of jobs includes checking out the repository and setting up the decK tool. This is a little action that Kong provides that allows us to install decK into the CI/CD workflow here. And here's the key part. We're going to use our decK APIOps-style commands to convert that OpenAPI spec file into a staged declarative configuration. So here we're saying convert this file and output to this file. And I simulated one of these multi-stage APIOps workflows by adding a second step, which is another APIOps command we provide called add-plugins. This allows us to layer on a plugin using a JSONPath selector. So here I've said: for all the services in the input declarative configuration, add a rate limiting plugin with this five-second configuration. And here we're going to output it back to the same file, so we are using the same file as kind of a working place to build up a configuration. And as I mentioned before, we're going to stage this into a PR so that it can be reviewed and then merged prior to being acted on further. So that's workflow number one.
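A rough skeleton of that first workflow looks like the following. This is an illustrative reconstruction, not the exact file from the demo: the file names, action references, plugin file, and selector are assumptions, and the exact decK flags depend on the decK version:

```yaml
name: Convert OAS to Kong
on:
  push:
    branches: [main]
    paths: [openapi.yaml]

jobs:
  convert:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install the decK CLI (Kong provides a setup action for this)
      - uses: kong/setup-deck@v1
      # Convert the OpenAPI spec into a staged Kong declarative configuration
      - run: deck file openapi2kong --spec openapi.yaml --output-file connect/kong-staged.yaml
      # Layer a rate-limiting plugin onto every service via a JSONPath selector
      - run: |
          deck file add-plugins --state connect/kong-staged.yaml \
            --output-file connect/kong-staged.yaml \
            --selector '$..services[*]' rate-limiting.yaml
      # Open a PR containing the staged configuration for review
      - uses: peter-evans/create-pull-request@v5
```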
If we go look at this now, we see that that action completed and we have a new pull request. That pull request contains a new staged Kong file, and this is a Kong declarative configuration file. It has a service, and that service is configured on a particular port and a particular host. This matches what's in the Docker container: from our earlier request to ChatGPT, we said build a Docker image that can do this and run it like so. So this matches what we told Docker and what we told ChatGPT to do. From the OpenAPI spec generation, we have a route that matches on /hello. And we have a plugin on our service for rate limiting with a five-second configuration. So this is a combination of the convert and transform stages that I showed in the diagram earlier, and it's staged into a file that we can review.
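The staged file looks roughly like this. It's an illustrative sketch based on what's described above, and the names and format version are assumptions:

```yaml
_format_version: "3.0"
services:
  - name: hello-world-api
    host: hello-world       # the Docker container name of the service
    port: 3000
    protocol: http
    routes:
      - name: hello
        paths:
          - /hello
    plugins:
      - name: rate-limiting
        config:
          second: 5          # the "five second" configuration described above
```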
Let's pretend that everyone reviews it and everyone gives it the "looks good to me." We can merge this down, and what we now end up with is the execution of a second workflow. So let's look at that second workflow. What we're doing now is we're saying stage the Kong changes to synchronize. This is kind of like another step before production. Basically, we're going to use decK to determine what changes will occur in the production API gateway prior to actually pushing them up. You could skip this stage, but this allows us to do one more step of verification in case there's drift in, let's say, the production system. Let's look at that file real quick. Here's the workflow file. This says: on pushes to main to that staged file that we just created in the previous step, do some very similar things. Set up decK, check out the repository. What we're going to do is copy the staged file to another file and make this act as if it is the production file. So here's kind of the working file, and here's what we want to operate off of. There are other ways to do this, of course, but this is a very easy way to do so. We're going to use a decK command called diff. This will connect up to our configured control plane, in this case the hello world runtime group that we created earlier, with our configured secrets. And it will calculate a difference. This is calculating any drift that may have occurred: what is on the production system versus what's in this state file that we've just passed into it.
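The diff step boils down to something like the following decK invocation. The Konnect flags shown here are assumptions and vary by decK version, and the file path follows the illustrative layout above:

```bash
# Compare the desired state file against what's currently in the Konnect
# runtime group, without changing anything.
deck diff --state connect/kong.yaml \
  --konnect-token "$KONNECT_TOKEN" \
  --konnect-runtime-group-name hello-world
```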
We're going to use that diff to then create a PR. That PR can be further reviewed and approved or rejected, so it's like a final step before pushing into production. So if we go look at the PR that it created, this is the decK diff output pushed into the description of the PR. We can look at what files have changed, and this is the same decK file that we mentioned earlier; it's a copy of the same file. As you go forward with this, the whole file wouldn't change; it's just because we have a net new environment here that it's basically saying there was nothing before and an entirely new file has been given. So we can pretend that the platform team has looked at this and they all approve, and they can approve the PR and merge it down. And then finally one more step occurs, and that's called deploy changes to Kong. What it does is it looks at that other file, the quote unquote production file. Anytime a push to this file happens on the main branch, do the exact same things: check out the repository, set up the decK tool. But this time we're actually going to run the sync command. The sync command is the thing that actually enacts the changes, right?
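The final step is essentially the same invocation with sync instead of diff; again, the flags and file path are illustrative:

```bash
# Push the reviewed production state file up to the Konnect runtime group;
# this is the step that actually changes the gateway configuration.
deck sync --state connect/kong.yaml \
  --konnect-token "$KONNECT_TOKEN" \
  --konnect-runtime-group-name hello-world
```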
So if we go look back at that action, it succeeded. We can go in here and look at the log, and we can see that the deck sync command returned a successful return code. We know it succeeded, which means that those changes got pushed. So if we go up to Konnect, what are we going to see? Well, we have the hello world control plane that we created, and some things have changed. We can now go into the objects that are configured within the runtime group, or the control plane. And here we see we have a new service, hello world API. It points to the host hello world on port 3000, and it's enabled. We have a route that looks for requests on /hello. And we have a rate limiting plugin which is configured on the service and is configured for a five-second window. So you can see we never touched any of these screens. We enabled the deployment of this configuration using just a pure API Ops workflow, akin to a GitOps workflow; in fact, it is a GitOps workflow.
And so this control plane now should have pushed
down that configuration to our API
gateway. So if we go back to the terminal, we can call the service directly, because we're on the machine, at /hello and get a response.
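For illustration, the two calls, direct and through the gateway, look roughly like this; the ports are assumptions (3000 for the service, 8000 for Kong's default proxy port):

```bash
# Call the Node.js service directly
curl -s http://localhost:3000/hello

# Call it through the Kong gateway proxy; the response now includes the
# rate-limiting plugin's headers (e.g. RateLimit-Limit, RateLimit-Remaining)
curl -si http://localhost:8000/hello
```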
But we can also proxy through the API gateway and get a response that has passed through the API gateway and includes the rate limiting plugin's capabilities, right? So this is where the API gateway provides the value of that abstraction layer in front of the service. What else can Konnect do for us? We'll just click
through a couple of these briefly. But for example, it can show us traffic, so we can do things like run this and generate some fake traffic, and we can see analytics collecting data on our APIs, and we can look at individual requests. And this is a single pane of glass across your APIs. So you can imagine your domain-driven design teams all aggregated under this control plane, and you could see them all together and get reporting, debug issues, this kind of thing. There are also dev portals and API products; I'm going to skip those today, but this is Konnect.
What else can I tell you about this? If you'd like to reach out to us about this API Ops work, I want to show you this GitHub repository here, go-apiops, under the Kong organization. This is where we're building out our APIOps capabilities for decK. The library is Go-based, and you can reach out to us there, file an issue, or open a discussion. That way, if you want to contribute or help us build out these capabilities, we would love that. Kong Konnect provides a suite of APIs, and all of that is available to you on developer.konghq.com. I used the Konnect runtime group API to automate the building of that runtime group earlier. So there's a catalog here of various things you can do, including building out dev portals and identity management on Konnect. So I mentioned service accounts, things like that, all API driven, as well as the runtime groups themselves. So I
wanted to share a reference of the tools that I used here today: obviously the decK tool, Kong Konnect, and ChatGPT. I was just using an account that I have, and the Chatblade tool is configured to talk to that. And of course GitHub, GitHub Actions, and the GitHub CLI. And again, I would like to go back to the front: if you'd like to reach out to me to discuss this or any of the other topics about Kong, APIs, or API gateways, please feel free to reach out. Thanks for listening today.