Transcript
Hello everyone, welcome to the session on streamlining API governance.
Kubernetes native API management.
My name is Pubudu Gunatilaka.
I'm a senior technical leader at WSO2.
My experience extends across the API management landscape, as well as
cloud native technologies including container orchestration systems,
service meshes, and many more.
In this talk, I will highlight the Kubernetes Gateway API, exploring its
transformative impact not only in the Kubernetes ecosystem but also in the
broader API management landscape.
We'll dive into how this new API is reshaping the way we approach API
governance, traffic management, and service extensibility within the Kubernetes ecosystem.
Let's kick things off by diving into what a typical application architecture
looks like in Kubernetes. Here I have a design and architecture for a movie
ticketing system where users can book tickets through the platform. The system
consists of various services such as order, payment, movie, and theater services.
To make the application accessible to external client applications,
the Ingress Controller is used.
However, you will need to implement essential functionalities such
as security, rate limiting, and others within the service itself,
as the Ingress Controller does not inherently provide these capabilities.
Let's dive into the ingress resource in Kubernetes.
Here is an example of how you can define a service within an ingress resource
to make it accessible externally.
Typically, host based and path based routing are the most common methods
used to set up ingress routing.
You will need to specify the service name and the port for this approach.
For advanced functionalities such as path rewriting, CORS configuration, session
management, and timeout settings, you will need to leverage annotations provided by
the specific Ingress controller in use.
In this case, the annotation handles path rewriting, mapping the incoming
request to a target path such as /order-list.
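For illustration, here is a minimal sketch of such an Ingress definition, assuming the NGINX Ingress controller; the host, path, and service names (order-service and so on) are placeholders for the movie ticketing example rather than the exact manifest shown in the talk.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-ingress
  annotations:
    # Controller-specific annotation: rewrite the matched path to /order-list
    nginx.ingress.kubernetes.io/rewrite-target: /order-list
spec:
  ingressClassName: nginx
  rules:
    - host: movies.example.com          # host-based routing
      http:
        paths:
          - path: /orders               # path-based routing
            pathType: Prefix
            backend:
              service:
                name: order-service     # service name and port, as described above
                port:
                  number: 8080
```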
While annotations offer flexibility, they can become difficult to manage as
the configuration grows more complex.
This approach does not scale well as handling numerous annotations
across various services can lead to maintenance challenges and
increase operational overhead.
Let's take a look at some challenges with the Kubernetes Ingress when it
comes to managing traffic and providing advanced functionalities for services.
One of the key constraints is that Ingress is primarily designed to
handle HTTP and HTTPS traffic.
It lacks native support for other protocols like TCP,
UDP, WebSocket, and so on.
This means if the application requires protocol flexibility, you
will need to use additional tools or controllers to manage that traffic.
Ingress only supports basic routing configurations such as
host based or path based routing.
It does not have out-of-the-box support for more advanced capabilities such as
traffic splitting, which allows you to distribute traffic between different
versions of your application, or header-based routing, which directs
traffic based on specific request headers.
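For contrast with what the rest of this talk introduces, here is a minimal, hypothetical HTTPRoute fragment from the Gateway API showing both ideas: a header-based match and weighted traffic splitting between two service versions. The service names, header, and weights are illustrative only.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: order-canary-route
spec:
  rules:
    - matches:
        - headers:                       # header-based routing
            - name: x-canary
              value: "true"
      backendRefs:
        - name: order-service-v2
          port: 8080
    - backendRefs:                       # traffic splitting across versions
        - name: order-service-v1
          port: 8080
          weight: 90
        - name: order-service-v2
          port: 8080
          weight: 10
```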
While Kubernetes is highly flexible, the Ingress resource
itself isn't very extensible.
If you need additional features, you often have to rely on custom
controllers or third party tools.
This makes the setup more complex and harder to maintain
as your requirements evolve.
Security configurations within Ingress are quite minimal.
For more advanced security features like mutual TLS, you will need to add
extra configurations or use custom implementations which increase the
complexity and maintenance burden.
Ingress lacks built in tools for detailed observability.
It does not provide native metrics or traffic monitoring capabilities, making
it harder to get insights into the performance and behavior of your services
without adding extra monitoring tools.
Lastly, Ingress does not offer granular control over traffic behavior, such as
retries, timeouts, or rate limiting.
These are crucial for fine tuning the performance and reliability of your
services, and without native support, you will need to configure these
features manually or rely on additional tools, which can complicate your setup.
Overall, while the Kubernetes Ingress provides basic functionality, it has several
limitations when it comes to more advanced traffic management and observability.
You will need to adopt additional tools or configurations to fill
these gaps as your applications grow and become more complex.
In 2019, the Kubernetes community introduced the Kubernetes Gateway API,
a significant evolution in the landscape of API gateway management
within Kubernetes. This new API was developed with the specific goal of
addressing many of the limitations found in the traditional Ingress resource.
The Gateway API offers a more robust and flexible set of features,
providing greater extensibility and functionality than
the Ingress specification.
It goes beyond simple routing by incorporating capabilities often
associated with API management such as enhanced traffic control,
security, and observability.
In essence, it brings API management like features natively into Kubernetes,
enabling more granular control over traffic routing, protocol handling,
and advanced configurations, making it a powerful alternative to
Ingress for complex applications.
The Kubernetes Gateway API specification introduces role-oriented Kubernetes
custom resources, which are extensions of the Kubernetes API that give users
the ability to define their own resource definitions for their use cases.
In this context, infrastructure providers manage the GatewayClass, while
Kubernetes cluster administrators oversee the Gateway custom resource.
API or application developers play a pivotal role in crafting
HTTP routes, incorporating essential information related to the API.
This separation of roles ensures a streamlined and efficient workflow
within the Kubernetes environment.
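As a rough sketch of how those roles map to resources, the infrastructure provider's GatewayClass and the cluster administrator's Gateway might look like the following; the controller name and listener details are placeholders, not taken from the talk.

```yaml
# Managed by the infrastructure provider
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: example-gateway-class
spec:
  controllerName: example.io/gateway-controller   # identifies the implementation
---
# Managed by the cluster administrator
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: default-gateway
spec:
  gatewayClassName: example-gateway-class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        kinds:
          - kind: HTTPRoute            # application developers attach HTTPRoutes here
```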
The primary aim of the Kubernetes Gateway API is to standardize and
simplify the entire API gateway management process within Kubernetes.
It offers a user-friendly configuration approach, streamlining
the overall management experience.
The collaborative nature of this initiative ensures that gateways,
regardless of their implementation, can seamlessly collaborate,
communicate using a common language, and function as a single entity,
as integral components of a unified system.
Through the standardization of APIs, the Kubernetes Gateway API
unlocks the potential for dynamic configuration, efficient scaling,
and seamless service discovery.
Furthermore, this standardization facilitates smooth integration with
the broader Kubernetes ecosystem, extending its capabilities to
include tooling, support for diverse frameworks, application management,
security, monitoring, and logging.
One notable advantage is the cohesive and collaborative
community serving as a driving force behind continuous improvement.
This community ensures that standards evolve and adapt in
sync with the dynamic needs of the ever-growing Kubernetes ecosystem.
Here is an example of how an equivalent Gateway API definition
would look when compared to the Ingress resource for the order service.
In this case, the application developer takes the responsibility of writing
the HTTP route for the service.
This HTTPRoute consists of four key sections.
In the hostname section, you can specify the hostname for exposing your API.
The rules section allows you to define rules for the API.
In this instance, a rule for the POST method of the orders
resource has been created.
Moving on to the filters section.
Two filters have been defined here.
URLRewrite and RequestHeaderModifier.
The URLRewrite filter transforms the /orders path to /order-list, while the
RequestHeaderModifier adds a new header called X-Order-ID to the request.
Under the backend refs, the backend service name and port are specified.
The flow is structured such that if a request arrives with the POST method on the
orders resource path, these two filters are applied and the request is ultimately
routed to the backend service, which in this case is the order service.
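Putting those four sections together, a sketch of the HTTPRoute being described might look like the following; the gateway name, hostname, header value, and port are assumptions made for illustration.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: order-route
spec:
  parentRefs:
    - name: default-gateway             # Gateway managed by the cluster administrator
  hostnames:
    - movies.example.com                # hostname section: where the API is exposed
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /orders
          method: POST                  # rule for the POST method of the orders resource
      filters:
        - type: URLRewrite              # rewrite /orders to /order-list
          urlRewrite:
            path:
              type: ReplaceFullPath
              replaceFullPath: /order-list
        - type: RequestHeaderModifier   # add a new header to the request
          requestHeaderModifier:
            add:
              - name: X-Order-ID
                value: "example-value"
      backendRefs:
        - name: order-service           # backend service name and port
          port: 8080
```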
When we compare the two Kubernetes resources, the HTTP route resource
stands out as being far more structured and flexible than the
traditional Ingress resource.
HTTPRoute allows users to define detailed filters or execution paths for each
routing rule, providing much greater control over how traffic is managed. This
increased flexibility enables application developers to implement more fine-grained
access control and apply additional quality-of-service features, which are
not as easily achievable with Ingress.
In the context of API management, these filters can function as a set of
policies applied to the resource or to the API, such as rate limiting, traffic
shaping, or even security policies like authentication and authorization.
This level of control makes HTTPRoute a powerful way of managing
complex traffic scenarios, enhancing performance and security for your
APIs within the Kubernetes environment.
One of the standout features of the Kubernetes gateway API is the flexibility
it brings to service management.
From the start, the project was designed with extensibility in mind.
For example, in this HTTP route resource, you can specify the Kubernetes service
name and port used in the backend refs.
However, there are scenarios where more detailed configurations are
required, such as defining the service protocol, timeout settings,
and other backend specific options.
In such cases, the Gateway API allows implementation providers to create
their own custom resource definitions.
For instance, you can define a custom resource called Backend, where you can
include additional information not covered in the default configuration.
Once this custom resource is in place, you can reference it under backendRefs in
the HTTPRoute for more detailed control.
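A hedged sketch of that idea follows; the API group, kind, and fields here are hypothetical, meant only to show how an implementation-specific Backend resource could be referenced from backendRefs via group and kind.

```yaml
# Hypothetical implementation-specific resource with extra backend details
apiVersion: gateway.example.io/v1alpha1
kind: Backend
metadata:
  name: order-backend
spec:
  protocol: https                        # backend protocol not expressible on a plain Service reference
  timeout:
    upstreamResponseSeconds: 30
  services:
    - host: order-service.default
      port: 8443
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: order-route
spec:
  rules:
    - backendRefs:
        - group: gateway.example.io      # reference the custom resource instead of a Service
          kind: Backend
          name: order-backend
```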
This same extensibility applies to filters, enabling developers
to define custom API policies tailored to their specific needs.
Whether it's traffic management, security, or performance enhancement, The Gateway
API's ability to integrate custom resources offers a much richer framework
for managing Kubernetes services.
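For filters, the Gateway API reserves the ExtensionRef filter type for exactly this purpose. The sketch below attaches a hypothetical RateLimitPolicy custom resource to a route rule; the group, kind, and policy fields are illustrative, not a specific vendor's CRD.

```yaml
# Hypothetical custom policy resource defined by a Gateway API implementation
apiVersion: policies.example.io/v1alpha1
kind: RateLimitPolicy
metadata:
  name: ten-per-minute
spec:
  requestsPerUnit: 10
  unit: Minute
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: order-route
spec:
  rules:
    - filters:
        - type: ExtensionRef             # plug an implementation-defined filter into the rule
          extensionRef:
            group: policies.example.io
            kind: RateLimitPolicy
            name: ten-per-minute
      backendRefs:
        - name: order-service
          port: 8080
```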
WSO2 APK is an implementation built on top of the Kubernetes Gateway
API, extending its capabilities with additional custom resources.
These custom resources have been designed to address gaps in the
default gateway API, providing more comprehensive configurations.
This includes definitions for backend services, as well as policies for rate
limiting, authentication, scopes, and more, allowing for enhanced control
and functionality within the platform.
With the Kubernetes Gateway API, the Ingress Controller is replaced
by an API Gateway, enabling users to apply more advanced quality of
service features to their services.
In contrast to the Ingress Controller, where developers had to implement security,
rate limiting, and other quality-of-service features within the service itself,
making it difficult to scale, the Gateway API allows the API
Gateway to handle these responsibilities.
This shift enables service developers to focus on their core business
logic, while leaving quality of service management like security
and traffic control to the gateway.
Let's look at some historical context of API gateways.
Early gateways functioned as converters, enabling communication between
different network architectures.
Then there were proxy servers, hardware load balancers.
In the early 2000s, application delivery controllers such as NGINX,
Citrix, and F5 were introduced.
With web services APIs, SOAP gateways came into the picture by enabling integration,
protocol conversion, and security enforcement for SOAP based services.
As web APIs and service oriented architectures gained prominence,
REST API gateways were introduced.
The evolution continued with the surge of microservices, edge
computing, and IoT, leading to the introduction of micro gateways.
In the present landscape, we witness the emergence of cloud gateways
equipped with advanced AI and ML capabilities, marking the latest chapter in
the evolutionary journey of API gateways.
With the Kubernetes Gateway API, we move forward one step further.
The API gateway is the crucial runtime in the API management architecture.
Envoy has emerged as the preferred solution for re architecting API
gateways, since it is specifically designed for cloud native environments.
Being open source, it benefits from a community-driven approach,
with developers worldwide contributing enhancements and making it
a dynamic and ever-evolving tool.
A noteworthy point is the adoption of Envoy as a runtime for Ingress
controllers in Kubernetes.
Several existing controllers leverage Envoy as a production ready edge
proxy, solidifying its reliability and performance in real world scenarios.
It does not just stop at being a proxy, it comes packed with essential
API management features such as authentication, authorization, rate
limiting, response caching, etc.
Envoy also comes with the support for REST and gRPC services.
Extensibility features include native filters, Lua filters, Wasm modules, and more,
where you can write your own code and plug it into Envoy.
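As one small example of that extensibility, here is a sketch of an inline Lua filter in an Envoy HTTP filter chain that adds a header to every request; this is a generic illustration, not the configuration of any particular gateway mentioned in the talk.

```yaml
# Fragment of an Envoy HttpConnectionManager filter chain
http_filters:
  - name: envoy.filters.http.lua
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
      default_source_code:
        inline_string: |
          -- Runs on every request passing through the listener
          function envoy_on_request(request_handle)
            request_handle:headers():add("x-handled-by", "lua-filter")
          end
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```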
Envoy is incredibly versatile, fitting into different deployment
scenarios effortlessly.
Whether it's serving as a sidecar in a service mesh or hosted as a front
proxy, Envoy adapts to the architecture.
In a nutshell, Envoy isn't just a tool, it's a game changer.
In the API management scene,
the API gateway is a central component of the API management platform, but it is
just one part of the larger architecture.
The platform is divided into two planes, the control plane and the data plane.
The control plane includes the API management portals, while the
data plane consists of runtime components and your microservices.
The control plane sends control instructions to the data
plane, guiding its behavior.
The architecture offers flexibility, allowing you to deploy multiple
data planes across different cloud environments, regions,
or even in on premise setups.
A major advantage is that with a single control plane, you can efficiently
manage and govern all the data planes, no matter where they are deployed.
In this setup, the API developer or the application developer is responsible for
creating services and configuring the API gateway using Kubernetes custom resources.
However, the key control instructions such as subscriptions, applications,
and other various policies still come from the control plane.
These artifacts are essential because without them, the APIs are not accessible.
The control plane ensures that API governance is centrally enforced,
managing critical aspects like access control, security, and traffic management.
It acts as the back office for API product managers, providing a platform where
they can enhance the API with additional features such as documentation, policy
enforcement, rate limiting and more.
This centralized governance allows for streamlined management
and consistent application of policies across all services.
The Kubernetes Gateway API specification ensures that all
gateways adhere to a unified standard.
This standardization enables a single unified control plane
to effectively manage multiple API gateway instances
within a single installation, regardless of their implementation.
This unified approach simplifies governance, streamlines operations,
and enhances the overall manageability of diverse API gateway instances.
With the Kubernetes Gateway API standardization, API gateways are
evolving into a vital but commoditized part of the infrastructure.
You may remember that in the early days you worried about the file format
of your file system, whether it was FAT32 or NTFS. Today you are not much worried
about it. Sooner or later this will happen to API gateways as well, as they
become a commoditized part of the infrastructure.
Once the API gateway becomes a commoditized part of the infrastructure, developers
are freed from the complexities of gateway management. They can focus
more on building and managing APIs and other core development tasks. With this,
focus will move to the API management aspects, which include API
lifecycle management, API governance, API marketplace, version management, API
productization, API insights, and more.
Although these features are available in existing API management solutions,
to fully leverage the benefits of the cloud native ecosystem, many of these
features will need to be redesigned.
This will involve seamless integration with various third-party services
available in the Kubernetes ecosystem to enable a more comprehensive and
robust API management solution.
With that I would like to wrap up my session.
I hope you found it informative and engaging.
Thank you for your time, and I trust you gained valuable
insights from the discussion.