Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, I'm Anvil Yanarachi.
Thank you for joining my session on simplifying network services for
real-world cloud native applications with Ballerina. Without further
ado, let's dive in. Ballerina,
at its core, is an open-source cloud native programming language
designed to make integration a breeze. It is a product
nurtured by the folks at WSO2 since
2016, and it was officially released in
February 2022. And that's not all.
Ballerina comes with a vibrant ecosystem offering a plethora of network
protocols, data formats and connectors.
What's exciting is that you can craft your code the way you like
it, whether through text or visually, using sequence diagrams
and flowcharts. For that added quality,
Ballerina brings in built-in, user-friendly and
efficient concurrency, all backed by safety features,
ensuring your development journey is smooth and secure.
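As a quick taste of those network primitives, here is a minimal sketch of a Ballerina HTTP service; the port and resource name are illustrative, not from this session:

```ballerina
import ballerina/http;

// Services and listeners are built into the language: this service
// attaches to an HTTP listener on port 9090, and the resource below
// maps directly to GET /greeting.
service / on new http:Listener(9090) {
    resource function get greeting(string name = "world") returns string {
        return string `Hello, ${name}!`;
    }
}
```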
In modern programming, everything is an endpoint.
Be it a database, a security handler or
even an Internet of Things or Android device, everything can be
an endpoint. The interesting thing is that the applications
we are building, cloud native apps, are increasingly dependent
on these endpoints. So effectively, what we are building is
an application that talks over the network with a massive number of
endpoints. Integration is the discipline of
resilient communication between endpoints.
It isn't easy. You know that there are a lot of
technologies and techniques designed to help build such systems:
compensations, transactions, events, circuit breakers,
discovery, protocol handling and mediation.
These are all hard problems to solve. In the
past we had two ways of solving this. On the left-hand
side, we have used, and still use, systems like
ESBs (enterprise service buses), EAI (enterprise application integration)
and business process management to solve this problem. These things
understand integration; they make integration simple. But they have
one big challenge, and that is they are not agile. On the other hand,
you can use general-purpose languages like Java or Node,
and the challenge there is that they don't understand integration;
they are not integration-simple. So the developer
has to take on the responsibility of either solving those hard problems
themselves or finding a suitable framework to support them.
Camel and Spring Integration are some common frameworks
people use. These are complex bolt-on frameworks
that aren't necessarily integration-simple, and they still
have a high learning curve and complexity. So we
came to a conclusion that we call the integration
gap: you can either be agile but
not integration-simple, or integration-simple but not agile.
As an integration company, WSO2 has been working for more
than a decade to solve these integration problems across
more than 1,000 integration projects,
and almost every project ended up on one side of that integration
gap. Now let's see how Ballerina minimizes
this integration gap. Ballerina is a cloud native language.
Why is it a cloud native language? The language's network primitives
simplify writing services and deploying them in cloud native environments.
These primitives make it easier to handle network-related tasks,
which is crucial for cloud native application development.
Ballerina is a flexibly typed language.
Structural types with support for openness are
a key feature in Ballerina. They serve two purposes:
one, enhancing static typing, and two, describing
service interfaces. This flexibility is valuable
for designing and maintaining complex systems.
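To sketch what structural typing and openness look like in practice, here is a small hypothetical example; Ballerina records are matched by shape, and open records tolerate extra fields:

```ballerina
// An open record: values may carry extra fields beyond these two.
// Use record {| ... |} instead to close the record.
type Person record {
    string name;
    int age;
};

public function main() {
    // Any mapping with a string `name` and an int `age` is a Person;
    // no nominal declaration is required, and the extra field is allowed.
    Person p = {name: "Ann", age: 32, "country": "LK"};
}
```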
Ballerina allows for type-safe, declarative processing of various
data formats, including JSON, XML
and tabular data, thus making
data processing over the network much simpler. Also, language-integrated
queries simplify data manipulation, enhancing productivity
and code quality. Ballerina programs
offer both textual and graphical representations.
This graphical form, based on sequence diagrams, provides
a visual way to understand and design program flows.
It also makes your code documentation much
simpler. Ballerina excels in providing easy
and efficient concurrency management. The use of sequence
diagrams and language-managed threads simplifies concurrent programming
without the complexities often associated with asynchronous
functions. Ballerina also enhances program reliability
and maintainability through several means.
Explicit error handling, static typing and concurrency
safety contribute to the most robust applications you can
develop. This is all achieved while maintaining a familiar
and readable syntax, making it easier for developers to
work with and understand the code base. Let's see Ballerina in
action. When it comes to Ballerina, the preferred
IDE plugin is the Visual Studio Code extension.
It also offers integrations with technologies such as Docker,
Kubernetes, OpenTelemetry, Choreo, Copilot and
GitHub. Recently, we have added support for Azure
Functions (function-as-a-service) deployment as well.
So when you are using VS Code, everything
that you need to compile, debug, run, observe and
monitor your application will be within Visual Studio Code.
So, as I mentioned, Ballerina is a graphical
programming language as well as a textual
programming language. This is a sample of what
Ballerina looks like in Visual Studio Code. You
have a simple program with a main function; on the
left side you can view the code, and on the
right side you can see the sequence diagram that explains
what this code does. What this code actually does is
connect to the GitHub API, get
the open pull requests, and then, using the Google Sheets connector,
add those pull requests to a Google
sheet. So you can simply check this sequence
diagram and understand the code better. You can also edit
the sequence diagrams to generate code as well.
It works both ways. Similar to the previous
slide, which showed a sequential program, this also
works with the integration designer.
Here is a sample Ballerina service that we have written,
a REST service, an HTTP service. It also
displays the service and the types that it has declared, and
you can view the OpenAPI configuration
of this service as well. You can navigate to the sequence
diagram view, and in that view you can also see the OpenAPI
definition of the service that you write.
VS Code also supports autocompletion of Ballerina
programs. It also knows the libraries that Ballerina
has. We have a set of standard libraries
for the standard functionality of a programming language.
You can simply import those functions
and use them in your IDE, and the IDE
will automatically complete the values there for you.
Ballerina has also cracked the challenge of mapping one
kind of data into another kind of data, which is a common
scenario in integration. You can do that in code as
well as in our graphical editor, as shown in this picture.
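In textual form, such a mapping is just an expression-bodied function between two record types; a hypothetical sketch of the shape of code the data mapper produces:

```ballerina
type Input record {
    string firstName;
    string lastName;
};

type Output record {
    string fullName;
};

// Maps one record shape to another; the graphical data mapper
// generates and edits functions of exactly this form.
function transform(Input input) returns Output => {
    fullName: input.firstName + " " + input.lastName
};
```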
It is very easy: in the graphical
view, you can just add functions to convert a value from one end
to another. Ballerina also supports
data persistence. When
you declare your objects
in Ballerina, those are called records. When you declare your records,
Ballerina will automatically map the entity-relationship
diagram between them, and you can simply generate the SQL
code that you want to execute to run this Ballerina program.
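On the command line, this persistence support is driven by the bal persist tool; a rough sketch (the exact flags vary between Ballerina distributions):

```shell
# Initialize persistence support in the current package
bal persist init

# Generate the typed client and SQL scripts from the record definitions
bal persist generate
```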
That also works graphically as well as
textually. Ballerina also supports debugging
your application via VS Code. You can
remote-debug your application, check the values that you
receive for your inputs, and verify your logic in
debug mode. Simply add breakpoints to the code and start
the application in debug mode. If you are using
multiple services in your application, you can picture
them, or graphically view them, using the architecture view
of the Ballerina VS Code plugin. So
this is a complex system that handles multiple services.
You can simply go to the architecture view to see the connections
between those services, and you can drill down into each
component to see the finer details as well.
Ballerina has built-in support for multiple
tools that are essential in integration.
One is the OpenAPI tool, which generates clients
and service skeletons for an OpenAPI specification that you
receive; the GraphQL tool, which generates client skeletons
in Ballerina for your GraphQL endpoints; the AsyncAPI
tool, which generates Ballerina service and listener skeletons for an
AsyncAPI contract; the strand dump tool, which dumps
and inspects the currently available strands of a Ballerina program and is
used for performance testing; and then
the Health tool, the FHIR/HL7
profile-to-client and stub generator tool of Ballerina.
We also support the EDI tool, a set of command-line tools provided
to work with EDI files in Ballerina. These are just
the most-used tools; there are many more tools that you can use
with Ballerina. Now let's look at how Ballerina
handles the deployment of the program that you have developed,
using Ballerina's unique features.
The bal build command will generate an executable JAR
file, which you can run with the bal tool
or with a Java JDK as well.
You can also simply do bal run in your program directory,
which will compile and run your program on your machine.
Both of these are supported in Ballerina.
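For reference, the build and run commands used throughout this session, including the cloud options discussed next, look roughly like this:

```shell
bal build                            # executable JAR in target/bin
bal run                              # compile and run locally
bal build --graalvm                  # GraalVM native executable
bal build --cloud=docker             # Dockerfile + Docker image
bal build --cloud=docker --graalvm   # GraalVM-based Docker image
bal build --cloud=k8s                # Kubernetes YAMLs + Docker image
```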
Ballerina also supports native compilation
using GraalVM. You can simply say bal build
--graalvm, which will compile machine code for
your machine's architecture and produce a native
executable. You can also build
a Docker container using the bal build command, which
packs your application into a Docker image. The Ballerina compiler
is aware of your application, and it will automatically generate
a Dockerfile and Docker image for it. You can simply say bal
build --cloud=docker, which will generate a Dockerfile
and a Docker image for your application. You can also
build a GraalVM-compatible Docker image as
well: simply say bal build --cloud=docker
--graalvm, which
will compile a GraalVM native image into the Docker image. So these
are the stats of some popular frameworks and Ballerina
when using GraalVM. As you can see, Ballerina
performs the same as or better than
the other frameworks here. Not only
that, Ballerina also generates Kubernetes artifacts for
you as well. When you write a service or a
main function or anything, you can simply say bal build
--cloud=k8s, which will generate
the YAMLs required to deploy your application to Kubernetes.
This also builds a Docker image for you, and that Docker
image will be added to your deployment YAML; you can just
say kubectl apply and give it that YAML folder,
and it will automatically deploy your application into a Kubernetes cluster
as well. Ballerina also supports function-as-a-
service. You can write a Ballerina function and
deploy it to Azure Functions or AWS Lambda;
there is built-in support for these. You can simply add an
annotation, generate an Azure Functions
executable, and deploy it to Azure Functions as
well. So you can now deploy Ballerina as a
JAR file, a Docker image, a GraalVM native executable,
GraalVM plus Docker, or Kubernetes artifacts. Now let's see what
we can do with observability in Ballerina.
Every Ballerina program is automatically observable by
any OpenTelemetry tool. Your
code's behavior and performance will automatically be published in
the OpenTelemetry format, and you can simply
view them in any OpenTelemetry-supported tool.
Distributed logging is also supported by Ballerina.
You can simply say nohup bal run and
redirect your output to a Ballerina log file, and then
you can tail this log as well. You can also view these
log outputs in Elasticsearch.
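The logging setup described above might look like this in a shell; the log file name is illustrative:

```shell
# Run the service in the background, capturing stdout/stderr
nohup bal run > ballerina.log 2>&1 &

# Follow the log as requests come in
tail -f ballerina.log
```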
All right, so let me explain my
setup a bit here. I'm using VS Code with the Ballerina
extension installed, and my current Ballerina version is
Swan Lake Update 7. Let me explain the
scenario a little bit. What I'm going to do is write a
service, an HTTP service, which will provide
location and weather information when you provide an IP
address to it. In order to do that, I'm using
two API endpoints: one is an IP API, and the other
one is a weather API. So this IP API endpoint,
what it does is return location data
when you provide an IP address. This location
data contains the latitude and longitude belonging to
that IP address. Now I'm going to extract those
two values, latitude and longitude, and then
pass them to the weather API endpoint to
get the current weather information for that particular location.
Then I'm going to combine the results of these two and
provide an aggregated weather data update for
that particular location, which contains the following
items: the last-updated time
of the current weather, the temperature and condition
fields for the weather information, and
the location information: city, country and the
ISP. So I have
exposed this one as a service with
base path geodata, which runs on port 9090,
and I have defined a resource function which does
all of this. It accepts an IP address as a path parameter
and returns the weather data, or a bad
request or internal server error depending on the
results that we receive. Let me show
the graphical view of this program real quick. You
can simply click on this icon, which will open the graphical view
and give you a glance at what you have written.
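Based on the description so far, the skeleton of this service might look roughly like this; the record fields, path layout and configurable name are my reconstruction, not the exact demo code:

```ballerina
import ballerina/http;

// API key for the weather API, supplied as a configurable value.
configurable string userKey = ?;

type WeatherData record {
    string lastUpdated;
    decimal temperature;
    string condition;
    string city;
    string country;
    string isp;
};

service /geodata on new http:Listener(9090) {
    // GET /geodata/weather/{ip}
    resource function get weather/[string ip]()
            returns WeatherData|http:BadRequest|http:InternalServerError {
        // 1. Look up latitude/longitude (plus city, country, ISP) for the IP.
        // 2. Call the weather API with those coordinates and userKey.
        // 3. Combine both responses into a single WeatherData record.
        http:InternalServerError err = {body: "not implemented in this sketch"};
        return err;
    }
}
```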
Here we have two functions,
one service, one record and three module-level
variables. The two functions are
get geo data and get weather data.
The service contains two resources, and it
fetches this weather data. I have used a
user key, which is the API key for the weather
API that I need to invoke. I am reading this value
from an environment variable when we run, and
it is configurable in Ballerina.
So let's take a look at the service view of this graphical
view. You can see the resource functions
that I have written here, graphically, similar to an
OpenAPI spec visualizer.
You can also open the low-code view of this one, which
explains the logic written inside this
resource endpoint. So we have this endpoint
and what it returns, in a form that is very clear
and easy to understand. If you want to document your code, you can
simply copy and paste this view and be done with it.
All right, now let's try to invoke this service
on my local machine. I'm going to use the terminal
for this. In the terminal I have created
two terminal instances: one to compile and one to invoke
the service. I'm going to just do a bal run on this
folder, which will compile my
program and then run it on
port 9090. In the other
terminal I'm going to use curl to invoke it and
get the output. All right, it's compiling.
Let me type the curl command real fast while
we wait: it's HTTP, running on my local
machine on port 9090, my base path is geodata,
weather is the endpoint, and it accepts an
IP address as a path parameter.
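With those paths, the curl command is along these lines; the IP here is just an example:

```shell
curl http://localhost:9090/geodata/weather/8.8.8.8
```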
Now it is running on port 9090, and I can simply invoke it.
As you can see, you can see the output of this program and
the logs that I have added in this program, which
print the location information and the weather
information. And finally, we can see the output in
this window, which contains the details for my current IP address
location. Let me invoke another IP
address as well; let's say one starting with 172
here. All right, it's from the United Kingdom,
and it's raining heavily at the moment. All right,
that's one way of testing. Let me show you another
way, using the graphical view. You can simply
click on this Try It button, which will open the
Swagger editor for this API, containing
all the resources, services and the port. And I
can just say get weather by IP, click Try It Out,
enter an IP address here, say 172.10.2.1,
and execute. It will give a nice output
in the graphical view as well: United States, partly cloudy,
and here is the temperature information. That's the
graphical view; I just wanted to give a glance, and you can do a
lot more with it. Now, since
we have this running, let's run it in
Kubernetes. I'm going to do this without writing
a single YAML file or building a Docker image by hand.
All right, I have stopped the service for now, but before
deploying, I want to add another resource to
check the readiness of this service. Let's do it the graphical
way. I'm going to go to the service, add a
resource, and it will be a GET resource with the
path health/readiness.
This is the endpoint that I will invoke in my readiness probe,
and the response for that will be 200 OK.
I'm going to add it and then save.
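The generated resource, once it returns 200 OK, is just a few lines; a sketch (this snippet lives inside the service block, not at top level):

```ballerina
// Readiness endpoint for the Kubernetes probe:
// GET /geodata/health/readiness -> 200 OK
resource function get health/readiness() returns http:Ok {
    http:Ok ok = {};
    return ok;
}
```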
As you can see, the code for this one
got automatically generated. Now I'm going to return http:Ok
here. Okay, now we are ready to deploy this
in Kubernetes. In order to modify the Kubernetes
artifacts generated by the compiler, we can use a
Cloud.toml file. Here I have added a Cloud.toml file
to configure the name of the Docker image that we are building and the
tag that we are providing for that image.
Also, as you may remember, we have a configurable
variable which requires the user key to operate
and access the weather API.
So for that, I have already created a
config map which contains this user key, and I want
to use that in my deployment. So I am saying:
use this config map
and refer to its key,
user key, in my deployment. Now I want to add
the readiness probe. As you can see, we can
simply type readiness and it will give you the suggestions
here. I have to give the path, so my path will be
the health readiness endpoint; it should be
geodata/health/readiness,
and my port will be 9090.
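Putting those settings together, the Cloud.toml might look roughly like this; the image name, config-map name and key are placeholders:

```toml
[container.image]
name = "geo_weather"
tag = "v1.0"

# Inject the weather API user key from an existing config map
[[cloud.config.envs]]
name = "USER_KEY"
key_ref = "user_key"
config_name = "weather-config"

# Readiness probe checked by Kubernetes before routing traffic
[cloud.deployment.probes.readiness]
port = 9090
path = "/geodata/health/readiness"
```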
All right, now we have added the readiness probe; let's compile.
I'm going to use the bal build --cloud=k8s
command, which will compile this program, then pack
the JAR into a Docker image, and the Dockerfile
will be generated along with the YAML files.
Let's see. All right, we got an error
during validation. As I
said earlier, the compiler is aware of the resources and these
YAML files, so we can see that the health path in the probe
doesn't match the resource. I have missed that
one; let's fix it and try again.
While it builds,
let me explain my Kubernetes cluster. I'm using Rancher
Desktop as a local cluster,
so I can use the same local Docker registry
in this Kubernetes cluster as well. All right,
now it's generating the executable. Likewise, it generates the Kubernetes
artifacts mentioned here along with the Docker image.
So these things should be in there,
and it will finally print the commands that you need to run this program.
If you go to the target folder and the Kubernetes file,
you can see that the file is already generated.
It has created a service with port 9090
of type ClusterIP, then the deployment with the same labels,
which will match the service's labels.
And as I mentioned earlier, we have this config value
parameter, which will be read from the config map and injected
into the container that we have provided, referenced by
the deployment and service names generated for the program.
It also got that readiness probe configured as
well, with an initial delay of 30 seconds. We also
generate a horizontal pod autoscaler for autoscaling.
If you look at the Dockerfile, it contains the
JARs that you require to run this, and the Docker image as
well. So now,
let me quickly run a docker images command to check whether the
Docker image has been built. I'm going to do docker images.
As you can see, we have this geo weather data image,
with the tag we configured, created about a minute ago.
Now I'm going to run the command that we got
from the output of that Ballerina build:
kubectl create
-f with the path to the YAML file.
All right, everything got created. Let's see the status.
Okay, it's still not running. I think this is
because of that initial delay of 30 seconds we have added.
Let's wait 30 seconds and see.
The other resources also got created: kubectl get svc
shows we got the network services,
and kubectl get hpa shows
we got the HPA as well.
It's still not running; let's check the status
with kubectl describe pod. Okay,
it's still waiting for that readiness probe.
Okay, now it's running. Now we
want to expose this service as a NodePort
in our cluster, so I'm going to type the command that
we already got from the build output:
expose the geo deployment as a NodePort.
Okay, this will create a NodePort service, which we
can use to invoke this service in the Kubernetes cluster.
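The expose-and-invoke steps can be sketched as shell commands; the deployment name and the node port are placeholders:

```shell
# Expose the deployment via a NodePort service
kubectl expose deployment geo-deployment --type=NodePort

# Find the node port assigned from the 30000-32767 range
kubectl get svc geo-deployment

# Invoke the service through the node port
curl http://localhost:<node-port>/geodata/weather/8.8.8.8
```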
So it's exposing port 9090 via
a node port on the cluster.
Now let's invoke it and see whether we get the same result
we got from curl before.
Before doing that, let's also
tail the logs of the pods
with kubectl logs. Okay,
now we just do the same curl,
but with the node port that we got.
What's the port? Let me quickly
get the port; I'm going
to use that in the curl.
Okay. All right, we got the log and the
result. Let's try another IP address as
well this time.
Okay, we got the results for that location.
As you can see, the pod logs are also
being printed. Okay,
that's all for the demo that I
wanted to show. This is just
scratching the surface of what Ballerina can do.
In this demo, we have explored Ballerina's network
primitives and its dual textual and graphical syntax,
and we also experienced the ease of concurrency management
and how easy it is to deploy to Kubernetes or
Docker environments. The power of Ballerina does not
stop here. It's a dynamic language that keeps evolving,
and there's always more to discover and leverage in your cloud native applications.
I encourage you to take what you have learned today and apply it in your
projects. Furthermore, you can experiment,
collaborate and innovate with Ballerina as you build
your scalable, efficient and resilient cloud native
applications. If you have any questions or would like to explore
Ballerina further, please contact us via one of the channels
and websites I have mentioned in this slide.
Thank you once again for joining us.
Have a fantastic day ahead.