Transcript
Hi everyone, my name is Hussain and I'm a cloud engineer.
In this session we will talk about how to build a software-as-a-service product,
covering scenarios like infrastructure management and payment system integration.
Let's take a look at the scenario first. In this outline: you may need to do configuration in your project; we will see how to do proper logging in your application; how to generate artifacts, including binaries and Docker images; how to do static code checks in your project; and how to include your project's artifact generation in a CI pipeline. We will check how to do infrastructure-as-code management, and at the end we will see how to integrate with a payment system.

Let's say that you are building a Go project. You can just start with go mod init, right? You provide your repo URL, for example a GitHub URL, and that's it. Your project is tied to GitHub: you can push it to the remote and pull it again. So this will be your Go module project.
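For reference, that first step is just the following command; the module path here is a hypothetical example:

```sh
# Initialize a Go module tied to your repo URL (path is illustrative)
go mod init github.com/<your-user>/quizzer
```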
When it comes to configuration, I just want to suggest a project called conf. There are of course a couple of other projects for configuration management, but this is the simplest one that I use. On the left-hand side there is a YAML configuration, and on the right-hand side there is a Go struct, so you can simply unmarshal your YAML configuration into the Go struct. In the Go struct there are two sections, just as in the YAML configuration: app and db. Under db there are a couple of fields, and under app there is a port field, which is 3000.
On the Go struct you can even define a receiver function. There is DSN, for example: for the database connection we will be using the DSN for the Postgres connection. You can adopt this kind of best practice in your configuration system. Most probably you have heard about the twelve-factor app; one of its rules says to do your configuration over environment variables.
Here I am exporting a couple of environment variables, but why? What are they? For example, APP_PORT=3000. The underscore encodes the nesting in the YAML: in the YAML we had app and, indented under it, a child port, hence APP_PORT. So you can set this value through an environment variable, and it will again be unmarshalled into the Go struct. Using environment variables is best practice in a cloud environment, because for a workload, for example a pod, you can just pass environment variables with this notation.
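To make the pattern concrete, here is a minimal hand-rolled sketch using gopkg.in/yaml.v3 rather than the library's own API, which I won't reproduce from memory; the field names and the single APP_PORT override are illustrative:

```go
package config

import (
	"fmt"
	"os"
	"strconv"

	"gopkg.in/yaml.v3"
)

// Config mirrors the YAML layout: an app section and a db section.
type Config struct {
	App struct {
		Port int `yaml:"port"`
	} `yaml:"app"`
	DB struct {
		Host     string `yaml:"host"`
		Port     int    `yaml:"port"`
		User     string `yaml:"user"`
		Password string `yaml:"password"`
		Name     string `yaml:"name"`
	} `yaml:"db"`
}

// DSN is the receiver function mentioned above: it builds the
// Postgres connection string from the db section.
func (c Config) DSN() string {
	return fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s",
		c.DB.Host, c.DB.Port, c.DB.User, c.DB.Password, c.DB.Name)
}

// Load unmarshals a YAML file into the struct, then lets APP_PORT
// override the nested app.port key, mirroring the
// underscore-to-nesting mapping described in the talk.
func Load(path string) (Config, error) {
	var cfg Config
	raw, err := os.ReadFile(path)
	if err != nil {
		return cfg, err
	}
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		return cfg, err
	}
	if p := os.Getenv("APP_PORT"); p != "" {
		if port, err := strconv.Atoi(p); err == nil {
			cfg.App.Port = port
		}
	}
	return cfg, nil
}
```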
After configuration you may start to think about logging. Here I am using zap. As you can see, I initialize the logger here, and then there is a deferred function, because these highly performant libraries buffer the logs: they do not write logs to a destination like standard output right away; they buffer them and write to the console in batches. So if there is a problem, as a final step before exiting, the deferred call flushes all the logs from the buffer. That is good practice.

To log something, you just provide a sentence, and then you can provide a couple of context items. For example, in our quiz application, if there is a problem while fetching the quiz questions, you can say "oh, there is a problem", but also in which session and for which customer; you provide key-value pairs here. This part is important, especially if you are sending your logs to a logging backend: if you have lots of applications, then by using these fields you can easily filter the log statements in the logging backend, in tools like Elasticsearch or Graylog.
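A minimal sketch of that zap setup; the field names and message are illustrative:

```go
package main

import (
	"log"

	"go.uber.org/zap"
)

func main() {
	logger, err := zap.NewProduction()
	if err != nil {
		log.Fatalf("failed to initialize logger: %v", err)
	}
	// zap buffers log entries; Sync flushes the buffer before exit.
	defer logger.Sync()

	// A sentence plus key-value context fields for filtering in the backend.
	logger.Error("failed to fetch quiz questions",
		zap.String("sessionID", "sess-123"),
		zap.String("customerID", "cust-42"),
	)
}
```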
Assume that we built our application. Now, how do we generate artifacts?
Artifacts here means binary executables or Docker images. GoReleaser helps you build cross-platform artifacts (Windows, Linux, Mac, whatever you need) and also helps you release. Release does not only mean releasing to GitHub; it can also announce new release versions to Discord, as a Slack notification, these kinds of things.

You simply create a .goreleaser.yaml file and run goreleaser build or goreleaser release; the --clean flag wipes the dist folder left over from a previous run. In the YAML file I defined a builds section. Under the builds section there are a couple of builds. You need to provide an id, because there are cross references inside the YAML file; the id is quizzer-api. main is what you provide when you say go build: the entry point. That entry point contains a main package and a main function. Since I have a couple of modules in this project, I point at my quizzer API entry point, and as output I say the binary name will be quizzer-api. This build is for Linux and amd64. For the dockers section, in the same way as in the executable generation, we provide an id. I say this Docker image is for Linux and amd64, and I provide ids because those binaries are used to generate the Docker images.
image_templates is used for generating the Docker image name. Basically, the tag (what we call the docker tag) comes from the git context, and it is dynamic: whenever you push a tag to this repository, that current tag will be used here to generate the Docker image name.
With build_flag_templates I am basically providing a couple of parameters to my Docker image, for example the module name, these kinds of things; I can then use these build arguments in the Dockerfile. In my case there is a module key. I am also using a label, which we will see soon in the example. The extra_files part is important, because while GoReleaser builds something, for example a Docker image, the Docker context contains only two files: the Dockerfile and the artifact, the binary executable. I also need the config YAML, so in my current project extra_files is used to include that file in the Docker context as well. skip_push: false means: do push this image to the Docker registry, which I will configure in the GitHub Action.
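Putting those pieces together, a condensed sketch of such a .goreleaser.yaml; the image path and file names are illustrative:

```yaml
# .goreleaser.yaml -- a condensed sketch of the setup described above
builds:
  - id: quizzer-api
    main: ./cmd/api        # the entry point containing the main package
    binary: quizzer-api
    goos: [linux]
    goarch: [amd64]

dockers:
  - ids: [quizzer-api]     # reuse the binary built above
    goos: linux
    goarch: amd64
    image_templates:
      - "ghcr.io/<owner>/quizzer-api:{{ .Tag }}"  # tag comes from the git context
    build_flag_templates:
      - "--build-arg=MODULE=quizzer-api"
      - "--label=org.opencontainers.image.version={{ .Tag }}"
    extra_files:
      - config.yaml        # add the config to the Docker build context
    skip_push: false       # push to the registry configured in CI
```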
What about quality checks? For quality checks we will use golangci-lint.
golangci-lint is used for maintaining all the linters. There are lots of linters as open-source projects, and golangci-lint just manages running them; it is very easy, and it runs them according to your configuration. Again, you simply create a .golangci.yml file and run golangci-lint run. There are lots of linters, but let me give you an example: if you expose a struct outside the package but did not provide a comment for it, there will be an error. For the full list, you can take a look at the linters reference at the URL.
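A minimal .golangci.yml sketch; the linter selection here is illustrative (revive is one linter that can flag exported identifiers missing comments):

```yaml
# .golangci.yml -- a minimal sketch
run:
  timeout: 5m
linters:
  enable:
    - revive       # style checks, e.g. exported symbols without comments
    - govet
    - staticcheck
```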
We ran lots of these operations, but we ran them locally. What if we are working in a team? Whenever someone pushes something to the remote, it should all be tested and verified. Here we can use GitHub Actions: in GitHub Actions we can verify tests and build artifacts; whatever you do in a CI system you can do in GitHub Actions easily.

Just as an example, here I am saying that whenever you push something to the repository or create a pull request, an action will be triggered. In this action an Ubuntu machine is used, as you can see in runs-on, and in order to run these actions I need contents read and packages write permissions. packages: write is for generating Docker images and attaching them to the repository, and also for creating releases; contents: read means I need to read contents like the git content or release content, because I need to know whether there is a tag or not, these kinds of things. In the steps section I check out the code base and set up Go, because I will need it for the GoReleaser step. I do a linter check using golangci-lint; there is an action for that, so you don't need to do anything manually.
I am using QEMU, because it is an emulator: if you are doing a Docker build for another platform, you can use this emulator. I need to log in to my GitHub Container Registry; the registry URL is ghcr.io, the username is the repository owner, which is my username on GitHub, and there is also a GitHub token, because I need to log in there first. The final step is GoReleaser; there is an action for that too. As a parameter I say release --clean, so it builds everything and releases to GitHub. There will be a new release, there will be a new Docker package, and the final output will be something like this quizzer repository: on the right-hand side there will be a release, and in the packages section you will see quizzer-api.
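A sketch of such a workflow follows; one note, though: the talk mentions contents read, but actually creating GitHub releases requires contents write, so the sketch uses write:

```yaml
# .github/workflows/release.yml -- a sketch of the described pipeline
name: ci
on:
  push:
    tags: ["v*"]
  pull_request:
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write   # read the repo and create releases
      packages: write   # push Docker images to ghcr.io
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0          # GoReleaser needs the tag history
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - uses: golangci/golangci-lint-action@v6
      - uses: docker/setup-qemu-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: goreleaser/goreleaser-action@v6
        with:
          args: release --clean
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```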
What is next? We built lots of things, so where do we ship them? If there is a Docker image, most probably we will ship it to a containerized environment, maybe Kubernetes, right? So who will manage Kubernetes? We will manage it using Terraform, and in our case it will be Terraform Cloud; we will see an example soon.

So, you build a software-as-a-service project. But why? Because you like developing tools and you want to earn money out of it, right? You need to earn money and then put further investment into your product.
In order to charge your customer, first you need to understand your customer, and then you need to track usage. Here we will use Stripe, and in Stripe there are three important models, let's say the domain model. The first one is the subscription: you let your customers subscribe to a plan. You simply define your plan in Stripe so that your customers can subscribe to it. You can come up with a UI: there will be a link, and those links redirect to a Stripe checkout page. Checkout, subscription plans, they are all managed in Stripe; you can do that because Stripe has a no-code approach.
The second is the subscription item: let's say there is a plan, and under this plan there can be line items, like cost per storage or cost per request count, these kinds of things. We will use this in our metered calculation. The third is for billing and is called the usage record: you need to track your customers' usage and send it to Stripe periodically. This is the only part you need to handle on your own.
Let's say there is a cron job, and in this cron job you simply calculate the usage. For example, in our quiz application, it is the question count per hour: each hour you periodically fetch the question count per customer and send it to Stripe. So at the end of the billing period, Stripe knows how to calculate the total amount per customer: it multiplies the usage by the unit price in their plan, then charges the customer and notifies them. That's it; if you use Stripe you do almost nothing yourself.

Of course, the first step is to subscribe your customer. As I said, there will be a link; the customer clicks on it, a checkout session opens, they select a plan, and they provide their payment method. Everything happens on Stripe's servers, not on yours, and your customer ends up subscribed to a specific plan.
After that, since you know the customer's plan... but how do you know the customer's plan? You know because on the Stripe side there is a webhook system: whenever somebody subscribes to a plan, you can be notified automatically. So I am notified, and then I know which customer belongs to which plan. Then, periodically, each hour, I calculate the usage. For example, here the quantity is two for this hour: two questions. I send this to Stripe, and as you can see the action is increment. Increment means the quantities I send, say 2, 2, 5, and 7, are all accumulated on the Stripe side. At the end of the billing period, Stripe takes the final amount and multiplies it by the unit price.
Again, to sum up: the only thing you need to do is calculate the customer usage and send it to Stripe. That's it; it is only an amount. All the remaining parts are handled by Stripe itself.
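A minimal sketch of that hourly usage report using the stripe-go client; the subscription item ID is a placeholder, and the module version pin is illustrative:

```go
package main

import (
	"log"
	"os"
	"time"

	"github.com/stripe/stripe-go/v76"
	"github.com/stripe/stripe-go/v76/usagerecord"
)

func main() {
	stripe.Key = os.Getenv("STRIPE_API_KEY")

	// Report 2 questions for this hour against the customer's
	// subscription item; "increment" accumulates on the Stripe side.
	params := &stripe.UsageRecordParams{
		SubscriptionItem: stripe.String("si_xxx"), // placeholder ID
		Quantity:         stripe.Int64(2),
		Timestamp:        stripe.Int64(time.Now().Unix()),
		Action:           stripe.String("increment"),
	}
	if _, err := usagerecord.New(params); err != nil {
		log.Fatalf("failed to send usage record: %v", err)
	}
}
```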
Now, deployment. ArgoCD really deserves a separate session for the entire ArgoCD concept, but here I just want to focus on one of its custom resources: the Application. Here I am saying: create an Argo application named quizzer-api. In the spec you see that this application lives under the default project and that the source is a Helm chart. Let's say you already have a CI system; it builds everything, generates a Helm chart, and puts it on GitHub Pages. So you already have a Helm chart, and I am saying this Application is responsible for deploying it: here is the repo URL, here is the version, here is the release name; just deploy this. Deploy where? That is defined in the destination section: the current Kubernetes cluster. So wherever you deploy this ArgoCD, it will deploy this Helm chart into the existing Kubernetes cluster, in the namespace dev.
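A sketch of such an Application; the chart repo URL and version are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: quizzer-api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://<owner>.github.io/charts   # Helm repo on GitHub Pages
    chart: quizzer-api
    targetRevision: 0.1.0                       # chart version
    helm:
      releaseName: quizzer-api
  destination:
    server: https://kubernetes.default.svc      # the cluster ArgoCD runs in
    namespace: dev
```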
Okay, let's assume we deployed our application's Helm chart. What about confidential data, for example the Postgres database password or the Stripe key for the integration? Do you want to create secrets manually in the Kubernetes environment? Most probably not, right? It would be a manual operation, and there would be a lack of synchronization. There is a project for this called External Secrets. With External Secrets you simply synchronize secrets between secret providers and your Kubernetes Secrets. When you take a look at their pages you will see something like this: AWS, GCP, Vault; there are lots of secret providers. You can maintain your secrets in Vault or in the AWS or GCP secret manager, and External Secrets helps you create Secrets in your Kubernetes environment from those providers. So assume you put something into Vault, remove something from Vault, or change something in Vault: it will be synchronized to your Kubernetes Secrets.
This is very cool, right? You don't need to maintain secrets manually.
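A minimal ExternalSecret sketch; the store name, remote key, and property are hypothetical and assume a SecretStore already pointing at Vault:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: quizzer-api
spec:
  refreshInterval: 1h            # re-sync from the provider periodically
  secretStoreRef:
    name: vault-backend          # a SecretStore pointing at Vault
    kind: SecretStore
  target:
    name: quizzer-api-secrets    # the Kubernetes Secret to create and keep in sync
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: quizzer/postgres
        property: password
```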
Public access. You deployed lots of things into the Kubernetes environment and they are all running, but how do you expose them to the outside?
Of course, you expose them using a Kubernetes Service, maybe an ingress controller; that's fine. But, for example on GKE, the ingress exposes a load balancer IP. Do you want to give that IP address to your customers? No, right? That's why you need to create a DNS entry for it. Assume we are using Cloudflare. They have a Terraform provider, and you can just create a Cloudflare record; it is a resource, a DNS record. I am saying: create this record under this zone id. The zone id is a Cloudflare-specific thing: say you created a domain there; each domain has a zone id, and you need to provide it here, otherwise Cloudflare cannot know in which domain to add the entry. If my website name in Cloudflare is quizzer.io, when I add this entry it will be something like terraform.quizzer.io, and it will point to this specific IP address, because it is an A record. An A record maps a name to an IP address.
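A sketch of that record with the Cloudflare Terraform provider; the zone id variable and IP are placeholders:

```hcl
resource "cloudflare_record" "quizzer" {
  zone_id = var.cloudflare_zone_id
  name    = "terraform"          # becomes terraform.quizzer.io
  type    = "A"                  # A record: name to IP address
  value   = "203.0.113.10"       # the ingress load balancer IP
}
```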
Okay, now my quiz application is up and running, and I can access it using a fully qualified DNS name. But when I open the application in the browser, I see: oh, this page is insecure. Where is my certificate? You can use cert-manager in a Kubernetes environment; cert-manager helps you integrate your resources, especially certificate management, with third parties. You can see lots of configurations on the cert-manager web page, but I want to focus on one thing here. Again, like External Secrets and ArgoCD, you need to deploy cert-manager first; it is just a Helm chart. Once you deploy it, a couple of custom resources become available, and Issuer and ClusterIssuer are among them.
Here I am saying: create an issuer for me, named example-issuer, and it will be integrated with Cloudflare. I already have my domain names in Cloudflare, so there will be HTTPS from the client up to Cloudflare; and while Cloudflare connects to my resource, the quizzer API behind an ingress resource, it also connects to a TLS endpoint, right? This will use Let's Encrypt; you know Let's Encrypt is a free, open system with which you can generate and renew certificates periodically. So I have an Issuer (it could also be a ClusterIssuer). In my ingress record, when I add the cert-manager annotation pointing at example-issuer, the ingress gets TLS automatically: when you add an ingress record, the ingress controller, say Nginx, refreshes its configuration, and there will be a TLS section in that configuration. So Cloudflare connects to a TLS endpoint on this side.
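A sketch of such an Issuer using Let's Encrypt with a Cloudflare DNS-01 solver; the email and Secret names are hypothetical. The ingress would then reference it via the cert-manager.io/issuer annotation, and cert-manager fills in the TLS Secret for the ingress controller:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: example-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@quizzer.io              # hypothetical contact address
    privateKeySecretRef:
      name: example-issuer-account-key   # stores the ACME account key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:           # Secret holding a Cloudflare API token
              name: cloudflare-api-token
              key: api-token
```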
So everything is HTTPS. You see, from the bottom, from the Go project all the way up to a production environment, we can have this kind of system. That is a very condensed picture of building a software-as-a-service product.
What is next? Let's take a look at an example in the code base. Here I built a project called Quizzer. In Quizzer I want to start from the cmd folder; cmd is the entry-point folder, and there can be multiple entry points or a single one, whatever you prefer. Under api, for example, there is a main.go; as you can see it is a main package, and there is a main function right here.
If I am wrapping something, for example a logger library, I use a package naming notation of technology plus x. Since this is a logger, I call it loggerx; so under internal I have a logging package, loggerx. When I call New, it basically returns a zap logger: it returns zap.NewProduction(). If there is a problem it does a log.Fatalf, which prints a log and exits with code 1. After that I initialize the configuration. Let's check what happens there. In the initialization there is a struct here, Config, with DB and App sections; it contains everything. And in the init function, you see, I am initializing the conf library which I mentioned before.
I am also using a YAML parser here. There is a default config file, config.yaml, and if you provide a CONFIG_LOCATION environment variable I will use that instead; you can override the location of the config file. Here I am saying: please load the configuration for me. It loads the configuration, let's say as a byte array, a slice of bytes, and then I basically say: unmarshal this byte slice into the config struct. That's it. It returns the config; now I have a config struct with all the fields pre-populated, and I can use it anywhere else in my application. If there is a problem, of course, it does a log.Fatal: failed to initialize config. What else?
Now I will try to connect to a database. Here I am using GORM, and I will show you an example entity. GORM is an ORM library for Go, and it supports different drivers: Postgres, MySQL, etc. Here, as you can see, there is DB.DSN, the connection URL, and I say: please connect to Postgres; it connects to the Postgres database. Let me show you the entities, under domain. There is, for example, the question entity. In the question entity there is a Description field, and also an embedded gorm.Model. It contains ID, CreatedAt, UpdatedAt, and DeletedAt; this comes from the GORM library, so you don't need to repeat these common fields for every entity.
While we run our application, basically in the repository section, again using GORM, I call AutoMigrate and provide my struct as a reference. When you do that, it does some reflection, gets all the fields, and creates a table out of them. Let me show you the column structure for that. When I run this application I need to check whether the database is running or not. Okay, that database is not running; anyway, I will show you that later.
So basically there will be a database table that contains a description column as the only custom field, plus the id, created_at, updated_at, and deleted_at columns, as sketched below.
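A minimal sketch of the entity and migration; the DSN literal is a placeholder for the config struct's DSN() shown earlier:

```go
package main

import (
	"log"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

// Question mirrors the entity described above: gorm.Model embeds
// ID, CreatedAt, UpdatedAt, and DeletedAt, so only Description is custom.
type Question struct {
	gorm.Model
	Description string
}

func main() {
	// In the real project the DSN comes from cfg.DSN().
	db, err := gorm.Open(postgres.Open("host=localhost user=quizzer dbname=quizzer"), &gorm.Config{})
	if err != nil {
		log.Fatalf("failed to connect to postgres: %v", err)
	}
	// AutoMigrate reflects over the struct fields and creates the table.
	if err := db.AutoMigrate(&Question{}); err != nil {
		log.Fatalf("failed to migrate: %v", err)
	}
}
```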
Let's continue with the main file. Of course there is a defer here: I need to close my DB connection before exiting. And I am using Fiber for the REST endpoints; it is a very lightweight, well-performing library for Go, so you can use Fiber for that. Here I am registering a metrics endpoint using promhttp; you know Prometheus is a very well-known system for monitoring, and this exposes the application's metrics to the outside. Finally, I initialize the question repository, and as parameters I provide the database and the logger. I have one more endpoint, questions, which returns the list of questions, and this application runs on port 3000.
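A sketch of that Fiber wiring, assuming the gofiber adaptor package to mount the standard promhttp handler; the questions payload is a placeholder for the repository call:

```go
package main

import (
	"log"

	"github.com/gofiber/adaptor/v2"
	"github.com/gofiber/fiber/v2"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	app := fiber.New()

	// Expose Prometheus metrics by adapting the standard promhttp handler.
	app.Get("/metrics", adaptor.HTTPHandler(promhttp.Handler()))

	// In the real project this handler pulls from the question repository.
	app.Get("/questions", func(c *fiber.Ctx) error {
		return c.JSON([]string{"What does go mod init do?"})
	})

	log.Fatal(app.Listen(":3000"))
}
```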
Okay, I think I managed to run it.
Okay, the application is running right now. Let me show you the database; I will scroll it like this. Okay, here under public there are tables, entities, columns. As you can see, we have description, and we have deleted, updated, created, and id. Very simple. Okay, so let's move on. I also have a collector main here; it is another entry point I just wanted to show you. You can add lots of entry points to your application.

And let me show you the Dockerfile, because GoReleaser uses this Dockerfile. There is an argument, module, and I am copying that module; the module decides which binary executable is used. So if it is quizzer-api it will be the quizzer-api binary; if it is a cron job it will be another entry point. I am also copying the config.yaml file here, providing CONFIG_LOCATION for config.yaml, and running the binary executable.
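A sketch of such a Dockerfile; the base image and paths are illustrative, and since ARG values are not visible at run time, the sketch links the chosen binary to a fixed entrypoint name at build time:

```dockerfile
FROM alpine:3.19
ARG MODULE
WORKDIR /app
# GoReleaser places the prebuilt binary in the Docker build context.
COPY ${MODULE} /app/${MODULE}
# config.yaml reaches the context via the extra_files section.
COPY config.yaml /app/config.yaml
ENV CONFIG_LOCATION=/app/config.yaml
# Bake the module choice into a fixed entrypoint path.
RUN ln -s /app/${MODULE} /app/entrypoint
ENTRYPOINT ["/app/entrypoint"]
```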
So who passes this argument? It is passed by GoReleaser: as you can see here, the build argument module is set to quizzer-api. If I had another build section here, I could provide, for example, a payment collector or a usage calculator, these kinds of things. For the configuration, I think that's it. So when I push my changes to the remote, let me show you what happens, because I want to show you a couple of examples.
In the Quizzer repository, you see, for example, I created a pull request, change node count. As you can see here, the CI build is green, and there is also a Terraform part; I will show you that one. Also in Quizzer you will see there is a release and there is a package. When I check the packages part, you will see there is a quizzer-api version, and I can simply do a docker pull for it. And what else?
As I said, there is a Terraform integration here. In Terraform Cloud I created a workspace and connected this workspace to my repository. In my repository I said there is an IaC folder here with a main file. Basically I am creating a container cluster and a node pool under it, and that's it; it simply creates a Kubernetes cluster.
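A sketch of those two resources with the Google provider; the names, location, and machine sizes are placeholders:

```hcl
resource "google_container_cluster" "quizzer" {
  name     = "quizzer"
  location = "europe-west1"

  # The node pool is managed as a separate resource below.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "default" {
  name       = "default-pool"
  cluster    = google_container_cluster.quizzer.name
  location   = google_container_cluster.quizzer.location
  node_count = 2

  node_config {
    machine_type = "e2-standard-2"
  }
}
```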
So let's click on this Terraform Cloud link, because on a pull request it only does the plan step. When I click on it, it redirects me to the Terraform Cloud page. Let me go through it.
Okay, as you can see, the plan is finished. Inside the plan you see one to create, one to change, one to destroy. You can see the changes here, and if it looks okay you can just apply the plan. Once you apply it, let me show you in the Google Cloud console what it creates, under Kubernetes Engine.
Okay, as you can see, there is a quizzer Kubernetes cluster and it created everything, including the infrastructure. If I have all of this in my code base, I can easily sync everything to production with a single push or a single tag. So that was it. I hope this was a valuable session for you, and thank you for listening.