Transcript
This transcript was autogenerated. To make changes, submit a PR.
We are here together to talk about one of the most interesting JavaScript libraries you can find. We couldn't find one like it, so we made it. Okay, perfect. So, we created this framework called Crudit: a backend for fast-shipping applications. We will talk about it today, and we will try to use it in a real project. You can follow this presentation to discover what Crudit is and how we made it. We hope that you enjoy the presentation as much as we enjoyed creating this framework for you. Well, coming back to the presentation, I think we are ready to start. Yeah, we are ready.
Great. So, without spending too much time talking about ourselves: we are software developers, and we work at central consulting, a company based in Italy. We build e-commerce solutions and integration projects. We have been in this business for many years, working with our partners to create custom integrations and custom e-commerce solutions. Great.
Let's start from a pain point. A pain point not just for me or for Daniele, but for almost every developer in the world, whether they are front-end developers, back-end developers, or developers of any other kind or level. So, how can we develop an application in 2030?
Well, I started doing this job many, many years ago, and when I started it was very easy, because you had the monolith. The monolith was easy because you had to be able to do everything yourself, from the back end to the front end. But today the picture is quite different. A modern application is more complex because it has multiple components, lots of components, as we can see here. We have multiple microservices. That's good, because you can delegate the development of each microservice to one team or another, and that's great because teams can work in parallel. Maybe you have some background jobs running in your infrastructure. You can have message queues to make these microservices communicate with each other. And you can have databases, relational or non-relational, and you can have caching databases.
Then you have a front-end application, usually a single-page application: a modern application that communicates with the back end through APIs. And that's a very good solution, because, as we'll see, you can use the best tool for each need you have, so you always work with the best of breed. Moreover, we can have parallelization across multiple teams. That's good because you can reduce the time to market. And of course you can have specialized people who do only one thing all day, so they are very good at it. You can separate the front end from the back end, so every team does and knows exactly what it needs to know and do. Moreover, you can improve performance by using a simpler way of communicating data between the components of the system. So that's a very interesting and very good step forward. That's good. We love that.
Anyway, the sad news is that this new way of working brings with it some small issues. Issues that in most cases are not so relevant, or are far outweighed by the advantages they bring to the system. So in that case, it's not a problem. But in some scenarios, the points I listed in this slide are not so good. For example: we need multiple teams or multiple competencies, and that leads to a lot of people, a lot of technical people, or a lot of teams that have to communicate with each other. And this can be a cost.
In a scenario where you have a lot of components and few people working on them, a single person cannot do the job. This should be a good thing: it's good that a single person cannot do the job, because each person involved has the best skills for the job they do. But in some cases you would like to be able to address a problem in a few minutes, in a few hours, and that is not always possible in a complex scenario, when the application is split into multiple components that need multiple competencies to be understood. In case of a problem, we have to go looking for it. That's another point: if something is wrong, who is responsible for it? Where is the problem? Maybe a problem starts in the back-end part and finishes in the front-end part, so it's not easy to address it in a reasonable time. Who is impacted most by these small issues that come with the progress we've made in web applications? Of course, low-budget projects. If you have a low-budget project, if you want to do something quick and dirty, like the applications we often need to implement for small companies or startups, in these scenarios we are more impacted. A lot of companies have to sustain big expenses for creating and maintaining applications that are performant and that get all the advantages a modern application can have. Well, the question is quite provocative, but it's relevant: is it still possible to develop a web application with ease? Okay, we are here today precisely to answer this question.
That's what we want: we want to answer this question by explaining how we tried to do it, and we tried to do it with Crudit. And what is its slogan? "Yes, with Crudit you can." Sure you can, and you can try it. Of course, you get the result of creating a web application without needing any competencies in the back-end part. With this library we want to accelerate web development by reducing the amount of time we spend on the back-end part: we want to focus only on the business logic, writing code to implement what you need. Of course, the market has lots of tools in the ecosystem of headless CMSs and data CMSs that help you implement a back-end solution with little development effort.
But with this library we want to make it more comfortable: we want to avoid license fees, and we want to create something that can run in the free tier of many of the most widespread cloud platforms, like Azure, AWS, or Vercel. Moreover, we want to create something that can be customized easily. So we finally have a library of about 100 KB, something like that, that is easy to run, install, and configure, and that supports complex scenarios like multi-tenancy: when you want to create a startup with a multi-tenant environment, you have multiple clusters, each one with different data. So with this small library you can do very, very powerful things. Now I'll let Daniele, the other Daniele, come on stage and present what Crudit is in detail.
Thank you, Daniele, for your introduction. Let's discover together what Crudit can do in your existing projects, what Crudit can do if you want to experiment or create a new JavaScript project, and how to use it. Yes, with Crudit you can. But what is Crudit? Crudit is a JavaScript library that, in a low-code serverless environment, enables developers to create applications focusing on the front end, focusing on the experience the user will have, without thinking a lot about the back end or needing sysadmin knowledge.
What is Crudit for? Crudit can be used in a wide variety of projects. You can use Crudit for fast-shipping multi-tenant applications. Let's say you want to create a dashboard where some users can see something and other users can see something else. With Crudit you can create this interface, allow users to log in with their credentials, and let them see only the data they have access to. You can also include it in existing projects. Let's say you have a project based on a MongoDB database hosted on MongoDB Atlas, and you need to change the data structure because of a change request from your client. You can use Crudit to handle the mutation and to manipulate the existing data on a MongoDB database easily, fast, and in a safe way. You can also use Crudit for a marketing campaign or a survey: you can use Crudit to collect form data, and you can use server-side validation to be sure that the data you're receiving is clean and that its format is respected.
But how? Crudit is a JavaScript library that can be used in any Node.js runtime. We developed Crudit with serverless functions in mind, such as Lambda functions from AWS, Cloud Functions from Google Cloud, or Edge Functions from Vercel. The demo we have prepared on our GitHub is based on Vercel Edge Functions, and once you configure Crudit in an edge function, you get REST-compliant APIs with just a bit of configuration in JavaScript. Crudit has many features.
The feature we started with was the CRUD on a database: the first thing every application needs is a way to communicate with a database to safely store its data. So Crudit can deploy, in just a few lines, REST-compliant APIs to write to and read from a document-based database; we use a MongoDB Atlas database. You can use Crudit to create custom endpoints that have a dedicated function to handle their requests. You can add server-side validation, so your data will be clean and there won't be documents in a collection that don't respect the validation. You can mutate existing data, and you can hook into events that happen on the database.
Let's talk about these features one by one with some examples, because you need to see how easy and fast Crudit really is to develop with. This is the configuration we use for Crudit in an Edge Function on Vercel for the CRUD feature. As you can see, we use an Express server, and the only configuration we add is the database URL to configure Crudit. Then we let Crudit handle all the requests and responses: just crudit.run(request, response), and Crudit handles everything.
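As a rough sketch, the wiring just described might look like this; note that the package name and the configure/run calls are assumptions reconstructed from the talk, not the documented Crudit API:

```javascript
// Hypothetical sketch of the setup described in the talk; the package name
// and the `configure`/`run` signatures are assumptions, not documented API.
const express = require("express");
const crudit = require("crudit");

const app = express();

// The only configuration mentioned in the talk: the database URL.
crudit.configure({ databaseUrl: process.env.MONGO_ATLAS_URL });

// Delegate every request/response pair to Crudit.
app.all("*", (req, res) => crudit.run(req, res));

module.exports = app;
```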
With this basic configuration, every request will create a database for each user that makes the request, and the collection where the data is stored will be based on the path of the request. So let's say my username is Daniele and I call the API on the path /prova: if I make a POST request, my data will be stored in a document in the prova collection; with a GET I can read the data from that collection; with a PUT I can update the data in that collection; and I can delete it with a DELETE call. As I said, each user writes in their own MongoDB database, so there is no sharing between users of the data they write. Each user has their own database, so they cannot access other users' data, and there is no leaking of it.
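The per-user, per-path routing described above can be sketched conceptually in plain JavaScript (this illustrates the rule, not Crudit's real internals):

```javascript
// Conceptual sketch of the routing rule: the database comes from the
// authenticated user, the collection from the request path, and the
// HTTP verb selects the CRUD operation.
function routeRequest(username, method, path) {
  const operations = { POST: "insert", GET: "find", PUT: "update", DELETE: "remove" };
  return {
    database: username,                  // one database per user
    collection: path.replace(/^\//, ""), // e.g. "/prova" -> "prova"
    operation: operations[method],
  };
}

console.log(routeRequest("daniele", "POST", "/prova"));
// → { database: 'daniele', collection: 'prova', operation: 'insert' }
```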
You can also create databases with other methods, which we are going to present in a few minutes, but the basics of the pure CRUD part of Crudit are like this. You can also create custom endpoints, where you can override the default behavior for a specific path. In this case we have a POST request on the /publish path: logged is true, so you need to be a logged-in user, and then the request is handled by a handleRequest function.
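A hypothetical shape for such a custom-endpoint entry, with the field names (logged, handleRequest) taken loosely from the talk; the real Crudit configuration may differ:

```javascript
// Hypothetical custom-endpoint entry: a POST on /publish, restricted to
// logged-in users, with its own handler. Field names are assumptions.
const publishEndpoint = {
  method: "POST",
  path: "/publish",
  logged: true, // only authenticated users may call this endpoint
  handleRequest: (req) => ({ status: "published", author: req.user }),
};

// A request from a logged-in user would be routed to handleRequest:
console.log(publishEndpoint.handleRequest({ user: "daniele" }));
// → { status: 'published', author: 'daniele' }
```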
You can use this mechanism to create login and register endpoints for Crudit, so you can implement yourself the login and registration method you are most confident with. You can configure a simple username-and-password login, or integrate with a third-party service, so you are free to choose the login method you like the most. You can add server-side validation. Server-side validation is performed before writing to the database, so if you configure it, there won't be unclean data in your database. This validation can be scoped to a certain database and a certain collection, and you can also use a regex. So let's say you want to apply a rule only to the databases of the users that are admins: you can match the databases of your admin users with a regex, and you can specify in which collections they can write which structure of data, again either by the name of the collection or with a regex that validates the name of the collection.
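Conceptually, scoped validation works like this sketch (Crudit's real syntax is a validation object based on validator.js; the shapes and the helper below are invented for illustration):

```javascript
// Conceptual sketch of scoped validation: a rule targets a database and a
// collection, by exact name or by regex, and field checks run before the
// write. Not Crudit's real syntax.
const rule = {
  database: /^admin-/,   // e.g. only the databases of admin users
  collection: "billing",
  fields: { email: (value) => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value) },
};

function passes(rule, database, collection, doc) {
  const matches = (pattern, name) =>
    pattern instanceof RegExp ? pattern.test(name) : pattern === name;
  // A rule that is out of scope does not block the write.
  if (!matches(rule.database, database) || !matches(rule.collection, collection)) {
    return true;
  }
  return Object.entries(rule.fields).every(([field, check]) => check(doc[field]));
}

console.log(passes(rule, "admin-ada", "billing", { email: "ada@example.com" })); // → true
console.log(passes(rule, "admin-ada", "billing", { email: "not-an-email" }));    // → false
```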
The syntax for validation lives in the validation object and uses the structure of validator.js, which is one of the very few dependencies Crudit needs, and you can configure all sorts of validations on the shape of your data. You can attach hooks, event listeners, to some database events: when one of those events is triggered, a function is launched.
The hooks can block the execution. So let's say you have a before-save hook: you can add some sort of validation with a third-party integration. For example, say you are collecting billing information from your customers: you might want to use an external service to validate their billing data. You can run this validation in the before-save hook; if the data is valid you allow the user to write, but if the data is not valid you can throw an error, and the error will be in the response the user gets for their request.
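The blocking behavior can be sketched like this (hooks are kept synchronous for brevity; real hooks calling an external service would be async, and the vatNumber field is an invented placeholder, not part of Crudit):

```javascript
// Conceptual sketch of a before-save hook chain: every hook runs before
// the write, and any throw aborts the save. Not Crudit's real hook API.
function saveWithHooks(doc, hooks, write) {
  for (const hook of hooks) hook(doc); // a hook throws to block the save
  return write(doc);
}

// Example hook in the spirit of the billing check described above;
// `vatNumber` is an invented placeholder field.
const billingHook = (doc) => {
  if (!doc.vatNumber) throw new Error("invalid billing data");
};

const fakeWrite = (doc) => ({ saved: true, doc });

console.log(saveWithHooks({ vatNumber: "IT123" }, [billingHook], fakeWrite).saved); // → true
```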
In this example we use the before-save event: we save in the logs database, in the before-save collection, a log entry that records the date and the user that wrote something on a certain database and collection. This is an example you can use to add logging to your application. Then, the mutations. A mutation is a feature that lets you mutate existing data on a database. Like the validation, here too you can use a database name and a collection name, or a regex for the database name and the collection name, to select the data to mutate. Then you can apply the mutation. A mutation, once defined, will be executed only once. You can execute a mutation with applySingle: you specify a certain database and a certain mutation, and only that database is mutated.
You can use applyOne: with applyOne you specify a database, and every mutation that has not already been executed will run on that database. A flag set to true allows the process to continue executing mutations even if one of them has an error; if you set that flag to false, once a mutation returns an error the entire process is blocked.
Or you can use applyAll: applyAll applies all the mutations to all the databases in the project. It can be very useful for bulk mutations on all the data in your databases.
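The run-once and continue-on-error semantics can be sketched like this (method names follow the talk, but the data model and signatures are assumptions, not Crudit's real API):

```javascript
// Conceptual sketch: a mutation runs at most once per database, and the
// continueOnError flag decides whether a failure blocks the remaining ones.
function applyOne(db, mutations, continueOnError = true) {
  const results = [];
  for (const mutation of mutations) {
    if (db.applied.includes(mutation.name)) continue; // already executed once
    try {
      mutation.up(db.data);
      db.applied.push(mutation.name);
      results.push({ name: mutation.name, ok: true });
    } catch (err) {
      results.push({ name: mutation.name, ok: false });
      if (!continueOnError) break; // a failure blocks the entire process
    }
  }
  return results;
}

// applyAll: run every pending mutation on every database in the project.
function applyAll(databases, mutations) {
  databases.forEach((db) => applyOne(db, mutations));
}

const db = { applied: [], data: { items: [] } };
const addField = { name: "add-tags", up: (data) => (data.tags = []) };
applyOne(db, [addField]);
console.log(db.applied);               // → [ 'add-tags' ]
console.log(applyOne(db, [addField])); // → []  (each mutation runs only once)
```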
For example, when you need to change the structure of the data for a certain collection across your whole customer base, you can do it with a mutation and applyAll. And as I said before, you don't even need DevOps, because Crudit can be deployed entirely on serverless runtimes. It basically just needs a Node.js-compatible runtime: Lambda, Google Cloud Functions, Edge Functions, Azure Functions; there are a ton of different solutions that can run Crudit, and you don't need DevOps. You don't need to think about scaling, you don't need to think about sizing your server, because it's all handled by the cloud.
Who is this for? It's for modern-stack JavaScript projects that need database collections. Think about front-end-driven apps, where the first thing you think about is your customer experience, and then you need something to save your data: you can use Crudit with a simple integration. You can use Crudit in Node.js applications with MongoDB as a tool to manipulate your data, to add validation, and to handle CRUD operations, in any project that is written mostly in JavaScript. You don't need complex infrastructure in an explorative or hobby project, because there Crudit can really reduce the cost to zero: in an explorative or hobby project you can stay inside the free tier of the many, many services where Crudit can run. Who needs something else? Crudit is not for everyone.
At the moment Crudit doesn't support assets, so for an application that needs the customer to import and export assets, it doesn't work. It works with non-relational, document-based databases, so if your data structure is very relational, Crudit is not the best solution for you. If you have highly complex business logic, maybe you need something more robust. And if you need to separate the business logic from the data, Crudit is also not the best fit, because Crudit is just an API to write to a database, with some extra features; if you need robust business logic, you need something more powerful. It's also not ideal for someone who needs a graphical interface to access the back end, such as a CRM: you can build one with Crudit, but you have to develop it from the ground up. Crudit can be used as an API to write to a database, and it can be used to read from a database, but if you need a graphical interface for the back end, it's not the best, because you have to write it on your own. What do we want to add?
We want to add importing configuration from YAML and JSON. This will be extremely important for the mutations, for example, because you'll be able to bulk-edit them in YAML or JSON. We would also like to add support for media assets without creating a custom endpoint: say you send the Base64 of an image, and we store the asset in a bucket. That's the idea, but we are still working on it. We are open to suggestions for future implementations, and contributions are open too: if you want to participate in this project, we'd be really happy. And GitHub stars are never enough. So, thank you.
It was a pleasure to be part of this event. We hope to see you again as soon as possible. I'm Daniele, and this is Crudit.