Transcript
This transcript was autogenerated. To make changes, submit a PR.
Welcome everybody. Today we will learn how modern
browser APIs can enhance our web projects and help us
think about and design features in a new way. As we
will see, many of these APIs aim to close the feature gap
with native apps, allowing us to deliver a richer experience to our
users and offer functionality that was so far precluded
to web applications and available only in native apps.
I will show you several demos and, of course, all the code
behind each of the APIs we present. However,
I will focus more on live examples in order
to keep this talk entertaining and dynamic,
but also because I left many comments in
the code, so it's very easy to follow along.
Now, if, looking at the picture, you're wondering whether
we can launch a space rocket with our browser,
well, the answer is: not completely, yet. Or at least
I would not try it with Internet Explorer 11.
That said, JavaScript went to space: this
is the Dragon 2 console, built with Chromium,
JavaScript and C++ for the touch commands.
The Dragon is the first spacecraft with touch screens.
The interface makes extensive use of web components and a custom
reactive framework. Of course,
as you can see, there are still some hardware buttons
and toggles for the most critical operations.
However, even if with our web project we don't plan to go to
space, we can still benefit from browser APIs, and we
will shortly see how. Before proceeding,
just a couple of words about me. My name is Francesco
Leardini and I work as a senior consultant and Angular trainer
at Trivadis. Trivadis is an IT consultancy company
based in Switzerland covering a wide set of technologies.
Even though I work as a full stack engineer,
with .NET and C# on the backend side,
I have a special interest in web technologies like progressive
web apps, Angular and, in general, the modern web.
But enough about me. Without further ado, let's start our API
journey now. The first API
we will see is the Page Visibility API. It provides events
we can listen to in order to know when the document changes visibility.
When the user minimizes the page, for example,
or opens a new browser tab, this API
will trigger an event so that we can query the document
object to know precisely whether the
page is hidden or visible.
We can think of practical usage scenarios for this
API: when we have, for example,
a carousel with some rotating products that we want to
present to our users, we can stop
the carousel from presenting the next item
if the page is minimized, for example, or not visible.
Another case could be a client application constantly
or regularly polling data from the server to fetch fresh
new records: we can stop it until the page
is visible again in order to optimize the network traffic.
This is particularly important for mobile users.
Let's see now our first demo. At the end of the talk
I will provide you the URL for the GitHub
repo. The demo app we will use
is made with Angular and Angular Material,
but all the browser APIs that we will see
today are absolutely framework agnostic. Therefore you can use them in
React, Vue.js or even vanilla JavaScript:
they all work without any problem.
Every section for each API that we will
discover and describe is composed of a short
description; for the APIs that are still experimental,
it also tells you whether you have to use HTTPS
and enable some flags in Chrome
in order to make it work. Then there is the core of the
demo itself in the central part, and at the bottom there is a browser
compatibility section. This is also very important because, as you
will see especially for the more experimental APIs,
it tells you how broadly and widely
supported this API is across the different browsers.
In this case, the Page Visibility API is not the newest
API and is therefore widely supported.
For the demo, we have a video that for us is extremely important,
and we don't want our users to lose even a frame of it.
Therefore, when we are here, we can let it play.
When we change visibility,
our application detects that the document
is not visible anymore, and here, for the sake of the
demo, I made it so that the title also reflects this state.
So the video is paused. When I go back again
to the previous page, the API will trigger
an event that the document is visible again, and the
video will automatically resume from exactly the
second where we left it.
So this is a simple case of how we can implement
this API in a real scenario.
A very positive and nice thing about the code behind it
is that this API is very easy to implement.
Leaving aside, as I said, all the boilerplate of the Angular framework,
the logic is very simple: we have
a video element, to which I get a
reference through a template variable
on the component side, so I can use it
at the class level. I listen for the document's
visibilitychange event, and when this event
occurs I can query the document object, see
whether it's hidden or not, and then apply the
logic I want. In this case I pause or
play the video again according to the visibility
of the document. Down here
we just change the title, but this is not really relevant;
it's just for the purpose of the demo.
The Screen Wake Lock API is a very cool API,
actually one of my favorites. To save battery,
mobile devices typically go into
sleep mode or idle after a specific timeout.
It could be 15 or 30 seconds in most cases, or, if
you set it to a longer period, of course even 1 minute;
on my phone, though, it's typically 15 seconds.
The Screen Wake Lock API provides a way to prevent
the device from dimming and locking the screen.
This is very important for web applications that
have to keep showing some content,
because it allows keeping the screen unlocked and always active
while displaying it. Another case
is a cooking web application providing some kind
of recipe with steps that the user has to follow.
And this is, as you can imagine, very important
if we have to use our hands for
some parts of the recipe and want to keep
the screen always active and not locked. Think about being
in a situation like this, and all of a sudden the screen
goes dim and idle and you have to unlock it when
your hands are in that state. Not very nice.
For this demo I will use my phone,
because of course I cannot use my desktop for that,
and I will mirror it so that we can
have my phone here, mirrored,
and the desktop version side by side. For this API
I chose specifically this recipe,
which is extremely long. Not because the recipe is good,
I never tried it, but to give a real taste
of a real case: a very
long, multi-step recipe that we have to follow, using our
hands to bake and prepare it. So let's
imagine we start cooking and I leave my phone there.
In the meantime I go on explaining the API.
The API works only on secure connections, and
we need a user interaction in order to
enable it and get a lock, a sentinel
lock. We can see in the meantime that my phone has now gone dim.
So let's imagine we have dirty hands and this
is the case: not very nice.
But if we explicitly
ask to keep the screen unlocked through this API,
we can now go on talking and we will see that, without interacting
with my phone, the phone will stay active, and so we can
always keep going with
the recipe. Of course, we probably have to scroll down a little bit,
but we can always use the tip of our nose for that.
It's much easier than having to unlock the phone completely.
So, as I said, we need to get the wake lock sentinel,
and I also provide another checkbox,
enabled once the first one is checked, to say that we want to
get the lock back if we navigate
to the page again, because when we lose
the focus or navigate somewhere else, we lose this
wake lock sentinel. By doing
this, we can implement some custom logic to
request it again if the user explicitly asked for it,
and then automatically re-enable it.
For the user it will be completely transparent.
As you can see, the screen didn't dim.
Only now am I interacting with it, but in all the time
that passed so far, the screen didn't go
idle. So it's very interesting for these kinds of
scenarios. And as
we can see in the template, it's very simple.
When the checkbox changes its state, we invoke the lockEnableChange
method. What this does is
toggle the sentinel state:
if not enabled, we request the wake lock;
otherwise we release it. To request the wake lock, we invoke the request
method on the wakeLock object and pass the
'screen' parameter. At the moment this is the only value
we can pass; in the past there was also a 'system' value,
but it has been removed. From this moment on,
our screen will no longer be locked and dimmed.
Then, although this part is only for demo
purposes, I show the state of our sentinel in the UI
through this sentinelActive variable: we
can listen to a release event in
order to know when the wake lock has been released,
for example when we navigate somewhere else.
When we navigate completely away from our application, we can of course
release all the resources in the ngOnDestroy
hook, and here, for example, we release the lock as well.
This is done by invoking the release method on
the lock sentinel object that we got initially.
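Outside Angular, the request-and-release flow just described can be sketched like this (the `shouldRequest` helper and the function names are my own; `navigator.wakeLock.request('screen')`, the sentinel's `release()` method and its `release` event come straight from the API):

```javascript
let sentinel = null;

// Pure helper: only request a new lock when enabling and none is held yet.
function shouldRequest(enabled, hasSentinel) {
  return enabled && !hasSentinel;
}

// Invoked when the demo's checkbox changes state.
async function lockEnableChange(enabled) {
  if (shouldRequest(enabled, sentinel !== null)) {
    // 'screen' is currently the only accepted value ('system' was removed).
    sentinel = await navigator.wakeLock.request('screen');
    sentinel.addEventListener('release', () => {
      // Fired e.g. when the page loses visibility; handy to update the UI.
      sentinel = null;
    });
  } else if (!enabled && sentinel !== null) {
    await sentinel.release(); // give the screen back to its normal timeout
    sentinel = null;
  }
}
```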
So it's very simple, yet very powerful, to implement this
kind of functionality, like, for example,
when we have to provide a cooking web application.
The Ambient Light API is still a very experimental API,
but I think it is really interesting, because it might
already make us think of some new scenarios and new capabilities
that we can leverage in web applications,
especially on mobile phones, or at least on
devices that are capable of reading the light level around the
device itself. The AmbientLightSensor
interface is part of the Sensor APIs collection
and gives information about the light
registered by the target device, which we can then use
within our application.
Again, for this API
I will use my phone, and as you can already see in
the browser compatibility section, it is not exactly well
supported yet. It's still experimental; plus, we need
an extra flag to be enabled in Chrome in order
to use it. But still, let's see how it works.
If I now move the phone close to
my lamp or my window, so to a source of light, we can
see that the light level increases or
decreases according to how far away or how strong the light is.
Now, if I slowly move away from it until
a specific threshold is reached,
we can see that when the light level dims below 50,
the whole interface
automatically switches into,
let's say, dark mode. This is the feature I implemented
using the light level
around the phone. There are for sure many other
cool usage scenarios for this, but I found this one
could be quite interesting: the possibility of
giving our users exactly what the Google Maps
native app offers, where the application automatically
switches to night mode, or dark mode, if we are
in a tunnel, for example. Through our web application
we can offer exactly the same functionality.
Again, it's very simple to implement this API.
When we use these APIs, especially experimental ones,
it's always good to check whether they are supported
by the browser, and eventually provide a message if
not, offer a fallback if available, or
otherwise simply silently ignore the functionality.
We can read the ambient light by creating
a new instance of the AmbientLightSensor object.
Once we have this sensor instance,
we can listen for errors, of course,
especially if permission was not granted for
it. If it is then granted, we can
try to read the ambient light again,
and once the user has granted permission,
we can start reading.
Once we start reading
with the start method, we can keep reading
the new values live and then act accordingly.
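In plain JavaScript, the flow just described might look like the sketch below. The 50-lux cutoff matches the demo's rough split; the `themeForIlluminance` helper is my own naming, while `AmbentLightSensor`'s `reading` and `error` events, `start()` and `illuminance` are the API itself.

```javascript
// Threshold used in the demo: below ~50 lux we treat the room as dark.
const DARK_THRESHOLD = 50;

// Pure helper: map an illuminance reading to a theme name.
function themeForIlluminance(illuminance) {
  return illuminance < DARK_THRESHOLD ? 'dark' : 'light';
}

// Guarded: AmbientLightSensor only exists in supporting browsers
// (and may need a flag enabled in Chrome).
if (typeof AmbientLightSensor !== 'undefined') {
  const sensor = new AmbientLightSensor();
  sensor.addEventListener('error', (event) => {
    // Typically a NotAllowedError when permission was not granted.
    console.error('Sensor error:', event.error.name);
  });
  sensor.addEventListener('reading', () => {
    // React to every new value, as the demo's updateTheme(illuminance) does.
    document.body.dataset.theme = themeForIlluminance(sensor.illuminance);
  });
  sensor.start(); // begin delivering 'reading' events
}
```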
In this case, after we start reading light values,
I constantly react by
invoking an updateTheme method and passing the
illuminance value. This illuminance value gives
us an idea of how bright or
how dark the area or room around the
device is. I used these values as thresholds. Of course you
can be much more specific, but broadly speaking,
between 10 and 50 it's a dark or not well illuminated
environment, and above this level
it's a normal or even a bright environment.
Therefore you can use different values in order
to provide even more specific or advanced scenarios,
but for this example it was enough
to check whether it was a dark environment or not,
and then simply switch to bright
or dark mode accordingly. That's it: very simple,
and yet it could be very
interesting to provide some functionality that is indeed
still quite innovative nowadays.
The File System Access API is another very interesting API
because it offers a new way to manipulate files in
the local file system, and in this way provides a
much better user experience, because it offers
the same functionality that is given through a native
app. When we want to
save a file or apply changes to
that file, we don't have to download it and save it to a specific new
location every time we edit its content; we
can apply the changes straight and directly to it. We can
even implement an autosave functionality that gets
triggered automatically, maybe with a timeout.
The File System Access API offers two methods:
one is called showOpenFilePicker
and the other showSaveFilePicker. Of course, as the
names suggest, once we invoke them, they trigger
a file picker dialog to open or save a
file, respectively. Once a
file is open, we get back a file handle object we can use to
interact with the file itself, getting for example information about its
size, name, extension and so on.
And typically we want to keep somewhere
a reference to this file handle, so that we can reuse it to
save the changes we made to the file straight
to the physical file itself, without having to trigger
the picker dialog again
and again. Let's see it now in action: how does
it work? This is our File System Access
API demo, and it's a typical case where we
have, for example, to create a text file.
We don't want to save it with a file save
dialog every time, overwriting the
file again and again. So let's type 'welcome
conf 42', and then we want
to invoke Save As: we want to save
to a new file we call conf42.txt,
and we save it. We can see
on the top right of
the browser that a new icon appeared, which grants
the read permission, and also
read and write, because we created a brand new file.
So this explicitly gives read and
write permissions to edit the file,
and we can see this reflected here.
Now, if we edit this file,
we can see we also have a new button: Save. I just edit
and click Save, and these changes are already applied
to the file itself, thanks to this file handle that keeps the
connection to our file open. This is very cool: we didn't have to open
the file dialog again for that.
And to prove it, let's say we go back to
the home page, so a brand new page, and now I open
the same file we created, conf42.txt.
When I open it, we can see that the new content
that we added before is reflected,
as indeed it was saved correctly. And we always have
at the top the icon with the information that
we can edit this file. So it's very nice to
give a really new set of possibilities to
our applications.
So, looking at the code,
everything starts when we save a file. Let's say we create a brand new file
and we click Save As. The file handle in this case
has not been created yet, therefore we
invoke the saveAs method. The saveAs
method tries to get the handle
itself, and the handle is given back by
the methods that I mentioned before:
showSaveFilePicker, or otherwise,
if we just have to open a file and not save a new one,
showOpenFilePicker.
When we save, we can of course provide
options for the file dialog: a description,
which accepted extensions
we want to allow, and so on. But the
cool thing is that once we get back this file
handle, we keep it in
the context of our application, so it's up
here. Then let's imagine we change
the content of the file and click Save, so not Save
As: when we click Save,
we check whether the file handle exists.
Yes? In that case we go straight to
updating the content using this file handle:
we create a writable, and the rest is just normal write-to-file
code. So the cool thing to keep in mind
is this file handle and the possibilities it opens for
us. It keeps, in a way, an open
connection to the instance of the physical
file, and then allows us to push all
the saves and changes straight
to the file itself.
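The Save / Save As branching just described can be sketched in plain JavaScript like this (the `needsPicker` helper, the function names and the picker options are my own illustration; `showSaveFilePicker` and the handle's `createWritable` are the API calls from the talk):

```javascript
let fileHandle = null; // kept in the app context so Save can reuse it

// Pure helper mirroring the demo's branching: Save As when there is no
// handle yet, plain Save once we already hold one.
function needsPicker(handle) {
  return handle === null || handle === undefined;
}

async function save(content) {
  if (needsPicker(fileHandle)) {
    // First save: trigger the native dialog once and keep the handle.
    fileHandle = await window.showSaveFilePicker({
      types: [{ description: 'Text file', accept: { 'text/plain': ['.txt'] } }],
    });
  }
  // Every later save writes straight to the physical file: no dialog.
  const writable = await fileHandle.createWritable();
  await writable.write(content);
  await writable.close();
}
```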
Web Share is also another cool API.
It allows sharing an object with a text content, a title
and a URL, using the device's native content sharing capabilities.
Think of native apps like Twitter,
Facebook, WhatsApp, but also email
or text messaging and other
native apps that can receive content and so
share specific content through the app
itself. At least one of these three properties,
so title, text or URL, must
be defined, but it's best practice to always define them all,
because we don't know which application the user will choose in the end.
In a text application like WhatsApp, for example,
the title might not be that important. But if
an email app is chosen instead, then the title
is automatically used as the email subject.
Therefore we should always define
all three properties, if possible.
Let's see now how it looks. It's interesting to
see that Chrome recently also allowed,
on the desktop version, the
possibility to use this Web Share
API. Now, if I click on it,
the desktop detects some
applications that are capable of interacting with or
using this API. A few months ago, or
in some previous versions of Chrome, this was not possible.
For the demo, though, I will use my phone, because I think
it's more interesting. So here there are the
two recipes for the cocktails
that I prefer, and let's imagine I want to
share one with a friend of mine. So we can provide the
typical share icon to say: okay, this content
is shareable through some native
apps. Here, for example, I have a list,
and I could even choose more, but I have a list of applications
that could accept this shared
object. I take WhatsApp, for example, and select one
contact. We can already see at the bottom the
object that is going to be shared, and if I click Send,
I can send to the target contact a title,
as we can see, then the body text and,
eventually, at the bottom, the URL. So it's a very easy
way to share content and to provide some more engagement for
our web applications. Let's see now
how we can create it.
First of all, we define our share object with,
as we said, title, text and URL. And there is even a second
level of Web Share that also allows sharing
files, for example images.
But that said, the API is extremely
simple: we just have to invoke the share method
on the navigator object, of course only if the
Share API is available. This
will prompt the share dialog for
us, as we saw before, and then the user can pass
the specific object that we defined here to the chosen
target native app.
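A minimal sketch of the whole thing, assuming placeholder values for the title, text and URL (the `canShare` feature check is my own; `navigator.share` is the real method):

```javascript
// Best practice: define all three properties, since we don't know which
// target app the user will pick (an email app uses the title as subject).
const shareData = {
  title: 'Conf42 talk',                  // placeholder title
  text: 'Check out these browser APIs.', // placeholder body text
  url: 'https://example.com',            // placeholder URL
};

// Feature check, kept pure so it is easy to test.
function canShare(nav) {
  return !!nav && typeof nav.share === 'function';
}

// Must be called from a user gesture (e.g. a click on the share icon).
async function shareIt() {
  if (!canShare(typeof navigator === 'undefined' ? undefined : navigator)) return;
  try {
    await navigator.share(shareData); // opens the native share sheet
  } catch (err) {
    // The user may simply dismiss the dialog; an AbortError is expected then.
    console.warn('Share cancelled or failed:', err);
  }
}
```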
The Contact Picker API is also very experimental,
but it also discloses a wide set of possibilities
for web applications, because accessing the device's contact
list has so far always been precluded to web applications:
it was a peculiarity of native apps only.
But thanks to the Contact Picker API, we can now select
contacts from the device list and use them within the context
of a web application. We can extract properties
from the contact list like the name, the email address,
a phone number, the address and
an icon, of course if these are all provided, and then
we can use them however we want within the context
of our web application. Again, since we
access sensitive data, there
are some prerequisites that have to be fulfilled, similar to other APIs:
it must be a secure connection, so we have to
use HTTPS, and we can use this API
only after it has been triggered by
the user, so by a click or a specific user interaction.
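A hedged sketch of such a flow in plain JavaScript (the property list and the `firstOr` helper are my own; `navigator.contacts.select` is the Contact Picker entry point, and it must be called from a user gesture over HTTPS):

```javascript
// Properties we'd like back; missing ones simply won't be filled in,
// so the UI must not assume they are all present.
const PROPS = ['name', 'email', 'tel', 'address', 'icon'];

// Defensive accessor: the picker returns arrays that may be empty.
function firstOr(values, fallback) {
  return Array.isArray(values) && values.length > 0 ? values[0] : fallback;
}

async function pickContact() {
  if (typeof navigator === 'undefined' || !('contacts' in navigator)) {
    return null; // e.g. desktop browsers: gray out or hide the feature
  }
  const [contact] = await navigator.contacts.select(PROPS, { multiple: false });
  return {
    name: firstOr(contact.name, 'Unknown'),
    email: firstOr(contact.email, ''),
    tel: firstOr(contact.tel, ''),
  };
}
```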
Let's see how it looks. Of course, in
this case, since it's not available on
the desktop browser, I can gray it out, or
I can simply inform the user that for that
browser it's not
possible to use it. But let's look again
at my phone: we just checked the Web Share
API, and now the Contact Picker, which is working here.
So if I click on the select list, we get a list
of all our contacts.
And then, if I choose one and
select it, I can extract its information from
my contact list and use it within the context of
the web application. Here we can see we have
first name, last name, an icon, eventually the
full address and the country, if available. Okay, of course
this is test content, so the data
is all there, but it cannot always all be
defined. So you should design your UI
for the case where not all the information is available,
in order to avoid the UI breaking.
The bad news is that the browser compatibility for this
API is not exactly wide
yet. It works mainly with the Android browser,
Opera Mobile and,
of course, Chrome for Android. So we still have to wait
a little bit for it to work on a
wide set of browsers. But still, we can use it
in some cases in order to provide some enhanced
functionality or scenarios to our users.
The Vibration API is our last API for this
session, and it is quite interesting because, again, it allows
interacting with the hardware of
the device, of course only if the target device,
for example a phone or a tablet,
is able to vibrate; a desktop
or a laptop probably does not allow that.
As we said, it gives our web applications the possibility
to interact with the vibration hardware, and maybe
discloses some interesting scenarios.
For example, we might provide tactile feedback when
it's needed: if it's a web game and
some explosion occurs, then the device vibrates;
or if the user makes
some errors in a form, then we
can show some visual
errors and eventually also let the device vibrate.
We can pass an array of integer values to
the vibrate method of the navigator object, and each
value tells, respectively, for how many milliseconds
the device must vibrate and for how many milliseconds it should
pause. Therefore, we
can create any kind of vibration pattern using these alternating
values of vibration and pause.
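The alternating vibrate/pause pattern can be sketched like this (the exact durations are my own illustrative values; `navigator.vibrate` is the real method, and it is simply absent on devices without vibration hardware):

```javascript
// Alternating vibrate/pause durations in milliseconds:
// vibrate 200, pause 100, vibrate 200, pause 100, vibrate 200.
const CORRECT_PATTERN = [200, 100, 200, 100, 200]; // three short buzzes
const WRONG_PATTERN = [1000];                      // one long buzz

// Pure helper: pick a pattern according to the quiz answer.
function patternFor(isCorrect) {
  return isCorrect ? CORRECT_PATTERN : WRONG_PATTERN;
}

function vibrateFor(isCorrect) {
  // Guarded: navigator.vibrate only exists on supporting devices.
  if (typeof navigator !== 'undefined' && typeof navigator.vibrate === 'function') {
    navigator.vibrate(patternFor(isCorrect));
  }
}
```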
But let's see how it works.
The vibration demo is in here, but I will not go
through it, because I don't want to spoil it; I will let you play with it once you
access the repo.
I made a quiz, so there are very few steps with some
simple questions, and if you answer in
a correct way, then your device will give three short vibrations;
of course, you most probably have to do it with a phone in
order to make it work. If you
provide the wrong answer, then it will be just a single
long vibration. This could also be another scenario where we can
provide some more interesting or
entertaining features to our quiz
or web app, according to the case,
of course. As we said, the implementation is
extremely simple: we just create
a specific pattern and, according to whether the
answer was correct or not, we invoke and
use one or the other pattern,
so the wrong or the correct pattern, and
then the only thing we have to do is to invoke
the vibrate method on the navigator object
with the chosen pattern.
This is all we need to do. And of course it
works only if the phone or
the device provides
vibration hardware. And
this concludes our browser API collection.
Here you can find the GitHub repo URL, where
you can find the demos and the code itself, plus some
links to social media like Twitter and the dev.to portal,
where I wrote some articles, especially about progressive web apps,
Angular and some of the modern web APIs that we discussed
tonight. I hope you enjoyed this session
and that I could provide some new hints, especially by showing some APIs
that maybe you were not aware of. That said,
thank you very much again for your attention, and I
hope you enjoyed it.