Transcript
This transcript was autogenerated. To make changes, submit a PR.
I'm really excited for this opportunity to share what's possible for
robotics with AWS. Let's get started.
The use of robots continues to accelerate as industries
and organizations realize the role of automation
in improving productivity, maintaining business continuity,
and ensuring flexibility and adaptability.
We see increased adoption across all major sectors,
including industrial, services, consumer, agriculture,
and more. The number of robots is estimated to grow to 20 million
by 2030. People have been
creating devices and machines to perform mundane
tasks for hundreds of years. However,
the ability to create a device that can sense,
compute and act with a degree of autonomy
is relatively new. The first generation of
robots, such as the robotic arm, could perform simple,
systematic tasks. They were largely preprogrammed
or human directed. Since those early years
in robotics, there have been remarkable advancements in
computing, sensor technology, and machine learning
that have enabled a shift towards next-generation robots:
robots with a wide range of autonomy, powered by
advanced data analytics and natural human interfaces,
that can perform advanced tasks in collaboration
with humans. We believe that
connecting robots to the cloud and the edge
is powering the next evolution in robotics. With AWS,
connected robots can capture, store, and aggregate limitless
amounts of data to train and develop new functionality,
optimize, and drive efficiency. Connected robots
can update themselves to ensure reliability and
reduce downtime. Operators and technicians can remotely
connect and interact with them. So why do customers choose
AWS to build, deploy and manage their robots?
First, AWS helps you build mature solutions
from day one. With AWS, you can become agile
in your experimentation and innovation. With AWS,
you can deliver a differentiated value proposition by collecting
and harnessing the power of data to drive innovation.
AWS provides unmatched scalability,
continuity, and reliability. The elasticity
of the cloud means more compute power to provide increasing
levels of autonomy and facilitate orchestration.
AWS enables you to extract the full value of
robot automation, whether you are looking to reduce downtime
or implement a new business case.
Let's look at how customers are using AWS for robots.
Robot builders use AWS to build intelligent
connected robots faster. You are able to train robots
to execute complex tasks, collect telemetry data securely,
and continuously improve your robotic devices and their software.
In addition, you are able to test
new capabilities using simulation across a variety of scenarios,
as well as accelerate development by using pretrained machine learning
and artificial intelligence models. You are able
to build for scale, operate robot solutions at
scale, and maintain those solutions remotely
with cloud-based software provisioning and over-the-air updates.
You are able to connect and monitor heterogeneous robot fleets
with vendor integrations, and build applications
that help optimize traffic flow.
We have a portfolio of services that can help support
all of these use cases, whether you want to connect and
manage your robotics application or to enable new workflows.
At AWS, we always start with connectivity and
collecting data from the devices to enable many
other capabilities and use cases. You can leverage AWS
services to add capabilities such as over-the-air updates,
predictive and preventive maintenance,
or machine learning inference at the edge.
If you are operating robots at scale and are looking
for a single pane of glass, we have services that can
help you monitor disparate robot fleets more
efficiently. This is the Internet of Robotic Things, where
intelligent devices can monitor events,
fuse sensor data from a variety of sources,
use local and distributed intelligence to determine the best course of
action, and then act to control or manipulate objects
in the real world. It has four aspects: sense, connect,
learn, and actuate. Sense is
about collecting and processing data streams from sensors,
lidars, cameras, and other sensors
in the environment, and storing the data in Amazon S3 or Amazon Kinesis
Data Streams. Connect is about sending the data to
the cloud, or even connecting and interoperating with other robotic
devices, systems, and equipment. Learn
is about using the data collected from the robots to run
and train machine learning models,
then deploying the models on the robot,
where the robot can execute complex functions and workloads
and make decisions. Actuate is about interacting with
the environment in a safe manner: interacting with humans, with other robots,
and with other automation safely. Let's look
at some common use case architectures.
In this architecture, the robot collects sensor data
and log data and transfers the data to the cloud using AWS
IoT Core, where the data can be routed
to be stored in a variety of database systems.
It can also be stored in an analytics data lake,
which can be consumed by visualization tools
to provide reports and live dashboards.
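As a minimal sketch of the device side of this flow, here is what publishing one telemetry message to AWS IoT Core could look like using the open source paho-mqtt client; the endpoint, topic, and certificate paths are placeholders, not values from this talk.

```python
import json
import ssl
import paho.mqtt.client as mqtt

# Placeholder endpoint, topic, and certificate paths for illustration.
ENDPOINT = "abc123-ats.iot.us-east-1.amazonaws.com"
TOPIC = "robots/robot-001/telemetry"

client = mqtt.Client()
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="robot-001.cert.pem",
               keyfile="robot-001.private.key",
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect(ENDPOINT, port=8883)

# Publish one sensor reading as JSON; IoT Core rules can route it onward.
reading = {"robot_id": "robot-001", "battery_pct": 87, "temp_c": 41.2}
client.publish(TOPIC, json.dumps(reading), qos=1)
client.disconnect()
```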
At the same time, IoT Core provides a feature
called the device shadow, which is essentially a JSON document
representing the actual and the desired
state of the robot. So an application can
interact with the robot through the shadow: the application
sets the desired state for the robot, and when the
robot is connected to the Internet, it connects to IoT Core,
fetches the desired state from the shadow, and starts acting
to reach that desired state.
Also, when the robot's own state changes,
it connects to IoT Core and updates the
shadow, saying, this is my current state.
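To make the shadow interaction concrete, here is a small sketch of the cloud-side application using boto3; the thing name and state fields are hypothetical.

```python
import json
import boto3

# The IoT data-plane client talks to the device shadow service.
iot_data = boto3.client("iot-data", region_name="us-east-1")

# Hypothetical thing name and state fields for illustration.
desired = {"state": {"desired": {"mode": "patrol", "max_speed_mps": 1.5}}}
iot_data.update_thing_shadow(
    thingName="robot-001",
    payload=json.dumps(desired).encode("utf-8"),
)

# Read the shadow back; its "delta" section shows desired vs. reported gaps.
resp = iot_data.get_thing_shadow(thingName="robot-001")
shadow = json.loads(resp["payload"].read())
print(shadow.get("state", {}))
```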
AWS IoT Core lets you connect billions
of IoT devices and robots and route
trillions of messages to AWS services without managing
any infrastructure. IoT Core is a managed cloud
platform that lets connected devices
easily and securely interact with cloud
applications and with other services.
It supports different communication protocols,
including MQTT, HTTPS,
MQTT over WebSocket, and LoRaWAN.
AWS IoT Core also secures device connections and
data with mutual authentication and end-to-end
encryption. IoT Core can filter,
transform, and act upon device data on the fly
based on business-defined rules.
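For example, a rule that archives every telemetry message to S3 could be created like this with boto3; the rule name, role ARN, and bucket are placeholders you would define yourself.

```python
import boto3

iot = boto3.client("iot", region_name="us-east-1")

# Hypothetical rule: archive any robot telemetry message to S3.
iot.create_topic_rule(
    ruleName="ArchiveRobotTelemetry",
    topicRulePayload={
        "sql": "SELECT * FROM 'robots/+/telemetry'",
        "awsIotSqlVersion": "2016-03-23",
        "actions": [{
            "s3": {
                "roleArn": "arn:aws:iam::123456789012:role/iot-to-s3",  # placeholder
                "bucketName": "robot-telemetry-archive",                # placeholder
                "key": "${topic()}/${timestamp()}.json",
            }
        }],
    },
)
```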
Another architecture is about software development and managed
application deployment, where you have two fleets of robots,
fleet A and fleet B, and you want to
deploy different versions of your application, like
A/B testing. For example, you can use IoT Core
to create thing groups. In this case, we create
thing group A and thing group B and ensure that
all robots in fleet A are under thing
group A and all robots in fleet B are
under thing group B. Then you can use IoT Greengrass
to deploy software A to group A
and software B to group B,
which ensures each software version is deployed
only to its corresponding fleet.
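A rough sketch of that setup with boto3 might look like the following; the group, thing, and component names are illustrative, not from this talk.

```python
import boto3

iot = boto3.client("iot", region_name="us-east-1")
gg = boto3.client("greengrassv2", region_name="us-east-1")

# Create the two thing groups and register a robot into group A.
group_a = iot.create_thing_group(thingGroupName="fleet-a")
iot.create_thing_group(thingGroupName="fleet-b")
iot.add_thing_to_thing_group(thingGroupName="fleet-a", thingName="robot-001")

# Deploy version 1.1.0 of a hypothetical component to everything in group A.
gg.create_deployment(
    targetArn=group_a["thingGroupArn"],
    deploymentName="fleet-a-software",
    components={"com.example.RobotApp": {"componentVersion": "1.1.0"}},
)
```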
So IoT Greengrass is an open source edge runtime
and cloud service for building,
deploying, and managing device software. IoT Greengrass
makes it easy to bring intelligence to edge devices, such
as anomaly detection and powering autonomous devices.
You can deploy new or legacy apps across fleets using
Java, Python, Node.js, or even running
a container image. IoT Greengrass can collect,
aggregate, filter, and send data locally,
and also manage and control what data goes to the cloud for optimized
analytics and storage.
Another architecture is where
you want to run machine learning at the edge, and this also
uses IoT Greengrass, as it makes it easy to
perform machine learning inference locally on robots
using models that are created, trained,
and optimized in the cloud. IoT Greengrass gives
you the flexibility to use machine learning models trained in
Amazon SageMaker, or even to bring your
own trained model and save it in Amazon S3.
You can use machine learning models that are built,
trained, and optimized in the cloud and run inference
on robots. For example, you can build a predictive
model in SageMaker for scene detection
analysis, optimize it to run on any camera,
and then deploy it to predict suspicious activity and
send an alert. Data gathered from
the robots running IoT Greengrass
can be sent back to SageMaker, where it can be
tagged and used to continuously improve
the quality of the machine learning model.
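As a rough illustration of the edge-inference loop, assuming the model was exported to TensorFlow Lite and staged in a hypothetical S3 bucket, the Python code inside a Greengrass component might look something like this:

```python
import boto3
import numpy as np
import tflite_runtime.interpreter as tflite

# Fetch a model that was trained and optimized in the cloud
# (bucket and key are placeholders).
s3 = boto3.client("s3")
s3.download_file("robot-models", "scene-detector/model.tflite", "/tmp/model.tflite")

interpreter = tflite.Interpreter(model_path="/tmp/model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def detect(frame: np.ndarray) -> np.ndarray:
    """Run one local inference pass on a preprocessed camera frame."""
    interpreter.set_tensor(inp["index"], frame[np.newaxis].astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])
```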
In this architecture, the user wants to stream video
from robots and also have playback through a mobile application.
Amazon Kinesis Video Streams makes it easy
to securely stream video from connected robots to AWS for
analytics, machine learning, playback, and other processing.
Kinesis Video Streams automatically provisions and scales
all the infrastructure needed to ingest streaming video data
from millions of devices. It durably
stores, encrypts, and indexes video data in your
streams and allows you to access your data through an easy-to-use
API. Kinesis Video Streams enables
you to play back video for live or on-demand viewing.
You can quickly build applications that take advantage of computer vision
and video analytics through integration with Amazon Rekognition
Video and with libraries for machine learning frameworks
such as TensorFlow and OpenCV.
Kinesis Video Streams also supports WebRTC.
This is an open source project that enables real-time
media streaming and interaction between web
browsers, mobile applications, and connected
robots via a simple API.
Kinesis Video Streams supports media ingestion over
WebRTC connections for secure storage,
playback, and analytics processing.
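For playback, one common pattern is to mint an HLS session URL from a stream; a minimal boto3 sketch, with a placeholder stream name, could look like:

```python
import boto3

# Look up the endpoint for HLS playback, then mint a viewing session URL.
kvs = boto3.client("kinesisvideo", region_name="us-east-1")
endpoint = kvs.get_data_endpoint(
    StreamName="robot-001-front-camera",   # placeholder stream name
    APIName="GET_HLS_STREAMING_SESSION_URL",
)["DataEndpoint"]

media = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = media.get_hls_streaming_session_url(
    StreamName="robot-001-front-camera",
    PlaybackMode="LIVE",
)["HLSStreamingSessionURL"]
print(url)  # Feed this URL to any HLS-capable video player.
```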
In this architecture, the user wants to use
simulation to test the same robot application
in different simulation worlds, world one and world
two. And here I would like to mention three
of the core benefits we have heard from customers for using simulation
in robotics development. First,
the ability to reproduce and test the exact scenarios
that have triggered unexpected behavior in the past,
including edge cases and unsafe conditions.
This is difficult when testing with physical robots.
Second, you can speed up the clock and run
simulations faster than real time,
producing results in a fraction of the time it would take
on physical robots. Third,
you can expand test coverage by programmatically testing
many scenarios; here we are testing simulation one and
simulation two, and this can be multiplied using
parameterized and repeatable simulations.
RoboMaker is a cloud service that makes it easy
to build, test, and deploy robotics applications.
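A hedged sketch of launching those two simulation worlds as RoboMaker simulation jobs with boto3 might look like this; the role ARN, application ARN, package, and launch file names are placeholders.

```python
import boto3

robomaker = boto3.client("robomaker", region_name="us-east-1")

# Launch one simulation job per world; identifiers below are placeholders.
for world in ["world-one", "world-two"]:
    robomaker.create_simulation_job(
        maxJobDurationInSeconds=3600,
        iamRole="arn:aws:iam::123456789012:role/robomaker-sim",
        simulationApplications=[{
            "application": "arn:aws:robomaker:us-east-1:123456789012:simulation-application/my-sim/1",
            "launchConfig": {
                "packageName": "my_robot_sim",
                "launchFile": "test.launch",
                "environmentVariables": {"WORLD_NAME": world},
            },
        }],
    )
```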
In this architecture, we want to build an application that
can work with robots from system A and also robots from
system B, where you have different robots from various vendors
and you want them to work seamlessly with each other.
So AWS IoT RoboRunner makes it easy to
build applications for optimizing fleets of diverse
robots. IoT RoboRunner provides a
central data repository for storing and using data
from different robot management systems and
enterprise systems. Once robots are connected,
you can use sample applications and software development
libraries to build management applications on
top of the centralized data repository.
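As a loose sketch of registering robots into that repository with boto3: the names are placeholders, and the parameter names here are assumptions that should be checked against the IoT RoboRunner API reference.

```python
import boto3

# Note: parameter names below are assumptions; verify against the
# AWS IoT RoboRunner API reference before use.
roborunner = boto3.client("iot-roborunner", region_name="us-east-1")

site = roborunner.create_site(name="warehouse-1", countryCode="US")
fleet = roborunner.create_worker_fleet(name="vendor-a-amrs", site=site["arn"])
roborunner.create_worker(name="robot-001", fleet=fleet["arn"])
```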
IoT RoboRunner helps you build complex management applications
that require robot interoperability, such as
task orchestration, and lets you view information in a single unified
display. As you can
see here, you can use RoboRunner for designing
a shared space. You can define the entry points and
the exit points. Robots wait at the entry
points if they are not cleared for entry by the time
they get there, and robots notify the system
that they are exiting the space at the exit points.
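The gating logic itself is simple to reason about; a toy sketch of the idea (not RoboRunner's API, just an illustration in plain Python) might look like:

```python
import threading

class SharedSpace:
    """Toy model of a shared space with gated entry and notified exit."""

    def __init__(self, capacity: int = 1):
        self._slots = threading.Semaphore(capacity)

    def request_entry(self, robot_id: str) -> None:
        # The robot waits at the entry point until it is cleared.
        self._slots.acquire()
        print(f"{robot_id} cleared for entry")

    def notify_exit(self, robot_id: str) -> None:
        # The robot announces it has left via an exit point.
        self._slots.release()
        print(f"{robot_id} exited the space")

space = SharedSpace(capacity=1)
space.request_entry("robot-001")
space.notify_exit("robot-001")
```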
In this architecture, we will walk through a sample solution that
implements a CI/CD pipeline with automated scenario-based
testing in simulation. First,
AWS CodePipeline is a fully managed continuous
delivery service. You can connect it with your code repositories in
GitHub, GitLab, or Bitbucket and automate
a set of build and test actions at each stage
of your release. Once the pipeline
is set up, developers work in agile sprints and build
new functionality in a feature branch. When ready,
they submit a pull request with the new code. The pull
request gets reviewed and eventually merged into the integration branch.
This starts the first two stages in the CI/CD pipeline.
First, the source code is copied onto a build server
running in AWS CodeBuild, a fully
managed continuous integration tool that runs the
ROS and Docker build commands, then copies the built
container into Amazon Elastic Container Registry,
a central repository of container images.
Once the container images are built and published, the next
step is to run a batch of automated tests in RoboMaker
simulation. In this solution, we use AWS Step Functions,
a low-code solution for building state machines in the
cloud, to track and send notifications on the progress
of the simulations. Test results are automatically copied
from the RoboMaker simulation to an Amazon S3 bucket,
where they can be queried, analyzed, and visualized
using Amazon Athena and Amazon QuickSight.
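Once results land in S3, a failure summary can be pulled with a simple Athena query; here is a sketch with hypothetical database, table, and output bucket names.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical table defined over the S3 test-result prefix.
athena.start_query_execution(
    QueryString="""
        SELECT scenario, COUNT(*) AS failures
        FROM robot_test_results
        WHERE outcome = 'FAILED'
        GROUP BY scenario
        ORDER BY failures DESC
    """,
    QueryExecutionContext={"Database": "robotics_ci"},
    ResultConfiguration={"OutputLocation": "s3://robot-ci-athena-results/"},
)
```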
Once the tests in simulation pass,
the new container images can then be automatically deployed to real
robots in a test area using AWS IoT Greengrass.
Results from the tests on the real robots can also be uploaded
to Amazon S3 and then combined with the simulation test results.
After all of the test validation checks pass,
the code can be merged into the main branch,
and then it can be deployed over the air to the production fleet.
In this architecture, we are using Spot, a robot developed by Boston
Dynamics, for industrial facility inspection
with AWS services. We are using Spot to
run two inference models deployed by AWS
IoT Greengrass. So we are running
AWS IoT Greengrass, which is hosting two inference models.
One is running locally to detect a valve, and
a second one is running remotely to detect the state of the valve,
whether the valve is open or closed. Spot runs
an Autowalk mission where it detects the two valves and
uploads the data to Amazon S3. Once the data is
uploaded to Amazon S3, we use a Lambda function to
read the data and update DynamoDB and CloudWatch
through a GraphQL API powered by AWS
AppSync. We are also hosting a dashboard
which displays the state of the valve as represented
in Amazon DynamoDB and also in CloudWatch.
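A hedged sketch of that Lambda function, assuming an S3 event trigger and hypothetical bucket, file format, and table names:

```python
import json
import boto3

# Triggered by S3 put events from Spot's uploads; names are placeholders.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ValveStates")

def handler(event, context):
    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        result = json.loads(body)  # e.g. {"valve_id": "v1", "state": "OPEN"}
        table.put_item(Item={
            "valve_id": result["valve_id"],
            "state": result["state"],
        })
```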
Let's watch a real-life demo that is being presented at
re:Invent this week.
Right now, Spot is detecting the valve over there.
It will go on an Autowalk mission to detect the second valve
over there and update
the state of the valves on a dashboard that is being
hosted on a monitor here.
So it detects the valve image.
It's going to send the image itself to a TensorFlow model,
and based on that it will upload
a file to Amazon S3, where a Lambda function will process the
file and update the valve state. Spot will
go back to where the mission started.
And this is just a pose from
Spot to indicate it's uploading data, and eventually
the data will be updated on the dashboard.
Robots are doing amazing things, and AWS
is helping the builders behind the robots you see in this video
innovate. In our Amazon fulfillment centers,
you see robots helping make our environment
safe for people and deliver value to Amazon customers.
This is also true for many of our industrial customers.
Other customers are focused on using robots to solve
some of humanity's biggest challenges,
such as making our world more sustainable.
So we are just getting started, but we see the
incredible potential in this space.
It's one small step for man,
one giant leap for mankind.
Let's get started today. I would encourage
you to look at the AWS robotics blogs, where there
are many blogs talking about robot connectivity,
robot simulation, and integrating robots
from different vendors using RoboRunner and RoboMaker. Also have
a look at IoT Greengrass; as we discussed before,
it's a core component that can help you run smart
software at the edge. It can help you with
over-the-air updates and running inference at the edge.
Also, there are open source assets available
in AWS Samples and AWS Robotics.
Thank you very much for joining my session, and please
give me your feedback.