Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, this is Ivan. I will introduce the topic: implementing a virtual-physical environment system based on ROS, using Python and Three.js. This digital twins project was created at my previous company, Paia, which focuses on educational systems. We aimed at university students, to help them use this kind of tool to create, manipulate, and share a model in a virtual environment that is synchronized with the physical one.
It's really helpful for decreasing the cost of the physical environment. Here we list all the agenda items we will talk about later. First of all, I will explain why ROS and why Three.js. Then we will introduce digital twins concepts. We provide a user interface to manipulate the digital twins system, and also a control panel for the physical robotic arm. We use the Three.js library integrated with the Vite framework in the virtual environment. We also use WebRTC to connect an IP camera and display the physical environment on the virtual page. Then we connect to the ROS cluster through rosbridge, which runs in Python, and we subscribe to it on the front-end side with the roslib library. We also provide a ROS cloud API to integrate the systems between the physical and virtual environments. And in the end we provide a conclusion for the overall topic.
As I will tell you later, my background is a little complex. I don't only focus on programming; I am a director as well and have already made some short films. I was also a volunteer some years ago, working in Central America on islands near the Caribbean Sea. And I like cycling; I have already cycled through many parts of different countries. Sometimes I use these kinds of stories to encourage my students, especially the younger generation of engineers, to help them figure out what they really want in their future careers.
Here are the tools and skills we use in our project. First of all, the design tool we use is Figma; it's a well-known tool that most designers use. Then we use some frameworks: one is Vite and the other is the Vue framework; we integrate both in our project. We also use Vue Router, and Pinia to call our back-end API. We use Tailwind and Sass to style our HTML pages. We also chose some skills. One is Three.js, to operate the 3D model of the robotic arm in our virtual environment. We use rosbridge to receive and subscribe to messages for the robotic arm between the physical and virtual environments. We use WebRTC to connect the IP camera and display the physical robotic arm on our virtual page. We also provide a ROS cloud API to update and record the status of the robotic arm. And then we use some other tools, like Docker and Jenkins, for our continuous integration and deployment. They are very helpful for implementing deployments quickly in our projects; these are very important skills. Then we can see the architecture here.
The left part is the front-end side, which we will talk about later. The middle is DevOps, or what we call CI/CD, for continuous integration and deployment. Under the DevOps part we see the back-end side, where we use Python combined with rosbridge, and we also have the local side. In the right part we have the physical robotic arm and an IP camera that records the environment to show on our virtual pages. Let's see the demonstration.
In the left part, we see the robotic arm in the virtual environment and how it works. In the middle part there are many control gears; each button has information about an axis's angles and positions, so we can control each axis with each button. On the top-right display we see the physical robotic arm through the IP camera; you can see that it's synchronized with the virtual one. Right under the display there are some action scripts that record each step. You control the gears in the middle, so you can record each action and repeat it many times. Beside the action scripts we see some messages. These messages tell the user whether the system is working well; if there are errors, a message will show here to alert people that there is a problem.
Okay, let's explain why ROS. The full name of ROS is Robot Operating System, but it's not really an operating system: it must be installed on a main operating system like Windows or Linux. So it's more like a kind of framework to control and manage robotic devices. It also provides modular units, like stacks, packages, and nodes. A package can contain many nodes, and the package is the basic unit of the Robot Operating System. A stack can contain many packages as well; a stack is more like a collection of packages. For example, when you use Anaconda with Python, you use a command to install Jupyter Notebook or TensorFlow. After you execute the command line conda install, it installs not just one package; many packages will be installed. A stack is that kind of collection of packages, providing this kind of function so users can easily install their services and functions.
ROS also provides communication mechanisms between ROS nodes: topics, services, and actionlib. A topic is similar to a channel: each node can subscribe to and listen on the same topic, and only the nodes that have subscribed to a topic will get its messages. ROS also provides services. A service is a kind of function a ROS node provides, and calling it blocks the current process. Non-blocking behavior is provided by actionlib: many actions can combine to provide different kinds of behavior for a service, and each action is non-blocking, executed in the background by the operating system.
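Since our front end talks to the ROS cluster through rosbridge, here is a minimal sketch of these two mechanisms using roslibjs, the library this project uses on the front-end side. The websocket URL, topic, service, and message names are illustrative, not the project's actual ones.

```typescript
import * as ROSLIB from 'roslib';

// Connect to the ROS cluster through rosbridge's websocket endpoint.
const ros = new ROSLIB.Ros({ url: 'ws://localhost:9090' });

// A topic works like a channel: every node that subscribed to it
// receives the messages published on it.
const chatter = new ROSLIB.Topic({
  ros,
  name: '/chatter',
  messageType: 'std_msgs/String',
});
chatter.subscribe((message: any) => {
  console.log('received:', message.data);
});

// A service is a request/response call handled by a single node.
const addTwoInts = new ROSLIB.Service({
  ros,
  name: '/add_two_ints',
  serviceType: 'example_interfaces/AddTwoInts',
});
addTwoInts.callService(new ROSLIB.ServiceRequest({ a: 1, b: 2 }), (result: any) => {
  console.log('sum:', result.sum);
});
```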
Okay, so let's talk about Three.js. Three.js is a set of useful APIs developed on top of the WebGL specification. It encapsulates and simplifies WebGL well, so users can easily build their 3D environment on the front-end side.
So let's see: on the front-end side, we used the Figma tool to design our mockup. You can see how we designed the mockup through Figma, and also with artificial intelligence, generating images automatically from a prompt message. As you see in the center of the picture, we put a lot of keywords in the prompt box. After we click the generate button, we get this kind of picture to help us design our mockup, to decide our main idea and to decide our main colors. It's really helpful to use generative artificial intelligence.
After many rounds of this kind of prompting with generative AI, we finally got this mockup with a cyberpunk style. I list many of the keywords on the left side of this page; you can see them here: neon lab background, high contrast, metal, and cool-color styles in cyberpunk. So yeah, it's a cyberpunk-style mockup, and you can see the whole mockup here, with the three parts of the user interface. You can see I have already used boxes to separate the three parts.
In the green box we see the information display. It contains the virtual 3D robotic arm here, the physical IP camera here, the coordinate axes here, and the error messages here. Inside the information display there is a small control panel; you see the red box. The red box shows the reset button here, the six axis-angle control gears here, and the motor movement speed here. The action recording is added here as well. The last part is the sidebar in the orange box. The orange box shows the logo and the device list; we can add a device or edit its name as well, and the user login is here.
We continue with the physical robotic arm. Here we see the physical robotic arm with its axes and controls. You see many pictures here, showing the physical robotic arm from different angles. It's a big one: almost half the height of an adult body. Each picture shows one axis, and overall there are six axes in this robotic arm.
The skill we use in the virtual environment is Three.js. In the Three.js fundamentals there are some components to know. The first is the scene: the scene is the virtual 3D stage where the cameras, objects, light sources, and all the models we present are placed. The second is the camera, which determines the position, perspective, and projection of the display. The next is objects, like the virtual robotic arm we see on the right side of the picture; we put these kinds of objects in the virtual environment. An object supports many operations, such as rotation, scaling, and translation, on shapes like cubes, spheres, and models. The next is light. The brightness and shadow effects of objects are determined by the position of the light sources, such as point lights, directional lights, and spotlights. You can see in the light picture that we put a directional light on the top to brighten the robotic arm, so its model shows the brightness and shadow effects. The last one is the renderer, which converts the 3D objects and lighting information into two dimensions. Actually, every 3D environment we see on a web page is in two dimensions: ultimately we have to convert all the 3D objects, including the lighting information, into two dimensions, like transforming the scene into images on the screen from our camera's perspective. We will expand on the details in later slides.
So let's see. First of all, we put a scene in our virtual environment; the scene is empty at the beginning. Okay, let's start putting a camera into the scene. In this case we use a perspective camera. It takes four parameters: the field of view, the aspect ratio, and the near and far clipping planes. In the picture, there's a camera on the left side pointing to the right; its angle and direction decide the field of view. How far the camera can see is decided by the two clipping planes: one is the near clipping plane, the other is the far clipping plane. Only the objects between the near clipping plane and the far clipping plane can be seen and displayed on our web page; we will talk about more details in later slides. Each plane's size is decided by the aspect ratio: the aspect ratio is the width divided by the height, and it decides the size of the clipping planes. The near clipping plane and the far clipping plane have the same ratio, and only objects between the two clipping planes will be visible on our screen.
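As a concrete version of this setup, here is a minimal Three.js sketch of an empty scene plus a perspective camera built from the four parameters just described; the specific values are illustrative.

```typescript
import * as THREE from 'three';

const scene = new THREE.Scene();

const fov = 75;                                        // vertical field of view, in degrees
const aspect = window.innerWidth / window.innerHeight; // width divided by height
const near = 0.1;                                      // near clipping plane
const far = 100;                                       // far clipping plane
const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);

// Pull the camera back so objects placed at the origin fall between
// the two clipping planes and are therefore visible.
camera.position.z = 5;
```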
The next thing I will explain is the objects between the near clipping plane and the far clipping plane. As I mentioned, only the objects between the two clipping planes can be seen on the screen. Here we see many blue cubes: only those blue cubes will be visible on our display, and the others, like the purple cubes outside the two clipping planes, we cannot see on our screen. It's very simple to create a cube in the scene with these functions. First you use BoxGeometry to create a new geometry object, and decide its MeshStandardMaterial by its color. Then we put the material and the geometry into a Mesh object to create the cube object, and we can use scene.add to put the cube into the scene.
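Written out, the steps just described look like this; it continues the earlier sketch (reusing its `scene`), and the blue color value is illustrative.

```typescript
const geometry = new THREE.BoxGeometry(1, 1, 1);                      // cube geometry
const material = new THREE.MeshStandardMaterial({ color: 0x3366ff }); // decide its color
const cube = new THREE.Mesh(geometry, material);                      // combine into a mesh
scene.add(cube);                                                      // put the cube into the scene
```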
Okay, let's continue and set a light in our scene. You can see in the picture that we put a directional light on top of those cubes, like we mentioned in a previous slide when we put a light on top of our robotic arm, so the light makes those cubes look good and we can clearly see their darkness and shadow.
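A small sketch of this lighting step, continuing the scene from above: a directional light placed above the cubes so their lit and shaded faces become visible. The color and intensity values are illustrative.

```typescript
const light = new THREE.DirectionalLight(0xffffff, 1); // white light, full intensity
light.position.set(0, 10, 0);                          // above the cubes, shining down
scene.add(light);
```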
Notice that the lighting effect only shows up for objects between the two clipping planes; the purple objects outside the two clipping planes are not affected by the light on screen. Then we use the render function to display the scene on the screen. As you can see, only the blue cubes will be displayed on the screen; the purple cubes outside the clipping planes won't be rendered. So yeah, the render function is what you use at the final stage.
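A minimal version of that final render step, assuming the `scene` and `camera` from the earlier sketches: the renderer projects the 3D scene to a 2D image on each frame.

```typescript
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement); // the <canvas> the image is drawn on

renderer.setAnimationLoop(() => {
  renderer.render(scene, camera); // objects outside the clipping planes are culled
});
```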
Yeah, and if we want to bring the purple cubes into the scene, we just move them back a little, past the near clipping plane, into the scope between the two clipping planes. As you see now, a purple cube is already in front of those blue cubes. Finally, you see that the purple cube is very close to the front of the camera, and behind the purple cube are all the blue cubes. So yeah, that's the perspective from the camera.
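Continuing the sketch, this is one way to show that effect: a cube placed behind the camera is outside the frustum and invisible, and moving it between the two clipping planes makes it render. The positions are illustrative.

```typescript
const purpleCube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0xaa44ff }),
);
purpleCube.position.z = 6; // behind the camera at z = 5: not rendered
scene.add(purpleCube);

purpleCube.position.z = 4; // just in front of the camera: now rendered,
                           // with the blue cubes at the origin behind it
```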
Then let's see how we integrate a robotic arm into the virtual environment. Matching the physical robotic arm, we created a model that contains six axes, with one base at the bottom. It's a very simple architecture, so we can easily build it up in the virtual environment and synchronize each axis with the physical one. Finally, you can see that you can control the virtual robotic arm, and it's very easy to synchronize it with the physical one.
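One common way to mirror a six-axis arm in Three.js, sketched below as an assumption rather than the project's exact model: nest one group per joint, so rotating joint i automatically carries joints i+1 through 6 along with it.

```typescript
const joints: THREE.Group[] = [];
let parent: THREE.Object3D = scene; // the base sits in the scene
for (let i = 0; i < 6; i++) {
  const joint = new THREE.Group();
  joint.position.y = 1;             // illustrative link length between joints
  parent.add(joint);
  joints.push(joint);
  parent = joint;                   // the next joint is a child of this one
}

// Apply an angle (in radians) to one axis; the child joints follow automatically.
function setAxisAngle(axis: number, angle: number): void {
  joints[axis].rotation.y = angle;
}
```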
Okay, so how do we achieve the synchronization of virtual and physical? You see some steps here on the left side. On the right side is the demonstration we saw before: the axis controls and action execution. We combine them to change the six axis angles, and those angle data are delivered to both the virtual and the physical robotic arms. So you can see it automatically synchronizes the virtual and the physical at the same time. It's based on the Robot Operating System message format: both sides are in the ROS cluster, so they can easily get the messages. And we maintain those messages and data in the cloud through the ROS cloud API.
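A hedged sketch of that synchronization idea: the control panel publishes the six axis angles on one ROS topic, and both the virtual arm and the bridge to the physical arm consume the same topic. The topic name and message type are hypothetical; `ros` is the rosbridge connection from the earlier roslib sketch.

```typescript
const jointTopic = new ROSLIB.Topic({
  ros,
  name: '/arm/joint_angles',
  messageType: 'std_msgs/Float64MultiArray',
});

function publishAngles(angles: number[]): void {
  jointTopic.publish(new ROSLIB.Message({ data: angles }));
}

publishAngles([0, 0.5, -0.3, 0, 1.2, 0]); // radians for the six axes
```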
Let's continue with another skill, WebRTC, which we use to send the IP camera feed to the web page. For the hardware, we use a TP-Link Tapo camera we bought; it's cheap and easy to get in our country. Using it is very simple: it speaks the RTSP protocol, which is a general protocol. We set the username and password, and the IP address is a public IP address; later I will tell you how we got the public IP address. The tool we use, the webrtc-streamer GitHub project, is open source, and someone has already created Docker files for it, so we can easily start this service. It supports the RTSP protocol and can also start an HTTP server, so you can easily connect to this server from another server as well, and it's compatible with Windows and Linux. It's a Docker image, easy to use in a containerized way. So you see, it's very simple to use: we didn't need to modify it too much before we could use it well.
Okay, let's see the programming side of WebRTC and how we use it in our project. There are two addresses we need to set up. One is the camera address, an RTSP URL published for outside services. The other is the WebRTC streamer address, a local address for our service, which connects to the physical IP camera. When the component starts up, we need the WebRTC streamer to start a server in our local environment, and then we use the connect function to publish the RTSP stream to outside services.
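A sketch of those two addresses using the html5 client shipped with the webrtc-streamer project. The element id, ports, credentials, and camera address are illustrative, and `WebRtcStreamer` is the global provided by that project's client script, not something defined in this talk.

```typescript
declare const WebRtcStreamer: any; // provided by webrtc-streamer's webrtcstreamer.js

const webrtcServerUrl = 'http://localhost:8000';           // local webrtc-streamer API server
const cameraUrl = 'rtsp://user:password@203.0.113.10:554'; // the camera's RTSP address

// On component startup: attach the streamer to a <video id="video"> element,
// then ask the server to pull the RTSP stream and relay it over WebRTC.
const streamer = new WebRtcStreamer('video', webrtcServerUrl);
streamer.connect(cameraUrl);

// On component teardown:
// streamer.disconnect();
```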
When we used WebRTC we encountered some issues, because the usage of webrtc-streamer is a little different between development and production. In the development stage we run the webrtc-streamer locally, so we don't care about the public IP address. But in production, our main digital twins system and the webrtc-streamer API server are both deployed on the cloud simultaneously, so we chose Docker Compose to solve this issue and make them run at the same time. Then we still encountered a bug appearing on the website during development. Finally we found that we had to change the UDP protocol to the TCP protocol for the connection between the IP camera and the WebRTC server.
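That fix can be expressed through webrtc-streamer's connect options string; this is my reading of that project's client API rather than something shown in the talk, and it forces the RTSP leg between the streamer and the camera onto TCP.

```typescript
// Third argument is the options string passed to the webrtc-streamer server.
streamer.connect(cameraUrl, undefined, 'rtptransport=tcp&timeout=60');
```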
In the next stage, we had to publish our IP address, because our IP camera has no external IP. So finally we chose port forwarding to solve this problem. You can see we use the socat command on Linux. We forward the traffic from the inside service of our IP camera and WebRTC server (we forwarded UDP, which is why UDP can work as well) and publish it to outside computers through the public IP address. It means other services can connect to this WebRTC service through the public IP address.
Okay, let's continue with the front-end side and how the Vue framework subscribes through rosbridge to listen to ROS messages. So let's talk about how the ROS cluster connects the virtual and physical environments through the rosbridge library. The roslib library defines a topic for each node, and then the ROS nodes can deliver messages to each other through the topic. On the front-end side, we use the subscribe function to subscribe to this topic and get the messages, and the Three.js components change their angles and positions depending on the messages delivered from the ROS cluster. You can see we get a lot of messages in the console log.
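A hedged sketch of that front-end side: subscribe to the angles topic through rosbridge and apply each message to the virtual arm. The topic name and message layout are hypothetical; `ros` and `setAxisAngle` come from the earlier sketches.

```typescript
const anglesTopic = new ROSLIB.Topic({
  ros,
  name: '/arm/joint_angles',
  messageType: 'std_msgs/Float64MultiArray',
});

anglesTopic.subscribe((message: any) => {
  console.log('angles:', message.data);     // the messages seen in the console log
  message.data.forEach((angle: number, axis: number) => {
    setAxisAngle(axis, angle);              // rotate the matching virtual joint
  });
});
```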
On the back-end side, in Python, we also need to subscribe to the topic in the ROS cluster. Here we see how the back end subscribes: we use create_subscription on the same topic, so it's the same topic on both the front-end side and the back-end side. In the listener callback, when we get a message from the ROS cluster, we send the message to the cloud ROS API; we save all the information and data in the cloud database, and we also send the angle and position information to the robotic arm. Later I will tell you more details about how we use Python connected with the C# language to control the physical robotic arm.
And on the back-end side, in Python, we also define the ROS cloud API. First of all, there's a create-device endpoint, so a user can use this device API to register a new device when they add a new device to the ROS cluster. They can also use their username and password to deliver the right certificate to the right device, to make sure they can control or manipulate each ROS node in the ROS cluster. Then they can perform actions through the ROS API to synchronize the virtual and physical environment devices.
We continue to introduce the ROS cloud API. Here we show how we interact with the GET method of the device API: it responds with a JSON object showing the device IP, name, brand, and other attributes. You can also see how we put a JSON object into our device action API: the action we define is "move", and we put in some data, like the angle and speed for each axis. When we execute the API, the physical and virtual robotic arms follow that information to change to the right angles and speed.
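Calling that device action API from the front end might look like the fetch sketch below; the endpoint path and payload shape are assumptions based on the slide, not the project's documented API.

```typescript
async function moveArm(deviceId: string, angles: number[], speed: number) {
  const res = await fetch(`/api/devices/${deviceId}/actions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ action: 'move', data: { angles, speed } }),
  });
  return res.json(); // both arms then follow the delivered angles and speed
}

moveArm('robot-arm-1', [0, 0.5, -0.3, 0, 1.2, 0], 30);
```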
Here we see how the Python code calls the ROS cloud API. We use a function to compare two angle lists, to make sure the angles we want to change to are actually different from the previous ones; if they differ, we call the API immediately. In the second part we see the device action API function we mentioned in the previous slide: we put in the action "move" and some angle data, to synchronize both the virtual and the physical robotic arm at the same time. Here we get the ROS message from the Python WebSocket into the C# library. To control the physical robotic arm we use a C# SDK that encapsulates the control functions for the physical device: we can change each axis, and we can also set its angles, positions, and speed with those functions.
In the end we use DevOps skills, with tools like Jenkins and Docker, to containerize our applications and make them easy to deploy. You can see how we use Jenkins and Docker tooling to deploy both on the cloud and on premises at the same time. You can see the script we execute stage by stage: after the image is created, we push the image to the AWS ECR service, and later we pull the image from there to deploy to AWS environments.
That's everything I wanted to present. So let me give you a summary. In this presentation you got the basic concepts of ROS and Three.js, learned how we integrate our system across the virtual and physical environments, and saw how we created a cloud-based interface that integrates the ROS cloud API we defined with the local IP webcam. You also know how we deploy conveniently with Docker and Jenkins: we dockerize all the applications and make them easy to deploy through Jenkins. I hope you got some knowledge from my presentation. If you still have any questions, just send me a message. Thank you.