Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi, welcome to this session on serverless architecture for product defect detection using computer vision. My name is Mohsin Khan. I'm a solutions architect at Amazon Web Services. Let's get started. In this session I'll be taking you through how computer vision can be applied to industrial use cases and why quality is such an important aspect to manage for manufacturing organizations. We'll have a look at Amazon Lookout for Vision, followed by a review of a solution architecture for product defect detection using computer vision and serverless services. And finally, there's going to be a solution demonstration and some resources that you can have a look at for further reading.
Computer vision can be utilized across the various stages of industrial processes. Now, manufacturing by its nature is a very complex and convoluted process, starting from production, down to assembling a product, packaging, logistics, and then storage of the products.
It can be applied to a variety of scenarios from asset
management, worker safety, quality assurance and process control.
In this session, I'll be focusing on the quality management aspect
of the industrial processes.
Within quality management, there are a number of use cases
that we can cater to with computer vision, such as automated quality inspection,
root cause analysis, reducing product defects,
and optimizing yield.
Let's first have a look at why quality is such an important
aspect for manufacturing or industrial organizations.
Now, quality impacts the cost of operations and the cost to customers, and many organizations have true quality-related costs as high as 15% to 20%, as reported by Aberdeen Strategy Research. Now, some customers even go as high as 40%. This is a huge fraction of their sales revenue.
So what sort of factors and metrics contribute to these costs?
Some of the top metrics include defect rework,
scrapping, customer returns, complaints and warranty and
corrective action processes. To give you a reference point, in 2018, US-based manufacturers were estimated to spend about $26 billion in total claims. So managing quality is a very important challenge to solve.
So what are organizations currently doing for quality assurance? For starters, there's a manual inspection process, both inline and offline. And though this process is agile and flexible, it lacks in throughput. It has a slower feedback loop, a matter of days, hours or minutes, and it's prone to human error, so the results can be subjective or incomplete as well. There are machine vision solutions available too, and although they're fast and repeatable, with a lower cost of inspection and a faster feedback loop, they have high upfront costs that limit coverage, and they're inflexible and may need purpose-built hardware or cameras to be able to cater to automated defect detection.
Now, this is where Amazon Lookout for Vision comes in. It's a machine learning service that allows you to detect or spot product defects in visual representations using computer vision.
Let's have a look at the different stacks that we have for
our machine learning services. Now at AWS, we are innovating
on behalf of our customers to deliver the broadest and deepest set of
machine learning capabilities so that builders of all levels
of experience can use them and remove the undifferentiated heavy lifting
that's required in building, training and deploying machine learning models.
Amazon Lookout for Vision lies within the specialized stack of AI services, so it's completely focused on industrial use cases and has been specially designed to cater to such scenarios and use cases.
Apart from Amazon Lookout for Vision, there are a number of other industrial AI services. There is Amazon Lookout for Equipment and Amazon Monitron, which help you with real-time condition monitoring. There's also AWS Panorama, which enables your standard IP cameras with computer vision. But we'll keep the focus on Amazon Lookout for Vision and the automated quality management use case.
Here, let's have a look at how computer vision can be done at scale and what challenges that presents. I'll focus on three main areas. To be able to use computer vision, you first have to have access to sufficient images of product defects. Once you have that, you'll need to spend a lot of time training, validating and testing machine learning models. Secondly, once you have a model, with a traditional on-premises setup you will have to deploy it on infrastructure that you'll have to manage, which presents challenges with scalability and compute requirements, as well as security. And finally, you will have limited support for improving your machine learning models at the plant or manufacturing facility.
So how does Amazon Lookout for Vision tackle these challenges? With Lookout for Vision, you can create a custom machine learning model with as few as 30 images, and importantly, without any sort of machine learning expertise. You can run your machine learning models in the cloud and even use low-resolution cameras for gathering your data and training a machine learning model.
Amazon Lookout for Vision can help you train machine learning models in diverse conditions as well. Finally, your process engineers, quality managers or operators can provide feedback on machine learning models in real time, and thereby it allows you to iteratively and continuously improve the performance of your models. And for all of this, you don't need any machine learning expertise, and you don't need to maintain any sort of servers or inference infrastructure, or deploy your model anywhere.
So what are some of the use cases that Lookout for Vision can tackle? Lookout for Vision can tackle use cases ranging from detecting surface defects to shape defects. It can also help you identify the absence, presence or misalignment of objects in an image. And it can help you uncover consistency issues, such as in a steel coil or a paper roll.
What is a typical customer journey for Lookout for Vision? We start with gathering the data set, or gathering the images, which is known as the image acquisition part. Once we have the images for our defective and normal products, we can upload them to Amazon S3, or we can import them into Amazon Lookout for Vision via its console. The images can be labeled, or put in pre-named folders called normal and anomaly. Once we have the images and the data set created, we can start model training. Once a model is trained, we can visualize its performance via the console dashboard, which provides us metrics like precision, recall and F1 score. And finally, once our model is trained and we are satisfied with its performance, we can integrate inferencing into our application via a simple API call. And once we get more and more data, we can work on iteratively improving the model through feedback.
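As a sketch, the API-call step could look like this with boto3, the AWS SDK for Python. The project name, model version and file name here are hypothetical placeholders; `detect_anomalies` is the actual Lookout for Vision operation.

```python
def summarize_result(response):
    """Reduce a DetectAnomalies response to a (label, confidence) pair."""
    result = response["DetectAnomalyResult"]
    label = "anomaly" if result["IsAnomalous"] else "normal"
    return label, result["Confidence"]

def detect(project, image_path, model_version="1"):
    """Send one JPEG image to a hosted Lookout for Vision model."""
    import boto3  # imported here so summarize_result stays usable offline
    client = boto3.client("lookoutvision")
    with open(image_path, "rb") as f:
        response = client.detect_anomalies(
            ProjectName=project,
            ModelVersion=model_version,
            ContentType="image/jpeg",
            Body=f.read(),
        )
    return summarize_result(response)

# Example (hypothetical project and file):
# detect("circuit-board", "pcb-001.jpg")
```

The parsing helper is separated from the API call so the decision logic can be exercised without AWS access.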
Lookout for Vision provides you a binary image classification inference result. And once you get that result, it allows a user to make a number of decisions, ranging from classifying or rating the product, binning the product, scrapping it, or maybe using the result for investigating a process and also for improving the machine learning model. So overall you are able to make your decisions with more accuracy and in less time.
Let's take a look at a quick demo.
So as I mentioned, we start with an image acquisition process. So first we have to gather images for our product. Here we have an image of a defective printed circuit board which has got scratches on its side. There's another image of a printed circuit board which has got bent pins here. And finally, there's another defect here with improper soldering. Once we
have gathered our images for the defects,
we can get started with creating a Lookout for Vision model. So we start by logging into the AWS console, going to the Lookout for Vision service, and we get started with the initial setup, which is creating an S3 bucket that is going to host our project's artifacts. Then we create a project, give it a project name, and then we move on to creating a data set.
To be able to set up a data set, we first create a folder on S3. In that folder, we can create a couple of folders called anomaly and normal. This will enable Lookout for Vision to infer the labels on the images automatically. Now that we have uploaded our images onto S3, we can copy the S3 URI for the bucket, go to the Lookout for Vision console and create a data set. We've got the option to create a single data set or a training and test data set. In this case, we go forward with a single data set. We import images from the S3 bucket by providing the S3 bucket URI, and we check on automatically attach labels to images based on the folder the images have already been placed into, namely the anomaly or normal folder, so Lookout for Vision can simply infer the labels by looking at the folder name.
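A minimal sketch of that folder-based labeling setup, assuming a hypothetical bucket layout with a `circuit-board/dataset` prefix: images uploaded under the `normal/` and `anomaly/` folders get their labels inferred automatically on import.

```python
from pathlib import Path

def dataset_key(label, filename, prefix="circuit-board/dataset"):
    """Build the S3 key that places an image in a pre-named label folder."""
    if label not in ("normal", "anomaly"):
        raise ValueError("auto-labeling expects a 'normal' or 'anomaly' folder")
    return f"{prefix}/{label}/{filename}"

def upload_folder(bucket, local_dir, label):
    """Upload every JPEG in local_dir under the matching label prefix."""
    import boto3
    s3 = boto3.client("s3")
    for path in Path(local_dir).glob("*.jpg"):
        s3.upload_file(str(path), bucket, dataset_key(label, path.name))
```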
Now that we have the data set up, let's move on to training a model. Training a model is as simple as just clicking the train model button. Once we click the train model button and proceed with the next steps, it's going to take some time, depending on the number of images we have in the data set. As soon as the model training is complete, we can have a look at its model performance metrics, like precision, recall and F1 score, and have a look at the test results as well.
Apart from this, once we have a model trained and running, we can do trial detections, which allow us to provide feedback so that we can integrate that feedback into a new model version. To be able to do a trial detection, we create a new task for trial detection. We select the right model and select the images against which we want to run inferencing. We choose the files, or the image files, that we want to test against our model. Once the images are uploaded into Lookout for Vision for the trial detection task, we're going to get some results.
Once a trial detection task is completed, we'll have a look at the results and we can provide feedback. So we get to know if the model classified those images as anomalous or normal, and we also get their associated confidence score. Now we can verify the machine predictions, so we can provide feedback on whether the inference results were correct or incorrect. And once we have done that, we can feed this back into the model, into the data set, and retrain our model. This allows us to improve our model iteratively over time.
Finally, let's see how we can work with the AWS CLI, or the command line interface. Now, to use the model, we first have to start it, and we can do that via the CLI or the SDK as well.
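For the SDK route, a rough boto3 sketch could look like this. The project name is a placeholder; `start_model`, `describe_model` and `stop_model` are actual Lookout for Vision operations, and a started model bills per inference unit until it is stopped.

```python
import time

def start_params(project, version="1", units=1):
    """Request parameters for StartModel; billing is per inference unit."""
    return {"ProjectName": project, "ModelVersion": version,
            "MinInferenceUnits": units}

def start_and_wait(project, version="1"):
    """Start the model and poll until it is HOSTED (or hosting fails)."""
    import boto3
    client = boto3.client("lookoutvision")
    client.start_model(**start_params(project, version))
    while True:
        desc = client.describe_model(ProjectName=project, ModelVersion=version)
        status = desc["ModelDescription"]["Status"]
        if status in ("HOSTED", "HOSTING_FAILED"):
            return status
        time.sleep(30)  # hosting typically takes a few minutes

def stop(project, version="1"):
    """Stop the hosted model to avoid ongoing inference-unit charges."""
    import boto3
    boto3.client("lookoutvision").stop_model(
        ProjectName=project, ModelVersion=version)
```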
So we start the model first via the command line. There are a number of API or command line interface commands that you can use. Once the model is started, we can run detect-anomalies by providing an image, and get an anomaly result along with the associated confidence score. Now that we have set up our model and have some context, let's move on to the solution architecture for product defect detection. So we start by
establishing our users or personas who are going to be
involved in the overall solution. So first up we've got the camera that
has got some compute capability or a client application
that's responsible for aggregating images, collecting them and
then sending them across to the cloud.
Then we have our data science or admin users, and their main responsibility is going to be managing the training of the Lookout for Vision model and managing its startup and shutdown. We have our business users, and these could be our C-level executives or VPs, who'd be mainly interested in gaining or visualizing insights from the manufacturing process or the defect detection process. And finally, our quality managers and operators would be mainly concerned about getting notifications and alerts whenever a defect is detected, so they can take appropriate actions.
At the heart of this solution is going to be Lookout for Vision, and we've already seen how we train a model. With Lookout for Vision, the data science or admin users can train the model via the console or via the CLI, and they can have a lightweight static website that allows them to start or shut down the model, so that they don't need to access the console directly.
Next, we move on to the image ingestion and storage part. Now here, once the camera or the client application has captured the image, it can invoke an API via Amazon API Gateway. The request can optionally be authorized via a custom authorizer Lambda function, and once it's authorized, it invokes a Lambda function which gets a signed URL from Amazon Simple Storage Service, or Amazon S3. Now, the signed URL is going to be returned back to the camera or the client, and it can then associate different metadata with the image, like a camera ID, an assembly line ID, an image ID, or a facility ID, et cetera, and then upload that image into S3.
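A sketch of the signed-URL step, assuming a hypothetical object-key layout that encodes the metadata just mentioned; `generate_presigned_url` is the standard boto3 call a Lambda function behind API Gateway could use.

```python
def image_key(facility, line, camera, image_id):
    """Object key encoding the metadata fields (hypothetical layout)."""
    return f"uploads/{facility}/{line}/{camera}/{image_id}.jpg"

def presigned_put(bucket, key, expires=300):
    """Return a time-limited URL the camera client can PUT the image to."""
    import boto3
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )

# The client would then upload the image bytes with an HTTP PUT to the URL.
```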
Once the image lands in S3, it's going to initiate an event notification which is going to start an AWS Step Functions workflow. Within the Step Functions workflow, there are going to be three different steps. Firstly, a Lambda function is going to invoke the DetectAnomalies API for Lookout for Vision. It's going to take the image from S3, send it to Lookout for Vision to get an inference result, and it's going to send the result to another Lambda function, which is going to store it in Amazon DynamoDB, which is going to be the persistent store for this solution. Amazon DynamoDB is a highly scalable, reliable NoSQL key-value store. Once the
record is added to DynamoDB, it's going to be sent across to a DynamoDB stream, and from there a Lambda function, which is going to be a stream reader, is going to take that record and then enrich it with additional data, or process the data to add or modify some values, and then send it to Amazon Kinesis Data Firehose to batch it up. Kinesis Data Firehose allows us to batch together a number of records and send them to S3, which is going to be a data lake for the inference results. From this data lake, business users can use Amazon QuickSight, which is a serverless business intelligence and visualization tool, to build dashboards and analyses to get answers to a number of business queries.
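The stream-reader Lambda function described above might look roughly like this. The delivery stream name and the enrichment field are assumptions; the DynamoDB Streams event shape and the Firehose `put_record_batch` call are standard.

```python
def plain_record(stream_image):
    """Flatten a DynamoDB Streams NewImage ({'S': ...}/{'N': ...}) to a dict."""
    out = {}
    for key, typed in stream_image.items():
        if "S" in typed:
            out[key] = typed["S"]
        elif "N" in typed:
            out[key] = float(typed["N"])  # stream numbers arrive as strings
        elif "BOOL" in typed:
            out[key] = typed["BOOL"]
    return out

def handler(event, context):
    """Stream-reader Lambda: enrich each record and batch it into Firehose."""
    import json, boto3
    firehose = boto3.client("firehose")
    records = []
    for rec in event["Records"]:
        item = plain_record(rec["dynamodb"]["NewImage"])
        item["source"] = "lookout-for-vision"  # example enrichment
        records.append({"Data": (json.dumps(item) + "\n").encode()})
    if records:
        firehose.put_record_batch(
            DeliveryStreamName="defect-results",  # hypothetical stream name
            Records=records,
        )
```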
Finally, the third Lambda function in our Step Functions workflow is used to send a notification via Amazon Simple Notification Service. This is going to publish a message to an SNS topic, which is going to send an email notification to the quality managers or operators who are subscribed to that SNS topic. We can replace emails with SMS, or you can also hook up your custom application with SNS via HTTP or HTTPS endpoints.
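A sketch of that notification step, with hypothetical field names mirroring the details the demo email shows (image ID, S3 location, timestamp, confidence); `sns.publish` is the standard boto3 call.

```python
import json

def defect_message(image_id, s3_uri, confidence, detected_at):
    """JSON notification body with the fields the demo email displays."""
    return json.dumps({
        "imageId": image_id,
        "s3Uri": s3_uri,
        "confidence": confidence,
        "detectedAt": detected_at,
    })

def notify(topic_arn, **fields):
    """Publish a defect alert to the SNS topic the operators subscribe to."""
    import boto3
    boto3.client("sns").publish(
        TopicArn=topic_arn,
        Subject="Defect detected",
        Message=defect_message(**fields),
    )
```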
And finally, from a monitoring and alerting standpoint, we use Amazon CloudWatch, which allows us to create alarms and dashboards, along with providing a single pane of glass for looking at the logs that are generated by our serverless application. So the various Lambda functions that we have here would generate some logs, and we can visualize them via CloudWatch. Importantly, we can also create alarms, so that if the number of defects exceeds a certain threshold, our quality managers or operators can be alerted accordingly.
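The defect-threshold alarm could be sketched like this; the metric name, namespace and threshold are assumptions, while `put_metric_alarm` is the standard CloudWatch API.

```python
def alarm_params(topic_arn, metric="DetectedAnomalyCount",
                 namespace="DefectDetection", threshold=10):
    """PutMetricAlarm parameters: alert when defects in a 5-minute window
    exceed the threshold (metric name and namespace are hypothetical)."""
    return {
        "AlarmName": f"{metric}-above-{threshold}",
        "Namespace": namespace,
        "MetricName": metric,
        "Statistic": "Sum",
        "Period": 300,
        "EvaluationPeriods": 1,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # reuse the operators' SNS topic
    }

def create_alarm(topic_arn, **overrides):
    import boto3
    boto3.client("cloudwatch").put_metric_alarm(
        **alarm_params(topic_arn, **overrides))
```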
This is how everything comes together. It's a completely serverless solution, and you can get started by deploying it with a one-click CloudFormation deployment. Apart from the visualizations that you need to create in QuickSight, everything else in the solution can be set up via a single CloudFormation deployment.
Let's move on to our demo. So in this demo, we're going to simulate the process of a camera taking images and then uploading them to Amazon S3, which triggers the Step Functions workflow that will detect whether each image is defective or normal. Based on that, it's going to show the results as well as send email notifications. And finally, we can have a visualization via Amazon QuickSight as well as Amazon CloudWatch dashboards.
So we start by logging into our management front end, which allows us to have a view of the different projects and the different models that have been created. For the purpose of the demo, the model has already been started. Let's take a look at the data set we used in our model. So this is for metal casting products. It's got about 6000-plus images that we've used for training the model. Moving on to the Lookout for Vision console, let's have a look at our data set. As I said, it's got 6000-plus images, and we've trained a model that's got very good model performance metrics.
And now we're going to simulate the process of uploading images via a simple Python script, which is also going to pass some additional metadata, like an assembly line ID or a camera ID. We've initiated the script and it's uploading the images into Amazon S3. And as these images are added and those event notifications are triggered, the images are being sent to Lookout for Vision for inferencing, and the results are being stored in DynamoDB. And we'll shortly get email notifications in our email inbox.
So here's the first notification. Let's wait for a few more. There we go. So we get email notifications. Looking at the notification itself, we see the image's ID, we see where it's stored in S3, the date and time for the inference result, and the associated confidence score that we get from Lookout for Vision. Now, as more and more images are being processed, these emails are going to pop up. Now, looking from a monitoring standpoint into the Amazon CloudWatch dashboard, we can see that as more images are processed, the processed image count is going to increase, and whenever a defect is detected, the detected anomaly count metric is also going to increase. And this dashboard is completely extensible, so you can add more widgets to it depending on your requirements.
Now moving on to the Lookout for Vision dashboard in QuickSight. So this is a custom dashboard that a user or a business user can create, which allows you to create a number of visualizations like bar charts and pie charts, so counts of records by assembly line or by anomalous versus non-anomalous. It can also allow you to create line charts for tracking the inference results over time, and also the distribution of your confidence scores, so what confidence is associated with your inference results: high, low, medium, et cetera. And you can also create heat maps. Now, this is just an example. It completely depends on your business case and scenario what types of visualizations you want to create. Now, once we're done with the inferencing, we can stop the model so that we don't incur any sort of additional cost.
I hope you enjoyed the demo. Here are a few more resources that you can have a look at. There's a blog post, Detect manufacturing defects in real time using Amazon Lookout for Vision, that you can have a look at. I also mentioned a solution earlier. The solution is available on GitHub, called Amazon Lookout for Vision serverless app, and it gives you a one-click way to deploy all the backend, or the serverless solution, into your AWS account. And there is a reference architecture as well that you can have a look at. Here's a quick view of this. And to add to this, Lookout for Vision now also supports edge inferencing. There's a reference architecture available for it that you can review for further information. So for use cases where you need low-latency inferencing, or you've got intermittent network connectivity, you can also train your Lookout for Vision model, deploy it on edge devices, and run inferencing from there.
There are a few more resources and blog posts that you can also review. Thank you so much for listening. I hope you enjoyed this session, and thank you so much, Conf42, for giving me the opportunity to speak here.