Amazon Lookout for Vision Accelerator PoC Kit (APK) is a kit containing hardware and software that lets AWS customers experience Lookout for Vision together with Basler cameras.
The PoC accelerator kit, shown in the figure below, consists of a Basler ace color camera with a USB3 Vision interface, an NVIDIA Jetson device, lighting, and accessories, and it complies with the Lookout for Vision image capture best practices (resolution, image size).
Let’s go through the steps of acquiring an image, extracting the region of interest with image pre-processing, uploading training images to an S3 bucket, training a Lookout for Vision (LFV) model, and running inferences on test images.
We will capture training images of a normal printed circuit board and images of various anomalies such as scratches, bent pins, bad solder joints, and missing or damaged components. We will train the LFV model to detect these anomalies and finally run inferences on various test images of the printed circuit board.
If you are interested in or have any questions about the Amazon Lookout for Vision Accelerator PoC Kit (APK), please contact our Basler customer service: AWSBASLER@baslerweb.com
Step 0: Installation / setup of the kit.
After unboxing the kit, make sure you have all the main components: a Basler ace camera, a camera lens, a USB cable, a network cable, a Basler standard ring light with its power supply, the Jetson Nano board itself (in its housing) with its power supply, and a micro SD card.
Now:
When turning on the system for the first time, a monitor, keyboard, and mouse have to be attached. This is purely to accept the EULA from NVIDIA and to set up the location and the main user of the system. The initial boot of the system can take up to half an hour with just a blank screen and an hourglass mouse pointer, so please be patient. After the system has booted, the monitor, keyboard, and mouse may be disconnected (or used in the next step to find the IP address).
There are many ways to obtain the IP address of the board in the system. Which one you use depends on the environment your system is set up in.
Plug a keyboard and monitor into the board, log in with the user/password combination from the setup, and get the IP address. This may be done from a Linux terminal with the command
ip addr show eth0
or you can use a network scanner on the same network.
Once you have the IP address, go to a browser on a machine on the same network and enter the IP address. The kit’s web page should come up with a live stream from the camera. Now we can do the optical setup and start taking pictures.
The optical setup is where you try to get the best possible image of the object. With the best possible image you will get the best training and the best results from Lookout for Vision.
Whatever you build up at this stage, try to make it stable, as you may have to take many images from the same angle. Although books are often used to quickly prop things up on the table, consider aluminium profile building kits here.
Step 1: Image acquisition and pre-processing
With the browser running and showing the web page of the kit, choose the tab “Configuration”.
After a short while a live image from the camera will be shown.
First you have to set up the connection to your AWS account. Click on the blue button labelled “Create AWS Resources”.
(The “Create AWS Resources” button is blue at first and turns green once the connection to AWS is successful.)
A dialog will open and explain the next steps: when you click the blue “Create AWS Resources” button, you will be taken to the AWS console, where you will be asked to run the AWS CloudFormation script.
This will also allow you to change the region to be used and the S3 bucket name (if necessary). After any changes you might make, tick the IAM acknowledgement box and click “Create Stack”.
Wait for the stack to be created. Afterwards under the “Outputs” tab please copy the “DeviceCertUrl”, change tabs back to the kit and paste the value into the box provided.
Clicking OK will bring you back to the live image, and the setup is finished.
Place the camera some distance away from the object to be inspected, so that the object is fully in the camera live view, but fills up the view as much as possible.
Optical setup rule of thumb: if a human can’t see the anomaly in the image, then most likely the neural network will not either. The supplied lens has a minimum distance to the object of 100 mm, which is normally not a problem. If the object doesn’t fill the image at this distance, don’t worry: you can still cut out the background using the region-of-interest (ROI) tool described below.
Now check the focus. Either change the distance between the object and the lens and/or turn the focus ring on the lens (most likely a combination of both). If the live image appears too dark or too bright, adjust the “Gain” and “Exposure Time” sliders until it looks right.
Please remember that too much gain will cause more noise in the image, and a too long exposure time will cause motion blur if the object is moving.
When the object is focused and taking up a large part of the picture, use the ROI tool to reduce the amount of unnecessary “background information”.
This selects the relevant part of the image and reduces background information.
Click “Apply” to reconfigure the camera to concentrate on this region. This will be shown on the live view. Click again on “Select Region of Interest” to change it again if necessary.
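If you would like to experiment with the same camera parameters outside the kit’s web UI, the following minimal pypylon sketch shows how exposure, gain, and the ROI map onto the camera’s feature names for a Basler ace USB3 Vision camera. The concrete values are placeholders only, not recommendations, and in normal use the kit manages the camera for you.

# Minimal pypylon sketch (assumes the pypylon package and a connected Basler USB3 camera).
# All values are illustrative placeholders, not recommendations.
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Exposure and gain: equivalent to the "Exposure Time" and "Gain" sliders in the kit's UI.
camera.ExposureTime.SetValue(5000.0)   # microseconds; longer = brighter, but motion blur if the object moves
camera.Gain.SetValue(5.0)              # dB; higher = brighter, but more noise

# Region of interest: equivalent to the "Select Region of Interest" tool.
camera.OffsetX.SetValue(0)
camera.OffsetY.SetValue(0)
camera.Width.SetValue(800)
camera.Height.SetValue(600)

camera.Close()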
Step 2: Upload training images
Now choose the “Training” tab on the browser web page.
Make sure to choose from the pull-down menu (next to the trigger tab) whether the images you are about to take are to be used as “normal” or “anomaly” samples, and whether they belong to the training or the testing sample. Images will then be sent to the appropriate sample directory.
To start with, let’s collect our training sample: choose “Training: normal” and trigger some images of an object with no anomalies on it. Use the blue “Trigger” button to do this. Currently the kit only supports triggering via the Trigger button in the browser. The cameras may also be triggered by a hardware trigger connected directly to their I/O pins; see the documentation for more information (https://docs.baslerweb.com/aca1440-220uc#connector-pin-numbering-and-assignments).
After every trigger the image will be sent up to the S3 bucket. When enough pictures have been uploaded, click on the button to bring up the Lookout for Vision console in the browser. You can always see from the table under the Trigger button which was the last image sent up (as a thumbnail) and the number of images you have in every category of your samples.
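For reference, the upload the kit performs is equivalent to a plain S3 upload. The bucket name and the normal/anomaly, training/test folder layout in this sketch are illustrative assumptions; the actual bucket and prefixes are created by the kit’s CloudFormation stack.

# Minimal boto3 sketch: upload a captured image into a per-category S3 prefix.
# Bucket name and folder layout are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

BUCKET = "my-lfv-poc-bucket"           # hypothetical bucket name
category = "training/normal"           # or "training/anomaly", "test/normal", "test/anomaly"
local_file = "capture_0001.png"

s3.upload_file(local_file, BUCKET, f"{category}/{local_file}")
print(f"Uploaded {local_file} to s3://{BUCKET}/{category}/{local_file}")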
How many are enough? This depends on how robust you want your classification to be. With enough images and variations you will (almost) always get a correct answer, but it takes more time to generate the images. Don’t forget: if you want your neural network to be insensitive to lighting variations or rotations of the object, then you have to provide training images with those lighting variations and/or rotations; otherwise such images may be classified as bad. As a starting point, take around 20 images; you can always add more later and retrain.
Now find objects with the anomalies you wish to detect and repeat the above process with the pull-down menu set to “Training: anomaly”. Again, maybe start with 10 images. When you think you have enough images of both types (normal and anomaly) for a first test, click on the “Lookout for Vision” button to be taken to the AWS Lookout for Vision console. If you have time and wish to check the quality of the training, remember to repeat the process and collect a testing sample using “Test: normal” and “Test: anomaly” from the pull-down menu.
Step 3: Train the LFV model
When you have your 20 normal images and 10 anomaly images click on the “Add to Lookout for Vision” button.
This will produce a pop up dialog telling you where your images are stored. Click “Create Dataset in Lookout for Vision” to start the training project.
In the diagram, the links to the S3 directories where the images are stored are highlighted in red boxes. Please remember this for later on, when you need to copy these links. Choose the “Create a single dataset” option.
Choose “Import images from S3 bucket”
Copy the URI of the S3 images directory into the S3 URI field. Check the box “Automatically attach labels to images based on the folder name”. This will import the images into the dataset with the correct labels. (Remember, you can jump back to the kit’s pop-up dialog to copy the URI.) Now click on “Create dataset”.
Click on the “Train model” button to start training.
The model reports precision, recall, and F1 scores. Precision is the fraction of predicted anomalies that are actually anomalies; recall is the fraction of actual anomalies that the model detects; the F1 score is the harmonic mean of precision and recall. Click on “Models” and the status will indicate “Training in progress”, changing to “Training complete” once the model is trained. Click on Model 1 to see the model performance.
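Everything above can be done in the console, but if you prefer scripting, a minimal boto3 sketch of the training step could look like the following. The project name and output location are illustrative assumptions.

# Minimal boto3 sketch: start training and poll until the model is ready.
# Project name and output bucket/prefix are illustrative assumptions.
import time
import boto3

lfv = boto3.client("lookoutvision")
PROJECT = "pcb-inspection"   # hypothetical project name

response = lfv.create_model(
    ProjectName=PROJECT,
    OutputConfig={"S3Location": {"Bucket": "my-lfv-poc-bucket", "Prefix": "model-output/"}},
)
version = response["ModelMetadata"]["ModelVersion"]

# Poll the training status; training typically takes a while.
while True:
    desc = lfv.describe_model(ProjectName=PROJECT, ModelVersion=version)["ModelDescription"]
    if desc["Status"] in ("TRAINED", "TRAINING_FAILED"):
        break
    time.sleep(60)

# Once trained, Performance holds the Precision, Recall, and F1Score shown in the console.
print(desc["Status"], desc.get("Performance"))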
Step 4: Run inferences on test images
Now go back to the kit web page in the browser and choose the “Inference” tab. Then click the “Start the model” button.
Choose from the pull-down menu which network you wish to use and which version of it (In the diagram you can choose from two versions of the model which have been trained).
Place a new/unique object that the model has not seen before in front of the camera, and press the trigger button in the browser web page of the kit. IMPORTANT: Please ensure the object pose and lighting are similar to the training pose and lighting. This is important to prevent the model from reporting a false anomaly due to lighting or pose changes.
Inference results for the current image are shown in the browser window. Repeat this exercise with new objects and test your model performance on different anomaly types.
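Behind the scenes, this corresponds to the Lookout for Vision DetectAnomalies API. The following minimal boto3 sketch runs the same inference on a local image file; the project name, model version, and file name are illustrative assumptions.

# Minimal boto3 sketch: run inference on a local test image with a hosted model.
# Project name, model version, and image path are illustrative assumptions.
import boto3

lfv = boto3.client("lookoutvision")
PROJECT = "pcb-inspection"   # hypothetical project name
VERSION = "1"

with open("test_capture.png", "rb") as image:
    result = lfv.detect_anomalies(
        ProjectName=PROJECT,
        ModelVersion=VERSION,
        Body=image.read(),
        ContentType="image/png",
    )["DetectAnomalyResult"]

print("Anomalous:", result["IsAnomalous"], "Confidence:", result["Confidence"])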
Cumulative inference results are available in the Amazon Lookout for Vision console under “Dashboard”.
In most cases, you can expect to execute the steps in a few hours and get a quick assessment of your use case fit by running inferences on unseen test images, and correlating the inference results with the model precision, recall, and F1 scores.
Don’t forget to stop hosting your model (to save costs) after you have finished your inference (go back to the “Start the model” button and click on “stop hosting” in the dialog).
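If you work from a script rather than the kit’s dialog, the equivalent boto3 call is shown below (project name and model version again being illustrative assumptions).

# Minimal boto3 sketch: stop hosting the model to avoid ongoing charges.
# Project name and model version are illustrative assumptions.
import boto3

lfv = boto3.client("lookoutvision")
lfv.stop_model(ProjectName="pcb-inspection", ModelVersion="1")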
Step 5: Improve model performance
What happens when the results are not good enough? Perhaps some objects are badly classified. First, try to reason logically about the conditions under which the falsely classified images were taken. Was the lighting slightly different? Was the orientation of the object different?
For instance, if you are working without the lamp, even different times of day change the lighting due to sunlight coming through the windows. Here you have two options to solve this:
Either control the lighting (close curtains and use the provided ring light)
or take many more “normal” and “anomaly” training images in different lighting situations.
Also watch out for shadows. If the direction of the lighting changes, so do the shadows and this can affect the classification. Again with the necessary images you can train this away. If instead you wish to understand more about the complex topic of lighting please go to: https://www.baslerweb.com/en/vision-campus/vision-systems-and-components/how-to-find-the-right-lighting-for-your-vision-system/
Always remember the optical setup rule of thumb: if you can’t see the anomaly in the image - chances are the model won’t either.
Troubleshooting
If the software does not start after a power outage, please login to the board (via ssh or directly with keyboard and monitor) and type:
docker system prune -a
and then restart the board.
Important hint
The training and testing images all need to have the same dimensions; otherwise training will fail!
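A quick way to check this before uploading is to compare the dimensions of all images, for example with Pillow; the folder path below is an illustrative assumption.

# Minimal Pillow sketch: verify all images in a folder share the same dimensions.
# The folder path is an illustrative assumption.
from pathlib import Path
from PIL import Image

sizes = {}
for path in Path("captures").glob("*.png"):
    with Image.open(path) as img:
        sizes.setdefault(img.size, []).append(path.name)

if len(sizes) > 1:
    print("Warning: mixed image dimensions found:", {s: len(f) for s, f in sizes.items()})
else:
    print("All images share the same dimensions:", next(iter(sizes), None))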