New Generation Computer Vision: AWS DeepLens

How to deploy an object detection model using AWS DeepLens

AWS DeepLens is a programmable video camera that enables developers to start practicing deep learning techniques in a short amount of time. Inside, AWS DeepLens uses an Intel Atom® processor as its CPU and an Intel Gen9 graphics engine as its GPU, and it runs Ubuntu 16.04 LTS as its operating system. In addition, it has a 4 MP (1080p) camera, 8 GB of RAM, 16 GB of expandable storage, dual-band Wi-Fi, and support for the Intel® Movidius™ Neural Compute Stick and the Intel® RealSense™ depth sensor. The figure below depicts its dimensions, audio output, micro HDMI display port, and USB ports.

[Figure: AWS DeepLens dimensions and ports]

Currently, it can only be shipped to these eight countries: the US, the UK, Germany, France, Spain, Italy, Canada, and Japan. If you are outside of these eight countries, you might consider ordering it to one of them and picking it up there.

Three different types of models can be deployed to AWS DeepLens. In this post, we will work with the pre-trained object detection model.

1. Pre-Trained Model

This option enables users to deploy a pre-trained model to their devices. It can be selected by following the path Projects > Create project.

[Figure: Pre-trained model import project page]

2. Amazon SageMaker trained model

With this model type, you create and train your models in Amazon SageMaker, provide the “Job ID”, “Model name”, and “Model framework”, and then click the “Import” button.

[Figure: Amazon SageMaker trained model selection page]

To deploy your models to your device with this model type, you need the Amazon SageMaker service to open up a SageMaker notebook instance as a code editor.
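
If you prefer to create the notebook instance from code rather than through the console, below is a minimal boto3 sketch; the region, instance name, and IAM role ARN are placeholders you would replace with your own values.

    import boto3

    # SageMaker client; pick a region where the service is available for you
    sagemaker = boto3.client("sagemaker", region_name="eu-central-1")

    # Create a small notebook instance to use as a code editor.
    # The instance name and IAM role ARN below are placeholders.
    sagemaker.create_notebook_instance(
        NotebookInstanceName="deeplens-notebook",
        InstanceType="ml.t2.medium",   # matches the Free Tier hours listed below
        RoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
    )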

If you are new to AWS services and have never used Amazon SageMaker before, AWS offers a Free Tier. With it, you can use Amazon SageMaker free of charge for the first two months. During this period, the following monthly usage is free:

  • Building models: 250 hours of t2.medium or t3.medium notebook usage
  • Training models: 50 hours of m4.xlarge or m5.xlarge for training
  • Deploying models (real-time & batch transform): 125 hours of m4.xlarge or m5.xlarge

Note that your free tier period begins in the month in which you first create a SageMaker resource.

3. Externally Trained Model

By choosing this type of model, it is expected that you have already trained your model outside of the AWS environment and uploaded it to an Amazon S3 bucket. To import your model into DeepLens, fill in the “Model artifact path”, “Model name”, and “Model framework” fields, and then click the “Import” button.

[Figure: Externally trained model selection page]
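
As an illustration of the upload step, here is a minimal Python sketch using boto3; the local directory, bucket name, and object key are hypothetical.

    import tarfile
    import boto3

    # Package the trained model artifacts as a gzipped tarball;
    # "model" is a hypothetical local directory holding the artifacts
    with tarfile.open("model.tar.gz", "w:gz") as tar:
        tar.add("model")

    # Upload the archive to S3. The bucket name is a placeholder; note that
    # the default DeepLens service role reads buckets whose names start
    # with "deeplens".
    boto3.client("s3").upload_file(
        "model.tar.gz", "deeplens-my-models", "object-detection/model.tar.gz"
    )

    # The "Model artifact path" to enter in the console would then be:
    # s3://deeplens-my-models/object-detection/model.tar.gz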

AWS DeepLens

Before you start using any service, the necessary permissions must be set up, as described in the link, so that you can use the services properly. The first service we will use is AWS DeepLens. To use it, your region must be one of the following:

  • Europe (Frankfurt): eu-central-1
  • US East (N. Virginia): us-east-1
  • Asia Pacific (Tokyo): ap-northeast-1
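
The same constraint applies if you script against these services from code; a small boto3 sketch that pins one of the supported regions explicitly:

    import boto3

    # Pin one of the DeepLens-supported regions listed above so that every
    # client created from this session talks to the same region
    session = boto3.session.Session(region_name="eu-central-1")
    sagemaker = session.client("sagemaker")   # inherits eu-central-1
    iot = session.client("iot")               # used later for the MQTT output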

After setting up the development environment policies and region, you can quickly find any service in the AWS Management Console by typing its name into the search box under the “Find Services” heading, as shown below.

[Figure: AWS Management Console]

This page includes basic information about the service. For more technical detail, you can visit “Documentation” under the “More Resources” tab.

[Figure: AWS DeepLens introductory page]

Device Registration

When the product is unboxed, the first step is to register your device with the AWS DeepLens service.

[Figure: Register Device introductory screen]

On this screen, click the “Register device” button, then select your hardware version and click the “Start” button.

As a first step, connect the device to a power source using its adapter and turn it on with the power button. When the device is on, the power LED turns blue.

You can then connect your PC to the device by plugging a USB cable into the device’s “Registration” port.

When you have successfully registered your AWS DeepLens, you will see your device with a “Registered” status under the Resources > Devices tab on the left-hand side of the page.

[Figure: Devices tab]

Deploying a Pre-Trained Model

In the “Projects” section, click the “Create new project” button at the top right to see the project types.

[Figure: Create new project]

In this step, one of the pre-populated project templates needs to be chosen. Select “Use a project template” as the project type, choose “Object detection” from the list, scroll down, and click “Create”.

[Figure: Choosing a project template]

On the “Specify project details” page, accept the default values in the project name and description fields.

[Figure: Specify project details section]

At the bottom of the same page, you will see the Project content settings. Accept the default values for both Model and Function, and click “Create” to continue.

[Figure: Project content section]
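
Accepting the default Function deploys AWS’s sample object detection Lambda to the device. For orientation, here is a heavily condensed sketch of the structure such an on-device inference function typically has; the model path, topic, label map, and confidence threshold are illustrative placeholders, not the exact sample code.

    import json
    import cv2                # used to resize frames to the model's input size
    import awscam             # DeepLens on-device inference library
    import greengrasssdk      # publishes results to AWS IoT

    client = greengrasssdk.client("iot-data")
    # Placeholder: copy the real topic from the DeepLens console
    IOT_TOPIC = "$aws/things/deeplens_<device-id>/infer"

    # Load the optimized model artifact that the deployment placed on the
    # device (the file name below is a placeholder)
    model = awscam.Model("/opt/awscam/artifacts/<model-name>.xml", {"GPU": 1})
    INPUT_SIZE = 300          # the sample SSD object detector expects 300x300 input

    # Truncated, illustrative mapping from numeric class labels to names
    OUTPUT_MAP = {7: "car", 15: "person"}

    while True:
        ret, frame = awscam.getLastFrame()       # grab the latest camera frame
        if not ret:
            continue
        resized = cv2.resize(frame, (INPUT_SIZE, INPUT_SIZE))
        raw = model.doInference(resized)         # run the object detector
        detections = model.parseResult("ssd", raw)["ssd"]
        # Keep confident detections and publish them as the JSON seen in IoT Core
        results = {OUTPUT_MAP.get(int(d["label"]), "unknown"): d["prob"]
                   for d in detections if d["prob"] > 0.5}
        client.publish(topic=IOT_TOPIC, payload=json.dumps(results))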

In this step, you will deploy the object detection project to your device. The project you just created should be listed in the “Projects” section. Select its radio button and choose “Deploy to device” at the top right.

[Figure: Projects page]

On the “Target device” page, choose your device and click the “Review” button.

[Figure: Target device page]

There will be an additional page with the details of your deployment, including “Type”, “Lambda”, and “Model”. After you check them carefully, click the “Deploy” button to continue.

[Figure: Review and deploy model]

When you click “Deploy”, your model is copied to your AWS DeepLens, and the console shows the download progress as a percentage.

[Figure: Deployment to the device in progress]

After the deployment, on the “Devices” tab, click “View Output” and select your browser to import the corresponding streaming certificate.

[Figure: Browser selection for the streaming certificate]

Model Output

There are two different ways to view the model output. They are listed below and explained in separate sections.

  • JSON-formatted output published to an MQTT topic
  • Project stream

1. AWS IoT Core — MQTT Topic Value

After you have successfully imported your certificate, your browser will ask you to choose the appropriate certificate via a pop-up screen.

If you would like JSON-formatted output, “Copy” the uniquely generated topic and click “AWS IoT console” to open the AWS IoT Core service.

[Figure: JSON-formatted output]

After copying the topic, which has the format “$aws/things/deeplens_/infer”, paste it under “Subscribe to topic” and click the “Subscribe to topic” button.

[Figure: AWS IoT Core MQTT client]

Once subscribed, the JSON-formatted outputs start to appear on the page. If you would like to stop receiving them, select “Pause” at the top right.

[Figure: AWS IoT Core after subscribing to the topic]
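
Instead of watching the console, you can also subscribe to the same topic from your own machine. Below is a minimal sketch using the AWS IoT Device SDK for Python; it assumes you have created an AWS IoT certificate, downloaded its credential files, and looked up your account’s IoT endpoint, so the endpoint, file names, and topic are all placeholders.

    import time
    from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

    client = AWSIoTMQTTClient("deeplens-output-viewer")
    # Placeholders: your account's IoT endpoint and downloaded credential files
    client.configureEndpoint("<prefix>-ats.iot.eu-central-1.amazonaws.com", 8883)
    client.configureCredentials("root-CA.crt", "private.pem.key", "certificate.pem.crt")

    def on_inference(client_, userdata, message):
        # Each payload is a JSON object of detected labels and probabilities,
        # e.g. {"person": 0.82, "car": 0.57}
        print(message.topic, message.payload.decode())

    client.connect()
    # Use the exact topic copied from the DeepLens console
    client.subscribe("$aws/things/deeplens_<device-id>/infer", 1, on_inference)

    while True:               # keep the process alive to receive messages
        time.sleep(1)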

2. Project Stream

After the certificate is imported into our browser, we can click “View video stream” under the “Video streaming” section, which opens a new tab at the device’s local IP address on port 4000 (in our case, “192.168.1.47:4000”).

[Figure: Video streaming section]

When the stream is enabled at the specified address, we see two different tabs. The first tab, “Project stream”, shows the stream with our object detection model applied. On this stream, blue frames are drawn around the detected objects, and above each frame the detected object’s name is shown together with its likelihood percentage. Not all of the objects in view can be recognized, since the model was trained on a limited set of object classes. If we would like to detect more objects than the pre-trained object detection model supports, we need to import an externally trained custom model.

[Figure: Project stream output]

The second tab is called “Live Stream”. When we select it, we can view the raw camera stream, which shows frames faster than “Project stream” since no model is being applied to the objects.

[Figure: Live stream output]