
How to Build a $100 Smart Robot Powered by ML.NET, IoT Libraries, and Raspberry Pi

Daniel Costea
Senior Software Developer @ EU Agency



PROGRAMMING

I know there are thousands of projects about building robots, but I'm talking about really simple robots with some moving parts and AI. We've seen them programmed with Python, C, and Java, but what about .NET?

It has been six years now since Microsoft launched .NET Core, giving us access to new platforms like Linux, macOS, and ARM. There are six million .NET developers around the globe, and most of them will be more than happy that they no longer have to deal with integration pain when combining machine learning, deep learning, and IoT functionality.

In this article, I will introduce you to an open-source, cross-platform, code-first technology stack from the .NET ecosystem to build a $100 budget robot.

For practical reasons, I chose Linux (Raspbian OS, a Debian distribution for Raspberry Pi) and a Raspberry Pi 3B+ for the hardware. .NET Core is supported on ARM starting with version 2.0, and ML.NET is supported on ARM starting with version 1.6. Not all machine learning and deep learning scenarios are supported by the ML.NET framework at this point, but this is work in progress and we are expecting more additions with each new version. TensorFlow, for example, seems to have support for Linux ARM64: 64-bit distributions can run predictions on TensorFlow models, but you need to compile the binaries from source code since they are not offered for download. So, long story short, we can do machine learning on ARM CPUs, but we have to rely on another platform with an x64 CPU for a little help in order to train and predict deep learning models.

I hope predicting TensorFlow models will soon be supported on ARM64, and later some training too, so we can implement a truly autonomous robot that doesn't have to rely on external help from compatible x64 platforms, and the diagram will get simplified as follows.

Until then, let’s refer to our first diagram.

Microsoft ML.NET

ML.NET is an open-source, cross-platform, code-first, on-premises framework developed by Microsoft and the .NET community. It is built upon .NET Core and .NET Standard (inheriting the ability to run on Linux, macOS, and Windows) and is designed as an extensible platform, so you can consume models created with other popular ML frameworks (like TensorFlow, ONNX, and CNTK).

.NET IoT Libraries

The .NET IoT libraries are recent NuGet packages developed by Microsoft and the .NET community which give you access to the GPIO layer. You can use a large variety of sensors (digital signals are supported by the Raspberry Pi by design, while analog sensors require an ADC converter), as well as lots of shields and HATs.

We can work either with the lower GPIO layer (sending digital signals to the pins) or with Iot.Device.Bindings, a collection of wrappers around various types of sensors, HATs, shields, displays, and more:

dotnet add package System.Device.Gpio
dotnet add package Iot.Device.Bindings

For the moving parts, such as the motors, tires, and motor controller, I bought a very cheap kit (it costs 18 EUR, https://camjam.me/?page_id=1035). The kit has its own software written in Python, but I completely ignored the software part. (Well, I took a peek to figure out how Python uses the motor shield, and I developed a few classes to work with it.)

Let’s get it started!

I don't have a degree in electronics, but having some previous experience working with analog sensors, my first intention was to add an analog-to-digital converter (ADC) like the MCP3008. Yet, surprise: I had problems installing the MCP3008 and the motor shield on the same Raspberry Pi, because they occupy some common pins, like the MISO and MOSI pins (they both use the SPI interface), and I didn't know how to make them work together. So, an easier alternative was to use digital sensors.

What is a digital sensor?

Raspberry Pi has a large array of GPIO pins which work with digital signals (on/off or high/low), but a binary value is not very rich in information compared to a range of values.

Most digital or hybrid sensors (with DO/AO, i.e. digital/analog output) are equipped with manual potentiometers to adjust the threshold of the digital output (which is a binary high/low value). As I said before, a binary value depending on a manually set threshold is not a very good option for something like an autonomous robot. I would rather have a range of values as output for the dataset I plan to use with machine learning.

So what other alternatives do we have?

What is an I2C sensor?

The I2C bus is a simple and flexible way to transfer digital data between two electronic devices, and it is the bus of choice for sensors from many vendors. In order to use I2C on Raspberry Pi, we have to activate the I2C bus by running the raspi-config tool in privileged mode.

sudo raspi-config

Raspberry Pi 3B+ has a pair of GPIO pins reserved for only one I2C bus (the SDA and SCL pins). Buses 0 and 2 are reserved. Bus 1 is set by default, and we plan to use it for illuminance; we also want to add buses 3 and 4 for infrared and temperature/humidity.

i2c bus 0 - reserved
i2c bus 1 - illuminance
i2c bus 2 - reserved
i2c bus 3 - infrared
i2c bus 4 - temperature/humidity

In order to use multiple I2C buses, I had to reconfigure the pins by adding the following lines to /boot/config.txt:

dtoverlay=i2c-gpio,bus=4,i2c_gpio_delay_us=1,i2c_gpio_sda=27,i2c_gpio_scl=22
dtoverlay=i2c-gpio,bus=3,i2c_gpio_delay_us=1,i2c_gpio_sda=25,i2c_gpio_scl=24

After a reboot, the new buses can be checked as follows:

sudo i2cdetect -l

You will now see that I2C buses 3 and 4 are also listed. To scan each new bus for connected devices, run:

sudo i2cdetect -y 3
sudo i2cdetect -y 4

Working with digital binary sensors using the GPIO layer is trivial (let’s assume we have a digital binary sensor connected to GPIO pin 23):

public double ReadInfrared()
{
    _gpioController.OpenPin(23, PinMode.Input);
    var infrared = _gpioController.Read(23);
    _gpioController.ClosePin(23);

    // the sensor pulls the pin low on detection, so invert the reading
    return infrared == PinValue.High ? 0 : 1;
}

Working with digital I2C sensors (using the Iot.Device.Bindings layer):

using System.Device.I2c;
using Iot.Device.Bh1750fvi;

I2cConnectionSettings settings = new I2cConnectionSettings(
    busId: 1, deviceAddress: (int)I2cAddress.AddPinLow);

I2cDevice device = I2cDevice.Create(settings);

using (Bh1750fvi sensor = new Bh1750fvi(device))
{
    // read illuminance (lux)
    double illuminance = sensor.Illuminance;
}

Let's see the I2C sensors used for this project.

Bh1750fvi - Luminosity Sensor

Sht3x - Temperature and Humidity Sensor

Mlx90614 - Infrared Sensor
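
For the other two I2C sensors, reading values through Iot.Device.Bindings follows the same pattern as the Bh1750fvi example above. A minimal sketch, assuming the Sht3x and Mlx90614 bindings on the extra buses we configured (note that the exact property types have changed between library versions, e.g. temperature readings moved to UnitsNet):

using System.Device.I2c;
using Iot.Device.Mlx90614;
using Iot.Device.Sht3x;

// temperature/humidity on the extra bus 4
using (var sht3x = new Sht3x(I2cDevice.Create(
    new I2cConnectionSettings(busId: 4, (int)I2cAddress.AddrLow))))
{
    double temperature = sht3x.Temperature.DegreesCelsius;
    double humidity = sht3x.Humidity; // relative humidity (%)
}

// contactless infrared temperature on the extra bus 3
using (var mlx90614 = new Mlx90614(I2cDevice.Create(
    new I2cConnectionSettings(busId: 3, Mlx90614.DefaultI2cAddress))))
{
    double objectTemperature = mlx90614.ReadObjectTemperature().DegreesCelsius;
}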

Controlling the motors

Controlling the motors is a piece of cake: we simply send commands to each motor, wait for as long as we need to maintain the state, and then bring everything to a full stop.

// spin both motors forward
_gpioCtrl.Write(_settings.RightBackwardPin, PinValue.Low);
_gpioCtrl.Write(_settings.RightForwardPin, PinValue.High);
_gpioCtrl.Write(_settings.LeftBackwardPin, PinValue.Low);
_gpioCtrl.Write(_settings.LeftForwardPin, PinValue.High);

// maintain the state for the requested duration
System.Threading.Thread.Sleep(milliseconds);

// full stop
_gpioCtrl.Write(_settings.LeftBackwardPin, PinValue.Low);
_gpioCtrl.Write(_settings.LeftForwardPin, PinValue.Low);
_gpioCtrl.Write(_settings.RightBackwardPin, PinValue.Low);
_gpioCtrl.Write(_settings.RightForwardPin, PinValue.Low);
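
Turning works with the same four pins: driving the two motors in opposite directions spins the robot in place. A sketch of a hypothetical helper method, reusing the _gpioCtrl and _settings fields from the snippet above:

// hypothetical helper: spin in place by driving the motors in opposite directions
public void TurnLeft(int milliseconds)
{
    _gpioCtrl.Write(_settings.LeftBackwardPin, PinValue.High);
    _gpioCtrl.Write(_settings.LeftForwardPin, PinValue.Low);
    _gpioCtrl.Write(_settings.RightBackwardPin, PinValue.Low);
    _gpioCtrl.Write(_settings.RightForwardPin, PinValue.High);

    System.Threading.Thread.Sleep(milliseconds);

    // full stop: drive the active pins low again
    _gpioCtrl.Write(_settings.LeftBackwardPin, PinValue.Low);
    _gpioCtrl.Write(_settings.RightForwardPin, PinValue.Low);
}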

Build a machine learning model

I'm not a data scientist, but to find the right trainer we can make use of Model Builder (or automated ML, AutoML).

Model Builder is able to determine the best trainer for a specific scenario by training models and measuring their quality over a given time interval.

This seems to make the training of machine learning models trivial, but don't get me wrong: I'm not saying a data scientist is no longer needed. What I'm saying is that you don't have to be a data scientist to get started. Most of the time you will get a decent working machine learning model, and its accuracy can be dramatically increased by data preparation.

The next picture shows the training pipeline and some instruments used to increase the quality of the machine learning model, like cross-validation, the confusion matrix, or permutation feature importance.

Before everything else, we need some data and we plan to read that data from the sensors.

Other sensors

We had the chance to see some of the sensors used for our robot; now let's look at the ultrasonic proximity sensor.

HC-SR04 - Ultrasonic Proximity Sensor
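
Reading it through Iot.Device.Bindings is just as simple. A sketch assuming the Hcsr04 binding and hypothetical wiring (trigger on GPIO 5, echo on GPIO 6); note that older versions of the binding expose a Distance property instead of TryGetDistance:

using System;
using Iot.Device.Hcsr04;
using UnitsNet;

// trigger on GPIO 5, echo on GPIO 6 (hypothetical wiring)
using (var sonar = new Hcsr04(5, 6))
{
    if (sonar.TryGetDistance(out Length distance))
    {
        Console.WriteLine($"Distance: {distance.Centimeters} cm");
    }
}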

The observations from the dataset are labeled, which means we have a target feature called "IsAlarm" for every observation.
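
To make the idea concrete, here is a sketch of what training a binary classifier on such a dataset can look like in ML.NET. The SensorReading class, the column layout, and the file name are hypothetical, and FastTree simply stands in for whichever trainer Model Builder selects:

using Microsoft.ML;
using Microsoft.ML.Data;

public class SensorReading
{
    [LoadColumn(0)] public float Luminosity { get; set; }
    [LoadColumn(1)] public float Temperature { get; set; }
    [LoadColumn(2)] public float Humidity { get; set; }
    [LoadColumn(3)] public float Infrared { get; set; }
    [LoadColumn(4)] public float Distance { get; set; }
    [LoadColumn(5)] public bool IsAlarm { get; set; }
}

var mlContext = new MLContext();

IDataView data = mlContext.Data.LoadFromTextFile<SensorReading>(
    "sensors.csv", hasHeader: true, separatorChar: ',');

var pipeline = mlContext.Transforms.Concatenate("Features",
        nameof(SensorReading.Luminosity),
        nameof(SensorReading.Temperature),
        nameof(SensorReading.Humidity),
        nameof(SensorReading.Infrared),
        nameof(SensorReading.Distance))
    .Append(mlContext.BinaryClassification.Trainers.FastTree(
        labelColumnName: nameof(SensorReading.IsAlarm)));

ITransformer model = pipeline.Fit(data);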

ML.NET support for ARM and Blazor

Starting with ML.NET version 1.6.0, we can train and consume models in Blazor and on ARM CPUs. All you have to do is add the following settings to the .csproj file.

<PropertyGroup>
  <TargetFramework>net6.0</TargetFramework>
  <EnableMLUnsupportedPlatformTargetCheck>false</EnableMLUnsupportedPlatformTargetCheck>
</PropertyGroup>

...

<ItemGroup>
  <PackageReference Include="Microsoft.ML" Version="1.6.0" />
  <PackageReference Include="Microsoft.ML.FastTree" Version="1.6.0" />
</ItemGroup>

Video Capturing

Another part of the AI used for our robot is computer vision, and with Iot.Device.Media this is a piece of cake.

using Iot.Device.Media;

var settings = new VideoConnectionSettings(
    busId: 0,
    captureSize: (width, height),
    pixelFormat: PixelFormat.JPEG);

var device = VideoDevice.Create(settings);
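
Capturing a still image is then a single call (the output path here is just an example):

// capture one JPEG frame and write it to disk
device.Capture("/home/pi/capture.jpg");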

If we read the sensors' data in a loop and print it to the console, we get a steady stream of observations for our dataset.
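
Something like this minimal sketch produces that stream; the Read* methods are hypothetical wrappers around the sensor code shown earlier:

// sample all the sensors once per second and print a CSV-like line
while (true)
{
    Console.WriteLine($"{DateTime.Now:HH:mm:ss}, " +
        $"{ReadLuminosity()}, {ReadTemperature()}, {ReadHumidity()}, " +
        $"{ReadInfrared()}, {ReadDistance()}");

    Thread.Sleep(1000);
}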

Inception model - Transfer Learning

Transfer learning is a machine learning method where a model developed for one task is reused as the starting point for another task. We can use the well-known Inception model, which is trained to detect 1,000 different classes. However, most of the time we don't need to classify our objects into those predefined classes; we would rather define our own classes (no more, no less) and retrain the model. This is where transfer learning kicks in: it keeps the feature extraction part but replaces the classification part at the end of the model.

var data = mlContext.Data.LoadFromTextFile<ImageNetData>(
    path: tsv, hasHeader: false);

var pipeline = mlContext.Transforms.Conversion.MapValueToKey(
        outputColumnName: LabelToKey,
        inputColumnName: nameof(ImageNetData.Label))
    .Append(mlContext.Transforms.LoadImages(
        outputColumnName: INPUT_LAYER,
        imageFolder: imagesFolder,
        inputColumnName: nameof(ImageNetData.ImagePath)))
    .Append(mlContext.Transforms.ResizeImages(
        outputColumnName: INPUT_LAYER,
        imageWidth: ImageNetSettings.imageWidth,
        imageHeight: ImageNetSettings.imageHeight,
        inputColumnName: INPUT_LAYER))
    .Append(mlContext.Transforms.ExtractPixels(
        outputColumnName: INPUT_LAYER,
        interleavePixelColors: ImageNetSettings.channelsLast,
        offsetImage: ImageNetSettings.mean))
    // score the images with the pre-trained Inception model
    .Append(mlContext.Model.LoadTensorFlowModel(inceptionModel)
        .ScoreTensorFlowModel(
            inputColumnNames: new[] { INPUT_LAYER },
            outputColumnNames: new[] { OUTPUT_LAYER },
            addBatchDimensionInput: true))
    // retrain only the classification part on our own classes
    .Append(mlContext.MulticlassClassification.Trainers.LbfgsMaximumEntropy(
        labelColumnName: LabelToKey,
        featureColumnName: OUTPUT_LAYER))
    .Append(mlContext.Transforms.Conversion.MapKeyToValue(
        PredictedLabelValue, PredictedLabel))
    .AppendCacheCheckpoint(mlContext);

ITransformer model = pipeline.Fit(data);

// ImageNetData/ImageNetPrediction are the input/output classes of the model
var predictionEngine = mlContext.Model
    .CreatePredictionEngine<ImageNetData, ImageNetPrediction>(model);

var trainData = model.Transform(data);
mlContext.Model.Save(model, trainData.Schema, modelLocation);
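
Consuming the retrained model is then one call per image. A sketch assuming an ImageNetPrediction output class with a PredictedLabelValue property, as in the standard ML.NET image classification sample:

// classify a single captured frame (hypothetical image path)
var prediction = predictionEngine.Predict(
    new ImageNetData { ImagePath = "capture.jpg" });

Console.WriteLine($"Predicted: {prediction.PredictedLabelValue}");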

Conclusion

Now we have a robot able to make decisions by processing the data from the sensors (using machine learning) and the images from the camera (using deep learning), and to act on those decisions by moving with its motors. For example, our robot is able to detect a fire hazard by reading the sensors and validating the danger with images from its camera.

Future development will include complete autonomy for our robot by moving the deep learning part onto the Raspberry Pi (as I mentioned at the beginning of this article).

References

  1. Apex Robot repository - https://github.com/dcostea/Apex.Robot

  2. .NET Core IoT repository - https://github.com/dotnet/iot/blob/main/Documentation/README.md

  3. Microsoft ML.NET repository - https://github.com/dotnet/machinelearning

  4. Model Builder - https://dotnet.microsoft.com/apps/machinelearning-ai/ml-dotnet/model-builder

5. Jon Wood's YouTube channel - https://www.youtube.com/channel/UCrDke-1ToEZOAPDfrPGNdQw

6. Rubik's Code blog - https://rubikscode.net/blog/
