Chapter 12. Artificial Intelligence: BatBot

Have you ever wondered if robots will take over the world? If so, you’re not alone. Hollywood and the media have done an excellent job of trying to convince us of an impending robot revolution.

To complete such a mission, robots would need to be smart enough to work together and develop a plan to take over the world. To accomplish this, however, they would need some human-enabled intelligence, also known as artificial intelligence. Fortunately, we are a long way off from enabling robots to work together at such a large (and certainly scary!) scale, but artificial intelligence is still very useful for all sorts of applications, and that is why we’re going to spend this chapter playing with it!

Figure 12-1. BatBot

In this chapter, I’m going to talk about artificial intelligence (AI) and three categories of artificially intelligent robots (remote-controlled, semi-autonomous, and fully autonomous). And then you’re going to build BatBot, as shown in Figure 12-1, a semi-autonomous robot.

Artificial Intelligence: The Basics

As humans, our ability to make decisions and act on them is what has propelled us to the top of the intellectual food chain. Our abilities to reason, rationalize, and remember give us a leg up on other species that can’t do these things quite as well.

When we talk about artificial intelligence, all we really mean is using algorithms to simulate decision making. An artificially intelligent being (i.e., a robot) relies entirely on these algorithms to know what to do in certain situations. In most circumstances, robots don’t learn like humans do; they have to be taught absolutely everything.

Machine learning is the one exception to this rule. To be clear, however, even machine learning has its limitations. Machine learning is simply a series of increasingly complex algorithms (like Haar cascades or neural networks) that can be used to “teach” a robot how to discern different pieces of its environment, to which it can then apply other decision-making models.

I like to group robots into one of three categories, ranging from least to most artificially intelligent: remote-controlled, semi-autonomous, and autonomous.

Bill of Materials

Table 12-1. Bill of materials for the sonar sensor array

Count  Part                                      Source                              Estimated price
1      MaxBotix Ultrasonic Rangefinder LV-EZ2    AF 980; SF SEN-08503                $25
1      Generic high-torque standard servo        SF ROB-11965; AF 155                $12
1      100 ohm resistor                          Electronics retailers               $10 for a variety pack
1      100 uF capacitor                          Electronics retailers               $1.50
       Headers                                   Online electronics retailers        $2 for a variety pack
       Jumper wires                              MS MKSEEED3; SF PRT-11026; AF 758   $2 for a variety pack
1      XBee wireless kit                         SF KIT-13197                        $100
       Glue gun and glue sticks                  Online retailer                     $10

Table 12-2. Bill of materials for the chassis

Count  Item                                         Source                          Estimated price
1      Arduino Uno                                  MS MKSP99; AF 50; SF DEV-11021  $25
1      BOE Bot Robotics Shield Kit for Arduino Uno  MS MKPX20; SF ROB-11494         $130
5      AA batteries                                 Online retailer                 $5
1      PS3 or PS4 DualShock controller              Online retailer                 $45

Table 12-3. Additional (optional) materials

Item                                                              Source                     Estimated price
Bat wings, cut out of felt or similar                             Local craft store          Varies
A ridiculously large paper bag, like a lawn paper bag for
disposing of autumn leaves, with most of the sides cut down       Your local hardware store  $3 for a pack of five
Decorative accessories (e.g., googly eyes, a feather boa, etc.)   Local craft store          Varies

Assembly

Let’s start building!

  1. First, assemble your chassis according to the manufacturer’s instructions. We won’t be making any special modifications to the chassis except adding to it, so building it should be fairly straightforward.

  2. Solder some headers to the sensor (Figure 12-2), so that you can easily fiddle with your connections. The most important connections for this project are GND (ground), +5 (voltage in), and AN (analog output signal).

    Figure 12-2. Ultrasonic sensor with headers before soldering
  3. Attach your sonar to a servo horn using hot glue (Figure 12-3): it’s easy to take apart if you mess up, but holds really well. As an added bonus, it won’t damage any of your components, and your bot shouldn’t get hot enough at any point to risk the glue melting again.

    Figure 12-3. Ultrasonic sensor with servo horn, attached with hot glue
  4. Attach your standard servo to the front of your bot using hot glue, as shown in Figure 12-4. Be sure to attach the servo in such a way as to ensure that the sonar will rotate left to right, pointing ahead of the robot.

    Figure 12-4. Servo attached to batbot
  5. Wire up the sensor array according to Figures 12-5 and 12-6, which will also require the 100 ohm resistor and the 100 uF capacitor.

    Figure 12-5. Fritzing diagram of sonar sensor array
Figure 12-6. Schematic diagram of sonar sensor array
  6. Once you’ve attached the sonar, now is a great time to add some finishing touches. Add a pair of wings, flame stickers, or whatever suits your fancy. Just make sure nothing gets in the way of the wheels or the sonar; you want to make sure the robot can still take readings and move freely so it can finish its task!

Now that you have all of the major bits in place, let’s get to the meat of this project: artificial intelligence!

Step 1: Remote Control

Before you can get to the really awesome fun part of making BatBot find its way out of a paper bag, you’re going to need to figure out how to talk to BatBot:

  1. Ensure you have the latest stable version of Node.js and npm installed on your computer. If you still need help with installing Node.js and need a primer on how to use npm to install modules, see the appendix.

  2. Get the code for BatBot, located in the batbot/ folder in the Make: JavaScript Robots repository on GitHub.

  3. On your local copy of the code, find your way into the batbot/ directory and run npm install to install all of the packages listed in the package.json file. I’ll introduce each module as you need them. The first and most important one is johnny-five, which allows you to send commands to the Arduino and thus move the servos and read from the sensor.

Moving the Robot

Now that you have a robot and your software environment is set up, the next major task is to get BatBot moving around under your direction. From there, you can move on to encouraging BatBot to drive itself.

Let’s take a closer look at the chassis.

Notice that the BOE Bot comes with two continuous servos, one for each wheel. Each wheel moves independently, which will allow the robot to move in any direction: forward, backward, left, and right.

A continuous servo moves continuously in a single direction (i.e., clockwise), like a motor. Where it differs from a standard motor, however, is that we can programmatically tell it to move in the opposite direction (i.e., counterclockwise). (To make a standard motor switch directions, on the other hand, you would have to physically change the polarity of its inputs.)

As you can see in Figure 12-7, the two servos are pointed in opposite directions. This means that in order for the robot to move in a straight line, the servos are going to have to turn in opposite directions (i.e., one will turn clockwise while the other turns counterclockwise). Keep in mind, though, that both wheels will still turn in the same direction.

Figure 12-7. Simplified diagram of robot movement

Similarly, if both servos turn in the same direction, the robot will turn! For this project, when the robot turns, you want it to turn in place. To do this, one wheel needs to turn backward at the same rate that the other wheel turns forward.

By implementing each servo separately, you have the logic shown in Table 12-4 for moving the robot.

Table 12-4. Continuous servo logic for robot movement

Direction of Movement  Left Servo Direction  Right Servo Direction
Forward                Forward (ccw)         Forward (cw)
Backward               Backward (cw)         Backward (ccw)
Left                   Backward (cw)         Forward (cw)
Right                  Forward (ccw)         Backward (ccw)
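The pairings in Table 12-4 can be made concrete with a small lookup table. This is just a sketch (the servoDirections helper is my own name, not part of the project code), but it captures why opposite-facing servos need opposite rotations:

```javascript
// Mirrors Table 12-4: the servos face opposite directions, so driving
// straight means ccw on the left servo and cw on the right, while a
// turn in place means both servos spin the same way.
var DIRECTIONS = {
  forward:  { left: "ccw", right: "cw"  },
  backward: { left: "cw",  right: "ccw" },
  left:     { left: "cw",  right: "cw"  },
  right:    { left: "ccw", right: "ccw" }
};

function servoDirections(movement) {
  return DIRECTIONS[movement];
}

console.log(servoDirections("left")); // both servos cw: the robot spins in place
```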

Remember, all of the source code for the examples in this book can be found on GitHub. You’ll need to follow these steps:

  1. Go ahead and create a new file in the batbot/ directory called moveBot.js. Initialize johnny-five and begin our program like so:

    var five = require("johnny-five");
    
    var board = new five.Board();
    board.on("ready", function () {
        // do stuff
    });
  2. Next, implement each continuous servo using the johnny-five servo API. Add the following code after requiring the johnny-five module, but before initializing the board:

    var leftServo = new five.Servo.Continuous(10);
    var rightServo = new five.Servo.Continuous(11);

    The pin number corresponds to the connection of each servo to the BOE Bot shield and thus to the Arduino. You must also specify that these servos are continuous servos, as opposed to standard servos.

  3. To make it easier to control each servo, write a move function, following the logic for robot movement described in Table 12-4:

    var moveSpeed = 0.1;
    
    function move(rightFwd, leftFwd) {
      if (rightFwd) {
        rightServo.cw(moveSpeed);
      } else {
        rightServo.ccw(moveSpeed);
      }
    
      if (leftFwd) {
        leftServo.ccw(moveSpeed);
      } else {
        leftServo.cw(moveSpeed);
      }
    }
  4. For a given movement, you want the right wheel to move forward or backward, and the same for the left wheel. Your code uses booleans to dictate the direction of each wheel. With this, you can abstract each movement out even further with easier-to-remember functions:

    function turnLeft() {
      move(true, false);
    }
    
    function turnRight() {
      move(false, true);
    }
    
    function goStraight() {
      move(true, true);
    }
    
    function goBack() {
      move(false, false);
    }
  5. Don’t forget to include a stop() function as well:

    function stop() {
      leftServo.stop();
      rightServo.stop();
    }
  6. You can play with the servos and move functions in the johnny-five REPL by passing them into the johnny-five REPL object:

    this.repl.inject({
      left: leftServo,
      right: rightServo,
      turnLeft: turnLeft
    });

    and then in the johnny-five REPL:

    left.cw();
    
    left.stop();
    
    turnLeft();
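Before putting the bot on the floor, you can also sanity-check move() without any hardware at all by swapping in stub servos that simply record the direction they were told to spin. The stubs below are my own test scaffolding, not part of the johnny-five API:

```javascript
// A fake servo that records the last rotation command instead of moving.
function stubServo() {
  return {
    last: null,
    cw: function () { this.last = "cw"; },
    ccw: function () { this.last = "ccw"; }
  };
}

var leftServo = stubServo();
var rightServo = stubServo();
var moveSpeed = 0.1;

// The same move() logic as in moveBot.js, now driving the stubs.
function move(rightFwd, leftFwd) {
  if (rightFwd) { rightServo.cw(moveSpeed); } else { rightServo.ccw(moveSpeed); }
  if (leftFwd) { leftServo.ccw(moveSpeed); } else { leftServo.cw(moveSpeed); }
}

move(true, true); // drive straight ahead
console.log(leftServo.last, rightServo.last); // ccw cw (Table 12-4's Forward row)
```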

Pointing and Reading from the Sonar

Now let’s hook up the sonar and its associated standard servo into the mix:

  1. Because the sonar is an analog sensor, you need to wire it up to one of the analog pins on the Arduino:

    var sonar = new five.Sonar("A2");
  2. Throw the sonar into the REPL, and start playing around with the readings, using sonar.cm or sonar.inches. What happens when you point it at different materials? Do you notice a pattern in readings? Is there a minimum reading or a maximum reading? You can read more about the johnny-five sonar API on GitHub.

  3. You should notice that the sensor emits higher values for objects that are farther away. You may also notice that the object doesn’t necessarily need to be directly in front of the sonar to get a reading. This is due to the MaxBotix sensor’s beam characteristics.
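For context on what those numbers mean: the MaxBotix LV datasheet specifies an analog output of roughly Vcc/512 volts per inch, which on the Arduino’s 10-bit ADC works out to about 2 counts per inch. A rough, hypothetical conversion (johnny-five’s sonar.inches and sonar.cm do this kind of scaling for you) looks like:

```javascript
// MaxBotix LV analog output: ~Vcc/512 volts per inch.
// A 10-bit ADC spans Vcc in 1024 counts, so roughly 2 counts per inch.
function rawToInches(raw) {
  return raw / 2;
}

function rawToCm(raw) {
  return rawToInches(raw) * 2.54;
}

console.log(rawToInches(48)); // 24 (inches)
```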

For maximum control of the sonar sensor, it is attached to a standard servo. Unlike a continuous servo, which rotates continuously, a standard servo moves to a specified angle. The benefit of a standard servo in this application is that you can specify exactly where you want the sonar to point. You can read more about the johnny-five servo API on GitHub.

  1. Initialize the sonar servo:

    var sonarServo = new five.Servo({
      pin: 12,
      range: [10, 170]
    });

    The range parameter allows you to set a minimum and maximum angle for the sonar servo; this way, instead of having to constantly remember which angle is “left,” “center,” and “right,” you can simply say sonarServo.max(), sonarServo.center(), and sonarServo.min(), respectively.

    Depending on how you’ve mounted your servo to the robot, sonarServo.max() and sonarServo.min() may mean right and left, respectively. This is perfectly fine; just be sure to make the adjustments to your code as necessary.

  2. To finish, map the servo movement and sonar readings to your DualShock Controller:

    var range = [10, 170], // matches the range given to sonarServo
      angle = 90,          // johnny-five centers the servo on startup
      sonarStep = 10;
    
    ds.on("r2:press", function() {
      console.log(sonar.cm);
    });
    ds.on("l2:press", function() {
      angle = (range[0] + range[1]) / 2;
      sonarServo.center();
    });
    ds.on("dpadLeft:press", function() {
      angle = angle + sonarStep > range[1]
        ? range[1] : angle + sonarStep;
      sonarServo.to(angle);
    });
    ds.on("dpadRight:press", function() {
      angle = angle - sonarStep < range[0]
        ? range[0] : angle - sonarStep;
      sonarServo.to(angle);
    });
    ds.on("dpadUp:press", function() {
      angle = range[1];
      sonarServo.max();
    });
    ds.on("dpadDown:press", function() {
      angle = range[0];
      sonarServo.min();
    });

    The r2 button press gives you sonar measurements, while the l2 button centers the servo. Then you’re using the direction pad to incrementally move the servo in steps of sonarStep (and stopping at the minimum/maximum we set when we initialized it). You’re also using the direction pad to move entirely to the maximum and minimum ranges. You’re keeping track of the angle yourself to ensure you have the maximum amount of control.
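The clamping those d-pad handlers do by hand can be factored into a small helper. The clampAngle name is my own, and range here matches the [10, 170] limits given to the sonar servo:

```javascript
var range = [10, 170]; // same limits passed to sonarServo

// Keep a proposed angle within the servo's safe range.
function clampAngle(angle) {
  if (angle < range[0]) { return range[0]; }
  if (angle > range[1]) { return range[1]; }
  return angle;
}

console.log(clampAngle(175)); // 170 (never past the upper limit)
console.log(clampAngle(90));  // 90 (in-range values pass through)
```

With this in place, a d-pad handler reduces to angle = clampAngle(angle + sonarStep); sonarServo.to(angle);.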

Be careful when driving your bot around—make sure it’s on a flat surface, preferably on the floor. If you must drive it around on a table, make sure you have someone standing guard who can catch the bot if (when!) something goes wrong.

Drive it around the room—how does it handle? Would you like to do anything differently? Feel free to play around, tweaking numbers. Make it your own!

Step 2: Autonomy

The next step on your journey to artificial intelligence is to take your remote-controlled robot and make it smart! To achieve this goal, you need to fully understand the problem at hand, break it down into smaller problems, “teach” the robot to handle those smaller problems, and walk away once the greater problem has been solved. The steps you follow now will be useful for any artificial intelligence problem, beyond helping BatBot find its way out of a paper bag.

Start by clearly identifying the problem: you have a paper bag sitting on the floor. The opening faces outward, so that the robot can drive into the bag. Once inside the paper bag, its task is to navigate its way back out, using only the ultrasonic sensor and its driving mechanism.

The robot goes into the bag, pointing at the back wall. How should it get out?

If you’re having trouble seeing the world from the robot’s perspective, pretend that you are the robot. Imagine that you are blindfolded, or the room is very dark. The only information you have is that there is or isn’t a wall in front of you. On top of that, you can only turn 90° or move forward/backward. Now how do you get out of the room?

Your first thought may be to turn around by 180° and walk out.

While that’s a perfectly valid answer, you’re using a priori data (facts you knew before you walked into the room, like knowing that the room has three sides and you walked in through the open end). The robot doesn’t have that information.

The goal of this exercise shouldn’t be to answer this specific question, but instead to answer a general set of problems. This problem of the paper bag is essentially three walls, but it can be easily extended to solving a simple maze.

A common maze-solving algorithm is the wall follower algorithm, also known as the lefthand rule or the righthand rule. The general idea is that by keeping either your left or right hand in contact with the wall as you go, you will eventually find the end of the maze.

But going back to this problem’s limitations, you only know if there’s a wall in front of your eyes, not if you’re parallel to a wall (though you can play around with that idea in a future iteration!).

It’s important to note that the robot has no idea about where it is relative to anything else. It only knows the information it has at a particular moment, with no sense of memory.

For this specific application, then, there is an even simpler algorithm:

  1. Check whether there is a wall to the left of me, in front of me, and to the right of me.

  2. If one of the directions has no wall, turn 90° toward the opening (or drive straight into it), then go back to step 1.

  3. If all three directions have walls, turn 90° to the right and go back to step 1.

By using this pattern, you don’t need to have any information about the room, and you can use the sensors you have available to make decisions.
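The decision step is simple enough to try out as a pure function before involving any hardware. The chooseDirection name and the sample readings below are mine; the 15 cm threshold matches the WALL_THRESHOLD used later in the chapter:

```javascript
var WALL_THRESHOLD = 15; // cm; anything farther away counts as an opening

// Given scans like [{ dir: "left", val: 9 }, ...], head toward the
// farthest reading if it clears the threshold; otherwise default right.
function chooseDirection(scans) {
  var best = scans[0];
  for (var i = 1; i < scans.length; i++) {
    if (scans[i].val > best.val) { best = scans[i]; }
  }
  return best.val > WALL_THRESHOLD ? best.dir : "right";
}

console.log(chooseDirection([
  { dir: "left", val: 9 },
  { dir: "center", val: 42 },
  { dir: "right", val: 11 }
])); // center (the open side of the bag)
```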

Implementing the Algorithm

Now that we’ve settled on an algorithm, let’s write it up in the code:

  1. First, check to see if there is a “wall” to the right of the robot. Point the standard servo to the right with sonarServo.max() and take a sonar measurement with sonar.cm.

    Next, turn the servo to the front (sonarServo.center()), take a measurement, and finally to the left (sonarServo.min()) with a measurement as well:

    sonarServo.max();
    var rightVal = sonar.cm;
    sonarServo.center();
    var frontVal = sonar.cm;
    sonarServo.min();
    var leftVal = sonar.cm;
  2. Bind the start (and stop!) of the scanning algorithm to buttons on your DualShock Controller, to make it easy to put your robot in (and out of) autonomous mode:

    ds.on('select:press', function () {
      console.log('IN AUTO MODE');
    
      var loop = setInterval(function () {
        // your algorithm goes here
      }, 100); // interval in milliseconds; tune to taste
    
      ds.on('r1:press', function () {
        clearInterval(loop);
      });
    });

Try it out and see what happens. What kind of values are you getting?

It’s not quite working, is it? It turns out that, with this piece of code, the sonar readings are happening too fast for the servo to keep up.

JavaScript, as a language, is asynchronous by nature. When it sends a command, it doesn’t wait for the command to complete before sending the next one. In the case of the servo/sonar combination, you’re sending the commands in succession, almost instantaneously, and you’re moving on to the next command before its predecessor has had a chance to complete.

What you want to do, instead, is give each command as much time as it needs to complete before beginning the next command. As a result, for this specific application, you’re going to have to force the algorithm to be synchronous.

Fortunately, there’s a library called temporal that will allow you to specify when each servo movement/sonar measurement takes place:

  1. Add the temporal package to your code:

    var five = require("johnny-five");
    var dualshock =
      require("dualshock-controller");
    var temporal = require("temporal");
  2. Using temporal, create a queue that moves the servo and takes a measurement every 1,500 milliseconds (i.e., 1.5 seconds):

    var scans = [];
    temporal.queue([
      {
        delay: 0,
        task: function () {
          sonarServo.max();
          scans.push({ dir: "right",
                       val: sonar.cm });
        }
      },
      {
        delay: 1500,
        task: function () {
          sonarServo.center();
          scans.push({ dir: "center",
                       val: sonar.cm });
        }
      },
      {
        delay: 1500,
        task: function () {
          sonarServo.min();
          scans.push({ dir: "left",
                       val: sonar.cm });
        }
      }
    ]);

    You may have noticed that now you’re pushing your sonar measurements into an array. The array-extended module provides some very useful utilities for manipulating arrays and extracting useful data.

  3. Add the array-extended module to the code.

  4. Take the array of three directional measurements and find the one that is most likely to be the open side (given that higher sonar measurements indicate a wall is farther away):

    var maxVal = array.max(scans, "val");
  5. Now use that information to implement the rest of the algorithm:

    var WALL_THRESHOLD = 15; // cm
    
    var direction =
      maxVal.val > WALL_THRESHOLD
      ? maxVal.dir : "right";
    
    if (direction === "center") {
      goStraight(1000);
    } else if (direction === "left") {
      turnLeft(700);
    } else {
      turnRight(700);
    }

    The WALL_THRESHOLD is a cutoff: any reading below it implies that a wall is present; anything above it means the wall is far enough away to be considered an opening. (Note that goStraight() and the turn functions are now being called with a duration in milliseconds; see the complete code in the repository for the timed versions of these functions.)

  6. To improve accuracy, take multiple scans in each direction and average them out using the array-extended module:

    var scanSpot = function (cb) {
      var sServoReadings = [];
      var read = setInterval(function () {
        sServoReadings.push(sonar.cm);
        if (sServoReadings.length === 10) {
          clearInterval(read);
          cb(null,
             array.avg(sServoReadings));
        }
      }, 100);
    };

    Here’s what’s going on: when you call scanSpot(), you’re taking a sonar reading every 100 milliseconds (i.e., one-tenth of a second) and logging it in an array. After 10 measurements, you use array-extended to find the average and return that value via the callback. The callback ensures that you wait for this function to finish before you move on to the next step in the algorithm.
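If you’d like to see what array-extended is doing under the hood, the two helpers this project leans on (an average, and max by a named property) are easy to write in plain JavaScript. These are my own equivalents for illustration, not the module’s implementation:

```javascript
// Average of an array of numbers.
function avg(values) {
  var sum = 0;
  for (var i = 0; i < values.length; i++) { sum += values[i]; }
  return sum / values.length;
}

// The item whose named property is largest.
function maxBy(items, prop) {
  var best = items[0];
  for (var i = 1; i < items.length; i++) {
    if (items[i][prop] > best[prop]) { best = items[i]; }
  }
  return best;
}

console.log(avg([10, 20, 30])); // 20
console.log(maxBy([{ dir: "left", val: 9 },
                   { dir: "center", val: 42 }], "val").dir); // center
```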

Put together, Example 12-1 shows the complete algorithm.

Example 12-1. Finished algorithm
var scans = [];
temporal.queue([
  {
    delay: 0,
    task: function () {
      sonarServo.max();
      scanSpot(function (err, val) {
        scans.push({ dir: "right", val: val });
      });
    }
  },
  {
    delay: 1500,
    task: function () {
      sonarServo.center();
      scanSpot(function (err, val) {
        scans.push({ dir: "center", val: val });
      });
    }
  },
  {
    delay: 1500,
    task: function () {
      sonarServo.min();
      scanSpot(function (err, val) {
        scans.push({ dir: "left", val: val });
      });
    }
  },
  {
    delay: 1500,
    task: function () {
      var WALL_THRESHOLD = 15; // cm
      var maxVal = array.max(scans, "val");
      var direction = maxVal.val > WALL_THRESHOLD ? maxVal.dir : "right";
      if (direction === "center") {
        goStraight(1000);
      } else if (direction === "left") {
        turnLeft(700);
      } else {
        turnRight(700);
      }
    }
  }
]);

You’re going to want to repeat all of this indefinitely, or until you tell it to stop. Take a look at the sonarscan.js file for the complete version.

Time to try it out! Drive your robot into the paper bag and turn on autonomous mode! How does it do? Feel free to make adjustments until your robot achieves success.

What’s Next?

Congratulations! Your little BatBot can now, on its own, find its way out of a paper bag, as shown in Figure 12-8!

Figure 12-8. BatBot’s movin’ on out!

While artificial intelligence requires quite a bit of concentrated thinking, it also really takes your robots to the next level. Want to go deeper? Try some of these exercises to go further in your artificial intelligence mastery:

  • Instead of making the robot turn in place, make it turn in an arc

  • Make the robot find its way out of a longer paper bag

  • Make the robot solve a maze

  • Implement the wall follower algorithm

  • Find and implement more interesting/complex algorithms to solve this puzzle

  • Make a robot that avoids obstacles

  • Add other sensors to the robot to make it even “smarter”