Experiential Robotics Platform (XRP)

What Is the XRP Platform?

The XRP (Experiential Robotics Platform) [beta], developed through a collaboration between WPI and DEKA Research & Development Corp., aims to level the STEM playing field globally and create a future generation of STEM innovators and technology leaders.

The robot kit you received is designed to operate autonomously and perform basic tasks. Its simple, tool-free assembly means robots can be built quickly, and replacement parts can be easily 3-D printed. As part of this platform, WPI will provide virtual support through online courses and will guide students and teachers through the new system, including the ability to scale up using the same hardware with free software updates.

The XRP platform is part of WPI’s global STEM education initiative, which will bring inspiration and possibility to STEM education in ways that make it available to all.

Welcome to Introduction to Robotics

Welcome to the WPI Global STEM Education Initiative Introduction to Robotics class using the new XRP robots. This class is intended to help instructors learn the basics of robotics and programming using either Blockly or Python. The course is designed around the Python language, but you may start with Blockly and switch to Python as you gain more familiarity with programming the robots.

The class has several modules you can work through, starting with an introduction to robotics. Modules include driving, using sensors, and using the manipulator (robot arm). Finally, there is a challenging final project that brings together everything you have learned in the previous modules. The final challenge includes an optional rubric for scoring your students' runs, adding some competitiveness to the class if you would like to use that.

_images/deliveryRobotImage.png

Final project delivery robot challenge (see final project module for more information)

We hope that you and your students find this course fun and engaging. This is a brand-new course with new robots and software, so there may be bugs and unexpected problems. We will strive to be responsive to any questions you might have. If you have any questions or find anything not working as you expect, feel free to contact us.

For additional information about the program, please see the XRP web page.

Course Overview

This course is designed to teach the knowledge and skills necessary for a basic understanding of programming your XRP robot using either the Python or Blockly programming languages. The course is intended to be interactive, and we expect you to work through every intermediate challenge. At the end there is a final project challenge requiring all the skills taught in the modules leading up to it. Each module builds on the previous one by adding new robotics and programming concepts. New programming techniques are introduced to solve concrete robot problems, not as abstract, unattached learning. This way students will see the relevance of and need for each concept introduced.

The modules are outlined in the syllabus along with the learning objectives associated with each one. There are also short quizzes throughout the course to help students make sure they are learning the important concepts.

The course can be run with one robot per student or with teams of a few students working on each robot. Working in teams allows students to help each other as they work through the course. However, no single student should do all the work, as that prevents the other team members from learning the material.

Joining Platforms

Alongside this Canvas course, you will be using several online resources to get your questions answered. The support tools for technical Q&A will be monitored by the development team and a growing community of users like you. We are using Canvas Discussions and Discord for online support during this course.

Discord

The Global STEM Initiative Discord channel provides an easy way to share your questions, concerns, ideas, and conversations with program participants around the world. Technical questions can be asked here, but we strongly advise asking them in Canvas Discussions so others can find the answers in the future. You will also find important resources, links, event information, and office hours here.

Please follow these steps, or follow the video below, to join the Discord Channel:

  1. To create your account, go to https://discord.com/

  2. At the top right of the screen, click the login button. On the new screen, near the bottom, you will see “Need an account?”; press the Register button highlighted in blue next to this text.

  3. Enter your information on the new screen, and press Continue.

  4. You should now see a screen that says “Create your first Discord server” at the top. At the bottom, blue text asks whether you already have an invite. Click the blue text and enter this invite link: https://discord.gg/fAE2YhVM4H

  5. Finally, verify your email using the link Discord sent to the address you provided.

  6. And you are done! You can now ask any questions you have about the course here or just chat with the other members of the course. We hope you will find this course fun and interesting!

If you want Discord on your phone, there is a Discord app for Android and iPhone in your device's app store. You can also download a desktop app for Discord; the link can be found on the website in step 1.

If you have any problems following the above steps, here is a step-by-step video to join our Discord Channel.

Canvas Discussion

Canvas provides an integrated system for class discussion that both teachers and students can contribute to. We would like to use these resources as our main technical Q&A platform, where you can post any question about the course or the robot.

All information regarding a question will be kept within its thread to make later searches easier.

To participate or post questions in the Discussion:

Navigate to the Discussions section on the left side of this page. The discussions are organized by topic.

To search, post, or contribute to any questions, please first join the discussion that is relevant to your topic.

  1. Search for a question

    • Select the discussion topic

    • Use the search bar to search by title or keyword

    • Once you find the discussion you are interested in, double-click it to access the full thread

  2. Post a question

    • Select the discussion topic

    • Before posting a new question, search to make sure it has not already been asked, to avoid duplicates

  3. Reply or contribute to a question

    • Select the discussion topic

    • Search for the question thread

    • Hit ‘reply’ to answer or contribute more to the discussion

    • We encourage you to contribute your own findings or solutions to any discussion

The course staff will also be monitoring all the discussions to make sure questions are answered as soon as possible. The best answer will be liked by the course staff and placed first in the thread.

Contact Us

If you have any trouble following any of the above steps, please reach out to the Course Staff via Canvas or Discord.

Or email your questions and concerns to gr-globalsteminitiative@wpi.edu

Introduction to XRP

In this module you will:

  • Learn about the field of robotics

  • Assemble your robot

  • Install the software required for programming the robot

  • Write a short program to familiarize yourself with programming the XRP robot

At the end of this module, you will be able to:

  • Recognize the characteristics of robots

  • Talk about the core disciplines that make up the field of robotics

  • Understand the components of the robot and how it is assembled

  • Have familiarity with the tools for programming

To finish this module, review and complete all tasks outlined in each section. Then successfully pass the module quiz at the end.

What is a robot?

In this course we talk about robots as devices that can:

  • Sense their environment

  • Think about and perceive what is happening around them

  • Carry out actions using actuators (motors)

There are many ways to define a robot, and this is only one of them. Still, it is a good idea to have a precise definition of what a robot is. Let’s look at different devices and try to decide whether each of these is a robot according to our definition.

Examples

Radio Controlled Airplane

A radio-controlled airplane is operated by a person who holds a controller and uses the joysticks to control the airplane’s flight path. It requires a remote pilot.

Actuators:

  • propeller motor

  • three control surface motors

Sensors: None

Summary: No sensors, no thinking, and it has to be completely controlled by a human.

Not a robot.

Drone

A quadrotor drone is capable of being teleoperated or using autonomous flight. The drone can fly a programmed course, avoid obstacles, return to the landing point, and automatically land.

Actuators:

  • 4 propeller motors

  • camera aiming motor

Sensors:

  • GPS (Global Positioning System) receiver

  • gyros and accelerometers

  • heading sensor

  • altitude sensor

  • rangefinder

The drone senses its location and surroundings, and flies on its own from one place to another.

This is a robot.

Vacuum cleaner

A conventional vacuum cleaner is pushed by an operator around the area to clean the floor.

Actuators:

  • Motor to turn the fan that sucks up dirt.

Sensors:

  • Fan speed sensor to ensure consistent performance.

The motor does have a sensor that keeps it running at a predetermined speed, but the vacuum does not sense its environment or have perception. The operator must supply all the “smarts”.

Not a robot.

Autonomous vacuum cleaner

An autonomous vacuum cleaner can — on its own — start up, vacuum one or more rooms, return to its home base to recharge, then continue vacuuming until the whole job is done. It maps out each room in the house for more consistent operation.

Actuators:

  • Drive motor for the wheels to get around.

  • Motor for the vacuum for cleaning.

  • Motor in the base (not shown) that will suck the dirt out of the vacuum cleaner so it can continue cleaning.

Sensors:

  • Camera for visualizing the room.

  • Switches on the bumper to allow it to turn around after hitting obstacles.

  • Rangefinders on the side to measure distance from walls.

  • Sensor to detect carpet vs floor to change the motor speed.

An autonomous vacuum is fairly smart. It can learn the map of a house after a few runs and efficiently clean rooms. It can avoid obstacles, clean rooms, stop to recharge, and continue where it left off.

This is a robot.

Self-driving car

A self-driving car can be driven conventionally by a human or driven autonomously on city streets and highways on its own.

Actuators:

  • Motors for driving the wheels.

  • Motors for controlling steering.

  • Actuators that allow the car to brake on its own.

Sensors:

  • 8 cameras both outside and inside the car to view the environment and driver’s attentiveness

  • Rangefinders all around the car to measure the distance to adjacent vehicles

  • GPS to determine car’s location, and more.

The car is smart and represents the state of the art in robotics. It can sense its environment, anticipate how the situation around it will change over time, and drive to its destination while safely avoiding obstacles.

This is a robot.

What are the parts of robotics?

Robotics engineering is usually thought of as a combination of three disciplines. They are:

  • Mechanical engineering - the design and analysis of mechanisms and other mechanical systems.

  • Electrical engineering - the design of electronic circuits, especially all the sensors.

  • Computer science - the development of advanced software (computer programs) to interpret all the sensor data, understand it, and drive the actuators.

Robotics can be thought of as the synergy of these three fields. Designing robots requires a “systems” approach to design. Having knowledge of all three subjects allows one to develop more complex and capable systems than a background in only one of them would.


Robotics is a 3-legged stool. Without any one of these subjects, it falls down.

Building the XRP robot

The attached PDF file contains preliminary assembly instructions for the beta XRP Robot kit distributed at the FIRST Global competition in Geneva.


Installing the programming tools

Two primary programming tools will be used in this course:

Blockly

This is a drag-and-drop graphical programming system similar to Scratch. Blockly is a good choice for programming the robot, especially for learners with little to no programming experience. However, it does not easily scale to larger programs and is only usable within the Blockly programming environment.

Installing Blockly

In a browser, navigate to the github repository release page for XRP-blockly:

https://github.com/Open-STEM/xrp-blockly/releases/

Find the most recent release (as of this writing, v0.0.5) and download the version for the computer platform you are using. The file names below are for version 0.0.4; be sure to take the newest release version that is posted.

Windows

XRP.Blockly.Setup.0.0.4.exe

MacOS

XRP.Blockly-0.0.4.dmg

Linux

XRP-Blockly_0.0.4_amd64.deb

After downloading the release file, double-click it to begin the installation. Follow the instructions, accepting the default settings unless you have a reason to change one of them. When it is finished, you may delete the release file. To start using Blockly, find it in the Start Menu on Windows or in Applications on a Mac.

Python

A more widely used language that scales well to larger programs and works with professional programming tools such as Visual Studio Code and many others. The tool that will be used throughout this course for Python programming is the Mu Editor, although a more professional tool such as Visual Studio Code can also be used for writing the CircuitPython code used by the XRP Robot.

Note

A way to get students comfortable with programming more quickly is to start with Blockly, covering basic concepts such as functions, conditionals, loops, and all the basic robot operations. Then move to Python to complete the course, especially for the more complex challenges such as the final project. This allows for quick onboarding without having to learn too many new concepts at the same time. The decision will depend on the level and experience of the students taking the course.

Installing Mu Editor

Mu is a simple and concise editor that allows us to edit our CircuitPython code, and view serial output from our robot. To install, go to the following webpage: https://codewith.mu/en/download


Download the file corresponding to your operating system, and then open the downloaded file and follow the installation prompts that are provided.

Mu allows you to run different implementations of Python for different microcontrollers. In our case, we want to run CircuitPython, which is compatible with our Maker Pi RP2040. Select the “Mode” button shown below.


Select CircuitPython and confirm by clicking ‘OK’.


In the Mu Editor, you may notice a symbol of a red ‘x’ on a microchip in the bottom right corner of the window.


This indicates that Mu has not detected a robot device attached to the computer. In order to write and download programs to the robot, you’ll need to connect your computer to the robot via a Micro-USB cable. Connect the robot and turn it on using the switch near the back of the robot chassis.

Note

Some micro-USB cables are designed to carry only power. The one provided in your kit carries both power and data, which is required for programming your robot.


At the bottom right corner of the screen, the red “chip” icon will now be grey, indicating a successful connection between the robot and the computer running Mu Editor.

At this point, you can write programs in the main window and run them on the robot.

Additional resources about using Mu Editor for programming your robot can be found on the project website: https://codewith.mu

Installing the software libraries on your robot

The software library makes it easy to write programs to control the robot and will be used throughout the course. To install it, go to https://github.com/Open-STEM/XRP-Library/releases and download the “Source code” (zip) file.

_images/zipfile.png

Then, unzip the file. To do this, open Finder on a Mac or File Explorer on Windows, go to your Downloads folder, and find the zip file. On a Mac, double-click the zip file to unzip it; on Windows, right-click the zip file and click “Extract All…”

Note

Be sure that your robot is plugged into a free USB port and turned on.

Next, look for an external drive labelled ‘CIRCUITPY’. If the robot is correctly connected to the computer and turned on, this drive should be visible in Finder or File Explorer.

_images/cp.png

This is where programs can be edited and run. Delete anything that is currently in the drive. Then, go inside the unzipped folder and copy the contents of the folder into the CIRCUITPY drive. The robot should now be set up with the library!

Note

The CIRCUITPY drive should have the code.py file, and the lib, SampleCode, and WPILib folders as shown in the picture above.

Writing your first robot program: Python

As mentioned earlier, we’ll be using Mu to edit and run CircuitPython code and view serial output from the robot. Be sure that your robot is plugged into the computer and turned on, and then open up the Mu application.

_images/load.png

Then, click the “Load” button, as shown above. Navigate to the CIRCUITPY drive and open code.py.

_images/code.png

code.py is the file that contains the robot code. In order to run a program, simply edit code.py, save your changes, and your program will automatically start executing on the robot!

Before we write any code though, there’s a neat feature that will prove quite useful when testing your programs.

_images/serial.png

Click the ‘Serial’ button. If the robot is connected to the computer, this will allow you to view your robot’s logs on your computer while running your program, which is very useful for debugging.

Python Programming Notes:

Note 1

This course is a hands-on, learn-as-you-go course. We will not be teaching you how to program in Python, but if you follow the patterns presented to you, you should be able to fill in the blanks and accomplish the tasks. If you want to learn more about Python, we suggest https://www.w3schools.com/python/default.asp as a resource.

Note 2

The Mu program saves the current program to your robot for execution. If you would like to keep a program for future use, we suggest copying and pasting it into an editor program, such as Notepad, on your computer for future reference. We also suggest saving a copy of the current program in the editor as a base for each new program or challenge you start.

Note 3

With Python, the amount of indentation of a line of code is important. For instance, in the program below, all the indented lines are part of the main program. We suggest using the ‘tab’ key for indenting so that all the code is indented the same amount.

Note 4

Any line that starts with a # character is a comment. Comments are not executed as part of the program; they are notes to help you remember what you were doing in that part of the program.

Finally, let’s test some code. When learning a new programming language, it is a tradition for the first program to display “Hello World!”. Let’s put our own twist on this tradition with our robot. Add the following code to code.py (note: the go_straight function has been renamed to straight in the final version; we’ll update the screenshot soon):

_images/helw.png

Then, click the ‘Save’ button. CAUTION: As soon as you hit save, your robot will start moving. Remember, saving your changes causes the robot to initiate running the program automatically! Note that pressing Control-D will also start running your code.

You should see the robot immediately drive forward 20 centimeters. Then, it should log “Hello World” on your computer through serial output. If nothing went wrong, you have successfully set up the software and are ready to write more intricate code!

Getting the robot to drive

In this module students will:

  • Learn how to set power to the robot’s drivetrain

  • Understand the relationship between the motor efforts and the robot’s motion

  • Be introduced to motor encoders and how to use them to correct the robot’s motion

  • Use the pre-made drive, turn, and button functions

At the end of this module, students will be able to…

  • Control the robot’s basic motions

  • Understand the basics of while loops and exit conditions

  • Have familiarity with making and running functions

  • Break down a larger path into components and then convert them into code

Effort vs. Speed

Getting the Robot to move

Getting your XRP robot to move is pretty easy. On a basic level, all movement commands boil down to this one method:

drivetrain.set_effort(left_effort: float, right_effort: float)

A method is a grouping of code for a common purpose. For example, the drivetrain.set_effort method sets the drivetrain to move at the effort values you, as the programmer, specify. The input values to the method are called parameters, and they are essential for giving methods context that may change at various points.

So what is an effort value?

An effort value is a measure of how much voltage is sent to the motors. Sending more voltage to a motor makes it run at a higher speed, produce more torque, or both.

In our case, effort values are bound between -1.0 (full power in reverse) and +1.0 (full power forwards), with 0 being off. Any thoughts on what 1/2 effort would be?
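The valid range can be captured in a tiny helper. This is only an illustrative sketch: clamp_effort is a hypothetical function, not part of the XRP library.

```python
def clamp_effort(effort):
    """Limit a requested effort to the valid -1.0 .. +1.0 range.

    clamp_effort is a hypothetical helper for illustration; it is not
    part of the XRP library.
    """
    return max(-1.0, min(1.0, effort))

print(clamp_effort(0.5))   # 0.5: half effort forward is already valid
print(clamp_effort(2.0))   # 1.0: anything above full forward is limited
print(clamp_effort(-3.0))  # -1.0: anything below full reverse is limited
```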

Let’s test this so we can get a better idea of what effort values actually mean: In code.py, inside of def main():, add the following line of code to run the motors at full power. This is where you will be putting most of your code moving forward.

Python Programming Notes:

Note 1:

The while loop will continue to execute the statements indented underneath it until the while condition is no longer true. while True: will continue to execute forever, which is known as an infinite loop.

Note 2:

time.sleep(0.1) tells the program to do nothing for 0.1 seconds.

Note 3:

It is good practice to put the robot up on something like a roll of tape so the wheels can spin freely without touching the table. This keeps the robot from running off the end of the table. When a program with an infinite loop like this is started, it will keep spinning the wheels until the program is changed. To stop the wheels from turning, delete or comment out the drivetrain.set_effort(1.0, 1.0) line and save the program again.

def main():
    while True:
        drivetrain.set_effort(1.0, 1.0)
        time.sleep(0.1)

Notice that the effort values used above are 1.0, which represents full effort forward. What would you use for half effort forward, and what would you use for half effort backward?

Then, upload your code to the robot and let the robot drive on a flat surface. Take note of how fast it goes. Try measuring how fast it travels in a few seconds!

Afterward, place the robot on a ramp and run it again. Notice how the robot moves more slowly on the ramp. Why does this happen?

Ramp ascend

Ramp descend

Mini-Challenge: Climbing Slopes

So if a robot drives slower up a ramp, then the natural question would be: how steep of a slope can the robot climb?

Have your robot drive on a ramp and then raise the ramp until the robot is no longer able to move forwards. Is that angle what you expected? If your robot started sliding back down the ramp, think about why that happened.

How far have you driven?

“Circumference” and making the robot move forward

The “circumference” of a circle is the distance around it, and the “radius” is the distance from the center of the circle to its edge.

_images/Circle-Graphic-1024x576-2.png

If a car wheel (a circle with a circumference of 100 inches) rotates 5 times, how far does the car go? How would you figure that out?

_images/P316_1-1.png

You would rotate the wheel once and find that it has travelled 100 inches, because the wheel traces out its circumference on the ground. Then you would rotate it a second time and see it move another 100 inches. Then a third, a fourth, and a fifth time, and see that the wheel has traced out its circumference on the ground 5 times.


The distance travelled is 5 times the circumference. This works for any number of rotations: if you rotate the wheel 3 times, you move forward 3 times the circumference (300 inches); if you rotate it 1 and a half times, you move forward one and a half times the circumference (150 inches).

\[distanceTravelled = numberOfRotations \cdot circumference\]

This is going to be super useful when you need to make the robot go forwards a certain distance, or when you need to calculate how far the robot has moved!
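The formula above can be checked with a few lines of plain Python. The function name here is made up for illustration:

```python
def distance_travelled(number_of_rotations, circumference):
    # distanceTravelled = numberOfRotations * circumference
    return number_of_rotations * circumference

# The 100-inch car wheel from the example above:
print(distance_travelled(5, 100))    # 500 inches after 5 rotations
print(distance_travelled(1.5, 100))  # 150.0 inches after 1.5 rotations
```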

How many times have the wheels turned?

We know that, in order to find out how far we have driven, we can use \(numberOfRotations \cdot circumference = distanceTravelled\).

We can find the circumference of the wheel (remember, it is \(diameterOfWheel \cdot pi\)), but how do we find the number of rotations?


The robot has sensors that measure how far the wheels have turned. Remember, the sensors on a robot give it information about its environment and actions – similar to your five senses – that we can query and use to make decisions. Because the robot senses this information, we can simply ask it, “how much have the wheels turned?”

The Encoders

The Encoders are the sensors on the motors.

The encoders tell you how many times the wheel has turned 1.25 degrees. If you ask, “how many times has the wheel turned 1.25 degrees?”, and the encoder says “100 times”, that means the wheel has turned \(1.25 \cdot 100 = 125\) degrees.

What would the encoder say if you had turned one rotation? Remember, there are 360 degrees in one revolution.

There are \(\frac{360}{1.25} = 288\) divisions in one rotation.

_images/blog017-image001-disks-resolution.jpg

These are examples of encoder disks, where the wheel is divided into multiple sections. Each section represents one “click”. There are 288 sections in the motor you are using.
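The tick-to-degrees arithmetic above can be written out as a short sketch. The function names are made up for illustration:

```python
DEGREES_PER_TICK = 1.25                      # each encoder "click" is 1.25 degrees
TICKS_PER_ROTATION = 360 / DEGREES_PER_TICK  # 360 / 1.25 = 288 ticks per rotation

def ticks_to_degrees(ticks):
    return ticks * DEGREES_PER_TICK

def ticks_to_rotations(ticks):
    return ticks / TICKS_PER_ROTATION

print(TICKS_PER_ROTATION)       # 288.0
print(ticks_to_degrees(100))    # 125.0 degrees, as in the example above
print(ticks_to_rotations(288))  # 1.0 full rotation
```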

In the video below, the encoder has 60 clicks in a rotation.

https://youtu.be/u-aMnayYO6c

How many times did it rotate? How did you find this out? Can you find how many degrees it rotates with each click?

Asking the robot about rotations

To get the encoder ticks on the left motor, you can use

leftEncoderPosition = drivetrain.get_left_encoder_position()

You can print the left encoder position to your computer using

print(leftEncoderPosition)

Challenge 1: Design Thinking

Use the tools you learned about here and the internet to print the distance your robot has driven.

How would you find the distance?

What information do you need to find that out?

What information do you not have? Where might you be able to find that?

You can test this by moving the robot by hand along a measuring tape.

Challenge 2: Abstraction

Create a function that takes in an encoder value and returns the distance travelled by the wheel.

Call it getDistanceFromTicks( numberOfEncoderTicks ) because it gets the distance travelled from the number of encoder ticks.

Once you have created the function, you can use it to find the distance a wheel has driven:

leftEncoderPosition = drivetrain.get_left_encoder_position()
rightEncoderPosition = drivetrain.get_right_encoder_position()

left_Wheel_Total_Distance_Travelled = getDistanceFromTicks(leftEncoderPosition)
right_Wheel_Total_Distance_Travelled = getDistanceFromTicks(rightEncoderPosition)
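If you get stuck on Challenge 2, here is one possible sketch. Try writing the function yourself first. The wheel diameter below is an assumed placeholder, not a specification; measure your own robot's wheel before trusting the numbers.

```python
import math

WHEEL_DIAMETER_CM = 6.0   # ASSUMED placeholder -- measure your robot's wheel!
TICKS_PER_ROTATION = 288  # 360 degrees / 1.25 degrees per tick

def getDistanceFromTicks(numberOfEncoderTicks):
    """Convert encoder ticks into distance travelled by the wheel (cm)."""
    rotations = numberOfEncoderTicks / TICKS_PER_ROTATION
    circumference = WHEEL_DIAMETER_CM * math.pi
    return rotations * circumference

# One full rotation covers one circumference (about 18.85 cm for a 6 cm wheel):
print(getDistanceFromTicks(288))
```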

Differential steering

Driving Straight

Driving straight is pretty easy. The wheels just have to go the same distance.

https://youtu.be/NeNV5lUYcgo

What if they don’t go the same distance?

What happens if the left wheel goes slower than the right wheel?

https://youtu.be/kd2-mhI2CgE

The robot goes in an arc. The wheels draw out 2 different circles with 2 different radii. The arc with the smaller radius (\(r1\), the white line) has a smaller circumference. That means the left wheel has driven a smaller distance. It has gone slower.

How would you drive in an arc with a given radius?

We know the wheels trace out arcs when we make the wheels go at different speeds, but how do we decide what the wheel speeds should be?

What do we know?

Let us start with what we know about the two arcs, and the arc we want the center of the robot to go through.

The first thing we can say about any circle is that the radius is proportional to the circumference. This means that the inner and outer arc lengths are proportional to the inner and outer radii, or

\[\frac{leftWheelDistance}{rightWheelDistance} = \frac{r1}{r2}\]

This is the same as saying that

\[ratioBetweenWheelSpeeds = \frac{r1}{r2}\]

Hey! We found out what the ratio between the wheel speeds is supposed to be if we know the two radii. But how do we find the two radii? We only know what we want the radius of the orange circle to be.

But we do know the distance between the two wheels.

_images/Screenshot2023-03-07142430.png

If \(r_{bot}\) is the distance between the two wheels, then

\[r_1 = r_{desired} - \frac{r_{bot}}{2}\]

since r1 is less than the desired radius, and

\[r_2 = r_{desired} + \frac{r_{bot}}{2}\]

We found out the radii of the circles we want the left and right wheels to trace, and we know how to find the ratio between the wheel speeds from that.

Remember:

\[ratioBetweenWheelSpeeds = \frac{r1}{r2}\]

Putting them together, we get,

\[ratioBetweenWheelSpeeds = \frac{leftWheelSpeed}{rightWheelSpeed} = \frac{r_{desired} - \frac{r_{bot}}{2}}{r_{desired} + \frac{r_{bot}}{2}}\]
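As a sanity check, the final formula can be evaluated in a few lines of Python. The function name and the radius and track-width numbers below are made up for illustration:

```python
def wheel_speed_ratio(r_desired, r_bot):
    """leftWheelSpeed / rightWheelSpeed for an arc of radius r_desired.

    r_bot is the distance between the two wheels; the left wheel is
    assumed to be on the inside of the turn.
    """
    r1 = r_desired - r_bot / 2  # inner (left) wheel radius
    r2 = r_desired + r_bot / 2  # outer (right) wheel radius
    return r1 / r2

# Made-up example: a 30 cm turn radius with wheels 10 cm apart.
right_effort = 0.5
left_effort = right_effort * wheel_speed_ratio(30, 10)
print(left_effort)  # the inner wheel gets a smaller effort than the outer one
```

Notice that when r_desired equals half of r_bot, the ratio becomes 0: the inner wheel stops entirely, which is exactly the "turn around one wheel" case from the challenges below.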

Try it yourself

Try it yourself! Make your robot drive in a circle. Choose the radius of the circle to be whatever you want!

What will you set your wheel efforts to? Does it drive in a circle?

What happens if you set the wheel efforts to half of what you decided in part a?

What happens if you flip the wheel efforts (set the left wheel effort to what you decided as the right wheel effort and vice-versa)?

Additional challenges

Try to make the robot do a point turn! What is the radius of the circle that you want it to drive in then?

(Try drawing the robot and the point you want it to turn around. What can you infer about it then?)

Try to make the robot turn around one of the wheels. What is the new desired radius?

Try making the robot drive backwards in an arc

Calling Drive Functions

Here are functions that are built to make the robot go straight and turn

drivetrain.straight(distance: float, speed: float = 0.5, timeout: float = None) -> bool
drivetrain.turn(turn_degrees: float, speed: float = 0.5, timeout: float = None) -> bool

These versions of the drive functions have a few extra features we didn’t cover in the last section, such as an optional timeout for cases where you anticipate the robot being unable to reach its target position for any reason.

Calling the Drive Functions:

Similar to most of the code you’ve written so far in this section, we call these methods from inside of def main():, located within code.py:

def main():
    # Drive forwards 20 cm and then turn 90 degrees clockwise
    drivetrain.straight(20, 1)
    drivetrain.turn(90, 0.8)

So now that we have the fundamentals of driving down, what else can we do?

Mini-Challenge: An A-Maze-ing Path

[Picture of 1001 maze (with dimensions)]

If we wanted our robot to navigate this maze, what would we do? Try breaking down the path into simple “drive straight __ cm” and “turn __ degrees” segments. This will allow us to easily convert real world instructions into code for the robot.

Once you have your path written out informally, try converting it into instructions for the robot using the drivetrain.straight and drivetrain.turn functions. Place your robot into the maze and run your code. If your robot touches the tape at any point, adjust the distances and turns so that it doesn’t.

A Shapely Surprise

Let’s take the ideas we just exercised for the maze and do something a little simpler. Let’s try to get the robot to drive in the shape of a square. You can choose how big you want the square to be; for this exercise it doesn’t really matter. Follow the same steps as before, and write down the segments in pseudocode (words describing what the code will do informally) before translating that into actual code.

XRP tracing a square

You may notice that this code is pretty repetitive, consisting of the same two instructions 4 times. There’s got to be a cleaner way of doing that, right?

Python Programming Note: For Loops

Similar to the While loops we covered earlier, for loops are a special type of loop that are usually used to run a section of code a specified number of times. The syntax for a for loop is as follows:

for counter_name in range(number_of_loops):
    # Loop content goes here

As a for loop cycles, “counter_name” becomes a variable equal to the number of the iteration currently occurring, starting at zero. This means that on the third time running the loop content, counter_name = 2.
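For example, this tiny robot-free snippet records the counter on each pass, showing that it starts at zero:

```python
counts = []
for counter_name in range(4):      # the loop body runs 4 times
    counts.append(counter_name)

# counts is now [0, 1, 2, 3]; on the third pass, counter_name was 2
```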

Mini-Challenge: An Application

Now that we know about for loops, we can try a new approach to driving. Take your square driving code and adapt it to have the robot outline a triangle instead of a square. Then, rewrite it using a for loop.

Waiting for Button Input

You may have noticed that your code runs immediately after uploading it. This is nice sometimes, but sometimes you aren’t coding in the same place you will be running your code, and the robot suddenly driving itself off the table isn’t an ideal result. To have the code run on command, we can use the on-board buttons to tell the code when to run.

Using Button Inputs

_images/MakerPiPinOut.png

There are two buttons on the board from which we can get input, labeled as GP20 and GP21 respectively. As such, there are two functions we use to check if either one is being pressed:

buttons.is_GP20_pressed()
buttons.is_GP21_pressed()

These methods both return a boolean value (a True or False value), which means they can both be used directly as a condition for a while loop:

Waiting for a button input

Since these functions return True if they are being pressed, and False if they aren’t, waiting for a button press is as simple as:

while not buttons.is_GP20_pressed():
    time.sleep(0.01)

The time.sleep statement is necessary in order to not overload the button pin. Checking the button 100 times a second is still more than enough precision for almost any application.

Waiting for a button input allows us to start our program on command, which is a very convenient feature.

Try putting this code at the start of any of the programs you have written so far. Take note of how the program begins as soon as you press the button.

Mini-Challenge: Multiple Button Presses

But what if we instead wanted to check for multiple consecutive button presses? If you just place the previous segment of code a few times, you may see that pressing the button once allows all or most of the checks to pass, because the code never waits for you to release the button before registering the next check.

Try writing your own implementation where you have to press either button 3 distinct times before your program begins.
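If you get stuck, here is one possible shape for the logic, written as a helper that takes the button-checking function as a parameter so the same idea works for either button. The helper name and structure are just a sketch, not part of the XRP libraries; on the robot you would pass in buttons.is_GP20_pressed or buttons.is_GP21_pressed.

```python
import time

def wait_for_presses(n, is_pressed, delay=0.01):
    """Block until the button has been pressed AND released n separate times.

    is_pressed is a function returning True while the button is held down
    (e.g. buttons.is_GP20_pressed on the XRP).
    """
    for _ in range(n):
        while not is_pressed():   # wait for the next press...
            time.sleep(delay)
        while is_pressed():       # ...then wait for the release
            time.sleep(delay)
```

For example, wait_for_presses(3, buttons.is_GP20_pressed) at the top of main would wait for three distinct presses before the rest of your program runs.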

Using the Sample Code’s Implementation

We have provided some sample code to handle waiting for a button press. But first you have to tell Python to import that code into your program. Use the following line at the top of your program to import it.

from SampleCode.sample_miscellaneous import *

This line gives everything in code.py access to everything in sample_miscellaneous.py, including wait_for_button().

It can be used very simply, by placing the call at the beginning of your def main():

def main():
    wait_for_button()
    #
    # Your Code Here!
    #

Measuring distances using sensors

Topic Description

In this module students will:

  • Understand the basics of measuring distance and its application

  • Learn to use distance sensors to determine the robot’s location with respect to the target.

  • Use readings from the sensor to avoid obstacles or stop before an object

  • Learn the concept and basic application of On/Off control and Proportional Control

  • Write a quick program for Robot Wall Follow

At the end of this module, students will be able to:

  • Determine the robot’s current position with respect to the target

  • Get their robot to stop before the mark or avoid obstacles using sensors

  • Use Proportional Feedback Control to control the robot’s speed and achieve more accurate results.

  • Develop an autonomous wall-following program

To complete this module, review and do the tasks outlined in each section, and successfully pass the module quiz at the end.

Measuring Distances

Real-world examples of distance sensing in robot systems

In the programs you have written so far, the robot has been able to drive measured distances based on the circumference of the wheels and the number of rotations. This works as long as the objects the robot is driving toward are always precisely the same distance away. But what happens if the robot is trying to manipulate an object that might be a different distance away from run to run? If the robot could see the object, it would always be able to drive to the object and stop at a repeatable distance from it - for example, if the robot’s task is to pick up a box with its extending arm.

Welcome to the distance measuring module! The ability to measure the distance between a robot and the objects in its surrounding environment is crucial. This information allows the robot to avoid collisions and determine its current location with respect to the target.

Applications

Autonomous Vehicles

Distance measuring is a fundamental feature of Tesla’s Autopilot software for self-driving cars. Autonomous driving is a complex task and requires an incredible amount of information about surrounding obstacles to avoid a collision. Measuring the forward distance to an obstacle allows the vehicle to maintain a healthy following distance. Measuring sideways distance determines whether it is safe to merge highway lanes. While parking, measuring backward distance informs whether it is safe to continue backing up.

Tesla uses radar, a system based on radio waves, for long-range sensing. A device known as a transmitter produces electromagnetic radio waves, which reflect back after hitting an object. Another device, known as a receiver, captures this reflected wave to calculate the distance to the object based on the wave’s travel time and speed.

Alternative text
Marine Echolocation

Measuring distances isn’t only important on land; it is important 20,000 leagues under the sea as well! Submarines use sonar, a system using sound, to navigate in the murky waters, measure distances from nearby objects, and detect notable presences in their surrounding environment, such as sunken ships.

Similar to radars, sonar detects objects by transmitting ultrasonic waves and captures the reflected echoes. Based on the travel speed and time of the wave, the distance to the object can be calculated.

Alternative text
Animal Echolocation

While the integration of these distance-measuring sensors into technology is impressive, animals do it better!

Bats navigate through dark caves and find food through echolocation by emitting high-frequency sound waves using their mouths. They listen to the echo of the sound waves bouncing back from the environment with their highly sensitive ears, allowing them to determine the size, shape, and texture of objects.

Similar to bats, whales also use echolocation to navigate underwater and locate food. They emit high-frequency clicking sounds using their nasal passages.

Distance Sensors

Distance Measuring Sensors

So, what are some sensors that allow you to measure distances?

Mechanical Sensors

If you touch it, then you know it’s there. Mechanical sensors detect some form of mechanical deformation (bend, press, etc) and translate that into an electrical signal. This is a very naive, but still valid way to approach distance measurement. The obstructing object could activate a mechanical switch upon contact, signaling its presence.

An example of a mechanical sensor is a whisker sensor. The whisker itself is a long, flexible piece of metal that can bend and trigger a mechanical switch, signaling an obstacle ahead. Whisker sensors can consist of multiple whiskers to sense obstacles from multiple directions.

Reflective sensors

Lidar (Light Detection and Ranging), Sonar (Sound navigation and ranging), and Radar (Radio Detection and Ranging) all follow the same principle. A transmitter emits light, sound, or radio waves, which bounce back from an obstructing object, resulting in echoes. A receiver captures these echoes, allowing for the calculation of distance based on the travel time of the wave.

Using the Ultrasonic Sensor

_images/setpointtarget.png

We will be using an ultrasonic sensor, which is one type of reflective sensor that uses sonar to measure and calculate distance. We have just one function for getting input from the ultrasonic sensor:

sonar.get_distance()

This function returns the distance, in cm, from the sensor to the nearest object.

Mini Challenge: Show distance

Try writing code that checks the distance every 50 ms (0.05 seconds) and prints the output.

Controlling Behavior: Introduction

The main purpose of sensors is to introduce feedback into the system. During actuation, the state of the robot and its environment is continuously changing. Feedback informs the system how much the state has changed and how much more the state should change. Feedback introduces more control over autonomous behavior - a fundamental characteristic of all intelligent robots!

Incorporating Feedback

Your sensors tell you where you are. Robots need to understand where they are in order to make decisions about where to go, and so do you. When driving, you can choose to go faster or slower. In order to know whether to go faster or slower you need to know the speed limit, the speed of your car, and the speed of the car in front of you.

Alternative text

You can tell how fast you are going using your speedometer, and you can see how fast the car in front of you is going. These are sensor inputs, and you want to take these into account when deciding whether to drive faster or slower. How a robot decides to move is called its control law.

For the XRP robots, your control law will decide how much effort to use on each wheel depending on where you want to go and what sensor inputs you have. You’re going to design your own robot control law.

Python Programming Note: If Statement

An if statement will execute its inner code block if its specified condition is met. For example:

if True:
    print("Hello World!")

The if statement above will print “Hello World!” because its condition is true.

if False:
    print("Hello World!")

The if statement above will not print “Hello World!” because its condition is always false.

i = 3
if i < 5:
    print("Hello World!")

The if statement above will print “Hello World!” because the variable “i” is less than 5, satisfying the condition.

You can imagine how this might be used in a control law.

Maybe it could be something like – “If your sensors see that you are exactly where you need to be, you don’t need to do anything!”

Controlling Behavior: Bang Bang

Parking your XRP

You want to create a control law that parks your XRP at a set distance of 20 cm from a wall. You know that, in order to do that, you only need your distance sensor!

Alternative text

In order to build your control law, you build a table. For each of these scenarios, do you want to go forwards or backwards?

Sensor Input                             | Action (go forwards or backwards?)
-----------------------------------------|-----------------------------------
You’re a lot closer than 20 cm away      |
You’re a little closer than 20 cm away   |
You’re 20 cm away                        |
You’re a little farther than 20 cm away  |
You’re a lot farther than 20 cm away     |

Try implementing this table using if statements on your robot. How many if statements would you need?

Design Thinking

What are some problems you’re running into?

  • Does the robot stop?

  • Does it move too fast?

  • Does it move too slow?

Why are these problems happening, and how can they be solved?

Hint: Do you need the robot to always move at full speed? When should the robot slow down?

Bang Bang control

The Robot only needed to move forwards when it was too far, and backwards when it was too close.

[insert video]

if sonarDistance > targetDistance:
    set a positive effort (move forwards)

if sonarDistance < targetDistance:
    set a negative effort (move backwards)

if sonarDistance == targetDistance:
    set the effort to 0

This is called Bang Bang control. What efforts did you choose? Did you set the efforts to 0.5 and -0.5? It’s called “Bang Bang” because it’s always either full throttle forwards or full throttle backwards (bang forwards, bang backwards). When it’s close to its goal, does it need to be so fast? How can you change that?

Activity: Trains!

Now that you made your robot park, can you make it do these?

Activity 1: Following a moving wall

Set the robot to be a fixed distance from a wall, and do the same parking activity from before, but this time, try moving the wall. See if the robot will follow the wall well. Move it back and forth and your robot should follow.

Activity 2: Making a Train

If they can follow walls… why can’t they follow other robots? Try putting multiple robots, one behind another, and doing this same activity. In front of the first robot, put a wall, and make it move. Now, you’ve created a way to control all of these robots!

Note that this train may break apart because the robots end up facing sideways after some time.

Controlling Behavior: Proportional Control

Updating Behaviors:

That worked! But it seemed like the robot went too fast when it needed to go slow. See how it overshoots, then keeps going back and forth? It doesn’t seem to stop, does it?

[insert video]

How should we change that?

Let’s go back to our Control Law.

Sensor Input                             | Action
-----------------------------------------|----------------
You’re a lot closer than 20 cm away      | Move Back
You’re a little closer than 20 cm away   | Move Back
You’re 20 cm away                        | No Need to Move
You’re a little farther than 20 cm away  | Move Forward
You’re a lot farther than 20 cm away     | Move Forward

When we’re closer, we don’t have to go as fast, so let’s change the table.

Sensor Input                             | Action (forwards or backwards? Fast or slow?)
-----------------------------------------|-----------------------------------------------
You’re a lot closer than 20 cm away      | (far away from the target)
You’re a little closer than 20 cm away   | (close to the target)
You’re 20 cm away                        | (at the target)
You’re a little farther than 20 cm away  | (close to the target)
You’re a lot farther than 20 cm away     | (far from the target)

Alternative text Alternative text Alternative text Alternative text

This is called proportional control. The control effort is proportional to the error.

That means that if the error is large (you have to go a far distance), so is the control effort. If the error is negative (you have to go backwards), the control effort is in the other direction.

Remember – the error is the distance between the point you are at, and the point you want to go to.

How do you find the distance on a number line like the one above? What is the error between “30” and “20”? How do you find it?

Implementing Proportional Control:

If we set the control effort equal to the error, we might be satisfied, right? After all, this would make the control effort bigger when the error is bigger, and when the error is negative the control effort would be negative: you would be telling the robot to drive backwards.

The control law used here would be \(effort = error\)

But the effort is capped at 1. If the error were 10, then \(effort = 10\) would be far too much, right? That is why in proportional control, you can scale this control effort down or up by some amount. This scaling factor is called “the proportional constant”, or \(k_{p}\).

If you scaled the effort down by 15, what would the effort be (at error = 10)?

In this case, \(k_{p}\) would be \(\frac{1}{15}\), and the new control law would be \(effort = \frac{error}{15}\), or \(effort = k_{p} \cdot error\).

Try implementing this control law on your robot. How did it work? What could be done to improve it? Remember, \(k_{p}\) can be set to any number you want.
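Putting the pieces together, the proportional law can be written as a small function. This is only a sketch: the function name, the clamp to the [-1, 1] effort range, and the default \(k_{p}\) are our own illustrative choices, not a fixed API.

```python
def parking_effort(distance_cm, target_cm=20.0, kp=1/15):
    """Proportional control for parking at target_cm from the wall.

    effort = kp * error, clamped to the valid effort range [-1, 1].
    error > 0 means we are too far away, so the effort is positive
    (drive forwards); error < 0 drives backwards.
    """
    error = distance_cm - target_cm
    effort = kp * error
    return max(-1.0, min(1.0, effort))

# 35 cm away -> error = 15 -> effort = 15/15 = 1.0 (full speed forwards)
# 23 cm away -> error = 3  -> effort = 3/15  = 0.2 (gentle approach)
# 20 cm away -> error = 0  -> effort = 0.0   (stop)
```

In the robot’s loop you would feed in the sonar reading and apply the result to both wheels, e.g. effort = parking_effort(sonar.get_distance()) followed by drivetrain.set_effort(effort, effort).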

Wall Following

You’ve built a control law for proportional control that lets you stand off from a wall. Now how do you use it to stay a certain distance from a wall while driving?

[insert video]

Building a control law

What do we want to do when we are too close to the wall? How do we start to move farther from it?

What do we want to do when we’re too far?

Remember:

If we want the robot to drive forward, we can set the motor efforts to the same value. Let’s start with “0.5” maybe.

If we want the robot to turn left slowly, we can slow the left wheel down by a little. If we want it to turn towards the right, we can slow the right wheel down.

Try implementing the control law you develop to follow the wall at a distance of 20 cm. Remember that you can tune values like the “driving forward effort” (as an example, we could use 0.2 instead of 0.5).

You can clip the sensor on the side to get the distance your robot is from the wall.

Problems

Once you have implemented your control law, try to think about what the problems are.

What happens if you start too far? Is the distance you are getting from the wall accurate?

How would you solve these problems? Are there any variables you can tune to try to solve some of them?

Improving Driving Forwards

To implement wall following, you could have used proportional control. If you are too close to the wall, you could slow down the wheel farther from the wall and speed up the closer one; in this way, you turn away from the wall. If you’re too far, you could do the opposite.

[insert video]

In the video, the left wheel is farther from the wall. If the robot is too far from the wall, the left wheel needs to speed up.

The control law can be summed up like this –

\(leftWheelEffort = defaultEffort + K_{p} * error\)

\(rightWheelEffort = defaultEffort - K_{p} * error\)
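In code, the law above might look like the sketch below. It is only an illustration: we assume the rangefinder is clipped on the robot’s right side facing the wall, so error = measured distance - target distance, and the default effort and \(K_{p}\) values are placeholders you would tune.

```python
def wall_follow_efforts(distance_cm, target_cm=20.0, kp=0.02, default_effort=0.3):
    """leftWheelEffort  = defaultEffort + Kp * error
       rightWheelEffort = defaultEffort - Kp * error

    With the wall on the robot's right, a positive error (too far from
    the wall) speeds up the left wheel, turning the robot back toward it.
    """
    error = distance_cm - target_cm
    return (default_effort + kp * error, default_effort - kp * error)

# 25 cm from the wall (too far): error = 5
# -> left = 0.3 + 0.1 = 0.4, right = 0.3 - 0.1 = 0.2, turning toward the wall
```

Each pass through the loop you would compute wall_follow_efforts(sonar.get_distance()) and pass the pair to drivetrain.set_effort.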

Using Proportional Control to drive straight

If we want to drive straight, we want both of the wheels to drive the same distance.

If the left wheel encoder reads that the left wheel has gone “300 clicks”, and the right wheel reads “280 clicks”, it probably means that the robot has driven in a little bit of an arc.

How do we keep these robots driving in a straight line using proportional control? What would we use as the “error”? In previous activities we used the distance between where the robot was and where it needed to be.

What can we use as the error now? Once we find the error, how do we correct for it? Try implementing a function that uses proportional control to drive straight for 30 cm.

After you implement these, try and find where it’s going wrong. How can you fix these problems? Are there any other ways we can correct for error?
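One possible sketch of the idea: use the difference between the encoder counts as the error and apply the same proportional correction as before. The function below is illustrative only - encoder access differs by library, so left_clicks and right_clicks stand in for whatever counts your encoders report, and kp is a value you would tune.

```python
def straight_efforts(left_clicks, right_clicks, base_effort=0.5, kp=0.005):
    """If the left wheel has travelled farther (positive error), slow the
    left wheel and speed up the right, steering the robot back straight."""
    error = left_clicks - right_clicks
    return (base_effort - kp * error, base_effort + kp * error)

# Left encoder at 300 clicks, right at 280: error = 20
# -> left effort 0.5 - 0.1 = 0.4, right effort 0.5 + 0.1 = 0.6
```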

Using the Reflectance sensor

Detecting the line

The XRP robot is not only able to measure distances with its rangefinder, but can use its two analog reflectance sensors on the front underside of the robot to measure how reflective the ground surface is. This is particularly useful for a specific use case - detecting and following lines!

The reflectance sensor:

This sensor consists of an infrared transmitter and a receiver. The transmitter emits infrared light which gets reflected from colored obstacles (for our curriculum, the colored obstacle is the black line which the robot would follow). The receiver absorbs and senses how much infrared light is reflected from the nearby obstacles. Based on the intensity of the absorbed light, logical decisions could be made by the robot to complete certain tasks, for example, following a line.

The API provides two library functions to read information from the reflectance sensors:

left = reflectance.get_left()
right = reflectance.get_right()

Both of these functions return a value that ranges from 0 (white) to 1 (black). However, these sensors are separated by only around a centimeter - why do we need two sensors instead of one? Later in this module, we will discuss how integrating the data from both sensors into our code can yield more accurate results.

Let’s consider a previous exercise - using the rangefinder to drive until some certain distance to the wall. The code looks something like this:

drivetrain.set_effort(1, 1)
while sonar.get_distance() > 10:
    time.sleep(0.1)
drivetrain.stop()

Here, we command the robot to start going forwards, keep polling our rangefinder at quick regular intervals, and when we dip under the 10 cm distance, we break out of the loop and stop the drive motors.

Consider a similar use case for the reflectance sensor: driving forward until a dark line is detected.

Insert Video Here

How could we go about programming this? Well, let’s consider what values the reflectance sensor would read throughout this program:

_images/reflectsensoutputgraph.png

This plots the readings of the left reflectance sensor on the y axis over time on the x axis, where the robot starts on a white surface and then crosses over a black line.

As shown in the plot, our reflectance sensor gives readings close to 0 while initially in the light surface, but then jumps close to 1 when it sees the dark line before going back down. Could we somehow adopt a similar code structure with a while loop as above to achieve this?

The key lies in the condition of the while loop - what causes the while loop to terminate. In this case, we want to check whether the sensor’s reading has risen above a certain value, which would indicate detection of the dark line. Note that we don’t get values that are exactly 0 or 1 - surfaces never fully reflect or absorb light - so we can’t have a while loop condition like: while reflectance.get_left() != 1:

A condition like this means that we would continue going forward until we detected a surface that was perfectly black, which is quite unlikely to happen. Instead, by treating any value over 0.7, for example, as “black”, we give ourselves a considerable margin of error for different variations of darkish surfaces. So, the code should consist of starting the motors, waiting until the reflectance sensor’s value jumps above a threshold (e.g. 0.7), and then stopping the motors.
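The steps just described can be sketched as a function. To keep the sketch self-contained and testable, the hardware calls are passed in as parameters; on the robot these would be reflectance.get_left, drivetrain.set_effort, and drivetrain.stop. The 0.7 threshold and 0.5 effort are example values.

```python
import time

def drive_until_line(get_reflectance, set_effort, stop, threshold=0.7, poll_s=0.05):
    """Drive forward until the reflectance reading rises above the
    threshold (the sensor sees the dark line), then stop the motors."""
    set_effort(0.5, 0.5)                   # start driving forwards
    while get_reflectance() < threshold:   # poll at regular intervals
        time.sleep(poll_s)
    stop()
```

On the XRP, the call would look like drive_until_line(reflectance.get_left, drivetrain.set_effort, drivetrain.stop).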

Mini Challenge: Line Detection

  • Write a program to go forward until a black line is detected. It should look like the video above.

Following the line: On/Off control

Line Following Basics

Now, let’s turn our attention towards one of the core challenges in the final project - following a line. In the project, the robot will need to drive to multiple different locations - but doing this blind can result in the robot drifting to an unexpected direction - the drive motors may not be rotating at the exact same speed resulting in the robot moving in a small arc, and the robot might not even be aimed at the right direction when going forwards.

By following a line, we can ensure that the robot stays in the exact path it should, eliminating drift over time. But how?

Consider using one of the reflectance sensors. As a refresher, it gives a reading from 0 (white) to 1 (black). Assuming that the reflectance sensor is approximately at the center of the robot, it will be at least partially reading the black line when the robot is centered on the line. What type of logic would we need if we wanted to follow the center of the line?

Well, if the reflectance sensor reads black, it means the robot is perfectly on the line, and we’d want to go straight, setting the motors at the same speed. But if the reflectance sensor reads grey or white, it would mean that the robot is partially or completely off the line. We’d want to correct this by steering it back to the center, but does it turn left or right?

Unfortunately, there’s no way to tell. The robot has no way of knowing which direction it is drifting off the line. Instead, try following an edge of the line. If we try to follow the left edge, then there’s two possible states in which the robot reacts.

  • If the sensor reads closer to white, that means we’re too far to the left, so we need to turn slightly to the right.

  • If the sensor reads closer to black, that means we’re too far to the right, so we need to turn slightly to the left.

And that’s it! We want to keep polling (getting the value of) the reflectance sensor quickly, and at each time determine whether it’s closer to white (with a value less than 0.5) or closer to black (with a value greater than 0.5), and depending on the result, either set the motor to turn right (set left motor speed to be faster than right) or turn left (set right motor speed to be faster than left).

This seems like a solution involving an if-else statement. Our condition would be related to whether the value is greater or less than 0.5.

Python Programming Note: example of an if-else statement.

i = 21

if i > 20:
    print("greater than 20")
else:
    print("20 or less")

_images/onoffcontrol.png

Above is an illustration of how we’d want the robot to act based on the reading of the sensor.

Mini Challenge: Line Following

Follow the left edge of a black line by turning right when the sensor reads closer to white, and turning left when the sensor reads closer to black.
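If you want to check your approach, one way to express the decision is as a small function returning the two wheel efforts. The base and turn values here are arbitrary examples to tune, and the function itself is just a sketch, not part of the XRP libraries.

```python
def edge_follow_efforts(reading, base=0.5, turn=0.2):
    """On/off control for following the LEFT edge of the line.

    reading < 0.5 (mostly white): too far left  -> turn right (left wheel faster)
    reading >= 0.5 (mostly black): too far right -> turn left (right wheel faster)
    """
    if reading < 0.5:
        return (base + turn, base - turn)   # turn right
    else:
        return (base - turn, base + turn)   # turn left
```

In the robot’s loop this would look like drivetrain.set_effort(*edge_follow_efforts(reflectance.get_left())).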

Following the Line: Proportional Control with 1 Sensor

The perks of proportional control

Let’s circle back to the previous exercise - following the line by either turning left or right depending on whether the robot is situated to the left or the right of the line. This is a video showing it in action:

INSERT Video

What is immediately striking about following the line in this way? Well, the robot must constantly oscillate in order to stay on the edge of the line, because even the smallest deviation from the edge of the line results in the robot wildly turning to compensate. In addition, it does not react any more forcefully to bigger deviations like when the line starts curving, and as soon as it loses sight of the line, it has no way of recovering.

Instead of only having two cases, it seems like we’d want a whole bunch of cases, for anywhere from a sharp left turn to going perfectly straight to a sharp right turn, and everything in between, based on whether the reflectance sensor is completely on white, grey, black, or somewhere in between.

_images/pcontrol1.png

Having a long chain of if-else statements doesn’t sound fun. Perhaps we can look at this with a completely fresh approach?

From the previous module, we looked at proportional control to smoothly control the robot’s distance to the wall using the distance sensor. Can we use the same concept here?

Calculating error

With proportional control, we have an error value that we want to drive to zero, and some motor output is controlled proportionally to the error in order to minimize it. In the case of maintaining a certain distance to the wall, the error was the difference between the target and actual distance, and the output was the speed of both drive motors. In the case of line following, the error is the difference from 0.5 - since ideally, the robot follows the grey edge of the line and goes straight - and the motor output is how much the robot should turn.

So, we can obtain error value with the following code:

error = reflectance.get_left() - 0.5

Above, we subtract 0.5 to normalize the reflectance value: the error is negative when the robot is too far left and needs to turn right, and positive when the robot is too far right and needs to turn left. Let’s put that code to the test. We can put it in a loop, print out the error at each iteration, and move the robot around the line to see how the error changes. The code is as follows:

while True:
    error = reflectance.get_left() - 0.5
    print("Error:", error)
    time.sleep(0.01)

Implementing proportional control

We want the computed error to determine how much the robot turns.

_images/pcontrol2.png

This image illustrates how the error impacts how much we want to turn. Remember: making the robot turn is simply setting the left and right motors to different speeds. So, the solution is to set a base speed - say, 0.5 - that both motors move at when the error is at 0. Then, have the calculated error influence the difference in speeds between the two motors. As explained through code:

drivetrain.set_effort(base_speed - KP*error, base_speed + KP*error)

This would be run inside the loop. The base_speed represents the average speed of the motors, no matter how much the robot turns. KP scales how much the robot should turn based on the error - a higher KP means the robot will react more violently to small deviations in error.

Let’s do a sanity check to make sure the code makes sense. We assume base_speed = 0.5 and KP = 1. If the reflectance reads whitish-grey and yields a value of around 0.25, the error would be -0.25, meaning that the left motor’s speed is 0.5 - 1*(-0.25) = 0.75, and the right motor’s speed is 0.5 + 1*(-0.25) = 0.25. Motor speeds of 0.75 and 0.25 would indicate a turn to the right, and the code does as desired.

This is a video illustrating line following with one-sensor control. Notice the smoother tracking compared to on/off control, yet the robot is still unable to recover from the last bend, because even a small amount of straying from the line results in the robot completely losing where it is. Also, the KP value was not equal to 1 here; it’s up to you to figure out the best KP value for your bot.

INSERT Video

Mini Challenge: Proportional Line Follow

  • Write code for the robot to follow the line with proportional control, as shown in the video above. Note: this isn’t much more than calculating error as shown in the previous section then integrating the above line of code in a loop.

  • Play around with the value of KP. How does a higher or lower KP affect the amount of oscillation when following the line, and how responsive the robot is to curved lines? What is the optimal value of KP?

Following the Line: Proportional Control with 2 Sensors

Motivation

Line following with proportional control using one sensor is quite an improvement to on/off control. Yet, it’s not perfect - if the reflectance sensor crosses over the center of the line, it’s game over.

_images/pcontrol2s1.png

The issue is - the robot has no way of knowing which side of the line it is following at all! If it sees the right edge of the line, it will assume it still is detecting the left edge, and thus keep turning right past the point of no return!

It would be neat if we could follow the center of the line instead, recognizing both the case where the robot drifts to the left of center and the case where it drifts to the right, and correcting for each. This would greatly increase the “controllable” range of what the reflectance sensor sees and can correctly react to. Conveniently, it seems like we haven’t yet made use of the second reflectance sensor on the right…

If we recorded the reflectance of both sensors as we moved the robot around the line, there are a few major “categories” of behaviors the robot would perform. For a minute, assume the rectangle is a black line and the two red squares are the location of the reflectance sensors.

_images/pcontrol2s2.png

Sensors read ~0 and ~0. The robot cannot tell whether it is on the left or right side of the line.

_images/pcontrol2s3.png

Sensors read ~0 and ~1. Robot knows it is on the left side and turns right.

_images/pcontrol2s4.png

Sensors read ~0.5 and ~0.5. Robot knows it is on the center and goes straight.

The other two major categories you can extrapolate for yourself.

So, how can we effectively combine the readings of the left and right reflectance sensors using proportional control to have the robot follow the line? There’s quite an elegant solution, and I encourage you to try to figure it out yourself before the answer is revealed.

Implementation

The big reveal:

error = reflectance.get_left() - reflectance.get_right()

At first this line of code may not make a lot of sense - but let’s dissect it. Recall our previous convention: a negative error means the robot is too far left and needs to turn right, and a positive error means it is too far right and needs to turn left.

In this case, if the robot is following the left edge of the line, then the left sensor detects close to white while the right sensor detects close to black, and so error = (0) - (1) = -1, and the robot turns right. On the other hand, if the robot is following the right edge of the line, error = (1) - (0) = 1, and the robot turns left. When the robot is right at the center, both sensor values are the same and so the error is 0, and as the robot starts drifting towards either direction, the magnitude of the error increases and thus the robot compensates accordingly.

The most interesting case is when the robot is completely off the line - in this case, both sensors read white, leaving an error of (0) - (0) = 0, and so the robot just goes straight. Given that the robot wouldn’t know which direction to compensate if it was completely off the line, this seems like a reasonable result.
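The case analysis above can be captured in a small classifier (a sketch; the function name and the deadband parameter are our own additions, used only to illustrate the logic):

```python
def line_action(left, right, deadband=0.1):
    """Classify the robot's next move from the two reflectance readings."""
    error = left - right
    if error < -deadband:
        return "turn right"   # left sensor on white, right on black: drifted left
    if error > deadband:
        return "turn left"    # left sensor on black, right on white: drifted right
    return "go straight"      # centered on the line, or completely off it
```

Note that both the centered case (~0.5 and ~0.5) and the off-the-line case (~0 and ~0) yield an error near zero, so both result in driving straight.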

And so, our final code is as follows:

base_speed = 0.5 # average effort of the two motors
KP = 1 # experiment with different values of KP to find what works best

while True:
    error = reflectance.get_left() - reflectance.get_right()
    drivetrain.set_effort(base_speed - KP*error, base_speed + KP*error)

Here’s what that looks like. Note that KP used in this video was not equal to 1:

INSERT Video

Mini Challenge: Proportional Control with Two Sensors

  • Combine what you’ve learned about encoders to create a function that follows the line using two sensors for a given distance and then stops the motors.

  • What KP value is best?

  • Compare one-sensor and two-sensor line following. Which bends in the black line can two-sensor line following handle that one-sensor line following cannot?

Challenge: Sumo-Bots!

_images/sumo.png

It’s time for SUMO bots! Two XRP bots battle it out in the ring in a completely autonomous match to push the other robot outside of the ring.

Robots start facing away from each other in the orientation above, and have one minute to knock the other robot outside. They may utilize distance sensors to detect the presence and location of the other robot, and use the reflectance sensors to keep themselves inside the ring.

Hint: A basic SUMO-bots program may consist of a robot continuously point turning until the enemy robot is found with the distance sensor, then charging at it until the black line is detected, so that the robot stays inside the ring. However, worthy extensions include: aligning the robot perpendicular to the black line so that the robot is not misaligned, and devising an algorithm to attack the opponent robot from the side to avoid a head-on collision and gain more leverage.
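The basic strategy in the hint can be sketched as a decision function that picks the next move from the current sensor readings (the function name and thresholds here are hypothetical; the sketch assumes reflectance reads ~1 on the black boundary line and the distance sensor reports centimeters):

```python
def sumo_action(distance_cm, left_refl, right_refl,
                see_threshold=30.0, line_threshold=0.7):
    """Pick the robot's next move inside the sumo ring."""
    # Boundary line under either sensor: back away so we stay in the ring.
    if left_refl > line_threshold or right_refl > line_threshold:
        return "retreat"
    # Opponent within range of the distance sensor: charge straight at it.
    if distance_cm < see_threshold:
        return "charge"
    # Otherwise keep point-turning to scan for the opponent.
    return "search"
```

In a real program this would run in a loop, with each action translated into drivetrain commands.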

Manipulation

Delivery Challenge

_images/deliveryRobotImage1.png

The delivery robot has to bring packages from one location to another.

Introduction

To help people in need in the current pandemic, you and your colleagues have decided to build an autonomous delivery robot.

Your robot will pick up food and other supplies and deliver them to residences with minimal contact, all to help fight the spread of the coronavirus.

Of course, a full-sized version will have to wait until you get millions of dollars in investments for your robotics company, but that doesn’t mean you can’t have fun dreaming with a scale model here.

Your robot will have an arm for lifting and carrying “bags” of goods and sensors to help it navigate a simple network of “streets.” Complicating things a little, not everything to be delivered is always put in the right spot for easy collection and sometimes road construction might block your path. But you’ll still need to make it work!

The world is counting on you! Can you make it happen?

Objectives

The final project is a chance for you and your teammates to demonstrate that you can apply concepts and strategies from the course to a specific challenge. You will apply theoretical knowledge to the design of your system and use focused testing to improve the performance. The project will culminate in a demonstration where you will prove your robot’s performance. To help us understand more about your system and your process, you will also produce a report describing the system development and an assessment of how well it met your goals.

The successful team will design, build, and demonstrate a robot that can accomplish a prescribed set of tasks. To be successful, you will need to:

  • Identify key performance criteria and develop a strategy for meeting your team’s objectives,

  • Identify key factors that affect performance and use analysis and testing to specify them,

  • Develop and apply a testing strategy to ensure performance,

  • Evaluate the system performance, and

  • Describe the system and your design process.

Challenge

Your challenge is to program your robot to pick up bags of supplies from known and unknown locations and deliver them to specified delivery points. The challenge is constructed so that the tasks have a range of difficulty. For example, locating the free-range bags and scoring them will earn more points, as will being able to navigate around the construction sign. Since you will have to perform multiple runs, reliability will be essential.

Course

The course will consist of a strip of tape (to simulate roads) with a designated place to pick up bags and three specified drop zones. Most bags will be placed at the end of the main road, though some will be placed in a “free range” zone. Figure 1 shows a typical arena, though we realize that there will be some variation in each course.

Bags

You will be expected to build your own delivery bags, for example from paper or card stock and paper clips. You should have at least two bags ready for the demo. You may “recycle” them as the demo progresses. [Insert approximate dimensions of bag that the XRP can pick up]

_images/deliveryRobotMap.png

Figure 1: Diagram of the arena. Individual arenas will vary.

Collection

Most of the bags will be placed on the line at the end of the main road. You may place a piece of tape near the pickup zone to indicate where it starts, but the bags will be placed at different distances from the tape.

To earn points for collecting the free-range bag, you must demonstrate that its position can be arbitrary within the free-range zone (with the exception that you may orient the bag in whatever position you deem most favorable).

Delivery zones

Each delivery zone or platform may be no larger than 10 cm in any horizontal dimension. The platforms for the delivery zones will be marked on the ground. You can make them out of cardboard or any other material. To score points, each container must be placed in a delivery zone and left there (upright) long enough to prove that it is stable.

Operation

You will start with your robot on the main road and a bag in the pickup zone. On command (a press of either button on the robot), your robot will drive to the pickup zone, pick up a bag, and deliver it to the specified address, which will be determined by the button press (e.g., ’GP20’ indicates a delivery to address A, etc.). Your robot will then return to the starting point, stop, and wait for the next command.
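The command-to-delivery flow could be organized around a simple mapping from button to address (a sketch; only the GP20 → address A pairing comes from the rules above - the other pin names are placeholders for your own wiring):

```python
# Only "GP20" -> "A" is given by the rules; the other pins are placeholders.
ADDRESS_BY_BUTTON = {"GP20": "A", "GP21": "B", "GP22": "C"}

def plan_delivery(button):
    """Return the ordered steps for one delivery cycle."""
    address = ADDRESS_BY_BUTTON[button]
    return [
        "drive to pickup zone",
        "pick up bag",
        f"deliver to address {address}",
        "return to start and wait for next command",
    ]
```

Each step would then be implemented with the driving, sensing, and manipulation skills from the earlier modules.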

After each delivery, you will place another bag in the pickup zone and repeat the process, pressing the button to indicate another delivery. You may recycle the bags as much as you wish.

At two random points in the demo, your instructor will place a road construction sign on the main road for 30 seconds, and you will not receive credit for any delivery during which your vehicle hits the sign.

The challenge will last 5 minutes. You may use any of the sensors that you’ve explored in this class to accomplish the challenge. Line following will be an important behavior, but collecting the free-range bags autonomously will require some creativity on your part.

[Video of a sample run coming soon!]

Scoring

[Tune point values over time]

In your run, your team should deliver as many bags as possible, including the “free range” bags. Points will be allocated as follows:

  • You will receive 5 points for every package you deliver to address A or C. However, you may earn at most 50 points (corresponding to 10 packages) per delivery address; i.e., you must deliver to all three addresses to receive the maximum points.

  • You will receive 5 additional points for each free-range bag (20 points max) scored at address B.

  • Your total score will be multiplied by the number of unique addresses you delivered bags to. For example, if you scored 1 bag at address A and 1 bag at address B, your final score would be 2 * (5+5) = 20 points. If you scored a free-range bag on top of that, your score would be 3 * (5+5+5) = 45 points.

  • No points will be awarded for a delivery during which the robot hits the road construction sign.

  • You will lose 2 points each time you have to touch your robot (e.g., to put it back on the line), other than to specify the delivery zone at the start of each delivery.
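The multiplier arithmetic can be checked with a tiny calculator (a sketch; it takes the base points earned at each address and applies the unique-address multiplier, matching the worked examples above):

```python
def total_score(points_by_address):
    """Total = (sum of base points) x (number of addresses scored at)."""
    scored = [p for p in points_by_address.values() if p > 0]
    return sum(scored) * len(scored)
```

For example, 5 base points each at two addresses gives 2 * (5+5) = 20 points, and adding a third scored address gives 3 * (5+5+5) = 45 points.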
