
# 5: Sensing the World

*The Senses.* Photo courtesy of Blogosphere (http://cultura.blogosfere.it)

I see all obstacles in my way.

From the song "I Can See Clearly Now," Johnny Nash, 1972.

In the previous chapter you learned how proprioception (sensing time, stall, and battery level) can be used in writing simple yet interesting robot behaviors. All robots also come equipped with a suite of external sensors (or exteroceptors) that can sense various things in the environment. Sensing makes the robot aware of its surroundings and can be used to define more intelligent behaviors. Sensing is also related to another important concept in computing: input. Computers act on different kinds of information (numbers, text, sounds, images, etc.) to produce useful applications. Acquiring information to be processed is generally referred to as input. In this chapter we will also see how other forms of input can be acquired for a program to process. First, let us focus on the Scribbler's sensors.

## Scribbler Sensors

The Scribbler robot can sense the amount of ambient light, the presence (or absence) of obstacles around it, and also take pictures from its camera. Several devices (or sensors) are located on the Scribbler (see picture on the previous page). Here is a short description of these:

Camera: The camera can take a still picture of whatever the robot is currently “seeing”.

Light: There are three light sensors on the robot, located in the three holes on its front. These sensors detect the level of brightness (or darkness) and can be used to detect variations in ambient light in a room. In addition, using information acquired from the camera, the Scribbler makes available an alternative set of brightness sensors (left, center, and right).

Proximity: There are two sets of these on the Scribbler: IR sensors (left and right) on the front, and obstacle sensors (left, center, and right) on the Fluke dongle. They can be used to detect objects in front of the robot and to its sides.

## Getting to know the sensors

Sensing with the Scribbler's sensors is easy. Once you are familiar with how the sensors behave, you will be able to use them in your programs to design interesting creature-like behaviors for your Scribbler. But first, we must spend some time getting to know these sensors: how to access the information they report, and what this information looks like. As with the internal sensors, Myro provides several functions that can be used to acquire data from each sensor device. Where multiple sensors are available, you also have the option of obtaining data from all the sensors at once or selectively from an individual sensor.

Do This: Perhaps the best way to get a quick look at the overall behavior of all the sensors is to use the Myro function senses:

In [1]:
from Myro import *
init()
senses()

You are using:
Fluke, version 3.0.9
Scribbler2, version 1.1.6
Hello, my name is 'Scribby'!


This results in a window (see picture on right) showing all of the sensor values (except the camera) in real time. They are updated every second. You should move the robot around and see how the sensor values change. The window also displays the values of the stall sensor as well as the battery level. The leftmost value in each of the sensor sets (light, IR, obstacle, and bright) is the value of the left sensor, followed by center (if present), and then the right.

## The Camera

The camera can be used to take pictures of the robot’s current view. As you can see, the camera is located on the Fluke dongle. The view of the image taken by the camera will depend on the orientation of the robot (and the dongle). To take pictures from the camera you can use the takePicture command:

In [4]:
takePicture()

Out[4]:
In [5]:
takePicture("color")

Out[5]:
In [6]:
takePicture("gray")

Out[6]:

takePicture takes a picture and returns a picture object. By default, when no parameters are specified, the picture is in color. Using the "gray" option, you can get a grayscale picture. Example:

In [7]:
p = takePicture()
show(p)

In [8]:
show(takePicture())


Once you take a picture from the camera, you can do many things with it. For example, you may want to see if there is a laptop computer present in the picture. Image processing is a vast subfield of computer science and has applications in many areas. Understanding an image is quite complex, yet something we do quite naturally. For example, in the picture above, we have no problem locating the laptop, the bookcase in the background, and even a case for a badminton racket (or, if you prefer, racquet). The camera is the Scribbler's most complex sensory device, and using it to design behaviors requires considerable computational effort. In the simplest case, the camera can serve as your remote “eyes” on the robot. We may not have mentioned this earlier, but the range of the Bluetooth wireless on the robot is $100$ meters. In Chapter 9 we will learn several ways of using the pictures. For now, if you take a picture from the camera and would like to save it for later use, you can use the Myro command savePicture, as in:

In [9]:
savePicture(p, "office-scene.jpg")


The file office-scene.jpg will be saved in the folder in which you started Calico. You can also use savePicture to save a series of pictures from the camera and turn them into an animated "movie" (an animated GIF image). This is illustrated in the example below.

Do This: First try out all the commands for taking and saving pictures. Make sure that you are comfortable using them. Try taking some grayscale pictures as well. Suppose your robot has ventured into a place where you cannot see it but it is still in communication range with your computer. You would like to be able to look around to see where it is. Perhaps it is in a new room. You can ask the robot to turn around and take several pictures and show you around. You can do this using a combination of rotate and takePicture commands as shown below:

In [10]:
for t in timer(30):
    show(takePicture())
    turnLeft(0.5, 0.2)


That is, take a picture and then turn for $0.2$ seconds, repeating the two steps for $30$ seconds. If you watch the picture window that pops up, you will see successive pictures of the robot’s views. Try this a few times and see if you can count how many different images you are able to see. Next, change the takePicture command to take grayscale images. Can you count how many images it took this time? There is of course an easier way to do this:

In [11]:
N = 0
for t in timer(30):
    show(takePicture())
    turnLeft(0.5, 0.2)
    N = N + 1
    print(N)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18


Now it will output the number of images it takes. You will notice that it is able to take many more grayscale images than color ones. This is because color images contain much more information than grayscale images (see text on right). A $256\times192$ color image requires $256\times192\times3 = 147,456$ bytes of data, whereas a grayscale image requires only $256\times192 = 49,152$ bytes. The more data you have to transfer from the robot to the computer, the longer it takes.
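The byte counts above are plain arithmetic, so you can check them with a few lines of ordinary Python (no robot needed):

```python
# Uncompressed image size: width x height x bytes per pixel
# (3 bytes per pixel for color -- R, G, B -- and 1 for grayscale).
def imageBytes(width, height, color=True):
    if color:
        return width * height * 3
    return width * height

colorSize = imageBytes(256, 192, color=True)
graySize = imageBytes(256, 192, color=False)
print(colorSize)   # 147456
print(graySize)    # 49152
```

A color image is exactly three times larger, which is why the robot can send roughly three times as many grayscale frames in the same amount of time.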

You can also save an animated GIF of the images generated by the robot by accumulating a series of images in a list and passing the list to savePicture. This is shown below:

In [13]:
Pics = []
for t in timer(30):
    pic = takePicture()
    show(pic)
    Pics.append(pic)
    turnLeft(0.5, 0.2)
savePicture(Pics, "office-movie.gif")


First we create an empty list called Pics. Then we append each successive picture taken by the camera to the list. Once all the images are accumulated, we use savePicture to store the entire set as an animated GIF. You will be able to view the animated GIF inside any web browser: just load the file into the browser and it will play the images as a movie.

There are many more interesting ways that one can use images from the camera. In Chapter 9 we will explore images in more detail. For now, let us take a look at Scribbler’s other sensors.

## Light Sensing

The following functions are available to obtain values of light sensors:

getLight() Returns a list containing the three values of all light sensors.

getLight(<POSITION>) Returns the current value in the <POSITION> light sensor. <POSITION> can either be one of 'left', 'center', 'right' or one of the numbers 0, 1, 2. The positions 0, 1, and 2 correspond to the left, center, and right sensors. Examples:

In [15]:
getLight()

Out[15]:
[59773, 51338, 44796]
In [16]:
getLight('left')

Out[16]:
59752
In [17]:
getLight(0)

Out[17]:
59724
In [18]:
getLight('center')

Out[18]:
51328
In [19]:
getLight(1)

Out[19]:
51333
In [20]:
getLight('right')

Out[20]:
44799
In [21]:
getLight(2)

Out[21]:
44795

The original Scribbler's light sensors report values in the range $[0..5000]$; the Scribbler 2 readings shown above use a larger range. In either case, low values imply bright light and high values imply darkness. The above values were taken in ambient light with one finger completely covering the center sensor. Thus, the darker it is, the higher the value reported. In a way, you could even call it a darkness sensor. Later, we will see how we can easily transform these values in many different ways to affect robot behaviors.

It would be a good idea to use the senses function to play around with the light sensors and observe their values. Try to move the robot around to see how the values change. Turn off the lights in the room, or cover the sensors with your fingers, etc.

When you use the getLight function without any parameters, you get a list of three sensor values (left, center, and right). You can assign these to individual variables in many ways:

In [22]:
L, C, R = getLight()

In [23]:
print(L)

59778

In [24]:
Center = getLight("center")

In [25]:
print(Center)

51323


The variables can then be used in many ways to define robot behaviors. We will see several examples of these in the next chapter.

The camera present on the Fluke dongle can also be used as a kind of brightness sensor. This is done by averaging the brightness values in different zones of the camera image. In a way, you can think of it as a virtual sensor. That is, it doesn’t physically exist but is embedded in the functionality of the camera. The function getBright is similar to getLight in how it can be used to obtain brightness values:

getBright() Returns a list containing the three values of all brightness sensors.

getBright(<POSITION>) Returns the current value of the <POSITION> brightness sensor. <POSITION> can either be one of 'left', 'center', 'right' or one of the numbers 0, 1, 2. The positions 0, 1, and 2 correspond to the left, center, and right sensors. Examples:

In [26]:
getBright()

Out[26]:
[144547, 208019, 302703]
In [27]:
getBright('left')

Out[27]:
144779
In [28]:
getBright(0)

Out[28]:
144513
In [29]:
getBright('center')

Out[29]:
207812
In [30]:
getBright(1)

Out[30]:
208322
In [31]:
getBright('right')

Out[31]:
302898
In [32]:
getBright(2)

Out[32]:
302949

The above values are from the camera image of the Firefox poster (see picture above). The values being reported by these sensors can vary depending on the view of the camera and the resulting brightness levels of the image. But you will notice that higher values imply bright segments and lower values imply darkness. For example, here is another set of values based on the image shown below on the right.

In [33]:
getBright()

Out[33]:
[145412, 208559, 302680]

As we can see, a darker image produces lower brightness values. In this image, the right section is brighter than the center or left sections.

It is also important to note the differences in the nature of information being reported by the getLight and getBright sensors. The first one reports the amount of ambient light being sensed by the robot (including the light above the robot). The second one is an average of the brightness obtained from the image seen from the camera. These can be used in many different ways as we will see later.

Do This: The program shown below uses a normalization function to normalize light sensor values in the range $[0.0..1.0]$ relative to the values of ambient light. Then, the normalized left and right light sensor values are used to drive the left and right motors of the robot.

In [42]:
from Myro import *

# record average ambient light values
Ambient = sum(getLight())/3.0

# This function normalizes light sensor values to 0.0..1.0
def normalize(v):
    if v > Ambient:
        v = Ambient
    return 1.0 - v/Ambient

def main():
    # Run the robot for 60 seconds
    for t in timer(60):
        L, C, R = getLight()
        # motors run proportional to light
        motors(normalize(L), normalize(R))
    stop()

main()


Run the above program on your Scribbler robot and observe its behavior. A flashlight will help elicit stronger reactions. When the program is running, try to shine the flashlight on one of the light sensors (left or right). Observe the behavior. Do you think the robot is behaving like an insect? Which one? Study the program above carefully: it uses some new Python features that we will discuss shortly. We will also return to the idea of making robots behave like insects in the next chapter.
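The normalization logic can be exercised away from the robot. In this sketch, Ambient is a stand-in value chosen for illustration; on the robot it would come from getLight():

```python
Ambient = 3000.0   # stand-in ambient reading; on the robot: sum(getLight())/3.0

def normalize(v):
    # readings darker than ambient are clamped, then inverted so that
    # bright light (a low reading) maps to 1.0 and ambient maps to 0.0
    if v > Ambient:
        v = Ambient
    return 1.0 - v / Ambient

print(normalize(0))       # 1.0  (very bright: full motor speed)
print(normalize(3000))    # 0.0  (ambient light: stop)
print(normalize(9999))    # 0.0  (darker than ambient, clamped)
```

This makes the behavior easy to predict: shining a flashlight on one sensor lowers its reading, raises that side's normalized value, and speeds up that motor.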

## Proximity Sensing

The Scribbler has two sets of proximity detectors. There are two infrared (IR) sensors on the front of the robot and there are three additional IR obstacle sensors on the Fluke dongle. The following functions are available to obtain values of the front IR sensors:

getIR() Returns a list containing the values of both IR sensors.

getIR(<POSITION>) Returns the current value of the <POSITION> IR sensor. <POSITION> can either be one of 'left' or 'right' or one of the numbers 0, 1. The positions 0 and 1 correspond to the left and right sensors. Examples:

In [44]:
getIR()

Out[44]:
[1, 1]
In [45]:
getIR('left')

Out[45]:
1
In [46]:
getIR(0)

Out[46]:
1
In [49]:
getIR('right')

Out[49]:
0
In [50]:
getIR(1)

Out[50]:
0

IR sensors return either a 1 or a 0. A value of 1 implies that there is nothing in front of that sensor, and a 0 implies that there is something right in front of it. These sensors can be used to detect the presence or absence of obstacles in front of the robot. The left and right IR sensors are placed far enough apart that they can be used to detect individual obstacles on either side.

Do This: Run the senses function and observe the values of the IR sensors. Place various objects in front of the robot and look at the values of the IR proximity sensors. Take your notebook and place it in front of the robot about two feet away. Slowly move the notebook closer to the robot. Notice how the value of the IR sensor changes from a 1 to a 0 and then move the notebook away again. Can you figure out how far (near) the obstacle should be before it is detected (or cleared)? Try moving the notebook from side to side. Again notice the values of the IR sensors.


The Fluke dongle has an additional set of obstacle sensors on it. These are also IR sensors but behave very differently in terms of the kinds of values they report. The following functions are available to obtain values of the obstacle IR sensors:

getObstacle() Returns a list containing the values of all three obstacle sensors.

getObstacle(<POSITION>) Returns the current value of the <POSITION> obstacle sensor. <POSITION> can either be one of 'left', 'center', or 'right' or one of the numbers 0, 1, or 2. The positions 0, 1, and 2 correspond to the left, center, and right sensors.

Examples:

In [61]:
getObstacle()

Out[61]:
[6400, 6400, 6400]
In [62]:
getObstacle('left')

Out[62]:
6400
In [63]:
getObstacle(0)

Out[63]:
6400
In [64]:
getObstacle('center')

Out[64]:
6400
In [65]:
getObstacle(1)

Out[65]:
6400
In [66]:
getObstacle('right')

Out[66]:
6400
In [67]:
getObstacle(2)

Out[67]:
6400

The values reported by these sensors range from $0$ to $7000$. A $0$ implies there is nothing in front of the sensor, whereas a high number implies the presence of an object. The sensors on the sides can be used to detect the presence (or absence) of walls on the sides.
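A common way to use such a graded reading is to reduce it to a yes/no answer with a threshold. The threshold of 1000 below is an arbitrary illustrative choice (not from Myro); a good value is best found by watching senses() while moving objects around:

```python
# Turn the 0..7000 obstacle reading into a yes/no answer.
# The threshold 1000 is an assumption for illustration only.
def isObstacle(reading, threshold=1000):
    return reading > threshold

print(isObstacle(0))      # False: nothing in front of the sensor
print(isObstacle(6400))   # True: something close by
```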

Do This: Place your Scribbler on the floor, turn it on, start Python, and connect to it. Also connect the game pad controller and start the manual drive operation (gamepad()). Next, issue the senses command to get the real-time sensor display. Our objective here is to really "get into the robot's mind" and drive it around without ever looking at the robot. Also resist the temptation to take a picture. You can use the information displayed by the sensors to navigate the robot. Try driving it to a dark spot, or the brightest spot in the room. Try driving it so it never hits any objects. Can you detect when it hits something? If it does get stuck, try to maneuver it out of the jam! This exercise will give you a pretty good idea of what the robot senses, how it can use its sensors, and the range of behaviors it may be capable of. You will find this exercise a little hard to carry out, but it will give you a good idea of what should go into the brains of such robots when you actually try to design them. We will try to revisit this scenario as we build various robot programs.

Also do this: Try out the program below. It is very similar to the program above that used the normalized light sensors.

In [ ]:
def main():
    # Run the robot for 60 seconds
    for t in timer(60):
        L, R = getIR()
        # motors run proportional to IR values
        motors(R, L)

main()


Since the IR sensors report $0$ or $1$ values, you do not need to normalize them. Also notice that we are putting the left sensor value (L) into the right motor and the right sensor value (R) into the left motor. Run the program and observe the robot’s behavior. Keep a notebook handy and try to place it in front of the robot. Also place it slightly on the left or on the right. What happens? Can you summarize what the robot is doing? What happens when you switch the R and L values to the motors?
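The crossed wiring in the program above can be summarized as a plain function and checked case by case, without the robot:

```python
# motors(R, L): the right IR reading drives the left motor and vice versa.
def irToMotors(L, R):
    return (R, L)   # (left motor speed, right motor speed)

print(irToMotors(1, 1))   # (1, 1): clear ahead, drive forward
print(irToMotors(0, 1))   # (1, 0): obstacle on the left, turn right, away from it
print(irToMotors(1, 0))   # (0, 1): obstacle on the right, turn left, away from it
print(irToMotors(0, 0))   # (0, 0): obstacles on both sides, stop
```

Tabulating the four cases like this shows why the robot avoids obstacles: the side that sees something shuts down the opposite motor, steering the robot away.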

You can see how simple programs like the ones we have seen above can result in interesting automated control strategies for robots. You can also define completely automated behaviors or even a combination of manual and automated behaviors for robots. In the next chapter we will explore several robot behaviors. First, it is time to learn about lists in Python.

## Lists in Python

You have seen above that several sensor functions return lists of values. We also used lists to accumulate a series of pictures from the camera to generate an animated GIF. Lists are a very useful way of collecting a bunch of information and Python provides a whole host of useful operations and functions that enable manipulation of lists. In Python, a list is a sequence of objects. The objects could be anything: numbers, letters, strings, images, etc. The simplest list you can have is an empty list:

In [5]:
[]

Out[5]:
[]

or

In [6]:
L = []
print(L)

[]


An empty list does not contain anything. Here are some lists that contain objects:

In [7]:
N = [7, 14, 17, 20, 27]

In [9]:
Cities = ["New York", "Dar es Salaam", "Moscow"]

In [11]:
FamousNumbers = [3.1415, 2.718, 42]

In [12]:
SwankyZips = [90210, 33139, 60611, 10036]

In [13]:
MyCar = ["Toyota Prius", 2006, "Purple"]


As you can see from above, a list could be a collection of any objects. Python provides several useful functions that enable manipulation of lists. Below, we will show some examples using the variables defined above:

In [14]:
len(N)

Out[14]:
5
In [15]:
len(L)

Out[15]:
0
In [16]:
N + FamousNumbers

Out[16]:
[7, 14, 17, 20, 27, 3.1415, 2.718, 42]
In [17]:
SwankyZips[0]

Out[17]:
90210
In [18]:
SwankyZips[1:3]

Out[18]:
[33139, 60611]
In [19]:
33139 in SwankyZips

Out[19]:
True
In [20]:
19010 in SwankyZips

Out[20]:
False

From the above, you can see that the function len takes a list and returns the length, or the number of objects in the list. An empty list has zero objects in it. You can also access individual elements in a list using the indexing operation (as in SwankyZips[0]). The first element in a list has index 0, and the last element in a list of n elements has index n-1. You can concatenate two lists using the '+' operator to produce a new list. You can also specify a slice in the index operation (as in SwankyZips[1:3]) to refer to the sublist containing elements from index 1 through 2 (one less than 3). You can also form True/False conditions to check whether an object is in a list using the in operator. These operations are summarized in more detail at the end of the chapter.
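All of these operations can be tried in any Python session, using the same lists defined earlier:

```python
N = [7, 14, 17, 20, 27]
FamousNumbers = [3.1415, 2.718, 42]
SwankyZips = [90210, 33139, 60611, 10036]

combined = N + FamousNumbers   # concatenation makes a new list
first = SwankyZips[0]          # indexing starts at 0
middle = SwankyZips[1:3]       # slice: elements at indices 1 and 2

print(len(N))                  # 5
print(combined)                # [7, 14, 17, 20, 27, 3.1415, 2.718, 42]
print(first)                   # 90210
print(middle)                  # [33139, 60611]
print(33139 in SwankyZips)     # True
```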

Besides the operations above, Python also provides several other useful list operations. Here are examples of two useful list operations sort and reverse:

In [21]:
SwankyZips

Out[21]:
[90210, 33139, 60611, 10036]
In [22]:
SwankyZips.sort()
SwankyZips

Out[22]:
[10036, 33139, 60611, 90210]
In [23]:
SwankyZips.reverse()
SwankyZips

Out[23]:
[90210, 60611, 33139, 10036]
In [25]:
SwankyZips.append(19010)
SwankyZips

Out[25]:
[90210, 60611, 33139, 10036, 19010]
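One subtlety worth noting: these operations change the list in place and return None, so they should be called on their own line rather than used inside an expression. A quick sketch makes that explicit:

```python
Zips = [90210, 33139, 60611, 10036]
result = Zips.sort()    # sort modifies Zips itself...
print(result)           # None: sort does not return the sorted list
print(Zips)             # [10036, 33139, 60611, 90210]
Zips.reverse()
print(Zips)             # [90210, 60611, 33139, 10036]
Zips.append(19010)
print(Zips)             # [90210, 60611, 33139, 10036, 19010]
```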

sort rearranges the elements of the list in ascending order, reverse reverses the order of the elements, and append adds an element to the end of the list. Some other useful list operations are listed at the end of the chapter. Remember that lists are also sequences, and hence they can be used to perform repetitions. For example:

In [26]:
Cities = ["New York", "Dar es Salaam", "Moscow"]
for city in Cities:
    print(city)

New York
Dar es Salaam
Moscow


The variable city takes on successive values from the list Cities, and the statements inside the loop are executed once for each value of city. Recall that we wrote counting loops as follows:

for I in range(5):
    <do something>

The function range returns a sequence of numbers:

In [27]:
range(5)

Out[27]:
[0, 1, 2, 3, 4]

Thus the variable I takes on values in the list [0, 1, 2, 3, 4] and, as in the example below, the loop is executed 5 times:

In [28]:
for I in range(5):
    print(I)

0
1
2
3
4


Also recall that strings are sequences. That is, the string:

In [29]:
ABC = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"


is a sequence of 26 letters. You can write a loop that runs through each individual letter in the string and speaks it out as follows:

In [30]:
for letter in ABC:
    speak(letter)

A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z


There are also some useful functions that convert strings into lists. Say we have a string containing a sentence:

In [31]:
sentence = "Would you have any Grey Poupon"


You can convert the string above into a list of individual words using the split operation:

In [32]:
sentence.split()

Out[32]:
['Would', 'you', 'have', 'any', 'Grey', 'Poupon']
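The result of split is an ordinary list, so all of the list operations above apply, including loops. You can also pass split a separator other than whitespace:

```python
sentence = "Would you have any Grey Poupon"
words = sentence.split()

print(len(words))      # 6
print(words[0])        # Would
for word in words:     # a loop runs over the words just like any list
    print(word)

# splitting on a different separator:
print("90210,33139,60611".split(","))   # ['90210', '33139', '60611']
```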

In light of the list operations presented above, review some of the sensing examples from earlier in the chapter. We will be using lists in many examples in the remainder of the text. For now, let us return to the topic of sensing.

## Extrasensory Perception?

You have seen many ways of acquiring sensory information using the robot's sensors. In addition to the robot itself, you should be aware that your computer also has several "sensors" or devices to acquire all kinds of data. For example, you have already seen how, using the ask function, you can input values into your Python programs:

In [33]:
N = ask("Enter a number:")

In [34]:
print(N)

42


Indeed, there are other ways you can acquire information in your Python programs. For example, you can input data from a file in your folder. In Chapter 1 you also saw how you were able to control your robot using the game pad controller. The game pad was actually plugged into your computer and was acting as an input device. Additionally, your computer is most likely connected to the internet, through which you can access many web pages. It is also possible to acquire the content of any web page over the internet. Traditionally, computer scientists refer to all of this as input. In this view, getting sensory information from the robot is just another form of input. Given that we have at our disposal all of the input facilities provided by the computer, we can just as easily acquire input from any of these modalities and combine them with robot behaviors if we wish. Whether you consider this extrasensory perception or not is a matter of opinion. Regardless, being able to get input from a diverse set of sources can make for some very interesting and useful computer and robot applications.

The game pad controller you used in Chapter 1 is a typical device that provides interaction facilities when playing computer games. These devices have been standardized enough that, just like a computer mouse or a keyboard, you can purchase one from a store and plug it into a USB port of your computer. Myro provides some very useful input functions that can be used to get input from the game pad controller. Game pads come in all kinds of flavors and configurations with varying numbers of buttons, axes, and other devices on them. In the examples below, we will restrict ourselves to a basic game pad shown in the picture above.

The basic game pad has eight buttons (numbered 1 through 8 in the picture) and an axis controller (see picture on right). The buttons can be pressed or released (on/off), represented by 1 (for on) and 0 (for off). The axis can be pressed in many different orientations, represented by a pair of values (for the x-axis and y-axis) that range from -1.0 to 1.0, with [0.0, 0.0] representing no activity on the axis. Two Myro functions are provided to access the values of the buttons and the axis:

getGamepad(<device>)

getGamepadNow(<device>) returns the values indicating the status of the specified <device>. <device> can be "axis" or "button".

The getGamepad function returns only after <device> has been used by the user. That is, it waits for the user to press or use that device and then returns the values associated with the device at that instant. getGamepadNow does not wait and simply returns the device status right away. Here are some examples:

In [68]:
getGamepadNow("axis")

Out[68]:
[0.0, 0.0]
In [69]:
getGamepad("axis")

Out[69]:
[0.0, 1.0]
In [70]:
getGamepadNow("button")

Out[70]:
[0, 0, 0, 0, 0, 0, 0, 0]

Both getGamepad and getGamepadNow return the same kinds of values: axis values are returned as a list [x-axis, y-axis] (see picture on right for orientation), and button values are returned as a list of 0's and 1's. The first value in the list is the status of button #1, followed by buttons 2, 3, and so on. See the picture above for button numbering.
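Because the button status is just a list of 0's and 1's, it can be processed with ordinary list operations. Here is a small helper (not part of Myro, introduced only for illustration) that converts the list into the 1-based numbers of the currently pressed buttons:

```python
# Convert a button-status list like [1, 0, 1, 0, 0, 0, 0, 0]
# into the 1-based button numbers that are pressed.
def pressedButtons(buttons):
    numbers = []
    for i in range(len(buttons)):
        if buttons[i] == 1:
            numbers.append(i + 1)
    return numbers

print(pressedButtons([0, 0, 0, 0, 0, 0, 0, 0]))   # []
print(pressedButtons([1, 0, 1, 0, 0, 0, 0, 0]))   # [1, 3]
```

On the robot you would call it as pressedButtons(getGamepadNow("button")).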

Do This: Connect the game pad controller to your computer, start Python, and import the Myro module. Try out the game pad commands above and observe the values. Here is another way to better understand the operation of the game pad and the game pad functions:

In [72]:
for t in timer(10):
    print(getGamepadNow("button"))

[1, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[1, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[1, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[1, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 1, 0, 0, 0, 0]
[0, 0, 1, 1, 0, 0, 0, 0]
[0, 0, 1, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 0, 0, 0, 0, 0, 0]
[0, 1, 1, 0, 0, 0, 0, 0]
[0, 1, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[1, 0, 0, 0, 0, 0, 0, 0]
[1, 1, 0, 0, 0, 0, 0, 0]
[0, 1, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 0, 0, 0, 0, 0, 0]
[0, 1, 1, 0, 0, 0, 0, 0]
[0, 0, 1, 0, 0, 0, 0, 0]
[0, 1, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 1, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[1, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 1, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 1, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[1, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[1, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 1, 0, 0, 0, 0, 0]


Try out different button combinations. What happens when you press more than one button? Repeat the above for axis control and observe the values returned (keep the axes diagram handy for orientation purposes).

The game pad controller can be used for all kinds of interactive purposes, especially for robot control as well as in writing computer games. Let us write a simple game pad based robot controller. Enter the program below and run it.

In [75]:
def main():
    # A simple game pad based robot controller
    for t in timer(30):
        X, Y = getGamepadNow("axis")
        motors(X, Y)

main()


The program above will run for $30$ seconds. In that time it will repeatedly sample the values of the axis controller and, since those values are in the range $-1.0..1.0$, use them to drive the motors. When you run the program, observe how the robot moves in response to pressing various parts of the axis. Do the motions of the robot correspond to the directions shown in the game pad picture on the previous page? Try changing the command from motors to move (recall that move takes two values: translate and rotate). How does it behave with respect to the axes? Try changing the command to move(-X, -Y). Observe the behavior.
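The control step in the loop can be sketched off-robot. The clamp here is an extra precaution not in the original program, guarding against any out-of-range readings before they reach the motors:

```python
def clamp(v, lo=-1.0, hi=1.0):
    # keep a value inside the range the motors accept
    return max(lo, min(hi, v))

def axisToMotors(X, Y):
    # the axis pair can feed motors() directly, since both use -1.0..1.0
    return (clamp(X), clamp(Y))

print(axisToMotors(0.0, 0.5))    # (0.0, 0.5)
print(axisToMotors(2.0, -3.0))   # (1.0, -1.0): out-of-range values clamped
```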

As you can see from the simple example above, it is easy to combine input from a game pad to control your robot. Can you expand the program above to behave exactly like the gamepad controller function you used in Chapter 1? (See Exercise 6).

## The World Wide Web

If your computer is connected to the internet, you can also use Python facilities to access the content of any web page and use it as input to your program. Web pages are written using markup languages like HTML and so when you access the content of a web page you will get the content with the markups included. In this section we will show you how to access the content of a simple web page and print it out. Later we will see how you could use the information contained in it to do further processing.

Go to a web browser and take a look at the web page:

http://www.fi.edu/weather/data/jan07.txt

This web page is hosted by the Franklin Institute of Philadelphia and contains recorded daily weather data for Philadelphia for January 2007. You can navigate from the above address to other pages on the site to look at daily weather data for other dates (the data goes back to 1872!). Below, we will show you how, using a Python library called urllib, you can easily access the content of any web page. The urllib library provides a function called urlopen with which you can access any web page on the internet as follows:

In [37]:
from urllib import *

In [40]:
Data = urlopen("http://learn.fi.edu/weather/data/jan07.txt")

In [41]:
print(Data.read())

January 2007

Day	Max	Min	Liquid	Snow	Depth
1	57	44	1.8	0	0
2	49	40	0	0	0
3	52	35	0	0	0
4	58	41	0.02	0	0
5	62	52	0.15	0	0
6	70	56	T	0	0
7	57	43	0.4	0	0
8	55	38	0.75	0	0
9	44	34	0	0	0
10	39	26	0	0	0
11	48	34	0.02	0	0
12	59	47	T	0	0
13	54	47	0.03	0	0
14	63	43	0	0	0
15	63	43	0	0	0
16	61	27	0	0	0
17	33	20	0	0	0
18	36	26	0.2	0.2	T
19	41	32	0	0	0
20	33	24	T	0	0
21	28	20	0.1	0.5	0.5
22	34	25	T	0.3	T
23	37	30	T	0.2	0
24	39	32	0	0	0
25	35	17	T	T	0
26	25	9	0	0	0
27	42	24	0	0	0
28	42	30	0.7	0.4	0.4
29	30	22	0	0	0
30	37	22	0.01	0.2	0.2
31	31	22	0	0	0
#days	31
Sum	1414	1005	4.18	1.80	1.10
Avg	45.6	32.4
Avg Month	39.0

Calendar Day observation (Midnight to Midnight Local) for Temperature
4:30pm to 4:30pm Local for Precipitation



The following two commands are important:

Data = urlopen("http://learn.fi.edu/weather/data/jan07.txt")
print(Data.read())

The first command uses the function urlopen (imported from urllib) to establish a connection between your program and the web page. The second command issues a read on that connection. Whatever is read from the web page is printed out as a result of the print command.
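Once the page text has been read, ordinary string and list operations can pull out the numbers. The sample string below mimics two of the tab-separated data lines from the page above, so this sketch runs without a network connection:

```python
# Two lines in the format of jan07.txt: Day, Max, Min, Liquid, Snow, Depth
sample = "1\t57\t44\t1.8\t0\t0\n2\t49\t40\t0\t0\t0"

maxTemps = []
for line in sample.split("\n"):
    fields = line.split("\t")          # columns are tab-separated
    maxTemps.append(int(fields[1]))    # second column: daily maximum temperature

print(maxTemps)   # [57, 49]
```

The same two split calls applied to Data.read() would extract the full month of readings.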

## A little more about Python functions

Before we move on, it would be good to take a little refresher on writing Python commands/functions. In Chapter 2 we learned that the basic syntax for defining new commands/functions is:

def <FUNCTION NAME>(<PARAMETERS>):
    <SOMETHING>
    ...
    <SOMETHING ELSE>



The Myro module provides you with several useful functions (forward, turnRight, etc.) that enable easy control of your robot's basic behaviors. Additionally, using the syntax above, you learned to combine these basic behaviors into more complex behaviors (like wiggle, yoyo, etc.). By using parameters you can further customize the behavior of functions by providing different values for the parameters (for example, forward(1.0) will move the robot faster than forward(0.5)). You should also note a crucial difference between the movement commands like forward, turnLeft, and commands that provide sensory data like getLight or getStall, etc. The sensory commands always return a value whenever they are issued. That is:

In [ ]:
getLight('left')

In [ ]:
getStall()


Commands that return a value when they are invoked are called functions since they actually behave much like mathematical functions. None of the movement commands return any value, but they are useful in other ways. For instance, they make the robot do something. In any program you typically need both kinds of functions: those that do something but do not return anything as a result; and those that do something and return a value. In Python all functions actually return a value. You can already see the utility of having these two kinds of functions from the examples you have seen so far. Functions are an integral and critical part of any program and part of learning to be a good programmer is to learn to recognize abstractions that can then be packaged into individual functions (like drawPolygon, or degreeTurn) which can be used over and over again.

## Writing functions that return values¶

Python provides a return-statement that you can use inside a function to return the results of a function. For example:

In [42]:
def triple(x):
    # Returns x * 3
    return x * 3


The function above can be used just like the ones you have been using:

In [43]:
triple(3)

Out[43]:
9
In [44]:
triple(5000)

Out[44]:
15000

The general form of a return-statement is:

return <expression>



That is, the function in which this statement is encountered will return the value of the <expression>. Thus, in the example above, the return-statement returns the value of the expression x * 3, as shown in the example invocations. By giving different values for the parameter x, the function simply triples them. This is the idea we used in normalizing light sensor values in the example earlier, where we defined the function normalize to take in light sensor values and normalize them to the range $0.0..1.0$ relative to the observed ambient light values:

In [45]:
# This function normalizes light sensor values to 0.0..1.0
def normalize(v):
    if v > Ambient:
        v = Ambient
    return 1.0 - v / Ambient


In defining the function above, we are also using a new Python statement: the if-statement. This statement enables simple decision making inside computer programs. The simplest form of the if-statement has the following structure:

if <CONDITION>:
    <do something>
    <do something else>
    ...



That is, if the condition specified by <CONDITION> is True, then whatever is specified in the body of the if-statement is carried out. If the <CONDITION> is False, all the statements in the body of the if-statement are skipped over.
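To see the if-statement and the return-statement working together, here is the normalize function again as a self-contained sketch. The value of Ambient below is just an assumed reading for illustration; on the robot it would come from an actual light sensor observation:

```python
# Assumed ambient light reading, for illustration only.
Ambient = 300

def normalize(v):
    # Readings larger than Ambient are clamped to Ambient by the
    # if-statement, so the result always stays in the range 0.0..1.0.
    if v > Ambient:
        v = Ambient
    return 1.0 - v / Ambient

print(normalize(300))   # a reading equal to ambient  -> 0.0
print(normalize(150))   # half the ambient reading    -> 0.5
print(normalize(600))   # clamped by the if-statement -> 0.0
```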

Functions can have zero or more return-statements. Some of the functions you have written, like wiggle, do not have any. Technically, when a function does not have a return-statement that returns a value, the function returns a special value called None, which is already defined in Python.
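You can see the None value for yourself by printing the result of a function that has no return-statement. The function below is a made-up example:

```python
def greet():
    # Does something (prints a message) but has no return-statement.
    print("Hello!")

result = greet()   # greet does its printing, then returns None
print(result)      # -> None
```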

Functions, as you have seen, can be used to package useful computations and can be used over and over again in many situations. Before we conclude this section, let us give you another example of a function. Recall from Chapter 4 the robot behavior that enables the robot to go forward until it hits a wall. One of the program fragments we used to specify this behavior is shown below:

In [76]:
while not getStall():
    forward(1)
stop()


In the above example, we are using the value returned by getStall to help us decide whether to continue going forward or to stop. We were fortunate here that the value returned is directly usable in our decision making. Sometimes you have to do a little interpretation of sensor values to figure out what exactly the robot is sensing; you will see that in the case of light sensors. Even though the above statements are easy to read, we can make them read even better by writing a function called stuck() as follows:

In [77]:
def stuck():
    # Is the robot stalled?
    # Returns True if it is and False otherwise.
    return getStall() == 1


The function above is simple enough, since getStall already gives us a usable value (0/False or 1/True). But now if we were to use stuck to write the robot behavior, it would read:

In [78]:
while not stuck():
    forward(1)
stop()
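Since this loop needs a real robot to run, here is a simulated sketch of the same control flow. The getStall below is a stand-in that pretends the robot stalls on its fourth reading; it is not the real Myro function:

```python
# Simulated stall sensor: the robot "stalls" on the fourth reading.
readings = [0, 0, 0, 1]

def getStall():
    return readings.pop(0)

def stuck():
    # Is the robot stalled?
    return getStall() == 1

steps = 0
while not stuck():
    steps = steps + 1                    # stands in for forward(1)
print("stopped after", steps, "steps")   # stands in for stop()
```

Run as written, the loop takes three "steps" before the simulated stall ends it, which is exactly the behavior the robot version exhibits when it hits a wall.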


## Summary¶

In this chapter you have learned all about obtaining sensory data from the robot's perceptual system to do visual sensing (pictures), light sensing, and proximity sensing. The Scribbler provides a rich set of sensors that can be used to design interesting robot behaviors. You also learned that sensing corresponds to the basic input operation in a computer, and how to get input from a game pad, the World Wide Web, and from data files. Programs can make creative use of the available input modalities to define robot behaviors, computer games, and even to process data. In the rest of the book you will learn how to write programs that make use of these input modalities in many different ways.

## Myro Review¶

getBright()

Returns a list containing the three values of all light sensors.

getGamepad(<device>)

getGamepadNow(<device>)

Returns the values indicating the status of the specified <device>. <device> can be "axis" or "button". The getGamepad function waits for an event before returning values. getGamepadNow immediately returns the current status of the device.

getIR()

Returns a list containing the two values of all IR sensors.

getIR(<POSITION>)

Returns the current value in the <POSITION> IR sensor. <POSITION> can either be one of 'left' or 'right' or one of the numbers 0, 1.

getLight()

Returns a list containing the three values of all light sensors.

getLight(<POSITION>)

Returns the current value in the <POSITION> light sensor. <POSITION> can either be one of 'left', 'center', 'right' or one of the numbers 0, 1, 2. The positions 0, 1, and 2 correspond to the left, center, and right sensors.

getObstacle()

Returns a list containing the three values of all obstacle sensors.

getObstacle(<POSITION>)

Returns the current value in the <POSITION> obstacle sensor. <POSITION> can either be one of 'left', 'center', or 'right' or one of the numbers 0, 1, or 2.

savePicture(<picture>, <file>)

savePicture([<picture1>, <picture2>, …], <file>)

Saves the picture in the file specified. The extension of the file should be ".gif" or ".jpg". If the first parameter is a list of pictures, the file name should have an extension ".gif" and an animated GIF file is created using the pictures provided.

senses()

Displays Scribbler’s sensor values in a window. The display is updated every second.

show(<picture>)

Displays the picture in a window. You can click the left mouse button anywhere in the window to display the (x, y) and (r, g, b) values of that point in the window's status bar.

takePicture()

takePicture("color")

takePicture("gray")

Takes a picture and returns a picture object. When no parameters are specified, the picture is in color.

## Python review¶

if <CONDITION>:
    <statement-1>
    ...
    <statement-N>



If the condition evaluates to True, all the statements are performed. Otherwise, all the statements are skipped.

return <expression>

Can be used inside any function to return the result of the function.

<string>.split()

Splits <string> into a list of its whitespace-separated parts.
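For example, applied to one line of the weather data from earlier in the chapter (the fields are separated by tabs, and split with no argument breaks on any whitespace):

```python
line = "1\t57\t44\t1.8\t0\t0"   # day 1 of the January 2007 data
fields = line.split()
print(fields)          # -> ['1', '57', '44', '1.8', '0', '0']
print(int(fields[1]))  # -> 57, the day's maximum temperature
```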

urlopen(<URL>)

Establishes a stream connection with the <URL>. This function is to be imported from Python's urllib.request module.

<stream>.read()

Reads the entire contents of the <stream> as a string.

### Lists:¶

[] is an empty list.

<list>[i]

Returns the ith element in the <list>. Indexing starts from 0.

<value> in <list>

Returns True if <value> is in the <list>, False otherwise.

<list1> + <list2>

Concatenates <list1> and <list2>.

len(<list>)

Returns the number of elements in a list.
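A few of these list operations at work on a small list (the numbers are the first three maximum temperatures from the weather data):

```python
temps = [57, 49, 52]
print(temps[0])        # -> 57   (indexing starts at 0)
print(49 in temps)     # -> True
print(temps + [58])    # -> [57, 49, 52, 58]
print(len(temps))      # -> 3
```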

range(N)

Returns a list of numbers from 0..N-1

range(N1, N2)

Returns a list of numbers starting from N1..(N2-1)

range(N1, N2, N3)

Returns a list of numbers starting from N1 and less than N2 incrementing by N3.
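In current versions of Python, range produces its values lazily rather than building a list right away; wrap it in list() to see all the values at once:

```python
print(list(range(5)))         # -> [0, 1, 2, 3, 4]
print(list(range(2, 6)))      # -> [2, 3, 4, 5]
print(list(range(1, 10, 3)))  # -> [1, 4, 7]
```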

<list>.sort()

Sorts the <list> in ascending order.

<list>.append(<value>)

Appends the <value> at the end of <list>.

<list>.reverse()

Reverses the elements in the <list>.
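Note that these three list methods modify the list in place rather than returning a new list:

```python
lows = [44, 40, 35]   # first three minimum temperatures
lows.append(41)       # lows is now [44, 40, 35, 41]
lows.sort()           # lows is now [35, 40, 41, 44]
lows.reverse()        # lows is now [44, 41, 40, 35]
print(lows)           # -> [44, 41, 40, 35]
```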

## Exercises¶

1) The numbers assigned to the variable FamousNumbers in this chapter all have names. Can you find them? What other famous numbers do you know?

2) Besides text, the speak command can also vocalize numbers. Try speak(42) and also speak(3.1419). Try some really large whole numbers, like speak(4537130980). How is it vocalized? Can you find out the limits of numerical vocalization?

3) The Fluke camera returns pictures that are $256\times192 = 49,152$ pixels. Typical digital cameras are often characterized by the number of pixels they use for images. For example, $8$ megapixels. What is a megapixel? Do some web search to find out the answer.

4) All images taken and saved using the Fluke camera can also be displayed in a web page. In your favorite browser, use the Open File option (under the File menu) and navigate to a saved picture from the Scribbler. Select it and view it in the browser window. Try it with an animated GIF image file.

5) What does the following expression produce?

L = [5] * 10

6) Modify the game pad input program from this chapter to make the axis controller behave so that when the top of the axis is pressed, the robot goes forward and when the bottom of the axis is pressed, it goes backward.

 Previous: [Chapter 4](Chapter 4.ipynb) [Learning Computing with Robots](Learning Computing with Robots.ipynb) Next: [Chapter 6](Chapter 6.ipynb)