“The promise of IoT is smart everything” (ref). In fulfilling this promise for robotics, localization is the next frontier, and we wanted a solution that requires no actual human presence. Our other motivation was to explore an approach different from the current solutions available for indoor localization. WigWag is also working in this domain and was generous enough to sponsor our project.
While researching current presence detection technologies and mapping algorithms, the team found that many implementations rely on cameras. One typical implementation uses the Xbox Kinect and SLAM algorithms to map the floor; it is very accurate and can render real-life images. However, because of our hardware limitations, using cameras was not an option for our project. Localization can also be achieved using radio frequency (RF) techniques such as fingerprinting, maximum-likelihood localization, sequence-based localization, and ultra-wideband-based localization. During our research, we also found attempts to implement mapping using the Roomba's internal odometry; however, these showed that the Roomba's odometry is so inaccurate that mapping from it alone is nearly impossible.
Roomba Odometry Model Result from [1]
We have seen research in RF-based localization and robot mapping, but we wanted to combine the two concepts into one simple, integrated solution that uses relatively cheap and readily available off-the-shelf consumer products. Using the iRobot Create 2, UC Berkeley's Tmote Skys, and Estimote Bluetooth beacons, we created a proof-of-concept robot to map the obstacles in a typical consumer's living room. By implementing every component as a Python module, other users can easily import our project, modify the algorithms, and test their code in real time. This provides people with a playground for scientific discovery and perhaps a starting point for WigWag to build a Home IoT solution for living room mapping.
To calibrate the Tmotes, we place one Tmote Sky client at a distance d1 (we usually use 1m as the first reference distance) from the Tmote Sky server (the mote that collects RSSI values). After collecting 100 samples, we move the client to a distance d2 and collect 100 more samples. Since the RSSI values vary considerably from Tmote to Tmote, we apply a median filter and an average filter to obtain a single RSSI value for each reference distance. We then use the simplified path-loss model RSSI(dBm) = -10*n*log10(d) + A to find the parameters n and A from the distances and RSSI values. The parameter n is found as follows:
n = (RSSI1 - RSSI2) / (10*log10(d2) - 10*log10(d1))
We can then plug n back into the model to find the parameter A. Note that in our experiments, each Tmote showed a very different signal strength at the same distance, so we had to calibrate each Tmote individually. We therefore wrote a script that maps each Tmote's MAC address to its n and A values. This gives more accurate results in the next stage of the work, trilateration.
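As a concrete illustration, here is a minimal Python sketch of this calibration step, assuming the raw samples for each reference distance are already collected into lists (the function names are ours, not part of any library):

import math
import statistics

def filter_rssi(samples, window=5):
    # Median-filter the raw samples in sliding windows, then average the result
    medians = [statistics.median(samples[i:i + window])
               for i in range(len(samples) - window + 1)]
    return sum(medians) / len(medians)

def calibrate(samples1, d1, samples2, d2):
    # Fit n and A in RSSI = -10*n*log10(d) + A from two reference distances
    rssi1 = filter_rssi(samples1)
    rssi2 = filter_rssi(samples2)
    n = (rssi1 - rssi2) / (10 * math.log10(d2) - 10 * math.log10(d1))
    A = rssi1 + 10 * n * math.log10(d1)  # plug n back into the model at d1
    return n, A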
We use the method of trilateration to locate the Roomba. In this project, we assume the room is square or rectangular in shape and that the Roomba will not go out of its boundary. First, we place one Tmote in each corner of the room. Then we put the server/base station (the Tmote that collects RSSI samples) on top of the Roomba. We collect 50 RSSI samples for each reference Tmote (the Tmotes in the corners), then apply the median and average filters to smooth the result, giving us the "real" RSSI value from the Roomba to each reference Tmote. Traditionally, the trilateration method requires at least three reference points to locate the base station. In our implementation, however, we can use only two reference points, as the following figure shows:
Illustration on Trilateration
We use each adjacent pair of reference Tmotes to locate the Roomba. Normally, trilateration with two references yields two candidate locations, as the figure shows. However, we know that the candidate outside the walls is invalid, so the candidate inside the room must be the correct location. To implement the trilateration algorithm, we first input the size of the room. Given the "real" RSSI values for one pair of Tmotes, we iterate through every candidate location (x, y) in the room and calculate the estimated RSSI values using our path-loss model. We choose the location whose estimated RSSI values have the smallest mean-squared error (MSE) relative to the "real" RSSI values. We repeat the same procedure for the other three adjacent pairs of Tmotes to improve the accuracy of the estimated Roomba location.
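A minimal sketch of this brute-force search, assuming each reference Tmote's calibrated n and A values are available (all names here are ours):

import math

def estimate_rssi(x, y, tx, ty, n, A):
    # Path-loss model prediction for a reference Tmote at (tx, ty)
    d = max(math.hypot(x - tx, y - ty), 0.01)  # avoid log(0)
    return -10 * n * math.log10(d) + A

def locate(width, length, tmotes, measured, step=0.1):
    # Grid-search the room for the (x, y) minimizing MSE against measured RSSI.
    # tmotes: list of (tx, ty, n, A); measured: list of filtered RSSI values
    best, best_err = None, float('inf')
    x = 0.0
    while x <= width:
        y = 0.0
        while y <= length:
            err = sum((estimate_rssi(x, y, *t) - m) ** 2
                      for t, m in zip(tmotes, measured)) / len(tmotes)
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best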
As an experiment, we calibrated one Tmote and fit the model using RSSI samples at 1m and 8m. We then took RSSI samples at 4m and calculated the model's estimate of the true distance. Our results are plotted below. The error bars represent 95% confidence intervals for the true median value.
Testing the accuracy of the RSSI-distance model created after calibrating a Tmote. The error bars represent 95% confidence.
Our model performs worst when it has to extrapolate an estimate for the 1m and 8m distances, rather than interpolate the middle distances like 2m, 3m, and 4m. That is, it performs poorly when we develop the model using RSSI at 1m and 4m, then estimate using RSSI at 8m. The largest error we observed was about +/- 2.5m.
Below is a box plot of our RSSI samples. We took 100 samples at each distance. The true received power in dBm can be calculated through the calibration curves provided in the Tmote Sky datasheet.
Box plot of the 100 RSSI samples taken at each distance.
We notice that shorter distances produce many more outliers in the RSSI measurements, and that at every distance the RSSI range is large and overlaps with that of other distances. This makes Tmote-based distance estimation inaccurate.
Because of the inaccurate results in our experiment, we looked for ways to improve the accuracy. First, we found that the Tmote's transmit power is lower and less stable on battery than on AC power or a computer's USB port, so we powered each Tmote from AC or USB to strengthen its signal. In addition, the antenna's orientation also affects the RSSI received at the same distance from the same Tmote, so we decided to rotate the Roomba while taking RSSI samples. These were the best improvements we could make given our circumstances, and we carried them into our mapping stages.
For background on indoor localization using Estimote beacons, refer to the Estimote Indoor Localization section.
Download the Indoor map example app from GitHub. The SDK is available as EstimoteIndoorSDK in CocoaPods. To install it, put the following line in your Podfile: pod 'EstimoteIndoorSDK'.
To install the frameworks related to MQTT, add MQTTKit (https://github.com/mobile-web-messaging/MQTTKit) to the Podfile as well.
After this, save the Podfile, run pod install at the terminal, and open IndoorMap.xcworkspace to build and run the example.
After this, the IndoorMap app has all the frameworks required for Estimote indoor localization and MQTT. Now extend the app to publish the x,y coordinates from the iOS app to the MQTT broker hosted at iot.eclipse.org on the topic MQTTEstimote. (For a working MQTT example, refer to https://github.com/jmesnil/MQTTExample.)
To call an Objective-C class from a Swift project, follow http://stackoverflow.com/questions/24002369/how-to-call-objective-c-code-from-swift
The final iOS app build can be seen at: https://github.com/DougMHu/roomba-estimotes-localization
One can receive the x,y coordinates on a laptop by running the following Python script: mqtt_paho_subscribe.py
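A minimal equivalent of mqtt_paho_subscribe.py, using the paho-mqtt client with the broker and topic named above (the exact payload format of the x,y message is an assumption):

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("MQTTEstimote")   # topic the iOS app publishes to

def on_message(client, userdata, msg):
    # Payload format is an assumption, e.g. "x,y" as a comma-separated string
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("iot.eclipse.org", 1883, 60)   # broker used in this project
client.loop_forever()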
To adapt the app for an individual user, one needs to change the App ID, App Token, and Location Identifier in ViewController.swift, using the values from the user's Estimote cloud account (Estimote cloud).
One can also change the beacon settings (advertising interval, broadcasting power) through the Estimote cloud and the Estimote app. To change them through the Estimote cloud:
The user can send the Roomba commands via the Roomba's serial port. The Roomba only understands commands in a packet structure specified in iRobot's Open Interface Specification (see References). Below is a summary table of the available commands, from driving the wheels to reading the bump sensor. Each packet contains the command's opcode followed by data bytes whenever applicable. For example, to rotate the Roomba, one could send a "Drive Direct" command with right velocity = -200 mm/s and left velocity = +200 mm/s. In decimal form, the packet we send looks like this:
[opcode][right velocity][left velocity]
[145][-200][200]
But in reality, we are sending the bits:
[opcode][right velocity][left velocity]
[1001 0001][1111 1111 0011 1000][0000 0000 1100 1000]
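As a sketch, assuming pyserial and the Open Interface default baud rate of 115200 (the serial port name is machine-specific), this packet can be built and sent like so:

import serial
import struct

ser = serial.Serial('/dev/ttyUSB0', baudrate=115200)  # port name is an assumption

# "Drive Direct": opcode 145, followed by the right and left wheel velocities
# as signed 16-bit big-endian integers (two's complement, as shown above)
packet = struct.pack('>Bhh', 145, -200, 200)
ser.write(packet)   # sends the bytes 91 FF 38 00 C8 (hex)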
Summary Table of all the commands a user can send to the Roomba, taken from iRobot's Open Interface Specification (See References)
The Mapping API provides an interface between the user and the cryptic packet interface explained in the previous section. For example, it provides two functions, "drive(distance, speed)" and "rotate(angle, speed)", for moving the Roomba. These only require the user to input a distance [mm], angle [degrees], and speed [mm/s]; each function handles creating the appropriate packet structure and sending it to the Roomba via serial.
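A simplified sketch of what such a wrapper can look like (an illustration, not our exact module; the axle length constant is an assumption):

import math
import struct
import time

WHEEL_BASE_MM = 235.0   # approximate Create 2 axle length; an assumption

def drive(ser, distance, speed):
    # Drive straight `distance` mm at `speed` mm/s, then stop
    velocity = speed if distance >= 0 else -speed
    ser.write(struct.pack('>Bhh', 145, velocity, velocity))
    time.sleep(abs(distance) / speed)
    ser.write(struct.pack('>Bhh', 145, 0, 0))   # stop both wheels

def rotate(ser, angle, speed):
    # Rotate in place by `angle` degrees (positive = counter-clockwise)
    arc = math.radians(abs(angle)) * WHEEL_BASE_MM / 2   # mm each wheel travels
    sign = 1 if angle >= 0 else -1
    ser.write(struct.pack('>Bhh', 145, sign * speed, -sign * speed))
    time.sleep(arc / speed)
    ser.write(struct.pack('>Bhh', 145, 0, 0))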
The Mapping API also provides a higher-level interface that accomplishes the complicated tasks needed by our mapping algorithm. For example, it provides two functions, "discover(distance)" and "retreat()". The first tries to drive the Roomba forward by the input distance; if the Roomba senses a bump, it stops itself and the function returns "True", otherwise it completes the step and returns "False". The second reads the Roomba's sensors to determine how far it has traveled forward, then drives the Roomba in reverse by that same distance. These more complicated functions require the program to continuously poll the Roomba's sensor data, which is explained in the next section.
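Sketched at the same level of abstraction (the roomba object and its sensor-reading methods are hypothetical stand-ins for the polling described in the next section):

def discover(roomba, distance, speed=100):
    # Step forward by `distance` mm; return True if a bump stopped us early
    roomba.start_driving(speed)
    while roomba.distance_traveled() < distance:
        if roomba.bumped():       # updated by the background sensor thread
            roomba.stop()
            return True
    roomba.stop()
    return False

def retreat(roomba, speed=100):
    # Back up by however far the last discover() actually traveled
    roomba.drive(-roomba.distance_traveled(), speed)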
Process flow diagrams of API functions for Rotate (left) and Discover (right). These functions are used by our mapping algorithm (See Mapping Section)
The iRobot Open Interface Specification also provides a command for "streaming" the Roomba's sensor data as a stream of packets. For example, by sending this command to the Roomba:
[opcode][# of packets][packet opcodes]
[148][1][7]
The user can expect the Roomba to send back packets updating the bump state:
[header][# of payload bytes][sensor ID][sensor data][checksum]
[19][2][7][0][#]
to the computer's USB port once every 15 msec. In the above example, the sensor data was "0", indicating no bumps detected.
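A sketch of validating and parsing one such 5-byte packet (per the Open Interface spec, all bytes of the packet, including the checksum, sum to zero mod 256):

def parse_bump_packet(raw):
    # Parse one 5-byte stream packet: [19][# payload bytes][sensor ID][data][checksum]
    header, nbytes, sensor_id, data, checksum = raw
    if header != 19 or sensor_id != 7:      # packet ID 7 = bumps and wheel drops
        return None
    if sum(raw) % 256 != 0:                 # all bytes must sum to 0 mod 256
        return None                         # corrupted packet, discard
    return bool(data & 0x03)                # low two bits = right/left bumpers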
However, to read the Roomba's sensors continuously every 15 msec, the program must spawn a separate thread that runs in the background. Otherwise, polling every 15 msec would block the main thread from other important tasks, like driving the Roomba and calculating its next mapping step. This is handled with Python's "threading" class, modified to allow the main thread to send a "kill" signal to the child thread whenever it is finished reading the Roomba's sensors.
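A minimal sketch of this kill-signal pattern using Python's threading.Event (our actual class differs in its details):

import threading
import time

class SensorPoller(threading.Thread):
    # Polls the Roomba's sensor stream every 15 ms until killed
    def __init__(self, read_bump):
        super(SensorPoller, self).__init__()
        self._kill = threading.Event()
        self._read_bump = read_bump   # hypothetical callback that reads one packet
        self.bumped = False

    def run(self):
        while not self._kill.is_set():
            self.bumped = self.bumped or self._read_bump()
            time.sleep(0.015)

    def kill(self):
        self._kill.set()

# Usage inside discover():
#   poller = SensorPoller(read_bump_packet)
#   poller.start()
#   ... drive forward ...
#   poller.kill()
#   poller.join()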
Diagram of multi-threading during the Discover API function. A child thread is created and killed before the function returns, and the Bump sensor state is returned.
Our Roomba's wheels/motors are not perfect. From a few qualitative tests, the team noticed that the Roomba's forward movements and rotation angles are neither accurate nor repeatable. More specifically, when the Roomba drives forward, it tends to drift to its right. For example, try letting the Roomba step forward a distance, then step backward the same distance. After 5 iterations, our Roomba had drifted hundreds of millimeters from its starting position. We also noticed that the drift differs across floor materials: tile, wood, and rug.
After 5 iterations of step forward + step backward motion, the Roomba drifts forward and to the right.
Also, try letting the Roomba rotate 90 degrees, first at 200 mm/s, then at 400 mm/s (twice the speed should take the Roomba half the time). While the Roomba came close to 90 degrees in the first case, it was far from 90 degrees in the second. Even worse, our Roomba sometimes rotated more than 90 degrees and sometimes less. In the Mapping Section, we discuss how we designed our mapping algorithm around these inaccuracies.
The Roomba's rotation angle accuracy depends on the speed of rotation.
From the above observations, we concluded that the left wheel was spinning faster than the right wheel, even when we specified the same wheel velocity. We tried commanding the left wheel to spin slightly slower, or the right wheel to spin slightly faster. However, we continued to observe the same drift and rotation inaccuracies. Only when we increased or decreased a wheel velocity by +/- 20 mm/s did we notice a change, and then the Roomba drifted too much to the left instead. So we observed that the wheel velocity only changes in discrete steps, with a threshold of ~20 mm/s. In the end, we settled for a hardware hack: taping the left wheel treads with carpenter's tape. This caused the left wheel to slip, so it effectively covered less distance than the right. Afterwards, the Roomba drifted far less.
Room Discretized to Grids
While considering mapping techniques, we had to account for the inaccuracies the Roomba sometimes introduces, such as imprecise rotations and angular drift when driving straight. We therefore tried to minimize the forward step size and the total distance traveled by the Roomba, and we restricted all turns to 90 degrees. We discretized the room into square grids of size STEP*STEP, where STEP can be, for example, 200mm. The Roomba travels to each grid cell and checks for obstacles, so the final map is accurate up to the STEP size.
For our project we considered three main obstacle discovery algorithms:
- The Maze Wall Follower
- DFS (Depth-First Search)
- BFS (Breadth-First Search)
We decided not to pursue the Maze Wall Follower (the left-hand or right-hand rule), which is generally used for mazes that contain no loops: some obstacles can form loops, and the random shapes of obstacles add further complexity. Moreover, DFS/BFS would still be required on top of it to keep track of visited locations.
DFS and BFS are both graph discovery algorithms but vary in the way the graph is explored.
DFS vs BFS[2][3]
The figure above shows how DFS descends to the bottom of a subtree and then backtracks, whereas BFS visits a node's neighbors first, then their neighbors, and so on.
Mapping Algorithm Optimization
To decrease the total distance traveled by the Roomba, we modified the order in which neighbors are added to the DFS stack, reducing the distance the Roomba travels when it backtracks. This can be observed in the first figure, where the Roomba has to backtrack all the way from the end of the maze, whereas in the second figure it completes the upper regions first because we prioritized them through the order in which we push elements onto the DFS stack.
Mapping Algorithm Flowcharts
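A sketch of the DFS core, where the neighbor ordering is exactly what controls the backtracking cost (the grid and physical path handling are simplified away):

def dfs_map(start, prioritized_neighbors):
    # Iterative DFS over the room grid. `prioritized_neighbors(cell)` yields
    # the neighbors of a cell; the order in which they are yielded (and hence
    # pushed) determines how far the Roomba has to backtrack.
    stack, visited, order = [start], {start}, []
    while stack:
        cell = stack.pop()
        order.append(cell)   # the Roomba physically visits `cell` here
        for nbr in prioritized_neighbors(cell):
            if nbr not in visited:
                visited.add(nbr)
                stack.append(nbr)
    return order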
Python's Turtle graphics is a favorite in introductory CS courses. It is also useful for visualizing the Roomba as it moves inside the map in real time. In many ways, the Turtle graphics API provides functions similar to our own Roomba API (see Roomba Section): rotation via "right(degrees)" and "left(degrees)" for clockwise and counter-clockwise turns, respectively, and translation via "forward(distance)". We update the turtle graphics in real time as the Roomba moves, converting the Roomba's STEP distance to a turtle STEP in the graph.
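For instance, mirroring one Roomba step on screen might look like this (SCALE is a hypothetical mm-to-pixel factor):

import turtle

STEP = 200     # Roomba step size in mm (see Mapping Section)
SCALE = 0.25   # hypothetical mm-to-pixel conversion factor

t = turtle.Turtle()

def roomba_stepped_forward():
    t.forward(STEP * SCALE)   # mirror the Roomba's step on screen

def roomba_rotated(angle):
    # Positive angle = counter-clockwise, matching turtle's left()
    if angle >= 0:
        t.left(angle)
    else:
        t.right(-angle)

roomba_stepped_forward()
roomba_rotated(90)
turtle.done()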
Using Roomba odometry
As explained in the Mapping section, the room is discretized into squares of STEP size before running the mapping algorithm. So, when the mapping algorithm outputs a matrix of obstacles, the obstacle locations are obtained simply by inverting the discretization. For example, a room of width 4m and length 4.5m discretized with STEP = 0.5m gives a matrix of 8 by 9 squares. If there is an obstacle in grid[x][y], inversion places the obstacle at location (STEP*(x+1), STEP*(y+1)); for grid[0][2] this is (0.5m, 1.5m).
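The inversion is a one-liner; under the 0.5m STEP of the example above:

STEP = 0.5   # grid size in meters for this example

def grid_to_location(i, j, step=STEP):
    # Invert the discretization: grid indices -> room coordinates in meters
    return (step * (i + 1), step * (j + 1))

# A 4m x 4.5m room at STEP = 0.5m gives an 8 x 9 matrix:
# grid_to_location(0, 2) -> (0.5, 1.5)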
When exploring a new grid cell in the matrix, the Roomba moves one STEP forward. If no obstacle is encountered, we mark the cell in the matrix as obstacle-free. If an obstacle is encountered before the STEP completes, the Roomba retreats to its previous location and the cell is marked as an obstacle in the matrix. The Roomba proceeds like this until the mapping algorithm completes.
The Python turtle moves in the same direction as the Roomba, so we can see a real-time map of the Roomba exploring the room (see Python Turtle Section).
Integrating the Roomba Mapping and measurements from the motes
Python Turtle Roomba mapping and the locations where the Roomba rotated (where we sample the location from the motes)
To compare the maps we get from the Roomba with the measurements from the motes, we keep track of all the locations where the Roomba rotated (numbered in the order they occur, as in the figure above). At these locations, after the Roomba rotates, we also record the location we get from the Tmotes and the Estimotes. Once mapping is complete, we can compare the results. Retracing the locations from 1 to 25 (the end) gives the path taken by the Roomba. More importantly, by connecting the points at these sampled locations, we get an outline of the room's walls and obstacles.
The Tmote locations are obtained using trilateration, while the Estimote locations are accessed over MQTT through the Estimote APIs.
The team chose to run experiments in a living room with dimensions 3.8m x 4.8m. It was chosen for its four closed walls, lack of glass windows, hardwood flooring, good WiFi signal, and AC outlets in every corner, all desirable for overcoming hardware limitations (see the Tmotes, Estimotes, and Roomba sections). The Roomba mapping script was run twice, once using Tmotes to sample obstacle locations and once using Estimotes. The resulting discretized map closely resembled the actual room and obstacles. However, the Tmote and Estimote locations were very inaccurate in comparison.
The obstacle locations in the room according to Tmote and Estimote samples.
We also ran experiments attempting to compare the location accuracy between the Tmotes and the Estimotes. Here are our results after sampling each location for 120 seconds and taking an average.
Location measurements using Tmotes and Estimotes. Error bars reflect the range of values sampled.
Look into spiral mapping and wall following.
Try with Google Beacons.
Try in a bigger room.
Then try on Mars ☺
Github link to python libraries: https://github.com/DougMHu/roomba-obstacle-mapping
Github link to iOS app: https://github.com/DougMHu/roomba-estimotes-localization
Github link to contiki code: https://github.com/DougMHu/contiki/tree/ee579
Aashiq Ahmed
aashiqah@usc.edu
Shuai Chen
shuaic@usc.edu
Meha Deora
mdeora@usc.edu
Douglas Hu
douglamh@usc.edu
Prof: Dr. Bhaskar Krishnamachari
bkrishna@usc.edu
TA: Pradipta Ghosh
pradiptg@usc.edu
WigWag: Yash Goyal
ygoyal@wigwag.com