EECS C106A
Challenges
The team experienced several challenges, summarized in the tables below:

Results & Conclusion
In summary, the final implementation of our project included three main nodes: the computer vision node, responsible for creating the map of the world; the path planning node, which provides trajectory commands; and the kinematics node, which executes them. The computer vision node proved reliable with the combination of colored cups we ended up using: in the end, we could identify the obstacles (red cups) and goals (blue cups) quite reliably from the image segmentation. The RRT path planner was 100% reliable in our tests, always finding a path to the goal that avoided the obstacle. Our turtlebot, when placed in a corner of the map, could navigate around a given obstacle in red and find its way to a goal in blue by following the waypoints provided by the RRT path planner from start to goal, and it stops once the goal is reached.
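To make the planning step concrete, here is a minimal 2D RRT sketch in the spirit of our planner. The function names, the circular obstacle model, and the parameters (step size, goal bias, goal tolerance) are illustrative assumptions rather than our actual node code.

```python
import math
import random

# Minimal 2D RRT sketch (illustrative; not our actual planning node).
# Obstacles are modeled as circles (x, y, radius), e.g. the red cups.

def collides(p, q, obstacles, steps=20):
    """Check the segment p->q against circular obstacles by sampling."""
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        for ox, oy, r in obstacles:
            if math.hypot(x - ox, y - oy) <= r:
                return True
    return False

def rrt(start, goal, obstacles, bounds, step=0.2, goal_tol=0.3, iters=5000):
    """Grow a tree from start; return a waypoint list to goal, or None."""
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Bias sampling toward the goal occasionally.
        sample = goal if random.random() < 0.1 else (
            random.uniform(bounds[0], bounds[1]),
            random.uniform(bounds[2], bounds[3]),
        )
        # Find the nearest existing node and steer a fixed step toward
        # the sample from there.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collides(near, new, obstacles):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the waypoint path.
            path, k = [goal], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Hypothetical map: one red-cup obstacle between the start corner and goal.
waypoints = rrt(start=(0.2, 0.2), goal=(2.5, 2.0),
                obstacles=[(1.2, 1.0, 0.25)], bounds=(0, 3, 0, 2.5))
```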
When we think about future improvements, there are a couple of things we could tackle. For example, we could replace the rudimentary HSV segmentation with object detection. We also weren't able to get to LiDAR, so implementing it would let us detect smaller obstacles and get more accurate real-life distance measurements. The main issue, however, is that there is no feedback loop between planning and actuation: the turtlebot receives its commands at the beginning and then simply executes them. What if it gets stuck behind a smaller obstacle, or overshoots the goal? For those reasons we would want to implement real-time path optimization, a PID controller, and a particle filter for turtlebot localization.
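To illustrate the kind of feedback loop we have in mind, here is a minimal proportional controller sketch that steers toward the current waypoint from the measured pose instead of replaying precomputed commands. The gains, tolerance, and pose format are assumptions for the sketch, not a tuned design.

```python
import math

# Minimal closed-loop waypoint follower sketch (illustrative assumption,
# not part of our current pipeline). pose is (x, y, theta) from odometry
# or a localization filter; output is a (linear, angular) velocity command.

K_LIN, K_ANG = 0.5, 1.5   # proportional gains (hand-tuned guesses)
WAYPOINT_TOL = 0.05       # metres

def step_controller(pose, waypoint):
    x, y, theta = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    dist = math.hypot(dx, dy)
    if dist < WAYPOINT_TOL:
        return None  # waypoint reached; advance to the next one
    # Heading error wrapped to [-pi, pi].
    err = math.atan2(dy, dx) - theta
    err = math.atan2(math.sin(err), math.cos(err))
    # Slow the forward motion while the heading error is large.
    linear = K_LIN * dist * max(0.0, math.cos(err))
    angular = K_ANG * err
    return linear, angular
```

Because the command is recomputed from the latest pose at every step, the robot can recover from overshoot or drift rather than blindly finishing a stale plan.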

Supplemental Explorations
Anticipating a successful implementation of LIDAR object detection, and the sensor errors it would likely introduce into our turtlebot position estimates, we implemented a particle filter algorithm for turtlebot localization. The goal was to treat the obstacles as known landmarks and use the LIDAR-determined distances to those obstacles to estimate the position of the turtlebot. The particle filter went unused in the final product of our project, but here are implementation details that could be integrated into our project given more time:
The following implementation is based on a combination of research papers and similar implementations found online:
- Particles are generated based on the known map corners' x, y coordinates, retrieved from the image segmentation node. This produces uniformly distributed particles on the map. (Figure 1)
- While the turtlebot is moving, the pose of the robot is obtained and the particles are moved based on the odometry signal from the turtlebot (heading and velocity). At each timestep, the particles generated in the first step are moved in the same direction as the turtlebot using this odometry data. (Figure 2 & 3)
- After the particles have moved, the distance to each landmark is obtained from the LIDAR obstacle detection. The discrepancy between each particle's position and the measured distances to each landmark is evaluated, which lets us assign a weight to each particle; each weight signifies how certain we are that the particle is at the turtlebot's position. At every time step, we update those weights based on the landmark measurements and refine the turtlebot position estimate. (Figure 4 & 5)
- The turtlebot position is then returned by turtle_estimate, as in the sketch below. (Figure 5 & 6)
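Tying these steps together, here is a minimal NumPy sketch of the filter. The helper names (other than turtle_estimate) and the noise parameters are our own illustrative choices, not the code shown in the figures.

```python
import numpy as np

# Minimal particle filter sketch for turtlebot localization (illustrative;
# helper names and noise parameters are assumptions, not the figures' code).

rng = np.random.default_rng(0)

def create_particles(n, x_range, y_range):
    """Step 1: uniform particles (x, y, heading) over the mapped area."""
    return np.column_stack([
        rng.uniform(*x_range, n),
        rng.uniform(*y_range, n),
        rng.uniform(0, 2 * np.pi, n),
    ])

def move_particles(particles, turn, dist, turn_std=0.05, dist_std=0.02):
    """Step 2: propagate every particle with the odometry signal plus noise."""
    n = len(particles)
    particles[:, 2] += turn + rng.normal(0, turn_std, n)
    step = dist + rng.normal(0, dist_std, n)
    particles[:, 0] += step * np.cos(particles[:, 2])
    particles[:, 1] += step * np.sin(particles[:, 2])
    return particles

def weight_particles(particles, landmarks, measured, sensor_std=0.1):
    """Step 3: weight each particle by how well its predicted landmark
    distances match the LIDAR-measured ones (Gaussian likelihood)."""
    weights = np.ones(len(particles))
    for (lx, ly), z in zip(landmarks, measured):
        d = np.hypot(particles[:, 0] - lx, particles[:, 1] - ly)
        weights *= np.exp(-0.5 * ((d - z) / sensor_std) ** 2)
    weights += 1e-300            # avoid an all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights):
    """Keep particles in proportion to their weights for the next step."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

def turtle_estimate(particles, weights):
    """Step 4: weighted mean of the particles as the position estimate."""
    return np.average(particles[:, :2], axis=0, weights=weights)
```

In a full loop, each odometry/LIDAR update would call move_particles, then weight_particles, read off turtle_estimate with those weights, and finally resample before the next timestep.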

Figure 1 - Random particles creation

Figure 2 - Get turtlebot position

Figure 3 - Move particles to match turtlebot movement

Figure 4 - Get distance to known landmarks from LIDAR

Figure 5 - Turtlebot position finder

Figure 6 - Return turtlebot position; example particles creation (left) and localization tuning (right)