LAB 10
Simulator

After creating a map of the room using the robot and its distance sensor in Lab 9, in this lab we set up and play around with the simulation environment: we get familiar with the simulator, its virtual robot, and the plotting tool. This simulator, designed by the course staff, will be very useful, as it lets us design and test navigation and localization algorithms in software without actually having to run the robot around in the room. Future labs will involve constant back-and-forth implementation between the simulator and the real robot. Using a simulator like this is common practice in robotics for designing and testing systems without the hardware, which is generally much easier and quicker. Another advantage of the simulator for me is that I can run it on my Windows 11 laptop, without needing another Windows 10 computer or the VM.

Setup and Trial

The course staff has developed and provided us with a simulation tool to use, which involves the simulator kernel, a tool for plotting different data, and a controller module to control the virtual robot and obtain its sensor data. I followed the steps outlined in the lab handout without any problems to update to the latest version of Python and install all the necessary libraries.

The plotter and the simulator are both Python GUIs that are encapsulated behind the Commander class in the code. We can use the Commander class to start, stop, and reset both GUIs; plot the map, the pose, and the ground truth on the plotter; and control the animated virtual robot in the simulator GUI. For direct use, the GUIs also offer key controls to move the robot around, stop it, and modify the plots. Once we are familiar with how the simulator and the GUIs work, we can interact with them programmatically through various functions in the Commander class.
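In practice, the programmatic interaction reduces to a polling loop around a handful of Commander calls. The sketch below is purely illustrative: FakeCommander is a hypothetical stand-in I wrote so the loop can execute without the simulator GUIs, with method names mirroring the ones used in this lab (sim_is_running, get_pose, get_sensor, set_vel).

```python
# Illustrative control-loop skeleton around a Commander-style API.
# FakeCommander is a hypothetical stand-in for the real simulator,
# so this sketch runs without the GUIs.
class FakeCommander:
    def __init__(self, steps=3):
        self.steps = steps
        self.x = 0.0  # pretend 1D odometry

    def sim_is_running(self):
        # Pretend the simulator stays up for a fixed number of polls
        self.steps -= 1
        return self.steps >= 0

    def set_vel(self, linear, angular):
        # Pretend one unit of time passes per command
        self.x += linear

    def get_pose(self):
        # Returns (odometry pose, ground-truth pose)
        return (self.x, 0.0, 0.0), (self.x, 0.0, 0.0)

    def get_sensor(self):
        # One forward-facing distance reading
        return [1.0]


cmdr = FakeCommander()
while cmdr.sim_is_running():
    pose, gt_pose = cmdr.get_pose()
    if cmdr.get_sensor()[0] > 0.5:
        cmdr.set_vel(0.5, 0)   # path clear: drive straight
    else:
        cmdr.set_vel(0, 1.57)  # obstacle: turn in place

print(cmdr.x)  # 1.5 (three polls, 0.5 forward each)
```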

Open-loop Control

The first task in this lab was to implement open-loop control on the virtual robot, specifically, to move the robot in a square loop. The ground truth and the odometry of the robot are plotted in the GUI as the robot moves, to observe the square pattern. This involved directly using the set_vel() function in the Commander class, which can move the robot with a specified linear velocity to go straight and an angular velocity to turn on the spot. To perform the task, I moved the robot forward at a constant speed, turned 90° to the left, and moved again, in a loop. As seen in the code snippet below, I move the robot forward by 0.5 m over 1 second and then turn 90° over another second (an angular velocity of 1.565 rad/s, roughly π/2).

# Helper function to plot ground truth and odometer pose 
def plot_stuff():
    pose, gt_pose = cmdr.get_pose()
    
    cmdr.plot_odom(pose[0], pose[1])
    cmdr.plot_gt(gt_pose[0], gt_pose[1])
                        
# Open-loop control for a square
while cmdr.sim_is_running() and cmdr.plotter_is_running():
    plot_stuff()
    
    # Move forward for 0.5m
    cmdr.set_vel(0.5, 0)
    await asyncio.sleep(1)
    cmdr.set_vel(0, 0)
    plot_stuff()
    
    # Turn left 90 deg (1.565 rad/s, roughly pi/2, for 1 s)
    cmdr.set_vel(0, 1.565)
    await asyncio.sleep(1)
    cmdr.set_vel(0, 0)
    plot_stuff()

The robot moves almost appropriately with open-loop control. If the simulator model were perfect with no room for uncertainty, the robot's movement would trace an exact square in each iteration. However, we see that every square drawn by the robot is slightly different, and the squares generally drift in a certain direction. This uncertainty is introduced by the one-second delay I used to time every movement. While we specify a non-blocking wait for 1 second using await asyncio.sleep(1), the scheduler in the computer's OS may resume the Jupyter kernel slightly later than 1 second, depending on the other processes and tasks running on the system. Therefore, while the virtual robot is able to roughly execute a square loop with open-loop control, the square drawn is neither exact nor uniform. On the other hand, the path drawn by the robot's odometry in the simulator is nowhere close to a square, which is expected because of how noisy the IMU is.
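This timing jitter can be observed directly, independent of the simulator, by comparing the requested sleep duration with the elapsed wall-clock time (a minimal standalone check; asyncio.sleep only guarantees sleeping at least the requested time):

```python
import asyncio
import time


async def measure_sleep(requested=0.1):
    # Measure how long asyncio.sleep actually takes; the event loop
    # and OS scheduler guarantee only that we sleep *at least*
    # the requested duration, never exactly.
    start = time.monotonic()
    await asyncio.sleep(requested)
    return time.monotonic() - start


elapsed = asyncio.run(measure_sleep(0.1))
print(f"requested 0.100 s, got {elapsed:.4f} s")
# elapsed is typically a few milliseconds longer than requested,
# and varies from run to run -- hence the drifting squares.
```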

Closed-loop Control

For the next part of the lab, we needed to create a closed-loop controller for the virtual robot that prevented it from colliding with any obstacles or boundaries of the room. Since no explicit goal was assigned, the robot can just keep roaming around the room as long as it doesn’t hit anything.

In the first iteration of my controller, I made a simple obstacle avoidance system which moved forward at a constant speed if the distance measured by the sensor was above a certain threshold, and otherwise turned 90° to the left and tried again. This worked mostly well, especially with small linear speeds that gave the robot a short braking distance. However, at moments when there was no obstacle near the robot and it could move straight for a while, movement at this constant speed seemed very slow. Another problem I noticed was when the robot was near a corner, such that it faced an obstacle even after turning left. With this simple controller, it would turn left again (180° from the original orientation) and go back up the path it came on. In some situations, this could cause the robot to get stuck in a loop, moving back and forth between two such corners. While this is permitted according to the controller requirements, it’s not too fun to look at.

I improved the basic closed-loop control with some changes to account for the mentioned problems. The robot now moves with a linear speed proportional to its distance from any obstacle – if the path is clear it will move very quickly, and it slows down near the threshold distance to prevent overshoot. This is similar to the P term in a PID controller. The second change is to not default to only turning left when facing an obstacle. The robot now turns left, and if the path ahead is still not clear, it turns 180° to face 90° right of the original orientation and continues. The pseudocode for the controller, its implementation, and the demonstration are shown below.
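The proportional speed rule by itself can be written as a small pure function (a sketch: the gain k of 1 and min_speed of 0.2 match the values used later, while the max-speed clamp is my own illustrative addition):

```python
def proportional_speed(distance, thresh=0.5, k=1.0, min_speed=0.2, max_speed=2.0):
    # P-controller-style speed: fast when far from the obstacle,
    # slowing toward min_speed as the distance approaches the threshold.
    if distance <= thresh:
        return 0.0  # too close: stop (the controller turns instead)
    return min(max_speed, k * (distance - thresh) + min_speed)


print(proportional_speed(2.0))  # far away -> about 1.7
print(proportional_speed(0.6))  # near the threshold -> about 0.3
print(proportional_speed(0.4))  # below the threshold -> 0.0
```

The max-speed clamp simply guards against unreasonably large commands in a big open room; without it, the speed grows without bound as the distance does.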

# Pseudocode for closed-loop control
while running:
    distance = sensor_reading()

    if distance > threshold:
        move forward with speed k * (distance - threshold)
    else:
        stop
        turn 90 deg to the left
        stop

        distance = sensor_reading()

        if distance > threshold:
            continue
        else:
            turn 180 deg (now 90 deg right of original)
            stop


# Implementation of closed-loop control
# Set distance threshold
thresh = 0.5

# Set min speed 
min_speed = 0.2

while cmdr.sim_is_running() and cmdr.plotter_is_running():
    plot_stuff()
    
    # Check sensor reading
    sensor_val = cmdr.get_sensor()[0]
    
    # If sensor above threshold, move forward with speed, else turn by 90 deg
    if sensor_val > thresh:
        speed = 1 * (sensor_val - thresh) + min_speed
        cmdr.set_vel(speed, 0)
    else:
        # Stop
        cmdr.set_vel(0, 0)
        
        # Turn 90 deg to the left 
        cmdr.set_vel(0, 1.565)
        await asyncio.sleep(1)
        cmdr.set_vel(0, 0)
        
        # Measure distance again 
        sensor_val = cmdr.get_sensor()[0]
        
        # If still in front of obstacle, turn 180 deg (90 deg right from original)
        if sensor_val < thresh:
            cmdr.set_vel(0, 1.565)
            await asyncio.sleep(2)
            cmdr.set_vel(0, 0) 

We can observe that although the controller only deals with right angles, the robot after a few turns moves at various angles to the horizontal. This is due to the uncertainty in movement durations described in the open-loop control section. After running the controller a few times and tuning the parameters, I finalized the threshold distance at 0.5 m, which prevents all the collisions, including the ones where the robot approaches the obstacle at a very steep angle. After running the simulator for a long time, I noticed that the robot can still crash into obstacles in some specific cases. This is because of the narrow angular range of the distance sensor, which only detects in the forward direction from the center of the robot. Since the robot is a rigid body instead of a point, parts of its body can come in contact with obstacles while the sensor thinks the path is clear. One way to prevent such instances would be to manually take distance readings over a small angular range by turning the robot. However, this would make the entire movement very slow, and I decided not to implement this method as such collisions were extremely rare.
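For completeness, the sweep idea I decided against boils down to taking the minimum over several readings rather than trusting a single forward ray. A tiny sketch (the readings list here is hypothetical; in the real controller each value would come from turning the robot slightly and calling cmdr.get_sensor()):

```python
def clearance(readings):
    # A sweep yields several distance readings over a small angular
    # range; the safe clearance is the minimum of them, since any
    # part of the robot's body could hit the nearest obstacle.
    return min(readings)


# e.g. hypothetical readings at -15, 0, and +15 degrees:
print(clearance([0.9, 1.4, 0.45]))  # 0.45 -> would trigger a turn
```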

This was quite a liberating lab, because it gave us a lot of flexibility to play around with the robot virtually without having to run things on the actual robot and potentially break it, especially while testing the closed-loop control algorithms.