Yunting Yan
Greetings! I'm Yunting, currently pursuing an M.S. in Mechanical Engineering. Welcome to my webpage for Cornell MAE 5190 Fast Robots. Here you'll find detailed information on the 12 lab reports completed during the course.
In this section, we set up the Artemis board and configured the Arduino Integrated Development Environment (IDE). We then examined four example sketches provided with the Artemis board. Additionally, we wrote a test case to validate the operation of the USB interface, temperature sensor, microphone, and LED.
After installing the Arduino IDE, we tested the LED on the board by running the "Blink" example and observed the blue LED turning on and off.
In this example, we tested serial monitor output as shown below.
This example tested the digital temperature sensor on the Artemis board.
The built-in microphone was tested, and the frequency of the loudest sound received was printed to the serial monitor.
This experiment was to identify the musical note "A", detected here at a frequency of 1098 Hz. When the microphone identified this frequency, the blue LED was turned on; otherwise, the LED remained off. The Arduino code is:
void setup(){
  Serial.begin(115200);
  printPDMConfig();
  // initialize digital pin LED_BUILTIN as an output.
  pinMode(LED_BUILTIN, OUTPUT);
}

void loop(){
  if (myPDM.available()){
    myPDM.getData(pdmDataBuffer, pdmDataBufferSize);
    uint32_t tmpF;
    tmpF = printLoudest(); // Example frequency detection function
    if (tmpF == 1098){
      // turn the LED on when the target frequency is detected
      digitalWrite(LED_BUILTIN, HIGH);
    }
    else{
      digitalWrite(LED_BUILTIN, LOW);
    }
  }
}
The video below shows an example of the functionality of this code.
In this section of the lab, we established a protocol for bi-directional data exchange between the computer and the Artemis board. This exchange runs over the Bluetooth stack, using Python on the PC side and Arduino on the Artemis side.
The following lab was done on a Windows 11 laptop with Anaconda3 and Python 3.11. Note that a virtual environment (venv) was not used, and the intended connection between the Artemis board and the computer was still established successfully.
To connect successfully, we first connected the board to the computer and uploaded the ble_arduino.ino script provided in the lab to obtain the MAC address, as shown in the figure below.
Then in the "connection.yaml" file, change the MAC address to the one printed by the board. Then, a unique UUID was needed to connect to the right device. The id was generated using the code provided and overwritten the default UUID in the "connection.yaml" and "BLE_UUID_TEST_SERVICE" files. The finalized "connection.yaml" configuration is provided in the graph below for reference.
To assess the connectivity between the Artemis board and the computer, a series of commands were executed for diagnostic purposes. The initial command employed for this purpose was the "ECHO" command. This command facilitated the transmission of a character string from the Python code to the Artemis board, which, in turn, reciprocated by transmitting the received phrase back to the Python code. This command was added to the Arduino code in ble_arduino.ino as shown below.
case ECHO:
  char char_arr[MAX_MSG_SIZE];
  // Extract the next value from the command string as a character array
  success = robot_cmd.get_next_value(char_arr);
  if (!success)
    return;
  // Build the reply and send it back over BLE
  tx_estring_value.clear();
  tx_estring_value.append("Robot says -> ");
  tx_estring_value.append(char_arr);
  tx_estring_value.append(" :)");
  tx_characteristic_string.writeValue(tx_estring_value.c_str());
  Serial.print("Robot says -> ");
  Serial.println(char_arr);
  Serial.println(" :)");
  break;
In Python, we sent the ECHO command to the Artemis and the result is shown below.
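The call on the Python side looks roughly like this (a sketch assuming the course's ble helper; the string argument is an arbitrary example):

ble.send_command(CMD.ECHO, "HiHello")       # "HiHello" is just an example phrase
reply = ble.receive_string(ble.uuid['RX_STRING'])
print(reply)                                # expected: "Robot says -> HiHello :)"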
The GET_TIME_MILLIS command retrieves the present time from the Artemis board. This process uses the millis() function from the Arduino library to capture the current timestamp, which is then formatted as a string and transmitted to Python. The pertinent section of the Arduino code is shown below:
case GET_TIME_MILLIS:
  char msg[20];
  tx_estring_value.clear();
  tx_estring_value.append("T:");
  sprintf(msg, "%lu", millis());
  tx_estring_value.append(msg);
  tx_characteristic_string.writeValue(tx_estring_value.c_str());
  Serial.println(tx_estring_value.c_str());
  break;
On the Python side, the following command is used. Note that the returned value is a timestamp in milliseconds since the board started, not wall-clock time.
In the foregoing examples, the proper reception of data necessitates the invocation of the "receive_string" method. However, in scenarios where the Artemis persistently transmits data at an indeterminate rate, relying solely on manual invocation may lead to suboptimal reception. Consequently, the implementation of a notification handler becomes imperative, ensuring the automatic retrieval and processing of transmitted data. The ensuing code snippet illustrates the structure of the handler function, along with the corresponding code for its invocation:
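A minimal sketch of such a handler, assuming the course's ble helper (the variable names here are illustrative):

timestamps = []

def notification_handler(uuid_str, bytes_array):
    # decode the BLE payload and store the received timestamp string
    timestamps.append(bytes_array.decode('utf-8'))

ble.start_notify(ble.uuid['RX_STRING'], notification_handler)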
In this example, we call "GET_TIME_MILLIS" in Python through a loop. This function gets the current time in milliseconds and sends it to the computer to be received and processed by the notification handler as soon as a new timestamp is collected. Another way of implementing this real-time process is to create a command that loops in Arduino. The Python code to get and receive data (timestamp) is:
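A sketch of that Python loop, assuming the handler above is already registered (the number of iterations is arbitrary):

for i in range(50):
    # each call asks the Artemis for the current millis() value;
    # the reply is picked up asynchronously by the notification handler
    ble.send_command(CMD.GET_TIME_MILLIS, "")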
The "GET_TIME_MILLIS" command yields the recorded timestamp. By computing the temporal disparity between two consecutive timestamps, an estimation of the rate at which the Artemis accumulates and dispatches data can be derived. The empirical findings indicate an average speed of approximately 1.23 seconds for the complete process of data collection and transmission of a string.
As a counterpart to real-time processing, an alternative approach involves the storage of collected data on the Artemis, with subsequent transmission after the entirety of the data is collected. This is accomplished by establishing a global array capable of storing timestamps. Within the Arduino function, instead of executing real-time transmission, each timestamp is systematically inserted into the array. Furthermore, the introduction of a "SEND_TIME_DATA" command is implemented. This command iteratively traverses the array, transmitting each data point as a string to the computer for subsequent processing. The Arduino code is then enhanced with the inclusion of the following cases:
case GET_20_TIME_MILLIS:
  for (int x = 0; x < 20; x++){
    RecordTimes[x] = new char[20];
    sprintf(RecordTimes[x], "%lu", millis());
  }
  Serial.print("Time test done! [20]");
  break;
case SEND_TIME_DATA:
  for (int x = 0; x < 20; x++){
    tx_estring_value.clear();
    tx_estring_value.append(RecordTimes[x]);
    tx_characteristic_string.writeValue(tx_estring_value.c_str());
    Serial.println(tx_estring_value.c_str());
  }
  break;
The Python code is:
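A sketch of what the Python side could look like, reusing the notification handler pattern from above (the sleep durations are illustrative):

import time

timestamps = []
ble.start_notify(ble.uuid['RX_STRING'], notification_handler)
ble.send_command(CMD.GET_20_TIME_MILLIS, "")  # collect 20 timestamps on the board
ble.send_command(CMD.SEND_TIME_DATA, "")      # stream them back one string at a time
time.sleep(5)                                 # give the transfer time to finish
ble.stop_notify(ble.uuid['RX_STRING'])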
The results show that the Artemis board can collect a timestamp every 0.05 ms on average, roughly four orders of magnitude faster than the real-time processing approach.
This section is similar to the former one but also reads the digital temperature sensor. The Arduino code is:
case GET_TIME_TEMP_START:
  for (int x = 0; x < 20; x++){
    RecordTemps[x] = new char[20];
    RecordTimes[x] = new char[20];
    sprintf(RecordTemps[x], "%lu", analogReadTemp());
    sprintf(RecordTimes[x], "%lu", millis());
  }
  Serial.print("Temps test done! [20]");
  break;
case GET_TEMP_READINGS:
  for (int x = 0; x < 20; x++){
    tx_estring_value.clear();
    tx_estring_value.append("|Time:|");
    tx_estring_value.append(RecordTimes[x]);
    tx_estring_value.append("|Temp:|");
    tx_estring_value.append(RecordTemps[x]);
    tx_characteristic_string.writeValue(tx_estring_value.c_str());
    Serial.println(tx_estring_value.c_str());
  }
  break;
The presented Python code and its corresponding results are delineated below. Upon examination of the outcome, it is evident that the Artemis expended an average time of 0.21 milliseconds for the data collection process. This duration is extended due to the inherent requirement of the device to retrieve data from the temperature sensor, thereby contributing to the observed elongation in the data collection time.
The advantage of real-time processing of data is that one can monitor the readings. Also, under certain conditions, the Artemis board might not have the capacity to process algorithms in time, and sending sensor data to the computer is needed. However, the communication between devices is slow. On the other hand, offline processing can achieve a high sample rate, but we do not know the data until collection is finished. The Artemis board has 384 kB of RAM. Assuming all memory can be used to store characters and one character occupies 1 byte, we can store 393,216 characters.
This task compares the data rate for 5-byte and 120-byte replies from the Artemis. The Python code used is shown below:
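A sketch of one way to time the round trip in Python, assuming an Arduino-side command that replies with a message of the requested length (the command name SEND_N_BYTES is hypothetical):

import time

def measure_data_rate(n_bytes, trials=50):
    # send `trials` requests and time how long the replies take in total
    start = time.time()
    for _ in range(trials):
        ble.send_command(CMD.SEND_N_BYTES, str(n_bytes))  # hypothetical command
        ble.receive_string(ble.uuid['RX_STRING'])
    elapsed = time.time() - start
    return n_bytes * trials / elapsed  # payload bytes per second

print(measure_data_rate(5), measure_data_rate(120))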
The data rate for a 5-byte reply is 16.7 bytes/s, and the data rate for a 120-byte reply is 477.5 bytes/s. Thus, a bigger message size leads to a higher data rate, since larger messages reduce the per-message overhead.
While carrying out the previous sections, a reliability issue was observed: the computer cannot read all messages if the Artemis publishes short messages at a very high rate. To address this problem, a delay between messages is needed, or the offline processing method can be used.
In this lab, we established a Bluetooth connection between the computer and the Artemis board and explored the limitations of the data rate in Bluetooth communication.
This lab integrated an Inertial Measurement Unit (IMU) onto the Artemis board to sense the robot's orientation and acceleration. Two methods for measuring orientation, using a gyroscope and accelerometer, were introduced and their advantages and drawbacks analyzed. Finally, the IMU was mounted on the RC robot for testing.
After connecting the IMU to the Artemis board, we tested the connection by running the example provided by the library to read sensor readings as shown below.
"AD0_VAL" represents the value of the last bit of the I2C address used to communicate with the ICM-20948 IMU sensor.
When contrasting the accelerometer and gyroscope data with the IMU positioned on a level surface, the accelerometer demonstrates fluctuations within roughly ±15 mm/s², while the gyroscope fluctuates within ±1 degree per second.
The accelerometer integrated within the Inertial Measurement Unit (IMU) captures acceleration data in millimeters per second squared (mm/s²). Pitch and roll angles can be derived from the acceleration components along the x, y, and z axes, denoted as ax, ay, and az respectively, as depicted in Eq1. During the implementation of the function, the atan2 function is employed to determine the quadrant.
θ (pitch) = atan2(a_x, a_z)    (Eq. 1)
φ (roll) = atan2(a_y, a_z)
Utilizing the aforementioned equations, the pitch and roll measurements obtained at {90,-90} are depicted below. Observing the graphical representation, it becomes evident that when the pitch is set to 90 degrees, the roll measurement exhibits greater fluctuation compared to the scenario where the IMU is positioned horizontally. Additionally, an observation is made that at a pitch of 90 degrees and a roll of -90 degrees, the fluctuation in roll surpasses that of pitch.
The Fourier transformation of pitch and roll is shown below. The signal for roll is noisier than that of pitch. To limit the noise, we can use a low pass filter. A mean filter can help to reduce the noise and is implemented.
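A minimal sketch of such a moving-average (mean) filter applied to the recorded samples in Python (the window size and the roll_samples array are illustrative):

import numpy as np

def mean_filter(signal, window=5):
    # replace each sample with the average of the surrounding `window` samples
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode='same')

filtered_roll = mean_filter(roll_samples)  # roll_samples: array of raw roll angles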
The filtered signal is shown below.
To mitigate the noise present in the pitch and roll calculations derived from accelerometer data and to capture yaw, it was imperative to incorporate gyroscope readings. Analysis of the accompanying graphs reveals a gradual drift in the gyroscope data over time, attributed to the integration of raw data. Despite the gyroscope being less noisy than the accelerometer, integration poses a significant risk of introducing substantial errors in robotic applications. Consequently, a synergistic approach involving the fusion of gyroscope and accelerometer data was pursued to establish a sensor system characterized by enhanced accuracy and reduced noise levels.
The following code implements a complementary filter.
// complementary filter
pitchAcc2 = atan2(sensor->accX(), sensor->accZ()) * 57.2958 * 1.02;
rollAcc2  = atan2(sensor->accY(), sensor->accZ()) * 57.2958 * 1.02;
pitchGyr2 = pitchAcc2 * CompFilterAlpha + (pitchGyr2 + (sensor->gyrY()) * gryStep * 0.001) * (1 - CompFilterAlpha);
rollGyr2  = rollAcc2 * CompFilterAlpha + (rollGyr2 + (sensor->gyrX()) * gryStep * 0.001) * (1 - CompFilterAlpha);
// yaw has no accelerometer reference, so it is obtained by integrating the gyroscope alone
yawGyr2   = yawGyr2 + (sensor->gyrZ()) * gryStep * 0.001;
The filtered signal is shown below.
Delays introduced for debugging purposes were eliminated to assess the sampling rate of the Inertial Measurement Unit (IMU). Time stamps, accelerometer data, and gyroscope data were stored in memory for subsequent transmission after data collection. The average sampling interval was determined to be 3 milliseconds. Data was organized into separate float arrays and subsequently transmitted to the computer as a concatenated array. The Arduino code implementing this functionality is provided below.
case getData:
  for (int x = 0; x < 1500; x++){
    if (myICM.dataReady()){
      myICM.getAGMT();
      printScaledAGMT(&myICM); // computes the angles and stores them (see below)
    }
    else{
      SERIAL_PORT.println("Waiting for data");
      delay(500);
    }
  }
  break;
case sendData:
  for (int y = 0; y < 1500; y++){
    tx_estring_value.clear();
    tx_estring_value.append("|accpitch|");
    tx_estring_value.append(accpitch[y]);
    tx_estring_value.append("|accroll|");
    tx_estring_value.append(accroll[y]);
    tx_estring_value.append("|gyrpitch|");
    tx_estring_value.append(gyrpitch[y]);
    tx_estring_value.append("|gyrroll|");
    tx_estring_value.append(gyrroll[y]);
    tx_estring_value.append("|gyryaw|");
    tx_estring_value.append(gyryaw[y]);
    tx_estring_value.append("|time|");
    tx_estring_value.append(timeData[y]);
    tx_characteristic_string.writeValue(tx_estring_value.c_str());
  }
  break;
// in printScaledAGMT, add the following
if (myICM.dataReady()){
  if (index1 < 1500){
    accpitch[index1] = pitch;
    accroll[index1]  = roll;
    gyrpitch[index1] = pitchGyr;
    gyrroll[index1]  = rollGyr;
    gyryaw[index1]   = yawGyr;
    timeData[index1] = (float)millis();
    index1++;
  }
}
The result is shown below. In the test, data is collected for approximately 6s.
In this exercise, opting to store data as arrays of floats emerges as an efficient solution. This choice strikes a balance between precision, allowing for accurate representation of fractional values, and efficiency in storing multiple readings within a single data structure. While integers consume less memory, they sacrifice precision, potentially compromising the fidelity of the data. On the other hand, arrays of characters, though versatile, are not as storage-efficient as floats, making them less favorable for this task.
The following video demonstrates the robot being teleoperated with the controller.
The objective of this laboratory exercise is to enhance the functionality of the robot by integrating Time of Flight (ToF) sensors. These sensors provide the robot with the capability to measure its distance from objects in its surrounding environment.
According to the VL53L1X datasheet, the default I2C address is 0x52. Running the "Wire_I2C" example reports the ToF sensor at 0x29, which is the same address in 7-bit form (0x52 is the 8-bit form, i.e., 0x29 shifted left by one bit).
In our proposal, we present two prospective configurations for sensor placement. The first configuration involves installing sensors at both the front and rear ends of the robot. This arrangement facilitates environmental perception by requiring only a 90-degree turn, as opposed to other configurations that mandate a 180-degree rotation for comprehensive environmental sensing. However, when the robot is teleoperated, this setup renders the robot incapable of detecting obstacles on its sides. The second configuration places one sensor at the front of the robot and another on its lateral side. This arrangement augments the robot's capability to sense its surroundings, particularly enabling it to detect objects situated to its side.
A sketch of the wiring diagram is shown below.
The following image shows an example of connecting a ToF sensor to the Artemis board. The test results are shown in the picture above, and the address is 0x29.
The ToF sensor offers three distinct operational modes: short, medium, and long. The short-distance mode is characterized by heightened precision and is a viable candidate for a baseline mechanism that promptly halts the robot in emergent scenarios to prevent collisions with obstacles. Given the absence of additional long-distance sensors onboard, the long-distance mode is the preferred choice. Despite the increased sensor noise in this mode, it boasts a detection range of 4 meters. This extended range ensures that the swiftly moving robot can promptly discern potential obstacles and react in time. The efficacy of the long-distance mode was evaluated, with the results documented below.
In this section, we address the challenge posed by connecting two Time-of-Flight (ToF) sensors to the Artemis board. Due to the default configuration where both sensors share the same address, a conflict arises, impeding the functionality of one sensor by blocking the communication channel of the other. To circumvent this issue, we employ a solution wherein we temporarily deactivate one sensor by connecting its "XSHUT" pin to pin A8 on the Artemis board. Subsequently, we utilize the provided code to adjust the address of the remaining sensor, thereby resolving the address conflict and ensuring the proper operation of both sensors simultaneously.
// Add this at global scope
// Optional interrupt and shutdown pins.
#define SHUTDOWN_PIN_1 8
#define INTERRUPT_PIN_1 3
#define SHUTDOWN_PIN_2 6
#define INTERRUPT_PIN_2 4
//SFEVL53L1X distanceSensor1;
// Uncomment the following lines to use the optional shutdown and interrupt pins.
SFEVL53L1X distanceSensor1(Wire, SHUTDOWN_PIN_1, INTERRUPT_PIN_1);
SFEVL53L1X distanceSensor2(Wire, SHUTDOWN_PIN_2, INTERRUPT_PIN_2);

// Add this to setup()
// Shut down sensor 1 via its XSHUT pin
distanceSensor1.sensorOff();
// Change the address of sensor 2 from the default to a new address
distanceSensor2.setI2CAddress(0x32);
Serial.print("Sensor 2 address changed to 0x");
Serial.println(distanceSensor2.getI2CAddress(), HEX);
// Turn sensor 1 back on; it keeps the default address
distanceSensor1.sensorOn();
Now we can read from both sensors as shown below.
The Time-of-Flight sensors exhibit a slower data acquisition speed compared to the looping speed of the Artemis board. Evaluation of clock timing and sensor data availability indicates an average acquisition time of 90 milliseconds, contrasting with the Artemis board's 2-millisecond iteration capability. This discrepancy stems from factors such as the sensors' start-stop ranging process for each iteration and their operation in long-range mode.
distanceSensor2.startRanging(); // Write configuration bytes to initiate measurement
while (!distanceSensor2.checkForDataReady())
{
  Serial.println(millis());
}
int distance = distanceSensor2.getDistance(); // Get the result of the measurement from the sensor
distanceSensor2.clearInterrupt();
distanceSensor2.stopRanging();
// same for the other sensor
Using methods similar to those in Lab 2, the distance from both sensors was sent to the computer via Bluetooth. The notification handler is shown below.
def notificationHandler(uuidStr, bytesArray):
    global tofData1, timeData, tofData2
    decoded_data = bytesArray.decode('utf-8')
    split_data = decoded_data.split("|")
    tofData1.append(split_data[4])
    timeData.append(split_data[2])
    tofData2.append(split_data[5])
    return

timeData = []
tofData1 = []
tofData2 = []
ble.start_notify(ble.uuid['RX_STRING'], notificationHandler)
ble.send_command(CMD.getData, "")
time.sleep(5)
ble.send_command(CMD.sendData, "")
time.sleep(10)
ble.stop_notify(ble.uuid['RX_STRING'])
The sample data for both sensors is shown below, and the x-axis is time in ms.
There are three common types of IR sensors, including amplitude-based, triangulation-based, and time-of-flight (ToF) IR sensors. Amplitude-based sensors measure the intensity of IR radiation reflected or emitted by objects and provide solutions for proximity sensing with fast response times but are limited in range and accuracy. Triangulation-based sensors calculate the distance to an object by measuring the angle of reflected IR light and offer higher accuracy and longer ranges compared to amplitude-based sensors. Time-of-flight sensors measure the time taken for IR light to travel to the object and back to determine distance and are more accurate in measuring longer distances.
Since the time-of-flight sensor measures the round-trip time of the signal, it is mostly insensitive to colors, textures, and ambient light compared to the other IR sensors mentioned above. However, high levels of ambient light might interfere with detection, light-colored surfaces may reflect more light, and highly irregular textures may affect detection.
Given the dual-motor setup, two motor drivers are used. The Artemis can only supply up to 3 V and cannot power the motors at 3.7 V. Each motor driver takes two PWM signals as input and is connected to an external battery. A schematic of the wiring on the robot is shown below.
The following figure shows the connection between the motors, motor drivers, Artemis, and the battery. One motor is connected to pins 7 and 9 and the other to pins 11 and 12. These pins were chosen for their PWM capabilities.
The laboratory kit includes two batteries: one with a capacity of 650mAh and another with 850mAh. To mitigate interference between the motors and the Artemis microcontroller, separate batteries will be dedicated to each component. Given the higher power consumption of the motors compared to the Artemis, the 850mAh battery will be allocated to power the motors, while the 650mAh battery will be utilized to supply power to the Artemis microcontroller.
The dual motor driver was tested using a 3.7V power supply in lieu of the motor, with input and output signals measured using an oscilloscope. See the connection below.
The power supply was set to the same voltage output as the 850mAh battery, 3.7V, to simulate the voltage readings. The code used to drive the motors is shown below.
void setup() {
  Serial.begin(115200);
  analogWrite(7, 0);
  analogWrite(9, 0);
  analogWrite(12, 0);
  analogWrite(11, 0);
}

void loop() {
  Serial.println("Start");
  // spin one side in one direction
  analogWrite(11, 0);
  analogWrite(12, 100); // right
  //analogWrite(9, 0);
  //analogWrite(7, 255);
  delay(4000);
  // stop both motors
  analogWrite(11, 0);
  analogWrite(12, 0);
  analogWrite(9, 0);
  analogWrite(7, 0);
  delay(4000);
  // spin the same side in the other direction
  analogWrite(11, 100);
  analogWrite(12, 0);
  delay(4000);
}
The reading of the oscilloscope is shown below.
The following video was taken to demonstrate control of the motors. The wheels were programmed to spin in either direction for 4s each.
The following video shows an example of both wheels on the two sides spinning on batteries.
The motors were assessed to determine their operational range, focusing initially on the lower threshold. By iteratively adjusting the PWM value until motion initiation was observed from a standstill position, the lower limit of the robotic system's locomotion was identified. Notably, experimentation revealed this threshold to be contingent upon the surface characteristics. Specifically, on a flat and polished wooden surface, the respective lower limit values for the left and right wheels were determined to be 90 and 85. Conversely, when evaluated on a PVC floor, these values were observed to be 105 and 95 for the left and right wheels, respectively.
From the section above, it is clear that the two motors have different thresholds. Therefore, calibration is needed to drive the car in a straight line. The calibration process was done by adjusting the analog output value and observing the car driving on the ground. The empirical calibration is to have the left motor's output greater than that of the right by 10 to 15 in absolute value.
// right
analogWrite(11, 0);
analogWrite(12, 100);
// left
analogWrite(9, 0);
analogWrite(7, 115);
The open loop control video is shown below. The distance traveled in the lab is about 2.5m, and the car is driving in approximately a straight line.
The video below shows the car driving in a straight line and making a turn.
The analogWrite function on the Artemis operates with a time step of 3ms, resulting in a frequency of approximately 333Hz, sufficiently rapid for our robotic application, facilitating swift changes in direction. Manual configuration of timers could enhance PWM signal stability, particularly as the current time step exhibits fluctuations. Faster readings enable the robot to better sustain the directed PWM signal.
The lowest PWM value while the robot was in motion was tested to be 30 for the right motor and 42 for the left motor.
In light of prior laboratory experiments, it has been observed that Bluetooth-based data transmission exhibits latency, while a rapid iteration rate is needed for the control loop. To address this, the timestamp, time-of-flight sensor data, and motor driver output data will be stored within the memory of the Artemis microcontroller. Once the vehicle reaches its designated endpoint, the accumulated data will be transmitted to the computer. Timestamps will be stored as an array of character strings, while the other data will be stored in integer arrays, as exemplified below. This space holds approximately 5 s of data.
char* RecordTimes[1000];
int RecordToF1[1000];
int RecordToF2[1000];
int RecordV[1000];
On the Python side, a notificationHandler is created as outlined below.
def notificationHandler(uuidStr, bytesArray):
    global tofData1, timeData, tofData2, velocity
    decoded_data = bytesArray.decode('utf-8')
    split_data = decoded_data.split("|")
    timeData.append(split_data[2])
    tofData1.append(split_data[4])
    tofData2.append(split_data[5])
    velocity.append(split_data[7])
    return

velocity = []
timeData = []
tofData1 = []
tofData2 = []
ble.start_notify(ble.uuid['RX_STRING'], notificationHandler)
PID control is utilized for its capacity to provide precise and stable control, crucial for accurately maneuvering and halting the vehicle at a specified position (at 304mm from the wall). The proportional component promptly responds to deviations from the desired trajectory. The integral component, accumulating over time, mitigates persistent errors in achieving the setpoint. Moreover, the derivative component forecasts impending changes, attenuating oscillations induced by integral action while fortifying system stability. Empirically determined gains, set at Kp=0.02, Ki=0.0002, and Kd=0.002, result from iterative adjustments following individual testing of each parameter and subsequent synthesis to optimize control performance.
Given the task's requirement for the vehicle to initiate at a distance of 2-4 meters from the wall, ToF sensors are configured to long-range mode. While ideally, a swift sampling time is preferred, shortening it considerably compromises sensor accuracy. To maintain a balance between accuracy and sampling efficiency, the default setting is retained, albeit passed through a mean filter with a kernel size of two. This approach mitigates abrupt variations while preserving an acceptable compromise between precision and sampling frequency. If the control loop runs faster than the ToF data rate, we will interpolate the data. The code to filter data is shown below:
if (distanceSensor1.checkForDataReady()){
  distance1 = (distanceSensor1.getDistance() + lastDistance1)/2; // Get the result of the measurement from the sensor
  distanceSensor1.clearInterrupt();
  distanceSensor1.stopRanging();
  dtInter = 0;
}
else{
  // no new reading: extrapolate from the last two measurements
  dtInter = dtControl + dtInter;
  distance1 = distance1 + (distance1 - lastDistance1)/dtDistance*dtInter;
}
The following code implements the PID controller. The PIDControl function calculates the raw motor input and caps the upper and lower bounds of the integrator. The ToFSenseAndDrive function caps the maximum possible motor input and drives the motors.
float PIDControl(float kp, float ki, float kd, float distance){
  float errorCurrent = distance - stopDistance;
  dt = millis() - lastTime;
  lastTime = millis();
  if (distance < 150){
    return 0;
  }
  // P control
  float P = kp*errorCurrent;
  // I control
  integraterError += (errorCurrent*dt/10);
  if (integraterError > 10000){
    integraterError = 10000;
  }
  else if (integraterError < -10000){
    integraterError = -10000;
  }
  float I = ki*integraterError;
  Serial.println(integraterError);
  if (I > 150){
    I = 150;
  }
  else if (I < -150){
    I = -150;
  }
  // D control
  float D = kd*(errorCurrent - errorLast)/dt;
  errorLast = errorCurrent;
  float speed = P+I+D;
  if ((speed>30)&&(speed<45)){
    speed = 45;
  }
  return speed;
}

void stopCar(){
  analogWrite(11, 0);
  analogWrite(12, 0);
  analogWrite(7, 0);
  analogWrite(9, 0);
}

int ToFSenseAndDrive(float distance1, float distance2, float dt, int sensor){
  float v;
  if (sensor == 1){
    v = PIDControl(kp, ki, kd, distance1);
    if (int(v) >= 240){
      v = 240;
    }
    else if (int(v) <= -240){
      v = -240;
    }
    analogWrite(11, 0);
    analogWrite(12, int(v));         // right
    analogWrite(7, int(v*1.2+20));   // left
    analogWrite(9, 0);
  }
  return int(v);
}
The following video demonstrates the driving task. The car starts at around 2.5 meters from the wall and stops at 300 mm from the wall.
This control algorithm accommodates varying iteration rates between the PID controller and the Time-of-Flight (ToF) sensors. In the absence of new data, distance prediction relies on linear extrapolation of the observed distances, as detailed in the code sections above. Additionally, an empirical determination establishes a motor input threshold, revealing that the car halts when the input falls below 40. Consequently, a PID output between 30 and 45 prompts the controller to initiate a very slow movement, with a defined minimum speed set at 45. While the controller is generally effective at driving and stopping the car, a minor discrepancy of 6 mm from the precise position persists.
The following figures demonstrate the distance to the wall over time, and the motor input over time.
The front ToF graph indicates that the car initiates at 2.5m and halts approximately 288mm from the wall. Although this position is not precisely aligned with the intended stop point, it is deemed reasonably accurate considering inherent motor imprecision and substantial wheel friction. Concurrently, the motor input graph exhibits an initial erroneous point, swiftly rectified to ensure negligible impact on the car's trajectory. Furthermore, the car's speed diminishes gradually as proximity to the wall decreases, reflecting an adaptive response to the changing distance.
The following video shows the car running with the same control gains but without wind-up protection.
The wind-up phenomenon is discernible from the video, wherein the integrator persistently accumulates errors, consequently extending the system's recovery duration. Despite the car's proximity to the wall, the accumulated error remains unresolved, thereby exerting excessive force as the integrator continues to operate at full capacity.
To measure the car's basic parameters while operating untethered, a Bluetooth connection is established between the computer and the Artemis. However, from prior laboratory experiments, it has been observed that Bluetooth-based data transmission exhibits latency. Sending real-time data through Bluetooth inevitably slows down the control loop and the sensing rate. To address this, the timestamp, gyroscope data, and motor driver output data will be stored within the memory of the Artemis microcontroller. After the car completes the task, the "sendData" command is sent from the computer to retrieve the stored data from the Artemis. The code structure is depicted below.
On the Python side, a notificationHandler is created as outlined below.
def notificationHandler(uuidStr, bytesArray):
    global timeData, gyrData, velocity
    decoded_data = bytesArray.decode('utf-8')
    split_data = decoded_data.split("|")
    timeData.append(split_data[2])
    gyrData.append(split_data[4])
    velocity.append(split_data[6])
    return

velocity = []
timeData = []
gyrData = []
ble.start_notify(ble.uuid['RX_STRING'], notificationHandler)
ble.send_command(CMD.setPID, "0.02|0.00002|0.002")
ble.send_command(CMD.getData, "1")
PID control is employed for precise and stable control, essential for accurately guiding and stopping vehicles at predefined orientations. The proportional component responds to deviations from the intended orientation. Meanwhile, the integral component, accumulating errors over time, rectifies persistent deviations from the setpoint, enhancing precision. Additionally, the derivative component anticipates changes in the system, thereby mitigating oscillations caused by integral action and bolstering overall system stability. The empirically determined gains, set at Kp=2, Ki=0.02, and Kd=0.05, result from iterative adjustments following individual testing of each parameter and subsequent synthesis to optimize control performance. Increasing the proportional gain (Kp) reduces the rise time and the steady state error, yet it cannot close the error completely. Elevating the integral gain (Ki) reduces steady-state error but can prolong the settling time and provoke overshoot if excessively amplified. Likewise, elevating the derivative gain (Kd) tends to attenuate overshoot and oscillations but can amplify noise sensitivity and destabilize the system if set too high.
The gyroscope data is integrated over time to obtain the yaw of the car as shown below.
if (initGyr){
  yawGyr = (sensor->gyrZ())*dtIMU*0.001;
  desiredAngle = yawGyr;
  initGyr = false;
}
else{
  yawGyr += (sensor->gyrZ())*dtIMU*0.001;
}
This integration method might lead to the yaw measurement drifting over time since the integrator accumulates errors. To mitigate this problem, we could introduce a low-pass filter, such as a mean filter, before integration. However, in this lab, I observed that the drift is relatively small as the car operates for short periods. Additionally, if we were calculating pitch and roll, we would introduce a complementary filter. When calculating yaw, however, this method is not feasible since there is no way to find yaw from the accelerometer.
Gyroscopes exhibit biases, manifesting as constant offsets in their output, even in the absence of rotation. To mitigate this, an initial gyroscope reading is recorded upon startup. Subsequently, this recorded bias is subtracted from all subsequent readings to enhance accuracy.
As per the IMU datasheet, the default gyroscope full-scale range is 250 degrees per second (dps). Consequently, rapid turns exceeding this threshold may result in inaccurate yaw measurements. This range is mostly sufficient for this lab; if needed, increasing the full-scale range to 500 dps would mitigate such inaccuracies.
To compute the derivative of the control output concerning yaw measurement, it is imperative to differentiate the yaw signal, obtained through the integration of gyroscope readings. This process may entail amplification of high-frequency noise inherent in the signal, potentially exacerbating gyroscope noise. Nonetheless, derivative computation is straightforward to execute, and I have not encountered notable discrepancies as a consequence. Should such issues arise, the adoption of a low-pass filter to mitigate high-frequency noise stemming from the derivative term would be advisable.
When running the gyroscope, the timestamp for which data is being updated is recorded and shown below. The average sampling rate is 50 samples per second. Though we desired a high sample rate, this is enough for the orientation task.
This task encompasses two phases. The user can optionally transmit a command to set the vehicle's orientation with the specified goal, and the car will move to the designated orientation. Subsequently, the vehicle readjusts its orientation to realign with the goal following any disturbances encountered. If no goal is set by the user, the initial angle is the goal.
In pursuit of this objective, various combinations of PID gains were experimentally evaluated. The initial trial employed the PID values utilized in the preceding laboratory session, namely kp=0.2, ki=0.0002, and kd=0.02. Subsequent analysis presents the evolution of the car's angle and the corresponding Pulse Width Modulation (PWM) output over time.
From the graph above, the PWM signal is too weak to drive the wheels and the car does not move. Therefore, I increased the gains to kp=1, ki=0.02, and kd=0.2. The angle and PWM signal over time are shown below.
From the orientation graph, when the perturbation is small, the car is capable of returning to the desired angle with some steady-state error. However, when the perturbation is big (at 60000 ms), the PID controller is oscillatory. At this point, not only is the settling time long, but there is also a steady-state error. Therefore, the proportional and derivative gains need to be bigger to improve the settling time and the steady-state error. The final controller yields the following figures, where 30 degrees is the designated angle.
The following code shows the wind-up implementation.
// I control
integraterError += (errorCurrent*dt/10);
if (integraterError > 8000){
  integraterError = 8000;
}
else if (integraterError < -8000){
  integraterError = -8000;
}
float I = ki*integraterError;
The wind-up phenomenon happens when the integrator persistently accumulates errors, consequently extending the system's recovery duration. Despite the car's proximity to the target, the accumulated error remains unresolved, thereby exerting excessive force as the integrator continues to operate at full capacity. Wind-up protection addresses this issue by capping the integrator error between a maximum and minimum value.
From Lab5 PID control using ToF sensors, we observed that the robot's control loop runs faster than the data rate of the sensors, causing the PID controller to over/underestimate the error. This lab addresses this issue by using the Kalman filter. This filter uses a pre-defined model and observation to infer its unobserved state.
To build our model, we need to estimate the inertia of the bot and its drag. These can be measured by recording the position of the car under a step response. In other words, we drive the car with a constant PWM value (in this case PWM = 110) and measure its position using the ToF sensor. The car's performance is shown below.
The blue line shows the ToF sensor data when the car starts at 4 m from the wall and drives toward it. The orange line is the velocity of the car, calculated by taking the derivative of the ToF data. The velocity oscillates at first and gradually converges to steady state. However, taking the derivative amplifies the noise in the ToF sensor, and the steady-state speed reads between 1.5 and 2 m/s. A better way to measure the steady-state velocity is to fit the ToF data with linear regression after the 90% rise time, as shown by the green line. The slope of the line is -1.8, indicating a steady-state speed of 1.8 m/s, and this speed is used for further calculation.
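A sketch of that fit in Python, assuming t_ms and tof_mm are numpy arrays holding the recorded timestamps and distances and t_rise_90 is the timestamp of the 90% rise point (all names illustrative):

import numpy as np

# keep only the samples after the 90% rise time and fit distance vs. time
mask = t_ms > t_rise_90
slope_mm_per_ms, intercept = np.polyfit(t_ms[mask], tof_mm[mask], 1)
steady_state_speed = abs(slope_mm_per_ms)  # mm/ms is numerically equal to m/s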
Overlaying the velocity graph with the speed at the 90% rise time, the 90% rise time is roughly 2 s, as shown in the figure below.
Given that d = u/v̄ and m = -d·t_0.9/ln(0.1), with the step input normalized to u = 1 and the steady-state speed v̄ = 1.8 m/s, d is 0.556 and m is 0.483. Thus, we can obtain the A, B, and C matrices (C = [-1, 0], since we are measuring negative distance from the wall) using the following equations.
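For reference, a sketch of the underlying model (a standard first-order drag model with state [position, velocity], matching the matrices computed in the code below):

\[ m\ddot{x} = u - d\dot{x} \;\;\Longrightarrow\;\; \frac{d}{dt}\begin{bmatrix} x \\ \dot{x} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & -d/m \end{bmatrix}\begin{bmatrix} x \\ \dot{x} \end{bmatrix} + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u, \qquad y = \begin{bmatrix} -1 & 0 \end{bmatrix}\begin{bmatrix} x \\ \dot{x} \end{bmatrix} \]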
The Python code for the calculation and discretization of the A, B, and C matrices is shown below.
dt = (realTime[-1] - realTime[0])/(len(realTime)-1)
A = np.array([[0,1],[0,-d/m]])
B = np.array([[0],[1/m]])
Ad = np.eye(2) + dt*A
Bd = B.dot(dt)
C = np.array([[-1,0]])
The Kalman filter is a recursive algorithm that predicts the car's position based on the state-space model and uses observations to update the predictions. The Kalman filter depends on the initial state, state uncertainty, and process and sensor noise. The initial state contains the position at which the car starts and the initial velocity, which is zero since it is not moving at t=0. The state uncertainty can be simplified as a diagonal matrix, where the uncertainty of the initial position is assumed to be the same as the sensor uncertainty, and the velocity uncertainty is small (sig_v=1) since we are confident that the car is not moving. The algorithm is not significantly sensitive to the initialization of Sigma; it only affects the first several predictions, which are then corrected by observations. The process noise is determined with similar reasoning, where sig_1 = 20 and sig_2 = 1. The sensor noise is set to 20 (20 mm) from the datasheet.
# initial state
x = np.array([[ToFList[0]], [0]])
# initial state uncertainty
sigma = np.array([[20**2, 0], [0, 1**2]])
# model uncertainty
sigma_1 = 20
sigma_2 = 1
# sensor uncertainty
sigma_3 = 20
sig_u = np.array([[sigma_1**2, 0], [0, sigma_2**2]])
# confidence in sensor measurement
sig_z = np.array([[sigma_3**2]])
However, in the following tests, it was determined that the uncertainty in velocity is significant and outweighs the uncertainty in the position. Therefore, sigma_1=10, sigma_2=20, and sigma_3=10.
The Kalman Filter was first built in Python as a sanity check. The results of the filter were compared to the actual distance measurements found during Lab 5. Note, in the rest of the report, the Kalman filter runs faster than the data rate of the ToF sensor to make predictions.
def kf(mu, sigma, u, y, update):
    mu_p = Ad.dot(mu) + Bd.dot(u)
    sigma_p = Ad.dot(sigma.dot(Ad.transpose())) + sig_u
    if (update):
        sigma_m = C.dot(sigma_p.dot(C.transpose())) + sig_z
        kkf_gain = sigma_p.dot(C.transpose().dot(np.linalg.inv(sigma_m)))
        y_m = y - C.dot(mu_p)
        mu = mu_p + kkf_gain.dot(y_m)
        sigma = (np.eye(2) - kkf_gain.dot(C)).dot(sigma_p)
    else:
        mu = mu_p
        sigma = sigma_p
    return mu, sigma
If the model uncertainties are set much higher than the sensor uncertainty, we are declaring the model inaccurate, and the filter trusts the sensor readings the most. This is an extreme case of what we observed, in which the model built in the previous section is not accurate and we need to trust the sensor for corrections. Conversely, by increasing the sensor uncertainty, the Kalman filter distrusts the sensor readings and needs many measurements to correct the slope of the predictions.
The same parameters are used for the orientation control data from Lab 6; the result is shown below.
The following graph compares the difference between predictions made by linear interpolation and the Kalman filter. dt was calculated by using the difference between the end time and start time divided by the number of timestamps of the control loop. By doing so, we assume the time step for each loop is similar. Since the control loop runs on a ms scale, this assumption holds.
The following code shows the Artemis implementation.
////////// Kalman Filter var //////////
int previousDistance;
float previousTOFtime;
int n = 2;
float d = 0.5555555555555556;
float m = 0.4825494243369465;
float dtKf = 0.1; // estimate of the loop time in s
bool updateKf = true;
BLA::Matrix<2, 2> A = { 0, 1,
                        0, d/m };
BLA::Matrix<2, 1> B = { 0,
                        -1/m };
BLA::Matrix<1, 2> C = { 1, 0 };
BLA::Matrix<2, 2> eye = { 1, 0,
                          0, 1 };
BLA::Matrix<2, 2> Ad = eye + A * dtKf; // discretized A
BLA::Matrix<2, 1> Bd = B * dtKf;       // discretized B
BLA::Matrix<2, 1> X = { 0,
                        0 }; // to be initialized once the command is given
BLA::Matrix<2, 2> sigma = { 10, 0,
                            0, 1 }; // initial uncertainty of the state from the ToF data; set very small for speed because the initial speed should be close to 0
BLA::Matrix<2, 2> sig_u = { 30, 0,
                            0, 100 }; // process (model) uncertainty, to be adjusted
BLA::Matrix<1, 1> sig_z = { 30 }; // sensor uncertainty, taken from the ToF characterization
BLA::Matrix<2, 1> kf_gain;
BLA::Matrix<1, 1> y_m;
BLA::Matrix<2, 1> mu = { 0,
                         0 };

void kf(float u, BLA::Matrix<1, 1> y){
  BLA::Matrix<2, 1> mu_p = Ad * mu + Bd * u;
  BLA::Matrix<2, 2> sig_p = Ad * sigma * (~Ad) + sig_u;
  if (updateKf) {
    y_m = y - C * mu_p;
    BLA::Matrix<1, 1> sig_m = C * sig_p * (~C) + sig_z;
    BLA::Matrix<1, 1> sig_m_inv = sig_m;
    Invert(sig_m_inv);
    kf_gain = sig_p * (~C) * (sig_m_inv);
    mu = mu_p + kf_gain * y_m;
    sigma = (eye - kf_gain * C) * sig_p;
  } else {
    mu = mu_p;
    sigma = sig_p;
  }
}
The Kalman filter predictions are used in place of linear interpolation. The following figure shows the control input with and without the Kalman filter.
This laboratory exercise implements an orientation control task, combining methodologies from Lab 6 (orientation control) and Lab 7 (linear control) to control a robot's movement. Initiated at a distance from a wall, the robot is programmed to execute a 180-degree turn upon reaching a 3ft proximity to the obstacle. To achieve this objective, two sets of Proportional-Integral-Derivative (PID) control algorithms are employed: one set to guide the robot towards the wall, and another to facilitate the turn.
Code for the two control algorithms is shown below. Note that the orientation controller and linear controller have different integrator wind-up protection.
float PIDControlToF(float kp, float ki, float kd, float distance){
  float errorCurrent = distance - stopDistance;
  dt = millis() - lastTime;
  lastTime = millis();
  if (distance < stopDistance+60){
    controlTypeFlage = 1;
  }
  else{
    controlTypeFlage = 0;
  }
  if (distance < stopDistance+80){
    return 0;
  }
  float P = kp*errorCurrent;
  integraterError += (errorCurrent*dt/10);
  if (integraterError > 10000){
    integraterError = 10000;
  }
  else if (integraterError < -10000){
    integraterError = -10000;
  }
  float I = ki*integraterError;
  Serial.println(integraterError);
  if (I > 150){
    I = 150;
  }
  else if (I < -150){
    I = -150;
  }
  float D = kd*(errorCurrent - errorLast)/dt;
  errorLast = errorCurrent;
  float speed = P+I+D;
  if ((speed>30)&&(speed<45)){
    speed = 45;
  }
  return speed;
}
float PIDControlIMU(float kp, float ki, float kd, float currentAngle){
  float errorCurrent = desiredAngle - currentAngle;
  dt = millis() - lastTime;
  lastTime = millis();
  if (abs(errorCurrent) < 40){
    controlTypeFlage = 1;
    return 0;
  }
  else{
    controlTypeFlage = 0;
  }
  float P = kp*errorCurrent;
  integraterError += (errorCurrent*dt/10);
  if (integraterError > 7000){
    integraterError = 7000;
  }
  else if (integraterError < -7000){
    integraterError = -7000;
  }
  float I = ki*integraterError;
  Serial.println(integraterError);
  if (I > 70){
    I = 70;
  }
  else if (I < -70){
    I = -70;
  }
  float D = kd*(errorCurrent - errorLast)/dt;
  errorLast = errorCurrent;
  float speed = P+I+D;
  Serial.print("Speed at PID");
  Serial.println(speed);
  return speed;
}
For high-speed teleoperation of the robot, it is imperative to predict its state accurately, given the relatively slow data rate of the Time-of-Flight (ToF) sensor compared to the control loop rate. Our analysis, as detailed in Lab 7, compares the effectiveness of the Kalman filter and linear interpolation for this prediction. While both methods yielded reasonable predictions, linear interpolation demonstrated a lower propensity for over- and under-prediction of states. This superiority is attributed to the Kalman filter's extended learning curve from historical data. Although parameter adjustments could potentially optimize the Kalman filter's performance, linear interpolation emerges as a simpler yet equally viable alternative. Code for interpolation is shown below.
if (distanceSensor1.checkForDataReady()){
  ToFTimeNew = millis();
  dtToF = ToFTimeNew - ToFTimeLast;
  lastDistance1 = tmpRealDistance;
  distance1 = distanceSensor1.getDistance();
  distanceSensor1.clearInterrupt();
  dtInterp = 0;
  tmpRealDistance = distance1;
  ToFTimeLast = millis();
}
else{
  // no new ToF reading: extrapolate from the last two measurements
  dtInterp += dtLoop;
  distance1 = ((distance1 + dtInterp*(tmpRealDistance-lastDistance1)/dtToF) + distance1)/2;
}
Initially, I used the same PID values as in the previous labs: the PID gains for linear control are 0.02, 0.0002, and 0.002, and the PID gains for orientation control are 2, 0.02, and 0.05. The result is demonstrated below.
From the video, the car turns more than the designated angle. There are two possible reasons. First, 180 degrees is larger than the perturbation introduced in Lab 7, and the integrator might be pushing too hard. Second, given that the car is driving in a straight line, its inertia might interfere with the turning. Thus, I lowered the gains for orientation control to 0.475, 0.0035, and 0.1.
The ToF sensor readings during the linear control phase are shown below.
The objective of this laboratory exercise is to illustrate methods for sensing and constructing representations of the environment using a robot equipped with a Time-of-Flight (ToF) sensor and an Inertial Measurement Unit (IMU). The ToF sensor provides distance measurements, while the IMU determines the angle at which these measurements are taken. The exercise implements two control strategies for the robot. Initially, drawing from Lab 8, a Proportional-Integral-Derivative (PID) control scheme is employed to facilitate the robot's rotation in 20-degree increments. The associated code for the PID control and angle updates is provided below.
float PIDControlIMU(float kp, float ki, float kd, float currentAngle){
  float errorCurrent = desiredAngle - currentAngle;
  Serial.print("controller error:");
  Serial.println(errorCurrent);
  dt = millis() - lastTime;
  lastTime = millis();
  float P = kp*errorCurrent;
  integraterError += (errorCurrent*dt/10);
  if (integraterError > 1000){
    integraterError = 1000;
  }
  else if (integraterError < -1000){
    integraterError = -1000;
  }
  float I = ki*integraterError;
  //Serial.println(integraterError);
  if (I > 70){
    I = 70;
  }
  else if (I < -70){
    I = -70;
  }
  float D = kd*(errorCurrent - errorLast)/dt;
  errorLast = errorCurrent;
  float speed = P+I+D;
  Serial.print("PWM: ");
  Serial.println(speed);
  return speed;
}
/////////// PID Control //////////

////////// Angle update //////////
float yawGyrInit = 0;
int updateAngle;
int LastdataPoint = 0;
bool finishRecording = 0;

int angleUpdate(){
  Serial.print("finishRecording: ");
  Serial.println(finishRecording);
  if (finishRecording){
    LastdataPoint = recordIndex;
    desiredAngle += 20;
    finishRecording = 0;
    return 3;
  }
  if (abs(yawGyr-desiredAngle) < 5){
    Serial.print("recordIndex,LastdataPoint: ");
    Serial.print(recordIndex);
    Serial.print(',');
    Serial.println(LastdataPoint);
    if (abs(recordIndex-LastdataPoint) < 20){
      digitalWrite(LED_BUILTIN, HIGH);
      Serial.println("sensing");
      integraterError = 0;
      return 0;
    }
    else{
      finishRecording = 1;
      return 1;
    }
  }
  else{
    digitalWrite(LED_BUILTIN, LOW);
    Serial.println("moving");
    return 2;
  }
}
The video for PID control and sensing is shown below. There are four measuring positions (5,-3), (5,3), (0,3), (-3,-2). (5,-3) and (-3,-2) are scanned using PID control, and the result is shown below.
From the analysis of the left graph, it is evident that the car exhibits a time-dependent drift, introducing a bias in the angle measurements. Specifically, when sensing at coordinates (5, -3), one dataset (represented in blue) significantly deviates from expected values, rendering it non-contributory. Throughout this trial, despite constant PID gain values, the car exhibited both drift and slippage, potentially attributable to low battery levels or random mechanical failures. Additionally, variations in ground friction across four distinct testing locations were noted, coupled with an inherent bias in the car's gearbox. This bias facilitates turning when the wheels are positioned at half of the potential angles, whereas it poses significant challenges in overcoming friction at other angles. An attempted remedy involved increasing the controller gains; however, this adjustment led to substantial oscillation at angles where the car previously operated effectively. Given these complications, particularly the robot's unreliability and the precision required for accurate angling, using PID controllers to accomplish this task appears impractical. For the following sensing, the car is controlled using feedforward with empirically determined PWM values as shown below.
To map the environment in global coordinates, the transformation of coordinates is needed. The first time-of-flight sensor was on the front of the robot, pointing to the positive x direction, while the second time-of-flight sensor was on the side, pointing to the y direction. Given the robot is turning in the positive z direction, the transformation matrix is as follows:
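In equation form, a sketch of that transformation for the front sensor, where d is the ToF reading and (x_r, y_r, θ) is the robot pose in the global frame:

\[ \begin{bmatrix} x_g \\ y_g \end{bmatrix} = \begin{bmatrix} x_r \\ y_r \end{bmatrix} + \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} d \\ 0 \end{bmatrix} = \begin{bmatrix} x_r + d\cos\theta \\ y_r + d\sin\theta \end{bmatrix} \]

For the side sensor, which points along the robot's +y axis, the offset vector (d, 0)ᵀ is replaced by (0, d)ᵀ.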
The code for the transformation is as follows:
transformed_data = pd.DataFrame({
    'X': sensor_position[0] + data['distance']*0.00328084 * np.cos(data['Theta_Radians']),  # 0.00328084 converts mm to feet
    'Y': sensor_position[1] + data['distance']*0.00328084 * np.sin(data['Theta_Radians']),
    'Z': np.zeros_like(data['distance'])  # Z-coordinate remains zero
})
The resulting map is shown below, where the blue dots are the positions of the robot when sensing.
This lab implements grid localization using a Bayes filter. The robot's state is discretized into a 3D grid world represented by location and orientation (x, y, θ).
This function governs the robot's movement through a three-step process. Initially, it aligns the robot to face the trajectory's direction. Subsequently, it executes the translation movement. Finally, the function reorients the robot to achieve the specified orientation.
def compute_control(cur_pose: np.ndarray, prev_pose: np.ndarray) -> tuple[float, float, float]:
    """ Given the current and previous odometry poses, this function extracts
    the control information based on the odometry motion model.

    Args:
        cur_pose ([Pose]): Current Pose
        prev_pose ([Pose]): Previous Pose

    Returns:
        [delta_rot_1]: Rotation 1 (degrees)
        [delta_trans]: Translation (meters)
        [delta_rot_2]: Rotation 2 (degrees)
    """
    y = cur_pose[1] - prev_pose[1]
    x = cur_pose[0] - prev_pose[0]
    delta_trans = (x**2 + y**2)**(1/2)
    # first rotation: from the previous heading to the direction of travel
    delta_rot_1 = mapper.normalize_angle(np.rad2deg(math.atan2(y, x)) - prev_pose[2])
    # second rotation: the remaining turn to reach the current heading
    delta_rot_2 = mapper.normalize_angle(cur_pose[2] - prev_pose[2] - delta_rot_1)
    return delta_rot_1, delta_trans, delta_rot_2
The function employs the prior and the transition model to predict the state of the robot. It updates the probability of each cell within the robot's grid world by summing the probabilities of transitioning to each grid from all other grids. Below is the mathematical description along with its Python implementation.
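The prediction step sketched as an equation (standard Bayes filter form):

\[ \overline{bel}(x_t) = \sum_{x_{t-1}} p\big(x_t \mid u_t, x_{t-1}\big)\, bel(x_{t-1}) \]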
def prediction_step(cur_odom: np.ndarray, prev_odom: np.ndarray):
    """ Prediction step of the Bayes Filter.
    Update the probabilities in loc.bel_bar based on loc.bel from the previous time step and the odometry motion model.

    Args:
        cur_odom ([Pose]): Current Pose
        prev_odom ([Pose]): Previous Pose
    """
    u = compute_control(cur_odom, prev_odom)
    for prev_x in range(mapper.MAX_CELLS_X):
        for prev_y in range(mapper.MAX_CELLS_Y):
            for prev_theta in range(mapper.MAX_CELLS_A):
                if (loc.bel[prev_x][prev_y][prev_theta] > 0.0001):
                    for cur_x in range(mapper.MAX_CELLS_X):
                        for cur_y in range(mapper.MAX_CELLS_Y):
                            for cur_theta in range(mapper.MAX_CELLS_A):
                                curr_pose = mapper.from_map(cur_x, cur_y, cur_theta)
                                prev_pose = mapper.from_map(prev_x, prev_y, prev_theta)
                                # use the given grid pose to make a prediction for each grid
                                prob = odom_motion_model(curr_pose, prev_pose, u)
                                belief = loc.bel[prev_x][prev_y][prev_theta]
                                loc.bel_bar[cur_x][cur_y][cur_theta] += prob*belief
    loc.bel_bar = loc.bel_bar/np.sum(loc.bel_bar)
This function implements the sensor model, assuming the sensor noise has a Gaussian distribution.
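In equation form (a sketch, where d_k(x_t) is the expected range for the k-th of the 18 bearings and σ_s is the sensor noise):

\[ p(z_t \mid x_t) = \prod_{k=1}^{18} \mathcal{N}\!\big(z_t^{k};\; d_k(x_t),\; \sigma_s^{2}\big) \]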
def sensor_model(obs):
    """ This is the equivalent of p(z|x).

    Args:
        obs ([ndarray]): A 1D array consisting of the true observations for a specific robot pose in the map

    Returns:
        [ndarray]: Returns a 1D array of size 18 (=loc.OBS_PER_CELL) with the likelihoods of each individual sensor measurement
    """
    data_len = mapper.OBS_PER_CELL
    prob_array = np.zeros(data_len)
    for i in range(data_len):
        prob_array[i] = loc.gaussian(loc.obs_range_data[i], obs[i], loc.sensor_sigma)
    return prob_array
As the last step, we update our belief based on our prediction and observation using the following equation.
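The update equation (η is a normalization constant):

\[ bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t) \]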
def update_step():
    """ Update step of the Bayes Filter.
    Update the probabilities in loc.bel based on loc.bel_bar and the sensor model.
    """
    for x in range(mapper.MAX_CELLS_X):
        for y in range(mapper.MAX_CELLS_Y):
            for a in range(mapper.MAX_CELLS_A):
                loc.bel[x][y][a] = np.prod(sensor_model(mapper.get_views(x, y, a))) * loc.bel_bar[x][y][a]
    loc.bel = loc.bel / np.sum(loc.bel)
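To run the filter along a trajectory, the two steps are chained at every time step. A minimal sketch of that loop is shown below; execute_motion(), get_odometry(), and num_steps are hypothetical placeholders standing in for the simulator's trajectory interface, not the framework's actual API.

# Sketch of the per-step Bayes filter loop (placeholder helpers, see note above)
prev_odom = get_odometry()
for t in range(num_steps):
    execute_motion(t)                     # move one step along the predefined trajectory
    cur_odom = get_odometry()
    prediction_step(cur_odom, prev_odom)  # propagate the belief with the motion model
    loc.get_observation_data()            # rotate in place and record the 18 range readings
    update_step()                         # fold the sensor likelihoods into the belief
    prev_odom = cur_odom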
The robot runs on a predefined trajectory, and we use the Bayes filter to estimate its states along that trajectory. The ground truth is plotted in green, the odometry data in red, and the prediction (belief) in blue. Although the prediction follows the ground truth, it does not match it perfectly. I believe this indicates a successful implementation of the Bayes filter; the remaining deviation is likely due to noise in the robot's motion and sensor readings.
This laboratory exercise demonstrates localization of the real robot in (x, y) using the update step of the Bayes filter and a Time-of-Flight (ToF) sensor. The robot starts at designated marked points, initially oriented parallel to the x-axis, and performs an environmental scan by rotating in 20-degree increments to facilitate accurate localization.
The figure below illustrates the navigation and localization outcomes within the simulator. The trajectories are represented by red, green, and blue lines, corresponding to the odometry, ground truth, and belief, respectively. The alignment of the belief trajectory with the ground truth suggests effective functioning of the Bayes filter.
We aimed to localize the robot at four distinct marked poses, with no translational movement between landmarks. This stationary setup allowed the belief to be initialized with a uniform prior at each marked pose. The following program commands the robot to turn 360 degrees in place while collecting ToF sensor readings every 20 degrees. We used a proportional-only orientation controller, since we demonstrated in a previous lab that full PID did not work as well as P alone.
def perform_observation_loop(self, rot_vel=120):
    obs_length = 18
    sensor_ranges = np.zeros([obs_length, 1])
    sensor_bearings = np.zeros([obs_length, 1])
    robot_orientation_goal = 0
    # interval = 360/observations_count  # 20
    for i in range(obs_length):
        sensor_bearings[i] = robot_orientation_goal
        start = "1|"
        pid = "10.5|0|0|"
        command = start + pid + str(robot_orientation_goal)
        print(command)
        ble.send_command(CMD.MOVE2GOAL, command)
        asyncio.run(asyncio.sleep(3))
        # time.sleep(1)
        ble.send_command(CMD.GET_DISTANCE, "")
        # time.sleep(1)
        temp = float(ble.receive_string(ble.uuid['RX_STRING'])) * 0.001
        sensor_ranges[i] = temp  # +0.075
        asyncio.run(asyncio.sleep(2))
        print("Angle: ", robot_orientation_goal, 'distance: ', temp)
        robot_orientation_goal += 20
        # SETPOINT = SETPOINT - interval  # CCW
    ble.send_command(CMD.STOP2Move, "")
    return sensor_ranges, sensor_bearings
The four landmarks were defined as follows: (1) (-3 ft, -2 ft), (2) (0 ft, 3 ft), (3) (5 ft, -3 ft), and (4) (5 ft, 3 ft). The robot was positioned at these marked poses, and the update step of the Bayes filter was executed. The results are illustrated with green dots representing the ground truth and blue dots indicating the belief.
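Since the localization code works in meters while the landmarks were marked out in feet, a simple conversion (my own helper, not part of the lab framework) is used when comparing against the belief:

# Landmark coordinates marked on the floor, in feet
landmarks_ft = [(-3, -2), (0, 3), (5, -3), (5, 3)]

# Convert to meters for comparison with the belief (1 ft = 0.3048 m)
landmarks_m = [(0.3048 * x, 0.3048 * y) for (x, y) in landmarks_ft]
print(landmarks_m)  # e.g. landmark (1) becomes approximately (-0.9144 m, -0.6096 m)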
The table below details the localization errors observed at four landmarks. At the first landmark, the x-coordinate is accurately determined, but there is a slight discrepancy in the y-coordinate, potentially due to an error in the orientation belief. Despite this, the robot maintains a generally accurate estimate of its location. In the second scenario, the orientation error is significant at 150 degrees, nearly mirroring the actual map in the robot's perception. This error causes the Time-of-Flight (ToF) sensor readings to suggest an incorrect orientation, as observations made facing the positive x-direction could mistakenly appear to originate from the negative x-direction. Correcting the orientation is crucial for resolving this localization issue. The third case exhibits a similar pattern with a 130-degree orientation error, mirroring the actual position in the robot's belief. The fourth case has a minor 10-degree error in orientation due to the use of grid localization with 20-degree grid cells along the theta axis. This discretization leads to inevitable errors, slightly affecting the localization accuracy. Increasing the grid resolution could enhance performance.
This lab aimed to navigate the robot in a given maze using localization methods implemented in Lab 11.
In Lab 11, we used the Bayes filter to infer the position and orientation of the car. However, we found that the orientation estimate from Bayesian inference was not fully reliable. Therefore, we hard-coded the orientation control in the following implementation and infer only (x, y). We assume the landmarks and the visiting sequence in the maze are known in advance. This is human planning rather than robot planning, but given the limited time for this lab, robot planning methods such as Integrated Task and Motion Planning (TAMP) are hard to implement. Instead, we plan the trajectory by computing the relative angle and distance between the current (inferred) position and the next landmark. The relevant code is shown below.
def compute_control(cur_pose: tuple, prev_pose: tuple) -> tuple[float, float]:
    y = cur_pose[1] - prev_pose[1]
    x = cur_pose[0] - prev_pose[0]
    delta_trans = (x**2 + y**2)**(1/2)
    delta_rot_1 = np.rad2deg(math.atan2(y, x))
    return delta_rot_1, delta_trans
for landmark_num in range(1, len(landmarks) - 1):
    # Start by localizing: initialize a uniform belief
    loc.init_grid_beliefs()
    # Get observation data by executing a 360 degree rotation motion
    loc.get_observation_data()
    # Run the update step
    loc.update_step()
    loc.plot_update_step_data(plot_data=True)
    argmax_bel = utils.get_max(loc.bel)
    current_bel = loc.mapper.from_map(*argmax_bel[0])
    # Find [orientation, drive_distance] to the next landmark
    delta_rot_1, delta_trans = compute_control(landmarks[landmark_num], current_bel)
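Each iteration of this loop yields a heading and a distance for the next leg. A minimal sketch of how these could be dispatched to the robot over Bluetooth is shown below, reusing the CMD.MOVE2GOAL pattern from perform_observation_loop; CMD.MOVE_FORWARD and the gains are placeholders, not the final implementation:

import asyncio

def execute_step(delta_rot_1, delta_trans):
    # 1. Turn toward the next landmark (P-only orientation controller, placeholder gain)
    ble.send_command(CMD.MOVE2GOAL, "1|10.5|0|0|" + str(delta_rot_1))
    asyncio.run(asyncio.sleep(3))
    # 2. Translate by delta_trans (converted from meters to mm for the ToF-based controller)
    ble.send_command(CMD.MOVE_FORWARD, str(int(delta_trans * 1000)))  # hypothetical command name
    asyncio.run(asyncio.sleep(3))
    # 3. Re-orient back to the initial heading so the orientation stays known for the next leg
    ble.send_command(CMD.MOVE2GOAL, "1|10.5|0|0|0")
    asyncio.run(asyncio.sleep(3))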
The results are shown below.
The initial idea was to use PID controllers for orientation control and translation control, as implemented in previous labs. The code is shown below. First, the robot turns to face the next landmark. Then, it translates to the next point. Last, it re-orients back to its initial orientation. This way, we do not need to account for the robot's orientation at each landmark during either localization or path-following control.
void driveCar_linear(float data){
  float pwm = PIDControlToF(kp_linear, ki_linear, kd_linear, data);
  pwm = pwmCap(pwm);
  analogWrite(11, 0);                    // right backward off
  analogWrite(12, int(pwm));             // right forward
  analogWrite(7, int(pwm*1.15 + 15));    // left forward (slightly higher PWM to match the right motor)
  analogWrite(9, 0);                     // left backward off
  Serial.print("[Drive_car_linear] Linear Motion, Distance_Error: ");
  Serial.print(errorCurrent_linear);
  Serial.print(" |Dt: ");
  Serial.print(dt);
  Serial.print(" |Velocity: ");
  Serial.println(int(pwm));
  // Switch to the re-orientation phase once the distance error is within 100 mm
  if (abs(errorCurrent_linear) <= 100){
    status_linear = 0;
    status_reorientation = 1;
    status_orientation = 0;
  }
  else{
    status_linear = 1;
    status_reorientation = 0;
    status_orientation = 0;
  }
}
void driveCar_angular_orientation(float data, float loopTime){
  float pwm = PIDControlIMU(kpIMU, kiIMU, kdIMU, data);
  pwm = pwmCap(pwm);
  if (pwm > 0){
    // Turn one way: right wheels forward, left wheels backward
    analogWrite(11, 0);                     // right backward off
    analogWrite(12, int(pwm));              // right forward
    analogWrite(7, 0);                      // left forward off
    analogWrite(9, int(1.075*pwm));         // left backward (calibrated)
  }
  else if (pwm < 0){
    // Turn the other way: right wheels backward, left wheels forward
    analogWrite(11, abs(int(pwm)));         // right backward
    analogWrite(12, 0);                     // right forward off
    analogWrite(7, abs(int(pwm*1.075)));    // left forward (calibrated)
    analogWrite(9, 0);                      // left backward off
  }
  else{
    // Stop all motors
    analogWrite(11, 0);
    analogWrite(12, 0);
    analogWrite(7, 0);
    analogWrite(9, 0);
  }
  // Switch to the translation phase once the heading is within 5 degrees of the goal
  if (abs(data - desiredAngle) <= 5){
    status_orientation = 0;
    status_linear = 1;
    status_reorientation = 0;
  }
  else{
    status_orientation = 1;
    status_linear = 0;
    status_reorientation = 0;
  }
}
Before running in the actual maze, the algorithm was tested by telling the robot that the next landmark is at 45 degrees and 300 mm away. From the video below, we see that the robot moves as desired on the open ground. However, the translation algorithm does not work in the actual maze: the car would either stay stationary or move fast and crash into the wall. This might be because the maze walls do not provide a strong reference for the ToF sensor to measure against.
The above image shows ToF sensor data recorded while moving the robot in the maze. To collect the data, I moved the car toward the wall by hand at a steady pace while recording the ToF readings. The readings decrease as expected initially. However, after some time they suddenly increase, even though the car keeps moving closer to the wall. Thus, I believe my ToF sensor fails in this setting (either because of the maze environment or because of the long-range mode). Given that the ToF sensor does not work in the maze, we can only drive the robot with feedforward (open-loop) commands.
The following is a demo of driving the robot in the maze.
Though I established a path-planning pipeline for the robot, the ToF sensor failure prevented me from testing it in the maze. However, both the localization algorithm and the path-following algorithm work on the lab's open ground. To avoid such sensor failures, and the motor failures that other students encountered, I suggest using more reliable robot hardware. With this inexpensive robot, feedforward control worked much better than feedback control, which makes the techniques we learned and implemented less meaningful than they should be.