ft243@cornell.edu
Fengkai TANG was awarded the degree of BEng Hons Electrical and Electronic Engineering in the First Class classification from The University of Nottingham.
Fengkai TANG is now pursuing the MEng Electrical and Computer Engineering program at Cornell University, where he is deepening his understanding of AI/ML, embedded computing, and CS algorithms.
Following the setup instructions, install the latest Arduino IDE and the SparkFun Apollo3 board support package on the laptop. Once everything is installed correctly, the Arduino IDE provides some starter examples, and we can begin our journey with Arduino.
Load the blink example file following the steps on the online instruction webpage. Unlike the tutorial, the actual board we used is the RedBoard Artemis Nano rather than the BlackBoard Artemis. Also, choose the corresponding port; otherwise the Arduino IDE will not find the device to upload the code to. As shown in the video, the delay time is set to 1000 ms, so the LED blinks every 1 second.
To use the serial monitor, the baud rate must be set to match the baud rate defined in the code; otherwise the serial monitor may display garbage. In this task, the baud rate is set to 115200. The code prints a few lines first, and then in the loop the serial monitor echoes whatever we type: the board continuously reads input and writes it back to the serial monitor.
The board continuously reads the analog signal and outputs it to the serial monitor, so we can see the real-time temperature. A small increase in temperature is noticeable when touching the chip on the board.
The board will analyze the signal by carrying out FFT operation. The loudest frequency will be displayed in the serial monitor in real time. The loudest frequency represents the frequency of the loudest signal received by the sensor.
I planned to have the chip detect the musical note A4. Referring to the Microphone Output example code, the detected frequency for the A4 note was 446 Hz (nominally A4 is 440 Hz; the difference is likely due to the FFT bin resolution). Therefore, the basic logic is: if the loudest frequency is 446 Hz, turn on the LED; otherwise turn it off. Coding is quite simple, combining the Task 1 and Task 4 code:
if (ui32LoudestFrequency == 446)  // exact match on the reported FFT bin
{
  digitalWrite(LED_BUILTIN, HIGH);
}
else
{
  digitalWrite(LED_BUILTIN, LOW);
}
Follow the prelab steps, which are detailed, to set up the environment for Python and BLE. BLE (Bluetooth Low Energy) is used for communication between the Artemis board and the laptop, and JupyterLab is used to run the Python code. I had installed Python myself, but strangely the python3 launcher was missing, so every "python3" command had to be replaced with "python". Initially I did not realize this, so I spent a lot of time creating the virtual environment. Another problem was that the virtual environment could not be activated; running "Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted" first allowed it to activate successfully. After loading the Python files through JupyterLab and setting up the necessary libraries for the Artemis, we can start our first step: finding the MAC address.
After running ble_arduino.ino, the MAC address printed for my board is c0:89:d:6c:2d:4b. A MAC address should be 12 hex digits long (two per octet), so I left-padded the short octet with a 0 to make the address c0:89:0d:6c:2d:4b.
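The padding step above can be sketched in Python; `normalize_mac` is a hypothetical helper name, not part of the course library.

```python
def normalize_mac(mac):
    """Zero-pad each octet of a MAC address to two hex digits."""
    return ":".join(octet.zfill(2) for octet in mac.split(":"))

print(normalize_mac("c0:89:d:6c:2d:4b"))  # c0:89:0d:6c:2d:4b
```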
Use uuid4() to create my own unique UUID for connecting my laptop and my Nano board. Replace the BLEService UUID in the ble_arduino.ino file and the ble_service entry in the connections.yaml file with this generated UUID. Also, in connections.yaml, update the MAC address with our address. One important thing: after changing the parameters in ble_arduino.ino, we must re-upload the sketch to the board, otherwise we cannot connect to the device.
A good start is to run the functions in the demo.ipynb file to verify that all the libraries are working. After updating the connections.yaml and ble_arduino.ino configuration, the connection is expected to succeed. In demo.ipynb, we tested the receive_float and receive_string functions and the PING command. Everything looked good, so we could start on the tasks. All the task code is written in demo.ipynb.
In this task, the computer sends a string to the Nano board, and the board must read the string and add the specific prefix "Robot says ->" and suffix ":)". The code is simple: extract the string, prepend the prefix, and append the suffix.
Then, in the Python file, call the ECHO command and use the receive_string function tested in demo.ipynb to receive the generated string (with prefix and suffix).
This task requires a GET_TIME_MILLIS command to get the time. We can use the millis() function, convert the result to a string, add the prefix "T:", and send it back to the laptop. Note the curly brace placement here: without it the sketch would not compile, so the board could not be updated with the code.
Before calling the GET_TIME_MILLIS command, we must add GET_TIME_MILLIS to both cmd_types.py and ble_arduino.ino; otherwise the call fails with an error.
At last, call this command and we should get the time with “T:” prefix.
With the help of a callback function, a notification handler can be set up to receive the string value. I named my callback function stringhandler. In it, the string is split on the colon, and a global variable time_value stores the time. Use the ble.start_notify() function to start the notification handler. As shown below, time_value contains only the time while s contains both "T:" and the time, which means the notification handler is working well.
We can find a loop() function in the ble_arduino.ino file, and inside it a write_data() function that sends data. Therefore, I modified write_data() to make the Nano board send time data continuously in the loop.
Collect 5 seconds of time data, put it all into "data_collector", and use the notification handler again to extract the desired time values.
The length of "data_collector" is the number of time messages received. Subtracting the first element of "data_collector" from the last gives the duration of the transmission. Dividing the message count by that duration (remembering to convert milliseconds to seconds, since millis() reports milliseconds) gives the data rate.
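The calculation above, as a small Python sketch (the sample timestamps are made up for illustration):

```python
def data_rate(timestamps_ms):
    """Messages per second, from the first and last millis() timestamps."""
    duration_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0  # ms -> s
    return len(timestamps_ms) / duration_s

# e.g. 5 messages spanning 400 ms -> 12.5 messages per second
print(data_rate([0, 100, 200, 300, 400]))
```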
Create an array to store the timestamps. Make it global in case other functions need to access it.
Similar to Task 4, the code needs to run in a loop. I wrote it in write_data() again and commented out the Task 4 code. The if logic prevents over-filling the array. In the end, the timeStamps[] array contains 100 timestamps.
Similar to Task 2, create a SEND_TIME_DATA command (remember to add it to cmd_types.py). After all the time data is sent, I send one extra "END" message to signal that transmission is complete.
I designed a new callback function, stringhandler2, which detects the "END" message. It turned out not to be strictly necessary here, but I think this design might be useful in the future, so I kept it.
Again, the code goes in write_data(), which runs in a loop. Each element in the timestamp array must correspond to an element in the temperature array, so every time a new element is stored, both arrays use the same index (timeStampIndex in the code shown below).
Add a new command, GET_TEMP_READINGS. Each time, the Nano board sends back one timestamp and one temperature reading, separated by a comma.
A new callback function, stringhandler3, was designed. It splits the message on the comma and appends the first part to the time list and the second to the temperature list. In the end, 100 corresponding timestamps and temperature readings are sent and stored in the two lists.
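The splitting logic can be sketched as below; the callback signature is the same assumption as before, and the payload here is a simulated "time,temperature" message.

```python
time_list = []
temp_list = []

def stringhandler3(uuid, byte_array):
    """Split a 'time,temp' payload into two parallel lists."""
    t, temp = byte_array.decode().split(",")
    time_list.append(t)
    temp_list.append(temp)

# simulate one notification from the board
stringhandler3("dummy-uuid", bytearray(b"1024,25.3"))
```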
One method is instant send and the other is batch store-and-send. The instant send method transmits each sample as soon as it is generated (Task 4); the advantage is real-time monitoring, and the disadvantage is that data can be lost if the connection is unstable. The batch store-and-send method stores the data locally first and transmits it later; the advantage is that no data is lost, and the disadvantage is the added delay.
384kB = 384 * 1000 Byte = 384000 Bytes = 384000 * 8 bit = 3072000 bits
If the timestamp is a 4-byte integer and the temperature reading is a 4-byte float, each data point is 8 bytes, so 384000 bytes / 8 bytes = 48000 data points. In practice the limit is lower, because not all of the RAM can be dedicated to buffered data.
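The same budget, worked in Python:

```python
ram_bytes = 384 * 1000   # 384 kB of RAM, as above
point_bytes = 4 + 4      # 4-byte int timestamp + 4-byte float temperature
max_points = ram_bytes // point_bytes
print(max_points)        # 48000 data points (upper bound)
```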
I modified ECHO command to let the board send back the exact message that the computer sends to the board.
I tested the data rate as the message size increased from 5 bytes to 120 bytes, calling the ECHO_BYTE command 115 times. Each time I computed the data rate and appended it to the data_rates list.
Plotting the figure, the data rate increases with packet size, indicating that larger packets transfer data more efficiently per unit time. Short packets introduce proportionally more overhead, while larger packets amortize it: the header and acknowledgment information become a smaller percentage of each packet.
A higher rate did not lead to any missing data, which suggests BLE transmission is quite reliable. However, some packets may still be lost at extremely high rates.
In this lab, I got more familiar with the Nano board by working through the provided examples. I also used BLE to connect the laptop and the board to transfer data, which proved convenient and reliable. I ran into some problems with the virtual environment, and online tutorials helped a lot.
In Lab 2, we get more familiar with the 9DOF IMU sensor. We tested the accelerometer and gyroscope functions of the IMU and drove the RC car to learn more about its behavior.
Below is a figure of roll, pitch, and yaw, which we use throughout Lab 2. Understanding it is helpful for computing roll, pitch, and yaw from the accelerometer and gyroscope data.
First, connect the IMU to the Artemis Nano board.
Below is a video showing the IMU example code working. I added some code to blink the LED three times whenever the board starts, so I know the sketch is running. When I rotated the IMU, the accelerometer and gyroscope values changed accordingly, which is the basis for computing roll, pitch, and yaw.
AD0_VAL is the last bit of the I2C address. By default it is set to 1, but when the address jumper is closed, it becomes 0. From the datasheet, the IMU slave address is 110100x, where "x" is controlled by AD0_VAL. This means two IMUs can share the same I2C bus at the same time: set AD0_VAL low on one device and high on the other.
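The 110100x addressing can be checked with a couple of lines of Python (`imu_address` is an illustrative helper, not library code):

```python
BASE = 0b1101000  # the fixed 110100x prefix with x = 0

def imu_address(ad0_val):
    """7-bit I2C address of the IMU as a function of the AD0 bit."""
    return BASE | ad0_val

print(hex(imu_address(1)), hex(imu_address(0)))  # 0x69 0x68
```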
Using the serial plotter, we can see some noise while the IMU is stationary; when it moves, the signal changes and the noise appears more significant. If necessary, the IMU can be restarted after some movement to make the data more reliable.
Use atan2() to calculate pitch and roll with accelerometer sensor readings. The formula was provided in the lecture example code.
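A Python sketch of the atan2-based conversion. The axis ordering below is one common convention; the lecture example code may order the axes differently, so treat this as an illustration rather than the exact course formula.

```python
import math

def roll_pitch(ax, ay, az):
    """Roll and pitch in degrees from accelerometer readings (in g)."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay**2 + az**2)))
    return roll, pitch

print(roll_pitch(0.0, 0.0, 1.0))  # flat and level -> (0.0, 0.0)
```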
From the video we can see that the accelerometer was quite accurate, so two-point calibration was not needed in my case.
Counting from the serial monitor, about 30 samples were generated per second, so the sampling rate is roughly 30 Hz. I analyzed 4 seconds of data. Roll and pitch data in the time domain:
Below is the frequency-domain figure. The noise amplitude is too low to be noticeable. Since the pitch and roll frequency-domain figures are nearly identical, I only posted the pitch ones.
Focusing on the low-frequency part, the amplitude is already close to 0 by 10 Hz. Since the high-frequency amplitude is also very small, a low-pass filter is not strictly needed, and the datasheet shows this IMU has a built-in low-pass filter, so there is even less need to add one.
Since my sampling rate is 30 Hz, the Nyquist sampling theorem tells us the highest frequency that can be captured and reconstructed without ambiguity (the Nyquist frequency) is 15 Hz. So it makes sense to set the cutoff frequency of my low-pass filter to around 15 Hz.
After applying the low-pass filter, there was not much difference.
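For reference, a first-order IIR low-pass filter with the sampling rate and cutoff discussed above can be sketched as below; the RC-derived alpha formula is a standard choice, not necessarily the exact one from the lecture code.

```python
import math

fs = 30.0   # sampling rate (Hz)
fc = 15.0   # chosen cutoff, near the Nyquist frequency
dt = 1.0 / fs
rc = 1.0 / (2.0 * math.pi * fc)
alpha = dt / (dt + rc)   # smoothing coefficient in (0, 1)

def low_pass(samples):
    """y[n] = alpha * x[n] + (1 - alpha) * y[n-1]"""
    out = [samples[0]]
    for x in samples[1:]:
        out.append(alpha * x + (1.0 - alpha) * out[-1])
    return out
```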
From the figure below, the gyroscope data is smooth, meaning it is not affected much by noise. The disadvantage is that the gyroscope data drifts over time; the drift eventually becomes significant enough to corrupt the computed angle. I also found that the longer the delay introduced in the loop, the more the gyroscope drifts, so long delays should be avoided during operation.
The video below shows there are always some errors between the gyroscope pitch and the accelerometer pitch. One likely reason is that the gyroscope pitch is obtained by integration, so integration error accumulates and cannot be eliminated. Changing the sampling frequency does not help much with this problem.
Since the accelerometer is noisy and the gyroscope drifts, a complementary filter can be used: it blends part of the accelerometer reading with part of the gyroscope reading to compute the output. The weight alpha was set to 0.1 in my case.
With the complementary filter, the noise was heavily reduced. One thing to mention: do not define the filter state variables inside the loop, otherwise they are reset to their initial values every iteration and the filter breaks.
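The standard complementary-filter update, with alpha = 0.1 as above, can be sketched in Python. The simulated stationary IMU below (zero gyro rate, constant accelerometer angle) is just an illustration.

```python
alpha = 0.1  # weight on the accelerometer estimate

def complementary(prev_angle, gyro_rate, acc_angle, dt):
    """Blend the integrated gyro angle with the accelerometer angle."""
    return alpha * acc_angle + (1.0 - alpha) * (prev_angle + gyro_rate * dt)

# stationary IMU: gyro reads 0 deg/s, accelerometer reads the true 10-degree tilt
angle = 0.0
for _ in range(50):
    angle = complementary(angle, 0.0, 10.0, 0.01)
print(round(angle, 2))  # converges toward 10 degrees
```

Note how the persistent drift of a pure gyro integration is pulled back toward the accelerometer reading each step, while high-frequency accelerometer noise is scaled down by alpha.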
Use a flag to control data collection. First, connect the Artemis to the laptop via Bluetooth, then send a command that sets the flag to true. As a result, the time-stamped IMU data is stored into an array.
Deleting all delays in the loop, the sampling rate is around 233 samples per second, i.e., roughly one sample every 4 ms. The Artemis can run the main loop much faster, typically tens of kilohertz, which far exceeds the data update rate of most IMUs.
Storing the data for each sensor separately improves the clarity and maintainability of the code, so I used different arrays for different sensors. Although float data takes more space than int, float is the right choice because our data carries a decimal part, which is important information. Be careful about the array size: I initially set it to 200 and caused an overflow!
A float is 4 bytes, and all my data is float, so one set of data takes 28 bytes. The Artemis has 384 kB of RAM, so 384000 / 28 ≈ 13700 sets of data, roughly one minute of data at the ~233 Hz sampling rate; in practice less, since not all RAM is available for data storage.
The car runs so fast that it is very hard to control, and it was discovered that its speed is effectively not adjustable: it takes very little time to accelerate to maximum speed.
We set up the Time-of-Flight (ToF) sensors, specifically the VL53L1X. We connected a QWIIC breakout board to the Artemis, and two ToF sensors and one IMU are connected to the Artemis through the breakout board.
The default I2C address of the VL53L1X is 0x52, but both ToF sensors would use this same address. To use the sensors simultaneously, one approach is to programmatically shut one of them down (call it Sensor 1) through its shutdown pin (I soldered Sensor 1's shutdown pin to an Artemis pin, pin 4 in my case), change the address of the sensor that is still on (Sensor 2) so the two will not conflict, and then turn Sensor 1 back on. After this, the two ToF sensors use different addresses and can work simultaneously.
A brief sketch of the wiring diagram is shown below. One ToF sensor's XSHUT pin is soldered to Artemis pin 4 for the address change.
One of the ToF sensors has to be placed at the front of the car, because we need to know if there are obstacles in the direction of travel. The other could also be placed at the front for a more accurate reading, but that would add little: we would only know what is in front, not what is to the side. Placing it on the side instead gives us information in both directions.
The figure below shows the connection between the ToF sensor and the breakout board: red – Vin, black – GND, blue – SDA, yellow – SCL, for soldering reference.
Running the example Wire_I2C sketch shows the ToF address as 0x29, which differs from the 0x52 in the datasheet. Looking closer at the datasheet, the least significant bit of 0x52 is the read/write bit; dropping that bit leaves the seven-bit address 0x29. So the R/W bit is simply not counted as part of the address.
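The bit arithmetic can be verified in one line of Python:

```python
write_address = 0x52             # 8-bit address from the datasheet (R/W bit included)
seven_bit = write_address >> 1   # drop the read/write bit
print(hex(seven_bit))            # 0x29, which is what the I2C scanner reports
```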
The ToF library offers two modes: short (1.3 m) and long (4 m). Short mode offers higher accuracy, while long mode detects over a longer distance. I think 1.3 m is enough range for the robot to react, and its data is less affected by noise; if the car were really fast, long mode could be chosen for a longer adjustment distance. In this lab, I chose short mode to test the ToF.
I tested the reading every 100 mm (from 100 mm to 1300 mm) to evaluate the accuracy of the ToF sensor under different lighting conditions. During testing, I found that one of my ToF sensors has a blind spot of around 100 mm, meaning it cannot detect obstacles that are too close to it. According to the results, the sensor is accurate and reliable. Lighting conditions have a small effect; in general, results under dim lighting were slightly more accurate.
As discussed in the Prelab section, connect both sensors, turn one off, change the other's address, and turn the first back on. Then the two ToF sensors can be used simultaneously.
The video below shows that the two sensors worked well.
To test the sensor speed, we can use the millis() function to measure the duration between two successive readings.
From the result below, it takes about 100 ms to produce one reading. There are several limiting factors: the delay() calls, the serial printing inside the loop, the baud rate, and so on.
Similar to Lab 2, use BLE to transmit the data from the Artemis to the laptop, and create a handler in the Python file to process the data.
The ToF sensor is an active infrared distance sensor: it emits infrared light and receives the reflected signal after it hits an object, and the time between emission and reception is used to calculate the distance. The other kind of infrared sensor is the passive infrared sensor, which only receives infrared radiation emitted by other objects. For measuring distance, we should use an active infrared sensor such as the ToF. A few factors can affect the accuracy of an infrared sensor: for example, interference from external light and smooth or reflective surfaces can degrade accuracy.
I tested different colors such as red, black, and yellow, as well as smooth and reflective surfaces; the results showed that these do not really influence the accuracy. However, when I moved the obstacle farther away, around 1 m, the measurement on the black obstacle was less accurate. This is likely a combination of the distance (even though I chose short mode) and the fact that a dark obstacle absorbs more light, so less light energy returns to the sensor.
We use two dual motor drivers to control the two motors, so we must select appropriate pins to drive them. One motor is connected to pins 6 and 7 and the other to pins 11 and 12. These pins were chosen for their PWM capability: they support analogWrite, as indicated by the "~" marking in the documentation. The circuit diagram is shown below.
We have 650 mAh and 850 mAh batteries and use both: the smaller 650 mAh battery powers the Artemis board, while the larger 850 mAh battery powers the dual motor drivers, since the drivers consume more power. Two separate batteries avoid transient effects, preventing large current changes from resetting the Artemis.
For testing, the Vin pin is not yet soldered to the battery, so a power supply powers the motor drivers instead. Since the 850 mAh battery outputs 3.7 V, the power supply is also set to 3.7 V, with its positive terminal connected to the driver's Vin pin and its negative terminal to GND. The oscilloscope probe hook connects to the driver output and the clamp to GND; with a common ground, the reading is accurate.
I set pin 11 to zero so it outputs no PWM, then drove pin 12 with a PWM signal by changing the parameter in analogWrite. Too small a duty cycle cannot make the wheel rotate, while too large a duty cycle is very powerful and noisy, so for testing I chose a 160/255 duty cycle. Another thing to mention: the two motors spin at different speeds under the same duty cycle, probably due to mechanical differences. This is discussed further later.
With a 160/255 duty cycle and a 3.7 V peak-to-peak signal, the average voltage should be 3.7 × 160 / 255 ≈ 2.32 V, close to the 2.44 V shown on the oscilloscope. Adjusting the duty cycle changes the ON time; here I adjusted the duty cycle to 100/255.
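The average-voltage arithmetic, checked in Python:

```python
v_supply = 3.7        # supply / peak-to-peak voltage (V)
duty = 160 / 255      # PWM duty cycle
v_avg = v_supply * duty
print(round(v_avg, 2))  # ~2.32 V, close to the 2.44 V measured on the scope
```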
To spin the motor in the other direction, set pin 12 to zero and control the motor with the duty cycle of pin 11's PWM. For each driver, when one input is driven and the other is LOW, the wheel spins in one direction; to reverse the direction, simply swap the roles of the two inputs.
The video below shows one dual motor driver working as expected, with the 850 mAh battery powering the driver.
The video below shows both motor drivers working as expected, with the 850 mAh battery powering them. One caution: do not short the Vin pin to the GND pin, or the motor driver will be destroyed. I broke one driver because those pins shorted by accident; I later used heat shrink to keep the pins from shorting each other.
Before discussing the lower limit, let’s look at my hardware connection. I placed one ToF at the front of the car and the other ToF at one side. The IMU was placed next to the Artemis.
The lower-limit PWM value is the smallest value that just makes the car move. As observed before, the motors do not spin at the same rate, so the lower limit differs between the two drivers. After multiple tests, the lower limit for the left motor is 50 and for the right motor is 33.
The lower-limit PWM value for an on-axis turn is much larger than the one for moving forward. After testing, the lower limit for the left motor is 150 and for the right motor is 200 (turning left). The video below shows the lower limits for moving and for an on-axis turn.
There are two ways to stop the car: hard stop and soft stop. A hard stop sets all inputs to 255, which brakes the motors immediately so the car stops in a very short distance; the wheels feel locked and cannot turn. A soft stop sets all inputs to 0, which just removes the drive, so the car coasts some distance before becoming stationary. Therefore, I used a hard stop first, followed by a soft stop.
Since the two motors spin at different rates under the same PWM, a correction factor is needed to make the car drive in a straight line. With a PWM value of 100, the correction factor was found to be 0.66; the right motor always spins faster, so the factor is applied to the right motor. The video below shows the robot moving in a fairly straight line.
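A sketch of applying the 0.66 correction factor described above; `wheel_pwms` is an illustrative helper name.

```python
CORRECTION = 0.66  # the right motor spins faster, so scale it down

def wheel_pwms(pwm):
    """Return (left, right) PWM values for straight-line driving."""
    return pwm, int(pwm * CORRECTION)

print(wheel_pwms(100))  # (100, 66)
```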
To demonstrate open-loop control, I programmed the robot to move forward for one second, make a left turn, and then move forward for another second.
The code used for the analogWrite frequency test is shown below. I measured the time spanned by four consecutive analogWrite calls to calculate the frequency.
The output is shown below: around 10 ms elapses across the four analogWrite calls, so the frequency is around 400 Hz. This should be sufficient for the robot because it allows adjustments within a small amount of time. Manually configuring the timers can create a faster PWM signal, which outputs a smoother effective DC voltage to the motor driver and makes the robot's movement more stable.
It takes a larger PWM value to start the robot from a standstill, but a smaller PWM value can keep it moving. I made the robot move for two seconds and then decreased the PWM value to find the minimum that sustains motion; the lowest PWM value was found to be 40.
The purpose of this lab is to have a better understanding of PID control. I implemented a PID controller onto the robot to make the robot stop at the desired distance from the wall.
I used BLE to send one command that starts the robot and another that retrieves the PID data from it. To make tuning the PID values and the target setpoint easier, I also created a command that changes Kp, Ki, Kd, and the setpoint over BLE. The set-PID-and-setpoint command is shown below:
To prevent data transmission from slowing down the control loop, I store the PID data in arrays first, and the robot sends them when I call the command. I stored the time, the front ToF distance, the error, and the PWM values for both motor drivers.
The PID formula is shown below.
A PID controller considers the error between the setpoint and the current value, and applies proportional (Kp), integral (Ki), and derivative (Kd) terms to drive the error toward zero. In this lab, the controller keeps the robot 40 cm from the wall. The proportional term is the error multiplied by the gain Kp: it produces a control effort proportional to the current error, so the larger the error, the stronger the control. Increasing Kp makes the system respond faster, but too high a Kp causes oscillation, while a low Kp eliminates error slowly. The integral term is the accumulated error over time multiplied by Ki; its main function is to eliminate steady-state error, and a suitable Ki removes that error quickly without causing oscillation. The derivative term is the rate of change of the error multiplied by Kd; it anticipates future error trends so control action can be taken in advance, suppressing excessively rapid error changes.
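A minimal Python sketch of one PID update with the three terms described above. The gains are the ones used later in this lab; dt and the integral clamp i_max are assumed values for illustration, not the exact on-board code.

```python
def pid_step(error, state, kp=2.0, ki=0.3, kd=2.0, dt=0.1, i_max=200.0):
    """One PID update. state = (integral, previous_error)."""
    integral, prev_error = state
    # accumulate the error, clamped to avoid integral windup
    integral = max(-i_max, min(i_max, integral + error * dt))
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# a single step with a 100 mm error from rest
out, state = pid_step(100.0, (0.0, 0.0))
print(out)  # 2*100 + 0.3*10 + 2*1000 = 2203.0
```

In practice the output would additionally be capped to the valid PWM range before being written to the motors.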
I first tried a PI controller, but the robot always ran into the wall before settling at the 40 cm distance. So I switched to a full PID controller, and the robot performed better. The final choice was Kp = 2, Ki = 0.3, and Kd = 2.
The ToF sensor updates roughly every 110 ms, which is sufficient for the PID control to respond. I used short-distance mode, which I think provides enough range. One problem is that the ToF reading is not perfectly accurate or stable: with the robot stationary, the reading fluctuated by about 5 mm. Therefore, in the code, when the error is between -5 and 5 mm, I stop the robot.
I used the SET_PID and SET_SETPOINT commands to adjust the parameters under test. The PID control stops after 100 iterations; on each iteration the values of interest are stored into the corresponding arrays, which are sent to my laptop after calling the SEND_DATA command. I capped the PWM value at 150 to keep the robot from moving too fast. I used the PID output directly as the drivers' PWM, with the correction factor applied so the robot moves in a straight line. When the absolute error is larger than 5 mm, PID controls the robot; otherwise, the robot stops (because of the sensor error discussed above).
From the figure below, the robot stops at a distance of around 400 mm, which can also be seen in the videos. I made two videos, one on the lab floor and one on carpet; on carpet the robot cannot move as fast, so it is easier to stop.
In my code, I clamp the integral term to avoid integral windup: if a large error persists, the integral easily winds up and pushes the PWM past the 255 limit. I also cap the PWM itself so it cannot exceed 255.
In this lab, PID is used for orientation control, which means driving the wheels in opposite directions to control the robot's heading.
I reused the BLE command from the last lab to adjust the PID parameters. As in the last lab, I store the data in arrays and call a command to send it all in one go, just using different arrays this time. The PID parameter command is shown below:
The reason I did not transmit data during PID operation is that I did not want to slow down the loop.
This was discussed in detail in the last lab. In general, a larger proportional gain Kp makes the system respond faster, so in this lab I chose a fairly large Kp to adjust the orientation quickly. A suitable integral gain Ki helps the system eliminate steady-state error quickly without causing oscillation. The derivative gain Kd mainly predicts future error trends so control decisions can be made in advance, suppressing excessively rapid error changes.
In the end, I chose Kp = 10 (a fairly large value), Ki = 0.5, and Kd = 1. This combination not only responds fast but also keeps the system stable.
The IMU's sampling frequency is around 100 to 200 Hz, which is more than enough for PID control to respond. The accelerometer is noisy and the gyroscope drifts, so raw readings may be unstable, but a complementary filter settles this. In general, the data is quite accurate.
I used the PID parameters above and ran the control for 20 seconds. The error here, unlike before, is the sum of the previous error and the new IMU reading. I mounted the IMU vertically, so I used the gyroscope x-axis value instead of the z-axis. Think of the error this way: a rotation can be considered as many small rotations, and the error is the sum of the errors of those small rotations. The first several errors are set to 0 because the data at startup is unstable, which would otherwise pollute the integral term (Ki) and keep the robot rotating even with no external disturbance. I tested that rotating the robot clockwise makes the gyroscope x-axis value negative, so when the PID result is negative the robot should rotate counter-clockwise, and vice versa.
The figure below helped me debug. When the first several errors were not zeroed, the integral term saturated at its maximum of 200 (my clamp value), which kept the PID output large even without any change in the gyroscope data. After zeroing the first several errors, the PID control worked normally.
Below are some data plots from the run. When the PWM is positive the robot rotates counter-clockwise; otherwise, it rotates clockwise.
As before, I clamp the integral term to keep the PID result from growing too large, and I cap the PID result at 255 to avoid passing an invalid PWM value to analogWrite.
In this lab we learned about the Kalman Filter, which is necessary because our robot moves faster than the ToF sensor can report distances. The Kalman Filter can predict where the robot is heading and estimate the distance ahead of the sensor readings, helping avoid collisions.
The Kalman Filter predicts the distance of the robot according to a state-space model, which depends on two important parameters: drag and mass. Then, based on the sensor reading, the Kalman Filter updates its estimates.
To find the matrices A, B, and C, we first estimate drag and mass from the robot's velocity. Drag is d = 1 / (steady-state speed), and mass is m = -d * t_90 / ln(1 - 0.9), where t_90 is the time taken to reach 90% of the steady-state speed. The figure below shows the steady-state speed when the robot drives toward the wall: roughly 1900 mm/s, with t_90 around 2 seconds. Therefore, d = 1 / 1900 = 0.000526 and m = -d * t_90 / ln(0.1) = 0.000457.
From these we can calculate the remaining matrices A, B, Ad, and Bd. Note that C = [-1 0] is not calculated; it is defined that way by convention.
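The calculation above can be sketched as follows. The drag/mass numbers come from the estimates in the text; the sample time `dt = 0.1` s and the first-order discretization are my assumptions about how Ad and Bd were formed.

```python
import numpy as np

# State-space setup from the measured steady-state speed and 90% rise time.
v_ss, t90 = 1900.0, 2.0                 # mm/s, seconds (from the step response)
d = 1.0 / v_ss                          # drag  ~ 0.000526
m = -d * t90 / np.log(1 - 0.9)          # mass  ~ 0.000457

A = np.array([[0.0, 1.0],
              [0.0, -d / m]])           # state: [position, velocity]
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[-1.0, 0.0]])             # defined, not calculated

dt = 0.1                                # assumed sample period (s)
Ad = np.eye(2) + dt * A                 # first-order (Euler) discretization
Bd = dt * B
```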
The Kalman Filter also needs the measurement noise and process noise (which depend on the sampling rate). From the figure below, we need sigma1, sigma2, and sigma3. Following the lecture slides, sigma1 = 27.7, sigma2 = 27.7, and sigma3 = 20.
To implement it in Jupyter Lab, we combine the matrices calculated above into one KF function. The parameter u is the robot's PWM value while moving, and the parameter d is the distance from the wall. Note that u must be normalized to the fraction of the PWM range that actually moves the car: the smallest PWM value that moves my robot is 50 and the maximum is 255, so u becomes (u - 50) / (255 - 50).
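A minimal version of such a KF step might look like the following. This is a sketch of the standard predict/update equations, not my exact notebook code; the matrix arguments and noise covariances are passed in explicitly, and the PWM normalization follows the range given above.

```python
import numpy as np

# One Kalman Filter step: predict with the motion model, then correct
# with the ToF measurement y. All matrices are supplied by the caller.
def kf(mu, sigma, u, y, Ad, Bd, C, Sigma_u, Sigma_z):
    # Prediction
    mu_p = Ad @ mu + Bd @ u
    sigma_p = Ad @ sigma @ Ad.T + Sigma_u
    # Update
    K = sigma_p @ C.T @ np.linalg.inv(C @ sigma_p @ C.T + Sigma_z)
    mu_new = mu_p + K @ (y - C @ mu_p)
    sigma_new = (np.eye(len(mu)) - K @ C) @ sigma_p
    return mu_new, sigma_new

# Normalize the raw PWM to the fraction of the range that moves the car.
def normalize_pwm(u_raw, pwm_min=50.0, pwm_max=255.0):
    return (u_raw - pwm_min) / (pwm_max - pwm_min)
```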
By adjusting the sigma values, we get different predictions, since they change how much the sensor is trusted: the less the sensor is trusted, the more the output comes from the KF prediction.
To implement the KF on the robot, we translate the Python into C. The code is shown below.
In this lab we were required to carry out one of the stunt tasks. I chose Task A, Position Control, which performs a flip and drives back.
To flip the robot 0.5 m from the wall, my plan was to drive the robot at the wall at full speed and then suddenly reverse both motors, so that the robot flips due to inertia. I therefore removed the PID used in the previous labs, since PID control would slow the robot's approach to the wall. I planned to use the Kalman Filter, but it did not work well: because the Kalman Filter predicts much faster than the ToF sensor reads, an inaccurate prediction kept the robot's speed low, and then it could not flip. Recalculating the drag (d) and mass (m) did not solve the problem, so I gave up on the Kalman Filter and used the ToF sensor readings directly.
Before the flip, the robot drives at the wall with both motors at the maximum PWM value of 255. When the sensor reading drops below a threshold, the robot drives backward at PWM 255 for a short time to flip, and then continues backward at PWM 150. Because the sensor does not read fast enough, when I set the threshold to 500 the robot usually failed to flip; I believe that by the time the reading reached 500, the robot had no time or distance left for the next maneuver, so it ran into the wall instead of flipping. I therefore raised the threshold to 1200. Nominally this triggers the flip 1.2 m from the wall, but since the sensor reads slowly and the robot needs time to react, it actually flips at about 0.5 m, on the sticky mat.
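The sequencing above amounts to a small state machine. The sketch below is illustrative only: the state names, the 0.5 s flip duration, and the single-PWM return value are my assumptions; the 1200 mm trigger and the 255/-255/-150 PWM levels come from the text.

```python
# Flip stunt state machine sketch (illustrative, not the on-robot code).
FLIP_TRIGGER_MM = 1200   # trigger early: sensor lag means the actual flip
                         # happens around 0.5 m from the wall

def motor_command(state, tof_mm, elapsed_in_state):
    """Return (next_state, pwm) where pwm < 0 means drive backward."""
    if state == "FORWARD":
        if tof_mm < FLIP_TRIGGER_MM:
            return "FLIP", -255          # slam into full reverse to flip
        return "FORWARD", 255            # full speed toward the wall
    if state == "FLIP":
        if elapsed_in_state > 0.5:       # assumed flip duration
            return "RETREAT", -150       # gentler backward drive
        return "FLIP", -255
    return "RETREAT", -150
```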
The videos below show that my robot indeed flipped and moved backward. However, one problem remains unsolved: when moving backward, the left motor always stops earlier than the right. As a result, the robot cannot move backward in a straight line, but instead rotates in place. After discussing with the TAs, we concluded this is a mechanical problem, so there is not much I can do about it. In the future I could use the IMU to keep the robot moving backward in a straight line.
The PWM value is 255 as the robot moves forward, then -255 (i.e., backward at 255) to cause the flip. Finally, the PWM value is -150 to drive backward in a straight line, followed by a hard stop (both motors set to 255). The distance versus time is shown below.
The goal of this lab is to map out a static room using ToF sensor data. This room will also be used in Lab 11 and Lab 12. The robot is placed at five designated spots, which together provide enough information about the room. At each spot, the robot spins about its own axis while gathering ToF readings. Once readings from all five spots are collected, we can plot a 2D map using transformation matrices.
There are three ways to control the robot while collecting ToF data; I preferred angular speed control. I planned for the robot to spin at about 20 degrees per second, which requires PID control of the angular speed. To keep the spin on one axis, in my case it was the gyroscope's X axis, since I mounted the IMU vertically. For the slowest workable motor PWM value, friction and battery charge are the two most important factors: on different floors and with different battery conditions the slowest PWM value varies, but overall it is around 120 to 130.
The below video and figures demonstrate the angular speed control.
The video shows the robot spinning slowly and roughly on-axis, but not very stably; it tends to drift in a slight circle. The average angular speed should be 20 deg/s, but the IMU reading fluctuates due to friction between the tile and the wheels. It took around 15 seconds to complete a 360-degree spin, so the average angular speed is 24 deg/s, which is close to 20. My PID output, which is the PWM value, fluctuates between 120 and 127.
The five designated spots are (5, -3), (5, 3), (0, 3), (0, 0), and (-3, -2), so in total five sets of distance data were recorded. At each spot I recorded twice and kept the better run. The five polar plots are shown below.
Transformation matrices convert the distance values from the ToF sensor into the inertial reference frame of the room. My transformation equation is shown below. Since the data was obtained at different locations, an angle offset may be needed to get the correct mapping, and the units must be consistent (1 foot = 304.8 mm). Each location has its own X and Y offset.
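The transformation can be sketched like this. It assumes the ToF sensor points along the robot's +x axis and ignores the small sensor-to-center offset; the function name and argument layout are illustrative.

```python
import numpy as np

FT_TO_MM = 304.8   # unit conversion for the spot coordinates (given in feet)

def tof_to_world(distance_mm, theta_rad, spot_xy_ft):
    """Rotate a sensor-frame point by the robot yaw, then translate by
    the spot's position in the room frame (feet converted to mm)."""
    R = np.array([[np.cos(theta_rad), -np.sin(theta_rad)],
                  [np.sin(theta_rad),  np.cos(theta_rad)]])
    p_sensor = np.array([distance_mm, 0.0])   # sensor looks along robot +x
    return R @ p_sensor + np.array(spot_xy_ft, dtype=float) * FT_TO_MM

# Example: robot at spot (5, 3) ft facing +y (yaw 90 deg), wall 1000 mm ahead
pt = tof_to_world(1000.0, np.pi / 2, (5, 3))
```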
The mapping was not satisfactory, so I increased the PWM value to spin faster and collect fewer points, reducing useless data.
Based on the scatter plot, I added the actual walls and boxes: I estimated their locations and stored the corresponding X and Y values.
In this lab we implement a Bayes filter to perform grid localization using a simulation of the robot. The basic principle of the Bayes filter is that the probability density of a state is first predicted by a system model and then corrected with the most recent measurements.
The map is a 3-dimensional grid of potential robot poses. The three dimensions are x, y, and theta, where x and y are the robot's position and theta is its angle with respect to the inertial frame. In total there are 1944 poses, since the x, y, and theta dimensions have 12, 9, and 18 cells respectively.
The provided Jupyter notebook and predefined classes help a lot; we only need to complete five functions to implement the Bayes filter.
compute_control takes in the previous and current poses and extracts the control information based on the odometry motion model: the first rotation, the translation, and the second rotation.
odom_motion_model takes in the current pose, previous pose, and control input u. It computes the probability of the robot having moved to the current pose using Gaussian distributions, comparing the compute_control outputs (rot1, trans, rot2) with the input u.
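The two functions above can be sketched as follows, using the standard rot1-trans-rot2 decomposition of the odometry motion model. The angle-normalization helper and the sigma values are illustrative assumptions, not the course's exact code.

```python
import numpy as np

def normalize_angle(a):
    """Wrap an angle in degrees into [-180, 180)."""
    return (a + 180.0) % 360.0 - 180.0

def compute_control(cur_pose, prev_pose):
    """Extract (rot1, trans, rot2) from two poses (x, y, theta_deg)."""
    dx, dy = cur_pose[0] - prev_pose[0], cur_pose[1] - prev_pose[1]
    rot1 = normalize_angle(np.degrees(np.arctan2(dy, dx)) - prev_pose[2])
    trans = np.hypot(dx, dy)
    rot2 = normalize_angle(cur_pose[2] - prev_pose[2] - rot1)
    return rot1, trans, rot2

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def odom_motion_model(cur_pose, prev_pose, u, sig_rot=15.0, sig_trans=0.1):
    """p(cur_pose | prev_pose, u): product of three independent Gaussians."""
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)
    return (gaussian(rot1, u[0], sig_rot)
            * gaussian(trans, u[1], sig_trans)
            * gaussian(rot2, u[2], sig_rot))
```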
prediction_step estimates the position of the robot using only the odometry values, without any sensor measurements. There are six nested for loops, since there are three dimensions each for the previous and current pose. To speed things up, we skip states whose prior belief is smaller than 0.001; such states are not worth computing.
sensor_model takes in the true observations of the robot and outputs an array storing p(z|x) for each of the 18 sensor measurements, again using a Gaussian distribution to calculate the probability.
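A per-ray Gaussian likelihood of this kind might look like the sketch below; the sensor sigma value here is an illustrative assumption.

```python
import numpy as np

def sensor_model(obs, expected, sigma=0.11):
    """Return an array of p(z|x) values, one per range measurement,
    as Gaussian likelihoods of the observed vs. expected distances."""
    obs, expected = np.asarray(obs, float), np.asarray(expected, float)
    return (np.exp(-0.5 * ((obs - expected) / sigma) ** 2)
            / (sigma * np.sqrt(2 * np.pi)))
```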
update_step updates the beliefs using the sensor_model: the new belief depends on how well the prior belief agrees with the sensor measurements.
Below are demos of the Bayes filter. The green line is the ground truth, the blue line is the Bayes filter output, and the red line is the odometry reading. The Bayes filter output (blue) is very accurate even though the odometry readings are poor: the blue line almost coincides with the green line.
In the previous lab we successfully implemented the Bayes Filter in simulation; in this lab we apply it to the real robot. The distance data was collected by the ToF sensor and processed by the optimized localization code in the Jupyter Notebook.
First, run the localization code in simulation to verify it works. From the figure below, the blue line (Bayes Filter) aligns well with the green line (ground truth), matching the results of the last lab.
For the real robot, as in Lab 9, we collect data at four points: (-3,-2), (0,3), (5,-3), and (5,3). At each point we want exactly 18 distance readings, i.e., one reading per 20 degrees of rotation. Sometimes more than 18 readings are recorded; in that case we keep only the first 18 and discard the rest.
In the perform_observation_loop function provided in the Lab 11 Jupyter notebook, we process the distance data. Because the data is transmitted over Bluetooth, we must account for the time delay, which is what "asyncio.run(asyncio.sleep(30))" does: it gives the robot 30 seconds to complete the rotation and send the data before processing begins. We do not use time.sleep() here because it blocks everything, including the Bluetooth notification handling that must keep running in the background.
(-3,-2)
(0,3)
(5,3)
(5,-3)
The results show good localization at some points and poor localization at others. The poor cases are likely due to unreliable sensor readings; for example, point (0,3) may have lost some distance data along the x axis. Another possible problem is that the robot did not rotate exactly 20 degrees between readings.
This final lab uses what we learned in the previous labs to create a path-planning algorithm for the robot. There are nine waypoints on the map, and the robot is expected to reach all of them, running autonomously and adjusting its position using localization and PID control.
At the beginning of the lab, I expected the robot to localize itself at each waypoint, adjust its position, and calculate the distance and angle to the next waypoint. However, this did not work well and the test results were unsatisfactory. Having run out of time, I dropped localization and used only PID control plus open-loop control for the robot's rotation.
From the figure below, at the first four waypoints the robot needs to rotate 45 degrees, and at the remaining three waypoints it needs to rotate 90 degrees. I used open-loop control for this.
The main functions are forward, back, turn left, and turn right. Forward drives the robot forward, back adjusts the distance, and the turn functions adjust the angle.
Too many people were testing at once, and more importantly my robot was not working well. With final exams in other courses, I did not have enough time to debug. The performance was so poor that I did not record a video.