The Embedded team has two main responsibilities: firmware for Vortex and acoustic systems. In the firmware domain, Embedded acts as an intermediary between hardware and software. This involves receiving sensor data, developing drivers for different electrical components, and optimizing code. Being part of the Embedded team provides an opportunity to gain deeper insight into the interaction between electrical systems and software.
In the acoustic systems domain, Embedded is responsible for locating objects in the surroundings underwater using sound. To achieve this, the sound is sampled and the resulting signals are processed digitally. Physical phenomena related to how sound propagates underwater are then leveraged to infer the sound's origin.
Over the past year, Embedded has been developing firmware for Beluga, our AUV. We have integrated sensor systems that collect internal status data such as voltage levels, motor current draw, and the IP addresses of the different sensors and computers. This data is communicated over protocols such as I2C and ROS to a central Raspberry Pi acting as the master node, where it is processed and forwarded to software for use in control and perception algorithms.
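As an illustration of the firmware side, reading a status value such as a voltage from a monitoring chip over I2C typically follows a register-pointer-then-read pattern. The sketch below is hypothetical: `i2c_transfer` is a stub standing in for whatever bus routine the microcontroller's HAL actually provides, and the returned bytes are canned values.

```c
#include <stdint.h>

/* Stub standing in for the real bus driver: it "writes" wlen bytes and
 * then "reads" rlen bytes from a 7-bit device address. On real hardware
 * this would call the vendor HAL; here it answers every register read
 * with the canned bytes 0x0C 0x80 (3200 decimal, e.g. millivolts). */
static int i2c_transfer(uint8_t addr, const uint8_t *wbuf, int wlen,
                        uint8_t *rbuf, int rlen)
{
    (void)addr; (void)wbuf; (void)wlen;
    if (rlen >= 2) {
        rbuf[0] = 0x0C;
        rbuf[1] = 0x80;
    }
    return 0;
}

/* Read a 16-bit big-endian register -- a common pattern for voltage and
 * current monitor chips: send the register pointer, then read two bytes. */
int read_u16_register(uint8_t addr, uint8_t reg, uint16_t *out)
{
    uint8_t raw[2];
    if (i2c_transfer(addr, &reg, 1, raw, 2) != 0)
        return -1;
    *out = (uint16_t)((raw[0] << 8) | raw[1]);
    return 0;
}
```

The master node then republishes values read this way over ROS topics for the rest of the system.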
Embedded accomplished a significant milestone by implementing Digital Signal Processing (DSP) for Acoustics. DSP is essential for analysing sound waves, which are captured by sensors and converted to digital signals by an Analog to Digital Converter (ADC). The processing involves three main parts: filtering, the Fourier transform, and peak detection.
The digital signal then goes through this DSP chain. First, the signal is filtered to remove noise, which makes the later algorithms more accurate: the better the filtering, the more faithfully the digital signal represents the original sound. Next, the filtered signal is decomposed into its frequency components using the Fast Fourier Transform (FFT) algorithm.
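As a minimal sketch of the filtering step (assuming a simple moving-average low-pass filter; the actual firmware may use a different filter design):

```c
#include <stddef.h>

/* Moving-average low-pass filter: each output sample is the mean of the
 * last `window` input samples. Crude but cheap on a microcontroller;
 * it attenuates high-frequency noise before the FFT stage. */
void moving_average(const double *in, double *out, size_t n, size_t window)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        sum += in[i];
        if (i >= window)
            sum -= in[i - window];                 /* drop oldest sample */
        size_t count = (i + 1 < window) ? i + 1 : window;
        out[i] = sum / count;
    }
}
```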
Finally, the dominant frequencies are found using peak detection; these are estimated to be the main components of the recorded signal. If we are interested in a particular frequency and want to know whether the signal we are recording contains it, this DSP pipeline tells us whether that frequency is present or not.
The main objective of DSP for acoustics is to understand what a signal is made of and to decide whether we want to locate its source. DSP helps differentiate signals and identify whether they are the ones we are interested in or just background noise.
Despite the challenge of implementing all of this accurately and efficiently on a single small microcontroller, Embedded has managed to create a working DSP stage. This achievement is significant: it deepens our understanding of how electrical systems interact with software and lets us locate sound sources underwater.
One major area of focus is implementing the sampling phase of acoustics. This involves capturing sensor data of sound waves underwater as quickly and accurately as possible. To optimize this process, the team is leveraging the characteristics of the Analog to Digital Converter (ADC) to sample sensor data as fast as possible with zero downtime. Additionally, they are working on ways to send this data to the microcontroller as quickly as possible and store it in memory with minimal latency. To achieve the fastest possible data saving, the team plans to implement Direct Memory Access (DMA), a feature of the microcontroller architecture where data is written directly into memory, bypassing the CPU. This lets the microcontroller multitask, processing data much faster than if sampling and computation ran in series.
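The ping-pong (double-buffer) pattern behind this can be sketched as follows. The interrupt handler names here are hypothetical placeholders for whatever the vendor HAL calls them; the key idea is that the CPU only ever processes the half-buffer the DMA is not currently writing:

```c
#include <stdint.h>
#include <stddef.h>

#define HALF 256                       /* samples per half-buffer */
static uint16_t adc_buf[2 * HALF];     /* DMA streams ADC samples here */
static volatile int ready_half = -1;   /* set by the DMA interrupts */

/* Hypothetical half-transfer / transfer-complete interrupt handlers:
 * the DMA controller signals when each half of the buffer fills, while
 * it keeps writing ADC samples into the other half. */
void dma_half_complete_isr(void) { ready_half = 0; }
void dma_full_complete_isr(void) { ready_half = 1; }

/* Main loop: process whichever half the DMA just finished, so sampling
 * continues with zero downtime. Returns 0 if no half is ready. */
uint32_t process_ready_half(void)
{
    if (ready_half < 0)
        return 0;
    const uint16_t *half = &adc_buf[ready_half * HALF];
    uint32_t sum = 0;
    for (size_t i = 0; i < HALF; i++)  /* stand-in for the DSP chain */
        sum += half[i];
    ready_half = -1;
    return sum;
}
```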
Another critical task is the full implementation of firmware for Beluga. This includes rewriting the thruster driver code so it is easier to maintain, making the code that reads Beluga's internal status (e.g., voltage and current) more robust, and displaying all of this data on an LCD screen. The team also plans to send this data to software for further use.
The final task the team is aiming to accomplish is implementing firmware for their new Autonomous Surface Vehicle (ASV). For safety reasons, the ASV must have a safe way to kill power in case of an emergency or signal loss. Electrical engineers have already implemented a hardware solution, but additional software kill switches are essential, providing redundancy in case one failsafe does not activate. The team will also work on an Electronic Speed Controller (ESC) driver, which takes thrust commands from software and converts them into the electrical signals that apply the correct amount of thrust to each thruster.
Overall, the Embedded Gang has a busy semester ahead. The goal is to finish implementing features on Beluga, implement acoustic sound data gathering, and implement firmware for the ASV. Stay tuned for the end result!