research & experience
My PhD research focused on robot perception and collaborative navigation, using multiple sensors to inform safe robot movement. My goal is to create robots that can reliably navigate in their environments. This page contains projects I've been involved with as part of research, school classes, internships, or campus jobs.
PhD Dissertation
Leveraging Relative Ranging Geometry for Fault Detection and Multi-Robot Coordination
Computing an object’s position in its surrounding environment is critical in robotics and transportation systems. We use tools from graph theory and signal processing to create novel algorithms that use measured distances between multiple objects to provide insights into the objects' underlying geometry. Our proposed algorithms are broadly applicable to a diverse set of settings, from rapidly detecting outlier GPS signals to creating a multi-robot coordination strategy for a swarm of rovers navigating on the Moon. We demonstrate that the proposed algorithms decrease computation time and positioning error compared to other state-of-the-art algorithms.
Dissertation Link. Defense on YouTube. Slides on Google Drive.
Uncertainty-Aware Multi-robot Navigation
Fusing robot perception and control to plan trajectories that maximize sensor information gain and reduce position error. Sensor fusion with camera and UWB (ultra-wideband) ranging radios.
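A toy illustration of the information-gain idea (my own sketch, not the planner from the papers below): a scalar Kalman covariance recursion in which prediction inflates position uncertainty and a ranging update shrinks it, which is why a planner that rewards expected covariance reduction steers robots toward informative measurements. The noise values are invented for illustration.

```python
# Scalar Kalman covariance recursion: prediction grows uncertainty,
# a ranging update shrinks it. A planner that rewards expected
# covariance reduction therefore favors trajectories with good
# measurement geometry.
def predict(P, Q):
    return P + Q                 # process noise inflates variance

def update(P, R):
    K = P / (P + R)              # Kalman gain
    return (1 - K) * P           # posterior variance

P = 1.0        # initial position variance (assumed value)
Q = 0.1        # process noise per step (assumed value)
R_uwb = 0.05   # UWB range measurement noise (assumed value)

without = predict(P, Q)                  # variance if no measurement is taken
with_meas = update(predict(P, Q), R_uwb) # variance after a ranging update
print(without, with_meas)                # the update sharply cuts the variance
```

The gap between the two variances is exactly the "information gain" a belief-based reward can assign to taking that measurement.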
Derek Knowles, Adam Dai, and Grace Gao. Multi-Robot Collaborative Localization and Planning with Inter-Ranging. Submitted to IEEE Robotics and Automation Letters. Open access link.
Alexandros Tzikas, Derek Knowles, Grace Gao, and Mykel Kochenderfer. Multi-robot Navigation using Partially Observable Markov Decision Processes with Belief-based Rewards. Journal of Aerospace Information Systems (JAIS). 2023. https://arc.aiaa.org/doi/10.2514/1.I011146.
EDM-Based Fault Detection
Detecting errors in measurements from Global Navigation Satellite Systems (such as GPS, GLONASS, or Galileo) by comparing the properties of Euclidean Distance Matrices (EDMs) constructed from those measurements.
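A minimal sketch of the geometric property this work exploits (not the published algorithm itself): a Euclidean Distance Matrix built from points in d dimensions has rank at most d + 2, so a corrupted distance measurement reveals itself by breaking that rank bound. The point count and fault magnitude below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def edm(points):
    """Squared Euclidean distance matrix for an (n, d) array of points."""
    g = points @ points.T                       # Gram matrix
    sq = np.diag(g)
    return sq[:, None] - 2 * g + sq[None, :]

# n points in d = 3 dimensions (think: a receiver plus satellites)
points = rng.normal(size=(8, 3))
D = edm(points)

# A clean EDM of 3-D points has rank at most d + 2 = 5.
print(np.linalg.matrix_rank(D))  # 5 for generic points

# Corrupt one measured distance (a "fault"): the rank bound breaks,
# flagging that the distance set is geometrically inconsistent.
D_fault = D.copy()
D_fault[0, 1] += 50.0
D_fault[1, 0] += 50.0
print(np.linalg.matrix_rank(D_fault))  # exceeds 5
```

Checking a rank (via a few singular values) is cheap, which is what makes this style of fault detection fast.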
Derek Knowles and Grace Gao. Greedy Detection and Exclusion of Multiple Faults using Euclidean Distance Matrices. Submitted to Navigation: Journal of the Institute of Navigation. 2024. Open access link.
Derek Knowles and Grace Gao. Euclidean Distance Matrix-based Rapid Fault Detection and Exclusion. Navigation: Journal of the Institute of Navigation. 2023. Open access link.
Derek Knowles and Grace Gao. Detection and Exclusion of Multiple Faults using Euclidean Distance Matrices. Proceedings of the Institute of Navigation GNSS+ conference (ION GNSS+ 2023). 2023. Best Presentation of the Session Award. Available on YouTube.
Derek Knowles and Grace Gao. Euclidean Distance Matrix-based Rapid Fault Detection and Exclusion. Proceedings of the Institute of Navigation GNSS+ conference (ION GNSS+ 2021). 2021. Best Presentation of the Session Award. Available on YouTube.
How to Quickly Detect Bad Satellite Signals (no math background required). Available on YouTube.
Python GNSS Library
Open-source Python library that allows new users to easily analyze, visualize, and compute position estimates from raw GNSS (Global Navigation Satellite Systems) signals.
Available on the Python Package Index (PyPI): https://pypi.org/project/gnss-lib-py/
Open-source on GitHub: https://github.com/Stanford-NavLab/gnss_lib_py
Derek Knowles, Ashwin V. Kanhere, Daniel Neamati, and Grace Gao. gnss_lib_py: Analyzing GNSS Data with Python. SoftwareX. 2024. Open access link.
Derek Knowles, Ashwin V. Kanhere, and Grace Gao. Localization and Fault Detection Baselines From an Open-Source Python GNSS Library. Proceedings of the Institute of Navigation GNSS+ conference (ION GNSS+ 2023). 2023.
Derek Knowles, Ashwin V. Kanhere, Sriramya Bhamidipati, and Grace Gao. A Modular and Extendable GNSS Python Library. Proceedings of the Institute of Navigation GNSS+ conference (ION GNSS+ 2022). 2022. Best Presentation of the Session Award.
Visual SLAM
Created analysis tools and an automated pipeline for testing visual SLAM algorithms as part of an internship with AeroVironment.
Evaluated multiple feature-based Visual SLAM algorithms, performed an algorithm down-selection, and implemented the chosen algorithm on an ARM Cortex-A53 single-board computer.
Lean Launchpad: Construction Technology
Stanford's Lean Launchpad class taught me the principles of entrepreneurship through fast-paced experiential learning. My team developed ideas surrounding construction technology including robotic roof welding, virtual reality for immersive training, and augmented reality for field service technicians. The most valuable lesson I learned, as our team interviewed over 120 stakeholders while testing our minimum viable products, was that customers' pain points should drive product decisions.
Construction Robot Evaluation
As part of a partnered class project, I evaluated a robot tool for use on a specific construction project. We met weekly with the CEO of Naska.AI and MT Højgaard's Project Director for the "Water Culture House" being constructed in the Copenhagen harbor. We evaluated how Scaled Robotics' tool for comparing planned building models with the as-built reality would impact the project's quality, safety, schedule, and cost. In the end, we recommended the use of Scaled Robotics' tool for its potential to reduce rework and the associated schedule delays.
Open-source Contributions
Over the years, I've made some tiny open-source contributions to a number of projects including georinex, bagpy, zed-ros-wrapper, and asl_turtlebot.
DARPA Subterranean Challenge
A DARPA-funded competition to use autonomous robots to explore subterranean environments. I contributed as part of an internship with Scientific Systems.
For the systems track (physical robots), I collected data and trained models for object recognition. The system was successfully deployed in August 2019 at the cave-exploring competition stage, recognizing 17 artifacts across four cave test runs.
For the simulation track (ROS Gazebo), I wrote software for aerial and ground robot movement control, trained and implemented object recognition, and set up a framework for multi-agent exploration by expanding inter-agent communication.
AUVSI Competition
My undergraduate senior capstone project was for the annual Association for Unmanned Vehicle Systems International (AUVSI) competition. I was on the vision subteam with two electrical engineers and two computer engineers, developing software to take images from our autonomous drone and classify ground targets, both manually and autonomously, by their color, alphanumeric character, and geolocation. I was also responsible for our payload release mechanism, which deployed an unmanned ground vehicle.
Assistant Equipment Manager
For three years in college, I worked as a student employee managing engineering equipment. This position exposed me to the broad range of research performed in the Mechanical Engineering department and across campus as I assisted students and faculty with conducting experiments and collecting data, and advised on prototype designs.
I maintained a variety of equipment and trained students and faculty on it, including laser cutters, FDM and SLA 3D printers, LabVIEW, Arduino, Raspberry Pi, Jetson Nano, a ShopBot CNC router, MTS and Instron tensile testers, wind tunnels, and water tunnels.
Our lab also maintained and trained students on smaller equipment available for checkout including oscilloscopes, waveform generators, voltage probes, accelerometers, force sensors, anemometers, high-speed cameras, soldering kits, power tools, hand tools, etc.
Voronoi Collision Avoidance
I conducted research in Brigham Young University’s Multiple Agent Intelligent Coordination and Control (MAGICC) Lab under Tim McLain. I focused on obstacle-avoidance algorithms for unmanned aerial vehicles by implementing 3D dynamic Voronoi diagrams. The Python algorithms dynamically computed Voronoi diagrams around moving obstacles (blue and green spheres) and found the least-cost path to the end destination (pink path).
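The lab's code handled 3D dynamic obstacles; as a rough 2D static sketch of the same idea, one can route along Voronoi edges, which stay maximally clear of the obstacle points, using scipy.spatial.Voronoi and a Dijkstra search. The obstacle grid and start/goal points below are invented for illustration.

```python
import heapq
import numpy as np
from scipy.spatial import Voronoi

def voronoi_path(obstacles, start, goal):
    """Route between Voronoi vertices (locally farthest from the
    obstacle points) via Dijkstra over the finite Voronoi edges."""
    vor = Voronoi(obstacles)
    verts = vor.vertices

    # Graph over finite Voronoi edges, weighted by edge length.
    graph = {i: [] for i in range(len(verts))}
    for a, b in vor.ridge_vertices:
        if a == -1 or b == -1:          # skip edges extending to infinity
            continue
        w = float(np.linalg.norm(verts[a] - verts[b]))
        graph[a].append((b, w))
        graph[b].append((a, w))

    src = int(np.argmin(np.linalg.norm(verts - start, axis=1)))
    dst = int(np.argmin(np.linalg.norm(verts - goal, axis=1)))

    # Dijkstra shortest path from src to dst over the edge graph.
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, np.inf):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, np.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))

    path, node = [dst], dst
    while node != src:                  # walk predecessors back to the start
        node = prev[node]
        path.append(node)
    return verts[path[::-1]]

# Obstacles on a jittered grid; plan from lower-left toward upper-right.
rng = np.random.default_rng(0)
obs = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
obs += 0.01 * rng.standard_normal(obs.shape)
waypoints = voronoi_path(obs, np.array([0.2, 0.2]), np.array([3.8, 3.8]))
```

Because every waypoint is a Voronoi vertex, the path threads between obstacles rather than grazing them; a dynamic version recomputes the diagram as the obstacles move.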