You have reached the homepage of Derek Anderson. I am an Associate Professor in the Electrical Engineering and Computer Science (EECS) department at the University of Missouri-Columbia, and I run the Mizzou INformation and Data FUsion Laboratory (MINDFUL). My research focuses on information fusion and computational intelligence/machine learning/artificial intelligence. On this site, you will find my CV, publications, students, research grants, and laboratory information. If you have any additional questions, contact me at Email. The MINDFUL git repo is at link, and our YouTube videos are at link.

AI/ML for microUAV autonomy

Summary
Researching and developing trustworthy and explainable artificial intelligence (XAI) that can sense, think, and act autonomously in new, complex, and dynamic environments is an extreme challenge. A big research barrier, unfortunately, is the real world itself: it is cost- and time-prohibitive to design, set up, collect, and accurately ground-truth real-world datasets for training and testing/evaluating AI/ML. Since we cannot collect data for all possible scenarios across space and time, what information does an AI/ML need to learn? Where does a given AI/ML work (or not work)? Why did an AI/ML make a specific decision?

In this project we are using the Unreal Engine (UE), a state-of-the-art simulation and photorealistic rendering engine, to advance AI/ML. While simulation has existed for decades, only now are we seeing the convergence of affordable high-performance computing resources, high-quality and widely available simulation software/libraries, and inexpensive content (e.g., Quixel, the UE Marketplace, TurboSquid, etc.) needed to create large, complex digital universes that are becoming indistinguishable from our own reality. This is a game changer for researchers who have to date been dependent on small and insufficiently labeled/annotated real-world datasets.

Herein, our AI/ML is built on top of the Robot Operating System (ROS). As a result of ROS + UE, our frameworks and workflows are extremely generic (in design and deployment), and the AI/ML is unaware of whether it 'exists' in a real or simulated world. We are leveraging these platforms to rapidly prototype, train, and deploy simulated artificial agents on real-time microUAVs in the context of human-robot (UAV) interaction and shared collaborative spaces via augmented reality. To date, we have used this platform to explore agent mental models, decision making, and fuzzy spatial relations (see our publications).

On a final note, simulation offers a wealth of other benefits: e.g., the ability to investigate, vary, and study controlled scenarios; measure and evaluate AI/ML at a deeper level relative to real ground truth; compare and benchmark algorithms and their hyperparameters; and unit/stress test our AI/ML to identify weaknesses. For more details, see our publications page.
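
To make the sim/real-agnostic point concrete, here is a minimal sketch of the pattern: a ROS node that consumes imagery from a topic with no knowledge of whether the publisher is a physical camera driver or UE. The topic name and the detect() stub are illustrative placeholders, not our actual pipeline.

import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def detect(frame):
    # Placeholder for an AI/ML model; in practice this would be a trained
    # detector operating on the decoded image array.
    pass

def on_image(msg):
    # Decode the ROS Image message into an array and hand it to the model.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    detect(frame)

if __name__ == "__main__":
    rospy.init_node("uav_perception")
    # Whether /camera/image_raw is fed by real hardware or by UE is invisible
    # to this node; swapping worlds is a matter of remapping the topic.
    rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1)
    rospy.spin()

Because the node only sees a topic, the same perception code can be prototyped against UE and then deployed to a real microUAV unchanged.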

Students and Profs
Brendan Alvey, Jeff Schultz
Dr. Anderson, Dr. Buck, Dr. Keller, Dr. Scott

Procedural AI/ML environments

Summary
Now that we have access to high-quality simulation (e.g., the Unreal Engine (UE)) and have demonstrated that AI/ML trained in sim works in niche domains (see our pubs), the harder, deeper, and richer research questions begin! In this project, we are exploring what information is needed to drive trustworthy and explainable AI (XAI). Specifically, we are investigating a formal language to describe an agent's universe and its role in it. This language is used to algorithmically generate new environments (in UE) and data collections (low-altitude aerial UAVs) with respect to different levels of visual features and abstractions. Examples of visual abstraction include imagery ranging from art to cartoon, photorealism, and real data augmented by simulated data; varying the level of abstraction lets us probe AI/ML generalizability. Visual features of interest include shape, color, texture, intensity, contrast, etc. Understanding features like these allows us to determine whether an AI/ML learned a target concept or simply memorized the training data, and to assess to what degree the machine can generalize and how. Ultimately, the goal of this project is to develop an adaptive, closed-loop environment to train, evaluate, and continuously improve AI/ML. To date, we have used these ideas to create and study AI/ML algorithms for explosive hazard detection in the visual spectrum (aka RGB) from low-altitude UAVs. See our publications for further details.
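
As a toy illustration of what a formal scene-description language enables, consider a small sampleable specification that drives procedural generation. The fields and value sets below are hypothetical stand-ins; the actual language under investigation is richer and UE-facing.

import random
from dataclasses import dataclass

@dataclass
class SceneSpec:
    abstraction: str   # rendering style along the art-to-photorealism axis
    shape: str
    color: str
    texture: str
    altitude_m: float  # simulated UAV collection altitude

ABSTRACTIONS = ["cartoon", "stylized", "photorealistic", "real+sim-augmented"]
SHAPES = ["sphere", "box", "disk"]
COLORS = ["red", "green", "desert-tan"]
TEXTURES = ["flat", "noisy", "scanned"]

def sample_spec(rng: random.Random) -> SceneSpec:
    # Each draw defines one environment/collection to generate in UE. A
    # dataset can be swept along a single feature axis while holding the
    # others fixed, e.g., vary texture only to test what the model learned.
    return SceneSpec(
        abstraction=rng.choice(ABSTRACTIONS),
        shape=rng.choice(SHAPES),
        color=rng.choice(COLORS),
        texture=rng.choice(TEXTURES),
        altitude_m=rng.uniform(5.0, 50.0),
    )

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        print(sample_spec(rng))

Sampling from such a specification, rather than hand-building scenes, is what closes the loop: weaknesses found during evaluation can be translated back into new draws from the language.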

Students and Profs
Jeffrey Kerley and Brendan Alvey
Dr. Anderson, Dr. Buck, Dr. Keller, Dr. Scott

Hybrid Sim/Real Imagery for AI/ML

Summary
Simulated data is constantly improving, but it is not yet as good as the real world. In this project, we are using the Unreal Engine (UE) to simulate high-quality novel poses of objects in different contexts. This data consists of visual spectrum imagery (aka "RGB"), multispectral or infrared imagery (see the Infinite Studio plugin), object ID maps, shadows, depth maps, etc. We also record all object, environment, and platform metadata. In AMA (Altitude Modulated Augmentation), we are exploring the intelligent, context-based insertion and augmentation of simulated data into real low-altitude UAV data to extend its usefulness. The flight (e.g., GPS and IMU) and environment (time of day, weather, etc.) metadata from a real data collection is fed to AMA, which decides how to insert contextually relevant simulated data into real drone imagery. We have shown (see our publications) that this strategy, even in its infancy, can outperform AI/ML models trained only on real-world data in small, niche low-altitude aerial domains. While there is obviously more research to be done, this is a promising method for augmenting and extending real-world data using simulation.
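
A simplified sketch of the altitude-aware scaling step in an AMA-style augmentation follows: flight metadata is used to size a simulated object template so it lands in the real frame at a physically plausible pixel size. This is a bare nadir pinhole-camera approximation with illustrative parameters, not the AMA pipeline itself; it assumes a 3-channel template with an alpha mask and an insertion point that fits inside the frame.

import numpy as np
import cv2

def insert_template(real_img, template, alpha_mask, altitude_m,
                    focal_px, object_size_m, x, y):
    # Under a nadir-looking pinhole model, an object of physical size
    # object_size_m imaged from altitude_m spans roughly
    # object_size_m * focal_px / altitude_m pixels.
    size_px = max(1, int(round(object_size_m * focal_px / altitude_m)))
    tmpl = cv2.resize(template, (size_px, size_px))
    mask = cv2.resize(alpha_mask, (size_px, size_px)).astype(np.float32)
    if mask.ndim == 2:
        mask = mask[..., None]  # broadcast a single-channel mask over RGB
    # Alpha-blend the rescaled template into the real image at (x, y).
    roi = real_img[y:y + size_px, x:x + size_px].astype(np.float32)
    blended = mask * tmpl.astype(np.float32) + (1.0 - mask) * roi
    real_img[y:y + size_px, x:x + size_px] = blended.astype(real_img.dtype)
    return real_img

In the full approach, the same metadata also informs where insertion is contextually sensible (e.g., on the ground plane, with consistent lighting and shadows), not just how large the object should appear.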

Students and Profs
Brendan Alvey and Peter Popescu
Dr. Anderson, Dr. Buck, Dr. Keller, Dr. Scott

Multispectral VIS and IR in the Unreal Engine

Summary
Our group has licensed the real-time Infinite Studio plugin for the Unreal Engine. Infinite Studio supports research, development, simulation, and analysis of electro-optical and infrared systems across a variety of complex environments. Our group is leveraging this toolset for modeling and simulation (fully simulated scenes, as well as templates inserted into real imagery) in support of AI/ML. More to come soon!

Students and Profs
Brendan Alvey and Maddy Kovaleski
Dr. Anderson and Dr. Buck