A conversation on predictive maintenance, cybersecurity, and decision support in high-stakes defense environments, exploring how explainability, adaptability, and human-centered design underpin the creation of trustworthy AI.
In defense environments where decisions carry real operational risk, AI must do more than make predictions—it has to earn trust. Kenneth Lu, Scientist at Charles River Analytics, has spent over a decade designing AI systems that support sailors, analysts, cyber defenders, and commanders in high-stakes contexts.
His work spans predictive maintenance for complex naval platforms, adaptive cybersecurity tools that anticipate attacker behavior, and decision support systems that combine machine learning with human judgment.
Throughout, one principle remains constant: the most effective AI is transparent, explainable, and grounded in the realities of mission environments. In this Q&A, Lu reflects on how to build AI that people can rely on when it matters most.
“In defense, AI doesn’t just need to be right; it needs to be understood. Trust is built when people see why a system makes the call it does.”
Q: You’ve worked across a wide range of government research and defense programs throughout your time at Charles River Analytics. What’s your approach when designing AI systems that must earn the trust of operators in high-stakes defense environments?
A: Trust starts with explainability. Take something like Google Maps. It might tell you to take an unfamiliar route, and you’re skeptical until it tells you there’s construction or an accident on your usual path. Suddenly, the recommendation makes sense. That’s what we aim for: not just outputs but also justifications.
If our system says to “replace this ship component,” we want it to also say, “because sensor data shows a 75% likelihood of failure within 40 hours of operation.” That level of clarity builds confidence in the human-machine partnership. We take that mindset and apply it to all of our AI solutions, especially in defense, where decisions have real consequences.
“We take that mindset and apply it to all of our AI solutions, especially in defense, where decisions have real consequences.”
Q: One of your solutions addresses a core logistics challenge: turning isolated, vessel-level data into actionable insights. What technical or strategic breakthrough enabled this work?
A: For advanced predictive maintenance logistics, one key insight was recognizing how modular naval systems are. You don’t throw out a whole radar system when one subcomponent fails, just like you wouldn’t replace your whole car if a tire goes flat. Ships are built in parts, and our AI systems are built to accommodate modularity.
We use probabilistic programming to model each piece of the ship individually—radars, propulsion, sensors—and then stitch those models together to form a full picture of the vessel’s health. This lets us pinpoint risks and forecast maintenance needs in a way that scales with complexity. Even on a system as big as an aircraft carrier, we don’t need to model the entire ship monolithically. We model what matters, where it matters.
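To make the modular idea concrete, here is a minimal sketch of composing per-component failure models into a vessel-level risk estimate. It is not Charles River’s implementation; the component names, failure rates, sensor-derived degradation factors, and the independence assumption are all illustrative.

```python
# Minimal sketch: each component carries its own failure model, and the
# vessel-level picture is composed from the parts. All names and numbers
# below are illustrative, not real platform data.
import math
from dataclasses import dataclass

@dataclass
class ComponentModel:
    name: str
    base_failure_rate: float  # expected failures per 1,000 operating hours
    degradation: float        # sensor-derived multiplier (1.0 = nominal)

    def failure_probability(self, hours: float) -> float:
        """P(failure within `hours`) under a simple exponential survival model."""
        rate_per_hour = self.base_failure_rate * self.degradation / 1000.0
        return 1.0 - math.exp(-rate_per_hour * hours)

def vessel_risk(components, hours: float) -> float:
    """P(at least one component fails within `hours`), assuming independence."""
    p_all_survive = 1.0
    for c in components:
        p_all_survive *= 1.0 - c.failure_probability(hours)
    return 1.0 - p_all_survive

ship = [
    ComponentModel("radar_subassembly", base_failure_rate=2.0, degradation=3.5),
    ComponentModel("propulsion", base_failure_rate=0.5, degradation=1.0),
    ComponentModel("nav_sensor_suite", base_failure_rate=1.0, degradation=1.2),
]

for c in ship:
    print(f"{c.name}: {c.failure_probability(40):.0%} chance of failure in 40 h")
print(f"vessel: {vessel_risk(ship, 40):.0%} chance of any component failure in 40 h")
```

Because each component carries its own model, a degraded radar subassembly can dominate the near-term risk picture without the propulsion or navigation models changing at all.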
Q: In these complex environments, how do you navigate the tension between using data versus domain expertise?
A: I don’t think of it as a tension. It’s more about adaptability. We design our systems to work with what the end user has. Maybe that’s a decade’s worth of sensor data. Or maybe it’s institutional knowledge, like things passed down in manuals or just stored in someone’s head.
Sometimes you’ve got lots of data but not much user expertise, like with a new platform. Other times it’s the reverse: deep expertise but limited recorded data. Our hybrid AI models can flex either way. We can use machine learning, symbolic reasoning, or probabilistic models depending on the inputs available. That flexibility means we’re not stuck waiting for “perfect” conditions to build useful systems.
“That flexibility means we’re not stuck waiting for ‘perfect’ conditions to build useful systems.”
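One way to picture that flexibility is a model whose parameters can come from expert judgment, recorded data, or both. The sketch below uses a simple Beta-Binomial update; the pseudo-counts, platform scenarios, and function name are hypothetical, not a description of Charles River’s hybrid models.

```python
# Sketch of flexing between expertise and data: a Beta prior encodes
# institutional judgment as pseudo-counts, and recorded outcomes update it.
# With little data the expert prior dominates; with lots of data, the data does.
# All counts below are hypothetical.

def posterior_failure_rate(expert_failures: float, expert_successes: float,
                           observed_failures: int, observed_successes: int) -> float:
    """Posterior mean failure probability under a Beta-Binomial model."""
    alpha = expert_failures + observed_failures    # prior pseudo-counts + data
    beta = expert_successes + observed_successes
    return alpha / (alpha + beta)

# New platform: deep expertise, almost no recorded data
print(posterior_failure_rate(expert_failures=2, expert_successes=98,
                             observed_failures=0, observed_successes=3))

# Mature platform: a decade of logs outweighs the same prior
print(posterior_failure_rate(expert_failures=2, expert_successes=98,
                             observed_failures=120, observed_successes=9880))
```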
Q: The CIRCE system under IARPA’s ReSCIND program uses cognitive models compiled into probabilistic programs. How do you translate human behavior patterns into something mathematically rigorous yet still useful in real time?
A: CIRCE is fascinating because it flips the typical cybersecurity script. Rather than just reacting to an attack, we model the attacker: their patterns, preferences, and likely next moves.
Let’s say a hacker accesses certain IPs or attempts multiple password resets. We treat those behaviors like data points in a pattern. Using probabilistic programming, we infer what they’re trying to do: what’s their intent, where are they headed, what’s their goal? Then we can strategically steer them toward decoy systems or “honeypots” and away from sensitive information.
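As a rough picture of that inference step, the toy example below updates a distribution over attacker intents as behaviors are observed. The intent categories, priors, and likelihoods are invented for illustration and are not CIRCE’s actual cognitive models.

```python
# Toy Bayesian update over attacker intent, driven by observed behaviors.
# Intents, priors, and likelihoods are invented for illustration only.

PRIORS = {"credential_theft": 0.40, "data_exfiltration": 0.35, "recon_only": 0.25}

# P(behavior observed | intent)
LIKELIHOODS = {
    "password_reset_attempts": {"credential_theft": 0.7, "data_exfiltration": 0.2, "recon_only": 0.1},
    "scans_internal_ips":      {"credential_theft": 0.3, "data_exfiltration": 0.5, "recon_only": 0.6},
    "touches_file_shares":     {"credential_theft": 0.2, "data_exfiltration": 0.8, "recon_only": 0.1},
}

def infer_intent(observed_behaviors):
    """Posterior over intents given observed behaviors (naive Bayes update)."""
    posterior = dict(PRIORS)
    for behavior in observed_behaviors:
        for intent in posterior:
            posterior[intent] *= LIKELIHOODS[behavior][intent]
    total = sum(posterior.values())
    return {intent: p / total for intent, p in posterior.items()}

print(infer_intent(["password_reset_attempts", "scans_internal_ips"]))
# A posterior that concentrates on one intent could trigger steering the
# attacker toward a decoy system instead of the assets they appear to want.
```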
Q: Looking back at your role in DARPA’s BRASS program, how did that inform your later work on self-adaptive software in autonomous systems?
A: BRASS was about adapting software in place. With our PRINCESS project, we used reinforcement learning to update code as conditions change, which shaped how I think about AI more broadly.
It was my first big DARPA project, spanning from 2015 to 2019, and it taught me a lot—not just technically, but professionally. I made mistakes, learned the rhythms of large programs, and grew into the role. It also sparked my interest in systems that evolve over time, which continues to influence how I think about AI in dynamic, high-consequence environments.
“It also sparked my interest in systems that evolve over time, which continues to influence how I think about AI in dynamic, high-consequence environments.”
Q: Across your projects, you’ve combined symbolic reasoning, probabilistic logic, and machine learning. Where do you see the most untapped potential for this “triad” in future applications?
A: One term we talk about a lot internally, and that I’m really passionate about, is “democratizing AI.” The idea is to build systems that allow people without a PhD in AI to harness powerful machine learning tools. Think of it like building an AI copilot for your job, trained on your expertise.
Imagine a paramedic or firefighter being able to feed their field experience into an intuitive AI system that supports decision-making in real time. We’ve seen glimpses of this with GitHub Copilot for developers, but that’s just the beginning.
“The goal is for AI to learn not just from data but from the mental models and decision frameworks of real people, across industries. That’s the future I want to help build.”
A few of Kenny’s publications and projects
Hybrid-AI Approach to Health Monitoring of Vehicle Control System — RAMS 2024
Challenges and Progress in Predictive Maintenance of Long-Endurance & Long-Range Uncrewed Platforms — AUVSI 2024
Democratizing AI for Condition-Based Maintenance using Probabilistic Programming — RAMS 2023
AI Inference of Team Effectiveness for Training and Operations — I/ITSEC 2023
RAPS: An AI-based maintenance system that keeps robotic combat vehicles mission-ready
SLICK: Automatically verify code, catch errors, and seamlessly make recommendations