A conversation on advancing human-AI teaming in defense and industry, highlighting tools that support real-world decision-making and the importance of designing technology that puts people first.
Throughout her career, Amanda (Mandy) Warren, Charles River Analytics Senior Scientist, has been driven by a deep commitment to designing systems that work for the people who rely on them, especially in complex, high-stakes defense environments. But her path to becoming a leader in human factors and cognitive systems engineering wasn’t always linear.
From developing predictive maintenance interfaces for Navy ships to building test frameworks that assess how humans and AI systems team together, Mandy’s work helps ensure that cutting-edge technology stays grounded in real human needs. At Charles River, she leads efforts to translate complex analytics and AI outputs into intuitive designs that help operators act quickly and decisively.
“I used to think I hated project management,” Mandy laughs. “In past roles, it often meant being removed from the technical work I loved. At Charles River, I get to stay hands-on, leading projects while also mentoring others. That balance is what I enjoy most. I especially like building tools that solve real problems and working directly with the people who use them.”
Q: You’ve led cutting-edge projects that support operators in complex environments, from rail operations to ships at sea. What’s the common thread across these systems when it comes to designing for the human in the loop?
A: The shared goal across these projects is helping operators make sense of complexity. In one case, we developed a diagnostic maintenance interface for technicians tasked with managing incredibly intricate systems, often with limited prior exposure. We applied principles from cognitive systems engineering to surface critical insights in a digestible, intuitive way. In another effort, we built a human-AI system test and evaluation (T&E) framework that goes beyond typical efforts to validate AI algorithms, ensuring that these systems are not only technically sound but also usable in real-world conditions. Whether we’re supporting real-time decision-making or validating system design, our focus is always on giving humans what they need to understand and trust the system so they can act with confidence.
Q: Several of your projects aimed to improve decision-making for naval maintenance teams. What design or user feedback moment shaped your thinking most during development?
A: Early in the design process, we put prototypes in front of Navy maintainers. One repeated theme was how often they had to embark on long “fact-finding missions” just to locate an issue, literally walking the ship to hunt down a problem’s source. By combining predictive analytics with smart visual cues, we can direct them to potential issues and lead them through the steps to resolve those issues before they impact Navy missions. That kind of direct user input, and the opportunity to measurably reduce time spent on root-cause investigation, is exactly what makes this work so rewarding.

Q: You’ve also developed tools to evaluate how humans interact with AI-enabled intelligent systems. What challenges are you trying to assess in human-AI teams using a technical testing framework?
A: One of the biggest hurdles is validating that the system effectively translates model output—what an AI model “knows”—into something a human can understand and trust. For instance, AI model confidence is often expressed using probabilities, but people are inherently bad at understanding probabilistic information. We use human-machine interface (HMI) visualization strategies to normalize probabilistic outputs and represent them as simple, intuitive cues like red-yellow-green indicators so end users can make informed decisions without needing to interpret raw data. It might look simple on the screen, but strategic cognitive systems engineering happens behind the scenes to promote rapid and intuitive understanding of AI output.
“One of the biggest hurdles is validating that the system effectively translates model output—what an AI model ‘knows’—into something a human can understand and trust.”
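In practice, that translation can be as simple as thresholding a confidence score into a categorical cue. Below is a minimal Python sketch of the idea; the function name and the 0.5/0.8 cutoffs are illustrative assumptions, not details of Charles River’s actual interfaces, where thresholds would be calibrated to mission risk and validated with operators.

```python
# Minimal sketch: mapping raw model confidence to a red-yellow-green HMI cue.
# The 0.5 and 0.8 thresholds are illustrative assumptions only; real systems
# would calibrate them against mission risk and operator feedback.

def confidence_to_cue(probability: float) -> str:
    """Translate a model confidence score (0.0-1.0) into a simple visual cue."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0.0 and 1.0")
    if probability >= 0.8:
        return "green"   # high confidence: act on the recommendation
    if probability >= 0.5:
        return "yellow"  # moderate confidence: verify before acting
    return "red"         # low confidence: investigate manually

if __name__ == "__main__":
    for p in (0.92, 0.63, 0.31):
        print(f"confidence {p:.2f} -> {confidence_to_cue(p)}")
```

The point of a design like this is that the operator never sees the raw probability at all; the cognitive work of interpreting it is done once, up front, by the engineering team rather than repeatedly, under time pressure, by the user.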

Q: You’ve worked across rail, aviation, oil & gas, and defense sectors. How does the context of use—say, a Navy ship versus a rail cab—affect your approach to human factors engineering?
A: Context defines everything. In transportation sectors like rail, the focus is often on human safety and survivability, designing systems to prevent rare but catastrophic failures. In defense, especially in Navy contexts, safety still matters, but mission criticality and operational readiness under dynamic, often unpredictable conditions weigh just as heavily. Both are high stakes, but they demand different design considerations and human-system integration requirements.
Q: Your team’s solutions emphasize observability, directability, and exploration in human-AI teaming. What human-AI teaming design principles do you think are most misunderstood or overlooked in current defense tech development?
A: Common ground is one that’s frequently underestimated. In effective human-AI teaming, the system needs a model to represent what the human knows, is doing, and should be doing—and vice versa. That reciprocal understanding is hard to build and even harder to measure. That’s why we focus on evaluating the full AI-human team as a joint cognitive system, not just how the AI or human performs in isolation. You can have a perfectly engineered model, but if it isn’t usable by the operator, it can fail where it matters most.
Q: You’ve held leadership roles at a few different companies. What drew you back to Charles River Analytics, and how has your view of innovation in defense contracting evolved since your early career?
A: I’ve held leadership roles at other companies and learned a lot through trial and error, especially about project management and mentoring. But I missed being close to the technical work. Charles River gives me the best of both worlds: the ability to stay hands-on while also contributing to bigger strategic efforts. What’s evolved most for me is my appreciation for how essential usability is to innovation. It’s not enough to build something that works—you have to build something people can and will use.
“It’s not enough to build something that works—you have to build something people can and will use.”
Q: Looking ahead, what emerging technology or design principle most excites you for the next generation of operator-support systems in defense or transportation?
A: I’m excited about the ecosystem we’re building around maintenance and sustainment, tying together diverse tools to offer holistic support for maintainers and sustainment decision-makers. On their own, these technologies are powerful, but when integrated with user-centered design, they become transformational. Whether helping a maintainer troubleshoot faster or enabling a Fleet Commodore to make real-time decisions about readiness, it’s the scaling of insight, from below deck to command, that inspires me most.

A few of Mandy’s projects and publications
ABOARD: A testing framework to improve railroad safety through human-AI interaction analysis
A Joint, Adaptive, Robust Visualization and Interaction System for AI-Enabled, Symbiotic Cyber-Physical System Design – 26th International Conference on Human-Computer Interaction (HCII 2024)
Charles River human-AI teaming projects and publications
ENHANCE: Improving biomedical data tools through user-centered, human-AI-focused design
JARVIS: AI enhanced design collaboration for human creativity while mitigating bias
JUPITER: An AI-based mission planning framework for Navy operations
TITAN: An enhanced communication prototype for crewed–uncrewed teaming