I/ITSEC 2024
SPEAKER SCHEDULE
Presented by:
Eduardo Barrera
Software Engineer, Machine Learning
Eduardo Barrera is a Machine Learning Software Engineer at Charles River Analytics, where he specializes in radio-frequency signal processing, circuits, spectral analysis, and software architecture design. He recently led software efforts on a real-time processing suite that compensates for noise in infrared polarimetric microgrid sensors, on software that detects submarine magnetometer signatures, and on the integration of an AR-enhanced satellite operator training system with modern tools. In addition, he built features that integrate an internal maritime simulator with external software ecosystems to enable safe vessel navigation in hostile terrain. Mr. Barrera received BS degrees in Engineering Physics and Mathematics from Tufts University.
Generative AI-Powered 3D-Content Creation for Military Training
Date: Wednesday, December 4, 2:00 PM
Location: Room 320F
The U.S. Marine Corps (USMC) has taken the initiative of introducing interactive learning experiences at its training centers as a cost-effective and time-saving means to augment classroom instruction and physical equipment training with immersive maintenance and safety training in a simulated environment. However, the techniques used to create 3D models for immersive environments, such as computer-aided design, graphics software, 3D scanning, and photogrammetry, require software skills, manual effort, time, and financial investment. The USMC needs to rapidly build a repository of ready, reusable, and manipulable 3D models of its assets in a scalable manner. Recent advances in generative AI can fill this need by rapidly generating approximate but realistic 3D models from available 2D pictures of equipment found in existing USMC training guides such as presentations and student handouts.
This presentation introduces a scalable, automated content-generation process that uses an ensemble of vision-based generative AI techniques to convert 2D images into 3D models, based on appropriate tradeoffs between the desired level of quality and computational complexity. We leverage an existing foundation 2D-to-3D conversion model, trained on large and diverse web-scale data, for “few-shot” transfer learning with domain-specific data. The 3D content-generation process uses open-source software and incorporates intuitive user interfaces to minimize the need to learn machine learning (ML) or graphics programming. The resulting 3D objects can be imported directly into reusable libraries for use across schoolhouse applications requiring immersive training content.
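To make the workflow concrete, the sketch below shows what one stage of such a pipeline could look like. It is an illustration only: the ImageTo3DModel stub, its generate method, and the configuration fields are hypothetical placeholders standing in for a fine-tuned foundation model, not the software presented in this talk.

```python
# Minimal sketch (not the talk's actual software) of a 2D-to-3D
# content-generation pipeline. ImageTo3DModel is a hypothetical stub:
# in practice it would wrap a foundation model fine-tuned with
# few-shot, domain-specific data, as the abstract describes.

from dataclasses import dataclass
from pathlib import Path

@dataclass
class ConversionConfig:
    quality: str = "draft"   # "draft" favors speed; "high" favors fidelity
    budget_s: float = 120.0  # compute budget: the quality/complexity tradeoff knob

class ImageTo3DModel:
    """Stub for a hypothetical fine-tuned foundation image-to-3D model."""
    def generate(self, image_path: Path, quality: str, budget_s: float) -> bytes:
        # A real implementation would run the generative backend and return mesh data.
        raise NotImplementedError("placeholder for the generative backend")

def convert_image(image_path: Path, model: ImageTo3DModel, cfg: ConversionConfig) -> Path:
    """Convert one 2D training-guide image into a reusable 3D asset file."""
    mesh_bytes = model.generate(image_path, cfg.quality, cfg.budget_s)
    out_path = image_path.with_suffix(".glb")  # glTF binary imports into most engines
    out_path.write_bytes(mesh_bytes)
    return out_path

# Batch conversion over an existing training-guide image library:
# for img in Path("handouts/").glob("*.png"):
#     convert_image(img, ImageTo3DModel(), ConversionConfig(quality="draft"))
```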
This presentation documents results from performance experiments that convert a wide array of images of varying complexity from a USMC schoolhouse course, and benchmarks various vision-based AI/ML techniques with respect to object fidelity and conversion speed. Eduardo will also present best practices and lessons learned from these content-conversion experiments.
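The abstract does not name its fidelity metrics. As one plausible example, a symmetric Chamfer distance between point clouds sampled from the reference and generated meshes is a common geometric-fidelity measure; a minimal NumPy sketch:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds of shape (N, 3) and (M, 3).

    For each point in one cloud, take the squared distance to its nearest
    neighbor in the other cloud, then average both directions. Lower is better.
    """
    # Pairwise squared distances, shape (N, M); fine for modest cloud sizes.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Example: compare points sampled from a generated mesh against ground truth.
rng = np.random.default_rng(0)
reference = rng.normal(size=(1024, 3))
generated = reference + rng.normal(scale=0.01, size=(1024, 3))
print(f"Chamfer distance: {chamfer_distance(reference, generated):.6f}")
```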
Presented by:
Dr. Spencer Lynn
Senior Scientist, Human-Centered AI
Spencer Lynn, Ph.D., is a Senior Scientist at Charles River Analytics with 25 years of experience conducting cognitive science research on the perception, decision-making, and behavior of humans and other animals, and building computational models of those processes. His research uses principles of behavioral ecology and neuroscience to create biologically inspired cognitive architectures and autonomous agents. His work focuses on developing technology that applies evidence-based computational modeling of cognitive and behavioral processes to human-machine teaming, situation awareness, and operational readiness. Dr. Lynn received his Ph.D. in Ecology and Evolutionary Biology from the University of Arizona. Prior to joining Charles River, he was a professor of Psychology at Northeastern University.
Human-AI Common Ground for Training and Operations
Date: Wednesday, December 4, 4:00 PM
Location: Room 320F
Presenter: Dr. Spencer Lynn
How do we create artificially intelligent agents capable of meaningful and trusted teaming with humans for training and operations? “Common ground” refers to congruent knowledge, beliefs, and assumptions among a team about their objectives, context, and capabilities. It has been a guiding principle in cognitive systems engineering for human-AI interaction, where research has focused on improving communication between humans and machines. Coordination (e.g., directability) and transparency (e.g., observability and predictability) are important for establishing, maintaining, and repairing both human-AI and human-human common ground: from human to machine, communication sets the state of the machine (coordination); from machine to human, communication reveals that state (transparency). Nonetheless, human-AI common ground remains relatively impoverished, and AI remains a tool rather than a teammate.
Among humans, common ground occurs at the level of conceptual structure; however, human concepts are not merely variables to be parameterized but are constructed during discourse. For example, an instructor uses communication (e.g., dialog) to activate and shape concepts in the student’s mind, contextualizing and refining them until shared perceptions are categorized (i.e., understood) in a common way. To increase autonomy and human-AI teaming, the challenge is to give the AI human-like conceptual structure. An architecture that enables human-AI common ground must provide the AI with representational capacity and algorithms that mimic features of human conceptual structure and flexibility. This presentation identifies critical features of human conceptual structure, including Conceptual Blending, Situated Categorization, and Concept Degeneracy. We describe the challenges of implementing these features in AI and outline technical approaches for hybrid symbolic/subsymbolic AI to meet them. As contemporary human-factors approaches to human-AI common ground mature, common-ground issues will move from interface transparency to concept congruency.
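As a toy illustration (not from the talk) of two of these features, the sketch below treats concepts as weighted feature sets: blend composes two concepts in the spirit of Conceptual Blending, and categorize reweights features by context in the spirit of Situated Categorization. All names and structures are invented for illustration.

```python
# Toy illustration of concepts as weighted feature sets, with
# (1) conceptual blending as a merge of two concepts and
# (2) situated categorization as context-dependent feature reweighting.

from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    features: dict[str, float] = field(default_factory=dict)  # feature -> salience

def blend(a: Concept, b: Concept, name: str) -> Concept:
    """Conceptual blending: a new concept inheriting features from both inputs."""
    merged = dict(a.features)
    for feat, w in b.features.items():
        merged[feat] = max(merged.get(feat, 0.0), w)
    return Concept(name, merged)

def categorize(percept: set[str], concepts: list[Concept], context: dict[str, float]) -> str:
    """Situated categorization: feature salience is modulated by the current context."""
    def score(c: Concept) -> float:
        return sum(w * context.get(f, 1.0) for f, w in c.features.items() if f in percept)
    return max(concepts, key=score).name

boat = Concept("boat", {"hull": 1.0, "floats": 0.9})
house = Concept("house", {"rooms": 1.0, "shelter": 0.8})
houseboat = blend(boat, house, "houseboat")  # the classic blending example
print(categorize({"hull", "rooms"}, [boat, house, houseboat], context={"hull": 2.0}))
```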
Presented by:
Kevin Golan
AI Research Scientist, Human-Centered AI
Kevin Golan is an AI Research Scientist at Charles River Analytics, where he focuses on probabilistic modeling and deep learning. Mr. Golan is currently leading a technical team implementing strategic courses of action for a reinforcement learning agent. His recent work on the READI project involved developing a probabilistic programming solution for risk modeling of an Air Force base, and he is currently contributing to the DARPA OPEN project, developing probabilistic models to estimate the demand for critical minerals. Mr. Golan received his MS in Electrical Engineering from ETH Zürich and his BE in Electrical and Electronics Engineering from the University of Manchester.
AI-Driven COA Generation Using Neuro-Symbolic Methods
Location: Room 320E
Presenter: Kevin Golan
AI-based systems show great promise for supporting military decision-making and complex planning, because AI systems can consider massive option spaces that far exceed human capabilities. Game-playing AI systems, particularly those employing deep reinforcement learning (DRL), demonstrate superhuman performance, with the capacity to unearth innovative strategies by exploring numerous Courses of Action (COAs) in complex scenarios. An opportunity exists for such AI systems to assist human planners, especially for Joint All Domain Operations (JADO), to construct higher-quality plans, explore strengths and weaknesses more deeply, and consider more alternatives in finite planning time. JADO planning must simultaneously consider multiple domains (e.g., Air, Land, Sea, Undersea) and interacting phenomena (e.g., maneuver, cyber), massively expanding the size of action spaces beyond those employed by state-of-the-art (SoA) game-playing AI. Real decision support systems must also provide deep foundations for trust, interpretability, and expression of commander intent, properties lacking in typical SoA black-box deep learning systems.
This presentation reports on a new DRL approach, called Neural Program Policies (NPPs), that constructs trainable COAs by composing a deep neural network with a structured domain-specific program, vastly reducing the state and action spaces to a smaller, more meaningful, and tractably learnable subset. In this work, we describe the domain-specific language (DSL) that abstracts away the actions and observations employed by the deep reinforcement learner. Then, we describe our OVERMIND framework for cross-simulator agent learning and self-play (e.g., via StarCraft II and military simulators). We conclude with performance results for 1) the ability to generate COAs and 2) NPP COA metrics, including action-space reduction (>1000X versus SoA DRL algorithms), performance prediction, and reduction in the number of simulation runs required for training. We also discuss how the approach supports a human-AI teamed paradigm to increase the number and quality of COAs considered.
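As a highly simplified sketch of the general idea (assumed from the abstract, not the OVERMIND implementation), the program below fixes the control flow of a policy over a small hypothetical DSL of macro-actions and lets a tiny learned network fill in only the program's choice points, which is what shrinks the effective action space:

```python
# Simplified sketch of a neural program policy: a hand-written program
# skeleton over a hypothetical DSL of macro-actions, with a learned
# network choosing only the program's free parameters.

import numpy as np

MACRO_ACTIONS = ["hold", "advance", "flank", "strike"]  # invented DSL vocabulary

class ParamNet:
    """Tiny policy network over the DSL's choice points (one linear layer)."""
    def __init__(self, obs_dim: int, n_choices: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_choices, obs_dim))

    def choose(self, obs: np.ndarray) -> int:
        logits = self.W @ obs
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return int(np.argmax(probs))  # greedy here; training would sample and update W

def program_policy(obs: np.ndarray, net: ParamNet) -> str:
    """The structured program: fixed control flow, learned choice points."""
    if obs[0] < 0.2:  # e.g., the program itself forces "hold" at low readiness
        return "hold"
    return MACRO_ACTIONS[net.choose(obs)]  # otherwise the net picks the macro-action

net = ParamNet(obs_dim=4, n_choices=len(MACRO_ACTIONS))
print(program_policy(np.array([0.8, 0.1, 0.3, 0.5]), net))
```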