CIRCE
Cyber-psychological security tools leveraging attacker cognitive vulnerabilities
Context-driven Interventions through Reasoning about Cyberpsychology Exploitation (CIRCE)
Most cybersecurity approaches involve analyzing attacker tools and techniques and fortifying existing defenses. While these methods have their merits, they ignore the potential to use the attacker’s own psychology against them. The CIRCE tool from Charles River Analytics demonstrated that it could successfully thwart cyberattackers by exploiting biases in perception and decision-making. Charles River, with teammates Arizona State University (ASU), Montana State University (MSU), Assured Information Security (AIS), Narf Industries, and SimSpace Corporation, conducted five studies, each rooted in a different psychological aspect of cyberattack performance. The project is part of IARPA’s ReSCIND program, which aims to develop a new set of cyberpsychology-informed defenses that take advantage of attackers’ limitations, such as decision-making biases and cognitive vulnerabilities.

CIRCE datasets
The open-access materials from the ReSCIND CIRCE team are available via the Open Science Framework (OSF). The datasets were collected as part of human subjects research conducted within the program.
“Focusing on exploiting human vulnerabilities makes sense. Although we live in a time where cyber offense technologies evolve at lightning speed, humans have cognitive constraints that are difficult to overcome. Therefore, defenses that target the human attackers remain relevant for longer periods of time.”

Sean Guarino
Principal Scientist and Principal Investigator on CIRCE
Cyber defense strategies
Today, cyber defenses try to understand what kinds of tools adversaries are using. Considerable effort is spent assessing whether an adversary is on a network and, if so, how they got on. But there’s very little work focused on exploiting the human executing the attack.
Part of the strategy involves misleading human attackers into believing something about the attack surface or defenses that isn’t true. For example, if the name of an entry port signals administrative authority, attackers might target it selectively to gain network access, and once they do, their behavior can be steered in specific ways.
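As a simple illustration of this kind of naming-based misdirection (a hypothetical sketch, not the CIRCE implementation; the port number, service name, and banner are invented for the example), a defender could stand up a decoy listener whose name and banner suggest administrative access and log whoever takes the bait:

    import datetime
    import socket

    # Hypothetical decoy: a listener whose banner implies an administrative
    # service, meant to draw attacker attention away from real entry points.
    DECOY_PORT = 2222                                   # illustrative choice
    BANNER = b"admin-gateway v1.4 (restricted) login: "

    def run_decoy():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", DECOY_PORT))
        srv.listen(5)
        while True:
            conn, addr = srv.accept()
            # Any contact with the decoy is a defensive signal worth recording.
            print(f"{datetime.datetime.now().isoformat()} decoy contact from {addr[0]}:{addr[1]}")
            conn.sendall(BANNER)
            conn.close()

    if __name__ == "__main__":
        run_decoy()

In practice, contacts with a decoy like this would feed an alerting pipeline, and the attacker’s subsequent moves against it can be shaped further.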
CIRCE relies on the principle of oppositional human factors (OHF), which identifies and amplifies the constraints attackers face while carrying out their work. The theory is that by degrading the attacker’s experience, defenders can frustrate them into abandoning the job. When an attacker lands on a network, they have many choices available. The goal is to steer those choices, unbeknownst to them, so that they waste time and effort on the attack.
Assessing cognitive bias susceptibility
Across the studies, the CIRCE tool showed that it could effectively discern people who are susceptible to cognitive biases and heuristics, and manipulate attacker behavior and performance by exploiting those cognitive vulnerabilities. The most effective studies focused on loss aversion bias and the representativeness heuristic.
The principle behind loss aversion is that people are more averse to loss than they are receptive to an equivalent gain. The strategy for using loss aversion as a cyber defense exploits a situation where the attacker has made some initial gains. The CIRCE approach threatens those gains, so the attacker works hard to protect what they already have at the cost of further progress in the attack.
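The asymmetry behind loss aversion is often captured with a prospect-theory value function. The short sketch below uses the commonly cited Tversky and Kahneman parameters from the behavioral-economics literature (not figures from the CIRCE studies) to show that a prospective loss looms larger than an equal gain:

    # Prospect-theory value function with the standard Tversky-Kahneman
    # parameters (alpha = beta = 0.88, lambda = 2.25); illustrative only.
    ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

    def subjective_value(x: float) -> float:
        """Perceived value of a gain (x > 0) or a loss (x < 0)."""
        if x >= 0:
            return x ** ALPHA
        return -LAMBDA * ((-x) ** BETA)

    print(round(subjective_value(10), 1))   # a 10-unit gain feels like ~7.6
    print(round(subjective_value(-10), 1))  # a 10-unit loss feels like ~-17.1

Under these illustrative parameters, a defender who credibly threatens an attacker’s foothold pits further progress against a loss that is felt more than twice as strongly as the equivalent gain.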
The second study used the representativeness heuristic to shape attacker behavior. Under this heuristic, people follow rules of thumb or prior assumptions without considering related information. For example, a cyberattacker who assumes that out-of-date or unpatched devices are soft targets would attack those first. The CIRCE study intentionally configured network devices to mimic outdated systems, luring attackers toward them and away from valuable assets.
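One simple way a device can be made to look outdated is to advertise a stale software version in its service banner. The sketch below is hypothetical (it is not how the CIRCE testbed was configured): a decoy web service that reports an old server string and records every probe:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical decoy web service that advertises an outdated software
    # version, playing to the assumption that unpatched systems are soft targets.
    class OutdatedLookingHandler(BaseHTTPRequestHandler):
        server_version = "Apache/2.2.3 (CentOS)"   # deliberately stale banner
        sys_version = ""                            # hide the real Python version

        def do_GET(self):
            body = b"<html><body><h1>It works!</h1></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, fmt, *args):
            # Every probe of the decoy is worth recording for the defenders.
            print("decoy probe:", self.client_address[0], fmt % args)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), OutdatedLookingHandler).serve_forever()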
Confirmation bias involves selectively seeking evidence that supports one’s expectations, or failing to seek evidence that could challenge them. An attacker stealing financial data might look for spreadsheet files. In this study, the real financial information was camouflaged in non-spreadsheet file types, while decoy spreadsheets filled with less valuable “honey-data” lured attackers away from the real information.
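A rough sketch of the honey-data idea follows; the file names, paths, and field layout are invented for illustration rather than taken from the study. It generates plausible-looking but worthless decoy spreadsheets, while the real records would live elsewhere in a non-spreadsheet format:

    import csv
    import random
    from pathlib import Path

    # Hypothetical honey-data generator: plants decoy "financial" spreadsheets
    # for an attacker expecting spreadsheet targets. Names are illustrative.
    DECOY_DIR = Path("shared/finance")
    DECOY_NAMES = ["q3_payroll.csv", "vendor_payments.csv", "budget_2024.csv"]

    def write_decoys(seed: int = 0) -> None:
        random.seed(seed)
        DECOY_DIR.mkdir(parents=True, exist_ok=True)
        for name in DECOY_NAMES:
            with open(DECOY_DIR / name, "w", newline="") as fh:
                writer = csv.writer(fh)
                writer.writerow(["account_id", "description", "amount_usd"])
                for i in range(200):
                    # Plausible-looking but worthless rows.
                    writer.writerow([f"ACCT-{1000 + i}",
                                     f"Invoice {random.randint(10000, 99999)}",
                                     round(random.uniform(50, 5000), 2)])

    if __name__ == "__main__":
        write_decoys()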
Anchoring bias makes it difficult to deviate from initial information. Exposing attackers to honeypots early in the experiment led them to believe that the system would contain more honeypots as they infiltrated deeper into the network. Attackers then tended to ignore valuable assets engineered to display honeypot characteristics, fixating on apparent honeypots while missing objects of actual value.
In asymmetric dominance, the presence of a decoy item can shift one’s preferences among other items. Movie concessions, for example, profit most when consumers choose large buckets of popcorn, so the medium size is intentionally priced only a little below the large. When choosing among small, medium, and large sizes, consumers perceive the large as the best value. The research team tested this concept in a cyber defense setting by examining how attackers choose between low-value, low-risk targets and high-value, high-risk ones. They introduced decoy targets intended to steer attackers toward the less valuable options. While the decoys had little overall effect, analysis of individual decision points revealed some influence on specific target selections.
CIRCE approach and playbook
Charles River conducted these studies with expert attackers on realistic networks. The approach proved advantageous because the results are generalizable—for example, the confirmation bias process could be applied to different parts of the attack. In addition, the specifics of each attack closely follow the well-established MITRE ATT&CK® cyberattack framework, which means CIRCE is defending against known cyber kill chains.
Charles River also created a playbook at the conclusion of the CIRCE effort, which explains how to deploy and generalize these cyber defenses. The playbook lets defenders learn about these cyberpsychological defenses through examples and work out what they would need to implement them.

“We were able to frame cognitive vulnerabilities in a cyberattack context and show that attackers could be manipulated. Our team’s strong experimental design gave us confidence in our results and in the validity of the cyberpsychological approach.”

Dr. Spencer Lynn
Senior Scientist and Modeling Lead on CIRCE
Advancing cyberpsychology
Results from CIRCE informed Charles River’s presentation at I/ITSEC, the leading modeling, simulation, and training conference for defense and security professionals. The presentation and paper, Challenges and Solutions in Using Virtual Testbeds to Study Hacker Cognitive Constraints, describe how cyber testbeds, commonly used for training, can serve as powerful environments for studying behavior and cognition in cybersecurity contexts.
“The premise of deterring breaches through cyberpsychology could also apply to AI-driven attacks,” Guarino said. “While AI and humans might not share the same set of biases, the ones that AI might be partial to are becoming increasingly common knowledge. Once you learn about them, you can build cyber defenses that are designed specifically to take advantage of these biases and mislead the AI,” he said.
Building on a highly successful effort that laid the groundwork for the CIRCE cyberpsychology-driven cybersecurity approach, the Charles River team is now seeking opportunities to transition the effort into a larger-scale program.
This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via N66001‑24‑C‑4501. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.