Red Teaming for Program Managers

RT4PM introduces program managers, analysts, and decision makers to a four-step approach designed to help focus effort, save time and energy, and avoid common difficulties in using adversary-based assessments.

Red teaming, or adversary-based assessment, is a flexible tool that program managers use to understand the threat and to deliver components and systems that achieve their mission in hostile environments. Red teaming methods apply across the full system life cycle, from concept through retirement.

The Red Teaming for Program Managers Process

Step one — Determine your need for red teaming

Successful use of red teaming begins with a clear understanding of the programmatic need for adversary-based assessment. What are you trying to accomplish, and what types of red teaming can help you achieve your goals?

Red teaming may, among other things, help you identify vulnerabilities, challenge assumptions, test operational concepts, and train staff.

Step two — Specify what your red team should do

Specifying what your red team should do comes next. RT4PM introduces eight types of red teaming, each described as a black box in terms of its objectives, special considerations, key cost factors, and common deliverables.

Design assurance red teaming

Design assurance helps ensure that a system will achieve its mission in hostile environments. It is usually performed with the cooperation of the development team and typically models goal-directed adversaries motivated to defeat the system’s mission. Design assurance assessments do not require functional systems, and often the greatest benefits result from assessment of prototypes or even early design documentation.

Red team hypothesis testing

Hypothesis testing helps to confirm or reject a conjecture, whether formally or informally conceived, and to understand the merits of competing alternatives. Experiments designed to evaluate hypotheses frequently help determine the viability of proposed security measures. Hypothesis testing often involves multiple teams, including white and blue teams, and sometimes multiple red teams.
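
To make this concrete, the following sketch is an illustration rather than part of RT4PM: it tests the hypothesis that two proposed security measures perform differently, using a two-proportion z-test on hypothetical red team trial counts.

    import math

    def two_proportion_z(success_a, trials_a, success_b, trials_b):
        # Two-sided two-proportion z-test for differing success rates.
        p_a = success_a / trials_a
        p_b = success_b / trials_b
        pooled = (success_a + success_b) / (trials_a + trials_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / trials_a + 1 / trials_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the standard normal CDF.
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical counts: red team attempts that defeated each candidate
    # security measure, out of 40 trials apiece.
    z, p = two_proportion_z(success_a=9, trials_a=40, success_b=21, trials_b=40)
    print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real difference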

Red team gaming

Gaming facilitates interactive, exploratory development of adversarial scenarios in a simulated environment. Unlike traditional gaming, red team gaming focuses more on the adversary’s goals and activity than on the defender’s mission. Games help to explore ideas, test operational concepts, challenge perspectives, and train staff. Gaming applies mainly to problems involving human decision making.

Behavioral red teaming

Behavioral red teaming models how a specific adversary might act in a given context. This can help analysts and designers assess what might deter or prevent an adversary from acting, distinguish malicious from routine behaviors, and determine meaningful attack indicators. Behavioral red teams often depend on subject matter experts and team members drawn from the adversary demographic.

Red team benchmarking

Benchmarking establishes a baseline for comparing system responses to adversary actions. It helps measure the progress of an implementation toward a security specification, its progress relative to an earlier benchmark, and its security relative to another implementation. Security specifications used in benchmarking are often sensitive or even classified.
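
As a rough illustration (the test names and outcomes below are hypothetical, not drawn from RT4PM), comparing a current implementation against an earlier benchmark can be as simple as diffing red team outcomes test by test:

    # Hypothetical outcomes: "defeated" means the red team defeated the
    # control, "resisted" means the system held.
    baseline = {"spoof-auth": "defeated", "replay-msg": "resisted", "escalate": "resisted"}
    current = {"spoof-auth": "resisted", "replay-msg": "resisted", "escalate": "defeated"}

    for test in sorted(baseline):
        before, after = baseline[test], current[test]
        note = "unchanged" if before == after else f"changed: {before} -> {after}"
        print(f"{test:12} {note}")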

Operational red teaming

Operational red teaming models an active adversary within a live or simulated context. Operational red teams seek to defeat the target system’s mission in realistic deployment environments. Operational red teaming helps to train staff, conduct testing and evaluation, validate concepts of operation, and identify vulnerabilities. Operational red teams will usually have less time than real-world adversaries to prepare.

Analytical red teaming

Analytical red teaming applies formal and mathematical methods to identify and evaluate the courses of action an adversary might take to achieve a mission. Most forms of analytical red teaming explore and model the potential attack space, then narrow it by applying specific adversary models. Most analytical red teams do not do field work but might use field data. Analysis often includes consideration of tactics, techniques, and procedures.
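
For a sense of what this looks like in practice, here is a minimal sketch, with a hypothetical attack graph, cost figures, and adversary budget, of enumerating an attack space and pruning it against a specific adversary model's resources:

    # Hypothetical attack graph: edges map a state to (next state, effort cost).
    ATTACK_GRAPH = {
        "start": [("phish-user", 2), ("exploit-vpn", 5)],
        "phish-user": [("steal-creds", 1)],
        "exploit-vpn": [("steal-creds", 0), ("goal", 4)],
        "steal-creds": [("goal", 3)],
        "goal": [],
    }

    def attack_paths(node, cost=0, path=()):
        # Yield every (path, total cost) from `node` to the goal.
        path = path + (node,)
        if node == "goal":
            yield path, cost
            return
        for nxt, step_cost in ATTACK_GRAPH[node]:
            yield from attack_paths(nxt, cost + step_cost, path)

    # Adversary model: a low-resource attacker with an effort budget of 6,
    # used to prune the attack space to feasible courses of action.
    for path, cost in attack_paths("start"):
        if cost <= 6:
            print(" -> ".join(path), f"(cost {cost})")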

Penetration testing

Penetration testing determines whether and by what methods a red team, possibly modeling a particular adversary, can defeat security controls designed to prevent unauthorized access or control of systems and data. Penetration tests help determine what access or control an insider, an outsider, or an outsider working with an insider may obtain. Penetration tests usually require functional systems and consider only what can be done at a given point in time.

Step three — Identify the right red team

Hiring or standing up a red team is not always straightforward. RT4PM identifies and explains important issues and questions that are helpful to consider.

Often, you develop a deeper understanding of your assessment goals simply by finding the questions that matter most.

Step four — Plan to use your red team deliverables

Using red team deliverables effectively can greatly enhance the value of an assessment. RT4PM identifies six common red team deliverables.

For example, attack graphs and trees are a common deliverable: attack trees usually focus on consequence or vulnerability, while attack graphs usually focus on adversary activity and interaction with the target.
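
As a minimal illustration of the attack tree form (the scenario and cost figures below are hypothetical), an AND/OR tree can be evaluated to find the cheapest route to the attacker's goal:

    # Hypothetical attack tree for the goal "open the safe". OR nodes take
    # the cheapest child; AND nodes require every child.
    tree = {
        "kind": "or", "children": [
            {"kind": "leaf", "cost": 90},               # pick the lock
            {"kind": "and", "children": [               # learn combo AND reach dial
                {"kind": "leaf", "cost": 20},
                {"kind": "leaf", "cost": 10},
            ]},
        ],
    }

    def min_cost(node):
        # Recursively compute the attacker's cheapest way to achieve this node.
        if node["kind"] == "leaf":
            return node["cost"]
        costs = [min_cost(child) for child in node["children"]]
        return min(costs) if node["kind"] == "or" else sum(costs)

    print(min_cost(tree))  # prints 30: the combination branch beats lock picking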
