Pioneering threat assessment and mitigation for AI systems
Redwood Research is a nonprofit AI safety and security research organization
In the coming years or decades, AI systems will very plausibly match or exceed human capabilities across most intellectual tasks, fundamentally transforming society. Our research specifically addresses the risks that could arise if these powerful AI systems purposefully act against the interests of their developers and human institutions broadly.
We work to better understand these risks, and to develop methodologies that will allow us to manage them while still realizing the benefits of AI.
Our Focus Areas

AI Control
We introduced and continue to advance the research area of “AI control.” In our ICML oral paper AI Control: Improving Safety Despite Intentional Subversion, we proposed protocols for monitoring potentially malign LLM agents.
AI companies and other researchers have since built on this work, and AI control has come to be seen as a bedrock approach for mitigating catastrophic risk from misaligned AI.
Evaluations and demonstrations of risk from strategic deception
In Alignment Faking in Large Language Models, we (in collaboration with Anthropic) demonstrated that Claude sometimes strategically hides misaligned intentions. This work provides the strongest concrete evidence to date that LLMs might naturally fake alignment in order to resist attempts to train them.


Consulting on risks from misalignment
We collaborate with governments and advise AI companies including Google DeepMind and Anthropic on practices for assessing and mitigating risks from misaligned AI agents. For example, we partnered with UK AISI to produce A sketch of an AI control safety case, which describes how developers can construct a structured argument that their models are incapable of subverting control measures.
Highlighted Research
Our most impactful work on AI safety and security
AI Control
Improving Safety Despite Intentional Subversion
Our research introduces and evaluates protocols designed to be robust even when AI models are trying to deceive us. Using GPT-4 as a stand-in for a potentially deceptive model, we tested strategies to detect hidden backdoors in code.
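To give a flavor of what such a protocol looks like, below is a minimal Python sketch of trusted monitoring with deferral: a weaker trusted model scores each output from the powerful untrusted model, a small budget of the most suspicious outputs goes to human auditors, and anything still suspicious is replaced with a trusted rewrite. All function names, thresholds, and the audit budget here are hypothetical placeholders for illustration, not the paper's actual implementation.

import random

AUDIT_BUDGET = 0.02          # fraction of outputs humans can review (assumed for illustration)
SUSPICION_THRESHOLD = 7.0    # monitor score above which we stop trusting the output (assumed)

def untrusted_generate(task: str) -> str:
    # Placeholder for the powerful, possibly deceptive model (GPT-4 in the paper's setup).
    return f"# solution for: {task}\n..."

def trusted_monitor_score(task: str, code: str) -> float:
    # Placeholder for a weaker trusted model rating how suspicious the code looks (0-10).
    return random.uniform(0.0, 10.0)

def trusted_generate(task: str) -> str:
    # Placeholder fallback: the trusted-but-weaker model writes a simpler solution itself.
    return f"# trusted fallback solution for: {task}\n..."

def human_audit(task: str, code: str) -> bool:
    # Placeholder for scarce human review; returns True if a backdoor is found.
    return False

def run_protocol(tasks: list[str]) -> list[str]:
    outputs = []
    audits_remaining = int(AUDIT_BUDGET * len(tasks))
    for task in tasks:
        code = untrusted_generate(task)
        if trusted_monitor_score(task, code) >= SUSPICION_THRESHOLD:
            if audits_remaining > 0:
                audits_remaining -= 1
                if human_audit(task, code):
                    raise RuntimeError("Backdoor found: halt deployment and investigate.")
            # Suspicious output: fall back to the trusted model rather than risk it.
            code = trusted_generate(task)
        outputs.append(code)
    return outputs

if __name__ == "__main__":
    print(run_protocol([f"task {i}" for i in range(100)])[:2])

Protocols of this general shape are evaluated on both usefulness (how often the untrusted model's stronger solutions are kept) and safety (how rarely a deliberately backdoored solution slips through).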
Alignment Faking
When AI Models Pretend to Be Safe
We demonstrate that state-of-the-art LLMs can strategically fake alignment during training to avoid being changed, revealing a crucial challenge for AI safety. A model might pretend to follow human values while pursuing different goals.