Chain-of-Thought Hijacking exploits the extended reasoning of large language models to bypass their safety filters, achieving attack success rates of up to 100% on systems such as GPT, Claude, and Gemini. This vulnerability reveals that longer thinking