Cryptopolitan
2025-10-10 06:12:03

Reflection AI raised $2 billion to position itself as an open-source alternative to OpenAI

Reflection AI on Friday raised $2 billion at an $8 billion valuation, roughly 15 times the $545 million valuation it held just seven months ago. The raise aims to position the firm as both an open-source alternative to closed frontier labs like OpenAI and Anthropic and a Western counterpart to Chinese AI firms like DeepSeek.

The startup was founded in March 2024 by two former Google DeepMind researchers: Misha Laskin, who led reward modeling for DeepMind’s Gemini project, and Ioannis Antonoglou, who co-created the AlphaGo system. Their track record building AI systems at DeepMind underpins their pitch: that the right AI talent can build frontier models outside established tech companies. The latest initiative also changes the company’s trajectory, from its original focus on autonomous coding agents to serving as an open-source alternative to closed frontier AI labs.

Reflection AI recruits a team of top talent from DeepMind and OpenAI

“We are bringing the open model frontier back to the US to build a thriving AI ecosystem globally. Thankful for the support of our investors including NVIDIA, Disruptive, DST, 1789, B Capital, Lightspeed, GIC, Eric Yuan, Eric Schmidt, Citi, Sequoia, CRV, and others. https://t.co/r75YntGnjG” — Misha Laskin (@MishaLaskin), October 9, 2025

Reflection AI announced that it has onboarded a team of top talent from DeepMind and OpenAI to work on its new initiative. The firm said it has developed an advanced AI training stack, which it promises will be open to all, and that it has identified a scalable commercial model that aligns with its open intelligence strategy.

Reflection AI’s CEO, Misha Laskin, said the team numbers 60 people, including AI researchers and engineers across infrastructure, data training, and algorithm development. He also said the firm has secured a compute cluster and plans to release a frontier language model in 2026 trained on tens of trillions of tokens.

The firm stated that it has built a large-scale LLM and reinforcement learning platform capable of training massive Mixture-of-Experts (MoE) models at frontier scale, a feat it claims was once thought possible only within the world’s top labs. Reflection AI said it saw the effectiveness of its approach first-hand when the team applied it to the critical domain of autonomous coding, and that this milestone now lets it bring the same methods to general agentic reasoning.

MoE is the architecture behind many frontier LLMs, and until recently only large, closed AI labs could train such models at scale; a minimal code sketch of the idea appears at the end of this section. DeepSeek was the first to figure out how to train these models at scale in the open, followed by Qwen, Kimi, and other Chinese models.

“DeepSeek and Qwen and all these models are our wake-up call because if we don’t do anything about it, then effectively, the global standard of intelligence will be built by someone else. It won’t be built by America.” -Misha Laskin, CEO of Reflection AI

Laskin also argued that the current state of affairs puts the U.S. and its allies at a disadvantage, since enterprises and sovereign states avoid using Chinese models due to potential legal repercussions. He added that companies and sovereign countries can either choose to live with that competitive disadvantage or rise to the occasion.
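The sketch below is a minimal illustration of the Mixture-of-Experts idea mentioned above: a small router selects a few expert feed-forward networks per token, so only a fraction of the model’s parameters are active on any given input. This is an illustrative PyTorch example under assumed names and sizes, not Reflection AI’s actual training stack.

```python
# Minimal, illustrative Mixture-of-Experts (MoE) layer in PyTorch.
# Names and dimensions are arbitrary assumptions, not Reflection AI's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each "expert" is an ordinary feed-forward block.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x):                      # x: (batch, seq, d_model)
        scores = self.router(x)                # (batch, seq, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the chosen experts
        out = torch.zeros_like(x)
        # Only the top-k experts run for each token, which is why MoE models can
        # hold far more parameters than they activate on any single forward pass.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e)      # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Example: route a batch of token embeddings through the layer.
layer = MoELayer()
tokens = torch.randn(2, 16, 512)
print(layer(tokens).shape)  # torch.Size([2, 16, 512])
```

Production MoE models apply the same routing idea at vastly larger scale, with many more experts and much wider layers, which is what makes training them as much an infrastructure problem as a modeling one.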
Reflection AI aims to continue building and releasing frontier models sustainably

Reflection AI said the significant capital it has raised, together with a scalable commercial model aligned with its open intelligence strategy, ensures the firm can continue building and releasing frontier models sustainably. The company said it is scaling up to build open models that bring together large-scale pretraining and advanced reinforcement learning from the ground up.

David Sacks, the White House AI and Crypto Czar, celebrated Reflection AI’s new mission, saying it is great to see more American open-source AI models. He believes a significant segment of the global market will prefer the cost, customizability, and control that open source offers. Clem Delangue, co-founder and CEO of Hugging Face, said the challenge now will be to demonstrate a high velocity of sharing open AI models and datasets.

Laskin said Reflection AI would release model weights for public use while largely keeping datasets and full training pipelines proprietary. Model weights are the core parameters that determine how an AI system behaves, and Laskin said only a select handful of companies could actually make use of the full infrastructure stack anyway.
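To make the “open weights, closed pipeline” distinction concrete: once a lab publishes model weights, anyone can download and run them locally even without the training data or pipeline. The snippet below is a generic illustration using the Hugging Face transformers library; the repository name is a placeholder, not an actual Reflection AI release.

```python
# Illustrative only: how open-weight models are typically consumed once published.
# "some-org/some-open-model" is a hypothetical placeholder repository id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-open-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # downloads the released weights

inputs = tokenizer("Open-weight models can be run and fine-tuned locally.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```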
