
Reflection AI raised $2 billion on Friday at an $8 billion valuation, a roughly 15x increase from its previous valuation of $545 million just seven months ago. The new funding positions the company as an open-source alternative to closed frontier labs like OpenAI and Anthropic, and as the Western counterpart to Chinese AI companies like DeepSeek.
The startup was founded in March 2024 by two former Google DeepMind researchers: Misha Laskin, who led reward modeling for DeepMind’s Gemini project, and Ioannis Antonoglou, who co-created the AlphaGo AI system. The founders are making the case that the right AI talent can build frontier models outside of established technology companies.
The latest round shifts Reflection AI’s trajectory from its initial focus on autonomous coding agents toward becoming an open-source alternative to closed frontier AI labs.
Reflection AI hires top talent from DeepMind and OpenAI
We’re bringing the frontiers of open models back to the United States to build a thriving AI ecosystem around the world.
We would like to thank our investors including NVIDIA, Disruptive, DST, 1789, B Capital, Lightspeed, GIC, Eric Yuan, Eric Schmidt, Citi, Sequoia, and CRV for their support. https://t.co/r75YntGnjG
– Misha Laskin (@mishalaskin) October 9, 2025
Reflection AI announced that it has brought on a team of top talent from DeepMind and OpenAI to work on new initiatives. The company said it has developed an advanced AI training stack and is committed to making it available to everyone. The startup added that it has also identified a scalable commercial model that aligns with its open intelligence strategy.
Misha Laskin, CEO of Reflection AI, revealed that the company’s team of about 60 includes AI researchers and engineers working on infrastructure, data, training, and algorithm development. He also confirmed that the company has secured computing clusters and plans to release a frontier language model trained on tens of trillions of tokens in 2026.
The company announced it has built a large-scale LLM and reinforcement learning platform capable of training large mixture-of-experts (MoE) models at frontier scale, a feat it claims was once thought possible only within world-class labs. Reflection AI said it saw the effectiveness of the approach firsthand when the team applied it to the key domain of autonomous coding, and that this milestone now allows it to bring the same techniques to general agentic reasoning.
Mixture of experts (MoE) is the architecture that powers frontier LLMs. Such models could previously only be trained at scale inside large, closed AI labs. DeepSeek was the first to show how to train them at scale in the open, followed by Qwen, Kimi, and other models from China.
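The core idea of the MoE architecture mentioned above can be illustrated with a toy sketch: a router scores every expert for each input, but only the top-k experts actually run, so the model’s total parameter count can grow without growing per-token compute. All names and numbers below are illustrative, not details of Reflection AI’s (unreleased) stack.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class MoELayer:
    """Toy mixture-of-experts layer over scalar inputs."""

    def __init__(self, experts, router_weights, top_k=2):
        self.experts = experts                # callables: float -> float
        self.router_weights = router_weights  # one routing weight per expert
        self.top_k = top_k

    def __call__(self, x):
        # Router: score every expert, keep only the top-k.
        scores = [w * x for w in self.router_weights]
        top = sorted(range(len(scores)),
                     key=lambda i: scores[i], reverse=True)[: self.top_k]
        # Gate values weight the selected experts' outputs.
        gates = softmax([scores[i] for i in top])
        # Sparse compute: only the selected experts are evaluated.
        return sum(g * self.experts[i](x) for g, i in zip(gates, top))

experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x, lambda x: -x]
layer = MoELayer(experts, router_weights=[0.1, 0.9, 0.5, -0.3], top_k=2)
out = layer(2.0)  # only experts 1 and 2 run for this input
```

In a real MoE transformer the experts are feed-forward networks, the router is learned, and load-balancing losses keep experts evenly used; the routing-then-weighted-sum structure is the same.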
“DeepSeek and Qwen and all these models are a wake-up call to us, because if we don’t do anything, the de facto world-standard intelligence is going to be built by someone else. It’s not going to be built by America.”
– Misha Laskin, CEO of Reflection AI
Laskin also argued that this dynamic puts the United States and its allies at a disadvantage, because businesses and sovereign nations avoid using Chinese models due to potential legal implications. They can either accept that competitive disadvantage or confront the situation, he added.
Reflection AI aims to continue building and releasing frontier models on a sustained basis
Reflection AI said the significant capital it has raised and its scalable commercial model, consistent with its open intelligence strategy, will allow it to continue building and releasing frontier models on a sustained basis. The company said it is scaling open models that integrate massive pre-training and advanced reinforcement learning from the ground up.
White House AI and Crypto Czar David Sacks celebrated Reflection AI’s new mission, saying it is great to see more American open-source AI models. He believes a significant portion of the global market will prefer the cost, customization, and control that open source offers.
Clem Delangue, co-founder and CEO of Hugging Face, believes the future lies in the rapid sharing of open AI models and datasets. Laskin revealed that Reflection AI will keep most of its datasets and its complete training pipeline proprietary while making the model weights publicly available. Model weights are the core parameters that determine how an AI system behaves, and Laskin noted that only a select few companies can actually make use of the full infrastructure stack.

