Mistral Launches Magistral to Compete in the Reasoning AI Race
While Magistral puts Mistral in closer competition with well-known reasoning AI models, there are still doubts across the industry about how well current LLMs can actually "reason."

French artificial intelligence firm Mistral has announced the release of its latest large language model (LLM), Magistral, marking its entry into the growing space of "reasoning" AI models. The new model aims to improve the transparency and traceability of AI-generated outputs, particularly in tasks that require step-by-step logical processing.
Unveiled on Tuesday during London Tech Week, Magistral is available through Mistral's platforms and the open-source AI repository Hugging Face. The company has released two versions of the model: Magistral Small, a 24-billion-parameter model licensed as open-source, and a more powerful, proprietary version, Magistral Medium, currently available in limited preview.
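For developers who want to experiment with the open-weight version, a minimal sketch of loading a checkpoint from Hugging Face with the transformers library might look like the following. The repository ID, precision settings, and prompt are assumptions for illustration, not Mistral's documented setup, and a 24-billion-parameter model generally needs substantial GPU memory or quantization to run locally.

```python
# Minimal sketch: loading an open-weight Magistral Small checkpoint from
# Hugging Face with the transformers library. The repository ID below is
# an illustrative assumption; check Mistral's Hugging Face page for the
# actual name and any tokenizer requirements.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Magistral-Small-2506"  # hypothetical repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",           # spread layers across available devices
)

# A question that benefits from step-by-step reasoning.
messages = [{"role": "user", "content": "If a meeting starts at 14:05 and ends at 16:50, how long does it last?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```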
Mistral describes Magistral as suitable for general-purpose use cases that involve more complex reasoning and demand greater accuracy. The model is designed to provide a visible "chain of thought," which the company says helps users understand how conclusions are reached. This feature may appeal to professionals in law, healthcare, finance, and public services, where regulatory compliance and interpretability are key concerns.
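As a rough illustration of how a visible chain of thought can be handled in practice, the sketch below assumes a reasoning model that wraps its intermediate steps in <think>...</think>-style tags, a convention some reasoning models use; the exact delimiters Magistral emits may differ. The idea is to separate the reasoning trace from the final answer so the trace can be logged for audit or compliance review.

```python
# Illustrative sketch: splitting a model response into its reasoning trace
# and final answer, assuming the trace is wrapped in <think>...</think> tags.
# The tag convention is an assumption for illustration, not Magistral's
# documented output format.
import re

def split_reasoning(response: str) -> tuple[str, str]:
    """Return (reasoning_trace, final_answer) from a raw model response."""
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if match:
        trace = match.group(1).strip()
        answer = (response[:match.start()] + response[match.end():]).strip()
        return trace, answer
    return "", response.strip()

raw = "<think>The meeting runs from 14:05 to 16:50, which is 2 h 45 min.</think>It lasts 2 hours and 45 minutes."
trace, answer = split_reasoning(raw)
print("Reasoning:", trace)
print("Answer:", answer)
```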
According to CEO Arthur Mensch, a key distinction of Magistral is its multilingual reasoning capability, especially in European languages. "Historically, we've seen U.S. models reason in English and Chinese models reason in Chinese," he said during a session at London Tech Week. Mensch noted that Magistral is initially focused on European languages, with plans to expand support to other languages over time.
The launch comes as more AI companies shift their focus from building larger models to improving how existing models process and present information. Reasoning models are designed to handle more sophisticated tasks by simulating logical steps, rather than generating answers based solely on pattern recognition. This shift also responds to ongoing concerns about the interpretability of AI systems, which often function as black boxes even to their creators.
Mistral claims that Magistral Medium can process up to 1,000 tokens per second, potentially offering faster performance than several competing models. It joins a growing list of reasoning-focused models released over the past year, including OpenAI's o1 and o3, Google's Gemini variants, Anthropic's Claude, and DeepSeek's R1.
The release also highlights Mistral's continuing emphasis on open-source AI development. The company, founded in Paris in 2023, has received significant backing from investors including Microsoft, DST Global, and General Catalyst. It raised approximately USD 685.7 million in a Series B round in June 2024, bringing total funding to over USD 1.37 billion and reaching a reported valuation of USD 6.63 billion.
Despite its relatively short history, Mistral has seen considerable commercial traction. According to media reports, the company has secured over USD 114.3 million in contracted sales within 15 months of launching its first commercial offerings.
While Magistral puts Mistral in closer competition with well-known reasoning AI models, there are still doubts across the industry about how well current large language models (LLMs) can actually "reason." A recent research paper from Apple, titled The Illusion of Thinking, questions the belief that today's models truly have general reasoning abilities. The researchers found that these models tend to struggle or fail when tasks become too complex, revealing key limitations in their capabilities.