Mistral launches powerful Devstral 2 coding model including open source, laptop-friendly version

VentureBeat — AI · Tuesday, December 9, 2025 at 7:44:00 PM
  • French AI startup Mistral has launched the Devstral 2 coding model, which includes a laptop-friendly version optimized for software engineering tasks. The release follows the introduction of the Mistral 3 LLM family, which targets strong performance on developers' local hardware.
  • The introduction of Devstral 2 is significant for Mistral as it addresses the growing demand for efficient, open-source AI tools that can operate offline, thereby appealing to both enterprise and independent developers seeking privacy and performance.
  • The release reflects a broader industry trend toward smaller, more efficient models that can outperform larger counterparts, a direction also visible among competitors such as Google, whose Gemini 3 model emphasizes real-world trust and performance in AI applications.
— via World Pulse Now AI Editorial System

Continue Reading
Anthropic Gives Claude ‘Agent Skills’ to Act More Like a Programmable Co-Worker
Neutral · Artificial Intelligence
Anthropic has introduced new 'Agent Skills' for its AI model Claude, enabling it to function more like a programmable co-worker. This enhancement allows Claude to perform tasks with greater autonomy and efficiency, positioning it as a more capable assistant in various work environments.
Accenture and Anthropic Launch Partnership Built around Claude
Positive · Artificial Intelligence
Accenture and Anthropic have announced an expansion of their partnership, focusing on the deployment of Anthropic's AI model, Claude, with plans to train approximately 30,000 Accenture employees. This collaboration aims to transition enterprises from AI pilot projects to full-scale implementations, positioning Accenture as one of Anthropic's largest enterprise customers.
Mistral's open coding model Devstral 2 claims sevenfold cost advantage over Claude Sonnet
Positive · Artificial Intelligence
Mistral AI has launched its second generation of open-source coding models, Devstral 2 and Devstral Small 2, claiming a sevenfold cost advantage over the Claude Sonnet model. This release is part of Mistral's ongoing efforts to enhance its product offerings in the competitive AI landscape.
Large Language Model-Based Generation of Discharge Summaries
Positive · Artificial Intelligence
Recent research has demonstrated the potential of Large Language Models (LLMs) in automating the generation of discharge summaries, which are critical documents in patient care. The study evaluated five models, including proprietary systems like GPT-4 and Gemini 1.5 Pro, and found that Gemini, particularly with one-shot prompting, produced summaries most similar to gold standards. This advancement could significantly reduce the workload of healthcare professionals and enhance the accuracy of patient information.
Leveraging KV Similarity for Online Structured Pruning in LLMs
Positive · Artificial Intelligence
A new online structured pruning technique called Token Filtering has been introduced for large language models (LLMs), allowing pruning decisions to be made during inference without the need for calibration data. This method measures token redundancy through joint key-value similarity, effectively reducing inference costs while maintaining essential information. The approach also includes a variance-aware fusion strategy to ensure important tokens are preserved even with high pruning ratios.
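The core idea described above — scoring token redundancy by joint key-value similarity and pruning the most redundant tokens at inference time — can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's implementation: the function name `token_filter` is hypothetical, and the variance-aware fusion step mentioned in the summary is omitted.

```python
import numpy as np

def token_filter(keys, values, keep_ratio=0.5):
    """Illustrative sketch: score each token's redundancy by the cosine
    similarity of its joint key-value vector to its nearest neighbor,
    then keep only the least redundant tokens.
    keys, values: (n_tokens, d) arrays from one attention layer."""
    kv = np.concatenate([keys, values], axis=-1)          # joint KV representation
    kv = kv / np.linalg.norm(kv, axis=-1, keepdims=True)  # normalize for cosine sim
    sim = kv @ kv.T                                       # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                        # ignore self-similarity
    redundancy = sim.max(axis=-1)                         # nearest-neighbor similarity
    n_keep = max(1, int(len(kv) * keep_ratio))
    kept = np.argsort(redundancy)[:n_keep]                # least redundant tokens
    return np.sort(kept)

rng = np.random.default_rng(0)
k = rng.standard_normal((8, 16))
v = rng.standard_normal((8, 16))
print(token_filter(k, v, keep_ratio=0.5))  # indices of the 4 tokens kept
```

Because the score is computed directly from the live KV cache, no calibration data is needed, which matches the "online" framing in the summary.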
Deep Learning and Machine Learning, Advancing Big Data Analytics and Management: Unveiling AI's Potential Through Tools, Techniques, and Applications
Positive · Artificial Intelligence
Recent advancements in artificial intelligence (AI), particularly in machine learning and deep learning, are significantly enhancing big data analytics and management. This development focuses on large language models (LLMs) like ChatGPT, Claude, and Gemini, which are transforming industries through improved natural language processing and autonomous decision-making capabilities.
Optimizing LLMs Using Quantization for Mobile Execution
Positive · Artificial Intelligence
A recent study has demonstrated the application of Post-Training Quantization (PTQ) to optimize Large Language Models (LLMs) for mobile execution, specifically focusing on Meta's Llama 3.2 3B model. The research achieved a 68.66% reduction in model size through 4-bit quantization, enabling efficient inference on Android devices using the Termux environment and the Ollama framework.
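The size reduction reported above can be sanity-checked with back-of-envelope arithmetic. The sketch below is illustrative only (it assumes fp16 baseline weights and group-wise 4-bit quantization with one fp16 scale per group, not the study's exact pipeline, which is why it lands near 72% rather than the reported 68.66%):

```python
def q4_size_reduction(n_params, group_size=32):
    """Rough size-reduction estimate for 4-bit group-wise quantization
    of an fp16 model (illustrative assumption, not the study's method)."""
    fp16_bytes = n_params * 2                   # 2 bytes per 16-bit weight
    q4_bytes = n_params // 2                    # two 4-bit weights packed per byte
    scale_bytes = (n_params // group_size) * 2  # one fp16 scale per weight group
    return 1 - (q4_bytes + scale_bytes) / fp16_bytes

# For a 3B-parameter model:
print(f"{q4_size_reduction(3_000_000_000):.2%}")  # 71.88%
```

Differences from a published figure like 68.66% typically come from unquantized layers (embeddings, norms) and metadata overhead in the packaged model file.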
GSAE: Graph-Regularized Sparse Autoencoders for Robust LLM Safety Steering
Positive · Artificial Intelligence
The introduction of Graph-Regularized Sparse Autoencoders (GSAEs) aims to enhance the safety of large language models (LLMs) by addressing their vulnerabilities to adversarial prompts and jailbreak attacks. GSAEs extend traditional sparse autoencoders by incorporating a Laplacian smoothness penalty, allowing for the recovery of distributed safety representations across multiple features rather than isolating them in a single latent dimension.
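The Laplacian smoothness penalty described above can be sketched as an extra term in a standard sparse-autoencoder objective. This is a minimal illustrative objective under assumed conventions (ReLU encoder, L1 sparsity, penalty of the form zᵀLz over latent features), not the paper's implementation:

```python
import numpy as np

def gsae_loss(x, W_enc, W_dec, L, l1=1e-3, lam=1e-2):
    """Sketch of a graph-regularized sparse autoencoder objective:
    reconstruction error + L1 sparsity + a Laplacian smoothness term
    that encourages graph-connected latent features to co-activate.
    x: (batch, d) inputs; L: (k, k) graph Laplacian over k latent features."""
    z = np.maximum(x @ W_enc, 0.0)                   # ReLU latent codes, (batch, k)
    x_hat = z @ W_dec                                # reconstruction
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    sparsity = l1 * np.mean(np.sum(np.abs(z), axis=1))
    smooth = lam * np.mean(np.einsum('bi,ij,bj->b', z, L, z))  # zᵀ L z per example
    return recon + sparsity + smooth
```

Because a graph Laplacian is positive semidefinite, the smoothness term is non-negative and is minimized when adjacent features take similar activation values, which is how the penalty spreads a safety signal across multiple features instead of one latent dimension.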