
MiniMax Moves to a Gated Model for its Latest 229B Coding AI.

MiniMax Releases the M2.7 Model: High-Performance Coding AI Under New, Strict Commercial Licensing

MiniMax, the prominent Chinese AI startup, has officially released the weights for its latest model, MiniMax-M2.7. The new iteration, optimized for advanced programming and software engineering tasks, follows a period in which the model was available exclusively through the company's proprietary API.

The End of the "Modified MIT" Era

The most significant change accompanying this release is the shift in licensing. Previous models in the MiniMax-M2 series utilized a Modified MIT License, which allowed commercial use as long as clear attribution to MiniMax was provided.

With the release of M2.7, however, MiniMax has moved to a Restrictive Commercial License. While the weights remain downloadable for local execution, any commercial application or service built on the model is strictly prohibited without explicit, formal permission from MiniMax.

API Consolidation and Pricing

MiniMax-M2.7 retains the 229B-parameter architecture of its predecessors. API pricing also remains unchanged at $0.30 per million input tokens and $1.20 per million output tokens.
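The published rates make per-request costs easy to estimate. The sketch below is a simple illustration using the article's pricing; the request sizes in the example are hypothetical.

```python
# Cost estimate using the published MiniMax-M2.7 API rates:
# $0.30 per 1M input tokens, $1.20 per 1M output tokens.
INPUT_RATE = 0.30 / 1_000_000   # USD per input token
OUTPUT_RATE = 1.20 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated API bill in USD for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical coding session: 50k tokens of context in,
# 10k tokens of generated code out.
print(f"${estimate_cost(50_000, 10_000):.4f}")  # → $0.0270
```

At these rates, output tokens cost four times as much as input tokens, so long generated completions dominate the bill for code-generation workloads.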

A notable shift in the ecosystem is also occurring: whereas earlier versions such as MiniMax-M2.5 saw widespread third-party hosting, version 2.7 is being steered toward a centralized API model. Users wishing to integrate the new model must now purchase API access directly from MiniMax or from authorized partners.

Early Authorized Partners

Despite the restrictions, some major cloud providers have already secured licensing rights. Ollama Cloud has officially confirmed that it has received authorization to host and serve MiniMax-M2.7 on its platform for its users.

MiniMax-M2.7 is tuned to compete directly with models such as DeepSeek-Coder and CodeLlama. Its 229B-parameter scale, combined with a strong focus on coding, gives it superior long-context reasoning for large-scale software projects compared to general-purpose models.

MiniMax's shift from a modified MIT license to a permission-based system reflects a broader trend among Chinese AI companies seeking to prevent competitors from commercially exploiting their models without benefiting the original owners. This creates a stronger business moat as open-weight models become increasingly sophisticated.

Ollama Cloud's early approval shows that MiniMax wants to maintain broad developer accessibility through popular tools while controlling quality of service and security through carefully selected partners.

At $0.30 / $1.20 per million tokens, MiniMax-M2.7 positions itself as cost-effective for its 229B size, appealing to startups that need high-end AI programming on limited budgets.

Source: Hugging Face
