NVIDIA Build (NIM API) — 80 pts, 1 win
vs
👑 MiniMax — 85 pts, 2 wins

🏆 Overall, MiniMax offers more free value (winning 2 of 6 categories)
📊 Side-by-Side

| Category | NVIDIA Build (NIM API) | MiniMax |
|---|---|---|
| Free Tier | ✅ Unlimited (40 RPM rate limit) | ✅ No explicit limit |
| Free API | ✅ Unlimited (quota cap removed) | ✅ ¥15 credit |
| Rate Limit | 40 RPM (can apply for an increase to 200 RPM) | Varies |
| Open Source | ❌ No | ✅ Yes |
| Free Models | 10 | 2 |
| GitHub Stars | – | ⭐ 3,417 |
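The 40 RPM cap above means a client should throttle itself rather than rely on server-side 429 errors. A minimal sliding-window throttle sketch (plain Python; the class name and injectable clock are illustrative, not part of either API):

```python
import time
from collections import deque

class RpmThrottle:
    """Client-side throttle for an API capped at `rpm` requests per minute."""

    def __init__(self, rpm: int, clock=time.monotonic):
        self.rpm = rpm
        self.clock = clock        # injectable for testing
        self.sent = deque()       # timestamps of requests in the last 60 s

    def wait_time(self) -> float:
        """Seconds to wait before the next request may be sent (0 if clear)."""
        now = self.clock()
        # Drop timestamps older than the 60-second window.
        while self.sent and now - self.sent[0] >= 60.0:
            self.sent.popleft()
        if len(self.sent) < self.rpm:
            return 0.0
        # Oldest request must age out of the window first.
        return 60.0 - (now - self.sent[0])

    def record(self) -> None:
        """Call after each request is actually sent."""
        self.sent.append(self.clock())
```

Before each call, `time.sleep(throttle.wait_time())` then `throttle.record()`; the same class works for a raised 200 RPM quota by changing the constructor argument.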
🧠 Model Details

NVIDIA Build (NIM API) — 10 free models:

- MiniMax M2.7 — 230B params, all-rounder for coding, reasoning, and office work
- Kimi K2.5 — Moonshot's native multimodal agentic model; 15T training tokens, 1M context, top-tier Chinese-language ability
- GLM-5.1 — Zhipu's latest flagship, an upgrade of GLM-5, optimized for agentic coding and long-horizon reasoning (GLM-5 deprecated 2026-04-20)
- DeepSeek V3.2 — 671B MoE, coding champion
- DeepSeek R1 — 671B MoE, reasoning champion
- Gemma 4 31B-IT — Google's latest open-source model; strong agentic capability, runs on consumer hardware
- Nemotron-3-Super-120B — NVIDIA's own flagship; hybrid Mamba-Transformer MoE, 1M context, 7.5x the throughput of Qwen3.5-122B
- Llama 4 Maverick — Meta's latest open-source LLM
- Qwen 3.5 — Alibaba's Qwen; native multimodal, 397B params with only 17B active, extremely efficient
- Step 3.5 Flash — StepFun; extremely fast

MiniMax — 2 free models:

- MiniMax-M2.7 — 230B params, SOTA in coding, tool use, search, and office tasks; 1M context
- MiniMax-01 — million-token context, free to use