MiniMax / minimax/minimax-m2.5

MiniMax M2.5 - access through LLMTR

MiniMax M2.5 suits teams that want a balance of price and performance across code generation, refactoring, technical assistance, and general text workflows. Its 204,800-token context window can carry long instructions together with large amounts of project context.

Technical specifications

Canonical ID:    minimax/minimax-m2.5
Provider:        MiniMax
Context window:  204,800 tokens
Operations:      CHAT_COMPLETIONS
Modalities:      text

Pricing

A 6% platform margin applies to credit top-ups; model usage prices are not separately marked up.

Operation         Metric       Unit           Price
CHAT_COMPLETIONS  CACHE_READ   PER_1M_TOKENS  $0.030000
CHAT_COMPLETIONS  CACHE_WRITE  PER_1M_TOKENS  $0.375000
CHAT_COMPLETIONS  INPUT_TEXT   PER_1M_TOKENS  $0.300000
CHAT_COMPLETIONS  OUTPUT_TEXT  PER_1M_TOKENS  $1.200000
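The per-1M-token prices in the table translate directly into a cost estimate. A minimal sketch, assuming illustrative token counts (the counts and the helper function are not from this page, only the prices are):

```python
# USD per 1M tokens, taken from the pricing table above.
PRICES = {
    "input_text": 0.30,
    "output_text": 1.20,
    "cache_read": 0.03,
    "cache_write": 0.375,
}

def cost_usd(tokens_by_metric: dict) -> float:
    """Sum the cost across metrics, each priced per 1M tokens."""
    return sum(PRICES[m] * n / 1_000_000 for m, n in tokens_by_metric.items())

# Hypothetical request: 100k input tokens, 10k output tokens.
estimate = cost_usd({"input_text": 100_000, "output_text": 10_000})
# 0.03 + 0.012 = 0.042 USD
```

Cache reads are an order of magnitude cheaper than fresh input tokens, so prompts with a large reusable prefix benefit noticeably from caching.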

Example usage

If you already have OpenAI SDK flows, change only the base URL and the model identifier.

curl https://llmtr.com/v1/chat/completions \
  -H "Authorization: Bearer sk_your_key" \
  -H "Content-Type: application/json" \
  -d '{"model":"minimax/minimax-m2.5","messages":[{"role":"user","content":"Hello"}]}'
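The same request can be assembled in Python. A minimal sketch using only the standard library, mirroring the curl command above (the endpoint and the "sk_your_key" placeholder are taken from that example; the helper name is illustrative):

```python
import json

BASE_URL = "https://llmtr.com/v1"  # base URL from the curl example above

def chat_payload(prompt: str) -> dict:
    """Assemble a chat-completions request body for MiniMax M2.5."""
    return {
        "model": "minimax/minimax-m2.5",
        "messages": [{"role": "user", "content": prompt}],
    }

# Serialized body, identical in shape to the curl example's -d argument.
body = json.dumps(chat_payload("Hello"))
```

POSTing `body` to `BASE_URL + "/chat/completions"` with an `Authorization: Bearer` header reproduces the curl call; with the OpenAI SDK, the same payload fields are passed to its chat-completions method after pointing the client at `BASE_URL`.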
