LongCat-Flash-Chat

LongCat-Flash-Chat is the first open language model released by Meituan. It is a Mixture-of-Experts (MoE) model with 560 billion total parameters, of which a variable number (roughly 27 billion on average) are activated per token. The base architecture has only 28 layers, unusually shallow for a model of this scale.

The model takes a distinctive approach to efficient use of compute: its Zero-computation Experts mechanism dynamically activates between 18.6 and 31.3 billion parameters per token depending on contextual complexity, significantly optimizing both training and inference. Architectural innovations include Shortcut-connected MoE (ScMoE), which widens the computation-communication overlap window, and a modified Multi-head Latent Attention (MLA) with scale-correction factors for stable scaling.
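A rough intuition for zero-computation experts: the router scores identity "experts" alongside real FFN experts, so a token can spend some of its top-k routing slots on a no-op and consume fewer FLOPs. Below is a minimal PyTorch sketch with toy sizes; all names, dimensions, and routing details are illustrative, not the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroComputationMoE(nn.Module):
    """Toy sketch of routing with zero-computation (identity) experts.

    Sizes are deliberately tiny; the real model's router, expert count,
    and normalization differ.
    """

    def __init__(self, d_model=64, n_ffn_experts=8, n_zero_experts=4, top_k=2):
        super().__init__()
        self.n_ffn = n_ffn_experts
        self.top_k = top_k
        # The router scores real FFN experts and identity experts together.
        self.router = nn.Linear(d_model, n_ffn_experts + n_zero_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_ffn_experts)
        ])

    def forward(self, x):  # x: (n_tokens, d_model)
        weights, idx = F.softmax(self.router(x), dim=-1).topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for t in range(x.size(0)):
            for w, e in zip(weights[t], idx[t].tolist()):
                if e >= self.n_ffn:
                    # Zero-computation expert: pass the token through unchanged,
                    # spending no FLOPs on this slot.
                    out[t] += w * x[t]
                else:
                    out[t] += w * self.experts[e](x[t])
        return out

# Easy tokens route more of their top-k slots to identity experts, so the
# number of activated FFN parameters varies per token.
moe = ZeroComputationMoE()
print(moe(torch.randn(5, 64)).shape)  # torch.Size([5, 64])
```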

On benchmarks, LongCat-Flash-Chat posts excellent results, competing successfully with well-known proprietary and open models. On Arena Hard, which tests complex reasoning and instruction following, it ranks among the leaders, surpassing models such as DeepSeek-V3.1 and Kimi-K2 Base. On IFEval, an instruction-following benchmark, the model scored a record 89.65, outperforming all solutions available at the time of release, including flagship models from OpenAI and Anthropic. Most notably, the model shines in agentic tool use, where it leads on practically every benchmark in the segment.

The model is not small and will require infrastructure investment, but it is well-optimized for inference, making it fast and economical to serve. LongCat-Flash-Chat is well suited to in-depth document analysis with complex instructions, conversational assistants, and, above all, agent systems (AI Agents). The developers particularly emphasize the latter, noting the model's strength in tasks that require planning, reasoning, and tool invocation, which makes it a powerful engine for autonomous agents.
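Since agentic tool use is the model's headline strength, a typical integration is an OpenAI-compatible chat-completions call with tool schemas attached. A hedged sketch follows; the base_url, token, model name, and the get_weather tool are placeholders for whatever endpoint you deploy:

```python
from openai import OpenAI

# Placeholder endpoint and token, not a documented service.
client = OpenAI(base_url="https://your-endpoint.example/v1", api_key="YOUR_TOKEN")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Return current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="LongCat-Flash-Chat",
    messages=[{"role": "user", "content": "What's the weather in Beijing?"}],
    tools=tools,
)
# If the model decides a tool is needed, it returns structured tool calls
# instead of plain text.
print(resp.choices[0].message.tool_calls)
```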


Announce Date: 29.08.2025
Parameters: 562B
Experts: 512
Activated at inference: 27B
Context: 131K
Layers: 28
Attention Type: Multi-head Latent Attention
Developer: Meituan-longcat
Transformers Version: 4.57.1
License: MIT
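For a quick smoke test with the listed Transformers version, loading might look like the sketch below. The Hub id is assumed from the developer name, the custom MoE architecture likely requires trust_remote_code, and full-precision weights need a multi-GPU node:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meituan-longcat/LongCat-Flash-Chat"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",       # shard the weights across available GPUs
    trust_remote_code=True,  # custom MoE modeling code ships with the repo
)

messages = [{"role": "user", "content": "Summarize the key risks in this contract clause: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```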

Public endpoint

Use our pre-built public endpoints for free to test inference and explore LongCat-Flash-Chat capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
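Once an endpoint is live for this model, a request would look roughly like the following; the URL is a placeholder, and the token comes from the token management page:

```python
import requests

resp = requests.post(
    "https://api.example-endpoint/v1/chat/completions",  # placeholder URL
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    json={
        "model": "LongCat-Flash-Chat",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```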

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended configurations for hosting LongCat-Flash-Chat

Prices:

| Name                          | Context, tokens | Parallelism | vCPU | RAM, GB | Disk, GB | GPUs | Price, hour |
|-------------------------------|-----------------|-------------|------|---------|----------|------|-------------|
| teslaa100-4.32.384.480.nvlink | 131,072         | tensor      | 32   | 384     | 480      | 4    | $9.52       |
| h200-3.32.512.480             | 131,072         | pipeline    | 32   | 512     | 480      | 3    | $14.36      |
| h100-4.44.512.480             | 131,072         | tensor      | 44   | 512     | 480      | 4    | $15.66      |
| h100nvl-4.32.384.480          | 131,072         | tensor      | 32   | 384     | 480      | 4    | $16.23      |
| h200-4.32.768.480             | 131,072         | tensor      | 32   | 768     | 480      | 4    | $19.23      |
| h200-8.52.1024.960            | 131,072         | tensor      | 52   | 1024    | 960      | 8    | $37.37      |
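The Parallelism column indicates how an inference engine shards the model across the GPUs in each configuration: "tensor" splits every layer across the GPUs, while "pipeline" assigns contiguous layers to different GPUs. As a hedged sketch, serving one of the 4-GPU "tensor" rows with vLLM's offline API might look like this (the Hub id is assumed, and engine support and flags for LongCat-Flash vary by vLLM version):

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="meituan-longcat/LongCat-Flash-Chat",  # assumed Hub id
    tensor_parallel_size=4,   # matches the 4-GPU "tensor" rows above;
                              # a "pipeline" row would use pipeline_parallel_size
    trust_remote_code=True,
    max_model_len=131072,     # the full 131K context needs ample GPU memory
)

out = llm.generate(
    ["Plan a three-step research agenda on MoE routing."],
    SamplingParams(max_tokens=128),
)
print(out[0].outputs[0].text)
```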

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.