LongCat-Flash-Chat is the first open language model developed by Meituan. Built on a Mixture-of-Experts (MoE) architecture, it has 560 billion total parameters, of which a variable number (27 billion on average) are activated per token. The base architecture is only 28 layers deep, unusually shallow for a model of this scale.
The model takes a distinctive approach to efficient use of compute: its Zero-computation Experts mechanism dynamically activates between 18.6 and 31.3 billion parameters per token depending on context complexity, which significantly optimizes both training and inference. Architectural innovations include Shortcut-connected MoE (ScMoE), which widens the computation-communication overlap window, and a modified Multi-head Latent Attention (MLA) with scale-correction factors for stable scaling.
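To make the routing idea concrete, here is a minimal PyTorch sketch of an MoE layer with zero-computation experts: the router scores ordinary FFN experts alongside identity "experts" that return the token unchanged, so per-token compute varies with how the router distributes its top-k picks. All dimensions, expert counts, and the class itself are illustrative assumptions, not LongCat-Flash's actual configuration, and the sketch omits the load-balancing and capacity machinery a production MoE needs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroComputationMoE(nn.Module):
    """Illustrative sketch: the router scores real FFN experts and
    'zero' experts that simply pass the token through. Tokens whose
    top-k picks include zero experts spend fewer FLOPs."""

    def __init__(self, d_model=64, n_ffn_experts=8, n_zero_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.n_ffn = n_ffn_experts
        # One linear router scores all experts, real and zero alike.
        self.router = nn.Linear(d_model, n_ffn_experts + n_zero_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_ffn_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        weights = F.softmax(self.router(x), dim=-1)
        top_w, top_idx = weights.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            idx, w = top_idx[:, slot], top_w[:, slot]
            for e in range(self.n_ffn):
                mask = idx == e
                if mask.any():
                    out[mask] += w[mask, None] * self.experts[e](x[mask])
            # Indices beyond the FFN experts are zero-computation
            # experts: a weighted identity, no matmuls at all.
            zmask = idx >= self.n_ffn
            out[zmask] += w[zmask, None] * x[zmask]
        return out

moe = ZeroComputationMoE()
y = moe(torch.randn(10, 64))  # per-token FLOPs depend on routing
```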
On benchmarks, LongCat-Flash-Chat delivers excellent results, competing successfully with well-known proprietary and open models. On Arena Hard, which tests complex reasoning and instruction following, it ranks among the leaders, surpassing models such as DeepSeek-V3.1 and Kimi-K2 Base. On IFEval (an instruction-following benchmark), it set a record score of 89.65, outperforming all existing solutions at the time of release, including flagship models from OpenAI and Anthropic. Most notably, the model shines in the Agentic Tool Use segment, where it leads on practically every benchmark.
The model is not small and will require infrastructure investment, but it is well optimized for inference: fast and economical to run. LongCat-Flash-Chat is well suited for in-depth document analysis with complex instructions, for conversational assistants, and, above all, for agent systems (AI Agents). The developers particularly emphasize the latter, noting the model's strength in tasks that require planning, reasoning, and tool invocation, which makes it a powerful engine for autonomous agents.
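If you deploy the model on a dedicated instance behind an OpenAI-compatible server (for example, vLLM), a tool-use call might look like the sketch below. The endpoint URL, API key, model name, and the get_weather tool are placeholders for illustration, not real immers.cloud values.

```python
from openai import OpenAI

# Hypothetical self-hosted LongCat-Flash-Chat endpoint served through
# an OpenAI-compatible API; URL, key, and model name are placeholders.
client = OpenAI(base_url="http://YOUR-INSTANCE:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool, defined by the caller
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="LongCat-Flash-Chat",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)
# An agent loop would execute the returned tool calls and feed the
# results back to the model as "tool" messages.
print(resp.choices[0].message.tool_calls)
```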
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying a private instance in one of the following configurations:
| Context | Parallelism | vCPU | RAM, MB | Disk, GB | GPU | Price, $/hr | |
|---|---|---|---|---|---|---|---|
| 131,072 | tensor | 32 | 393216 | 480 | 4 | $9.52 | Launch |
| 131,072 | pipeline | 32 | 524288 | 480 | 3 | $14.36 | Launch |
| 131,072 | tensor | 44 | 524288 | 480 | 4 | $15.66 | Launch |
| 131,072 | tensor | 32 | 393216 | 480 | 4 | $16.23 | Launch |
| 131,072 | tensor | 32 | 786432 | 480 | 4 | $19.23 | Launch |
| Context | Parallelism | vCPU | RAM, MB | Disk, GB | GPU | Price, $/hr | |
|---|---|---|---|---|---|---|---|
| 131,072 | tensor | 52 | 1048576 | 960 | 8 | $37.37 | Launch |
There are no configurations for this model with the selected context and quantization yet.
Contact our dedicated neural-network support team at nn@immers.cloud, or send your request to the sales department at sale@immers.cloud.