Qwen3.5-397B-A17B represents a new generation of unified vision-language models that integrate the understanding of text, images, and video within a single architecture from the very beginning of training. Unlike previous versions, where multimodality was often achieved through add-ons to the language core, this model uses early fusion of modalities, allowing it to connect visual and textual information more deeply. The model supports a context of 262,144 tokens, extensible up to 1 million tokens. Furthermore, language support has been expanded to 201 languages and dialects, making the model truly multilingual.
The architectural heart of the model is a hybrid design that combines innovative attention mechanisms and a sparse expert system. It is based on two types of blocks: Gated DeltaNet, which is linear attention with recurrent state updates (efficient for ultra-long sequences), and classic Gated Attention (transformer attention), responsible for precise relationship extraction. These blocks are integrated into a Mixture of Experts (MoE) layer containing 512 experts, of which only 10 routed experts and 1 shared expert are activated for each token. Thus, with a total of 397 billion parameters, computations require activating only 17 billion, ensuring high efficiency.
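The sparse activation described above can be illustrated with a toy router. This is a minimal sketch under stated assumptions, not Qwen's actual implementation: it only shows the mechanics of selecting the top 10 of 512 routed experts per token (the shared expert is always active and is noted in a comment).

```python
import numpy as np

# Toy illustration of top-k MoE routing as described above:
# 512 routed experts, of which 10 are selected per token, plus
# 1 shared expert that is always active. The router weights and
# shapes here are made up for demonstration purposes.

NUM_EXPERTS = 512
TOP_K = 10

rng = np.random.default_rng(0)

def route(token_logits):
    """Select the top-k experts for one token from router logits."""
    top_idx = np.argsort(token_logits)[-TOP_K:]  # indices of the 10 routed experts
    weights = np.exp(token_logits[top_idx])
    weights /= weights.sum()                     # softmax over the selected experts
    return top_idx, weights

logits = rng.normal(size=NUM_EXPERTS)            # one token's router logits
experts, weights = route(logits)
print(len(experts))        # 10 routed experts activated
# The shared expert would be added unconditionally on top of these 10,
# which is how only ~17B of the 397B total parameters run per token.
```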
The uniqueness of Qwen3.5-397B-A17B is evident in its outstanding results on key benchmarks. In language tests, the model demonstrates exceptional instruction following: 76.5 on IFBench and 67.6 on MultiChallenge, indicating its ability to handle complex, compound queries. In the field of multimodal understanding, it holds leading positions: MathVision (88.6) requires solving mathematical problems based on diagrams, ZEROBench (41.0 on the subtest) evaluates genuine understanding without overfitting, and OmniDocBench1.5 (90.8) confirms its quality in document analysis and text recognition. In agentic scenarios, the model achieves an impressive score of 78.6 on BrowseComp, surpassing models such as DeepSeek-V3.2 and Kimi K2.5 in tasks involving multi-step search and interaction with external tools.
Thanks to its capabilities, the model has a wide range of applications. It is ideally suited for creating multimodal agents capable not only of interpreting screen content but also of performing actions within a graphical interface. In a corporate environment, it can automate the processing of documents, spreadsheets, and diagrams, while its deep understanding of code and the command line allows it to be used in software development. Thus, Qwen3.5 stands as one of the most versatile solutions on the market.
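For the document-processing use case above, a multimodal request typically mixes text and image parts in one message. The sketch below builds such a payload in the common chat-completions style; the model id, image URL, and field layout are illustrative assumptions, not confirmed endpoint details, and no request is actually sent.

```python
import json

# Illustrative chat-completions style payload combining text and an image,
# e.g. for invoice/document analysis. The URL and model id are placeholders.
payload = {
    "model": "Qwen3.5-397B-A17B",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Extract the line-item totals from this invoice."},
                {"type": "image_url", "image_url": {"url": "https://example.com/invoice.png"}},
            ],
        }
    ],
    "max_tokens": 512,
}

body = json.dumps(payload)  # serialized request body ready for an HTTP POST
```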
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
| Name | Context | Parallelism | GPUs | Price/hr | TPS |
|---|---|---|---|---|---|
| — | 262,144 | tensor | 4 | $9.17 | 9.183 |
| — | 262,144 | tensor | 2 | $9.42 | 5.337 |
| — | 262,144 | pipeline | 3 | $12.38 | 5.008 |
| — | 262,144 | tensor | 4 | $14.99 | 9.183 |
| — | 262,144 | tensor | 4 | $16.23 | 15.822 |
| Name | Context | Parallelism | GPUs | Price/hr | TPS |
|---|---|---|---|---|---|
| — | 262,144 | pipeline | 6 | $14.10 | 5.228 |
| — | 262,144 | tensor | 8 | $18.35 | 23.536 |
| — | 262,144 | tensor | 4 | $19.23 | 15.844 |
| Name | Context | Parallelism | GPUs | Price/hr | TPS |
|---|---|---|---|---|---|
| — | 262,144 | tensor | 8 | $37.37 | 32.114 |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.