Qwen3.5-397B-A17B

reasoning
multimodal

Qwen3.5-397B-A17B represents a new generation of unified vision-language models that integrate text, image, and video understanding in a single architecture from the very start of training. Unlike previous generations, where multimodality was often bolted onto a language core as an add-on, this model uses early fusion of modalities, letting it tie visual and textual information together more deeply. The model supports a context of 262,144 tokens, extensible up to 1 million, and language coverage has been expanded to 201 languages and dialects, making the model truly multilingual.
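As a rough illustration of the context extension, the snippet below computes the scaling factor a YaRN-style RoPE setup would need to stretch the native 262,144-token window to 1 million tokens. The `rope_scaling` dictionary mirrors the convention used in Hugging Face model configs, but the exact keys for this model are an assumption, not official documentation.

```python
# Minimal sketch: RoPE scaling factor for extending the context window.
# The rope_scaling dict below is an assumption modeled on common HF configs.
native_ctx = 262_144      # native context of Qwen3.5-397B-A17B
target_ctx = 1_000_000    # extended context

factor = target_ctx / native_ctx  # ~3.81x stretch
rope_scaling = {
    "rope_type": "yarn",
    "factor": factor,
    "original_max_position_embeddings": native_ctx,
}
print(round(factor, 2))
```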

The architectural heart of the model is a hybrid design that combines innovative attention mechanisms with a sparse expert system. It alternates two block types: Gated DeltaNet, a linear attention mechanism with recurrent state updates that remains efficient on ultra-long sequences, and classic Gated Attention (standard transformer attention), responsible for precise relationship extraction. These blocks feed Mixture of Experts (MoE) layers containing 512 experts, of which only 10 routed experts and 1 shared expert are activated for each token. Thus, out of 397 billion total parameters, each forward pass activates only 17 billion, ensuring high efficiency.
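The routing arithmetic described above can be sketched in a few lines: each token picks its top-10 experts by router score, and the shared expert is always active, so 11 expert MLPs run per token out of 512. The router details and names here are illustrative, not the actual implementation.

```python
import random

NUM_EXPERTS = 512   # routed experts per MoE layer
TOP_K = 10          # routed experts activated per token

def route(router_logits):
    """Return the indices of the top-k routed experts; the shared expert
    is always on regardless of the router scores."""
    topk = sorted(range(NUM_EXPERTS),
                  key=lambda i: router_logits[i],
                  reverse=True)[:TOP_K]
    return topk, "shared"

random.seed(0)
logits = [random.random() for _ in range(NUM_EXPERTS)]
routed, shared = route(logits)
print(len(routed) + 1)  # 11 experts process this token
```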

The uniqueness of Qwen3.5-397B-A17B is evident in its outstanding results on key benchmarks. In language tests, the model demonstrates exceptional instruction following: 76.5 on IFBench and 67.6 on MultiChallenge, indicating its ability to handle complex, compound queries. In the field of multimodal understanding, it holds leading positions: MathVision (88.6) requires solving mathematical problems based on diagrams, ZEROBench (41.0 on the subtest) evaluates genuine understanding without overfitting, and OmniDocBench1.5 (90.8) confirms its quality in document analysis and text recognition. In agentic scenarios, the model achieves an impressive score of 78.6 on BrowseComp, surpassing models such as DeepSeek-V3.2 and Kimi K2.5 in tasks involving multi-step search and interaction with external tools.

Thanks to its capabilities, the model has a wide range of applications. It is ideally suited for creating multimodal agents capable not only of interpreting screen content but also of performing actions within a graphical interface. In a corporate environment, it can automate the processing of documents, spreadsheets, and diagrams, while its deep understanding of code and the command line allows it to be used in software development. Thus, Qwen3.5 stands as one of the most versatile solutions on the market.


Announce Date: 16.02.2026
Parameters: 397B
Experts: 512
Activated at inference: 17B
Context: 262K
Layers: 60, using full attention: 15
Attention Type: Hybrid (Gated DeltaNet linear attention + Gated Attention)
Developer: Qwen
Transformers Version: 4.57.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3.5-397B-A17B capabilities. You can obtain an API access token on the token management page after registration and verification.
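Once an endpoint is available, it would typically expose an OpenAI-compatible chat API. The sketch below only builds the multimodal request body; the model identifier and image URL are placeholders, and endpoint URL and token handling are assumptions to adapt to the actual service.

```python
import json

# Hypothetical OpenAI-compatible vision-chat payload; the model name
# and image URL are placeholders, not confirmed endpoint values.
payload = {
    "model": "Qwen3.5-397B-A17B",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize the table in this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/table.png"}},
        ],
    }],
    "max_tokens": 512,
}
body = json.dumps(payload)
print(len(payload["messages"][0]["content"]))  # 2 content parts: text + image
```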
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
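As one possible deployment route (an assumption; this page does not specify the serving stack), a tensor-parallel vLLM launch with full context and a fine-tuned LoRA adapter might look like the fragment below. The model path and adapter path are placeholders.

```shell
# Hedged sketch: serving the model on 4 GPUs with vLLM.
# Paths and the adapter name are illustrative placeholders.
vllm serve Qwen/Qwen3.5-397B-A17B \
  --tensor-parallel-size 4 \
  --max-model-len 262144 \
  --enable-lora \
  --lora-modules my-adapter=/path/to/lora
```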

Recommended server configurations for hosting Qwen3.5-397B-A17B

Prices:
Name                           Context  Parallelism  GPU  Price, hour  TPS     Link
teslaa100-4.16.256.480         262,144  tensor       4    $9.17        9.183   Launch
h200-2.24.256.320              262,144  tensor       2    $9.42        5.337   Launch
h100nvl-3.24.384.480           262,144  pipeline     3    $12.38       5.008   Launch
h100-4.16.256.480              262,144  tensor       4    $14.99       9.183   Launch
h100nvl-4.32.384.480           262,144  tensor       4    $16.23       15.822  Launch
teslaa100-6.44.512.480.nvlink  262,144  pipeline     6    $14.10       5.228   Launch
teslaa100-8.44.512.480.nvlink  262,144  tensor       8    $18.35       23.536  Launch
h200-4.32.768.480              262,144  tensor       4    $19.23       15.844  Launch
h200-8.52.1024.960             262,144  tensor       8    $37.37       32.114  Launch
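When comparing the configurations above, it can help to convert hourly price and throughput into a cost per million generated tokens. This back-of-the-envelope helper assumes the listed TPS is sustained throughput for a single stream.

```python
def cost_per_million_tokens(price_per_hour: float, tps: float) -> float:
    """Dollars per 1M generated tokens at a sustained tokens-per-second rate."""
    tokens_per_hour = tps * 3600
    return price_per_hour / tokens_per_hour * 1_000_000

# Two configurations from the pricing table above
print(round(cost_per_million_tokens(9.17, 9.183), 2))    # teslaa100-4.16.256.480
print(round(cost_per_million_tokens(37.37, 32.114), 2))  # h200-8.52.1024.960
```

Note that the cheapest hourly rate is not automatically the cheapest per token: higher-throughput configurations can amortize a larger hourly price.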

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.