Qwen3-Coder-480B-A35B-Instruct

Qwen3-Coder introduces a groundbreaking approach to automated software development. This model from the Alibaba Qwen team leverages an advanced Mixture-of-Experts (MoE) architecture with 480 billion parameters, of which only 35 billion are actively engaged for any given token, striking a balance between performance and computational efficiency.

The model's capabilities in agent-based programming mark a genuine breakthrough. Qwen3-Coder goes beyond simple code generation: it autonomously plans, uses tools, incorporates feedback, and makes decisions within complex, multi-stage software development workflows. Trained on a dataset of 7.5 trillion tokens, 70% of which are code, and refined through reinforcement learning across 20,000 parallel environments, Qwen3-Coder has mastered real-world software development scenarios.

Particularly impressive is its native support for contexts up to 256K tokens, extendable to 1 million, which lets the model process entire code repositories and complex projects within a single context.
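The sparse-activation idea behind MoE can be sketched as a top-k router: each token is scored against all experts, but only the few highest-scoring experts actually run. This is a minimal illustration, not the production implementation; the 160-expert / 8-active split follows the published model configuration, while the random scoring is purely for demonstration.

```python
import random

NUM_EXPERTS = 160  # total routed experts in the published config
TOP_K = 8          # experts activated per token in the published config

def route_token(router_logits):
    """Pick the TOP_K highest-scoring experts for one token."""
    ranked = sorted(range(len(router_logits)),
                    key=lambda i: router_logits[i], reverse=True)
    return ranked[:TOP_K]

random.seed(0)
logits = [random.random() for _ in range(NUM_EXPERTS)]  # stand-in router scores
active = route_token(logits)
print(f"{len(active)} of {NUM_EXPERTS} experts active for this token")
```

Because only 8 of 160 experts run per token, the effective compute corresponds to roughly 35B of the 480B total parameters.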

Qwen3-Coder's superiority over competitors is demonstrated by outstanding results on key benchmarks. On SWE-Bench Verified, it achieves state-of-the-art performance among open-source models, surpassing DeepSeek V3 (78%) and Kimi K2 (82%) with a score comparable to Claude Sonnet 4 (86%). The model also leads on CodeForces ELO and LiveCodeBench v5, setting new standards for open-source programming solutions.

The application scenarios for Qwen3-Coder span the full spectrum of modern software development—from building interactive web applications to modernizing legacy systems. The model excels in agent-driven development workflows, including autonomous feature development covering backend APIs, frontend components, and databases. It can generate complete games, simulations with dynamic objects, 3D visualizations, and animated backgrounds with mouse movement responsiveness. Qwen3-Coder is also ideal for legacy system modernization, where it can analyze architecture, identify security vulnerabilities, plan migrations, and implement changes while maintaining backward compatibility.
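The agent-driven workflow described above follows a plan / act / observe / revise loop. The sketch below shows that loop in miniature with a hypothetical `run_tests` tool and a stand-in `fake_model` policy; a real deployment would route these steps through the model's tool-calling interface instead.

```python
def run_tests(code):
    """Hypothetical tool: pretend to run a test suite against the code."""
    return "PASS" if "return a + b" in code else "FAIL: wrong result"

TOOLS = {"run_tests": run_tests}

def fake_model(observation):
    """Stand-in for the model's decision step: patch the code on failure."""
    if "FAIL" in observation:
        return "def add(a, b):\n    return a + b"
    return None  # no change needed

def agent_loop(code, max_steps=5):
    """Minimal agent loop: call a tool, observe feedback, decide, repeat."""
    observation = ""
    for _ in range(max_steps):
        observation = TOOLS["run_tests"](code)
        if observation == "PASS":
            break
        patch = fake_model(observation)
        if patch:
            code = patch
    return code, observation

final_code, status = agent_loop("def add(a, b):\n    return a - b")
print(status)
```

The same structure scales up to multi-stage tasks: the tool registry grows (file edits, shell commands, test runners), and the model, rather than a hard-coded policy, decides each next action from the accumulated observations.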


Announce Date: 22.07.2025
Parameters: 480B
Experts: 160
Activated: 35B
Context: 262K
Attention Type: Full or Sliding Window Attention
VRAM requirements: 282.7 GB using 4-bit quantization
Developer: Alibaba
Transformers Version: 4.51.0
License: Apache 2.0
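The VRAM figure above can be sanity-checked with back-of-the-envelope arithmetic: 480B weights at 4 bits each take 240 GB before runtime overhead. The ~18% overhead factor for KV cache, activations, and CUDA buffers below is an assumption, chosen only to show how a figure like 282.7 GB can arise.

```python
PARAMS = 480e9        # total parameters
BITS_PER_WEIGHT = 4   # 4-bit quantization

weights_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9  # raw weight storage: 240.0 GB
total_gb = weights_gb * 1.18                     # assumed runtime overhead
print(f"weights: {weights_gb:.1f} GB, with overhead: {total_gb:.1f} GB")
```

Actual overhead depends on context length, batch size, and the serving engine, so treat the quoted requirement as a planning baseline rather than a hard limit.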

Public endpoint

Use our pre-built public endpoints to test inference and explore Qwen3-Coder-480B-A35B-Instruct capabilities.
Model Name Context Type GPU TPS Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • to maximize endpoint performance,
  • to enable full context for long sequences,
  • to ensure top-tier security for data processing in an isolated, dedicated environment,
  • to use custom weights, such as fine-tuned models or LoRA adapters.
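Once a private instance is running, it is typically queried through an OpenAI-compatible chat-completions endpoint. The sketch below only builds the request body; the base URL is a placeholder for whatever your instance exposes, and the model name is the Hugging Face identifier.

```python
import json

# Assemble a chat-completions request for a self-hosted, OpenAI-compatible
# server (e.g. vLLM). POST the body to http://<your-instance>/v1/chat/completions.
payload = {
    "model": "Qwen/Qwen3-Coder-480B-A35B-Instruct",
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    "max_tokens": 1024,
    "temperature": 0.7,
}
body = json.dumps(payload)
```

Any OpenAI-compatible client library can send this payload unchanged, which makes it easy to swap a public API for your dedicated instance.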

Recommended configurations for hosting Qwen3-Coder-480B-A35B-Instruct

Prices:
Name                    vCPU  RAM, MB  Disk, GB  GPU       Price, hour
teslaa100-4.44.512.320  44    524288   320       4× A100   $10.68
teslah100-4.44.512.320  44    524288   320       4× H100   $20.77

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.