Qwen2.5-VL-7B

multimodal

Qwen2.5-VL-7B offers a strong balance between quality and computational cost for multimodal processing. Its MRoPE (Multimodal Rotary Position Embedding) with absolute time alignment lets the model learn temporal dynamics and event pacing from the intervals between timestamps, at no extra computational cost. The 7B model's Vision Transformer mixes full and window attention: only 4 layers use full attention, while the remaining layers use window attention with a maximum window of 112×112 pixels. This keeps computational cost roughly linear in image size and lets the model process images at their native resolution. Dynamic-FPS video sampling extends these capabilities along the temporal dimension, enabling precise event localization.
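The compute saving from window attention can be sketched with simple arithmetic. The snippet below is a rough model that counts only query-key pairs per layer, assuming the 14 px patch size and 112×112 px windows quoted above, and an image that splits evenly into windows; it compares one full-attention layer to one window-attention layer:

```python
# Back-of-envelope comparison of full vs. window attention cost for a
# ViT-style vision encoder. Patch size (14 px) and window size (112 px,
# i.e. 8x8 patches) follow the figures above; the cost model is a
# simplification that counts only query-key pairs, ignoring projections.

PATCH = 14              # ViT patch size in pixels
WINDOW = 112 // PATCH   # window side in patches -> 8

def attention_pairs(width_px, height_px, full=False):
    """Number of query-key pairs one attention layer must score."""
    w, h = width_px // PATCH, height_px // PATCH   # patch grid
    n = w * h                                      # total visual tokens
    if full:
        return n * n                               # quadratic in tokens
    # Window attention: each window of at most 8x8 patches attends locally.
    windows_w = -(-w // WINDOW)   # ceil division
    windows_h = -(-h // WINDOW)
    per_window = min(w, WINDOW) * min(h, WINDOW)   # tokens per window
    return windows_w * windows_h * per_window ** 2

full = attention_pairs(1792, 1792, full=True)
windowed = attention_pairs(1792, 1792)
print(f"full: {full:,} pairs, windowed: {windowed:,} pairs, "
      f"ratio ~{full / windowed:.0f}x")
```

Because each window caps out at 64 tokens, windowed cost grows linearly with the number of windows, while the ratio to full attention grows with image area. This is why having only 4 full-attention layers keeps overall scaling close to linear.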

The performance of the 7B model is impressive: 58.6% on MMMU, 95.7% on DocVQA, 84.9% on TextVQA, and 68.2% on MathVista, surpassing many models of comparable size. In agent-based tasks, the model demonstrates outstanding results: 84.7% on ScreenSpot, 81.9% on AITZ, and 91.4% on MobileMiniWob++, confirming its ability to effectively interact with graphical user interfaces. Especially impressive are its video understanding capabilities, achieving 69.6% on MVBench and 70.5% on PerceptionTest.

Use cases for this model span professional document automation, intelligent video surveillance with behavior analysis, educational platforms with interactive multimedia content, and corporate solutions for analyzing large volumes of visual data. The model is well suited to cloud services that need high-quality processing at a reasonable computational cost, as well as to on-premises servers in medium and large organizations. Thanks to its strong OCR capabilities, it is a natural fit for fintech applications, invoice processing systems, and accounting automation workflows.


Announce Date: 19.02.2025
Parameters: 8.29B
Context: 128K
Attention Type: Full Attention
VRAM requirements: 10.7 GB with 4-bit quantization
Developer: Alibaba
Transformers Version: 4.41.2
License: Apache 2.0
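The quoted VRAM figure can be sanity-checked with a back-of-envelope calculation. Weights alone account for roughly 3.9 GB at 4-bit; the rest of the 10.7 GB goes to KV cache, activations, and runtime overhead. The split below is an illustrative assumption, not a measured breakdown:

```python
# Rough sanity check of the quoted "10.7 GB at 4-bit" figure. Weight memory
# alone is parameters x bits / 8; the remainder is KV cache, activations and
# serving-stack overhead, which depend on context length and batch size.

PARAMS = 8.29e9   # parameter count from the spec above

def weight_gb(params, bits):
    """Memory for the weights alone, in GB (1 GB = 2**30 bytes)."""
    return params * bits / 8 / 2**30

for bits in (4, 8, 16):
    print(f"{bits:>2}-bit weights: ~{weight_gb(PARAMS, bits):.1f} GB")
```

At 16-bit the weights alone (~15.4 GB) already exceed a 12 GB card, which is why the 4-bit figure is the one quoted for consumer-class GPUs.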

Public endpoint

Use our pre-built public endpoints to test inference and explore Qwen2.5-VL-7B capabilities.
Model Name Context Type GPU TPS Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • to maximize endpoint performance,
  • to enable full context for long sequences,
  • to ensure top-tier security by processing data in an isolated, dedicated environment,
  • to use custom weights, such as fine-tuned models or LoRA adapters.
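As a sketch of the last scenario, serving stock or custom weights on a dedicated instance, one common option is vLLM's OpenAI-compatible server. This assumes vLLM is installed on the instance; the flag values are illustrative starting points, not tuned recommendations:

```shell
# Serve Qwen2.5-VL-7B behind an OpenAI-compatible HTTP API (default port 8000).
# --max-model-len caps the context to fit VRAM; raise it on larger GPUs.
# For fine-tuned weights or LoRA adapters, point at a local path instead.
vllm serve Qwen/Qwen2.5-VL-7B-Instruct \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.90
```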

Recommended configurations for hosting Qwen2.5-VL-7B

Prices:
Name                     vCPU   RAM, MB   Disk, GB   GPUs   Price/hour
teslat4-1.16.16.160      16     16384     160        1      $0.46
teslaa10-1.16.32.160     16     32768     160        1      $0.53
teslaa2-2.16.32.160      16     32768     160        2      $0.57
rtx2080ti-2.12.64.160    12     65536     160        2      $0.69
rtx3090-1.16.24.160      16     24576     160        1      $0.88
rtx3080-2.16.32.160      16     32768     160        2      $0.97
rtx4090-1.16.32.160      16     32768     160        1      $1.15
teslav100-1.12.64.160    12     65536     160        1      $1.20
rtx5090-1.16.64.160      16     65536     160        1      $1.59
teslaa100-1.16.64.160    16     65536     160        1      $2.58
teslah100-1.16.64.160    16     65536     160        1      $5.11
Prices:
Name                     vCPU   RAM, MB   Disk, GB   GPUs   Price/hour
teslaa10-1.16.32.160     16     32768     160        1      $0.53
teslaa2-2.16.32.160      16     32768     160        2      $0.57
rtx2080ti-2.12.64.160    12     65536     160        2      $0.69
teslat4-2.16.32.160      16     32768     160        2      $0.80
rtx3090-1.16.24.160      16     24576     160        1      $0.88
rtx3080-2.16.32.160      16     32768     160        2      $0.97
rtx4090-1.16.32.160      16     32768     160        1      $1.15
teslav100-1.12.64.160    12     65536     160        1      $1.20
rtx5090-1.16.64.160      16     65536     160        1      $1.59
teslaa100-1.16.64.160    16     65536     160        1      $2.58
teslah100-1.16.64.160    16     65536     160        1      $5.11
Prices:
Name                     vCPU   RAM, MB   Disk, GB   GPUs   Price/hour
teslaa2-2.16.32.160      16     32768     160        2      $0.57
teslat4-2.16.32.160      16     32768     160        2      $0.80
teslaa10-2.16.64.160     16     65536     160        2      $0.93
rtx2080ti-3.16.64.160    16     65536     160        3      $0.95
teslav100-1.12.64.160    12     65536     160        1      $1.20
rtx3080-3.16.64.160      16     65536     160        3      $1.43
rtx5090-1.16.64.160      16     65536     160        1      $1.59
rtx3090-2.16.64.160      16     65536     160        2      $1.67
rtx4090-2.16.64.160      16     65536     160        2      $2.19
teslaa100-1.16.64.160    16     65536     160        1      $2.58
teslah100-1.16.64.160    16     65536     160        1      $5.11

Related models

QwQ

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.