whisper-large-v3

The Whisper large-v3 model represents the latest and most advanced version in the family of ASR models from OpenAI. The model is designed to solve a wide range of speech processing tasks within a single, unified framework. Unlike traditional ASR systems that require complex pipelines of several specialized components, Whisper uses a unified sequence-to-sequence architecture. This allows the model to perform not only speech transcription but also tasks such as language identification, voice activity detection, and translation into English, processing audio "out of the box" without fine-tuning.
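Whisper selects its task through special control tokens in the decoder prompt rather than through separate models or heads, which is what makes the single-framework design possible. The token names below are real Whisper special tokens; the `build_prompt` helper itself is a hypothetical illustration of how such a prompt is assembled:

```python
# Illustrative sketch: Whisper switches tasks via decoder control tokens.
# The special-token names are from Whisper's vocabulary; build_prompt is
# a hypothetical helper, not part of any library API.

def build_prompt(language: str, task: str, timestamps: bool = False) -> list[str]:
    """Assemble the decoder prompt that tells Whisper what to do."""
    assert task in ("transcribe", "translate")
    prompt = ["<|startoftranscript|>", f"<|{language}|>", f"<|{task}|>"]
    if not timestamps:
        # Without this token, Whisper interleaves timestamp tokens
        # into the output.
        prompt.append("<|notimestamps|>")
    return prompt

# Transcribe French audio into French text:
print(build_prompt("fr", "transcribe"))
# Translate the same French audio into English instead:
print(build_prompt("fr", "translate"))
```

Swapping a single token turns transcription into translation, which is why the same checkpoint covers both tasks plus language identification without fine-tuning.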

Architecturally, large-v3 is based on the proven Transformer encoder-decoder, retaining the overall structure of previous versions (large and large-v2), but with two key improvements. First, the model uses 128 Mel frequency bins to process the input audio signal instead of the 80 used previously, allowing it to capture finer details in the audio, which is especially important for languages with rich tonality and complex phonetics. Second, large-v3 was trained using the established method of weak supervision on an impressive volume of data: approximately 5 million hours, consisting of 1 million hours of weakly labeled audio and 4 million hours of audio pseudo-labeled by large-v2. A new language token for Cantonese was also added to the model. Compared to its predecessor, Whisper large-v2, the new model shows a 10-20% reduction in error rates across a wide range of languages, making it the most accurate version among all those released by OpenAI.
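A back-of-the-envelope check makes the front-end change concrete. Assuming the standard Whisper front end (16 kHz audio, 30-second windows, a 10 ms hop), the only difference in v3 is the number of Mel bins:

```python
# Rough check of Whisper's input-feature shape. Constants follow the
# standard Whisper front end: 16 kHz audio, 30-second chunks, 10 ms hop.
SAMPLE_RATE = 16_000
CHUNK_SECONDS = 30
HOP_LENGTH = 160          # 10 ms at 16 kHz
N_MELS_V2 = 80            # large / large-v2
N_MELS_V3 = 128           # large-v3

frames = CHUNK_SECONDS * SAMPLE_RATE // HOP_LENGTH
print(f"large-v2 features per chunk: {N_MELS_V2} x {frames}")   # 80 x 3000
print(f"large-v3 features per chunk: {N_MELS_V3} x {frames}")   # 128 x 3000
```

Each 30-second chunk thus becomes a 128 x 3000 log-Mel spectrogram instead of 80 x 3000, a 60% increase in spectral resolution fed to the encoder.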

Whisper large-v3 is ideally suited for automatically creating subtitles for videos, lectures, podcasts, webinars, and interviews in multiple languages; its ability to return word- or sentence-level timestamps significantly simplifies this process. It can also translate audio content, serve as the foundation for voice control systems and text dictation, or analyze calls in contact centers by automatically identifying the topic of conversation and key customer requirements. Finally, the model is an excellent starting point for researchers, who can fine-tune it for highly specialized tasks using a small amount of labeled data.
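Turning timestamped output into subtitles is a small formatting step. The chunk layout below ({"timestamp": (start, end), "text": ...}) mirrors what the transformers ASR pipeline returns with return_timestamps enabled; the srt_from_chunks helper itself is an illustrative sketch:

```python
# Convert timestamped transcription chunks into SubRip (SRT) subtitles.
# The chunk structure mimics the transformers ASR pipeline output with
# return_timestamps=True; srt_from_chunks is an illustrative helper.

def _srt_time(seconds: float) -> str:
    """Format seconds as the SRT timecode HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def srt_from_chunks(chunks: list[dict]) -> str:
    """Render numbered SRT entries from (start, end, text) chunks."""
    entries = []
    for i, chunk in enumerate(chunks, start=1):
        start, end = chunk["timestamp"]
        entries.append(
            f"{i}\n{_srt_time(start)} --> {_srt_time(end)}\n{chunk['text'].strip()}\n"
        )
    return "\n".join(entries)

chunks = [
    {"timestamp": (0.0, 2.5), "text": " Hello everyone."},
    {"timestamp": (2.5, 5.0), "text": " Welcome to the lecture."},
]
print(srt_from_chunks(chunks))
```

The same chunk list can just as easily be rendered to WebVTT or burned into video with standard tooling.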


Announce Date: 07.11.2023
Parameters: 2B
Context: 448
Layers: 32
Attention Type: Full Attention
Developer: OpenAI
Transformers Version: 4.36.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore whisper-large-v3 capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting whisper-large-v3

Prices:

Name                          GPU  Price, hour  TPS
teslat4-1.16.16.160           1    $0.33        8.361
rtx2080ti-1.10.16.500         1    $0.38        4.947
teslaa2-1.16.32.160           1    $0.38        8.361
teslaa10-1.16.32.160          1    $0.53        13.822
rtx3080-1.16.32.160           1    $0.57        4.265
rtx3090-1.16.24.160           1    $0.83        13.822
rtx4090-1.16.32.160           1    $1.02        13.822
teslav100-1.12.64.160         1    $1.20        19.283
rtxa5000-2.16.64.160.nvlink   2    $1.23        28.310
rtx5090-1.16.64.160           1    $1.59        19.283
teslaa100-1.16.64.160         1    $2.37        52.051
h100-1.16.64.160              1    $3.83        52.051
h100nvl-1.16.96.160           1    $4.11        61.609
h200-1.16.128.160             1    $4.74        93.694
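One way to compare these configurations is price per unit of throughput. The sketch below uses a few rows copied from the first pricing table above; the dollars-per-TPS metric itself is just an illustrative way to rank cost efficiency:

```python
# Rough cost-efficiency comparison: hourly price divided by TPS, using
# a few (name: (price_per_hour, tps)) rows from the pricing table above.
configs = {
    "teslat4-1.16.16.160":   (0.33, 8.361),
    "teslaa10-1.16.32.160":  (0.53, 13.822),
    "teslaa100-1.16.64.160": (2.37, 52.051),
    "h200-1.16.128.160":     (4.74, 93.694),
}

# Sort from most to least cost-efficient (lowest $/TPS first).
for name, (price, tps) in sorted(configs.items(),
                                 key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:24s} ${price / tps:.4f} per hour per TPS")
```

On these four rows the mid-range A10 comes out cheapest per unit of throughput, while the H200 buys the highest absolute throughput at a premium; rerun the same calculation over the full table for your own workload.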
Prices:

Name                          GPU  Price, hour  TPS
teslat4-1.16.16.160           1    $0.33        7.834
rtx2080ti-1.10.16.500         1    $0.38        4.421
teslaa2-1.16.32.160           1    $0.38        7.834
teslaa10-1.16.32.160          1    $0.53        13.296
rtx3080-1.16.32.160           1    $0.57        3.738
rtx3090-1.16.24.160           1    $0.83        13.296
rtx4090-1.16.32.160           1    $1.02        13.296
teslav100-1.12.64.160         1    $1.20        18.757
rtxa5000-2.16.64.160.nvlink   2    $1.23        27.783
rtx5090-1.16.64.160           1    $1.59        18.757
teslaa100-1.16.64.160         1    $2.37        51.525
h100-1.16.64.160              1    $3.83        51.525
h100nvl-1.16.96.160           1    $4.11        61.082
h200-1.16.128.160             1    $4.74        93.168
Prices:

Name                          GPU  Price, hour  TPS
teslat4-1.16.16.160           1    $0.33        6.836
rtx2080ti-1.10.16.500         1    $0.38        3.423
teslaa2-1.16.32.160           1    $0.38        6.836
teslaa10-1.16.32.160          1    $0.53        12.298
rtx3080-1.16.32.160           1    $0.57        2.740
rtx3090-1.16.24.160           1    $0.83        12.298
rtx4090-1.16.32.160           1    $1.02        12.298
teslav100-1.12.64.160         1    $1.20        17.759
rtxa5000-2.16.64.160.nvlink   2    $1.23        26.785
rtx5090-1.16.64.160           1    $1.59        17.759
teslaa100-1.16.64.160         1    $2.37        50.527
h100-1.16.64.160              1    $3.83        50.527
h100nvl-1.16.96.160           1    $4.11        60.084
h200-1.16.128.160             1    $4.74        92.170

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.