Whisper large-v3 is the latest and most capable model in OpenAI's family of ASR models. It is designed to handle a wide range of speech processing tasks within a single framework. Unlike traditional ASR systems, which require complex pipelines of specialized components, Whisper uses a single sequence-to-sequence architecture. This lets the model perform not only speech transcription but also language identification, voice activity detection, and translation into English, processing audio "out of the box" without fine-tuning.
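Whisper selects the task through special tokens prepended to the decoder input rather than through separate model components. The sketch below illustrates how such a prompt is assembled; the token spellings follow the public Whisper vocabulary, while the helper function itself is illustrative, not part of any library API:

```python
def build_decoder_prompt(language="en", task="transcribe", timestamps=True):
    """Assemble Whisper-style special tokens that select the task.

    Token spellings (<|startoftranscript|>, <|en|>, <|transcribe|>,
    <|translate|>, <|notimestamps|>) follow the public Whisper vocabulary;
    this helper is an illustrative sketch, not a library function.
    """
    tokens = ["<|startoftranscript|>", f"<|{language}|>", f"<|{task}|>"]
    if not timestamps:
        tokens.append("<|notimestamps|>")
    return "".join(tokens)

# The same model transcribes German or translates it to English
# depending only on this prompt:
print(build_decoder_prompt("de", "transcribe"))
# <|startoftranscript|><|de|><|transcribe|>
print(build_decoder_prompt("de", "translate"))
# <|startoftranscript|><|de|><|translate|>
```

Because the task is just a token in the prompt, one set of weights covers transcription, translation, and language identification without per-task heads.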
Architecturally, large-v3 is based on the proven Transformer encoder-decoder, retaining the overall structure of the previous versions (large and large-v2) with two key improvements. First, the model uses 128 Mel frequency bins to process the input audio signal instead of the 80 used previously, allowing it to capture finer spectral detail, which is especially important for languages with rich tonality and complex phonetics. Second, large-v3 was trained with the established weak-supervision approach on approximately 5 million hours of audio, and a new language token for Cantonese was added. Compared to its predecessor, Whisper large-v2, the new model reduces error rates by 10-20% across a wide range of languages, making it the most accurate version OpenAI has released.
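The change from 80 to 128 Mel bins concerns the filterbank applied to the audio spectrogram before it reaches the encoder. A minimal pure-Python sketch of a triangular Mel filterbank (using Whisper's 16 kHz sample rate and 400-sample FFT; the exact filterbank in Whisper differs in normalization details):

```python
import math

def hz_to_mel(f):
    """Convert frequency in Hz to the Mel scale."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Convert a Mel value back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels=128, n_fft=400, sr=16000):
    """Triangular Mel filterbank as a list of rows, one per Mel bin.

    large-v3 uses n_mels=128 where earlier Whisper versions used 80;
    more bins mean finer frequency resolution in the same 0-8 kHz range.
    """
    n_bins = n_fft // 2 + 1  # FFT bins up to the Nyquist frequency
    fft_freqs = [i * (sr / 2) / (n_bins - 1) for i in range(n_bins)]
    lo, hi = hz_to_mel(0.0), hz_to_mel(sr / 2)
    mel_pts = [lo + i * (hi - lo) / (n_mels + 1) for i in range(n_mels + 2)]
    hz_pts = [mel_to_hz(m) for m in mel_pts]
    fb = []
    for i in range(n_mels):
        left, center, right = hz_pts[i], hz_pts[i + 1], hz_pts[i + 2]
        row = [max(0.0, min((f - left) / (center - left),
                            (right - f) / (right - center)))
               for f in fft_freqs]
        fb.append(row)
    return fb

fb80 = mel_filterbank(n_mels=80)
fb128 = mel_filterbank(n_mels=128)
print(len(fb80), len(fb128))  # 80 128
```

With 128 filters, each triangle is narrower, so closely spaced spectral features (e.g. tonal contours) fall into separate bins instead of being averaged together.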
Whisper large-v3 is well suited to automatically generating subtitles for videos, lectures, podcasts, webinars, and interviews in multiple languages; its ability to return timestamps at the word or sentence level greatly simplifies this. It can also translate audio content or serve as the foundation for voice control systems, text dictation, and contact-center call analysis, automatically identifying conversation topics and key requests. Finally, the model is an excellent starting point for researchers, who can fine-tune it for highly specialized tasks with a small amount of labeled data.
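Turning timestamped segments into subtitles is mostly a formatting exercise. Assuming segments shaped like the ones common Whisper implementations return (a list of dicts with `start`, `end`, and `text`), a minimal SRT converter looks like this; the sample segments are hypothetical:

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments):
    """Render Whisper-style segments as an SRT subtitle file body."""
    lines = []
    for i, seg in enumerate(segments, 1):
        lines.append(str(i))
        lines.append(f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}")
        lines.append(seg["text"].strip())
        lines.append("")  # blank line separates SRT cues
    return "\n".join(lines)

# Hypothetical segments, as a transcription run might return them:
segments = [
    {"start": 0.0, "end": 2.4, "text": "Welcome to the lecture."},
    {"start": 2.4, "end": 5.1, "text": "Today we discuss speech recognition."},
]
print(segments_to_srt(segments))
```

The same segment list can feed WebVTT or word-level karaoke formats with only the timestamp formatting changed.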
| Model Name | Context | Type | GPU | Status | Link |
|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
The following configurations are available for deploying a private instance:
| Name | GPU | Price | TPS | Max Concurrency | Launch |
|---|---|---|---|---|---|
|  | 1 | $0.33 | 8.361 |  | Launch |
|  | 1 | $0.38 | 4.947 |  | Launch |
|  | 1 | $0.38 | 8.361 |  | Launch |
|  | 1 | $0.53 | 13.822 |  | Launch |
|  | 1 | $0.57 | 4.265 |  | Launch |
|  | 1 | $0.83 | 13.822 |  | Launch |
|  | 1 | $1.02 | 13.822 |  | Launch |
|  | 1 | $1.20 | 19.283 |  | Launch |
| tensor | 2 | $1.23 | 28.310 |  | Launch |
|  | 1 | $1.59 | 19.283 |  | Launch |
|  | 1 | $2.37 | 52.051 |  | Launch |
|  | 1 | $3.83 | 52.051 |  | Launch |
|  | 1 | $4.11 | 61.609 |  | Launch |
|  | 1 | $4.74 | 93.694 |  | Launch |
| Name | GPU | Price | TPS | Max Concurrency | Launch |
|---|---|---|---|---|---|
|  | 1 | $0.33 | 7.834 |  | Launch |
|  | 1 | $0.38 | 4.421 |  | Launch |
|  | 1 | $0.38 | 7.834 |  | Launch |
|  | 1 | $0.53 | 13.296 |  | Launch |
|  | 1 | $0.57 | 3.738 |  | Launch |
|  | 1 | $0.83 | 13.296 |  | Launch |
|  | 1 | $1.02 | 13.296 |  | Launch |
|  | 1 | $1.20 | 18.757 |  | Launch |
| tensor | 2 | $1.23 | 27.783 |  | Launch |
|  | 1 | $1.59 | 18.757 |  | Launch |
|  | 1 | $2.37 | 51.525 |  | Launch |
|  | 1 | $3.83 | 51.525 |  | Launch |
|  | 1 | $4.11 | 61.082 |  | Launch |
|  | 1 | $4.74 | 93.168 |  | Launch |
| Name | GPU | Price | TPS | Max Concurrency | Launch |
|---|---|---|---|---|---|
|  | 1 | $0.33 | 6.836 |  | Launch |
|  | 1 | $0.38 | 3.423 |  | Launch |
|  | 1 | $0.38 | 6.836 |  | Launch |
|  | 1 | $0.53 | 12.298 |  | Launch |
|  | 1 | $0.57 | 2.740 |  | Launch |
|  | 1 | $0.83 | 12.298 |  | Launch |
|  | 1 | $1.02 | 12.298 |  | Launch |
|  | 1 | $1.20 | 17.759 |  | Launch |
| tensor | 2 | $1.23 | 26.785 |  | Launch |
|  | 1 | $1.59 | 17.759 |  | Launch |
|  | 1 | $2.37 | 50.527 |  | Launch |
|  | 1 | $3.83 | 50.527 |  | Launch |
|  | 1 | $4.11 | 60.084 |  | Launch |
|  | 1 | $4.74 | 92.170 |  | Launch |
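Given an hourly price from the tables above, a rough monthly estimate can be computed as below. This is a sketch assuming continuous 24/7 usage and the listed prices being per instance-hour, not the provider's actual billing formula:

```python
def monthly_cost(hourly_price, hours_per_day=24, days=30):
    """Estimate monthly cost from an hourly instance price.

    Assumes the instance runs continuously; long-term monthly billing
    (mentioned above) may be discounted relative to this figure.
    """
    return round(hourly_price * hours_per_day * days, 2)

# Cheapest listed configuration at $0.33/hr, running around the clock:
print(monthly_cost(0.33))  # 237.6
```

Comparing this figure against the per-request cost of a shared endpoint (once one is available) is the usual way to decide whether a dedicated instance pays off.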
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.