Discounts from 10% to 50% are available on all GPU models when you pay 1-6 months in advance.
immers.cloud platform
All GPU servers are based on 2nd and 3rd generation Intel® Xeon® Scalable processors and offer up to 96 virtual cores and up to 8192 GB of DDR4 ECC Reg 3200 MHz RAM.
Each processor is equipped with two Intel® AVX-512 units and supports Intel® AVX-512 Deep Learning Boost instructions, which accelerate reduced-precision multiply and add operations and boost the performance of deep learning algorithms.
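If you want to confirm that a virtual server exposes these instructions, checking the CPU flags is usually enough. Below is a minimal sketch assuming a Linux guest, where the avx512_vnni flag corresponds to Deep Learning Boost support:

```python
# Minimal sketch: check for AVX-512 and Deep Learning Boost (VNNI) support
# on a Linux guest by reading /proc/cpuinfo. Flag names are the standard
# Linux kernel names; this check only applies to Linux images.
def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("AVX-512F present:    ", "avx512f" in flags)
print("AVX-512 VNNI present:", "avx512_vnni" in flags)
```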
Local storage is built on Intel® and Samsung solid-state drives designed specifically for data centers, with capacities of up to 7.68 TB.
100% Performance
Each physical core or GPU adapter is dedicated to a single client. This means:
100% vCPU time is available;
Physical pass-through of GPUs inside virtual servers (see the sketch below);
Reduced storage and network load on hypervisors, delivering more storage and network performance to clients.
Up to 75,000 IOPS¹ for RANDOM READ and up to 20,000 IOPS for RANDOM WRITE on Virtual Servers with local SSDs
Up to 70,000 IOPS¹ for RANDOM READ and up to 60,000 IOPS for RANDOM WRITE on Virtual Servers with block storage volumes
You can be confident that Virtual Servers do not share vCPU or GPU resources with one another.
¹ IOPS — Input/Output Operations Per Second.
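Because GPUs are passed through rather than shared, they appear inside the virtual server as ordinary local devices. A minimal sketch for listing them, assuming the NVIDIA driver and the nvidia-ml-py (pynvml) package are installed in the VM:

```python
# Minimal sketch: enumerate passed-through GPUs with NVML (nvidia-ml-py).
# Assumes the NVIDIA driver is installed inside the virtual server.
import pynvml

pynvml.nvmlInit()
count = pynvml.nvmlDeviceGetCount()
for i in range(count):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):          # older pynvml versions return bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i}: {name}, {mem.total / 1024**3:.0f} GiB total memory")
pynvml.nvmlShutdown()
```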
GPU Tesla® H100 80 GB
The Tesla H100 GPU provides unsurpassed acceleration for AI workloads, data analytics, and the most complex computing tasks.
Basic configurations with Tesla H100 80 GB
Prices:
The Ada Lovelace architecture, built on a new 5 nm process technology, delivers a huge leap in performance, efficiency, and graphics. Each accelerator has 16384 CUDA cores and 24 GB of GDDR6X memory.
Basic configurations with RTX 4090 24 GB
Prices:
RTX 3090 graphics cards are based on the powerful Ampere architecture and an improved RTX hardware ray tracing platform. Each accelerator has 328 tensor cores, 10496 CUDA cores, and 24 GB of memory.
Basic configurations with RTX 3090 24 GB
Prices:
RTX 3080 graphics cards are based on the powerful Ampere architecture and an improved RTX hardware ray tracing platform. Each accelerator has 272 tensor cores, 8704 CUDA cores, and 10 GB of memory.
Basic configurations with RTX 3080 10 GB
Prices:
The Tesla A100 GPU provides unsurpassed acceleration for AI workloads, data analytics, and the most complex computing tasks. The A100 is the most productive integrated platform for AI and HPC, delivering real-time results and enabling scalable deployments.
Basic configurations with Tesla A100 80 GB
Prices:
The RTX A5000 graphics accelerator offers an ideal balance of power, performance, and reliability for complex workloads. Built on the latest Ampere architecture with 24 GB of video memory, it gives designers, engineers, and artists everything they need to realize the projects they have been dreaming of, today.
The new CUDA cores deliver up to 2.5 times the FP32 performance of the previous generation, accelerating graphics workloads.
Hardware-accelerated motion blur and improved ray tracing performance provide higher rendering accuracy.
In flavors with an even number of GPUs, the graphics adapters are linked via NVLink, which increases the available memory and improves performance for complex visual computations.
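On flavors with two or more GPUs, you can check from inside the server whether the devices can access each other's memory directly. A minimal sketch assuming PyTorch is installed; note that peer access being reported does not by itself prove an NVLink bridge, so treat it as a quick sanity check:

```python
# Minimal sketch: check GPU count and peer-to-peer access with PyTorch.
# NVLink-connected GPU pairs normally report peer access; PCIe-only pairs
# may as well, so this is a sanity check rather than proof of NVLink.
import torch

n = torch.cuda.device_count()
print(f"Visible GPUs: {n}")
for i in range(n):
    for j in range(n):
        if i != j:
            ok = torch.cuda.can_device_access_peer(i, j)
            print(f"GPU {i} -> GPU {j}: peer access {'yes' if ok else 'no'}")
```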
Basic configurations with RTX A5000 24 GB
Prices:
Tesla A10 graphics accelerators, featuring tensor cores, are built on the Ampere architecture, which enhances performance and efficiency for various computing tasks.
Thanks to CUDA cores, the Tesla A10 accelerators deliver twice the number of single-precision floating-point operations (FP32) compared to previous generations, significantly speeding up work with graphics, video, and modeling complex 3D models in computer-aided design (CAD) software.
The second generation of RT cores allows ray tracing to run concurrently with shading or denoising, accelerating tasks such as photorealistic rendering of film material, architectural project evaluation, and motion rendering for faster, more accurate results.
Support for Tensor Float 32 (TF32) operations in Tesla A10 accelerators boosts training speeds for AI models and data processing by five times compared to previous generations, without requiring changes in the code. Tensor cores also enable AI-based technologies such as DLSS, noise reduction, and photo and video editing functions in select applications.
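In most frameworks this really does require no code changes; in PyTorch, for example, TF32 use is controlled by two flags. A minimal sketch, assuming PyTorch with CUDA support is installed on the server:

```python
# Minimal sketch: enable TF32 tensor-core math for matmuls and cuDNN
# convolutions in PyTorch (effective on Ampere-class GPUs such as the A10).
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # TF32 for matrix multiplications
torch.backends.cudnn.allow_tf32 = True         # TF32 for cuDNN convolutions

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
c = a @ b   # runs on tensor cores using TF32 where supported
print(c.shape)
```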
PCI Express Gen 4 doubles the bandwidth of PCIe Gen 3, accelerating data transfer from processor memory for resource-intensive tasks like AI, data processing, and 3D graphics rendering.
Thanks to ultra-fast GDDR6 memory, scientists, engineers, and data science specialists gain the necessary resources for processing large datasets and conducting advanced modeling.
Basic configurations with Tesla A10 24 GB
Prices:
RTX 2080 Ti graphics cards are based on the powerful Turing architecture and a completely new RTX hardware ray tracing platform. Each accelerator has 544 2nd gen tensor cores, 4352 CUDA cores, and 11 GB of memory.
Basic configurations with RTX 2080 Ti 11 GB
Prices:
The Tesla A2 graphics accelerator is optimized for inference tasks and provides up to 1.3 times higher performance for smart city, industrial, and retail workloads.
Basic configurations with Tesla A2 16 GB
Prices:
The Tesla® T4, with tensor and RT cores, is one of the most advanced and energy-efficient graphics accelerators for inference, video transcoding, streaming, and remote desktops.
Each accelerator has 320 tensor cores, 2560 CUDA cores, and 16 GB of memory.
T4 graphics accelerators are ideal for operating neural network models in a production environment (inferencing), speech processing, and NLP.
In addition to tensor cores, the T4 has RT cores that perform hardware-accelerated ray tracing.
Basic configurations with Tesla T4 16 GB
Prices:
Please write to us in the chat about the configuration you need.
Answers to frequently asked questions
What is the minimum hosting period for a virtual GPU-server?
You can host a virtual server for any duration. Simply make a payment of $1.10 or more and work within your prepaid balance. When you're finished, delete the server to stop incurring charges.
Can I create GPU-servers myself?
Yes, you can create GPU-servers through the control panel by choosing the hardware configuration and operating system. The ordered resources will be available for use within a few minutes.
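If you prefer automation over the control panel, and the platform exposes an OpenStack-compatible API (the terms "flavor" and "shelve" used on this page suggest it, but confirm with support), a server can also be created with the openstacksdk Python client. A minimal sketch in which the cloud entry, image, flavor, network, and key names are hypothetical placeholders:

```python
# Minimal sketch: create a GPU server via an OpenStack-compatible API
# using openstacksdk. Credentials are read from clouds.yaml; the cloud
# entry, image, flavor, network, and key names below are placeholders.
import openstack

conn = openstack.connect(cloud="immers")  # hypothetical clouds.yaml entry

server = conn.compute.create_server(
    name="gpu-worker-01",
    image_id=conn.compute.find_image("Ubuntu 22.04").id,          # placeholder image name
    flavor_id=conn.compute.find_flavor("1xRTX3090-example").id,   # placeholder flavor name
    networks=[{"uuid": conn.network.find_network("default-net").id}],
    key_name="my-ssh-key",
)
server = conn.compute.wait_for_server(server)
print(server.status, server.addresses)
```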
What operating systems can be installed on a virtual GPU-server?
You can choose from the following basic images: Windows Server 2019, Windows Server 2022, Ubuntu, Debian, CentOS, Fedora, and OpenSUSE, or use a pre-configured image from the Marketplace.
All operating systems are installed automatically when the GPU-server is created.
How do I connect to a virtual GPU-server?
By default, we provide RDP access for Windows-based servers and SSH access for Linux-based servers.
You can configure any connection method that is convenient for you.
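For a Linux server, a scripted SSH connection can be handy, for example to verify the GPU right after the server is created. A minimal sketch assuming the paramiko package and key-based access; the host address, username, and key path are placeholder values:

```python
# Minimal sketch: connect to a Linux GPU server over SSH with paramiko
# and run nvidia-smi. Host, user, and key path are placeholder values.
import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "203.0.113.10",                                  # placeholder server address
    username="ubuntu",
    key_filename=os.path.expanduser("~/.ssh/id_rsa"),
)

stdin, stdout, stderr = client.exec_command("nvidia-smi")
print(stdout.read().decode())
client.close()
```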
Is it possible to rent a virtual GPU-server with a custom configuration?
Yes, it is possible. Contact our 24/7 support service (at https://t.me/immerscloudsupport) and tell us your desired configuration.
A Bit More About Us
Pay-as-you-go billing and free VM pause (shelve)
You only pay for the actual use of your VMs
24/7/365 Tech Support
Tech support is always available via chat and responds within minutes
Free traffic
Speeds up to 20 Gb/s with no extra charge for incoming or outgoing traffic
Our Data Centers
Built to Tier III standards
100% of the performance is yours
We do not share the resources you purchased with other users