GPU cloud servers

Get truly revolutionary performance with the latest NVIDIA® Tesla® and NVIDIA GeForce RTX™ graphics adapters

immers.cloud platform

All GPU servers are based on second-generation Intel® Xeon® Scalable (Cascade Lake) processors and provide up to 96 virtual processors and up to 512 GB of DDR4 ECC Reg 2400–2933 MHz RAM.

Each processor contains two Intel® AVX-512 units and supports the Intel® AVX-512 Deep Learning Boost (VNNI) instructions. This instruction set accelerates the reduced-precision multiply-accumulate operations found in the inner loops of deep learning algorithms.
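
As a quick check (a sketch, assuming a Linux guest; `avx512_vnni` is the flag name the Linux kernel uses to report DL Boost support), you can verify that a VM actually exposes these instructions:

```python
import re

def has_avx512_vnni(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if the CPU advertises the AVX-512 VNNI (DL Boost) flag."""
    try:
        with open(cpuinfo_path) as f:
            return bool(re.search(r"\bavx512_vnni\b", f.read()))
    except OSError:
        return False  # file unavailable, e.g. on a non-Linux system

print("AVX-512 VNNI supported:", has_avx512_vnni())
```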

Local storage is organized on Intel® solid-state drives that are designed specifically for data centers and have a capacity of up to 1.92 TB.

100% performance

Each physical core and GPU adapter is assigned to a single client only.
This means that:

  • 100% of vCPU time is available
  • GPUs are physically passed through into the VM
  • Lower storage and network load on the hypervisors means more storage and network performance for each client.

Up to 75,000 IOPS¹ for random reads and up to 20,000 IOPS for random writes on virtual machines with local SSDs.

Up to 22,500 IOPS¹ for random reads and up to 20,000 IOPS for random writes on virtual machines with block storage Volumes.

You can be sure that virtual machines never share vCPUs or GPUs with each other.

  1. IOPS — Input/Output Operations Per Second.
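
For intuition, an IOPS figure converts to throughput once you pick a block size. The 4 KiB block size below is an assumption for illustration, not a number from the provider:

```python
def iops_to_mib_per_s(iops: int, block_size_kib: int = 4) -> float:
    """Convert an IOPS figure into MiB/s for a given block size."""
    return iops * block_size_kib / 1024

# Local-SSD figures quoted above, at the assumed 4 KiB block size
read_mib = iops_to_mib_per_s(75_000)   # random read
write_mib = iops_to_mib_per_s(20_000)  # random write
print(f"read ≈ {read_mib:.0f} MiB/s, write ≈ {write_mib:.0f} MiB/s")
# → read ≈ 293 MiB/s, write ≈ 78 MiB/s
```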

NVIDIA® Tesla® A10

NVIDIA A10 graphics accelerators with tensor cores are built on the Ampere architecture.

Thanks to its CUDA cores, single-precision (FP32) floating-point throughput has been doubled. This significantly speeds up graphics and video work, as well as complex 3D modeling in computer-aided design (CAD) software.

Second-generation RT cores run ray tracing concurrently with shading or denoising. This accelerates photorealistic rendering of film content, evaluation of architectural projects, and motion rendering, producing more accurate images faster.

Support for Tensor Float 32 (TF32) operations speeds up AI model training and data processing by up to 5× over the previous generation without any code changes. Tensor cores also power AI-based technologies such as DLSS, denoising, and photo and video editing features in supported applications.

PCI Express Gen 4 doubles the bandwidth of PCIe Gen 3, speeding up data transfer from CPU memory for resource-intensive tasks such as AI, data processing, and 3D graphics.

Thanks to ultra-fast GDDR6 memory, scientists, engineers, and data science specialists get the resources they need for processing large datasets and running simulations.

Basic configurations with NVIDIA® Tesla® A10 24 GB

Name vCPU RAM, MB Disk, GB GPU Price, hour
teslaa10-1.4.16.60 4 16384 60 1 Launch
teslaa10-1.8.32.120 8 32768 120 1 Launch
teslaa10-1.8.32.160 8 32768 160 1 Launch
teslaa10-1.16.64.160 16 65536 160 1 Launch
teslaa10-2.16.64.160 16 65536 160 2 Occupied
teslaa10-3.16.96.160 16 98304 160 3 Occupied
teslaa10-4.16.128.160 16 131072 160 4 Occupied
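
The flavor names in these tables appear to follow the pattern <gpu model>-<GPU count>.<vCPU>.<RAM GB>.<disk GB>, with an optional custom suffix. That scheme is inferred from the tables rather than documented, so treat this parser as a sketch:

```python
from dataclasses import dataclass

@dataclass
class Flavor:
    gpu_model: str
    gpus: int
    vcpus: int
    ram_mb: int   # tables list RAM in MB: the GB value from the name × 1024
    disk_gb: int

def parse_flavor(name: str) -> Flavor:
    """Parse a flavor name like 'teslaa10-1.4.16.60' (inferred naming scheme)."""
    model, rest = name.split("-", 1)
    gpus, vcpus, ram_gb, disk_gb = (int(x) for x in rest.split(".")[:4])
    return Flavor(model, gpus, vcpus, ram_gb * 1024, disk_gb)

print(parse_flavor("teslaa10-1.4.16.60"))
# → Flavor(gpu_model='teslaa10', gpus=1, vcpus=4, ram_mb=16384, disk_gb=60)
```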

NVIDIA® Tesla® V100

NVIDIA® Tesla® V100 with tensor cores is the world's most technically advanced GPU for data centers.

Each graphics accelerator has 640 tensor cores, 5120 CUDA cores, and 32 GB of HBM2 memory with a maximum throughput of 900 GB/s.

The total compute performance of the server is 28 TFLOPS at double precision and 448 TFLOPS at mixed precision using tensor cores.

V100 accelerators are ideal for training deep neural networks.
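
The server totals above are consistent with four V100 cards per server, if one assumes NVIDIA's datasheet figures of roughly 7 TFLOPS FP64 and 112 TFLOPS mixed-precision tensor performance per PCIe card (the per-server GPU count is not stated in the text):

```python
# Per-card figures assumed from the NVIDIA V100 (PCIe) datasheet
FP64_TFLOPS_PER_GPU = 7
TENSOR_TFLOPS_PER_GPU = 112
GPUS_PER_SERVER = 4  # assumed

print(GPUS_PER_SERVER * FP64_TFLOPS_PER_GPU)    # → 28, matches the double-precision total
print(GPUS_PER_SERVER * TENSOR_TFLOPS_PER_GPU)  # → 448, matches the mixed-precision total
```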

Basic configurations with NVIDIA® Tesla® V100 32 GB

Name vCPU RAM, MB Disk, GB GPU Price, hour
teslav100-1.8.64.60 8 65536 60 1 Launch
teslav100-1.8.64.80 8 65536 80 1 Launch
teslav100-1.8.64.160 8 65536 160 1 Launch
teslav100-1.12.64.160 12 65536 160 1 Launch
teslav100-1.16.64.160 16 65536 160 1 Launch
teslav100-1.16.128.160 16 131072 160 1 Launch
teslav100-1.32.128.160 32 131072 160 1 Launch
teslav100-2.32.128.160 32 131072 160 2 Occupied
teslav100-2.32.192.160 32 196608 160 2 Occupied
teslav100-4.32.96.160 32 98304 160 4 Occupied
teslav100-4.32.256.160 32 262144 160 4 Occupied

NVIDIA® Tesla® T4

NVIDIA® Tesla® T4 with tensor and RT cores is NVIDIA's most advanced and energy-efficient accelerator for deep neural network training and inference, video transcoding, streaming, and remote desktops.

Each accelerator has 320 tensor cores, 2560 CUDA cores, and 16 GB of memory.

T4 graphics accelerators are ideal for running neural network models in production (inference), speech processing, and NLP.

In addition to tensor cores, the T4 has RT cores that perform hardware-accelerated ray tracing.

Basic configurations with NVIDIA® Tesla® T4 16 GB

Name vCPU RAM, MB Disk, GB GPU Price, hour
teslat4-1.4.16.60 4 16384 60 1 Launch
teslat4-1.4.16.120 4 16384 120 1 Launch
teslat4-1.8.32.80 8 32768 80 1 Launch
teslat4-1.8.32.120 8 32768 120 1 Launch
teslat4-1.16.64.160 16 65536 160 1 Launch
teslat4-1.32.128.160 32 131072 160 1 Launch
teslat4-1.48.256.160 48 262144 160 1 Launch
teslat4-2.4.16.160 4 16384 160 2 Launch
teslat4-2.32.64.120 32 65536 120 2 Launch
teslat4-2.32.128.160 32 131072 160 2 Launch
teslat4-2.32.192.160 32 196608 160 2 Launch
teslat4-4.16.64.40.custom.6240R 16 65536 40 4 Occupied
teslat4-4.16.64.160 16 65536 160 4 Occupied
teslat4-4.16.96.160 16 98304 160 4 Occupied
teslat4-4.16.128.160 16 131072 160 4 Occupied
teslat4-4.32.64.160 32 65536 160 4 Occupied
teslat4-4.32.128.160 32 131072 160 4 Occupied
teslat4-4.48.192.160 48 196608 160 4 Occupied
teslat4-4.48.256.160 48 262144 160 4 Occupied

NVIDIA® RTX 3090

GeForce® RTX 3090 graphics cards are based on the powerful Ampere architecture and an improved RTX hardware ray tracing platform. Each accelerator has 328 tensor cores, 10496 CUDA cores, and 24 GB of memory.

Basic configurations with NVIDIA® RTX™ 3090 24 GB

Name vCPU RAM, MB Disk, GB GPU Price, hour
rtx3090-1.8.16.40 8 16384 40 1 Launch
rtx3090-1.8.16.60 8 16384 60 1 Launch
rtx3090-1.8.16.80 8 16384 80 1 Launch
rtx3090-1.8.16.120 8 16384 120 1 Launch
rtx3090-1.8.16.160 8 16384 160 1 Launch
rtx3090-1.8.32.160 8 32768 160 1 Launch
rtx3090-1.8.128.60 8 131072 60 1 Launch
rtx3090-2.16.64.160 16 65536 160 2 Launch
rtx3090-3.16.96.160 16 98304 160 3 Launch
rtx3090-4.8.32.40 8 32768 40 4 Launch
rtx3090-4.8.96.160 8 98304 160 4 Launch
rtx3090-4.8.128.160 8 131072 160 4 Launch
rtx3090-4.16.32.160 16 32768 160 4 Launch
rtx3090-4.16.64.160 16 65536 160 4 Launch
rtx3090-4.16.128.160 16 131072 160 4 Launch
rtx3090-4.24.128.160 24 131072 160 4 Launch
rtx3090-4.44.256.160 44 262144 160 4 Launch

NVIDIA® RTX 3080

GeForce® RTX 3080 graphics cards are based on the powerful Ampere architecture and an improved RTX hardware ray tracing platform. Each accelerator has 272 tensor cores, 8704 CUDA cores, and 10 GB of memory.

Basic configurations with NVIDIA® RTX™ 3080 10 GB LHR

Name vCPU RAM, MB Disk, GB GPU Price, hour
rtx3080-1.8.16.40 8 16384 40 1 Launch
rtx3080-1.8.16.60 8 16384 60 1 Launch
rtx3080-1.8.16.80 8 16384 80 1 Launch
rtx3080-1.8.16.120 8 16384 120 1 Launch
rtx3080-1.8.16.160 8 16384 160 1 Launch
rtx3080-1.8.32.160 8 32768 160 1 Launch
rtx3080-2.16.32.160 16 32768 160 2 Launch
rtx3080-2.16.64.160 16 65536 160 2 Launch
rtx3080-3.16.64.160 16 65536 160 3 Launch
rtx3080-3.16.96.160 16 98304 160 3 Launch
rtx3080-4.8.64.160 8 65536 160 4 Occupied
rtx3080-4.8.96.160 8 98304 160 4 Occupied
rtx3080-4.16.64.160 16 65536 160 4 Occupied
rtx3080-4.16.96.160 16 98304 160 4 Occupied

NVIDIA® RTX 2080 Ti

GeForce® RTX 2080 Ti graphics cards are based on the powerful Turing architecture and a completely new RTX hardware ray tracing platform. Each accelerator has 544 tensor cores, 4352 CUDA cores, and 11 GB of memory.

Basic configurations with NVIDIA® RTX™ 2080 Ti 11 GB

Name vCPU RAM, MB Disk, GB GPU Price, hour
rtx2080ti-1.4.8.40.custom 4 8192 40 1 Occupied
rtx2080ti-1.4.8.40 4 8192 40 1 Occupied
rtx2080ti-1.4.16.60.custom 4 16384 60 1 Occupied
rtx2080ti-1.4.16.60 4 16384 60 1 Occupied
rtx2080ti-1.4.16.160.custom 4 16384 160 1 Occupied
rtx2080ti-1.4.32.40 4 32768 40 1 Occupied
rtx2080ti-1.4.48.40 4 49152 40 1 Occupied
rtx2080ti-1.6.256.160 6 262144 160 1 Occupied
rtx2080ti-1.8.8.160 8 8192 160 1 Occupied
rtx2080ti-1.8.16.160 8 16384 160 1 Occupied
rtx2080ti-1.8.16.160.custom 8 16384 160 1 Occupied
rtx2080ti-1.8.24.80 8 24576 80 1 Occupied
rtx2080ti-1.8.24.120 8 24576 120 1 Occupied
rtx2080ti-1.8.24.160 8 24576 160 1 Occupied
rtx2080ti-1.8.32.160 8 32768 160 1 Occupied
rtx2080ti-1.8.48.160 8 49152 160 1 Occupied
rtx2080ti-1.8.288.160 8 294912 160 1 Occupied
rtx2080ti-1.16.32.160 16 32768 160 1 Occupied
rtx2080ti-1.16.32.160.custom 16 32768 160 1 Occupied
rtx2080ti-1.16.64.160 16 65536 160 1 Occupied
rtx2080ti-1.16.128.160 16 131072 160 1 Occupied
rtx2080ti-1.32.64.160 32 65536 160 1 Occupied
rtx2080ti-1.44.64.160 44 65536 160 1 Occupied
rtx2080ti-1.44.128.160 44 131072 160 1 Occupied
rtx2080ti-1.44.256.160 44 262144 160 1 Occupied
rtx2080ti-2.4.8.40 4 8192 40 2 Occupied
rtx2080ti-2.4.32.40 4 32768 40 2 Occupied
rtx2080ti-2.4.48.40 4 49152 40 2 Occupied
rtx2080ti-2.8.16.80 8 16384 80 2 Occupied
rtx2080ti-2.8.16.160 8 16384 160 2 Occupied
rtx2080ti-2.8.32.160 8 32768 160 2 Occupied
rtx2080ti-2.8.56.160 8 57344 160 2 Occupied
rtx2080ti-2.8.64.160 8 65536 160 2 Occupied
rtx2080ti-2.12.64.160 12 65536 160 2 Occupied
rtx2080ti-2.16.48.160 16 49152 160 2 Occupied
rtx2080ti-2.16.64.160 16 65536 160 2 Occupied
rtx2080ti-3.12.24.120 12 24576 120 3 Occupied
rtx2080ti-3.16.64.160 16 65536 160 3 Occupied
rtx2080ti-3.24.72.160 24 73728 160 3 Occupied
rtx2080ti-3.32.48.160 32 49152 160 3 Occupied
rtx2080ti-4.4.32.60 4 32768 60 4 Occupied
rtx2080ti-4.8.32.40 8 32768 40 4 Occupied
rtx2080ti-4.16.32.160 16 32768 160 4 Occupied
rtx2080ti-4.16.64.160 16 65536 160 4 Occupied
rtx2080ti-4.32.96.160 32 98304 160 4 Occupied
rtx2080ti-4.44.128.160 44 131072 160 4 Occupied
rtx2080ti-4.44.256.160 44 262144 160 4 Occupied

Answers to frequently asked questions

What is the minimum rental period for a virtual GPU server?

You can rent a virtual server for any period. Top up your balance with any amount from $1.30 and work within the prepaid balance. When you are done, delete the server to stop being charged.

How quickly can I get started with a virtual GPU server?

You create GPU servers yourself in the control panel by choosing the hardware configuration and operating system. The ordered capacity is usually available within a few minutes.

If something goes wrong, contact our round-the-clock support service: https://t.me/immerscloudsupport.

What operating systems can be installed on a virtual GPU server?

You can choose from basic images: Windows Server 2019, Ubuntu, Debian, CentOS, Fedora, OpenSUSE. Or use a pre-configured image from the Marketplace.

All operating systems are installed automatically when the GPU server is created.

How do I connect to a virtual GPU server?

By default, Windows-based servers are accessed via RDP and Linux-based servers via SSH.

You can also configure any other connection method that suits you.

Is it possible to rent a virtual GPU server with a custom configuration?

Yes, it is possible. Contact our round-the-clock support service (https://t.me/immerscloudsupport) and tell us what configuration you need.

Why immers.cloud?

  • Cheapest GPU rates

    Find cheaper — get a discount!
  • Per-second billing

    Use virtual machines for exactly as long as you need.
  • No waiting

    Automatic OS installation. Virtual machines are ready in a few minutes.
  • Free Internet

    Up to 1 Gb/s of incoming and outgoing traffic for free.
  • Round-the-clock support

    Live chat and Telegram support — 24/7.
Sign up

Pre-installed images

Create virtual machines based on any of the pre-installed operating systems with the necessary set of additional software.
  • Ubuntu
  • Debian
  • CentOS
  • Fedora
  • OpenSUSE
  • MS Windows Server
  • Ubuntu
    NVIDIA drivers, CUDA, cuDNN
  • MS Windows Server
    NVIDIA drivers
View all the pre-installed images in the Marketplace.

Pure OpenStack API

Developers and system administrators can manage the cloud using the full OpenStack API.
Authenticate ninja_user example:

$ curl -g -i -X POST https://api.immers.cloud:5000/v3/auth/tokens \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "User-Agent: YOUR-USER-AGENT" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}'

Create ninja_vm example:

$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "User-Agent: YOUR-USER-AGENT" \
  -H "X-Auth-Token: YOUR-API-TOKEN" \
  -d '{"server": {"name": "ninja_vm", "imageRef": "8b85e210-d2c8-490a-a0ba-dc17183c0223", "key_name": "mykey01", "flavorRef": "8f9a148d-b258-42f7-bcc2-32581d86e1f1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cc5f6f4a-2c44-44a4-af9a-f8534e34d2b7"}]}}'

Delete ninja_vm example:

$ curl -g -i -X DELETE https://api.immers.cloud:8774/v2.1/servers/{server_id} \
  -H "User-Agent: YOUR-USER-AGENT" \
  -H "X-Auth-Token: YOUR-API-TOKEN"

Create ninja_network example:

$ curl -g -i -X POST https://api.immers.cloud:9696/v2.0/networks \
  -H "Content-Type: application/json" \
  -H "User-Agent: YOUR-USER-AGENT" \
  -H "X-Auth-Token: YOUR-API-TOKEN" \
  -d '{"network": {"name": "ninja_net", "admin_state_up": true, "router:external": false}}'
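
The same authentication request can be issued from Python using only the standard library. This is a sketch mirroring the curl example above; the credentials are placeholders, and `authenticate` performs a real network call, so run it only against your own account:

```python
import json
import urllib.request

AUTH_URL = "https://api.immers.cloud:5000/v3/auth/tokens"

def build_auth_payload(user: str, password: str, domain_id: str = "default") -> dict:
    """Build the Keystone v3 password-auth body used in the curl example."""
    identity = {
        "methods": ["password"],
        "password": {"user": {"name": user, "password": password,
                              "domain": {"id": domain_id}}},
    }
    scope = {"project": {"name": user, "domain": {"id": domain_id}}}
    return {"auth": {"identity": identity, "scope": scope}}

def authenticate(user: str, password: str) -> str:
    """POST the payload and return the token from the X-Subject-Token header."""
    req = urllib.request.Request(
        AUTH_URL,
        data=json.dumps(build_auth_payload(user, password)).encode(),
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["X-Subject-Token"]
```

The returned token is then passed as the X-Auth-Token header in the compute and network calls shown above.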
Documentation
Sign up

Any questions?

Write to us via live chat, email, or call by phone:
@immerscloudsupport
support@immers.cloud
+7 499 110-44-94


Subscribe to our newsletter

Get notifications about new promotions and special offers by email.

 I agree to the processing of personal data