Cloud servers with RTX 4090

Get a huge leap in productivity, efficiency, and graphics performance, thanks to the Ada Lovelace architecture

Graphics servers with RTX 4090

All graphics servers with RTX 4090 are based on two 3rd Gen Intel Xeon Gold 6336Y CPUs with a base clock of 2.4 GHz and a maximum Turbo Boost clock of 3.6 GHz.

Each processor contains two Intel® AVX-512 units and supports Intel® AVX-512 Deep Learning Boost instructions. This instruction set accelerates the reduced-precision multiply and add operations that dominate the inner loops of deep learning algorithms.
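As a rough illustration of what these instructions accelerate, here is the reduced-precision multiply-accumulate pattern in plain Python (a conceptual sketch of the arithmetic, not the AVX-512 intrinsic itself):

```python
# VNNI-style inner loop: multiply low-precision (8-bit) values and
# accumulate the products into a wide 32-bit sum. AVX-512 VNNI fuses
# this multiply-accumulate pattern into a single instruction; this is
# a plain-Python illustration of the arithmetic, not the intrinsic.
activations = [120, -5, 33, 7]   # values that fit in int8
weights = [2, 90, -4, 1]         # values that fit in int8

acc = sum(a * w for a, w in zip(activations, weights))
print(acc)  # 240 - 450 - 132 + 7 = -335
```

Because the inputs are narrow (8-bit) but the accumulator is wide (32-bit), the hardware can process many more elements per cycle than with full-precision floats, which is where the speedup comes from.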

Each server has up to 8192 GB of DDR4 ECC Reg 3200 MHz RAM.

GPU RTX 4090

Take your graphics workloads to a completely new level with RTX 4090 graphics adapters. Each accelerator has 16,384 CUDA cores and 24 GB of GDDR6X memory.

Thanks to the transition to a new 5 nm process node, Ada Lovelace chips contain 2.7 times more transistors than the previous generation.

Video memory capacity: 24 GB
Video memory type: GDDR6X
PCI Express: Gen 4
CUDA cores: 16,384
Ray tracing cores: 3rd generation
Tensor cores: 4th generation
Encoder: 8th generation
Decoder: 5th generation

GPU performance benchmarks

Benchmark results in a virtual environment for a single RTX 4090 graphics card.
  • OctaneBench 2020: up to 1290 pts
  • Matrix multiply example: 4600 GFlop/s
  • Hashcat bcrypt: 232,800 H/s
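For context on how a GFlop/s figure like this is derived: an n × n matrix multiply performs roughly 2·n³ floating-point operations, so dividing that count by the wall-clock time gives the achieved rate. The 0.03-second timing below is a made-up illustration, not a measured result:

```python
# FLOP accounting behind a "GFlop/s" figure: an n x n matrix multiply
# performs roughly 2 * n**3 floating-point operations (n multiplies and
# about n adds for each of the n*n output elements).
def matmul_gflops(n: int, seconds: float) -> float:
    """Achieved GFlop/s for an n x n matmul that took `seconds` to run."""
    return 2 * n**3 / seconds / 1e9

# A hypothetical 4096 x 4096 multiply finishing in 0.03 s:
print(round(matmul_gflops(4096, 0.03), 1))  # 4581.3 GFlop/s
```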

Basic configurations with RTX 4090 24 GB

Name                    vCPU   RAM, MB   Disk, GB   GPU
rtx4090-1.8.16.40          8     16384         40     1
rtx4090-1.8.16.60          8     16384         60     1
rtx4090-1.8.16.80          8     16384         80     1
rtx4090-1.8.16.120         8     16384        120     1
rtx4090-1.8.16.160         8     16384        160     1
rtx4090-1.8.32.160         8     32768        160     1
rtx4090-1.8.128.60         8    131072         60     1
rtx4090-1.16.64.160       16     65536        160     1
rtx4090-1.16.96.160       16     98304        160     1
rtx4090-1.16.128.160      16    131072        160     1
rtx4090-1.32.64.160       32     65536        160     1
rtx4090-1.32.128.160      32    131072        160     1
rtx4090-1.44.256.160      44    262144        160     1
rtx4090-2.16.64.160       16     65536        160     2
rtx4090-2.16.128.160      16    131072        160     2
rtx4090-3.16.96.160       16     98304        160     3
rtx4090-3.16.128.160      16    131072        160     3
rtx4090-4.8.96.160         8     98304        160     4
rtx4090-4.8.128.160        8    131072        160     4
rtx4090-4.16.32.160       16     32768        160     4
rtx4090-4.16.64.160       16     65536        160     4
rtx4090-4.16.128.160      16    131072        160     4
rtx4090-4.16.192.160      16    196608        160     4
rtx4090-4.24.128.160      24    131072        160     4
rtx4090-4.32.128.160      32    131072        160     4
rtx4090-4.44.256.160      44    262144        160     4
rtx4090-6.44.256.160      44    262144        160     6
rtx4090-8.44.256.160      44    262144        160     8
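The flavor names appear to encode the configuration as rtx4090-&lt;GPUs&gt;.&lt;vCPUs&gt;.&lt;RAM in GB&gt;.&lt;disk in GB&gt;; this is an observed pattern in the table above, not an official naming specification. A small parser sketch:

```python
# The flavor names appear to encode the configuration as
# rtx4090-<GPUs>.<vCPUs>.<RAM, GB>.<disk, GB> -- an observed pattern
# in the table above, not an official naming specification.
def parse_flavor(name: str) -> dict:
    model, spec = name.split("-", 1)
    gpus, vcpus, ram_gb, disk_gb = (int(part) for part in spec.split("."))
    return {
        "gpu_model": model,
        "gpus": gpus,
        "vcpus": vcpus,
        "ram_mb": ram_gb * 1024,   # the table lists RAM in MB
        "disk_gb": disk_gb,
    }

print(parse_flavor("rtx4090-4.16.128.160"))
# {'gpu_model': 'rtx4090', 'gpus': 4, 'vcpus': 16, 'ram_mb': 131072, 'disk_gb': 160}
```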

100% performance

Each physical core and GPU adapter is assigned to a single client only.
This means that:

  • 100% of the vCPU time is available to you
  • The GPU is physically passed through into the VM
  • Less storage and network load on the hypervisors means more storage and network performance for each client

Up to 75,000 IOPS¹ for random reads and up to 20,000 IOPS for random writes on Virtual Machines with local SSDs.

Up to 22,500 IOPS¹ for random reads and up to 20,000 IOPS for random writes on Virtual Machines with block storage Volumes.

You can be sure that Virtual Machines never share vCPUs or GPUs with one another.

  1. IOPS — Input/Output Operations Per Second.
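To translate an IOPS figure into approximate throughput, multiply it by the request size. The 4 KiB request size below is an assumption (IOPS figures are commonly quoted at 4 KiB), not a parameter documented on this page:

```python
# Convert an IOPS figure to approximate throughput:
# throughput = IOPS * request size. The 4 KiB request size is an
# assumption (IOPS is commonly quoted at 4 KiB), not a parameter
# documented on this page.
def iops_to_mib_s(iops: int, block_kib: int = 4) -> float:
    return iops * block_kib / 1024

print(iops_to_mib_s(75_000))  # 292.96875 MiB/s (local-SSD random read)
print(iops_to_mib_s(20_000))  # 78.125 MiB/s (random write)
```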

Answers to frequently asked questions

You can rent a virtual server for any period. Top up your balance with any amount from $1.10 and work within the prepaid balance. When the work is completed, delete the server to stop the charges.

You create GPU servers yourself in the control panel, choosing the hardware configuration and operating system. The requested capacity is usually ready for use within a few minutes.

If something goes wrong, write to our round-the-clock support service: https://t.me/immerscloudsupport.

You can choose from basic images: Windows Server 2019, Windows Server 2022, Ubuntu, Debian, CentOS, Fedora, OpenSUSE. Or use a pre-configured image from the Marketplace.

All operating systems are installed automatically when the GPU server is created.

By default, we provide connection to Windows-based servers via RDP and to Linux-based servers via SSH.

You can also set up any other connection method that is convenient for you.

Yes, it is possible. Contact our round-the-clock support service (https://t.me/immerscloudsupport) and tell us what configuration you need.

A bit more about us

  • Per-second billing and free VM pause (shelve)

    You pay only for the actual use of your VMs
  • 24/7/365

    Tech support is always in touch in the chat and responds within a few minutes

  • Free traffic

    Speeds of up to 2 Gbit/s with no charge for incoming or outgoing traffic

  • Our data centers

    Built to the TIER III standard
  • 100% of the power is yours

    We do not share the resources you have purchased with other users
  • 20,000+

    Clients trust us with their data and tasks
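A sketch of how per-second billing works out in practice; the hourly rate used here is a made-up placeholder, not a real price:

```python
# Per-second billing sketch: charge = hourly_rate * seconds / 3600.
# The hourly rate is a made-up placeholder, not a real price.
def charge(hourly_rate: float, seconds: int) -> float:
    return round(hourly_rate * seconds / 3600, 6)

print(charge(1.50, 90))    # 90 seconds at a hypothetical $1.50/hr -> 0.0375
print(charge(1.50, 3600))  # a full hour -> 1.5
```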
Sign up

Ready-made OS images with the required software

Create virtual servers using our pre-configured OS images with Windows or Linux and specialized pre-installed software.
  • Ubuntu
  • Debian
  • CentOS
  • Fedora
  • OpenSUSE
  • MS Windows Server
  • 3ds Max
  • Cinema 4D
  • Corona
  • Deadline
  • Blender
  • Archicad
  • Ubuntu: graphics drivers, CUDA, cuDNN
  • MS Windows Server: graphics drivers, CUDA, cuDNN
  • Nginx
  • Apache
  • Git
  • Jupyter
  • Django
  • MySQL
View all the pre-installed images in the Marketplace.

Pure OpenStack API

Developers and system administrators can manage the cloud using the full OpenStack API.
Authenticate ninja_user example:

$ curl -g -i -X POST https://api.immers.cloud:5000/v3/auth/tokens \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "User-Agent: YOUR-USER-AGENT" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": { "name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}'

Create ninja_vm example:

$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "User-Agent: YOUR-USER-AGENT" \
  -H "X-Auth-Token: YOUR-API-TOKEN" \
  -d '{"server": {"name": "ninja_vm", "imageRef": "8b85e210-d2c8-490a-a0ba-dc17183c0223", "key_name": "mykey01", "flavorRef": "8f9a148d-b258-42f7-bcc2-32581d86e1f1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cc5f6f4a-2c44-44a4-af9a-f8534e34d2b7"}]}}'

STOP ninja_vm example:

$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "User-Agent: YOUR-USER-AGENT" \
  -H "X-Auth-Token: YOUR-API-TOKEN" \
  -d '{"os-stop" : null}'

START ninja_vm example:

$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "User-Agent: YOUR-USER-AGENT" \
  -H "X-Auth-Token: YOUR-API-TOKEN" \
  -d '{"os-start" : null}'

SHELVE ninja_vm example:

$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "User-Agent: YOUR-USER-AGENT" \
  -H "X-Auth-Token: YOUR-API-TOKEN" \
  -d '{"shelve" : null}'

DELETE ninja_vm example:

$ curl -g -i -X DELETE https://api.immers.cloud:8774/v2.1/servers/{server_id} \
  -H "User-Agent: YOUR-USER-AGENT" \
  -H "X-Auth-Token: YOUR-API-TOKEN"
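The same authentication body can be built programmatically. This sketch reproduces the JSON payload from the curl example above using only the Python standard library (the HTTP call itself is omitted):

```python
import json

# Build the same password-authentication body that the curl example
# sends to POST /v3/auth/tokens (the standard OpenStack Identity v3
# format); the HTTP call itself is omitted here.
def auth_payload(user: str, password: str, project: str,
                 domain_id: str = "default") -> str:
    return json.dumps({
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {
                    "name": user,
                    "password": password,
                    "domain": {"id": domain_id},
                }},
            },
            "scope": {"project": {
                "name": project,
                "domain": {"id": domain_id},
            }},
        }
    })

body = auth_payload("ninja_user", "ninja_password", "ninja_user")
print(json.loads(body)["auth"]["identity"]["methods"])  # ['password']
```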
Documentation

Any questions?

Write to us via live chat, email, or call by phone:
@immerscloudsupport
support@immers.cloud
+7 499 110-44-94


Subscribe to our newsletter

Get notifications about new promotions and special offers by email.
