Subscribe to pre-order cloud servers with Tesla A100 and H100 graphics accelerators

Launch information

Up-to-date information about the launch of new virtual servers with graphics accelerators.

Name | Status | Planned launch date
H100 with NVLink | Ready | 29.04.2024
A100 with NVLink | Ready | 18.03.2024
A100 | Deploying | 07.03.2024
H100 | Ready | 26.01.2024
A100 | Ready | 24.11.2023
A100 | Ready | 06.11.2023
4090 | Ready | 01.11.2023
4090 | Ready | 04.08.2023
A5000 with NVLink | Ready | 16.07.2023
A2 | Ready | 24.03.2023
A100 | Ready | 16.02.2023
A10 | Ready | 16.02.2023

Notifications about commissioning

Subscribe to notifications about the launch of new servers with graphics accelerators on the immers.cloud platform.

Hopper architecture

Accelerate your transition to the new era of artificial intelligence with the latest H100 GPUs and their fourth-generation tensor cores.

The H100 GPUs, equipped with fourth-generation tensor cores and a Transformer Engine with FP8 precision, speed up training by up to 9x compared to the previous generation on Mixture of Experts (MoE) models.

Fourth-generation tensor cores accelerate computation in every supported precision, including FP64, TF32, FP32, FP16 and INT8, while the Transformer Engine combines FP8 and FP16 to reduce memory usage and increase throughput without losing accuracy on large language models.

PCI Express Gen 5 doubles the bandwidth of PCIe Gen 4, speeding up data transfer from processor memory for resource-intensive tasks such as AI, data processing and working with 3D graphics.

Thanks to the ultra-fast HBM3 memory, scientists, engineers and data science specialists get the necessary resources for processing large data sets and modeling.
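
Once an H100 server is created, you can confirm the GPU model, driver version and NVLink topology from inside the guest OS. A quick check, assuming a Linux image with the NVIDIA driver already installed (for example the pre-configured Ubuntu image from the Marketplace):

$ nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv
$ nvidia-smi topo -m

The first command prints the GPU, its memory size and the driver version; the second prints the interconnect matrix, where NV-prefixed entries indicate NVLink connections on multi-GPU flavors.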

Ampere architecture

Tesla A100, A10, A2 and 3090 graphics accelerators with tensor cores are built on the Ampere architecture.

Thanks to the redesigned CUDA cores, single-precision floating-point (FP32) throughput has been doubled. This significantly speeds up graphics and video workloads, as well as modeling of complex 3D scenes in computer-aided design (CAD) software.

Second-generation RT cores run ray tracing concurrently with shading or denoising. This speeds up photorealistic rendering of film footage, evaluation of architectural designs and motion rendering, producing a more accurate image faster.

Support for Tensor Float 32 (TF32) operations speeds up AI model training and data processing by up to 5x compared to the previous generation, with no code changes. Tensor cores also power AI-based features such as DLSS, denoising, and photo and video editing functions in selected applications.

PCI Express Gen 4 doubles the bandwidth of PCIe Gen 3, speeding up data transfer from processor memory for resource-intensive tasks such as AI, data processing and working with 3D graphics.

Thanks to the ultra-fast GDDR6 and HBM2 memory, scientists, engineers and data science specialists get the necessary resources for processing large data sets and modeling.

100% performance

Each physical CPU core and each GPU is assigned to a single client only.
This means:

Up to 75,000 IOPS¹ for random reads and up to 20,000 IOPS for random writes on virtual servers with local SSDs.

Up to 70,000 IOPS¹ for random reads and up to 60,000 IOPS for random writes on virtual servers with block storage volumes.

You can be sure that virtual servers never share vCPUs or GPUs with one another.

  1. IOPS — Input/Output Operations Per Second.
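
If you want to verify these figures yourself, the standard fio benchmark reproduces this access pattern. A minimal sketch, assuming a Linux server with fio installed; the test file path and sizes are placeholders, so point them at the disk you want to measure:

$ fio --name=randread --filename=/mnt/data/fio.test --size=4G \
      --ioengine=libaio --direct=1 --rw=randread --bs=4k \
      --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

Change --rw=randread to --rw=randwrite to measure random write IOPS and compare the reported values with the numbers above.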

Answers to frequently asked questions

You can rent a virtual server for any period. Top up your balance with any amount starting from $1.10 and work within the prepaid balance. When the work is done, delete the server to stop being charged.

You can create GPU servers yourself in the control panel, choosing the hardware configuration and operating system. The ordered capacity is available for use within a few minutes.

If something goes wrong, write to our tech support. We are available 24/7: https://t.me/immerscloudsupport.

You can choose from basic images: Windows Server 2019, Windows Server 2022, Ubuntu, Debian, CentOS, Fedora, OpenSUSE. Or use a pre-configured image from the Marketplace.

All operating systems are installed automatically when the GPU server is created.

By default, Windows-based servers are accessed via RDP and Linux-based servers via SSH.

You can also configure any other connection method that suits you.
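
For example, a Linux server is reached over SSH with the private key matching the key pair attached at creation time, and a Windows server with any RDP client pointed at its public IP. The user names, key path and IP address below are placeholders:

$ ssh -i ~/.ssh/mykey01 ubuntu@203.0.113.10
$ xfreerdp /u:Administrator /v:203.0.113.10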

Yes, it is possible. Contact our round-the-clock support service (https://t.me/immerscloudsupport) and tell us what configuration you need.

Ready-made OS images with the required software

Create virtual servers from our pre-configured Windows or Linux OS images with specialized software already installed.
  • Ubuntu
  • Debian
  • CentOS
  • Fedora
  • OpenSUSE
  • MS Windows Server
  • 3ds Max
  • Cinema 4D
  • Corona
  • Deadline
  • Blender
  • Archicad
  • Ubuntu
    Graphics drivers, CUDA, cuDNN (see the verification example below)
  • MS Windows Server
    Graphics drivers, CUDA, cuDNN
  • Nginx
  • Apache
  • Git
  • Jupyter
  • Django
  • MySQL
View all the pre-installed images in the Marketplace.
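
On the Ubuntu image that ships with graphics drivers, CUDA and cuDNN, you can verify the installed versions after the server boots. A minimal check; the cuDNN header path is an assumption and varies with how the library was packaged:

$ nvidia-smi
$ nvcc --version
$ grep -A 2 CUDNN_MAJOR /usr/include/cudnn_version.h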

Pure OpenStack API

Developers and system administrators can manage the cloud using the full OpenStack API.
Authenticate ninja_user example:
$ curl -g -i -X POST https://api.immers.cloud:5000/v3/auth/tokens \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-d '{"auth": {"identity": {"methods": ["password"], "password": {"user": { "name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}'

Create ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"server": {"name": "ninja_vm", "imageRef": "8b85e210-d2c8-490a-a0ba-dc17183c0223", "key_name": "mykey01", "flavorRef": "8f9a148d-b258-42f7-bcc2-32581d86e1f1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cc5f6f4a-2c44-44a4-af9a-f8534e34d2b7"}]}}'

STOP ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-stop" : null}'

START ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-start" : null}'

SHELVE ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"shelve" : null}'

DELETE ninja_vm example:
$ curl -g -i -X DELETE https://api.immers.cloud:8774/v2.1/servers/{server_id} \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN"

Any questions?

Write to us via live chat or email, or call us by phone:
@immerscloudsupport
support@immers.cloud
+7 499 110-44-94

Subscribe to our newsletter

Get notifications about new promotions and special offers by email.
