GPU cloud servers

Get truly revolutionary performance with the latest Tesla® and RTX™ graphics adapters
Discounts of 25% to 50% are available on all GPU models with an advance payment of 1-2 months.

immers.cloud platform

All GPU servers are based on 2nd and 3rd generation Intel® Xeon® Scalable processors and offer up to 96 virtual cores and up to 8192 GB of DDR4 ECC Registered 3200 MHz RAM.

Each processor is equipped with two Intel® AVX-512 units and supports Intel® AVX-512 Deep Learning Boost (VNNI) instructions, which accelerate reduced-precision multiply-accumulate operations and thereby speed up deep learning workloads.
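
A quick way to confirm these instruction sets from inside a Linux virtual server is to inspect the CPU flags. This is a minimal sketch, assuming a Linux guest image; the flag names follow the kernel's /proc/cpuinfo conventions and the exact set depends on the image:

$ grep -o -w -E 'avx512f|avx512_vnni' /proc/cpuinfo | sort -u
$ lscpu | grep -i -E 'avx512f|avx512_vnni'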

Local storage is built on Intel® and Samsung solid-state drives designed specifically for data centers, with capacities of up to 7.68 TB.

100% performance

Each physical core and each GPU is assigned to a single client only.
This means that:

  • 100% of the vCPU time is available to you;
  • The GPU is physically passed through into the virtual server;
  • Hypervisors carry less storage and network load, leaving more storage and network performance for each client.

Up to 75 000 IOPS¹ for random reads and up to 20 000 IOPS for random writes on virtual servers with local SSDs.

Up to 70 000 IOPS¹ for random reads and up to 60 000 IOPS for random writes on virtual servers with block storage volumes.

You can be sure that virtual servers never share vCPUs or GPUs with one another. (A sample fio command for checking the storage figures yourself is shown below the footnote.)

  ¹ IOPS — Input/Output Operations Per Second.
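
A minimal sketch for measuring random-read IOPS with fio from a Linux server. This assumes fio is installed; /mnt/testfile is a placeholder path on the disk you want to test, and actual results depend on the flavor, block size and queue depth. Swap --rw=randread for --rw=randwrite to measure writes:

$ fio --name=iops-test --filename=/mnt/testfile --size=2G --direct=1 \
  --ioengine=libaio --iodepth=32 --numjobs=4 --rw=randread --bs=4k \
  --runtime=60 --time_based --group_reporting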

GPU Tesla® H100 80 GB

The Tesla H100 GPU delivers exceptional acceleration for AI, data analytics and the most demanding computing workloads. The H100 is a highly capable integrated platform for AI and HPC, enabling real-time results and scalable deployments.

Basic configurations with Tesla H100 80 GB

Name vCPU RAM, MB Disk, GB GPUs
teslah100-1.16.64.160 16 65536 160 1
teslah100-1.16.128.160 16 131072 160 1
teslah100-2.24.256.160 24 262144 160 2
teslah100-3.32.384.160 32 393216 160 3
teslah100-4.16.128.120 16 131072 120 4
teslah100-4.16.256.120 16 262144 120 4
teslah100-4.44.256.120 44 262144 120 4
teslah100-4.44.512.160 44 524288 160 4
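
The flavor names appear to encode the configuration as GPUs.vCPUs.RAM in GB.disk in GB, so teslah100-1.16.64.160 is 1 GPU, 16 vCPU, 64 GB of RAM and a 160 GB disk. As a hedged sketch, such a flavor can also be launched with the standard OpenStack CLI, assuming the client is installed and configured with your credentials for this cloud; the image name, network name and server name below are placeholders:

$ openstack server create \
    --flavor teslah100-1.16.64.160 \
    --image "Ubuntu 22.04" \
    --network my-network \
    --key-name mykey01 \
    my-h100-server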

GPU RTX 4090

The Ada Lovelace architecture, built on a new 5 nm process technology, provides a huge leap in performance, efficiency and graphics. Each accelerator has 16384 CUDA cores and 24 GB of GDDR6X memory.

Basic configurations with RTX 4090 24 GB

Name vCPU RAM, MB Disk, GB GPUs
rtx4090-1.8.16.40 8 16384 40 1
rtx4090-1.8.16.60 8 16384 60 1
rtx4090-1.8.16.80 8 16384 80 1
rtx4090-1.8.16.120 8 16384 120 1
rtx4090-1.8.16.160 8 16384 160 1
rtx4090-1.8.32.160 8 32768 160 1
rtx4090-1.8.128.60 8 131072 60 1
rtx4090-1.16.64.160 16 65536 160 1
rtx4090-1.16.96.160 16 98304 160 1
rtx4090-1.16.128.160 16 131072 160 1
rtx4090-1.32.64.160 32 65536 160 1
rtx4090-1.32.128.160 32 131072 160 1
rtx4090-1.44.256.160 44 262144 160 1
rtx4090-2.16.64.160 16 65536 160 2
rtx4090-2.16.128.160 16 131072 160 2
rtx4090-3.16.96.160 16 98304 160 3
rtx4090-3.16.128.160 16 131072 160 3
rtx4090-4.8.96.160 8 98304 160 4
rtx4090-4.8.128.160 8 131072 160 4
rtx4090-4.16.32.160 16 32768 160 4
rtx4090-4.16.64.160 16 65536 160 4
rtx4090-4.16.128.160 16 131072 160 4
rtx4090-4.16.192.160 16 196608 160 4
rtx4090-4.24.128.160 24 131072 160 4
rtx4090-4.32.128.160 32 131072 160 4
rtx4090-4.44.256.160 44 262144 160 4
rtx4090-6.44.256.160 44 262144 160 6
rtx4090-8.44.256.160 44 262144 160 8

GPU RTX 3090

RTX 3090 graphics cards are based on the powerful Ampere architecture and an improved RTX hardware ray tracing platform. Each accelerator has 328 tensor cores, 10496 CUDA cores, and 24 GB of memory.

Basic configurations with RTX™ 3090 24 GB

Name vCPU RAM, MB Disk, GB GPUs
rtx3090-1.8.16.40 8 16384 40 1
rtx3090-1.8.16.50 8 16384 50 1
rtx3090-1.8.16.60 8 16384 60 1
rtx3090-1.8.16.80 8 16384 80 1
rtx3090-1.8.16.120 8 16384 120 1
rtx3090-1.8.16.160 8 16384 160 1
rtx3090-1.8.32.160 8 32768 160 1
rtx3090-1.8.128.60 8 131072 60 1
rtx3090-1.16.64.160 16 65536 160 1
rtx3090-1.16.96.160 16 98304 160 1
rtx3090-1.32.64.160 32 65536 160 1
rtx3090-1.32.128.160 32 131072 160 1
rtx3090-1.44.256.160 44 262144 160 1
rtx3090-2.16.64.160 16 65536 160 2
rtx3090-2.16.128.160 16 131072 160 2
rtx3090-2.32.64.160 32 65536 160 2
rtx3090-3.16.96.160 16 98304 160 3
rtx3090-4.8.96.160 8 98304 160 4
rtx3090-4.8.128.160 8 131072 160 4
rtx3090-4.16.32.160 16 32768 160 4
rtx3090-4.16.64.160 16 65536 160 4
rtx3090-4.16.128.160 16 131072 160 4
rtx3090-4.16.192.160 16 196608 160 4
rtx3090-4.24.128.160 24 131072 160 4
rtx3090-4.44.256.160 44 262144 160 4

GPU RTX 3080

RTX 3080 graphics cards are based on the powerful Ampere architecture and an improved RTX hardware ray tracing platform. Each accelerator has 272 tensor cores, 8704 CUDA cores, and 10 GB of memory.

Basic configurations with RTX™ 3080 10 GB LHR

Name vCPU RAM, MB Disk, GB GPUs
rtx3080-1.8.16.40 8 16384 40 1
rtx3080-1.8.16.60 8 16384 60 1
rtx3080-1.8.16.80 8 16384 80 1
rtx3080-1.8.16.120 8 16384 120 1
rtx3080-1.8.16.160 8 16384 160 1
rtx3080-1.8.32.160 8 32768 160 1
rtx3080-1.8.64.160 8 65536 160 1
rtx3080-1.16.16.60 16 16384 60 1
rtx3080-1.16.32.160 16 32768 160 1
rtx3080-1.16.64.160 16 65536 160 1
rtx3080-1.16.128.160 16 131072 160 1
rtx3080-1.20.64.160 20 65536 160 1
rtx3080-1.44.32.160 44 32768 160 1
rtx3080-1.44.64.160 44 65536 160 1
rtx3080-1.44.256.160 44 262144 160 1
rtx3080-2.16.32.160 16 32768 160 2
rtx3080-2.16.64.160 16 65536 160 2
rtx3080-2.32.64.160 32 65536 160 2
rtx3080-3.16.64.160 16 65536 160 3
rtx3080-3.16.96.160 16 98304 160 3
rtx3080-4.8.32.60 8 32768 60 4
rtx3080-4.8.64.160 8 65536 160 4
rtx3080-4.8.96.160 8 98304 160 4
rtx3080-4.16.64.160 16 65536 160 4
rtx3080-4.16.96.160 16 98304 160 4

GPU Tesla® A100 80 GB

The Tesla A100 GPU delivers exceptional acceleration for AI, data analytics and the most demanding computing workloads. The A100 is a highly capable integrated platform for AI and HPC, enabling real-time results and scalable deployments.

Basic configurations with Tesla A100 80 GB

Name vCPU RAM, MB Disk, GB GPUs
teslaa100-1.16.64.160 16 65536 160 1
teslaa100-1.16.128.160 16 131072 160 1
teslaa100-2.24.256.160 24 262144 160 2
teslaa100-3.32.384.160 32 393216 160 3
teslaa100-4.16.128.120 16 131072 120 4
teslaa100-4.16.256.120 16 262144 120 4
teslaa100-4.44.256.120 44 262144 120 4
teslaa100-4.44.512.160 44 524288 160 4

GPU RTX® A5000

The RTX A5000 graphics accelerator offers a balance of power, performance and reliability for complex tasks. The GPU is built on the Ampere architecture and carries 24 GB of video memory, so designers, engineers and artists can bring the projects they have dreamed of to life today.

The new CUDA cores deliver up to 2.5 times the FP32 performance of the previous generation, accelerating graphics workloads.

Hardware-accelerated motion blur and higher ray tracing performance provide greater rendering accuracy.

In flavors with an even number of GPUs, the graphics adapters are paired via NVLink, which increases the available memory and improves performance for complex visual computations.
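
If you want to verify the NVLink pairing from inside the guest, here is a minimal sketch, assuming an even-GPU flavor and the standard NVIDIA driver utilities in the image:

$ nvidia-smi nvlink --status
$ nvidia-smi topo -m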

Basic configurations with RTX A5000 24 GB


GPU Tesla® A10

Tesla A10 graphics accelerators, featuring tensor cores, are built on the Ampere architecture, which enhances performance and efficiency for various computing tasks.

Thanks to CUDA cores, Tesla A10 accelerators deliver twice the single-precision floating-point (FP32) throughput of the previous generation, significantly speeding up graphics, video, and complex 3D modeling in computer-aided design (CAD) software.

The second generation of RT cores enables simultaneous ray tracing, shading, or noise reduction, accelerating tasks such as photorealistic rendering of film materials, architectural project evaluation, and motion rendering for faster, more accurate results.

Support for Tensor Float 32 (TF32) operations in Tesla A10 accelerators boosts training speeds for AI models and data processing by five times compared to previous generations, without requiring changes in the code. Tensor cores also enable AI-based technologies such as DLSS, noise reduction, and photo and video editing functions in select applications.

PCI Express Gen 4 doubles the bandwidth of PCIe Gen 3, accelerating data transfer from processor memory for resource-intensive tasks like AI, data processing, and 3D graphics rendering.

Thanks to ultra-fast GDDR6 memory, scientists, engineers, and data science specialists gain the necessary resources for processing large datasets and conducting advanced modeling.

Basic configurations with Tesla A10 24 GB

Name vCPU RAM, MB Disk, GB GPUs
teslaa10-1.4.16.60 4 16384 60 1
teslaa10-1.4.16.120 4 16384 120 1
teslaa10-1.8.16.120 8 16384 120 1
teslaa10-1.8.32.120 8 32768 120 1
teslaa10-1.8.32.160 8 32768 160 1
teslaa10-1.8.96.160 8 98304 160 1
teslaa10-1.16.32.160 16 32768 160 1
teslaa10-1.16.64.160 16 65536 160 1
teslaa10-2.8.32.160 8 32768 160 2
teslaa10-2.16.64.160 16 65536 160 2
teslaa10-2.16.128.160 16 131072 160 2
teslaa10-3.16.96.160 16 98304 160 3
teslaa10-3.44.256.160 44 262144 160 3
teslaa10-4.12.48.160 12 49152 160 4
teslaa10-4.16.32.60 16 32768 58 4
teslaa10-4.16.32.160 16 32768 160 4
teslaa10-4.16.64.160 16 65536 160 4
teslaa10-4.16.128.160 16 131072 160 4
teslaa10-4.44.256.160 44 262144 160 4

GPU RTX 2080 Ti

RTX 2080 Ti graphics cards are based on the powerful Turing architecture and a completely new RTX hardware ray tracing platform. Each accelerator has 544 tensor cores, 4352 CUDA cores, and 11 GB of memory.

Basic configurations with RTX™ 2080 Ti 11 GB

Name vCPU RAM, MB Disk, GB GPUs
rtx2080ti-1.4.8.40 4 8192 40 1
rtx2080ti-1.4.16.60 4 16384 60 1
rtx2080ti-1.4.32.40 4 32768 40 1
rtx2080ti-1.4.48.40 4 49152 40 1
rtx2080ti-1.6.256.160 6 262144 160 1
rtx2080ti-1.8.8.160 8 8192 160 1
rtx2080ti-1.8.16.160 8 16384 160 1
rtx2080ti-1.8.24.80 8 24576 80 1
rtx2080ti-1.8.24.120 8 24576 120 1
rtx2080ti-1.8.24.160 8 24576 160 1
rtx2080ti-1.8.32.160 8 32768 160 1
rtx2080ti-1.8.48.160 8 49152 160 1
rtx2080ti-1.8.64.160 8 65536 160 1
rtx2080ti-1.8.288.160 8 294912 160 1
rtx2080ti-1.16.32.160 16 32768 160 1
rtx2080ti-1.16.64.160 16 65536 160 1
rtx2080ti-1.16.128.160 16 131072 160 1
rtx2080ti-1.32.64.160 32 65536 160 1
rtx2080ti-1.44.64.160 44 65536 160 1
rtx2080ti-1.44.128.160 44 131072 160 1
rtx2080ti-1.44.256.160 44 262144 160 1
rtx2080ti-2.4.8.40 4 8192 40 2
rtx2080ti-2.4.32.40 4 32768 40 2
rtx2080ti-2.4.48.40 4 49152 40 2
rtx2080ti-2.8.16.80 8 16384 80 2
rtx2080ti-2.8.16.160 8 16384 160 2
rtx2080ti-2.8.32.160 8 32768 160 2
rtx2080ti-2.8.56.160 8 57344 160 2
rtx2080ti-2.8.64.160 8 65536 160 2
rtx2080ti-2.12.64.160 12 65536 160 2
rtx2080ti-2.16.48.160 16 49152 160 2
rtx2080ti-2.16.64.160 16 65536 160 2
rtx2080ti-3.12.24.60 12 24576 60 3
rtx2080ti-3.12.24.120 12 24576 120 3
rtx2080ti-3.16.64.160 16 65536 160 3
rtx2080ti-3.24.72.160 24 73728 160 3
rtx2080ti-3.32.48.160 32 49152 160 3
rtx2080ti-4.4.32.60 4 32768 60 4
rtx2080ti-4.8.32.40 8 32768 40 4
rtx2080ti-4.8.128.160 8 131072 160 4
rtx2080ti-4.8.256.160 8 262144 160 4
rtx2080ti-4.16.32.160 16 32768 160 4
rtx2080ti-4.16.64.160 16 65536 160 4
rtx2080ti-4.16.128.160 16 131072 160 4
rtx2080ti-4.16.256.160 16 262144 160 4
rtx2080ti-4.32.96.160 32 98304 160 4
rtx2080ti-4.44.128.160 44 131072 160 4
rtx2080ti-4.44.256.160 44 262144 160 4

GPU Tesla® A2

The Tesla A2 graphics accelerator is optimized for inference and delivers up to 1.3 times higher performance for smart city, industrial and retail workloads.

Basic configurations with Tesla A2 16 GB

Name vCPU RAM, MB Disk, GB GPUs
teslaa2-1.4.8.60 4 8192 60 1
teslaa2-1.4.8.120 4 8192 120 1
teslaa2-1.4.16.120 4 16384 120 1
teslaa2-1.6.64.120 6 65536 120 1
teslaa2-1.8.16.120 8 16384 120 1
teslaa2-1.8.16.160 8 16384 160 1
teslaa2-1.8.32.120 8 32768 120 1
teslaa2-1.8.32.160 8 32768 160 1
teslaa2-1.32.64.160 32 65536 160 1
teslaa2-1.32.128.160 32 131072 160 1
teslaa2-2.16.32.160 16 32768 160 2
teslaa2-2.16.64.160 16 65536 160 2
teslaa2-2.16.128.160 16 131072 160 2
teslaa2-2.32.128.160 32 131072 160 2
teslaa2-3.32.128.160 32 131072 160 3
teslaa2-3.32.256.160 32 262144 160 3
teslaa2-4.32.128.160 32 131072 160 4
teslaa2-6.32.128.160 32 131072 160 6

GPU Tesla® T4

Tesla® T4, with tensor and RT cores, is one of the most advanced and energy-efficient graphics accelerators for deep learning and inference of neural networks, video transcoding, streaming, and remote desktops.

Each accelerator has 320 tensor cores, 2560 CUDA cores, and 16 GB of memory.

T4 graphics accelerators are ideal for running neural network models in production (inference), speech processing, and NLP.

In addition to tensor cores, the T4 has RT cores that perform hardware-accelerated ray tracing.

Basic configurations with Tesla® T4 16 GB

Name vCPU RAM, MB Disk, GB GPUs
teslat4-1.4.8.60 4 8192 60 1
teslat4-1.4.8.120 4 8192 120 1
teslat4-1.4.16.60 4 16384 60 1
teslat4-1.4.16.120 4 16384 120 1
teslat4-1.8.16.120 8 16384 120 1
teslat4-1.8.16.160 8 16384 160 1
teslat4-1.8.32.80 8 32768 80 1
teslat4-1.8.32.120 8 32768 120 1
teslat4-1.16.16.160 16 16384 160 1
teslat4-1.16.64.160 16 65536 160 1
teslat4-1.32.64.120 32 65536 120 1
teslat4-1.32.128.160 32 131072 160 1
teslat4-1.48.256.160 48 262144 160 1
teslat4-2.4.16.160 4 16384 160 2
teslat4-2.16.32.160 16 32768 160 2
teslat4-2.16.64.120 16 65536 120 2
teslat4-2.32.64.120 32 65536 120 2
teslat4-2.32.128.160 32 131072 160 2
teslat4-2.32.192.160 32 196608 160 2
teslat4-3.32.64.160 32 65536 160 3
teslat4-3.32.128.160 32 131072 160 3
teslat4-3.32.256.160 32 262144 160 3
teslat4-4.16.64.40.custom.6240R 16 65536 40 4
teslat4-4.16.64.160 16 65536 160 4
teslat4-4.16.96.160 16 98304 160 4
teslat4-4.16.128.160 16 131072 160 4
teslat4-4.32.64.160 32 65536 160 4
teslat4-4.32.128.160 32 131072 160 4
teslat4-4.48.192.160 48 196608 160 4
teslat4-4.48.256.160 48 262144 160 4

GPU Tesla® V100

Tesla® V100 with tensor cores is the world's most technically advanced GPU for data centers.

Each graphics accelerator has 640 tensor cores, 5120 CUDA cores, and 32 GB of HBM2 memory with a maximum throughput of 900 GB/s.

The total computing performance of the server is 28 TFLOPS at double precision and 448 TFLOPS with mixed precision on tensor cores (a four-GPU server, i.e. roughly 7 TFLOPS FP64 and 112 tensor TFLOPS per accelerator).

V100 accelerators are ideal for training deep neural networks.

Basic configurations with Tesla® V100 32 GB

Name vCPU RAM, MB Disk, GB GPUs
teslav100-1.8.16.128 8 16384 128 1
teslav100-1.8.64.60 8 65536 60 1
teslav100-1.8.64.80 8 65536 80 1
teslav100-1.8.64.160 8 65536 160 1
teslav100-1.12.64.160 12 65536 160 1
teslav100-1.16.64.160 16 65536 160 1
teslav100-1.16.128.160 16 131072 160 1
teslav100-1.32.128.160 32 131072 160 1
teslav100-2.32.128.160 32 131072 160 2
teslav100-2.32.192.160 32 196608 160 2
teslav100-4.32.64.160 32 65536 160 4
teslav100-4.32.96.160 32 98304 160 4
teslav100-4.32.256.160 32 262144 160 4

Answers to frequently asked questions

For how long can I rent a server?

You can rent a virtual server for any period. Make a payment of any amount starting from $1.1 and work within your prepaid balance. When the work is done, delete the server to stop spending money.

How quickly can I get a GPU server?

You can create GPU servers yourself in the control panel, choosing the hardware configuration and operating system. The ordered capacity is ready to use within a few minutes.

If something goes wrong, write to our tech support. We are available 24/7: https://t.me/immerscloudsupport.

Which operating systems are available?

You can choose from the basic images: Windows Server 2019, Windows Server 2022, Ubuntu, Debian, CentOS, Fedora, OpenSUSE. Or use a pre-configured image from the Marketplace.

All operating systems are installed automatically when the GPU server is created.

How do I connect to the server?

By default, we provide connection to Windows-based servers via RDP and to Linux-based servers via SSH.

You can also set up any other connection method that suits you.
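
For illustration only, a minimal SSH sketch; the address and the default user name are placeholders and depend on your server and image (for Windows servers, open the same address in any RDP client):

$ ssh -i ~/.ssh/id_ed25519 ubuntu@203.0.113.10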

Can I get a configuration that is not listed?

Yes, it is possible. Contact our round-the-clock support service (https://t.me/immerscloudsupport) and tell us which configuration you need.

A bit more about us

  • Per-second billing and free VM pause (shelve): you pay only for the actual use of your VMs.
  • 24/7/365 tech support: always available via chat and responding within minutes.
  • Free traffic: speeds up to 2 Gb/s with no extra charge for incoming and outgoing traffic.
  • Our data centers are built according to the TIER III standard.
  • 100% of the power is yours: we do not share the resources you purchased with other users.
  • 20 000+ clients trust us with their data and tasks.

Ready-made OS images with the required software

Create virtual servers from our pre-configured Windows or Linux OS images with specialized software pre-installed.
  • Ubuntu
  • Debian
  • CentOS
  • Fedora
  • OpenSUSE
  • MS Windows Server
  • 3ds Max
  • Cinema 4D
  • Corona
  • Deadline
  • Blender
  • Archicad
  • Ubuntu (graphics drivers, CUDA, cuDNN)
  • MS Windows Server (graphics drivers, CUDA, cuDNN)
  • Nginx
  • Apache
  • Git
  • Jupyter
  • Django
  • MySQL
View all the pre-installed images in the Marketplace.

Pure OpenStack API

Developers and system administrators can manage the cloud using the full OpenStack API.
Authenticate ninja_user example: $ curl -g -i -X POST https://api.immers.cloud:5000/v3/auth/tokens \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-d '{"auth": {"identity": {"methods": ["password"], "password": {"user": { "name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}'
Create ninja_vm example: $ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"server": {"name": "ninja_vm", "imageRef": "8b85e210-d2c8-490a-a0ba-dc17183c0223", "key_name": "mykey01", "flavorRef": "8f9a148d-b258-42f7-bcc2-32581d86e1f1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cc5f6f4a-2c44-44a4-af9a-f8534e34d2b7"}]}}'
STOP ninja_vm example: $ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-stop" : null}'
START ninja_vm example: $ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-start" : null}'
SHELVE ninja_vm example: $ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"shelve" : null}'
DELETE ninja_vm example: $ curl -g -i -X DELETE https://api.immers.cloud:8774/v2.1/servers/{server_id} \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN"
Documentation

Any questions?

Write to us via live chat or email, or call us by phone:
@immerscloudsupport
support@immers.cloud
+7 499 110-44-94


Subscribe to our newsletter

Get notifications about new promotions and special offers by email.
