immers.cloud platform
All GPU servers are based on second-generation Intel® Xeon® Scalable (Cascade Lake) processors and provide up to 96 virtual processors and up to 512 GB of DDR4 ECC Reg 2400–2933 MHz RAM.
Each processor contains two Intel® AVX-512 units and supports the Intel® AVX-512 Deep Learning Boost instructions. This instruction set accelerates the reduced-precision multiply-and-add operations used in the inner loops of many deep learning algorithms.
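As an illustration, the core operation that Deep Learning Boost (AVX-512 VNNI) fuses in hardware is a dot product of 8-bit values accumulated into a 32-bit integer. The sketch below mimics that behavior in plain Python; it is a behavioral model of the pattern, not the SIMD instruction itself, and the function name is illustrative.

```python
# Behavioral sketch of the reduced-precision multiply-accumulate that
# AVX-512 VNNI (Deep Learning Boost) performs: unsigned 8-bit activations
# times signed 8-bit weights, summed into a signed 32-bit accumulator.

def vpdpbusd_like(acc: int, activations: list[int], weights: list[int]) -> int:
    """Multiply u8 activations by s8 weights and add the sum to a 32-bit accumulator."""
    assert all(0 <= a <= 255 for a in activations)    # u8 range
    assert all(-128 <= w <= 127 for w in weights)     # s8 range
    acc += sum(a * w for a, w in zip(activations, weights))
    # Wrap to signed 32 bits, as the hardware accumulator would.
    acc &= 0xFFFFFFFF
    return acc - 0x100000000 if acc >= 0x80000000 else acc

result = vpdpbusd_like(0, [10, 20, 30, 40], [1, -2, 3, -4])
# 10*1 - 20*2 + 30*3 - 40*4 = -100
```

Doing this fused multiply-accumulate in one instruction, on wide vectors of 8-bit values, is what makes quantized inference fast on these CPUs.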
Local storage is organized on Intel® solid-state drives that are designed specifically for data centers and have a capacity of up to 1.92 TB.
100% performance
Each physical core and GPU adapter is assigned to a single client only.
This means:
- Available vCPU time is 100%
- GPUs are physically passed through to the VM
- Less storage and network load on the hypervisors, and more storage and network performance for each client.
Up to 75 000 IOPS for random reads and up to 20 000 IOPS for random writes for Virtual Machines with local SSDs.
Up to 22 500 IOPS for random reads and up to 20 000 IOPS for random writes for Virtual Machines with block storage Volumes.
You can be sure that Virtual Machines do not share vCPUs or GPUs with each other.
NVIDIA® Tesla® V100
NVIDIA® Tesla® V100 with tensor cores is the world's most technically advanced GPU for data centers.
Each graphics accelerator has 640 tensor cores, 5120 CUDA cores, and 32 GB of HBM2 memory with a bandwidth of 900 GB/s.
The total computing performance of the server reaches 28 TFLOPS at double precision and 448 TFLOPS at mixed precision using tensor cores.
V100 accelerators are ideal for training deep neural networks.
NVIDIA® Tesla® T4
NVIDIA® Tesla® T4 with tensor and RT cores is the most advanced and energy-efficient NVIDIA® accelerator for deep learning inference, video transcoding, streaming, and remote desktops.
Each accelerator has 320 tensor cores, 2560 CUDA cores, and 16 GB of memory.
T4 graphics accelerators are ideal for running neural network models in a production environment (inference), speech processing, and NLP.
In addition to tensor cores, the T4 has RT cores that perform hardware-accelerated ray tracing.
NVIDIA® RTX™ 3090
GeForce® RTX 3090 graphics cards are based on the powerful Ampere architecture and an improved RTX hardware ray tracing platform. Each accelerator has 328 tensor cores, 10496 CUDA cores, and 24 GB of memory.
NVIDIA® RTX™ 2080 Ti
GeForce® RTX 2080 Ti graphics cards are based on the powerful Turing architecture and a completely new RTX hardware ray tracing platform. Each accelerator has 544 tensor cores, 4352 CUDA cores, and 11 GB of memory.
Pre-installed images
Create virtual machines from any of the pre-installed operating systems, each with the necessary set of additional software.
Pure OpenStack API
Developers and system administrators can manage the cloud using the full OpenStack API.
Authenticate ninja_user
example:
$ curl -g -i -X POST https://api.immers.cloud:5000/v3/auth/tokens \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-d '{"auth": {"identity": {"methods": ["password"], "password": {"user": { "name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}'
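The same authentication request can be issued from Python with only the standard library. The sketch below builds the identical Keystone v3 payload (the endpoint and the ninja_user credentials are the placeholders from the curl example above) and shows where the token is returned; actually calling `authenticate()` requires valid credentials.

```python
import json
import urllib.request

# Keystone v3 password-authentication payload, identical to the curl example.
# "ninja_user" / "ninja_password" are placeholder credentials.
AUTH_URL = "https://api.immers.cloud:5000/v3/auth/tokens"

payload = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "ninja_user",
                    "password": "ninja_password",
                    "domain": {"id": "default"},
                }
            },
        },
        "scope": {
            "project": {"name": "ninja_user", "domain": {"id": "default"}}
        },
    }
}

def authenticate() -> str:
    """POST the payload; Keystone returns the token in the X-Subject-Token header."""
    req = urllib.request.Request(
        AUTH_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["X-Subject-Token"]
```

The returned token is what the later examples pass as the `X-Auth-Token` header.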
Create ninja_vm
example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"server": {"name": "ninja_vm", "imageRef": "8b85e210-d2c8-490a-a0ba-dc17183c0223", "key_name": "mykey01", "flavorRef": "8f9a148d-b258-42f7-bcc2-32581d86e1f1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cc5f6f4a-2c44-44a4-af9a-f8534e34d2b7"}]}}'
Delete ninja_vm
example:
$ curl -g -i -X DELETE https://api.immers.cloud:8774/v2.1/servers/{server_id} \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN"
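Once a token is obtained, the create and delete calls above map onto small Python helpers. This is a stdlib-only sketch; the helper names (`build_server_body`, `nova_request`) are illustrative, not part of the OpenStack API, and the image, flavor, and network UUIDs come from the examples above.

```python
import json
import urllib.request

# Nova (Compute) endpoint from the examples above.
NOVA_URL = "https://api.immers.cloud:8774/v2.1"

def build_server_body(name, image_ref, flavor_ref, network_uuid):
    """Assemble the same JSON body as the 'Create ninja_vm' curl example."""
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
            "min_count": 1,
            "max_count": 1,
            "networks": [{"uuid": network_uuid}],
        }
    }

def nova_request(method, path, token, body=None):
    """Send an authenticated request to the Nova API; returns the HTTP response."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(
        NOVA_URL + path,
        data=data,
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "X-Auth-Token": token,
        },
        method=method,
    )
    return urllib.request.urlopen(req)

# Usage (requires a valid token from the authentication step):
#   nova_request("POST", "/servers", token,
#                build_server_body("ninja_vm", IMAGE_UUID, FLAVOR_UUID, NETWORK_UUID))
#   nova_request("DELETE", "/servers/" + server_id, token)
```
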
Create ninja_network
example:
$ curl -g -i -X POST https://api.immers.cloud:9696/v2.0/networks \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"network": {"name": "ninja_net", "admin_state_up": true, "router:external": false}}'