Launch information
Up-to-date information about the launch of new virtual servers with Tesla A10, A40, A100 graphics accelerators.
Name | Status | Plan
---|---|---
A10 | Servers are running | —
A40 | — | —
A100 | Servers are running | —
Subscribe to notifications about the launch of new servers with Tesla A-Series graphics accelerators on the immers.cloud platform.
Tesla A100, A40 and A10 graphics accelerators with tensor cores are built on the Ampere architecture.
The new CUDA cores double single-precision floating-point (FP32) throughput, significantly accelerating graphics and video workloads as well as complex 3D modeling in computer-aided design (CAD) software.
Second-generation RT cores run ray tracing concurrently with shading or denoising, speeding up photorealistic rendering of film content, architectural visualization and motion rendering, so a more accurate image is produced faster.
Support for Tensor Float 32 (TF32) operations speeds up training of artificial intelligence (AI) models and data processing by up to 5x compared with the previous generation, with no code changes required. Tensor cores also power AI-based technologies such as DLSS, denoising, and photo and video editing features in supported applications.
PCI Express Gen 4 doubles the bandwidth of PCIe Gen 3, accelerating data transfers from CPU memory for resource-intensive workloads such as AI, data processing and 3D graphics.
Ultra-fast GDDR6 memory gives scientists, engineers and data science specialists the resources they need to process large datasets and run simulations.
The servers are up and running. More details are available on the page GPU servers with Tesla A100.
The Tesla A40 graphics accelerator delivers a significant performance jump by combining advanced graphics and compute capabilities with AI acceleration for modern science, design and graphics workloads. The Tesla A40 provides up-to-date capabilities for ray-traced rendering, simulation, virtual environments and other tasks.
Dedicated hardware encoders (NVENC) and decoders (NVDEC) provide the performance needed to handle multiple streams simultaneously, export video faster, and run applications for broadcasting, video security and streaming.
Parameter | Value
---|---
Video memory capacity | 48 GB
Video memory type | GDDR6 with ECC
Memory bandwidth | 696 GB/s
Encode/decode | 1 encoder, 2 decoders (+ AV1 decode)
The servers are up and running. More details are available on the page GPU servers with Tesla A10.
Each physical core and each GPU adapter is assigned to a single client only. This means you can be sure your virtual machines do not share vCPUs or GPUs with other tenants.
You create GPU servers yourself in the control panel, choosing the hardware configuration and the operating system. As a rule, the ordered capacity is ready to use within a few minutes.
If something goes wrong, write to our round-the-clock support service: https://t.me/immerscloudsupport. You can choose from the basic images: Windows Server 2019, Windows Server 2022, Ubuntu, Debian, CentOS, Fedora, OpenSUSE, or use a pre-configured image from the Marketplace.
All operating systems are installed automatically when the GPU server is created. By default, we provide connections to Windows-based servers via RDP and to Linux-based servers via SSH.
You can also configure any other connection method that is convenient for you.
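For example, a Linux server can be reached over SSH as soon as it is active. A minimal sketch, assuming an Ubuntu image with its default ubuntu user, a placeholder server address, and a local copy of the key pair mykey01 used in the API examples below (the user name and key path are illustrative assumptions, not immers.cloud defaults):
$ ssh -i ~/.ssh/mykey01 ubuntu@<server-ip>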
Getting an authentication token for the example user ninja_user:
$ curl -g -i -X POST https://api.immers.cloud:5000/v3/auth/tokens \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-d '{"auth": {"identity": {"methods": ["password"], "password": {"user": { "name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}'
Creating a virtual server named ninja_vm:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"server": {"name": "ninja_vm", "imageRef": "8b85e210-d2c8-490a-a0ba-dc17183c0223", "key_name": "mykey01", "flavorRef": "8f9a148d-b258-42f7-bcc2-32581d86e1f1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cc5f6f4a-2c44-44a4-af9a-f8534e34d2b7"}]}}'
Stopping the server ninja_vm:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-stop" : null}'
Starting the server ninja_vm:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-start" : null}'
Deleting the server ninja_vm:
$ curl -g -i -X DELETE https://api.immers.cloud:8774/v2.1/servers/{server_id} \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN"
Any questions? Write to us via live chat or email, or call us by phone: @immerscloudsupport, support@immers.cloud, +7 499 110-44-94.