Subscribe to pre-order cloud servers with Tesla A-series graphics accelerators

Launch information

Up-to-date information about the launch of new virtual servers with Tesla A10, A40, A100 graphics accelerators.

Name    Status                Plan
A10     Servers are running
A40
A100    Equipment purchase    October 2022

Notifications about commissioning

Subscribe to notifications about the launch of new servers with Tesla A-Series graphics accelerators on the immers.cloud platform.

 I agree to the processing of personal data

Ampere architecture

Tesla A100, A40 and A10 graphics accelerators with tensor cores are built on the Ampere architecture.

The new CUDA cores double single-precision floating-point (FP32) throughput. This significantly speeds up graphics and video workloads, as well as the modeling of complex 3D scenes in computer-aided design (CAD) software.

Second-generation RT cores can run ray tracing concurrently with shading or denoising. This accelerates photorealistic rendering of film content, evaluation of architectural designs and motion rendering, producing more accurate images faster.

Support for Tensor Float 32 (TF32) operations speeds up the training of artificial intelligence (AI) models and data processing by up to 5 times compared to the previous generation, without any code changes. Tensor cores also power AI-based technologies such as DLSS, denoising, and photo and video editing features in some applications.

PCI Express Gen 4 doubles the bandwidth of PCIe Gen 3, speeding up data transfer from system memory for resource-intensive tasks such as AI, data processing and 3D graphics.

Ultra-fast GDDR6 memory gives scientists, engineers and data science specialists the resources they need for processing large data sets and running simulations.
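
Once a GPU server is running, you can check which accelerator it has, how much video memory is installed and the current PCIe link generation with nvidia-smi (a minimal sketch; it assumes the NVIDIA driver is already installed, for example from a pre-configured Marketplace image):
$ nvidia-smi --query-gpu=name,memory.total,driver_version,pcie.link.gen.current --format=csv
# prints the GPU model, total video memory, driver version and current PCIe generation as CSV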

GPU Tesla A100 40 GB and 80 GB

The Tesla A100 GPU provides unmatched acceleration for AI, data analytics and the most demanding computing workloads. The A100 is a highly productive integrated platform for AI and HPC, delivering results in real time and allowing you to deploy scalable solutions.

When training deep learning models, tensor cores with Tensor Float 32 (TF32) support increase performance by up to 20 times without any code changes, and automatic mixed precision with FP16 provides an additional 2x speed-up.

Double-precision tensor cores deliver the biggest leap in HPC performance since the introduction of GPUs. HPC applications can also use TF32 to achieve up to 11 times higher throughput for single-precision dense matrix-multiply operations.

Data science specialists need to analyze and visualize large data sets and extract valuable insights from them. Servers equipped with A100 accelerators provide the computing power these workloads require, thanks to large amounts of high-speed, high-bandwidth memory.

Video memory capacity: 40 GB / 80 GB
Video memory type: HBM2 / HBM2e
Memory bandwidth: 1,555 GB/s / 1,935 GB/s
Encode/decode: 1 encoder, 2 decoders (+AV1 decode)

GPU Tesla A40 48 GB

The Tesla A40 graphics accelerator delivers a significant performance jump by combining advanced graphics and compute capabilities with AI acceleration for modern science, design and graphics workloads. The Tesla A40 provides state-of-the-art capabilities for ray-traced rendering, simulation, work in virtual environments and other tasks.

Dedicated hardware encoders (NVENC) and decoders (NVDEC) provide the performance needed to handle multiple streams simultaneously and to export video faster, enabling broadcasting, video-security and streaming applications.
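
As an illustration, a hardware-accelerated transcode with ffmpeg might look like the following sketch (it assumes an ffmpeg build with NVENC/NVDEC support and uses hypothetical file names):
$ ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
  -c:v h264_nvenc -preset p5 -b:v 8M -c:a copy output.mp4
# decodes on NVDEC, encodes on NVENC and copies the audio stream unchanged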

Video memory capacity: 48 GB
Video memory type: GDDR6 with ECC
Memory bandwidth: 696 GB/s
Encode/decode: 1 encoder, 2 decoders (+AV1 decode)

GPU Tesla A10 24 GB

The servers are already running. More detailed information is available on the page GPU servers with Tesla A10.

100% performance

Each physical core and each GPU adapter is assigned to only a single client.
This means that:

Up to 75,000 IOPS¹ for random reads and up to 20,000 IOPS for random writes on virtual machines with local SSDs.

Up to 22,500 IOPS¹ for random reads and up to 20,000 IOPS for random writes on virtual machines with block storage volumes.

You can be sure that virtual machines never share vCPUs or GPUs with each other. The disk figures can be reproduced with fio; see the example after the footnote below.

  1. IOPS — Input/Output Operations Per Second.
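
A minimal random-read test with the fio utility might look like this sketch (the test file path and sizes are hypothetical; install fio from your distribution's repositories, and switch --rw=randread to --rw=randwrite for the write test):
$ fio --name=randread --filename=/mnt/test/fio.dat --size=4G \
  --rw=randread --bs=4k --direct=1 --ioengine=libaio \
  --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
# reports aggregate random-read IOPS for the test file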

Answers to frequently asked questions

You can rent a virtual server for any period. Top up your balance with any amount from $1.70 and work within the prepaid funds. When the work is completed, delete the server to stop spending money.

You create GPU servers yourself in the control panel, choosing the hardware configuration and operating system. As a rule, the ordered capacity is ready for use within a few minutes.

If something goes wrong, write to our round-the-clock support service: https://t.me/immerscloudsupport.

You can choose from the basic images: Windows Server 2019, Windows Server 2022, Ubuntu, Debian, CentOS, Fedora, OpenSUSE, or use a pre-configured image from the Marketplace.

All operating systems are installed automatically when the GPU server is created.

By default, connections to Windows-based servers are made via RDP, and connections to Linux-based servers via SSH.

You can also configure any other connection method that is convenient for you; an example is shown below.
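
For example, with a hypothetical server address and the key name from the API example below, connections might look like this (the default user name depends on the chosen image):
$ ssh -i ~/.ssh/mykey01 ubuntu@203.0.113.10                     # Linux server over SSH, key-based login
$ xfreerdp /v:203.0.113.10 /u:Administrator /p:'YOUR-PASSWORD'  # Windows server over RDP (FreeRDP client)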

Yes, it is possible. Contact our round-the-clock support service (https://t.me/immerscloudsupport) and tell us what configuration you need.

Why immers.cloud?

Pre-installed images

Create virtual machines based on any of the pre-installed operating systems with the necessary set of additional software.
  • Ubuntu
  • Debian
  • CentOS
  • Fedora
  • OpenSUSE
  • MS Windows Server
  • 3ds Max
  • Cinema 4D
  • Corona
  • Deadline
  • Blender
  • Archicad
  • Ubuntu (graphics drivers, CUDA, cuDNN)
  • MS Windows Server (graphics drivers, CUDA, cuDNN)
  • Nginx
  • Apache
  • Git
  • Jupyter
  • Django
  • MySQL
View all the pre-installed images in the Marketplace.
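
For the Ubuntu and MS Windows Server images that ship with graphics drivers, CUDA and cuDNN, a quick sanity check on a Linux server might look like this (a sketch; the exact packages and paths depend on the image):
$ nvcc --version              # CUDA toolkit version, if the toolkit is on the PATH
$ dpkg -l | grep -i cudnn     # installed cuDNN packages on Debian/Ubuntu-based images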

Pure OpenStack API

Developers and system administrators can manage the cloud using the full OpenStack API.
Authenticate ninja_user example:
$ curl -g -i -X POST https://api.immers.cloud:5000/v3/auth/tokens \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "User-Agent: YOUR-USER-AGENT" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}'
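The issued token is returned in the X-Subject-Token response header, so in practice you might capture it into a shell variable like this (a sketch reusing the request above):
$ TOKEN=$(curl -s -D - -o /dev/null -X POST https://api.immers.cloud:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}' \
  | awk 'tolower($1) == "x-subject-token:" {print $2}' | tr -d '\r')
# $TOKEN is the value the examples below pass in the X-Auth-Token header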
Create ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "User-Agent: YOUR-USER-AGENT" \
  -H "X-Auth-Token: YOUR-API-TOKEN" \
  -d '{"server": {"name": "ninja_vm", "imageRef": "8b85e210-d2c8-490a-a0ba-dc17183c0223", "key_name": "mykey01", "flavorRef": "8f9a148d-b258-42f7-bcc2-32581d86e1f1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cc5f6f4a-2c44-44a4-af9a-f8534e34d2b7"}]}}'
Delete ninja_vm example:
$ curl -g -i -X DELETE https://api.immers.cloud:8774/v2.1/servers/{server_id} \
  -H "User-Agent: YOUR-USER-AGENT" \
  -H "X-Auth-Token: YOUR-API-TOKEN"
Create ninja_network example:
$ curl -g -i -X POST https://api.immers.cloud:9696/v2.0/networks \
  -H "Content-Type: application/json" \
  -H "User-Agent: YOUR-USER-AGENT" \
  -H "X-Auth-Token: YOUR-API-TOKEN" \
  -d '{"network": {"name": "ninja_net", "admin_state_up": true, "router:external": false}}'
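If you prefer not to build requests by hand, the same endpoints should also work with the standard python-openstackclient; a sketch using the credentials from the examples above (whether every service is exposed to the CLI is an assumption):
$ pip install python-openstackclient
$ export OS_AUTH_URL=https://api.immers.cloud:5000/v3
$ export OS_USERNAME=ninja_user
$ export OS_PASSWORD=ninja_password
$ export OS_PROJECT_NAME=ninja_user
$ export OS_USER_DOMAIN_ID=default
$ export OS_PROJECT_DOMAIN_ID=default
$ openstack server list        # lists the servers in the ninja_user project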

Any questions?

Write to us via live chat, email, or call by phone:
@immerscloudsupport
support@immers.cloud
+7 499 110-44-94

Subscribe to our newsletter

Get notifications about new promotions and special offers by email.

 I agree to the processing of personal data