Cloud servers with Tesla A100

Unsurpassed acceleration for the most demanding AI, data-analytics, and HPC workloads

GPU servers with Tesla A100

All GPU servers with Tesla A100 are built on two 3rd Gen Intel Xeon Gold 6336Y processors with a 2.4 GHz base clock and a 3.6 GHz maximum clock with Turbo Boost technology.

Each processor contains two Intel® AVX-512 units and supports Intel® Deep Learning Boost (AVX-512 VNNI). This instruction set accelerates the reduced-precision multiply-and-accumulate operations used in the inner loops of many deep-learning algorithms.
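The arithmetic that DL Boost accelerates can be sketched in NumPy. This is an illustration of the operation only, not vendor code: the AVX-512 VNNI instruction VPDPBUSD performs this unsigned-8-bit × signed-8-bit multiply with 32-bit accumulation in a single hardware instruction.

```python
import numpy as np

def int8_dot(acc: np.int32, a: np.ndarray, w: np.ndarray) -> np.int32:
    """Reduced-precision multiply-accumulate: acc += sum(a_i * w_i).

    Mirrors VPDPBUSD semantics: u8 activations times s8 weights,
    accumulated into a signed 32-bit integer.
    """
    assert a.dtype == np.uint8 and w.dtype == np.int8
    # Widen to int32 before multiplying so the products cannot overflow.
    return acc + np.int32(np.sum(a.astype(np.int32) * w.astype(np.int32)))

activations = np.array([1, 2, 3, 4], dtype=np.uint8)
weights = np.array([10, -20, 30, -40], dtype=np.int8)
print(int8_dot(np.int32(0), activations, weights))  # 1*10 - 2*20 + 3*30 - 4*40 = -100
```

The speed-up comes from packing many such 8-bit products into one vector instruction instead of widening everything to 32-bit floats first.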

Each server has up to 4,096 GB of DDR4 ECC Registered 3200 MHz RAM. Local storage with a total capacity of 1,920 GB is built on Intel® solid-state drives designed specifically for data centers.

GPU Tesla A100

The Tesla A100 GPU provides unsurpassed acceleration for AI, data analytics, and the most demanding compute workloads. The A100 is a highly productive integrated platform for AI and HPC, delivering real-time results and scalable deployments.

When training deep-learning models, tensor cores with Tensor Float 32 (TF32) support deliver up to 20x higher performance without requiring any code changes, and automatic mixed precision (FP16) adds a further 2x speed-up.

Double-precision tensor cores deliver the biggest leap in HPC performance since the introduction of GPUs. HPC applications can also use TF32 to achieve up to 11x higher throughput for single-precision operations.
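Why TF32 needs no code changes: it keeps float32's 8-bit exponent (so the numeric range is unchanged) but stores only a 10-bit mantissa. A rough sketch of the input rounding, assuming simple truncation of the 13 low mantissa bits (the actual tensor-core rounding mode may differ):

```python
import struct

def to_tf32(x: float) -> float:
    """Approximate a value as the tensor cores would see it in TF32 mode:
    same 8-bit exponent as float32, mantissa truncated from 23 to 10 bits."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= ~((1 << 13) - 1)  # zero the 13 least-significant mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_tf32(1.0))                # exactly representable: 1.0
print(to_tf32(3.141592653589793))  # pi loses low-order precision: 3.140625
```

Because range is preserved, existing FP32 code runs unmodified; only the last few digits of precision are traded for throughput.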

Data scientists need to analyze and visualize large datasets and extract valuable insights from them. Servers equipped with A100 accelerators provide the necessary computing power, backed by large amounts of high-bandwidth memory, to handle these workloads.

Video memory capacity: 80 GB
Video memory type: HBM2e
Memory bandwidth: 1,935 GB/s
Encode/decode: 1 encoder, 2 decoders (+ AV1 decode)

GPU performance benchmarks

Performance benchmark results in a virtual environment for a single Tesla A100 graphics card.
  • OctaneBench 2020: up to 500 pts
  • Matrix multiply example: 4,300 GFlop/s
  • Hashcat bcrypt: 117,000 H/s

Basic configurations with Tesla A100 80 GB

Configurations:
Name | vCPU | RAM, MB | Disk, GB | GPUs
teslaa100-1.16.64.160 | 16 | 65536 | 160 | 1
teslaa100-1.16.64.240 | 16 | 65536 | 240 | 1
teslaa100-1.16.64.320 | 16 | 65536 | 320 | 1
teslaa100-1.16.64.480 | 16 | 65536 | 480 | 1
teslaa100-1.16.128.160 | 16 | 131072 | 160 | 1
teslaa100-1.16.128.240 | 16 | 131072 | 240 | 1
teslaa100-1.16.128.320 | 16 | 131072 | 320 | 1
teslaa100-1.16.128.480 | 16 | 131072 | 480 | 1
teslaa100-1.32.128.320 | 32 | 131072 | 320 | 1
teslaa100-2.24.256.160 | 24 | 262144 | 160 | 2
teslaa100-2.24.256.240 | 24 | 262144 | 240 | 2
teslaa100-2.24.256.320 | 24 | 262144 | 320 | 2
teslaa100-2.24.256.480 | 24 | 262144 | 480 | 2
teslaa100-3.32.384.160 | 32 | 393216 | 160 | 3
teslaa100-3.32.384.240 | 32 | 393216 | 240 | 3
teslaa100-3.32.384.320 | 32 | 393216 | 320 | 3
teslaa100-3.32.384.480 | 32 | 393216 | 480 | 3
teslaa100-4.16.128.120 | 16 | 131072 | 120 | 4
teslaa100-4.16.256.120 | 16 | 262144 | 120 | 4
teslaa100-4.16.256.240 | 16 | 262144 | 240 | 4
teslaa100-4.16.256.480 | 16 | 262144 | 480 | 4
teslaa100-4.44.256.120 | 44 | 262144 | 120 | 4
teslaa100-4.44.256.240 | 44 | 262144 | 240 | 4
teslaa100-4.44.256.480 | 44 | 262144 | 480 | 4
teslaa100-4.44.512.160 | 44 | 524288 | 160 | 4
teslaa100-4.44.512.240 | 44 | 524288 | 240 | 4
teslaa100-4.44.512.320 | 44 | 524288 | 320 | 4
teslaa100-4.44.512.480 | 44 | 524288 | 480 | 4
teslaa100-4.44.512.960 | 44 | 524288 | 960 | 4
NVLink configurations:
Name | vCPU | RAM, MB | Disk, GB | GPUs
teslaa100-2.24.256.160.nvlink | 24 | 262144 | 160 | 2
teslaa100-2.24.256.320.nvlink | 24 | 262144 | 320 | 2
teslaa100-2.24.256.480.nvlink | 24 | 262144 | 480 | 2
teslaa100-4.32.384.160.nvlink | 32 | 393216 | 160 | 4
teslaa100-4.32.384.320.nvlink | 32 | 393216 | 320 | 4
teslaa100-4.32.384.480.nvlink | 32 | 393216 | 480 | 4
teslaa100-6.44.512.160.nvlink | 44 | 524288 | 160 | 6
teslaa100-6.44.512.320.nvlink | 44 | 524288 | 320 | 6
teslaa100-6.44.512.480.nvlink | 44 | 524288 | 480 | 6
teslaa100-6.44.512.960.nvlink | 44 | 524288 | 960 | 6
teslaa100-8.44.704.160.nvlink | 44 | 720896 | 160 | 8
teslaa100-8.44.704.320.nvlink | 44 | 720896 | 320 | 8
teslaa100-8.44.704.480.nvlink | 44 | 720896 | 480 | 8
teslaa100-8.44.704.960.nvlink | 44 | 720896 | 960 | 8

100% performance

Each physical core and each GPU adapter is assigned to a single client only.
This means that:

  • 100% of vCPU time is available to you;
  • GPUs are passed through physically into the virtual server;
  • Lower storage and network load on the hypervisors means more storage and network performance for each client.

Up to 75,000 IOPS¹ for random reads and up to 20,000 IOPS for random writes on virtual servers with local SSDs.

Up to 70,000 IOPS¹ for random reads and up to 60,000 IOPS for random writes on virtual servers with block-storage volumes.

You can be sure that virtual servers never share vCPUs or GPUs with each other.

¹ IOPS — input/output operations per second.
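What an IOPS figure measures can be sketched with a naive loop of random 4 KiB reads. Real storage benchmarks such as fio use direct I/O and deep queue depths; this single-threaded version (which includes page-cache effects) only illustrates the metric itself:

```python
import os
import random
import tempfile
import time

BLOCK = 4096                   # 4 KiB per read, the usual IOPS block size
FILE_SIZE = 16 * 1024 * 1024   # 16 MiB scratch file
N_READS = 2000

# Create a scratch file filled with random data.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

fd = os.open(path, os.O_RDONLY)
start = time.perf_counter()
for _ in range(N_READS):
    # Pick a random block-aligned offset and read one block from it.
    offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK
    os.pread(fd, BLOCK, offset)
elapsed = time.perf_counter() - start
os.close(fd)
os.unlink(path)

iops = N_READS / elapsed
print(f"{iops:.0f} IOPS (random 4 KiB reads, page cache included)")
```

Dividing completed operations by wall time is exactly how the random-read and random-write figures above are defined.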

Answers to frequently asked questions

You can rent a virtual server for any period. Top up your balance with any amount starting from $1.1 and work within the prepaid balance. When the work is completed, delete the server to stop being charged.

You can create GPU servers yourself in the control panel, choosing the hardware configuration and operating system. The ordered capacity is available for use within a few minutes.

If something goes wrong, write to our tech support. We are available 24/7: https://t.me/immerscloudsupport.

You can choose from the basic images — Windows Server 2019, Windows Server 2022, Ubuntu, Debian, CentOS, Fedora, OpenSUSE — or use a pre-configured image from the Marketplace.

All operating systems are installed automatically when the GPU-server is created.

By default, we provide connection to Windows-based servers via RDP and to Linux-based servers via SSH.

You can also configure any other connection method that is convenient for you.

Yes, it is possible. Contact our round-the-clock support service (https://t.me/immerscloudsupport) and tell us what configuration you need.

A bit more about us

  • Per-second billing

    and free VM pause (shelve). You pay for the actual use of your VMs
  • 24/7/365

    Tech support is always available via chat and responds within minutes

  • Free traffic

Speeds of up to 2 Gbit/s at no extra charge for incoming and outgoing traffic

  • Our data centers

    Built according to the TIER III standard
  • 100% of power is yours

We do not share the resources you purchased with other users
  • 20 000+

    Clients trust us with their data and tasks
Sign up

Ready-made OS images with the required software

Create virtual servers using our pre-configured OS images with Windows or Linux and specialized pre-installed software.
  • Operating systems: Ubuntu, Debian, CentOS, Fedora, OpenSUSE, MS Windows Server
  • Rendering: 3ds Max, Cinema 4D, Corona, Deadline, Blender, Archicad
  • GPU-ready (graphics drivers, CUDA, cuDNN): Ubuntu, MS Windows Server
  • Web and development: Nginx, Apache, Git, Jupyter, Django, MySQL
View all the pre-installed images in the Marketplace.

Pure OpenStack API

Developers and system administrators can manage the cloud using the full OpenStack API.
Authenticate ninja_user example:
$ curl -g -i -X POST https://api.immers.cloud:5000/v3/auth/tokens \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-d '{"auth": {"identity": {"methods": ["password"], "password": {"user": { "name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}'
Create ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"server": {"name": "ninja_vm", "imageRef": "8b85e210-d2c8-490a-a0ba-dc17183c0223", "key_name": "mykey01", "flavorRef": "8f9a148d-b258-42f7-bcc2-32581d86e1f1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cc5f6f4a-2c44-44a4-af9a-f8534e34d2b7"}]}}'
STOP ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-stop" : null}'
START ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-start" : null}'
SHELVE ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"shelve" : null}'
DELETE ninja_vm example:
$ curl -g -i -X DELETE https://api.immers.cloud:8774/v2.1/servers/{server_id} \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN"
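The same Keystone authentication request can be expressed in Python with only the standard library. This sketch mirrors the curl example above; on success, Keystone returns the API token in the X-Subject-Token response header.

```python
import json
import urllib.request

AUTH_URL = "https://api.immers.cloud:5000/v3/auth/tokens"

def build_auth_payload(user: str, password: str, domain_id: str = "default") -> dict:
    """Keystone v3 password-auth body, scoped to the user's project."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": user,
                        "password": password,
                        "domain": {"id": domain_id},
                    }
                },
            },
            "scope": {"project": {"name": user, "domain": {"id": domain_id}}},
        }
    }

def get_token(user: str, password: str) -> str:
    """POST the auth payload and return the token from X-Subject-Token."""
    req = urllib.request.Request(
        AUTH_URL,
        data=json.dumps(build_auth_payload(user, password)).encode(),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["X-Subject-Token"]

if __name__ == "__main__":
    # Print the request body without sending it (no credentials here).
    print(json.dumps(build_auth_payload("ninja_user", "ninja_password"), indent=2))
```

The returned token is what the later examples pass as the X-Auth-Token header.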
Documentation
Signup

Any questions?

Write to us via live chat, email, or call by phone:
@immerscloudsupport
support@immers.cloud
+7 499 110-44-94


Subscribe to our newsletter

Get notifications about new promotions and special offers by email.
