Cloud servers with Tesla A100

Unsurpassed acceleration for the most complex computational tasks in AI, data analysis, and HPC

Graphics servers with Tesla A100

All graphics servers with Tesla A100 are based on two Intel Xeon Gold 6240R CPUs with a base clock of 2.4 GHz and a maximum Turbo Boost clock of 4.0 GHz.

Each processor contains two Intel® AVX-512 units and supports Intel® AVX-512 Deep Learning Boost instructions. This instruction set speeds up the reduced-precision multiply-and-add operations that dominate the inner loops of deep learning algorithms.
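The speed-up comes from instructions such as AVX-512 VNNI's vpdpbusd, which fuses the multiplies and the accumulation of four 8-bit products into a single step. The Python sketch below models the arithmetic of one accumulator lane (an illustration of the operation, not the intrinsic itself):

```python
def vnni_dot_step(acc: int, a_bytes, b_bytes) -> int:
    """Model one lane of AVX-512 VNNI vpdpbusd: multiply four unsigned
    8-bit values by four signed 8-bit values and add the four products
    to a 32-bit accumulator in one fused step."""
    assert len(a_bytes) == len(b_bytes) == 4
    for a, b in zip(a_bytes, b_bytes):
        assert 0 <= a <= 255         # unsigned 8-bit operand
        assert -128 <= b <= 127      # signed 8-bit operand
        acc += a * b
    # Wrap to a signed 32-bit result, as the hardware accumulator would
    acc &= 0xFFFFFFFF
    return acc - 0x100000000 if acc >= 0x80000000 else acc

# One fused step replaces what previously took separate widening
# multiplies plus an add:
print(vnni_dot_step(0, [1, 2, 3, 4], [10, 20, 30, 40]))  # 1*10+2*20+3*30+4*40 = 300
```

Deep learning inner loops repeat this step millions of times per layer, which is why fusing it pays off.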

Each server has 768 GB of DDR4 ECC Reg 2933 MHz RAM. Local storage with a total capacity of 3200 GB is organized on Intel® solid-state drives, designed specifically for data centers.

GPU Tesla A100

The Tesla A100 GPU provides unsurpassed acceleration for AI, data analysis, and the most demanding computing tasks. The A100 is the highest-performing end-to-end platform for AI and HPC, letting you get results in real time and deploy solutions at scale.

When training deep learning models, tensor cores with Tensor Float 32 (TF32) support increase performance by up to 20 times without requiring any code changes, and automatic mixed precision with FP16 adds a further 2x speed-up.
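TF32 works without code changes because it keeps FP32's 8-bit exponent (so the dynamic range is unchanged) and only shortens the mantissa to 10 explicit bits. The Python sketch below approximates the rounding a tensor core applies to its FP32 inputs (round-to-nearest on the mantissa, ties rounding up; a simplified model, not the exact hardware behavior):

```python
import struct

def to_tf32(x: float) -> float:
    """Approximate TF32 input rounding: keep float32's 8-bit exponent,
    round the 23-bit mantissa down to TF32's 10 explicit bits."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    bits += 1 << 12            # add half of the 13 dropped mantissa bits
    bits &= ~((1 << 13) - 1)   # clear the 13 low mantissa bits
    return struct.unpack('<f', struct.pack('<I', bits))[0]

print(to_tf32(1.0))   # 1.0 (exactly representable)
print(to_tf32(0.1))   # 0.0999755859375 (nearest TF32 value)
```

The exponent range is what matters for training stability; the lost mantissa bits are why TF32 matrix units can run so much faster than full FP32.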

Double-precision tensor cores deliver the biggest leap in high-performance computing since the introduction of GPUs. HPC applications can also use TF32 to achieve up to 11 times the throughput of single-precision dense matrix operations.

Data scientists need to analyze and visualize large data sets and extract valuable insights from them. Servers equipped with A100 accelerators provide the computing power these workloads demand, thanks to large amounts of high-bandwidth memory.

Video memory capacity: 80 GB
Video memory type: HBM2e
Memory bandwidth: 1935 GB/s
Encode/decode: 1 encoder, 2 decoders (+AV1 decode)

GPU performance benchmarks

Benchmark results in a virtual environment for a single Tesla A100 graphics card:
  • OctaneBench 2020
  • Matrix multiply example
  • Hashcat bcrypt: up to 117 000

Basic configurations with Tesla A100 80 GB

Name                    vCPU  RAM, MB  Disk, GB  GPU
teslaa100-              16    65536    160       1
teslaa100-              16    65536    240       1
teslaa100-              16    65536    320       1
teslaa100-              16    65536    480       1
teslaa100-              16    131072   160       1
teslaa100-              16    131072   240       1
teslaa100-              16    131072   320       1
teslaa100-              16    131072   480       1
teslaa100-              24    262144   160       2
teslaa100-              24    262144   240       2
teslaa100-              24    262144   320       2
teslaa100-              24    262144   480       2
teslaa100-3.32.384.160  32    393216   160       3
teslaa100-3.32.384.240  32    393216   240       3
teslaa100-3.32.384.320  32    393216   320       3
teslaa100-3.32.384.480  32    393216   480       3
teslaa100-              16    131072   120       4
teslaa100-              16    262144   120       4
teslaa100-4.44.512.160  44    524288   160       4
teslaa100-4.44.512.240  44    524288   240       4
teslaa100-4.44.512.320  44    524288   320       4
teslaa100-4.44.512.480  44    524288   480       4
teslaa100-4.44.512.960  44    524288   960       4
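Judging by the rows with full names, the flavor name appears to encode GPU count, vCPUs, RAM in GB, and disk in GB (e.g. teslaa100-3.32.384.160 is 3 GPUs, 32 vCPU, 384 GB RAM, 160 GB disk). A small Python sketch of parsing that convention (the naming scheme is inferred from the table above, not documented):

```python
from typing import NamedTuple

class Flavor(NamedTuple):
    gpus: int
    vcpus: int
    ram_gb: int
    disk_gb: int

def parse_flavor(name: str) -> Flavor:
    """Parse a flavor name like 'teslaa100-3.32.384.160'.
    Assumed convention: <prefix>-<gpus>.<vcpus>.<ram_gb>.<disk_gb>"""
    _, _, spec = name.partition('-')
    gpus, vcpus, ram_gb, disk_gb = (int(p) for p in spec.split('.'))
    return Flavor(gpus, vcpus, ram_gb, disk_gb)

f = parse_flavor('teslaa100-3.32.384.160')
print(f)                # Flavor(gpus=3, vcpus=32, ram_gb=384, disk_gb=160)
print(f.ram_gb * 1024)  # 393216, matching the "RAM, MB" column
```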

100% performance

Each physical core and GPU adapter is assigned to only a single client.
This means that:

  • Available vCPU time is 100%
  • Physical pass through of GPU inside a VM
  • Lower storage and network load on the hypervisors, which means more storage and network performance for each client

Up to 75 000 IOPS¹ for random reads and up to 20 000 IOPS for random writes on virtual machines with local SSDs.

Up to 22 500 IOPS¹ for random reads and up to 20 000 IOPS for random writes on virtual machines with block storage volumes.
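An IOPS figure translates to throughput via the I/O block size. For example, 75 000 random-read IOPS at a 4 KiB block size is roughly 293 MiB/s (the 4 KiB block size here is an assumption for illustration; it is the size such benchmarks are commonly quoted at):

```python
def iops_to_mib_per_s(iops: int, block_size_kib: int = 4) -> float:
    """Convert an IOPS figure to MiB/s for a given I/O block size."""
    return iops * block_size_kib / 1024

print(iops_to_mib_per_s(75_000))  # 292.96875 MiB/s random read, local SSD
print(iops_to_mib_per_s(20_000))  # 78.125 MiB/s random write
```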

You can be sure that virtual machines never share vCPUs or GPUs with each other.

  1. IOPS — Input/Output Operations Per Second.

Answers to frequently asked questions

You can rent a virtual server for any period. Top up with any amount from $1.60 and work within the prepaid balance. When the work is complete, delete the server to stop spending money.

You create GPU servers yourself in the control panel, choosing the hardware configuration and operating system. As a rule, the ordered capacity is ready for use within a few minutes.

If something goes wrong, write to our round-the-clock support service:

You can choose from basic images: Windows Server 2019, Windows Server 2022, Ubuntu, Debian, CentOS, Fedora, OpenSUSE. Or use a pre-configured image from the Marketplace.

All operating systems are installed automatically when the GPU-server is created.

By default, we provide connections to Windows-based servers via RDP and to Linux-based servers via SSH.

You can also configure any other connection method that is convenient for you.

Yes, it is possible. Contact our round-the-clock support service and tell us what configuration you need.


  • Cheapest GPU rates

    Find it cheaper elsewhere and get a discount!
  • Discounts for prepayment

    25% and 50% discounts on prepayment for 1 and 3 months

  • Per-second billing

    Use virtual machines only for as long as you need

  • No waiting

    Automatic OS installation, virtual machines are ready in a few minutes
  • Free traffic

    Up to 1 Gb/s of incoming and outgoing traffic for free
  • Round-the-clock support

    Live chat and Telegram support, 24/7
Sign up

Pre-installed images

Create virtual machines based on any of the pre-installed operating systems with the necessary set of additional software.
  • Ubuntu
  • Debian
  • CentOS
  • Fedora
  • OpenSUSE
  • MS Windows Server
  • 3ds Max
  • Cinema 4D
  • Corona
  • Deadline
  • Blender
  • Archicad
  • Ubuntu
    Graphics drivers, CUDA, cuDNN
  • MS Windows Server
    Graphics drivers, CUDA, cuDNN
  • Nginx
  • Apache
  • Git
  • Jupyter
  • Django
  • MySQL
View all the pre-installed images in the Marketplace.

Pure OpenStack API

Developers and system administrators can manage the cloud using the full OpenStack API.
Authenticate ninja_user example:
$ curl -g -i -X POST https://YOUR-IDENTITY-ENDPOINT/v3/auth/tokens \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}'
Create ninja_vm example:
$ curl -g -i -X POST https://YOUR-COMPUTE-ENDPOINT/v2.1/servers \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"server": {"name": "ninja_vm", "imageRef": "8b85e210-d2c8-490a-a0ba-dc17183c0223", "key_name": "mykey01", "flavorRef": "8f9a148d-b258-42f7-bcc2-32581d86e1f1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cc5f6f4a-2c44-44a4-af9a-f8534e34d2b7"}]}}'
Delete ninja_vm example:
$ curl -g -i -X DELETE https://YOUR-COMPUTE-ENDPOINT/v2.1/servers/{server_id} \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN"
Create ninja_network example:
$ curl -g -i -X POST https://YOUR-NETWORK-ENDPOINT/v2.0/networks \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"network": {"name": "ninja_net", "admin_state_up": true, "router:external": false}}'
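The same authentication request can also be issued from Python. The sketch below only builds the Keystone v3 token-request body used in the first curl example; the endpoint URL is omitted, since it depends on your cloud's Identity service address:

```python
import json

def build_auth_payload(user: str, password: str, project: str,
                       domain_id: str = "default") -> str:
    """Build the Keystone v3 password-auth body from the curl example."""
    payload = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": user,
                        "password": password,
                        "domain": {"id": domain_id},
                    }
                },
            },
            "scope": {
                "project": {
                    "name": project,
                    "domain": {"id": domain_id},
                }
            },
        }
    }
    return json.dumps(payload)

body = build_auth_payload("ninja_user", "ninja_password", "ninja_user")
# POST this body to the Identity endpoint (path /v3/auth/tokens) with
# Content-Type: application/json; the token is returned in the
# X-Subject-Token response header.
print(body[:40])
```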

Any questions?

Write to us via live chat, email, Telegram (@immerscloudsupport), or call by phone:
+7 499 110-44-94
Sign up

Subscribe to our newsletter

Get notifications about new promotions and special offers by email.
