Cloud servers with Tesla A100

Unsurpassed acceleration for the most demanding AI, data analytics, and HPC workloads

Graphics servers with Tesla A100

All graphics servers with Tesla A100 are based on two 3rd Gen Intel Xeon Gold 6336Y CPUs with a 2.4 GHz base clock and a maximum Turbo Boost clock of 3.6 GHz.

Each processor contains two Intel® AVX-512 units and supports Intel® Deep Learning Boost (AVX-512 VNNI). This instruction set accelerates the reduced-precision multiply-accumulate operations that dominate the inner loops of deep learning algorithms.
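The operation these instructions accelerate can be sketched in pure Python. This is a conceptual model only, not the hardware instruction: the real VNNI dot-product instruction (VPDPBUSD) multiplies unsigned-by-signed int8 pairs and accumulates into int32 across many SIMD lanes at once.

```python
def vnni_dot(a, b):
    """Conceptual model of the VNNI multiply-accumulate step:
    int8 inputs are multiplied pairwise and summed into an int32
    accumulator. The hardware does this across 64 lanes per cycle;
    here it is a plain scalar loop for illustration."""
    assert all(-128 <= x <= 127 for x in list(a) + list(b)), "int8 range"
    acc = 0  # models the int32 accumulator
    for x, y in zip(a, b):
        acc += x * y
    return acc
```

Keeping the accumulator wider than the inputs is what lets the hardware skip intermediate widening steps, which is where the speedup comes from.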

Each server has up to 4096 GB of DDR4 ECC Registered 3200 MHz RAM. Local storage with a total capacity of 1920 GB is provided by Intel® solid-state drives designed specifically for data centers.

GPU Tesla A100

The Tesla A100 GPU delivers unsurpassed acceleration for AI, data analytics, and the most demanding compute workloads. The A100 is the highest-performing integrated platform for AI and HPC, letting you get real-time results and deploy scalable solutions.

When training deep learning models, Tensor Cores with Tensor Float 32 (TF32) support deliver up to 20x higher performance without requiring any code changes, and automatic mixed precision (including FP16) provides a further 2x speedup.
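TF32 keeps float32's 8-bit exponent range but only 10 of its 23 mantissa bits. The precision loss can be sketched in pure Python; as a simplification this sketch truncates the low mantissa bits, whereas the hardware rounds to nearest.

```python
import struct

def tf32_truncate(x: float) -> float:
    """Approximate a value at TF32 precision: keep float32's 8-bit
    exponent, zero the low 13 of 23 mantissa bits. Truncation is used
    here for simplicity; real Tensor Cores round to nearest."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    bits &= 0xFFFFE000  # clear the 13 low mantissa bits
    return struct.unpack('<f', struct.pack('<I', bits))[0]
```

Values representable in 10 mantissa bits pass through unchanged; everything else loses at most about one part in 2^10, which is why most training workloads tolerate TF32 without code changes.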

Double-precision Tensor Cores deliver the biggest leap in HPC performance since the introduction of GPUs. HPC applications can also use TF32 to achieve up to 11x higher throughput for single-precision dense matrix-multiply operations.

Data scientists need to analyze and visualize large datasets and extract valuable insights from them. Servers equipped with A100 accelerators provide the computing power, together with large amounts of high-bandwidth memory, to handle these workloads.

Video memory capacity: 80 GB
Video memory type: HBM2e
Memory bandwidth: 1935 GB/s
Encode/decode: 1 encoder, 2 decoders (+AV1 decode)

GPU performance benchmarks

Performance benchmark results in a virtual environment for a single Tesla A100 graphics card:
  • OctaneBench 2020: up to 500 pts
  • Matrix multiply example: 4300 GFlop/s
  • Hashcat bcrypt: 117,000 H/s

Basic configurations with Tesla A100 80 GB

Prices:
Name vCPU RAM, MB Disk, GB GPUs Price/hour
teslaa100-1.16.64.160 16 65536 160 1 $2.58
teslaa100-1.16.64.240 16 65536 240 1 $2.59
teslaa100-1.16.64.320 16 65536 320 1 $2.60
teslaa100-1.16.64.480 16 65536 480 1 $2.62
teslaa100-1.16.128.160 16 131072 160 1 $2.71
teslaa100-1.16.128.240 16 131072 240 1 $2.72
teslaa100-1.16.128.320 16 131072 320 1 $2.73
teslaa100-1.16.128.480 16 131072 480 1 $2.74
teslaa100-1.16.128.480.1socket 16 131072 480 1 $2.74
teslaa100-1.16.256.480 16 262144 480 1 $3.00
teslaa100-1.32.128.320 32 131072 320 1 $2.83
teslaa100-2.24.256.160 24 262144 160 2 $5.35
teslaa100-2.24.256.240 24 262144 240 2 $5.36
teslaa100-2.24.256.320 24 262144 320 2 $5.37
teslaa100-2.24.256.480 24 262144 480 2 $5.39
teslaa100-3.32.384.160 32 393216 160 3 $7.99
teslaa100-3.32.384.240 32 393216 240 3 $8.00
teslaa100-3.32.384.320 32 393216 320 3 $8.01
teslaa100-3.32.384.480 32 393216 480 3 $8.03
teslaa100-4.16.128.120 16 131072 120 4 $9.73
teslaa100-4.16.256.120 16 262144 120 4 $9.98
teslaa100-4.16.256.240 16 262144 240 4 $9.99
teslaa100-4.16.256.480 16 262144 480 4 $10.02
teslaa100-4.44.256.120 44 262144 120 4 $10.15
teslaa100-4.44.256.240 44 262144 240 4 $10.16
teslaa100-4.44.256.480 44 262144 480 4 $10.19
teslaa100-4.44.512.160 44 524288 160 4 $10.66
teslaa100-4.44.512.240 44 524288 240 4 $10.67
teslaa100-4.44.512.320 44 524288 320 4 $10.68
teslaa100-4.44.512.480 44 524288 480 4 $10.69
teslaa100-4.44.512.960 44 524288 960 4 $10.75
Prices (NVLink configurations):
Name vCPU RAM, MB Disk, GB GPUs Price/hour
teslaa100-2.24.96.160.nvlink 24 98304 160 2 $5.04
teslaa100-2.24.128.160.nvlink 24 131072 160 2 $5.10
teslaa100-2.24.192.160.nvlink 24 196608 160 2 $5.23
teslaa100-2.24.256.160.nvlink 24 262144 160 2 $5.35
teslaa100-2.24.256.320.nvlink 24 262144 320 2 $5.37
teslaa100-2.24.256.480.nvlink 24 262144 480 2 $5.39
teslaa100-4.32.384.160.nvlink 32 393216 160 4 $10.33
teslaa100-4.32.384.320.nvlink 32 393216 320 4 $10.35
teslaa100-4.32.384.480.nvlink 32 393216 480 4 $10.37
teslaa100-6.44.512.160.nvlink 44 524288 160 6 $15.34
teslaa100-6.44.512.320.nvlink 44 524288 320 6 $15.36
teslaa100-6.44.512.480.nvlink 44 524288 480 6 $15.37
teslaa100-6.44.512.960.nvlink 44 524288 960 6 $15.43
teslaa100-8.44.704.160.nvlink 44 720896 160 8 $20.40
teslaa100-8.44.704.320.nvlink 44 720896 320 8 $20.41
teslaa100-8.44.704.480.nvlink 44 720896 480 8 $20.43
teslaa100-8.44.704.960.nvlink 44 720896 960 8 $20.48
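The flavor names in the tables above encode the configuration as teslaa100-<GPUs>.<vCPUs>.<RAM GB>.<disk GB>, with optional suffixes such as nvlink or 1socket. A small parser of that convention (the helper name and returned keys are our own, not part of any API):

```python
def parse_flavor(name: str) -> dict:
    """Parse a flavor name like 'teslaa100-2.24.256.160.nvlink'
    into its components, per the naming convention used in the
    pricing tables: <model>-<gpus>.<vcpus>.<ram_gb>.<disk_gb>[.suffix...]."""
    model, spec = name.split('-', 1)
    parts = spec.split('.')
    return {
        'model': model,
        'gpus': int(parts[0]),
        'vcpus': int(parts[1]),
        'ram_gb': int(parts[2]),
        'disk_gb': int(parts[3]),
        'suffixes': parts[4:],  # e.g. ['nvlink'] or ['1socket']
    }
```

For example, teslaa100-1.16.64.160 decodes to 1 GPU, 16 vCPUs, 64 GB RAM (the 65536 MB in the table), and a 160 GB disk.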

100% Performance

Each physical core or GPU adapter is dedicated to a single client.
This means:

  • 100% vCPU time is available;
  • Physical pass-through of GPUs inside virtual servers;
  • Reduced storage and network load on hypervisors, delivering more storage and network performance to clients.

Up to 75,000 IOPS¹ for random read and up to 20,000 IOPS for random write on virtual servers with local SSDs

Up to 70,000 IOPS¹ for random read and up to 60,000 IOPS for random write on virtual servers with block storage volumes

You can be confident that Virtual Servers do not share vCPU or GPU resources with one another.

  ¹ IOPS: Input/Output Operations Per Second.

Answers to frequently asked questions

How long can I rent a server for?
You can host a virtual server for any duration. Simply make a payment starting from $1.1 and work within the prepaid balance. When you're finished, delete the server to stop incurring charges.

Can I create a GPU server myself?
Yes, you can create GPU servers through the control panel by choosing the hardware configuration and operating system. The ordered resources will be available for use within a few minutes. If something goes wrong, contact our tech support. We are available 24/7: https://t.me/immerscloudsupport.

Which operating systems are available?
You can choose from the following base images: Windows Server 2019, Windows Server 2022, Ubuntu, Debian, CentOS, Fedora, and OpenSUSE, or use a pre-configured image from the Marketplace. All operating systems are installed automatically when the GPU server is created.

How do I connect to the server?
By default, we provide RDP access for Windows-based servers and SSH access for Linux-based servers. You can configure any connection method that is convenient for you.

Can I order a custom configuration?
Yes, it is possible. Contact our 24/7 support service (at https://t.me/immerscloudsupport) and tell us your desired configuration.

A Bit More About Us

  • Pay-as-you-go billing

    and free VM pause (shelve). You only pay for the actual use of your VMs
  • 24/7/365 Tech Support

    Tech support is always available via chat and responds within minutes

  • Free traffic

    Speeds up to 20 Gb/s with no extra charge for incoming or outgoing traffic

  • Our Data Centers

    Built to TIER III standards
  • 100% of power is yours

    We do not share resources you purchased with other users
  • 20 000+

    Users trust us with their data and tasks

Ready-made OS images with the required software

Create virtual servers by utilizing our pre-configured OS images with either Windows or Linux, along with specialized pre-installed software.
  • Ubuntu
  • Debian
  • CentOS
  • Fedora
  • OpenSUSE
  • MS Windows Server
  • 3ds Max
  • Cinema 4D
  • Corona
  • Deadline
  • Blender
  • Archicad
  • Ubuntu (graphics drivers, CUDA, cuDNN)
  • MS Windows Server (graphics drivers, CUDA, cuDNN)
  • Nginx
  • Apache
  • Git
  • Jupyter
  • Django
  • MySQL
View all the pre-installed images in the Marketplace.

Pure OpenStack API

Developers and system administrators can manage the cloud using the full OpenStack API.
Authenticate ninja_user example:
$ curl -g -i -X POST https://api.immers.cloud:5000/v3/auth/tokens \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-d '{"auth": {"identity": {"methods": ["password"], "password": {"user": { "name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}'
Create ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"server": {"name": "ninja_vm", "imageRef": "8b85e210-d2c8-490a-a0ba-dc17183c0223", "key_name": "mykey01", "flavorRef": "8f9a148d-b258-42f7-bcc2-32581d86e1f1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cc5f6f4a-2c44-44a4-af9a-f8534e34d2b7"}]}}'
STOP ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-stop" : null}'
START ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-start" : null}'
SHELVE ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"shelve" : null}'
DELETE ninja_vm example:
$ curl -g -i -X DELETE https://api.immers.cloud:8774/v2.1/servers/{server_id} \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN"
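The same authentication request can be issued from Python. The helper below only builds the JSON body shown in the curl authentication example above (sending it would additionally require an HTTP client); the function name is our own.

```python
import json

def keystone_auth_body(user: str, password: str, project: str,
                       domain_id: str = 'default') -> str:
    """JSON body for POST /v3/auth/tokens (Keystone password
    authentication, scoped to a project), matching the curl example."""
    return json.dumps({'auth': {
        'identity': {
            'methods': ['password'],
            'password': {'user': {'name': user,
                                  'password': password,
                                  'domain': {'id': domain_id}}},
        },
        'scope': {'project': {'name': project,
                              'domain': {'id': domain_id}}},
    }})
```

The token returned in the response's X-Subject-Token header is then passed as X-Auth-Token to the compute endpoints, as in the server-management examples above.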
Documentation

Any questions?

Write to us via live chat, email, or call by phone:
@immerscloudsupport
support@immers.cloud
+7 499 110-44-94


Subscribe to our newsletter

Get notifications about new promotions and special offers by email.
