Cloud servers with Tesla V100

Accelerate artificial intelligence, HPC, data science, and graphics workloads

Graphics servers with Tesla V100

All graphics servers with Tesla V100 are based on two 2nd-generation Intel® Xeon® Gold 6240R processors with a base clock of 2.4 GHz and a maximum Turbo Boost clock of 4.0 GHz.

Each core contains two Intel® AVX-512 units, and the processors support Intel® Deep Learning Boost (AVX-512 VNNI) instructions. This instruction set accelerates the reduced-precision multiply-accumulate operations found in the inner loops of many deep learning algorithms.
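On Linux, you can confirm that these instructions are exposed to your VM by checking the kernel's CPU flags (`avx512_vnni` is the standard Linux flag name for DL Boost). A minimal sketch; the helper name is our own:

```shell
# Report whether a cpuinfo file lists the AVX-512 VNNI (DL Boost) flag.
# Defaults to /proc/cpuinfo; a file path can be passed in for testing.
has_vnni() {
  if grep -qw avx512_vnni "${1:-/proc/cpuinfo}"; then
    echo yes
  else
    echo no
  fi
}
```

Running `has_vnni` on one of these servers should print `yes` when the flag is passed through to the guest.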

Each server has up to 3072 GB of DDR4 ECC Registered 2933 MHz RAM. Local storage with a total capacity of 1920 GB is provided by Intel® solid-state drives designed specifically for data centers.

GPU Tesla V100

Equipped with 640 Tensor Cores, the Tesla V100 was the first accelerator to break the barrier of 100 teraoperations per second (TOPS) in deep learning workloads. Models that took weeks or months to train on previous-generation systems can now be trained in just a few days.

High Performance Computing (HPC) is the fundamental pillar of modern science. From weather forecasting and the creation of new medicines to the search for energy sources, scientists constantly employ large computing systems to model our world and predict events within it. AI expands the capabilities of HPC, enabling scientists to analyze vast amounts of data and extract useful information where simulations alone cannot provide a complete picture of what is happening.

The Tesla V100 accelerator is designed to fuse HPC and AI. It is a solution for HPC systems that excels both at running simulations and at processing data to extract insight from it. By combining CUDA cores and Tensor Cores in a single architecture, a server equipped with Tesla V100 accelerators can augment or replace hundreds of traditional CPU servers for both HPC and AI workloads. Now every scientist can access a supercomputer to tackle the most challenging problems.

Video memory capacity: 32 GB
Video memory type: HBM2 (ECC)
Memory bandwidth: 900 GB/s
Tensor Cores: 640
CUDA cores: 5120

GPU performance benchmarks

Benchmark results in a virtualized environment for a single Tesla V100 card:
  • OctaneBench 2020: up to 360 pts
  • Matrix multiply example: 2430 GFlop/s
  • Hashcat bcrypt: 46 600 H/s
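The matrix-multiply figure can be sanity-checked from first principles: an N×N multiplication performs about 2·N³ floating-point operations, so GFlop/s = 2N³ / (time × 10⁹). A small helper for that arithmetic (the function name and interface are our own, not from any benchmark tool):

```shell
# Print estimated GFlop/s for an N x N matrix multiply that took T seconds.
# Usage: matmul_gflops N T_SECONDS
matmul_gflops() {
  awk -v n="$1" -v t="$2" 'BEGIN { printf "%.0f\n", 2 * n * n * n / t / 1e9 }'
}
```

At the quoted 2430 GFlop/s, an 8192×8192 multiply (roughly 1.1 teraflop of work) completes in under half a second.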

Basic configurations with Tesla V100 32 GB

Prices:
Name                      vCPU   RAM, MB   Disk, GB   GPU   Price, hour
teslav100-1.8.16.128         8     16384        128     1    $1.08
teslav100-1.8.64.60          8     65536         60     1    $1.17
teslav100-1.8.64.80          8     65536         80     1    $1.17
teslav100-1.8.64.160         8     65536        160     1    $1.18
teslav100-1.8.64.320         8     65536        320     1    $1.20
teslav100-1.12.64.160       12     65536        160     1    $1.20
teslav100-1.16.64.160       16     65536        160     1    $1.23
teslav100-1.16.64.200       16     65536        200     1    $1.23
teslav100-1.16.64.320       16     65536        320     1    $1.25
teslav100-1.16.128.160      16    131072        160     1    $1.36
teslav100-1.32.128.160      32    131072        160     1    $1.45
teslav100-1.64.256.320      64    262144        320     1    $1.92
teslav100-2.8.64.240         8     65536        240     2    $2.17
teslav100-2.16.64.240       16     65536        240     2    $2.22
teslav100-2.32.128.160      32    131072        160     2    $2.44
teslav100-2.32.128.320      32    131072        320     2    $2.46
teslav100-2.32.192.160      32    196608        160     2    $2.56
teslav100-2.32.192.320      32    196608        320     2    $2.58
teslav100-2.32.256.320      32    262144        320     2    $2.71
teslav100-3.64.256.320      64    262144        320     3    $3.89
teslav100-4.32.64.160       32     65536        160     4    $4.28
teslav100-4.32.96.160       32     98304        160     4    $4.35
teslav100-4.32.256.160      32    262144        160     4    $4.66
teslav100-4.32.256.320      32    262144        320     4    $4.68
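The flavor names appear to follow the pattern teslav100-&lt;GPUs&gt;.&lt;vCPU&gt;.&lt;RAM GB&gt;.&lt;Disk GB&gt; (our reading of the table above, not an official naming specification). A small helper to decode them:

```shell
# Decode a flavor name such as teslav100-2.32.128.320 into its resources.
# Assumes the <GPUs>.<vCPU>.<RAM_GB>.<Disk_GB> pattern inferred from the price table.
parse_flavor() {
  echo "${1#teslav100-}" | awk -F. '{ printf "GPUs=%s vCPU=%s RAM=%sGB Disk=%sGB\n", $1, $2, $3, $4 }'
}
```

For example, `parse_flavor teslav100-2.32.128.320` prints `GPUs=2 vCPU=32 RAM=128GB Disk=320GB`, matching that row of the table.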

100% Performance

Each physical core or GPU adapter is dedicated to a single client.
This means:

  • 100% vCPU time is available;
  • Physical pass-through of GPUs inside virtual servers;
  • Reduced storage and network load on hypervisors, delivering more storage and network performance to clients.

Up to 75 000 IOPS¹ for random read and up to 20 000 IOPS for random write on Virtual Servers with local SSDs

Up to 70 000 IOPS¹ for random read and up to 60 000 IOPS for random write on Virtual Servers with block storage volumes

You can be confident that Virtual Servers do not share vCPU or GPU resources with one another.

  1. IOPS — Input/Output Operations Per Second.
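Figures like these can be reproduced with the fio benchmarking tool. A sketch of a job file for the random-read case (the device path and sizes are placeholders; point it at a data disk, never the system disk):

```ini
; hypothetical fio job: 4K random reads at high queue depth
[randread]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
numjobs=4
group_reporting=1
time_based=1
runtime=60
size=10G
; placeholder: replace with your own data disk
filename=/dev/vdb
```

Run it with `fio randread.fio` and compare the IOPS reported in the summary.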

Answers to frequently asked questions

You can host a virtual server for any duration. Simply make a payment starting from $1.10 and work within the prepaid balance. When you're finished, delete the server to stop incurring charges.

Yes, you can create GPU-servers through the control panel by choosing the hardware configuration and operating system. The ordered resources will be available for use within a few minutes.

If something goes wrong, contact our tech support. We are available 24/7: https://t.me/immerscloudsupport.

You can choose from the following basic images: Windows Server 2019, Windows Server 2022, Ubuntu, Debian, CentOS, Fedora, and OpenSUSE, or use a pre-configured image from the Marketplace.

All operating systems are installed automatically when the GPU-server is created.

By default, we provide RDP access for Windows-based servers and SSH access for Linux-based servers.

You can configure any connection method that is convenient for you.

Yes, it is possible. Contact our 24/7 support service (https://t.me/immerscloudsupport) and tell us your desired configuration.

A Bit More About Us

  • Pay-as-you-go billing

    Pause (shelve) your VMs for free and pay only for the actual use of your VMs
  • 24/7/365 Tech Support

    Tech support is always available via chat and responds within minutes

  • Free traffic

    Speeds up to 20 Gb/s with no extra charge for incoming or outgoing traffic

  • Our Data Centers

    Built to TIER III standards
  • 100% of power is yours

    We do not share resources you purchased with other users
  • 20 000+

    Users trust us with their data and tasks
Sign up

Ready-made OS images with the required software

Create virtual servers by utilizing our pre-configured OS images with either Windows or Linux, along with specialized pre-installed software.
  • Ubuntu
  • Debian
  • CentOS
  • Fedora
  • OpenSUSE
  • MS Windows Server
  • 3ds Max
  • Cinema 4D
  • Corona
  • Deadline
  • Blender
  • Archicad
  • Ubuntu (Graphics drivers, CUDA, cuDNN)
  • MS Windows Server (Graphics drivers, CUDA, cuDNN)
  • Nginx
  • Apache
  • Git
  • Jupyter
  • Django
  • MySQL
View all the pre-installed images in the Marketplace.

Pure OpenStack API

Developers and system administrators can manage the cloud using the full OpenStack API.
Authenticate ninja_user example:

$ curl -g -i -X POST https://api.immers.cloud:5000/v3/auth/tokens \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}'

Create ninja_vm example:

$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"server": {"name": "ninja_vm", "imageRef": "8b85e210-d2c8-490a-a0ba-dc17183c0223", "key_name": "mykey01", "flavorRef": "8f9a148d-b258-42f7-bcc2-32581d86e1f1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cc5f6f4a-2c44-44a4-af9a-f8534e34d2b7"}]}}'

STOP ninja_vm example:

$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-stop" : null}'

START ninja_vm example:

$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-start" : null}'

SHELVE ninja_vm example:

$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"shelve" : null}'

DELETE ninja_vm example:

$ curl -g -i -X DELETE https://api.immers.cloud:8774/v2.1/servers/{server_id} \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN"
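After an action such as os-stop or os-start, the server's state can be confirmed with a GET to /v2.1/servers/{server_id} (same headers as above); the response JSON carries a "status" field. A minimal sketch for pulling it out without extra tooling (a quick sed parse; `jq '.server.status'` is the more robust choice if installed):

```shell
# Extract the "status" value from a Nova "show server" JSON response on stdin.
# Quick-and-dirty sed parse for one-off checks; prefer jq for scripting.
server_status() {
  sed -n 's/.*"status": *"\([A-Za-z_]*\)".*/\1/p'
}
```

Usage: `curl -s ... | server_status` prints e.g. ACTIVE, SHUTOFF, or SHELVED_OFFLOADED.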
Documentation

Any questions?

Write to us via live chat, email, or call by phone:
@immerscloudsupport
support@immers.cloud
+7 499 110-44-94


Subscribe to our newsletter

Get notifications about new promotions and special offers by email.
