Cloud servers with Tesla T4

Deep learning of neural networks and high-performance video processing

Graphics servers with Tesla T4

All graphics servers with Tesla T4 are based on two 2nd Generation Intel® Xeon® Gold 6240R CPUs with a base clock speed of 2.4 GHz and a maximum Turbo Boost clock speed of 4.0 GHz.

Each processor contains two Intel® AVX-512 units and supports the Intel® AVX-512 Deep Learning Boost instructions. This instruction set speeds up the reduced-precision multiply and add operations used in the inner loops of many deep learning algorithms.
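As an illustration of how this is typically used, here is a minimal sketch of int8 dynamic quantization in PyTorch; the model and tensor sizes are arbitrary examples, and the quantized backends (FBGEMM/oneDNN) can map the int8 matrix multiplications onto AVX-512 VNNI (Deep Learning Boost) when the CPU supports it:

import torch
import torch.nn as nn

# Toy float model; the layer sizes are illustrative only.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()

# Convert Linear layers to int8 weights with dynamic activation quantization.
qmodel = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(64, 1024)
with torch.inference_mode():
    out = qmodel(x)  # int8 matmuls run in the quantized backend, using VNNI when available
print(out.shape)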

Each server has up to 3072 GB of DDR4 ECC Registered 2933 MHz RAM. Local storage with a total capacity of 1920 GB is built on Intel® solid-state drives designed specifically for data centers.

GPU Tesla T4

The Tesla® T4 with Tensor and RT cores is one of the most advanced and energy-efficient graphics accelerators for neural network training and inference, video transcoding, streaming, and remote desktops. Each accelerator has 320 Tensor cores, 2560 CUDA cores, and 16 GB of memory.

Video memory capacity: 16 GB
Video memory type: GDDR6
Memory bandwidth: 320 GB/s
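From inside a running instance, the card and its memory can be verified directly; a minimal sketch using PyTorch (assuming the NVIDIA driver and a CUDA-enabled PyTorch build are installed):

import torch

assert torch.cuda.is_available(), "No CUDA device visible inside the VM"
props = torch.cuda.get_device_properties(0)
print(props.name)                                      # expected: Tesla T4
print(round(props.total_memory / 1024**3, 1), "GiB")   # close to 16 GB, minus reserved memory
print(props.multi_processor_count, "SMs")              # 40 SMs x 64 CUDA cores = 2560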

GPU performance benchmarks

Performance benchmark results in a virtualized environment for a single Tesla T4 graphics card:
  • OctaneBench 2020: up to 170 pts
  • Matrix multiply example: 380 GFlop/s (see the sketch after this list)
  • Hashcat bcrypt: 13 700 H/s
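A matrix-multiply throughput number of this kind can be reproduced roughly with a few lines of PyTorch; the matrix size, data type, and timing method below are illustrative and will not match the figure above exactly:

import time
import torch

n = 4096
a = torch.randn(n, n, device="cuda")
b = torch.randn(n, n, device="cuda")

for _ in range(3):                     # warm-up
    torch.matmul(a, b)
torch.cuda.synchronize()

iters = 20
start = time.perf_counter()
for _ in range(iters):
    torch.matmul(a, b)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

flops = 2 * n ** 3 * iters             # count each multiply-add as 2 operations
print(f"{flops / elapsed / 1e9:.0f} GFlop/s")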

Basic configurations with Tesla T4 16 GB

Available configurations:
Name vCPU RAM, MB Disk, GB GPU
teslat4-1.4.8.60 4 8192 60 1
teslat4-1.4.8.120 4 8192 120 1
teslat4-1.4.16.60 4 16384 60 1
teslat4-1.4.16.120 4 16384 120 1
teslat4-1.8.16.120 8 16384 120 1
teslat4-1.8.16.160 8 16384 160 1
teslat4-1.8.32.80 8 32768 80 1
teslat4-1.8.32.120 8 32768 120 1
teslat4-1.16.16.160 16 16384 160 1
teslat4-1.16.64.160 16 65536 160 1
teslat4-1.32.64.120 32 65536 120 1
teslat4-1.32.128.160 32 131072 160 1
teslat4-1.48.256.160 48 262144 160 1
teslat4-2.4.16.160 4 16384 160 2
teslat4-2.16.32.160 16 32768 160 2
teslat4-2.16.64.120 16 65536 120 2
teslat4-2.32.64.120 32 65536 120 2
teslat4-2.32.128.160 32 131072 160 2
teslat4-2.32.192.160 32 196608 160 2
teslat4-3.32.64.160 32 65536 160 3
teslat4-3.32.128.160 32 131072 160 3
teslat4-3.32.256.160 32 262144 160 3
teslat4-4.16.64.40.custom.6240R 16 65536 40 4
teslat4-4.16.64.160 16 65536 160 4
teslat4-4.16.96.160 16 98304 160 4
teslat4-4.16.128.160 16 131072 160 4
teslat4-4.32.64.160 32 65536 160 4
teslat4-4.32.128.160 32 131072 160 4
teslat4-4.48.192.160 48 196608 160 4
teslat4-4.48.256.160 48 262144 160 4

100% performance

Each physical core and GPU adapter is assigned to a single client only.
This means that:

  • 100% of the available vCPU time is yours;
  • GPUs are physically passed through to the virtual server;
  • Lower storage and network load on the hypervisors means more storage and network performance for each client.

Up to 75 000 IOPS¹ for random reads and up to 20 000 IOPS for random writes on virtual servers with local SSDs.

Up to 70 000 IOPS¹ for random reads and up to 60 000 IOPS for random writes on virtual servers with block storage volumes.

You can be sure that virtual servers do not share vCPUs or GPUs with each other.

  1. IOPS — Input/Output Operations Per Second.
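These figures can be sanity-checked from inside a VM with a short fio run; a minimal sketch driven from Python (assumes the fio package is installed; the file path, size, and runtime are illustrative):

import json
import subprocess

# 4 KiB random-read test with direct I/O; results depend on disk type and queue depth.
cmd = [
    "fio", "--name=randread", "--rw=randread", "--bs=4k",
    "--ioengine=libaio", "--iodepth=32", "--numjobs=4", "--group_reporting",
    "--direct=1", "--size=1G", "--runtime=30", "--time_based",
    "--filename=/tmp/fio-testfile", "--output-format=json",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
report = json.loads(result.stdout)
print(round(report["jobs"][0]["read"]["iops"]), "random read IOPS")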

Answers to frequently asked questions

You can rent a virtual server for any period. Top up your balance with any amount starting from $1.1 and work within the prepaid balance. When the work is completed, delete the server to stop being charged.

You can create GPU servers yourself in the control panel, choosing the hardware configuration and operating system. The ordered capacity is ready for use within a few minutes.

If something goes wrong, write to our tech support. We are available 24/7: https://t.me/immerscloudsupport.

You can choose from basic images: Windows Server 2019, Windows Server 2022, Ubuntu, Debian, CentOS, Fedora, OpenSUSE. Or use a pre-configured image from the Marketplace.

All operating systems are installed automatically when the GPU-server is created.

By default, we provide connection to Windows-based servers via RDP and to Linux-based servers via SSH.

You can also set up any other connection method that is convenient for you.

Yes, it is possible. Contact our round-the-clock support service (https://t.me/immerscloudsupport) and tell us what configuration you need.

A bit more about us

  • Per-second billing and free VM pause (shelve). You pay for the actual use of your VMs.
  • 24/7/365 tech support, always available via chat and responding within minutes.
  • Free traffic: speed up to 2 Gb/s without extra charge for incoming and outgoing traffic.
  • Our data centers are built according to the TIER III standard.
  • 100% of the power is yours: we do not share resources you purchased with other users.
  • 20 000+ clients trust us with their data and tasks.

Ready-made OS images with the required software

Create virtual servers by utilizing our pre-configured OS images with either Windows or Linux, along with specialized pre-installed software.
  • Ubuntu
  • Debian
  • CentOS
  • Fedora
  • OpenSUSE
  • MS Windows Server
  • 3ds Max
  • Cinema 4D
  • Corona
  • Deadline
  • Blender
  • Archicad
  • Ubuntu (graphics drivers, CUDA, cuDNN)
  • MS Windows Server (graphics drivers, CUDA, cuDNN)
  • Nginx
  • Apache
  • Git
  • Jupyter
  • Django
  • MySQL
View all the pre-installed images in the Marketplace.
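On the GPU-ready Ubuntu and Windows Server images, the driver, CUDA, and cuDNN stack can be checked in a few lines; a minimal sketch, assuming a CUDA-enabled PyTorch build has been installed on top of the image:

import torch

# Verify that the pre-installed GPU stack is visible to the framework.
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:  ", torch.version.cuda)
print("cuDNN version: ", torch.backends.cudnn.version())
print("GPU:           ", torch.cuda.get_device_name(0))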

Pure OpenStack API

Developers and system administrators can manage the cloud using the full OpenStack API.
Authenticate ninja_user example:
$ curl -g -i -X POST https://api.immers.cloud:5000/v3/auth/tokens \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-d '{"auth": {"identity": {"methods": ["password"], "password": {"user": { "name": "ninja_user", "password": "ninja_password", "domain": {"id": "default"}}}}, "scope": {"project": {"name": "ninja_user", "domain": {"id": "default"}}}}}'

Create ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"server": {"name": "ninja_vm", "imageRef": "8b85e210-d2c8-490a-a0ba-dc17183c0223", "key_name": "mykey01", "flavorRef": "8f9a148d-b258-42f7-bcc2-32581d86e1f1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cc5f6f4a-2c44-44a4-af9a-f8534e34d2b7"}]}}'

STOP ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-stop" : null}'

START ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"os-start" : null}'

SHELVE ninja_vm example:
$ curl -g -i -X POST https://api.immers.cloud:8774/v2.1/servers/{server_id}/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN" \
-d '{"shelve" : null}'

DELETE ninja_vm example:
$ curl -g -i -X DELETE https://api.immers.cloud:8774/v2.1/servers/{server_id} \
-H "User-Agent: YOUR-USER-AGENT" \
-H "X-Auth-Token: YOUR-API-TOKEN"

Any questions?

Write to us via live chat or email, or call us by phone:
@immerscloudsupport
support@immers.cloud
+7 499 110-44-94

