Deep learning for neural networks and high-performance video processing
Graphics servers with Tesla T4
All graphics servers with Tesla T4 are built on two 2nd generation Intel® Xeon® Gold 6240R processors with a base clock speed of 2.4 GHz and a maximum Turbo Boost frequency of 4.0 GHz.
Each processor contains two Intel® AVX-512 units and supports the Intel® AVX-512 Deep Learning Boost instructions. This instruction set accelerates the reduced-precision multiply and add operations that dominate the inner loops of deep learning algorithms.
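For illustration only, here is a minimal sketch of the kind of reduced-precision workload these instructions target, assuming PyTorch is installed on the server; on 2nd generation Xeon® Scalable processors, PyTorch's INT8 kernels can make use of AVX-512 VNNI (Deep Learning Boost). The model and tensor sizes are arbitrary placeholders.

import torch
import torch.nn as nn

# Small example model; layer sizes are placeholders.
model = nn.Sequential(
    nn.Linear(1024, 1024),
    nn.ReLU(),
    nn.Linear(1024, 256),
).eval()

# Replace the Linear layers with dynamically quantized versions (INT8 weights).
# On CPUs with AVX-512 VNNI, the quantized matrix multiplications can use
# the reduced-precision multiply-add instructions mentioned above.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(64, 1024)
with torch.no_grad():
    out = quantized(x)
print(out.shape)  # torch.Size([64, 256])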
Each server has up to 3072 GB of DDR4 ECC Registered 2933 MHz RAM. Local storage with a total capacity of 1920 GB is provided by Intel® solid-state drives designed specifically for data centers.
Tesla T4 GPU
The Tesla® T4, with Tensor Cores and RT Cores, is one of the most advanced and energy-efficient graphics accelerators for inference, video transcoding, streaming, and remote desktops. Each accelerator has 320 Tensor Cores, 2560 CUDA cores, and 16 GB of memory; a sketch of mixed-precision inference that exercises the Tensor Cores follows the specifications below.
Video memory capacity: 16 GB
Video memory type: GDDR6
Memory bandwidth: 320 GB/s
Tensor cores: 320
CUDA cores: 2560
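As a rough illustration (not part of the card's official documentation), the following sketch runs FP16 inference under PyTorch autocast, which routes matrix multiplications to the T4's Tensor Cores. It assumes PyTorch with CUDA support is installed inside the virtual server; the model and batch size are placeholders.

import torch
import torch.nn as nn

device = torch.device("cuda")  # the Tesla T4 passed through to the VM

# Placeholder model; any FP16-friendly network would do.
model = nn.Sequential(
    nn.Linear(2048, 2048),
    nn.ReLU(),
    nn.Linear(2048, 1000),
).to(device).eval()

x = torch.randn(32, 2048, device=device)

# Under autocast the matrix multiplications run in half precision,
# which allows them to execute on the T4's Tensor Cores.
with torch.no_grad(), torch.cuda.amp.autocast():
    logits = model(x)

print(logits.dtype, logits.shape)  # torch.float16 torch.Size([32, 1000])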
GPU performance benchmarks
Performance benchmark results in a virtual environment for a single Tesla T4 graphics card. A sketch of how the matrix-multiply figure can be estimated follows the list below.
OctaneBench 2020: up to 170 pts
Matrix multiply example: 380 GFLOP/s
Hashcat bcrypt: 13,700 H/s
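The exact methodology behind these figures is not described here, but a matrix-multiply throughput number of this kind can be estimated along the following lines, assuming PyTorch with CUDA inside the VM; the matrix size and iteration count are arbitrary.

import time
import torch

# One N x N matrix multiplication performs roughly 2 * N**3 floating-point operations.
N, iters = 4096, 20
a = torch.randn(N, N, device="cuda")
b = torch.randn(N, N, device="cuda")

torch.cuda.synchronize()
start = time.perf_counter()
for _ in range(iters):
    c = a @ b
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

gflops = 2 * N ** 3 * iters / elapsed / 1e9
print(f"~{gflops:.0f} GFLOP/s")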
Basic configurations with Tesla T4 16 GB
Each physical core or GPU adapter is dedicated to a single client. This means:
100% vCPU time is available;
Physical pass-through of GPUs to virtual servers;
Reduced storage and network load on hypervisors, delivering more storage and network performance to clients.
Up to 75,000 IOPS¹ for random read and up to 20,000 IOPS for random write on Virtual Servers with local SSDs
Up to 70,000 IOPS¹ for random read and up to 60,000 IOPS for random write on Virtual Servers with block storage volumes
You can be confident that Virtual Servers do not share vCPU or GPU resources with one another; a quick way to confirm the passed-through GPU from inside a server is shown below.
¹ IOPS — Input/Output Operations Per Second.
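A minimal sketch for such a check, assuming the NVIDIA driver and PyTorch are installed in the guest OS:

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(torch.cuda.current_device())
    # Expected on these servers: a Tesla T4 with 16 GB of memory.
    print(props.name, round(props.total_memory / 1024 ** 3), "GB")
else:
    print("No CUDA device visible - check the NVIDIA driver installation.")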
Answers to frequently asked questions
What is the minimum hosting period for a virtual GPU-server?
You can host a virtual server for any duration. Simply make a payment starting from $1.10 and work within the prepaid balance. When you're finished, delete the server to stop incurring charges.
Can I create GPU-servers myself?
Yes, you can create GPU-servers through the control panel by choosing the hardware configuration and operating system. The ordered resources will be available for use within a few minutes.
What operating systems can be installed on a virtual GPU-server?
You can choose from the following basic images: Windows Server 2019, Windows Server 2022, Ubuntu, Debian, CentOS, Fedora, and OpenSUSE, or use a pre-configured image from the Marketplace.
All operating systems are installed automatically when the GPU-server is created.
How do I connect to a virtual GPU-server?
By default, we provide RDP access for Windows-based servers and SSH access for Linux-based servers.
You can configure any connection method that is convenient for you.
Is it possible to rent a virtual GPU-server with a custom configuration?
Yes, it is possible. Contact our 24/7 support service (at https://t.me/immerscloudsupport) and tell us your desired configuration.
A Bit More About Us
Pay-as-you-go billing
Free VM pause (shelve) included. You only pay for the actual use of your VMs
24/7/365 Tech Support
Tech support is always available via chat and responds within minutes
Free traffic
Speeds up to 20 Gb/s with no extra charge for incoming or outgoing traffic
Our Data Centers
Built to TIER III standards
100% of the compute power is yours
We do not share the resources you purchase with other users