Llama-3.1-8B-Instruct

Meta Llama 3.1 8B is a widely adopted open-weight model with 8 billion parameters. It is part of the Llama 3.1 series, which delivers strong capabilities in natural language processing and dialogue. Architecturally, it is a decoder-only transformer that uses Grouped-Query Attention (GQA) for efficient, scalable inference. The model was pretrained on more than 15 trillion tokens from publicly available sources, giving it deep contextual understanding and the ability to generate high-quality responses.

Officially, the model supports a context window of up to 128K tokens and covers eight languages (Russian is not on the official list, although the model handles it well). The instruction-tuned version went through supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which provides a high degree of safety and alignment with human preferences.

The range of applications is **broad**: from building dialogue systems and personal assistants to developing tools for programming and processing large volumes of text. The model natively supports function calling (tool calling), which makes it possible to integrate with external services and build agent-based systems. Thanks to its permissive Llama 3.1 Community License, the model is available for both commercial and research use, making Llama 3.1 8B a strong choice for developers, researchers, and businesses looking to bring modern AI into their products and services.
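Function calling is typically exposed through an OpenAI-compatible chat API: the client describes available tools as JSON Schema, and the model may answer with a structured tool call instead of plain text. A minimal sketch of such a request body (the `get_weather` function is purely illustrative):

```python
import json

# Hypothetical tool definition: a weather lookup the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative name, not a real service
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

# Request body in the common OpenAI-compatible chat-completions format.
payload = {
    "model": "Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

print(json.dumps(payload, indent=2))
```

If the model decides a tool is needed, the response contains a `tool_calls` entry with the function name and JSON arguments; the client executes the function and sends the result back as a `tool` message.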


Announce Date: 23.07.2024
Parameters: 8B
Context: 128K
Layers: 32
Attention Type: Full Attention
Developer: Meta AI
Transformers Version: 4.42.3
License: LLAMA 3.1

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Llama-3.1-8B-Instruct capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
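When a public endpoint is available, it can usually be queried like any OpenAI-compatible service using the token from the token management page. A minimal sketch that only builds the HTTP request (the base URL below is a placeholder, not a real endpoint address):

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder URL
API_TOKEN = "YOUR_TOKEN"  # obtained from the token management page

body = json.dumps({
    "model": "Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
}).encode("utf-8")

req = urllib.request.Request(
    API_URL,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
    method="POST",
)

# Sending is left commented out so the sketch stays self-contained:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```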

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • to maximize endpoint performance,
  • to enable full context for long sequences,
  • to ensure top-tier security by processing data in an isolated, dedicated environment,
  • to use custom weights, such as fine-tuned models or LoRA adapters.
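As an illustration of the last point, a dedicated instance lets you serve your own fine-tune or attach a LoRA adapter. A sketch using vLLM's OpenAI-compatible server (the model path and adapter name/path are placeholders):

```shell
# Serve the base model with full 128K context and a LoRA adapter attached.
# The adapter name and path are illustrative.
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --max-model-len 131072 \
  --enable-lora \
  --lora-modules my-adapter=/path/to/lora-adapter
```

Clients can then request either the base model or the adapter by passing the corresponding name in the `model` field.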

Recommended server configurations for hosting Llama-3.1-8B-Instruct

Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| teslaa2-2.16.32.160 | 131,072 | tensor | 2 | $0.57 | 16.580 | 1.154 |
| rtx2080ti-2.16.64.160 | 131,072 | tensor | 2 | $0.71 | | 0.591 |
| teslat4-3.32.64.200 | 131,072 | pipeline | 3 | $0.88 | | 1.897 |
| rtx3080-2.16.64.160 | 131,072 | tensor | 2 | $1.03 | | 0.479 |
| rtx4090-1.32.64.160 | 131,072 | | 1 | $1.18 | | 0.860 |
| rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | | 2.054 |
| teslat4-4.48.192.320 | 131,072 | tensor | 4 | $1.43 | | 2.641 |
| rtx5090-1.32.64.160 | 131,072 | | 1 | $1.69 | | 1.310 |
| teslaa10-4.16.128.160 | 131,072 | tensor | 4 | $1.75 | | 4.441 |
| teslaa100-1.16.64.160 | 131,072 | | 1 | $2.37 | | 4.010 |
| rtx3090-4.16.128.160 | 131,072 | tensor | 4 | $3.01 | | 4.441 |
| h100-1.16.64.160 | 131,072 | | 1 | $3.83 | 101.660 | 4.010 |
| h100nvl-1.16.96.160 | 131,072 | | 1 | $4.11 | | 4.797 |
| teslaa100-2.24.256.160.nvlink | 131,072 | tensor | 2 | $4.93 | | 8.354 |
| h200-2.24.256.160.nvlink | 131,072 | tensor | 2 | $9.40 | | 15.216 |
| h200-4.32.768.480 | 131,072 | tensor | 4 | $19.23 | | 30.766 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | Max Concurrency |
|---|---|---|---|---|---|
| teslaa2-2.16.32.160 | 131,072 | tensor | 2 | $0.57 | 0.959 |
| rtx2080ti-2.16.64.160 | 131,072 | tensor | 2 | $0.71 | 0.396 |
| teslat4-3.32.64.200 | 131,072 | pipeline | 3 | $0.88 | 1.703 |
| rtx3080-2.16.64.160 | 131,072 | tensor | 2 | $1.03 | 0.284 |
| rtx4090-1.32.64.160 | 131,072 | | 1 | $1.18 | 0.665 |
| rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | 1.859 |
| teslat4-4.48.192.320 | 131,072 | tensor | 4 | $1.43 | 2.446 |
| rtx5090-1.32.64.160 | 131,072 | | 1 | $1.69 | 1.115 |
| teslaa10-4.16.128.160 | 131,072 | tensor | 4 | $1.75 | 4.246 |
| teslaa100-1.16.64.160 | 131,072 | | 1 | $2.37 | 3.815 |
| rtx3090-4.16.128.160 | 131,072 | tensor | 4 | $3.01 | 4.246 |
| h100-1.16.64.160 | 131,072 | | 1 | $3.83 | 3.815 |
| h100nvl-1.16.96.160 | 131,072 | | 1 | $4.11 | 4.603 |
| teslaa100-2.24.256.160.nvlink | 131,072 | tensor | 2 | $4.93 | 8.159 |
| h200-2.24.256.160.nvlink | 131,072 | tensor | 2 | $9.40 | 15.021 |
| h200-4.32.768.480 | 131,072 | tensor | 4 | $19.23 | 30.571 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| teslaa2-2.16.32.160 | 131,072 | tensor | 2 | $0.57 | 15.390 | 0.483 |
| rtx2080ti-2.16.64.160 | 131,072 | tensor | 2 | $0.71 | | -0.079 |
| teslat4-3.32.64.200 | 131,072 | pipeline | 3 | $0.88 | | 1.227 |
| rtx3080-2.16.64.160 | 131,072 | tensor | 2 | $1.03 | | -0.192 |
| rtx4090-1.32.64.160 | 131,072 | | 1 | $1.18 | | 0.189 |
| rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | | 1.383 |
| teslat4-4.48.192.320 | 131,072 | tensor | 4 | $1.43 | | 1.971 |
| rtx5090-1.32.64.160 | 131,072 | | 1 | $1.69 | | 0.639 |
| teslaa10-4.16.128.160 | 131,072 | tensor | 4 | $1.75 | | 3.771 |
| teslaa100-1.16.64.160 | 131,072 | | 1 | $2.37 | 76.950 | 3.339 |
| rtx3090-4.16.128.160 | 131,072 | tensor | 4 | $3.01 | | 3.771 |
| h100-1.16.64.160 | 131,072 | | 1 | $3.83 | 83.790 | 3.339 |
| h100nvl-1.16.96.160 | 131,072 | | 1 | $4.11 | 128.540 | 4.127 |
| teslaa100-2.24.256.160.nvlink | 131,072 | tensor | 2 | $4.93 | | 7.683 |
| h200-2.24.256.160.nvlink | 131,072 | tensor | 2 | $9.40 | | 14.546 |
| h200-4.32.768.480 | 131,072 | tensor | 4 | $19.23 | | 30.096 |
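Where a TPS (tokens per second) figure is listed alongside the hourly price, the two together give a rough cost per generated token. For example, taking the h100-1.16.64.160 row from the first table (about 101.66 tokens/s at $3.83/hour), a back-of-the-envelope estimate:

```python
# Rough cost per million generated tokens: hourly price / tokens per hour.
price_per_hour = 3.83   # $/hour for h100-1.16.64.160
tps = 101.660           # tokens per second from the benchmark column

tokens_per_hour = tps * 3600
cost_per_million = price_per_hour / tokens_per_hour * 1_000_000
print(f"${cost_per_million:.2f} per 1M tokens")  # roughly $10.5 per 1M tokens
```

This is an upper bound for a single stream; with concurrent requests the effective throughput, and therefore the cost per token, improves accordingly.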

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.