Llama-3-8B

Llama-3-8B is widely regarded as one of the most influential and popular models in the history of open-source artificial intelligence. Released on April 18, 2024, it marked a turning point by making techniques from cutting-edge AI models accessible to a broad community of researchers and developers. Llama 3 became a catalyst for the explosive growth of open-source projects and startups in the AI field, proving that open models can compete with commercial counterparts not only in customization flexibility and ease of fine-tuning, but also in output quality.

At the core of the model lies a transformer architecture that uses Grouped-Query Attention (GQA). Unlike standard Multi-Head Attention (MHA), where every query head has its own key/value head, GQA shares each key/value head across a group of query heads, which substantially reduces the memory footprint of the KV cache and accelerates generation. The model supports a context window of 8,192 tokens and uses a tokenizer with a vocabulary of 128,256 tokens, allowing it to handle complex multilingual queries efficiently.
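The memory saving comes from shrinking the KV cache: Llama-3-8B uses 32 query heads but only 8 key/value heads, so the cache holds a quarter of the key/value tensors that MHA would need. Below is a minimal NumPy sketch of the mechanism with toy dimensions (no causal masking or rotary embeddings, which the real model also applies):

```python
import numpy as np

# Illustrative grouped-query attention (GQA): 32 query heads share
# 8 key/value heads, i.e. 4 query heads per KV group. Toy dimensions;
# this is a sketch of the idea, not the production kernel.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def grouped_query_attention(q, k, v, n_q_heads, n_kv_heads):
    # q: (seq, n_q_heads, head_dim); k, v: (seq, n_kv_heads, head_dim)
    group = n_q_heads // n_kv_heads
    # Each KV head serves `group` query heads: repeat along the head axis.
    k = np.repeat(k, group, axis=1)           # (seq, n_q_heads, head_dim)
    v = np.repeat(v, group, axis=1)
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = np.einsum("qhd,khd->hqk", q, k) * scale   # per-head scores
    weights = softmax(scores, axis=-1)
    return np.einsum("hqk,khd->qhd", weights, v)       # (seq, n_q_heads, head_dim)

seq, hq, hkv, d = 6, 32, 8, 16
rng = np.random.default_rng(0)
q = rng.standard_normal((seq, hq, d))
k = rng.standard_normal((seq, hkv, d))   # KV cache stores only 8 heads, not 32
v = rng.standard_normal((seq, hkv, d))
print(grouped_query_attention(q, k, v, hq, hkv).shape)  # (6, 32, 16)
```

Note that only `k` and `v` need to be cached during generation, which is where the 4x reduction relative to MHA comes from.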

The uniqueness of Llama-3-8B at the time of its release was largely due to the unprecedented scale and quality of its training. The base model was pre-trained on over 15 trillion tokens, seven times more than its predecessor, Llama 2. The data, collected from publicly available sources, was carefully filtered to ensure high quality. The instruction-tuned version (-Instruct) was produced in two stages: supervised fine-tuning (SFT) on millions of examples, followed by alignment with human preferences using a combination of rejection sampling, proximal policy optimization (PPO), and direct preference optimization (DPO). This not only made the model highly helpful but also significantly reduced the rate of false refusals.
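The -Instruct variant expects prompts in the Llama 3 chat format, built from special header and end-of-turn tokens. In practice you would call `tokenizer.apply_chat_template` from the transformers library; the sketch below assembles the same string by hand purely to make the format explicit:

```python
# Build a Llama 3 chat prompt by hand. The special tokens below come from
# the Llama 3 tokenizer; real code should use tokenizer.apply_chat_template.

def build_llama3_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts."""
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model generates the reply next.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is GQA?"},
])
print(prompt)
```

Generation should stop on `<|eot_id|>`, which the model emits at the end of its turn.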

Thanks to these characteristics, Llama-3-8B still has a wide range of practical applications. A key property is the ease with which it can be fine-tuned and adapted: it serves both as a foundation for dialogue systems and text-analysis tools and as an ideal "sandbox" for research experiments in fine-tuning for specialized domains such as law or medicine. Moreover, the combination of an open commercial license, modest hardware requirements, and ease of customization has made it a go-to choice for startups and rapid prototyping, enabling efficient creation of MVPs across a wide variety of subject areas.
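Fine-tuning workflows for a model of this size usually rely on parameter-efficient methods such as LoRA rather than updating all 8B weights. The idea in a nutshell, as a NumPy sketch with toy dimensions (real adapters typically target the 4,096-dimensional attention projections and are trained with a library such as peft):

```python
import numpy as np

# Sketch of low-rank adaptation (LoRA): the frozen pretrained weight W is
# augmented with a trainable low-rank product B @ A, so only
# r * (d_in + d_out) parameters are trained instead of d_in * d_out.

rng = np.random.default_rng(42)
d_out, d_in, r, alpha = 64, 64, 8, 16     # tiny stand-ins for 4096-dim layers

W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01 # trainable down-projection, rank r
B = np.zeros((d_out, r))                  # zero-init: the delta starts at 0

def lora_forward(x):
    # Base path plus scaled low-rank correction (alpha / r scaling as in LoRA).
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 the adapted layer matches the frozen layer exactly.
print(np.allclose(lora_forward(x), W @ x))  # True
```

Here only 1,024 adapter parameters would be trained versus 4,096 in the full matrix, and the gap widens quadratically at real layer sizes.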


Announce Date: 18.04.2024
Parameters: 8B
Context: 8K
Layers: 32
Attention Type: Full Attention
Developer: Meta AI
Transformers Version: 4.40.0.dev0
License: META LLAMA 3

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Llama-3-8B capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting Llama-3-8B

Prices:

Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency
teslat4-1.16.16.160 | 8,192 | - | 1 | $0.33 | 6.56 | 1
rtx2080ti-1.10.16.500 | 8,192 | - | 1 | $0.38 | 2.06 | 1
teslaa2-1.16.32.160 | 8,192 | - | 1 | $0.38 | 6.56 | 1
teslaa10-1.16.32.160 | 8,192 | - | 1 | $0.53 | 13.76 | 1
rtx3080-1.16.32.160 | 8,192 | - | 1 | $0.57 | 1.16 | 1
rtx3090-1.16.24.160 | 8,192 | - | 1 | $0.83 | 13.76 | 1
rtx4090-1.16.32.160 | 8,192 | - | 1 | $1.02 | 13.76 | 1
teslav100-1.12.64.160 | 8,192 | - | 1 | $1.20 | 20.96 | 1
rtxa5000-2.16.64.160.nvlink | 8,192 | tensor | 2 | $1.23 | 32.86 | 1
rtx5090-1.16.64.160 | 8,192 | - | 1 | $1.59 | 20.96 | 1
teslaa100-1.16.64.160 | 8,192 | - | 1 | $2.37 | 64.16 | 1
h100-1.16.64.160 | 8,192 | - | 1 | $3.83 | 64.16 | 1
h100nvl-1.16.96.160 | 8,192 | - | 1 | $4.11 | 76.76 | 1
h200-1.16.128.160 | 8,192 | - | 1 | $4.74 | 119.06 | 1
Prices:

Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency
teslat4-1.16.16.160 | 8,192 | - | 1 | $0.33 | 3.44 | 2
teslaa2-1.16.32.160 | 8,192 | - | 1 | $0.38 | 3.44 | 2
teslaa10-1.16.32.160 | 8,192 | - | 1 | $0.53 | 10.64 | 2
rtx2080ti-2.12.64.160 | 8,192 | tensor | 2 | $0.69 | 6.34 | 2
rtx3090-1.16.24.160 | 8,192 | - | 1 | $0.83 | 10.64 | 2
rtx3080-2.16.32.160 | 8,192 | tensor | 2 | $0.97 | 4.54 | 2
rtx4090-1.16.32.160 | 8,192 | - | 1 | $1.02 | 10.64 | 2
teslav100-1.12.64.160 | 8,192 | - | 1 | $1.20 | 17.84 | 2
rtxa5000-2.16.64.160.nvlink | 8,192 | tensor | 2 | $1.23 | 29.74 | 2
rtx5090-1.16.64.160 | 8,192 | - | 1 | $1.59 | 17.84 | 2
teslaa100-1.16.64.160 | 8,192 | - | 1 | $2.37 | 61.04 | 2
h100-1.16.64.160 | 8,192 | - | 1 | $3.83 | 61.04 | 2
h100nvl-1.16.96.160 | 8,192 | - | 1 | $4.11 | 73.64 | 2
h200-1.16.128.160 | 8,192 | - | 1 | $4.74 | 115.94 | 2
Prices:

Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency
teslaa10-1.16.32.160 | 8,192 | - | 1 | $0.53 | 4.14 | 2
teslat4-2.16.32.160 | 8,192 | tensor | 2 | $0.54 | 8.84 | 2
teslaa2-2.16.32.160 | 8,192 | tensor | 2 | $0.57 | 8.84 | 2
rtx3090-1.16.24.160 | 8,192 | - | 1 | $0.83 | 4.14 | 2
rtx2080ti-3.12.24.120 | 8,192 | pipeline | 3 | $0.84 | 7.24 | 2
rtx4090-1.16.32.160 | 8,192 | - | 1 | $1.02 | 4.14 | 2
rtx2080ti-4.16.32.160 | 8,192 | tensor | 4 | $1.12 | 14.64 | 2
teslav100-1.12.64.160 | 8,192 | - | 1 | $1.20 | 11.34 | 2
rtxa5000-2.16.64.160.nvlink | 8,192 | tensor | 2 | $1.23 | 23.24 | 2
rtx3080-3.16.64.160 | 8,192 | pipeline | 3 | $1.43 | 4.54 | 2
rtx5090-1.16.64.160 | 8,192 | - | 1 | $1.59 | 11.34 | 2
rtx3080-4.16.64.160 | 8,192 | tensor | 4 | $1.82 | 11.04 | 2
teslaa100-1.16.64.160 | 8,192 | - | 1 | $2.37 | 54.54 | 2
h100-1.16.64.160 | 8,192 | - | 1 | $3.83 | 54.54 | 2
h100nvl-1.16.96.160 | 8,192 | - | 1 | $4.11 | 67.14 | 2
h200-1.16.128.160 | 8,192 | - | 1 | $4.74 | 109.44 | 2

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.