Llama-3.1-8B-Instruct

Meta Llama 3.1 8B is a widely used 8-billion-parameter model from the Llama 3.1 series, built for natural language processing and dialogue tasks. Architecturally, it is an optimized transformer that uses Grouped-Query Attention (GQA) for more scalable inference. The model was trained on more than 15 trillion tokens from publicly available sources, which gives it strong contextual understanding and high-quality generation.
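The idea behind GQA can be illustrated with a toy sketch: several query heads share each key/value head, so the KV cache stores far fewer heads than standard multi-head attention. The tiny dimensions below are illustrative only, not the model's real sizes (Llama 3.1 8B uses 32 query heads and 8 KV heads).

```python
import numpy as np

# Toy Grouped-Query Attention: n_q_heads query heads share
# n_kv_heads key/value heads (here 4 query heads, 2 KV heads).
n_q_heads, n_kv_heads, seq, d_head = 4, 2, 3, 8
group = n_q_heads // n_kv_heads  # query heads per shared KV head

rng = np.random.default_rng(0)
Q = rng.normal(size=(n_q_heads, seq, d_head))
K = rng.normal(size=(n_kv_heads, seq, d_head))
V = rng.normal(size=(n_kv_heads, seq, d_head))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Each query head h attends using KV head h // group.
out = np.empty_like(Q)
for h in range(n_q_heads):
    kv = h // group
    scores = Q[h] @ K[kv].T / np.sqrt(d_head)
    out[h] = softmax(scores) @ V[kv]

# The KV cache holds n_kv_heads entries instead of n_q_heads,
# halving its size in this toy setup.
```

The saving matters at inference time because the KV cache, not the weights, dominates memory growth with long contexts and many concurrent requests.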

Officially, the model supports a context window of up to 128K tokens and eight languages (Russian is not on the official list, although the model handles it well). The instruction-tuned version underwent supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to improve safety and alignment with human preferences.

The model's applications range from dialogue systems and personal assistants to coding tools and large-scale text processing. It supports native function calling (tool calling), enabling integration with external services and agent-based systems. Thanks to its permissive Llama 3.1 Community License, the model is available for both commercial and research use, making Llama 3.1 8B a practical choice for developers, researchers, and businesses looking to bring modern AI into their products and services.
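Tool calling works by advertising function schemas in the request; the model may then reply with a `tool_calls` entry instead of plain text. The sketch below builds such a request with only the standard library, reusing the endpoint URL and model name from the API examples on this page; the `get_weather` tool itself is a hypothetical illustration, not a real service.

```python
import json
from urllib import request

# Endpoint and model name as in the public-endpoint examples on this page.
API_URL = ("https://chat.immers.cloud/v1/endpoints/"
           "Llama-3-8B-GPT-4o-RU/generate/chat/completions")
API_KEY = "USER_API_KEY"

# Hypothetical tool schema the model can choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "Llama-3-8B-GPT-4o-RU",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": tools,         # advertise callable functions to the model
    "tool_choice": "auto",  # let the model decide whether to call one
}

def call_endpoint() -> dict:
    """POST the payload; the reply may contain choices[0].message.tool_calls."""
    req = request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_KEY}"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

If the model decides to call the tool, your code executes the function, appends its result as a `"tool"` role message, and sends a follow-up request so the model can compose the final answer.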


Announce Date: 23.07.2024
Parameters: 8B
Context: 128K
Layers: 32
Attention Type: Full Attention
Developer: Meta AI
Transformers Version: 4.42.3
License: LLAMA 3.1

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Llama-3.1-8B-Instruct capabilities. You can obtain an API access token on the token management page after registration and verification.
| Model Name | Context | Type | GPU | TPS | Tooling | Status | Link |
|---|---|---|---|---|---|---|---|
| ruslandev/llama-3-8b-gpt-4o-ru1.0 | 32,768 | Public | RTX4090 | 54.00 | — | AVAILABLE | chat |

API access to Llama-3.1-8B-Instruct endpoints

curl https://chat.immers.cloud/v1/endpoints/Llama-3-8B-GPT-4o-RU/generate/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer USER_API_KEY" \
  -d '{"model": "Llama-3-8B-GPT-4o-RU", "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say this is a test"}
      ], "temperature": 0, "max_tokens": 150}'
$response = Invoke-WebRequest https://chat.immers.cloud/v1/endpoints/Llama-3-8B-GPT-4o-RU/generate/chat/completions `
  -Method POST `
  -Headers @{
    "Authorization" = "Bearer USER_API_KEY"
    "Content-Type"  = "application/json"
  } `
  -Body (@{
    model    = "Llama-3-8B-GPT-4o-RU"
    messages = @(
      @{ role = "system"; content = "You are a helpful assistant." },
      @{ role = "user"; content = "Say this is a test" }
    )
  } | ConvertTo-Json)
($response.Content | ConvertFrom-Json).choices[0].message.content
# pip install openai --upgrade

from openai import OpenAI

client = OpenAI(
    api_key="USER_API_KEY",
    base_url="https://chat.immers.cloud/v1/endpoints/Llama-3-8B-GPT-4o-RU/generate/",
)

chat_response = client.chat.completions.create(
    model="Llama-3-8B-GPT-4o-RU",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say this is a test"},
    ],
)
print(chat_response.choices[0].message.content)

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
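For the custom-weights scenario, it helps to see what a LoRA adapter actually is: a low-rank update W' = W + (alpha / r) * B @ A merged into a frozen base weight. The numpy sketch below uses toy shapes, not Llama-3.1-8B's real dimensions.

```python
import numpy as np

# Toy LoRA merge: fold the low-rank adapter into the base weight.
d_out, d_in, r, alpha = 8, 8, 2, 16

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))  # frozen base weight
A = rng.normal(size=(r, d_in))      # trained low-rank factor
B = np.zeros((d_out, r))            # zero-initialized: adapter starts as a no-op

def merge_lora(W, A, B, alpha, r):
    """Return the merged weight W + (alpha / r) * B @ A for inference."""
    return W + (alpha / r) * (B @ A)

W_merged = merge_lora(W, A, B, alpha, r)
# With B still zero, the merged weight equals the base weight.
assert np.allclose(W_merged, W)
```

Merging ahead of serving removes the adapter's extra matrix multiply at inference time, which is why dedicated instances with custom weights typically deploy the merged checkpoint.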

Recommended server configurations for hosting Llama-3.1-8B-Instruct

Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| teslat4-2.16.32.160 | 131,072 | tensor | 2 | $0.54 | — | 1.154 |
| teslaa2-2.16.32.160 | 131,072 | tensor | 2 | $0.57 | 16.580 | 1.154 |
| rtx2080ti-3.12.24.120 | 131,072 | pipeline | 3 | $0.84 | — | 1.054 |
| teslaa10-2.16.64.160 | 131,072 | tensor | 2 | $0.93 | 57.830 | 2.054 |
| rtx2080ti-4.16.32.160 | 131,072 | tensor | 4 | $1.12 | — | 1.516 |
| teslav100-1.12.64.160 | 131,072 | — | 1 | $1.20 | — | 1.310 |
| rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | 72.580 | 2.054 |
| rtx3090-2.16.64.160 | 131,072 | tensor | 2 | $1.56 | 64.660 | 2.054 |
| rtx5090-1.16.64.160 | 131,072 | — | 1 | $1.59 | 124.490 | 1.310 |
| rtx3080-4.16.64.160 | 131,072 | tensor | 4 | $1.82 | — | 1.291 |
| rtx4090-2.16.64.160 | 131,072 | tensor | 2 | $1.92 | 98.610 | 2.054 |
| teslaa100-1.16.64.160 | 131,072 | — | 1 | $2.37 | — | 4.010 |
| h100-1.16.64.160 | 131,072 | — | 1 | $3.83 | 101.660 | 4.010 |
| h100nvl-1.16.96.160 | 131,072 | — | 1 | $4.11 | — | 4.797 |
| h200-1.16.128.160 | 131,072 | — | 1 | $4.74 | — | 7.441 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| teslat4-2.16.32.160 | 131,072 | tensor | 2 | $0.54 | — | 0.959 |
| teslaa2-2.16.32.160 | 131,072 | tensor | 2 | $0.57 | — | 0.959 |
| teslaa10-2.16.64.160 | 131,072 | tensor | 2 | $0.93 | — | 1.859 |
| rtx2080ti-4.16.32.160 | 131,072 | tensor | 4 | $1.12 | — | 1.321 |
| teslav100-1.12.64.160 | 131,072 | — | 1 | $1.20 | — | 1.115 |
| rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | — | 1.859 |
| rtx3090-2.16.64.160 | 131,072 | tensor | 2 | $1.56 | — | 1.859 |
| rtx5090-1.16.64.160 | 131,072 | — | 1 | $1.59 | — | 1.115 |
| rtx3080-4.16.64.160 | 131,072 | tensor | 4 | $1.82 | — | 1.096 |
| rtx4090-2.16.64.160 | 131,072 | tensor | 2 | $1.92 | — | 1.859 |
| teslaa100-1.16.64.160 | 131,072 | — | 1 | $2.37 | — | 3.815 |
| h100-1.16.64.160 | 131,072 | — | 1 | $3.83 | — | 3.815 |
| h100nvl-1.16.96.160 | 131,072 | — | 1 | $4.11 | — | 4.603 |
| h200-1.16.128.160 | 131,072 | — | 1 | $4.74 | — | 7.246 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| teslat4-3.32.64.160 | 131,072 | pipeline | 3 | $0.88 | — | 1.227 |
| teslaa10-2.16.64.160 | 131,072 | tensor | 2 | $0.93 | — | 1.383 |
| teslat4-4.16.64.160 | 131,072 | tensor | 4 | $0.96 | — | 1.971 |
| teslaa2-3.32.128.160 | 131,072 | pipeline | 3 | $1.06 | — | 1.227 |
| rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | — | 1.383 |
| teslaa2-4.32.128.160 | 131,072 | tensor | 4 | $1.26 | — | 1.971 |
| rtx3090-2.16.64.160 | 131,072 | tensor | 2 | $1.56 | — | 1.383 |
| rtx4090-2.16.64.160 | 131,072 | tensor | 2 | $1.92 | — | 1.383 |
| teslav100-2.16.64.240 | 131,072 | tensor | 2 | $2.22 | — | 2.283 |
| teslaa100-1.16.64.160 | 131,072 | — | 1 | $2.37 | 76.950 | 3.339 |
| rtx5090-2.16.64.160 | 131,072 | tensor | 2 | $2.93 | — | 2.283 |
| h100-1.16.64.160 | 131,072 | — | 1 | $3.83 | 83.790 | 3.339 |
| h100nvl-1.16.96.160 | 131,072 | — | 1 | $4.11 | 128.540 | 4.127 |
| h200-1.16.128.160 | 131,072 | — | 1 | $4.74 | — | 6.771 |

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.