Phi-4

Phi-4 is a modern open-source language model with 14 billion parameters. Although the architectural changes relative to previous Phi versions are minimal, the model shows marked gains on tasks requiring logical reasoning and analytical skills, thanks to an innovative training approach. Unlike traditional language-model training pipelines, Phi-4 emphasizes data quality over quantity: training drew on diverse sources, including synthetic data designed specifically to develop reasoning skills, filtered documents from public sources, and acquired academic books and question-answer knowledge bases. This allows the model to achieve high performance despite its relatively small size.

Phi-4 works exclusively with textual data. Its context window is a relatively modest 16K tokens, but it supports more than 50 languages, including Russian.

Overall, Phi-4 is a versatile lightweight model; according to its developers, it is particularly effective in environments with limited memory and computational resources, as well as in latency-sensitive tasks that require near-instant responses.


Announce Date: 12.12.2024
Parameters: 14B
Context: 16K
Layers: 40
Attention Type: Full Attention
Developer: Microsoft
Transformers Version: 4.47.0
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Phi-4's capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name | Context | Type   | GPU     | TPS   | Status    | Link
phi-4      | 16,000  | Public | RTX3090 | 31.50 | AVAILABLE | chat

API access to Phi-4 endpoints

cURL:

curl https://chat.immers.cloud/v1/endpoints/phi-4/generate/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer USER_API_KEY" \
  -d '{"model": "phi-4", "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say this is a test"}
      ], "temperature": 0, "max_tokens": 150}'
PowerShell:

$response = Invoke-WebRequest https://chat.immers.cloud/v1/endpoints/phi-4/generate/chat/completions `
    -Method POST `
    -Headers @{
        "Authorization" = "Bearer USER_API_KEY"
        "Content-Type"  = "application/json"
    } `
    -Body (@{
        model    = "phi-4"
        messages = @(
            @{ role = "system"; content = "You are a helpful assistant." },
            @{ role = "user"; content = "Say this is a test" }
        )
    } | ConvertTo-Json -Depth 5)
($response.Content | ConvertFrom-Json).choices[0].message.content
Python:

# pip install openai --upgrade

from openai import OpenAI

client = OpenAI(
    api_key="USER_API_KEY",
    base_url="https://chat.immers.cloud/v1/endpoints/phi-4/generate/",
)

chat_response = client.chat.completions.create(
    model="phi-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say this is a test"},
    ],
)
print(chat_response.choices[0].message.content)
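Because the endpoint is OpenAI-compatible, a plain HTTP request also works without installing the SDK. The sketch below rebuilds the same chat request using only the Python standard library; the endpoint URL and the USER_API_KEY placeholder come from the examples above, while the helper names (`build_chat_request`, `send`) are illustrative, not part of the service API.

```python
import json
import urllib.request

API_URL = "https://chat.immers.cloud/v1/endpoints/phi-4/generate/chat/completions"

def build_chat_request(user_text, system_text="You are a helpful assistant.",
                       temperature=0, max_tokens=150):
    """Assemble the same JSON body used by the cURL/PowerShell examples above."""
    return {
        "model": "phi-4",
        "messages": [
            {"role": "system", "content": system_text},
            {"role": "user", "content": user_text},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def send(payload, api_key):
    """POST the payload and return the assistant reply (requires network access)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

payload = build_chat_request("Say this is a test")
# reply = send(payload, "USER_API_KEY")
```

This is handy in environments where installing third-party packages is restricted; the SDK example above remains the more convenient option otherwise.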

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
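When working with long sequences on a private instance, it helps to budget the prompt against Phi-4's 16,384-token window before sending a request. The sketch below is a minimal planning helper; the 4-characters-per-token ratio is a crude heuristic of our own, not the model's actual tokenizer, so use the real tokenizer when exact counts matter.

```python
# Rough planning helper for fitting a long prompt into Phi-4's 16,384-token
# context window. The 4-chars-per-token estimate is a heuristic assumption,
# not the model's real tokenizer.
CONTEXT_WINDOW = 16_384

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def max_completion_tokens(prompt: str, reserve: int = 256) -> int:
    """Tokens left for the completion after the prompt and a safety reserve."""
    remaining = CONTEXT_WINDOW - estimate_tokens(prompt) - reserve
    return max(0, remaining)

print(max_completion_tokens("word " * 1000))  # → 14878
```

A sensible `max_tokens` value for the API request can then be capped at this estimate to avoid requests that are rejected for exceeding the context window.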

Recommended server configurations for hosting Phi-4

Prices:
Name                        | Context | Parallelism | GPUs | Price, hour | TPS    | Max Concurrency
teslaa10-1.16.32.160        | 16,384  | —           | 1    | $0.53       | 32.100 | 2.787
teslat4-2.16.32.160         | 16,384  | tensor      | 2    | $0.54       | —      | 4.291
teslaa2-2.16.32.160         | 16,384  | tensor      | 2    | $0.57       | 9.150  | 4.291
rtx2080ti-2.12.64.160       | 16,384  | tensor      | 2    | $0.69       | —      | 1.411
rtx3090-1.16.24.160         | 16,384  | —           | 1    | $0.83       | 50.090 | 2.787
rtx4090-1.16.32.160         | 16,384  | —           | 1    | $1.02       | 69.850 | 2.787
teslav100-1.12.64.160       | 16,384  | —           | 1    | $1.20       | —      | 5.091
rtxa5000-2.16.64.160.nvlink | 16,384  | tensor      | 2    | $1.23       | —      | 8.899
rtx3080-3.16.64.160         | 16,384  | pipeline    | 3    | $1.43       | —      | 2.915
rtx5090-1.16.64.160         | 16,384  | —           | 1    | $1.59       | 83.030 | 5.091
rtx3080-4.16.64.160         | 16,384  | tensor      | 4    | $1.82       | —      | 4.995
teslaa100-1.16.64.160       | 16,384  | —           | 1    | $2.37       | 53.030 | 18.915
h100-1.16.64.160            | 16,384  | —           | 1    | $3.83       | 63.450 | 18.915
h100nvl-1.16.96.160         | 16,384  | —           | 1    | $4.11       | —      | 22.947
h200-1.16.128.160           | 16,384  | —           | 1    | $4.74       | —      | 36.483
Prices:
Name                        | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency
teslaa10-1.16.32.160        | 16,384  | —           | 1    | $0.53       | —   | 1.731
teslat4-2.16.32.160         | 16,384  | tensor      | 2    | $0.54       | —   | 3.235
teslaa2-2.16.32.160         | 16,384  | tensor      | 2    | $0.57       | —   | 3.235
rtx3090-1.16.24.160         | 16,384  | —           | 1    | $0.83       | —   | 1.731
rtx2080ti-3.12.24.120       | 16,384  | pipeline    | 3    | $0.84       | —   | 2.723
rtx4090-1.16.32.160         | 16,384  | —           | 1    | $1.02       | —   | 1.731
rtx2080ti-4.16.32.160       | 16,384  | tensor      | 4    | $1.12       | —   | 5.091
teslav100-1.12.64.160       | 16,384  | —           | 1    | $1.20       | —   | 4.035
rtxa5000-2.16.64.160.nvlink | 16,384  | tensor      | 2    | $1.23       | —   | 7.843
rtx3080-3.16.64.160         | 16,384  | pipeline    | 3    | $1.43       | —   | 1.859
rtx5090-1.16.64.160         | 16,384  | —           | 1    | $1.59       | —   | 4.035
rtx3080-4.16.64.160         | 16,384  | tensor      | 4    | $1.82       | —   | 3.939
teslaa100-1.16.64.160       | 16,384  | —           | 1    | $2.37       | —   | 17.859
h100-1.16.64.160            | 16,384  | —           | 1    | $3.83       | —   | 17.859
h100nvl-1.16.96.160         | 16,384  | —           | 1    | $4.11      | —   | 21.891
h200-1.16.128.160           | 16,384  | —           | 1    | $4.74       | —   | 35.427
Prices:
Name                        | Context | Parallelism | GPUs | Price, hour | TPS    | Max Concurrency
teslat4-3.32.64.160         | 16,384  | pipeline    | 3    | $0.88       | —      | 2.045
teslaa10-2.16.64.160        | 16,384  | tensor      | 2    | $0.93       | 29.000 | 2.845
teslat4-4.16.64.160         | 16,384  | tensor      | 4    | $0.96       | —      | 5.853
teslaa2-3.32.128.160        | 16,384  | pipeline    | 3    | $1.06       | —      | 2.045
rtxa5000-2.16.64.160.nvlink | 16,384  | tensor      | 2    | $1.23       | —      | 2.845
teslaa2-4.32.128.160        | 16,384  | tensor      | 4    | $1.26       | —      | 5.853
rtx3090-2.16.64.160         | 16,384  | tensor      | 2    | $1.56       | 46.560 | 2.845
rtx4090-2.16.64.160         | 16,384  | tensor      | 2    | $1.92       | 55.000 | 2.845
teslav100-2.16.64.240       | 16,384  | tensor      | 2    | $2.22       | —      | 7.453
teslaa100-1.16.64.160       | 16,384  | —           | 1    | $2.37       | 45.550 | 12.861
rtx5090-2.16.64.160         | 16,384  | tensor      | 2    | $2.93       | 73.390 | 7.453
h100-1.16.64.160            | 16,384  | —           | 1    | $3.83       | 53.090 | 12.861
h100nvl-1.16.96.160         | 16,384  | —           | 1    | $4.11       | 87.210 | 16.893
h200-1.16.128.160           | 16,384  | —           | 1    | $4.74       | —      | 30.429


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.