DeepSeek R1 Distill Llama 70B is a distilled model built on the Llama 70B architecture, created by DeepSeek to balance performance and efficiency. It carries over much of the reasoning capability of its far larger teacher model, DeepSeek-R1, while reducing computational overhead, making it more practical to deploy. Fine-tuned on reasoning-focused data, DeepSeek-R1-Distill performs well on code understanding and generation, mathematical reasoning, and general-purpose language tasks. Its relative compactness and versatility make it a strong candidate for AI agents, virtual assistants, and other real-world applications that require fast, intelligent responses.
| Provider | Context Size | Max Output | Cost | Speed |
|---|---|---|---|---|
| nebius_fast | 128K | 128K | €NaN/M | 155.00 tps |
| nebius_fdt | 128K | 128K | €NaN/M | 155.00 tps |
| nebius_slow | 128K | 128K | €NaN/M | 155.00 tps |
| nebiusf | 128K | 128K | €NaN/M | 155.00 tps |
API Usage
Seamlessly integrate our API into your project by following these simple steps:
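As an illustration, if the endpoint follows the OpenAI-compatible chat completions convention (common for hosted open-weight models), a request can be sent with the official `openai` Python client. This is a minimal sketch under that assumption; the base URL, API key environment variable, and model identifier below are placeholders, so substitute the values shown in your provider dashboard.

```python
# Minimal sketch of an OpenAI-compatible chat completions request.
# Assumptions: the provider exposes an OpenAI-style /v1 endpoint; the base URL,
# env variable, and model identifier below are placeholders, not confirmed values.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",   # placeholder: your provider's base URL
    api_key=os.environ["PROVIDER_API_KEY"],  # placeholder: your API key variable
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-llama-70b",   # placeholder: exact ID may differ per provider
    messages=[
        {"role": "user", "content": "Summarize the key steps of binary search."},
    ],
    max_tokens=512,
    temperature=0.6,
)

print(response.choices[0].message.content)
```

Note that R1-style distilled models typically emit a reasoning trace before the final answer, so allow a generous `max_tokens` and post-process the returned text if you only want the final response.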