E5 Mistral 7B

Embedding
The e5-mistral-7b-instruct is a 7-billion-parameter text embedding model built upon Mistral-7B-v0.1 and fine-tuned using instruction-based learning on a diverse set of multilingual datasets. Optimized primarily for English, it supports natural language task instructions to generate semantically rich embeddings suitable for tasks like information retrieval, semantic search, and question answering. The model features a 32-layer transformer architecture with an embedding size of 4096 and supports input sequences up to 32,000 tokens, enabling it to handle long-form content efficiently. It has demonstrated strong performance across various benchmarks, including the Massive Text Embedding Benchmark (MTEB), where it ranks competitively among large-scale embedding models.
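Queries to e5-mistral-7b-instruct are typically wrapped in a short instruction template before embedding, while documents are embedded as-is; retrieval then ranks documents by cosine similarity between vectors. The sketch below assumes that template shape (based on the model card) and uses plain Python for the similarity math; it does not call the model itself.

```python
import math

def format_query(task: str, query: str) -> str:
    # Instruction template commonly used with e5-mistral-7b-instruct;
    # documents are embedded without any prefix.
    return f"Instruct: {task}\nQuery: {query}"

def cosine_similarity(a, b) -> float:
    # Cosine similarity between two embedding vectors, used to rank
    # documents by semantic relevance to the query.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

prompt = format_query(
    "Given a web search query, retrieve relevant passages",
    "how do transformer models work",
)
```

In practice the formatted prompt is sent to the model, which returns a 4096-dimensional vector per input; the same similarity function then applies to those vectors.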
Provider comparison table (Provider, Context Size, Max Output, Latency, Speed, Cost); per-provider figures are not reproduced here.

Data reflects historical performance over recent days.

API Usage

Seamlessly integrate our API into your project by following these simple steps:

  1. Generate your API key from your profile.
  2. Copy the example code and replace the placeholder with your API key, or see our documentation for details.
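The steps above amount to attaching your key as a bearer token and posting the texts to embed. A minimal sketch of building such a request follows; the endpoint path and field names (`model`, `input`) are assumptions for illustration, not taken from the provider's documentation.

```python
import json

API_KEY = "YOUR_API_KEY"  # placeholder: replace with the key from your profile

def build_embedding_request(texts):
    # Assemble headers and a JSON body for a hypothetical embeddings
    # endpoint; check the API documentation for the real field names.
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "e5-mistral-7b-instruct",
        "input": texts,
    }
    return headers, json.dumps(payload)

headers, body = build_embedding_request(["first passage", "second passage"])
```

Any HTTP client (e.g. `urllib.request` or `requests`) can then POST `body` with `headers` to the embeddings endpoint.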

You can choose from three automatic provider selection preferences:

  • speed – Prioritizes the provider with the fastest response time.
  • cost – Selects the most cost-efficient provider.
  • balanced – Offers an optimal mix of speed and cost.
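The preference is passed as a parameter on the request. A small sketch of validating and attaching it is below; the field name `provider_preference` is hypothetical, so consult the documentation for the actual parameter.

```python
# The three documented selection modes; "provider_preference" as a
# payload field name is an assumption for illustration.
VALID_PREFERENCES = {"speed", "cost", "balanced"}

def with_provider_preference(payload: dict, preference: str) -> dict:
    # Return a copy of the request payload with the selection mode set,
    # rejecting values outside the documented options.
    if preference not in VALID_PREFERENCES:
        raise ValueError(f"unknown preference: {preference!r}")
    return {**payload, "provider_preference": preference}
```

For example, `with_provider_preference({"model": "e5-mistral-7b-instruct"}, "balanced")` yields a payload requesting the balanced mix of speed and cost.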