e5-mistral-7b-instruct is a 7-billion-parameter text embedding model built on Mistral-7B-v0.1 and fine-tuned with instruction tuning on a diverse mixture of multilingual datasets. Optimized primarily for English, it accepts natural-language task instructions alongside the input text, producing semantically rich embeddings suited to information retrieval, semantic search, and question answering. The model uses a 32-layer transformer architecture with an embedding size of 4096 and supports input sequences of up to 32,000 tokens, allowing it to handle long-form content. It performs strongly across a range of benchmarks, including the Massive Text Embedding Benchmark (MTEB), where it ranks competitively among large-scale embedding models.
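The model's instruction support works by prepending a one-line task description to each query (documents are embedded as-is, without an instruction). A minimal sketch of that convention, following the format described on the model card; the task string below is an illustrative example:

```python
def build_query(task: str, query: str) -> str:
    """Prepend a one-line task instruction to a query, per the
    e5-mistral-7b-instruct input convention."""
    return f"Instruct: {task}\nQuery: {query}"


# Only queries carry the instruction; passages/documents are embedded raw.
task = "Given a web search query, retrieve relevant passages that answer the query"
text = build_query(task, "how much protein should a female eat")
print(text)
```

The resulting string is what gets tokenized and fed to the model in place of the bare query.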
The listing tracks provider, context size, max output, latency, speed, and cost; the data reflects historical performance over recent days.
API Usage
Seamlessly integrate our API into your project by following these simple steps:
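As a starting point, the call can be sketched as a standard HTTPS request. This is a minimal sketch only: the endpoint URL, the `YOUR_API_KEY` placeholder, and the OpenAI-style embeddings request/response schema are all assumptions, so substitute the actual values from your account and the provider's reference:

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/embeddings"  # placeholder; use your provider's endpoint
API_KEY = "YOUR_API_KEY"  # hypothetical credential placeholder


def build_request(texts, model="e5-mistral-7b-instruct"):
    """Serialize an embeddings request body (OpenAI-style schema assumed)."""
    payload = {"model": model, "input": texts}
    return json.dumps(payload).encode("utf-8")


def get_embeddings(texts):
    """POST the texts and return one embedding vector per input."""
    req = urllib.request.Request(
        API_URL,
        data=build_request(texts),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed response shape: {"data": [{"embedding": [...]}, ...]}
    return [item["embedding"] for item in body["data"]]
```

With a valid key and endpoint, `get_embeddings(["hello world"])` would return a list containing one 4096-dimensional vector.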