BGE-M3 is a versatile embedding model developed by BAAI, distinguished by its capabilities in Multi-Functionality, Multi-Linguality, and Multi-Granularity. It uniquely supports three retrieval methods—dense retrieval, multi-vector retrieval, and sparse retrieval—within a single framework, enabling flexible information retrieval strategies. The model is trained to handle over 100 languages, facilitating robust multilingual and cross-lingual retrieval. Additionally, BGE-M3 can process inputs ranging from short sentences to long documents of up to 8,192 tokens, accommodating various text granularities. Its training incorporates a novel self-knowledge distillation approach, integrating relevance scores from different retrieval functionalities to enhance embedding quality.
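The three retrieval modes produce different relevance signals: a single dense vector per text, per-term sparse weights, and per-token (ColBERT-style) multi-vectors. A minimal sketch of how these signals are scored and combined is below; the scoring formulas follow the standard definitions (cosine similarity, lexical weight overlap, late-interaction max-sim), but the equal combination weights are an illustrative assumption, not the model's trained weighting.

```python
import math

def dense_score(q, d):
    # Dense retrieval: cosine similarity between two single embedding vectors.
    num = sum(a * b for a, b in zip(q, d))
    den = math.sqrt(sum(a * a for a in q)) * math.sqrt(sum(b * b for b in d))
    return num / den

def sparse_score(q_weights, d_weights):
    # Sparse (lexical) retrieval: sum of products of weights for terms
    # that appear in both the query and the document.
    return sum(w * d_weights[t] for t, w in q_weights.items() if t in d_weights)

def colbert_score(q_vecs, d_vecs):
    # Multi-vector (late interaction): for each query token vector, take the
    # max dot product over all document token vectors, then average.
    sims = (max(sum(a * b for a, b in zip(qv, dv)) for dv in d_vecs)
            for qv in q_vecs)
    return sum(sims) / len(q_vecs)

def hybrid_score(dense, sparse, colbert, w=(1 / 3, 1 / 3, 1 / 3)):
    # Weighted sum of the three signals; equal weights are an assumption here.
    return w[0] * dense + w[1] * sparse + w[2] * colbert
```

In practice the per-mode scores come from the model's encoder outputs; this sketch only shows how a hybrid ranking score can be assembled once those outputs exist.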
| Provider    | Context Size | Max Output | Cost | Speed      |
|-------------|--------------|------------|------|------------|
| nebius_fast | 128K         | 128K       | n/a  | 155.00 tps |
| nebius_fdt  | 128K         | 128K       | n/a  | 155.00 tps |
| nebius_slow | 128K         | 128K       | n/a  | 155.00 tps |
| nebiusf     | 128K         | 128K       | n/a  | 155.00 tps |
API Usage
You can integrate the API into your project with a few simple steps.
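A minimal sketch of calling such an embeddings API from Python is below, using only the standard library. The endpoint URL, API key placeholder, and model identifier are assumptions for illustration (an OpenAI-compatible `/v1/embeddings` route is assumed); substitute the provider's actual values from your account.

```python
import json
import urllib.request

# Placeholders: replace with the provider's real endpoint and your credential.
API_URL = "https://example.com/v1/embeddings"
API_KEY = "YOUR_API_KEY"

def build_embedding_request(texts, model="BAAI/bge-m3"):
    # Assemble an OpenAI-style embeddings payload; the model id string
    # is an assumption and may differ per provider.
    return {"model": model, "input": texts}

def embed(texts):
    # POST the payload and return the list of embedding records.
    payload = json.dumps(build_embedding_request(texts)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # Assumed response shape: {"data": [{"embedding": [...], "index": 0}, ...]}
        return json.load(resp)["data"]
```

Once `embed` returns, each record's `"embedding"` field is a dense vector you can feed into a similarity search.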