Qwen2 is the new series of Qwen large language models, featuring base and instruction-tuned models at sizes from 0.5 to 72 billion parameters. Among these, the instruction-tuned 7B Qwen2 model stands out for its strong performance. Compared with state-of-the-art open-source models and its predecessor Qwen1.5, Qwen2 surpasses most open-source models and rivals proprietary models on benchmarks covering language understanding, generation, multilingual capabilities, coding, mathematics, and reasoning.
For instructions on accessing this model or initializing it via API, please refer to our docs.
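As a concrete starting point, below is a minimal sketch of running the instruction-tuned model locally with the Hugging Face `transformers` library. The checkpoint name `Qwen/Qwen2-7B-Instruct`, the chat-template workflow, and the generation settings are assumptions for illustration; please treat the official docs as the source of truth for access and API usage.

```python
# Minimal local-inference sketch (assumptions: `transformers` is installed
# and the "Qwen/Qwen2-7B-Instruct" checkpoint is the intended model).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-7B-Instruct"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # let transformers pick a suitable dtype
    device_map="auto",    # place weights on available GPUs/CPU automatically
)

# Build a chat prompt using the model's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens before decoding the model's reply.
reply = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(reply)
```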