Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance

In this report, we introduce Falcon-H1, a new series of large language models (LLMs) featuring hybrid architecture designs optimized for both high performance and efficiency across diverse use cases. Unlike earlier Falcon models built solely on Transformer or Mamba architectures, Falcon-H1 adopts a parallel hybrid approach that combines Transformer-based attention with State Space Models (SSMs), known for superior long-context memory and computational efficiency. We systematically revisited model design, data strategy, and training dynamics, challenging conventional practices in the field. Falcon-H1 is released in multiple configurations, including base and instruction-tuned variants at 0.5B, 1.5B, 1.5B-deep, 3B, 7B, and 34B parameters. Quantized instruction-tuned models are also available, totaling over 30 checkpoints on Hugging Face Hub. Falcon-H1 models demonstrate state-of-the-art performance and exceptional parameter and training efficiency. The flagship Falcon-H1-34B matches or outperforms models up to 70B scale, such as Qwen3-32B, Qwen2.5-72B, and Llama3.3-70B, while using fewer parameters and less data. Smaller models show similar trends: Falcon-H1-1.5B-Deep rivals current leading 7B-10B models, and Falcon-H1-0.5B performs comparably to typical 7B models from 2024. These models excel across reasoning, mathematics, multilingual tasks, instruction following, and scientific knowledge. With support for up to 256K context tokens and 18 languages, Falcon-H1 is suitable for a wide range of applications. All models are released under a permissive open-source license, underscoring our commitment to accessible and impactful AI research.
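
To make the parallel hybrid idea concrete, the following is a minimal, hypothetical PyTorch sketch of a block in which an attention branch and a simplified SSM-style branch process the same normalized input and their outputs are summed into the residual stream. The diagonal linear recurrence, the summation rule, and all module and parameter names here are illustrative assumptions; they do not reproduce Falcon-H1's actual SSM mixer, channel allocation, or normalization scheme.

```python
import torch
import torch.nn as nn


class ParallelHybridBlock(nn.Module):
    """Hypothetical sketch: attention and a simplified SSM branch run in
    parallel on the same input, and their outputs are combined additively.
    Illustrative only; not the Falcon-H1 implementation."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Simplified diagonal linear state-space branch (stand-in for a Mamba-style mixer).
        self.in_proj = nn.Linear(d_model, d_model)
        self.decay = nn.Parameter(torch.rand(d_model) * 0.9)  # per-channel decay in (0, 1)
        self.out_proj = nn.Linear(d_model, d_model)

    def ssm_branch(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); recurrence h_t = decay * h_{t-1} + u_t
        u = self.in_proj(x)
        h = torch.zeros_like(u[:, 0])
        outs = []
        for t in range(u.size(1)):
            h = self.decay * h + u[:, t]
            outs.append(h)
        return self.out_proj(torch.stack(outs, dim=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.norm(x)
        attn_out, _ = self.attn(y, y, y, need_weights=False)
        # Parallel combination of the two branches, plus the residual connection.
        return x + attn_out + self.ssm_branch(y)


# Quick shape check on random input.
block = ParallelHybridBlock(d_model=64, n_heads=4)
print(block(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```

The key design point illustrated is that the attention and SSM branches see the same hidden state rather than being stacked sequentially, so the block can draw on both precise token-level attention and the SSM's efficient long-range recurrence within a single layer.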