Abu Dhabi’s Falcon H1R 7B raises bar for compact AI
Abu Dhabi’s Technology Innovation Institute has released Falcon H1R 7B, an open-source artificial intelligence reasoning model that it says matches or exceeds the performance of much larger systems from global technology groups while running on substantially less computing power.
The 7 billion-parameter model is the latest in the Falcon large language model family developed under the emirate’s Advanced Technology Research Council. TII is positioning the system as a compact option for maths, coding and so‑called agentic tasks that typically rely on larger and more computationally intensive models.
Falcon H1R 7B targets a segment of the AI market that seeks advanced reasoning in smaller models that run on standard hardware. The model uses a hybrid architecture that combines Transformer neural networks with elements from the Mamba sequence model, which TII says improves throughput and latency.
The institute claims the system outperforms Microsoft’s Phi 4 Reasoning Plus 14B, Alibaba’s Qwen3 32B and Nvidia’s Nemotron H 47B on a range of specialist benchmarks. Those models contain between roughly twice and almost seven times the number of parameters.
His Excellency Faisal al Bannai, Adviser to the UAE President and Secretary General of the Advanced Technology Research Council, linked the launch with wider national digital ambitions.
“Falcon H1R reflects the UAE’s commitment to building open and responsible AI that delivers real national and global value. By bringing world-class reasoning into a compact, efficient model, we are expanding access to advanced AI in a way that supports economic growth, research leadership, and long-term technological resilience,” he said.
The new system builds on TII’s earlier Falcon H1‑7B foundation model. Researchers at the institute have applied a specialised training approach that focuses on what they describe as test-time reasoning, which emphasises performance when the model is used rather than during training alone.
Falcon H1R 7B targets high accuracy on problem-solving tasks while keeping memory and energy requirements low. TII says this aligns the model with use in constrained environments, such as smaller data centres or on-premise deployments where operators seek lower operating costs.
“Falcon H1R 7B marks a leap forward in the reasoning capabilities of compact AI systems,” said Dr Najwa Aaraj, CEO of TII. “It achieves near-perfect scores on elite benchmarks while keeping memory and energy use exceptionally low, critical criteria for real-world deployment and sustainability.”
The institute describes the design as an attempt to reach a new balance between speed and quality. Researchers frame this as a “Pareto frontier”, in which the model aims for improvements on one dimension without a corresponding decline in the other.
Benchmark scores

On maths reasoning, Falcon H1R 7B scored 88.1 per cent on the AIME‑24 benchmark. TII says the result exceeds that of ServiceNow AI’s Apriel 1.5 model with 15 billion parameters, which it reports at 86.2 per cent.
On coding and agentic benchmarks, the new model achieved an accuracy score of 68.6 per cent, which TII describes as the strongest performance reported among models under 8 billion parameters. The institute also cites combined results on the LCB v6, SciCode Sub and TB Hard tests, where Falcon H1R 7B reportedly scored 34 per cent, compared with 26.9 per cent for China’s DeepSeek R1‑0528 Qwen3‑8B model.
TII also reports that Falcon H1R 7B scored above Alibaba’s larger Qwen3‑32B model on those same specialist code and tool-use benchmarks, with Qwen3‑32B said to reach 33.4 per cent. On broader reasoning and instruction-following tests, TII says Falcon H1R 7B matches or approaches Microsoft’s Phi 4 Reasoning Plus 14B despite using half as many parameters.
On raw speed, the institute reports throughput of up to 1,500 tokens per second per GPU at a batch size of 64. TII says this is close to twice the speed of Alibaba’s Qwen3‑8B model under similar conditions.
Open release

TII has released Falcon H1R 7B as open source under its Falcon TII Licence. The model weights are available through the Hugging Face platform, along with a technical report that describes the training procedure and benchmark outcomes.
The institute is targeting developers, research groups and public bodies that want to run advanced reasoning models without depending on proprietary services. The compact size of Falcon H1R 7B is likely to draw interest from organisations that want to deploy AI on their own infrastructure.
“This model is the result of world-class research and engineering. It shows how scientific precision and scalable design can go hand in hand,” said Dr Hakim Hacid, Chief Researcher at TII’s Artificial Intelligence and Digital Research Centre. “We are proud to deliver a model that enables the community to build smarter, faster, and more accessible AI systems.”
Falcon H1R 7B extends a series of Falcon-branded models that Abu Dhabi has released over the past two years. The programme has formed part of the UAE’s push to expand its advanced research base and increase its profile in global AI development.
TII plans further additions to the Falcon family and expects new research on compact reasoning models based on Falcon H1R 7B’s architecture and training methods.