DeepSeek's New Distilled AI Model Delivers Impressive Performance
- Jeremy Johnson
- May 29
- 2 min read

DeepSeek, the prominent Chinese AI research lab, made waves in the AI community with the release of its updated R1 reasoning model. Alongside it, the company also unveiled a smaller, "distilled" version of the system that deserves a closer look.
The new model, dubbed DeepSeek-R1-0528-Qwen3-8B, was built on Alibaba's Qwen3-8B model as its foundation. Despite its reduced size and computational requirements, the distilled R1 outperforms comparably sized competitors on several key benchmarks.
For example, DeepSeek-R1-0528-Qwen3-8B beat Google's Gemini 2.5 Flash on AIME 2025, a benchmark of challenging competition math questions. It also nearly matched the performance of Microsoft's recently released Phi 4 reasoning model on HMMT, another math skills test.
This is an impressive feat, as distilled models are generally less capable than their full-sized counterparts. The tradeoff is that they require far less computational power to run: while the original R1 needs around a dozen 80GB GPUs, DeepSeek-R1-0528-Qwen3-8B can run on a single GPU with 40-80GB of memory, such as an Nvidia H100.
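For those who want to experiment, here is a minimal sketch of how one might load the distilled model on a single large GPU with Hugging Face's transformers library. The repository id is an assumption based on the model's name, and the prompt and generation settings are illustrative defaults, not a recommended configuration.

```python
# Minimal sketch: loading the distilled model on a single large GPU.
# The repo id below is an assumption based on the published model name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~16GB of weights for an 8B model
    device_map="auto",           # place the weights on the available GPU
)

messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```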
DeepSeek trained the distilled model by fine-tuning Qwen3-8B on text generated by the updated R1, producing a far more compact version of the R1 reasoning system.
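The recipe DeepSeek describes is essentially knowledge distillation via supervised fine-tuning: the large teacher generates reasoning traces, and the smaller student is trained on that text with an ordinary next-token prediction loss. Below is a minimal sketch of that idea; the repo ids, placeholder corpus, and hyperparameters are assumptions for illustration, not DeepSeek's actual training setup.

```python
# Illustrative sketch of distillation by supervised fine-tuning: a teacher
# model's outputs become the training corpus for a smaller student, which is
# trained with a standard causal-LM objective. Names and hyperparameters are
# assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

student_id = "Qwen/Qwen3-8B"  # assumed repo id for the student foundation model

tok = AutoTokenizer.from_pretrained(student_id)
student = AutoModelForCausalLM.from_pretrained(student_id, torch_dtype=torch.bfloat16)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

# In practice the teacher's reasoning traces would be generated offline at
# scale; this placeholder list stands in for that corpus.
teacher_texts = [
    "Question: ... <think> step-by-step reasoning from the teacher ... </think> Answer: ...",
]

student.train()
for text in teacher_texts:
    batch = tok(text, return_tensors="pt", truncation=True, max_length=4096)
    # Standard next-token loss: labels are the input ids, shifted internally.
    loss = student(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In a real run the training would be sharded across many GPUs and the corpus would contain millions of teacher-generated traces, but the objective is the same plain language-modeling loss shown here.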
According to DeepSeek, the DeepSeek-R1-0528-Qwen3-8B model is well-suited for both academic research on reasoning models and industrial development focused on small-scale AI systems. The model is available under a permissive MIT license, meaning it can be used commercially without restriction.
The release of this distilled R1 model is a testament to DeepSeek's commitment to innovation and its ability to create powerful AI systems that can run on more accessible hardware. As the AI landscape continues to evolve, it will be exciting to see what other advancements emerge from this leading Chinese research lab.