NIX Solutions: DeepSeek Releases New Distilled AI Model

Chinese startup DeepSeek has introduced a distilled version of its recently updated, 685-billion-parameter R1 model: DeepSeek-R1-0528-Qwen3-8B. The lighter model is built on Alibaba’s Qwen3-8B, which was released in May this year, and has already demonstrated strong performance across a range of benchmarks.

According to the developers, DeepSeek-R1-0528-Qwen3-8B surpassed Google’s Gemini 2.5 Flash on the AIME 2025 math benchmark. It also “almost matches” Microsoft’s Phi 4 reasoning plus model on another math reasoning test, HMMT (the Harvard-MIT Mathematics Tournament). While distilled models typically trail their full-size counterparts in raw capability, their reduced computational requirements make them far more accessible for smaller-scale use.


Cloud platform NodeShift reports that Qwen3-8B, the base for DeepSeek’s new model, can run on a single GPU with 40–80 GB of memory, such as the Nvidia H100. In contrast, the upgraded full-size R1 model requires around a dozen 80 GB GPUs, making the distilled version far more accessible to smaller labs and developers.
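
For readers who want to try the model on their own hardware, a minimal sketch of loading it on a single GPU with the Hugging Face transformers library might look like the following. The repo id, dtype, and generation settings are assumptions; check the model card on Hugging Face for the exact identifier and recommended parameters.

```python
# Minimal sketch: running the distilled model on a single GPU with transformers.
# The repo id and settings below are assumptions, not official guidance.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 8B parameters in bf16 is roughly 16 GB of weights
    device_map="auto",            # place the model on the available GPU
)

messages = [{"role": "user", "content": "Prove that the sum of two even numbers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```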

Trained for Efficiency and Broad Use

To create DeepSeek-R1-0528-Qwen3-8B, the startup fine-tuned the Qwen3-8B model using output generated by the upgraded R1. This approach allowed DeepSeek to compress advanced reasoning capabilities into a more compact and efficient model. On the AI development platform Hugging Face, the company describes it as suitable “for both academic research and industrial development focused on small-scale models.”
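
Conceptually, this kind of distillation is a supervised fine-tuning run in which the small “student” model learns to reproduce reasoning traces written by the large “teacher.” The sketch below illustrates the idea with a toy corpus and a single training step; the model names, data, and hyperparameters are illustrative assumptions and do not reflect DeepSeek’s actual training pipeline.

```python
# Toy sketch of distillation-by-fine-tuning: train the student on text that the
# full-size teacher generated, using an ordinary next-token (causal LM) loss.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

student_id = "Qwen/Qwen3-8B"  # assumed student repo id
tokenizer = AutoTokenizer.from_pretrained(student_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

student = AutoModelForCausalLM.from_pretrained(
    student_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# In practice these completions would come from the full-size R1 teacher;
# here a single hand-written example stands in as a placeholder corpus.
teacher_samples = [
    "Question: What is 17 * 23?\n"
    "Reasoning: 17 * 23 = 17 * 20 + 17 * 3 = 340 + 51 = 391.\n"
    "Answer: 391",
]

encodings = tokenizer(teacher_samples, return_tensors="pt", truncation=True, max_length=1024)
encodings["labels"] = encodings["input_ids"].clone()  # standard next-token loss

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)
student.train()
for step in range(1):  # one toy optimization step
    batch = {k: v.to(student.device) for k, v in encodings.items()}
    loss = student(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss {loss.item():.4f}")
```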

Importantly, DeepSeek-R1-0528-Qwen3-8B is distributed under the permissive MIT license, allowing unrestricted commercial use, adds NIX Solutions. Several platforms, including LM Studio, already offer the model via API. This level of accessibility may encourage broader experimentation and adoption.
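
Because LM Studio exposes a local OpenAI-compatible server (by default at http://localhost:1234/v1), querying a downloaded copy of the model from code is straightforward. A minimal sketch, assuming the model has already been pulled in LM Studio and using a placeholder local model name:

```python
# Minimal sketch of calling the model through LM Studio's local
# OpenAI-compatible server. The model identifier is an assumption; use the
# name LM Studio displays for the downloaded DeepSeek-R1-0528-Qwen3-8B build.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is ignored locally

response = client.chat.completions.create(
    model="deepseek-r1-0528-qwen3-8b",  # assumed local model name
    messages=[{"role": "user", "content": "Solve: if 3x + 5 = 20, what is x?"}],
    temperature=0.6,
)
print(response.choices[0].message.content)
```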

As smaller-scale AI continues to gain traction, especially among independent developers and startups, models like DeepSeek-R1-0528-Qwen3-8B are likely to play an increasingly important role. We’ll keep you updated as more integrations become available and the AI ecosystem continues to evolve.