Bitmala

5 Insane Reasons DeepSeek’s R2 Delay Shows Huawei Chips Aren’t Ready Yet

China’s AI scene just hit a speed bump. DeepSeek, the AI company behind the R1 model that dropped in January, had been hyping its next-gen R2 — but now it’s officially delayed. The reason? Huawei’s chips couldn’t handle the full training process.

Beijing wanted DeepSeek to use Huawei's Ascend processors instead of Nvidia's GPUs to cut reliance on U.S. tech. It sounds good on paper, but in practice, training R2 on Ascend hit technical walls — instability, slow inter-chip connections, and a less mature software stack than Nvidia's.

So, DeepSeek had to pivot: Nvidia chips for training, Huawei chips for inference (the part where AI actually answers questions). This workaround meant pushing the R2 launch from its original May target.

DeepSeek’s Tough Reality Check

Huawei even sent engineers to help make Ascend work, but the model still wouldn’t train properly. On top of that, labeling the massive dataset for R2 took longer than expected. Meanwhile, rivals like Alibaba’s Qwen3 are already shipping powerful new models — and ironically, Qwen3’s training methods borrow ideas from DeepSeek itself.

AI experts say it's only a matter of time before Chinese chips can compete on training workloads, but for now, U.S. GPUs still rule. Nvidia even struck a deal with the U.S. government to share a cut of its China revenue in exchange for resuming sales of its H20 chips there.

DeepSeek might still drop R2 in the coming weeks, but the delay shows one thing loud and clear — in the AI arms race, hardware bottlenecks can be just as critical as algorithms.

