Huawei is diving headfirst into the AI pool with its CloudMatrix 384 cluster, and some of China's biggest players are reportedly lining up for it. NVIDIA who? Not quite, but the gap is narrowing.
Huawei isn't holding back with its AI hardware, and word is it's giving NVIDIA a genuine run for its money in at least some corners of China. The CloudMatrix 384 AI cluster is built largely on Huawei's own silicon, a kind of in-house powerhouse, and according to the Financial Times, ten major clients have already signed on.
The clients haven't been named, but they're reportedly among the heaviest users of Huawei's hardware. We've covered the CloudMatrix 384 before, so if you missed it, here's the gist: China may be (sort of) weaning itself off outside suppliers for its computing needs.
Under the hood, Huawei's machine packs 384 Ascend 910C chips linked in an all-to-all topology, meaning every chip gets a direct path to every other chip. Compared with NVIDIA's GB200, Huawei went for sheer quantity, cramming in roughly five times as many chips. Clever attack plan or overkill? You tell me.
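For the curious, here's a rough back-of-envelope sketch of what "all-to-all" implies at that scale. The chip counts are the publicly reported ones, and the 72-GPU baseline assumes the comparison is against NVIDIA's GB200 NVL72 rack; that's my assumption, not something from Huawei's documentation.

```python
# Napkin math on the CloudMatrix 384 vs. a GB200 rack, using only the
# publicly reported chip counts. The "72" assumes the comparison is against
# NVIDIA's GB200 NVL72 rack; that's an assumption, not a Huawei figure.

CLOUDMATRIX_CHIPS = 384   # Ascend 910C accelerators, as reported
GB200_NVL72_GPUS = 72     # Blackwell GPUs in one NVL72 rack (assumed baseline)

# A full all-to-all fabric gives every chip a direct logical path to every
# other chip: n * (n - 1) / 2 pairwise connections.
pairwise_paths = CLOUDMATRIX_CHIPS * (CLOUDMATRIX_CHIPS - 1) // 2

print(f"Chip-count ratio vs GB200 NVL72: ~{CLOUDMATRIX_CHIPS / GB200_NVL72_GPUS:.1f}x")
print(f"Logical point-to-point paths in a 384-node all-to-all fabric: {pairwise_paths:,}")
# -> roughly 5.3x the chips and 73,536 logical paths, which is why the
#    interconnect ends up doing a lot of the heavy lifting at this scale.
```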
Now for the specs: the CloudMatrix 384 reportedly cranks out 300 PetaFLOPS of BF16 compute, roughly double what NVIDIA's GB200 can pull off. The catch is power: the cluster guzzles nearly four times as much as the GB200 system.
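If you want the efficiency math, here's a quick sketch using only the ratios above, roughly 2x the compute at nearly 4x the power. No absolute GB200 numbers are assumed, so treat this as napkin math rather than a benchmark.

```python
# Efficiency math from the reported ratios only: ~2x the BF16 compute of the
# GB200 system at nearly 4x the power draw. No absolute GB200 figures are
# assumed here; the ratios are all we need.

compute_ratio = 2.0     # "roughly double" the GB200's BF16 throughput (reported)
power_ratio = 4.0       # "nearly 4x" the power draw (reported)
chip_ratio = 384 / 72   # ~5.3x the accelerators, assuming a GB200 NVL72 baseline

perf_per_watt = compute_ratio / power_ratio   # ~0.5x
perf_per_chip = compute_ratio / chip_ratio    # ~0.38x

print(f"Performance per watt vs GB200: ~{perf_per_watt:.2f}x")
print(f"Performance per chip vs GB200: ~{perf_per_chip:.2f}x")
# -> roughly half the power efficiency and a bit over a third of the per-chip
#    performance: the design leans on quantity rather than per-chip muscle.
```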
And here's the kicker: a single CloudMatrix 384 cluster reportedly clocks in at around $8 million, roughly triple the price of NVIDIA's GB200. These things are not cheap. This isn't a budget alternative; it's Huawei flexing in-house muscle and squaring up to the Western tech giants, price tag be damned.
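Same napkin math for price, again using nothing but the reported ratios, about 3x the cost for about 2x the compute:

```python
# Cost-per-compute from the reported ratios: ~3x the price of the GB200
# system for ~2x the BF16 throughput. Ratios only, nothing invented.

price_ratio = 3.0     # ~$8M, roughly triple the GB200 system's price (reported)
compute_ratio = 2.0   # ~2x the BF16 throughput (reported)

cost_per_compute = price_ratio / compute_ratio
print(f"Cost per unit of compute vs GB200: ~{cost_per_compute:.1f}x")
# -> about 1.5x more expensive per PFLOPS, a premium that mostly makes sense
#    where NVIDIA hardware is hard to get in the first place.
```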