Tachyum, the company behind the universal ‘Prodigy’ processor that aims to replace both GPUs and CPUs in AI workloads, has revealed a little more information about its upcoming chip this week.
The company has confirmed that the new processor will support industry-standard AI development frameworks and a massive amount of RAM.
The company claims that its 128-core Tachyum Prodigy can run ARM, RISC-V, and even x86 binaries without modification via software emulation.
In native mode, Tachyum says, the Prodigy outperforms Intel’s Xeon processors by an order of magnitude while using less power, and it leaves Nvidia’s A100 GPU in the dust in AI training, inference, and HPC workloads.
This Platform Supports up to 8TB of RAM
Software developers will be able to use widely adopted machine learning frameworks such as PyTorch and TensorFlow to build machine learning and artificial intelligence applications on the new platform. Tachyum’s proprietary compiler will, however, be required to make full use of the chip.
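The appeal of framework-level support is that code targets the framework rather than the hardware. As a minimal, hypothetical sketch (none of this is Tachyum code, just standard PyTorch), a training loop like the one below could in principle run unmodified on any backend the framework supports:

```python
import torch
import torch.nn as nn

# A tiny regression model written against the standard PyTorch API.
# Because nothing here references specific hardware, the same script
# could (per Tachyum's claim) run unchanged on a Prodigy-backed build
# of the framework, just as it does on CPUs or GPUs today.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Synthetic data stands in for a real dataset.
x = torch.randn(16, 4)
y = torch.randn(16, 1)

for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(loss.item())
```

Vendor-specific tooling (here, Tachyum’s compiler) would then come into play only when squeezing out full performance, not for basic correctness.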
Each Prodigy chip will be able to support up to 8TB of RAM, which is on par with what is expected from Intel’s and AMD’s upcoming CPUs, and considerably more than GPU solutions like the Nvidia A100, which max out in the tens of gigabytes.
We aren’t 100% sure what type of memory Tachyum plans to use with Prodigy, but it’s more than likely conventional RAM like DDR4 and DDR5. Things like HBM and GDDR aren’t expected to be supported by the platform.
This chip should scale well in terms of performance and power consumption, as Tachyum has said that Prodigy will be able to serve both data center and edge applications.
Expect the new chip to enter volume production sometime next year.
Prodigy isn’t the only competition that CPUs and GPUs face. Another chip, Cerebras’s WSE-2 (Wafer Scale Engine 2), is 56 times larger than the Nvidia A100 and is the world’s largest processor.
Featured Image Credit: [Tachyum]