Apple’s 2021 chip strategy will create a massively parallel universe

After years of rumors, Apple in November unveiled M1, a system-on-a-chip combining CPU, GPU, and AI processing that is designed to directly challenge Intel's laptop and desktop chips, promising that the three initial M1 Macs would be faster than 98% of recent PC laptops. Apple chip lead Johny Srouji framed the design around its two CPU clusters: M1's four low-powered cores alone rival prior Intel-based MacBook Air laptops in performance, while its four high-powered cores add the headroom that professional desktops, laptops, and servers demand.

At this point, Apple's gamble is clear: M1's least powerful CPU cores are table stakes, and its most powerful cores will be used to guarantee either parity with or superiority over various types of Intel PCs. Bloomberg reports that Apple is now preparing to up the ante by multiples in 2021, equipping the next round of Macs with four to eight times as many high-powered CPU cores (between 16 and 32), in addition to the low-powered table-stakes cores. Bloomberg also says Apple will double or quadruple the GPU core count to 16 or 32, then offer 64- and 128-core GPU options for the most demanding applications. Like rivals Nvidia and AMD, Apple is gambling on massively parallel processing as a means to surpass Intel, and there's every reason to expect its bets will pay off.

Apple's shift is significant to technical decision makers because it signals how the coming war for both datacenters and high-end professional computers will be fought: with vast arrays of small processing cores delivering combinations of power efficiency and performance that Intel's Xeon and Core chips could struggle to match. The Cupertino company's obsession with chips that consume little energy and run cool could redefine conventional server arrays, where towers of racked machines built on power-hungry processors currently draw enormous amounts of electricity, and where adding more processors has historically meant adding more racks and more power drain. Cloud companies are already fitting four Mac minis on a single 1U rack shelf, and the number of Macs in racks is about to skyrocket.

Leading cloud infrastructure provider Amazon is already planning for Macs to take a greater role in the cloud space. Last week, the company announced that it is now offering Elastic Compute Cloud (EC2) instances on Mac minis, initially Apple's last Intel-based machines, with plans to adopt Apple Silicon in early 2021. Though Amazon is offering the Macs to developers at premium prices compared with smaller cloud hosting rivals, it's touting full integration with AWS services, rapid onboarding, and the ability to scale up to multiple machines as reasons to prefer its offering.

Amazon is notably starting its Mac rollout with six-core Intel Core i7 chips, which means a shift to first-generation M1 Macs wouldn't have the multiplicative effect described above. Even without a massive core count increase, M1 offers roughly a 70% single-core bump and a 30% multi-core bump over that i7, no small feat for a less expensive, cooler-running machine. But if the report is correct, a second-generation Apple Silicon Mac mini could deliver not only much higher aggregate performance but also the ability to handle far more tasks in parallel. The tiny server could move from eight total cores today to 20 next year (16 high-powered plus four low-powered) without any change in form factor, or Apple could make it smaller, enabling more Mac minis to fit on a shelf.
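
To put rough numbers on that jump, here's a back-of-the-envelope sketch in Swift. The per-core weights are illustrative assumptions rather than Apple benchmarks, and real-world scaling also depends on memory bandwidth and the workload itself.

import Foundation

// Rough core-count math for the rumored Mac mini successor. The weights are
// assumptions for illustration only: a high-powered core counts as 1.0 and a
// low-powered (efficiency) core as 0.3 of one.
let performanceWeight = 1.0
let efficiencyWeight = 0.3

func aggregateUnits(performanceCores: Int, efficiencyCores: Int) -> Double {
    Double(performanceCores) * performanceWeight + Double(efficiencyCores) * efficiencyWeight
}

let m1Today = aggregateUnits(performanceCores: 4, efficiencyCores: 4)   // today's 8-core M1
let rumored = aggregateUnits(performanceCores: 16, efficiencyCores: 4)  // rumored 20-core part

print(String(format: "M1: %.1f units, successor: %.1f units (%.1fx)",
             m1Today, rumored, rumored / m1Today))
// Prints roughly: M1: 5.2 units, successor: 17.2 units (3.3x)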

None of the first three M1-powered Macs has a new design, but that's going to change in 2021. Rumors of an iMac with an iPad Pro-inspired chassis are almost certainly true, reflecting the tablet-like thinness that Apple Silicon will soon enable across other Macs. Similar rumors suggest that Apple will release a Mac Pro occupying only one-fourth of the current model's volume, which could shrink the current 5U rack enclosure down to a 2U frame. Though that's more speculation than certainty at this stage, it could mean that datacenters will be able to fit five Apple Silicon Mac Pros into the same 10U of rack space that holds only two of today's machines.

It bears brief mention that two long-gestating trends have enabled Apple Silicon to deliver such compelling performance. One is the continued march of power-efficient, ARM-based RISC processors onto smaller chip manufacturing nodes, most notably the 5-nanometer process perfected this year by Apple's fabrication partner TSMC. The other is Apple's creation of multithreading OS technologies such as Grand Central Dispatch, which efficiently route app tasks to multiple cores without requiring constant developer or user vigilance. As a direct consequence, Apple's chips and OS scale readily to as many cores as can fit on a die, yielding tangible speed and capability gains with each new generation.
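
A minimal sketch (not Apple's own code) illustrates the idea: Grand Central Dispatch's concurrentPerform spreads independent chunks of work across however many cores the host machine exposes, so the same Swift code simply gets faster as core counts climb.

import Dispatch
import Foundation

// Sum 1...1,000,000 by splitting the range into one chunk per available core.
// GCD decides which core runs each chunk; nothing here hard-codes a core count.
let coreCount = ProcessInfo.processInfo.activeProcessorCount
let numbers = Array(1...1_000_000)
let chunkSize = (numbers.count + coreCount - 1) / coreCount

let lock = NSLock()
var total = 0

DispatchQueue.concurrentPerform(iterations: coreCount) { chunk in
    let start = chunk * chunkSize
    let end = min(start + chunkSize, numbers.count)
    guard start < end else { return }

    let partial = numbers[start..<end].reduce(0, +)

    lock.lock()
    total += partial  // only the final merge needs synchronization
    lock.unlock()
}

print("Sum:", total, "computed across up to \(coreCount) cores")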

The transformation will have impacts beyond the multiplicative effects on CPU performance. Even Apple's initial M1 chips ship with a 16-core Neural Engine, promising 11 trillion operations per second (TOPS) of assistance for everything from general-purpose machine learning tasks to computer vision. Just as Qualcomm's new Snapdragon 888 system-on-chip raised the mobile AI bar to 26 TOPS this month, Apple can be expected to answer with more performant AI hardware across both its mobile and Mac chips, equipping each of its devices with AI capabilities that would have been unthinkable only a decade ago.
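
For developers, tapping those AI cores is mostly a matter of letting the operating system schedule the work. The sketch below uses Core ML's compute-unit setting to do so; the model file name is a placeholder, not a shipping Apple asset.

import CoreML
import Foundation

// Ask Core ML to use any available compute unit, so supported layers can run
// on the 16-core Neural Engine while the rest fall back to the GPU or CPU.
let config = MLModelConfiguration()
config.computeUnits = .all

// "ImageClassifier.mlmodelc" stands in for any compiled Core ML model.
let modelURL = URL(fileURLWithPath: "ImageClassifier.mlmodelc")

do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print("Model loaded; Core ML decides, per layer, where inference runs.")
    _ = model
} catch {
    print("Could not load model:", error)
}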

Apple's GPU strategy appears to be similar to Nvidia's, which has multiplied the number of processing cores in its graphics cards to meet the needs of demanding gamers and imaging professionals. But while Nvidia is now shipping graphics cards with over 10,000 graphics cores alongside dedicated Tensor AI cores, demanding up to 750 watts of total system power, Apple's greatest ambition in graphics is around 1/100 of that core count (though the two companies define GPU cores differently, so the comparison is rough), assuming it actually releases a 128-core GPU for professional applications in 2021. Even if it does, Nvidia will remain safely ahead of the curve for the foreseeable future. Moreover, Nvidia is in the process of acquiring Arm, whose ARM architecture Apple relies upon for its own chips, though Apple designs its own CPU and GPU cores rather than using off-the-shelf ARM chip designs.

The key takeaway from all of these developments is that we're about to see an explosion in the number of Apple processors everywhere, and for the first time, not just in consumer applications. Once Apple Silicon makes its way into datacenters, each Mac will be fueled by dozens of Apple CPU, GPU, and AI cores; each rack could hold multiple Macs; and each server farm will be able to handle many multiples of prior workloads, powering everything from B2B apps to enterprise data warehousing and cloud gaming. It's unclear at this stage whether AMD, Nvidia, and Intel see Apple's growth as an existential threat or merely one to be managed through traditional annual competitive improvements, but it's obvious that the future of processing will be even more massively parallel than it is today, and that Apple is building its own universe to capture as many datacenters, developers, and dollars as it can.
