
In October of this year, Apple officially launched the new M5 chip, simultaneously updating multiple product lines including the MacBook Pro, iPad Pro, and Apple Vision Pro.
On the surface, this seems like a routine, incremental hardware iteration, as these new products offer no significant upgrades beyond the chip itself. However, a deeper signal lies hidden in the architecture of the M5. For the first time, Apple has integrated neural accelerators, dedicated units for AI workloads, directly into the GPU cores. This clearly reveals Apple's new thinking in the AI era: the M5 chip may be the "vanguard" of Apple's AI strategy.
A New Architecture Makes its Debut
From a conventional performance perspective, the M5 chip is only a steady iteration within the Apple lineup. In the MacBook Pro, the chip posts a single-core score of 4339 and a multi-core score of 17787, a solid but unremarkable improvement over the previous-generation M4.
However, behind the conventional performance figures, the real highlight of the M5 lies in its GPU. This time, the M5's GPU not only introduces a new generation of 10-core architecture, but more importantly, each core integrates a dedicated neural network accelerator, achieving a leap in AI computing power.
Apple officially claims that the M5's GPU delivers more than 4 times the peak AI compute of the M4, with the gains most visible in LLM prompt processing, where the time to first token improves markedly.
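Why does more peak compute translate so directly into a faster first token? Prompt processing (the "prefill" phase) is compute-bound: the model must run the whole prompt through its weights before the first token appears. A back-of-envelope sketch, where the model size, prompt length, and throughput figures are illustrative placeholders and not measured M4/M5 numbers:

```python
# Back-of-envelope: time to first token (TTFT) for the prefill phase.
# A dense transformer spends roughly 2 FLOPs per parameter per prompt token
# during prefill, so TTFT scales with prompt length and inversely with
# compute throughput. All figures below are illustrative assumptions.

def prefill_seconds(params: float, prompt_tokens: int, flops_per_second: float) -> float:
    """Approximate prefill time for a dense transformer."""
    total_flops = 2 * params * prompt_tokens  # ~2 FLOPs per weight per token
    return total_flops / flops_per_second

PARAMS = 8e9     # an 8B-parameter model
PROMPT = 4_000   # a long prompt, in tokens
BASE = 5e12      # hypothetical effective throughput: 5 TFLOPS

t_base = prefill_seconds(PARAMS, PROMPT, BASE)
t_4x = prefill_seconds(PARAMS, PROMPT, 4 * BASE)  # "4x peak AI compute"

print(f"baseline TTFT:   {t_base:.1f} s")  # 12.8 s
print(f"4x compute TTFT: {t_4x:.1f} s")    # 3.2 s
```

Under these assumptions, quadrupling compute cuts the wait for the first token from about 13 seconds to about 3, which is exactly the kind of perceived responsiveness Apple is advertising.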
However, because the M5 is not offered with higher RAM configurations, heavy users of local AI tools still won't be able to deploy the kind of large models that a big-memory M4 Pro/M4 Max can handle. But when running apps like Draw Things, deploying small local models, or building text-to-image workflows in ComfyUI, you will likely feel that everything is "one step faster."
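Whether a local model fits is, to a first approximation, a weights-times-precision calculation. A minimal sketch: the parameter counts are common open-weight sizes, and the "70% usable" budget is a rough assumption to account for the OS, apps, and KV cache sharing unified memory:

```python
# Rough memory footprint of local LLM weights at different quantization levels.
# Unified memory must also hold the OS, apps, and the KV cache, so the usable
# budget is well below nominal RAM (the 70% figure below is an assumption).

def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate size of the weights alone, in gigabytes."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for params in (3, 8, 70):
    for bits in (16, 8, 4):
        print(f"{params:>3}B @ {bits:>2}-bit: {weights_gb(params, bits):6.1f} GB")

# On a 16 GB machine with ~70% usable (assumption), an 8B model at 4-bit
# (~4 GB) fits comfortably; a 70B model (~35 GB at 4-bit) still demands
# the larger-RAM Pro/Max machines.
budget = 16 * 0.70
print(f"usable budget on 16 GB: {budget:.1f} GB")
```

This is why the M5's extra compute helps small models feel snappier while the RAM ceiling, not the GPU, remains the hard limit on which models you can run at all.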
Why the M5 first?
An interesting detail is that Apple launched only the base M5 this time, rather than releasing the usual "full family" of M5 Pro, M5 Max, and M5 Ultra. I read this unconventional move as a deliberately cautious, step-by-step approach to AI.
Currently, performance growth for both CPUs and GPUs largely depends on improvements in chip manufacturing processes, making leapfrog growth extremely difficult. With "conventional performance" no longer driving large-scale upgrades, "AI" is becoming the new breakthrough. Apple's decision this year to integrate a neural network accelerator into the GPU of its chip precisely reflects its view of AI as the next new direction for development.
Within Apple's vast ecosystem, the M5 serves as a "test run" for the new AI era. It is a lightweight chip, and Apple may hope that its relatively low price point within the lineup will draw upgrades from users who are interested in AI but don't need Pro/Max-level performance. That would both expand the installed base of the new architecture and gather valuable market feedback for the AI capabilities of future higher-end chips.
However, as a vehicle for truly "universal AI capability," a MacBook Pro with the M5 chip falls short on price; a Mac mini with the same chip would clearly be the more cost-effective entry point. Of course, Apple holding the Mac mini back for now may be preparation for an M5 Pro version, so we will likely have to wait until next year for this more affordable AI machine.
Focusing on AI, Following Industry Mainstream
In fact, adding neural network accelerators to GPUs is a mainstream practice in the industry.
Take Nvidia as an example. In 2017, Nvidia added Tensor Cores, dedicated AI computing units, to its Volta architecture. While traditional GPUs were still handling AI tasks on general-purpose compute units, Tensor Cores specialized in exactly these calculations and executed them far more efficiently. This gave Nvidia GPUs an overwhelming advantage in AI training and inference, firmly securing the company's throne in the AI era.
Now, Apple is doing something similar with the M5 chip. Apple realizes that future competition will no longer be about simple CPU performance benchmarks, but about who can handle AI tasks more efficiently. Therefore, the M5 chip no longer relies solely on the ANE (Neural Engine), but follows mainstream practice by integrating a dedicated neural network accelerator into the GPU, resulting in stronger overall AI capabilities.
From the design of the M5, we can infer that Apple's future division of AI compute will be more clearly defined: lightweight AI functions such as photo categorization and Siri responses will continue to run on the highly energy-efficient Neural Engine, while heavy workloads demanding high compute and high precision, such as upcoming Apple Intelligence features, complex text-to-image generation, and AI video enhancement, will be handled by the new-generation GPU with its integrated AI acceleration units.
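The division of labor sketched above can be caricatured as a simple routing policy. This is purely illustrative logic, not Apple's actual scheduler: the threshold, the task catalog, and their compute costs are all invented for the example.

```python
# Illustrative routing of AI tasks between an energy-efficient Neural Engine
# and a high-throughput GPU neural accelerator. The threshold and the task
# costs are invented; Apple's real dispatch is opaque to applications.

from dataclasses import dataclass

@dataclass
class AITask:
    name: str
    gflops: float     # estimated compute cost of one invocation
    needs_fp16: bool  # high-precision work favors the GPU units

def route(task: AITask, gpu_threshold_gflops: float = 100.0) -> str:
    """Send small, low-precision work to the ANE; everything else to the GPU."""
    if task.needs_fp16 or task.gflops > gpu_threshold_gflops:
        return "GPU neural accelerator"
    return "Neural Engine"

tasks = [
    AITask("photo categorization", gflops=2.0, needs_fp16=False),
    AITask("Siri response", gflops=10.0, needs_fp16=False),
    AITask("text-to-image generation", gflops=5_000.0, needs_fp16=True),
    AITask("AI video enhancement", gflops=20_000.0, needs_fp16=True),
]
for t in tasks:
    print(f"{t.name:26s} -> {route(t)}")
```

The point of the caricature: once the GPU carries its own neural accelerators, the system can keep the ANE for always-on, battery-friendly tasks and reserve the GPU for bursts of heavy, precision-sensitive work.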
This division of labor balances energy efficiency for everyday use with peak performance required for professional applications. We have reason to believe that the upcoming M5 Pro and M5 Max chips will likely follow a similar upgrade strategy, leveraging the neural network accelerator in the GPU to achieve a significant leap in AI performance.
In conclusion, the M5 chip is a prelude to Apple's foray into the AI field. The arrival of this chip will pave a path for Apple's entire product line to achieve a balance between performance, energy efficiency, and cost in AI development.
At the same time, I hope that when the AI era truly arrives, Apple can genuinely bring storage prices down. Both AI models themselves and AI-generated content consume staggering amounts of space. With 16GB RAM + 512GB storage now the industry baseline, I look forward to Apple further promoting larger-capacity configurations, so users can enjoy the convenience of AI without storage anxiety.