
Apple's AI extravaganza left out 2 key advances - maybe next time?

At WWDC, we saw a lot of 'Apple Intelligence' - on-device training was one of the more glaring omissions.
Written by Tiernan Ray, Senior Contributing Writer

The unveiling of Apple's overarching strategy for AI on Mac, iPhone, and iPad on Monday contained numerous intriguing features under the rubric "Apple Intelligence," a clever re-branding of the ubiquitous acronym.

ZDNET's Sabrina Ortiz has the details, which include many ways in which Apple software can enhance the device experiences, and also tap into OpenAI's ChatGPT. There was also a big play for security and privacy in Private Cloud Compute.

One glaring omission, however, is the lack of what's called "on-device" training. 

Also: Everything Apple announced at WWDC 2024, including iOS 18, Siri, AI, and more

AI models -- groupings of neural networks such as GPT-4o and Gemini -- are developed during an initial phase in the lab known as training. The neural net is shown numerous examples of success, and its parameters are tweaked until it produces optimal answers. That training becomes the basis of the neural network's question-answering, known as inference.
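The split between the two phases can be seen in a toy model (purely illustrative, nothing to do with Apple's actual stack): a single weight is tweaked against example data during training, then frozen for inference.

```python
# Toy illustration of the training/inference split. A one-parameter "model"
# learns y = 2*x by gradient descent (training), then answers new queries
# with the frozen parameter (inference).

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, correct answer)

w = 0.0    # the model's single weight, tweaked during training
lr = 0.05  # learning rate
for _ in range(200):                 # training phase: fit w to the examples
    for x, y in examples:
        pred = w * x
        grad = 2 * (pred - y) * x    # derivative of squared error
        w -= lr * grad

def infer(x):                        # inference phase: the weight is frozen
    return w * x

print(round(infer(5.0), 2))  # close to 10.0
```

Training is the expensive part: even this toy loop touches every example hundreds of times, which is why it normally happens in the lab or the cloud rather than on a phone.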

While Apple didn't disclose technical details of which generative AI models it is using, the descriptions suggest the on-board capabilities -- the capabilities on the iPhone, iPad, and Mac -- do not include training of the neural networks, even though that's an area where Apple has offered original research.

Instead, what is offered appears to be a form of "retrieval-augmented generation," or RAG, an increasingly popular way to perform inference -- the making of predictions -- by tapping into a database. Apple refers to the approach as the "Semantic Index," which knows about the user's personal data.

That's no small thing: augmenting Gen AI with on-board, personal data is itself quite an accomplishment for inference "at the edge" rather than in the cloud. 
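The gist of RAG can be sketched in a few lines (an illustrative mock-up, not Apple's implementation): relevant personal records are looked up at inference time and prepended to the prompt, so the model itself never needs retraining. Here the "index" is simple word overlap; a real semantic index would use learned embeddings.

```python
# Minimal retrieval-augmented generation (RAG) sketch over made-up
# personal data. Retrieval finds the record most relevant to the query,
# and that record is stuffed into the prompt as context.

personal_index = [
    "Dinner with Maria at Luigi's on Friday at 7pm",
    "Flight BA 284 to London departs Tuesday 18:40",
    "Dentist appointment moved to March 3rd",
]

def retrieve(query, index, k=1):
    """Return the k records sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(index, key=lambda doc: -len(q & set(doc.lower().split())))
    return scored[:k]

query = "When is my flight to London?"
context = retrieve(query, personal_index)[0]
prompt = f"Context: {context}\nQuestion: {query}"
print(prompt)  # the augmented prompt a language model would then answer
```

Because the personal data stays in a local index and only surfaces at inference time, this pattern suits on-device privacy constraints far better than retraining a model on every new email or calendar entry.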

Also: Apple staged the AI comeback we've been hoping for - but here's where it still needs work

But it's not the same as on-board training. Apple's most interesting research work to date (at least, what's publicly disclosed) has been to conduct some training on the client device itself.

What can you do if you train the neural net on a person's constantly updated device data? 

A simple example is boosting image categorization by giving the neural net more context about what's in the image: not just "a cat" in the photo you're looking at, but your cat, recognized across the many other photos you have taken and presented to you as an instant album -- similar to what Apple does today when it recognizes faces in portraits.
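One common recipe for this kind of personalization (again, a hypothetical sketch, not a disclosed Apple technique) is to keep a big pretrained feature extractor frozen and train only a tiny head on the user's own photos, which is cheap enough to run on-device. The embeddings below are fake 2-D vectors standing in for what a vision backbone would produce.

```python
# Sketch of on-device personalization: a generic backbone stays frozen,
# while a tiny nearest-centroid "head" is trained locally on the user's
# labeled photos, so "a cat" becomes "your cat".

import math

# Hypothetical embeddings of photos the user has labeled on-device.
your_cat   = [(0.9, 0.8), (1.0, 0.7), (0.8, 0.9)]
other_cats = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.3)]

def centroid(points):
    """Average the points; this is the only 'parameter' being trained."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

# "Training" here is just recomputing centroids as new photos arrive --
# small enough to run locally on constantly updated device data.
your_centroid = centroid(your_cat)
other_centroid = centroid(other_cats)

def is_your_cat(embedding):
    """Classify by whichever personalized centroid is closer."""
    return math.dist(embedding, your_centroid) < math.dist(embedding, other_centroid)

print(is_your_cat((0.95, 0.85)))  # True: matches the personalized class
```

The point of the sketch is the division of labor: the heavy lifting (the backbone) was trained once in the lab, while the per-user part is small enough to update continuously on the phone.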

Walking through an art gallery, if you snap a pic of a painting, your phone might recall connections between that artist and something you snapped in a museum last month.

Also: Apple Intelligence FAQ: Every new feature, what models support it, and privacy concerns

Apple may be doing some re-training of neural nets in the cloud, via Private Cloud Compute, given that training neural nets takes a lot of computing power -- more than most client devices possess.

While ZDNET noted earlier this year that 2024 could be the year AI learns "in the palm of your hand," Monday's event suggests it could take a couple more iPhone generations before training on the device is possible.

There was another glaring omission: Apple's announcements dealt mostly with data already on the device, not with leveraging the device's sensors, especially the cameras, to enhance the world around you. 

Apple could, for example, apply Gen AI to the camera itself, turning it into an AI companion -- letting the assistant help the user pick the best frames when taking a multi-exposure "live" photo. Even better, "Tell me what's wrong with this composition" is the kind of photography-for-dummies advice some people might want in real time -- before they press the shutter button.

Also: 2024 may be the year AI learns in the palm of your hand

Apple instead showed off some modest AI enhancements for post-production, such as fixing an already snapped photo by later removing background objects. That's not the same as a live camera agent that helps you while you are using the camera.

It seems likely Apple will get to both on-device training and applying Gen AI to the sensors at some point. Both approaches play to Apple's integrated control of hardware and software. 
