AI was all over Apple’s WWDC. It was just running in the background


Jason Hiner/ZDNET

Ahead of WWDC, people were eager to know if Apple was finally going to make an announcement to propel the company into the AI race. 

To many people’s disappointment, the event came and went without a generative AI project unveiling. 

Also: I tried Apple Vision Pro and it’s far ahead of where I expected

Despite not openly participating in the generative AI boom initiated by ChatGPT last November, Apple still made its presence known in the AI scene, though in a more subtle manner.

Behind many of Apple’s software and hardware announcements at WWDC, AI is what made the most advanced features possible. 

In case it flew under your radar, here is a roundup of all of the different ways AI made an appearance at WWDC. 

Vision Pro features 

Apple Vision Pro

The Apple Vision Pro was just announced on June 5, 2023.

Jason Hiner/ZDNET

Apple’s first mixed reality headset, the Vision Pro, was the biggest announcement of WWDC. 

The headset is designed to blend your current reality with a virtual one, transforming ordinary activities, such as using your laptop or watching movies, into immersive experiences. 

Also: Wear glasses? Apple’s already expensive Vision Pro headset will cost you even more

The Vision Pro introduced a major transformation to the FaceTime call experience.

While wearing the headset, users will be able to see “life-sized” tiles of each person on the call, with each person’s audio coming from their tile’s position, which allows for more natural conversation. 

Meanwhile, people on the call will see the Vision Pro wearer’s “Digital Persona”, and that’s where AI comes in. 

Also: Apple Vision Pro first take: 3 reasons this changes everything

The headset uses Apple’s advanced machine learning technology to create a realistic, virtual avatar that reflects the wearer’s face and hand movements in real time from an initial face scan. 

Avatars in video-conferencing are a key feature of the metaverse, and Apple’s take relies on artificial intelligence to make its avatars happen. 

Also: Inside VisionOS: 17 things developers need to know right now

In addition, the headset is controlled by hand and eye-tracking technology, as well as voice commands. Voice recognition technology, such as Siri, is a prime and early example of artificial intelligence.  

AirPods’ Personalized Volume

AirPods Pro are seen with books here.

Jason Cipriani/ZDNET

Although there were no new AirPods announcements in terms of hardware, Apple did announce some pretty cool new features that will elevate your listening experience. 

Also: Adaptive Audio listening mode is coming to AirPods Pro 2

Personalized Volume is a new feature that fine-tunes listening volume based on a user’s environmental conditions and listening preferences over time. 

The feature relies on machine learning to understand a user’s listening preferences and act on them. 
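Apple hasn’t published details of this model, but the basic idea, learning a preferred volume for each noise condition and nudging suggestions toward the user’s choices, can be sketched in a few lines. Everything below (the class name, the noise buckets, the learning rate) is a hypothetical illustration, not Apple’s implementation.

```python
# Hypothetical sketch of a "Personalized Volume"-style feature: learn the
# user's preferred volume at different ambient noise levels, then suggest
# a volume for similar conditions. Illustration only, not Apple's model.
from collections import defaultdict

class PersonalizedVolume:
    def __init__(self, learning_rate=0.3):
        self.learning_rate = learning_rate
        # Preferred volume (0-100) keyed by a coarse noise bucket (dB // 10);
        # 50 is an assumed neutral default for conditions never seen before.
        self.preferences = defaultdict(lambda: 50.0)

    def record(self, ambient_db, chosen_volume):
        """Move the stored preference for this noise level toward the user's choice."""
        bucket = int(ambient_db) // 10
        current = self.preferences[bucket]
        self.preferences[bucket] = current + self.learning_rate * (chosen_volume - current)

    def suggest(self, ambient_db):
        """Suggest a volume for the current ambient noise level."""
        return round(self.preferences[int(ambient_db) // 10])

pv = PersonalizedVolume()
pv.record(ambient_db=72, chosen_volume=80)  # user turns it up on a loud street
pv.record(ambient_db=74, chosen_volume=84)
print(pv.suggest(73))  # suggestion for similar noise drifts toward those choices
```

A real system would of course learn from many more signals than a decibel reading, but the feedback loop, observe the user’s corrections and fold them into future suggestions, is the machine-learning part the feature description implies.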

iPadOS’s Enhanced AutoFill


Apple/Screenshot by Jason Cipriani/ZDNET

The iPad got its own share of new features through iPadOS 17, including a new Health app and the ability to control a smart home item from the home screen. 

Also: Every software feature Apple just announced at WWDC

A standout feature was the iPad’s new PDF enhancements.

Through a feature called “Enhanced AutoFill”, your iPad will be able to identify fields in a PDF to autofill personal information with saved information from Contacts.

To accurately identify the spots on a page that need to be filled out and to pull information from your own contacts, the app likely relies on AI. 
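Apple hasn’t said how Enhanced AutoFill works under the hood; locating fields on an arbitrary PDF is the part that would plausibly involve machine learning. As a loose illustration of the final step only, mapping already-recognized field labels to saved contact data, here is a toy keyword matcher. The contact record and keyword lists are invented for the example.

```python
# Toy sketch of the autofill step: match recognized PDF field labels to
# saved contact data. A real system would use ML to find the fields in the
# first place; this only illustrates the label-to-data mapping.
CONTACT = {"name": "Jane Appleseed", "email": "jane@example.com", "phone": "555-0100"}

KEYWORDS = {
    "name": ["name", "full name"],
    "email": ["email", "e-mail"],
    "phone": ["phone", "telephone", "mobile"],
}

def autofill(field_labels):
    """Fill each recognized field label with matching contact info."""
    filled = {}
    for label in field_labels:
        lowered = label.lower()
        for key, words in KEYWORDS.items():
            if any(w in lowered for w in words):
                filled[label] = CONTACT[key]
                break
    return filled

print(autofill(["Full Name", "E-mail Address", "Fax"]))
# "Fax" matches nothing, so that field is left unfilled
```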

Journal app

As part of its iOS announcements, Apple unveiled a brand new Journal app, with which users can log their everyday activities to improve their mental and physical health. 

Also: Apple’s new Journal app is coming to iPhone: Everything you need to know 

One of the highlights of the app is its ability to make personalized suggestions based on your activity in other apps, such as Photos and Music, or even your location. 

The app’s ability to analyze a user’s behavior and then produce natural language prompts based on its findings involves AI. 

Improved Autocorrect


Apple/Screenshot by Jason Cipriani/ZDNET

If you’ve ever used Autocorrect, it has likely changed your text to something embarrassing or inaccurate at some point. To remedy that, iOS 17 will feature more accurate autocorrection and word prediction. 

Also: 5 useful iOS 17 features Apple quietly released at WWDC 2023

Apple says that the iOS 17 keyboard will leverage a “Transformer Language Model” for these improved text features, confirming the involvement of AI. 
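A transformer language model predicts the next word from the full context it has seen, using learned attention. As a stand-in to show the prediction task itself, here is a toy bigram model built with the standard library; it only looks at the single previous word, which is exactly the limitation transformer models overcome.

```python
# Toy next-word prediction with a bigram frequency model. Apple's iOS 17
# keyboard uses a transformer language model with far richer context; this
# stands in only to illustrate the word-prediction task.
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Count which word follows which across the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev_word, next_word in zip(words, words[1:]):
        model[prev_word][next_word] += 1
    return model

def predict_next(model, word, k=3):
    """Return up to k most likely next words after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

corpus = "on my way home on my way to work see you at home see you soon"
model = build_bigram_model(corpus)
print(predict_next(model, "my"))   # -> ['way']
print(predict_next(model, "you"))  # candidates seen after "you"
```

The bigram model collapses whenever the needed context is more than one word back ("I left my phone at…"); attention over the whole sentence is what the transformer approach adds.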

