At its most closely watched WWDC in more than a decade, Apple did exactly what the market expected: as its "one more thing," it unveiled its first new computing platform since the Mac, iPhone, iPad, and Watch, the mixed-reality headset Vision Pro.
Judging from the on-stage demonstration and subsequent hands-on reviews from the media, Vision Pro, the first hardware product developed entirely under Tim Cook's leadership, has indeed lived up to expectations. It rolls more than a decade of Apple's accumulated industrial design and engineering capability into one device, can fairly be called "the most advanced consumer electronics device in the world," and shows the potential Cook hopes for: to become the next iPhone.
However, despite Vision Pro's stunning showing at the event, Apple's stock closed slightly lower in overnight U.S. trading.
Well-known Apple analyst Ming-Chi Kuo believes this is mainly because Vision Pro's high price (US$3,499, hardly a mass-market figure) and distant launch (early next year, and initially only in the U.S.) weigh on short-term investor sentiment, and because the demonstrations never clearly established why the device is necessary for everyday use.
Put simply, however cool and sci-fi a headset may be, it lacks the necessity of a phone or a computer. As Wallstreetcn noted in an earlier article, the market broadly questions why consumers would move tasks easily handled on traditional devices onto a headset that costs upwards of $3,000 and makes interaction more cumbersome.
Even so, Apple and Cook are the undisputed winners. Through this content-packed "tech Spring Festival Gala" of more than two hours, Apple not only demonstrated its unshakable strength in software and hardware development, but also flexed its muscles, in a low-key and pragmatic way, in newer fields such as AI and brain-computer interfaces.

Vision Pro: What metaverse? Don't mention it!
Judging from the on-stage demonstration, Vision Pro does feel like something out of science fiction.
In terms of interaction, the headset, which looks like a pair of ski goggles, is operated entirely with the eyes, voice, and hand gestures; there is no bulky external controller or smart wristband, as the market had previously speculated.
After putting on the headset and unlocking it with Optic ID (iris recognition), you can experience a scene straight out of an Iron Man movie:
Your eyes still see the world around you, but the entire visionOS interface appears in front of them, including the hundreds of thousands of apps in Apple's ecosystem, Microsoft's Office software, and Apple's own Photos, Notes, and more. To open the app you need, you simply look at it naturally and pinch two fingers together.
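Apple has not published how visionOS routes these inputs internally, but the interaction model, pairing the current gaze target with a pinch event, can be sketched in a few lines. Everything below (the names, types, and the `dispatch` helper) is hypothetical and for illustration only:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the gaze-plus-pinch selection model described above.
# These names and types are illustrative; Apple has not published the real API.

@dataclass
class GazeSample:
    target_id: str | None  # the UI element currently under the user's gaze

@dataclass
class HandEvent:
    kind: str  # e.g. "pinch_begin"

def dispatch(gaze: GazeSample, hand: HandEvent,
             apps: dict[str, Callable[[], None]]) -> None:
    """Open whatever the user is looking at when a pinch begins."""
    if hand.kind == "pinch_begin" and gaze.target_id in apps:
        apps[gaze.target_id]()  # launch the gazed-at app

# Usage: pinching while looking at the Photos icon opens Photos.
apps = {"photos": lambda: print("Opening Photos...")}
dispatch(GazeSample(target_id="photos"), HandEvent(kind="pinch_begin"), apps)
```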
The entertainment experience even surpasses that of competing products. In the live demonstration, Apple showed off Vision Pro's "spatial computing" capability: turn the Digital Crown on the headset and the movie playing in front of you expands almost without limit, blending the surrounding environment with the content you are watching. Imagine having a private theater on a high-speed train, on a plane, or in any room, simply by putting on Vision Pro. Apple even brought Disney CEO Bob Iger on stage to announce that Disney content will be available on Vision Pro.
Apple also emphasizes that Vision Pro is not a device that isolates users from their surroundings. When someone approaches you, the front glass of the headset turns transparent so that others can see your eyes and talk to you; when you are in a fully immersive experience, the glass appears opaque, signaling to others that you cannot see them.
Compared with pure VR devices such as Meta's Oculus line, Vision Pro seamlessly bridges the real and virtual worlds. It deserves its billing as the best, and most expensive, headset available today; at least in look and feel, it far surpasses every VR device on the market anchored to the "metaverse."
Wallstreetcn previously noted that Cook strongly dislikes the metaverse concept. Throughout the entire event, Apple did not mention the word "metaverse" once.
After the WWDC keynote, helpful netizens circulated a side-by-side comparison:

[Image: a user's Digital Persona, generated by Apple with AI technology and more than ten cameras]

[Image: Zuckerberg's metaverse avatar]
Low-key and pragmatic AI strategy
Although the newly launched mixed-reality headset Vision Pro absorbed nearly all the attention, Apple said little about its progress in AI as such. Instead, it let the facts speak for themselves, showing what AI technology can accomplish once deployed in products and how it tangibly improves the user experience.
In fact, as a product company, Apple rarely talks about "artificial intelligence" by name; it prefers the more academic term "machine learning," or simply describes the changes a technology brings once implemented. This lets Apple focus on developing and presenting the product itself, and on how to optimize the user experience.
Specifically, Apple announced at the conference an improved iPhone autocorrect feature built on machine learning. It uses a transformer language model, the same class of technology that powers ChatGPT, and Apple says it will even learn from the user's own text and typing patterns to personalize the experience.
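Apple has not released this model, but the underlying idea, scoring candidate corrections with a language model and keeping the most probable one in context, is easy to illustrate. The sketch below substitutes a toy word-bigram model for Apple's on-device transformer; the ranking logic is the part being demonstrated:

```python
from collections import defaultdict
import math

# Toy illustration of language-model-based autocorrect ranking. Apple's model
# is a transformer trained on-device; a word-bigram model stands in here, since
# the ranking step (score each candidate in context) is the same in spirit.

corpus = "i am going to the store i am going to the park we are going home".split()

bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[prev][word] += 1

def log_prob(prev: str, word: str) -> float:
    """Add-one-smoothed log P(word | prev)."""
    total = sum(bigram_counts[prev].values())
    return math.log((bigram_counts[prev][word] + 1) / (total + 1000))

def autocorrect(prev_word: str, typed: str, candidates: list[str]) -> str:
    """Keep the candidate the language model finds most likely in context."""
    return max(candidates + [typed], key=lambda w: log_prob(prev_word, w))

# After "going", the model prefers "to" over the typo "ti".
print(autocorrect("going", "ti", ["to", "tie", "it"]))  # -> "to"
```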
Apple has also improved the AirPods Pro experience: the earbuds can automatically turn off noise cancellation when the user starts a conversation. Apple does not bill this improvement as a "machine learning" feature, but detecting a conversation is a hard problem, and the solution rests on AI models.
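Apple has not described the detector, and the real feature almost certainly relies on a learned model rather than a simple threshold. Still, the control flow, monitor the microphone and switch off noise cancellation when speech-like activity appears, can be sketched with a deliberately naive energy check:

```python
import numpy as np

# Minimal sketch of "conversation awareness": when speech-like energy appears
# in the microphone signal, noise cancellation switches off. Apple's actual
# detector is a learned model; this energy threshold is a naive stand-in.

SPEECH_RMS_THRESHOLD = 0.05  # illustrative value, not a real calibration

def rms(frame: np.ndarray) -> float:
    """Root-mean-square energy of one audio frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def anc_enabled(frame: np.ndarray) -> bool:
    """ANC stays on unless speech-like energy is detected in the frame."""
    return rms(frame) <= SPEECH_RMS_THRESHOLD

# Simulated frames: near-silence keeps ANC on; a loud frame disables it.
quiet = np.random.randn(1024) * 0.01
loud = np.random.randn(1024) * 0.2
print(anc_enabled(quiet))  # True  -> noise cancellation on
print(anc_enabled(loud))   # False -> switch to transparency
```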
Another very practical idea from Apple is the new Digital Persona feature, which 3D-scans the user's face and body so that when the user wears Vision Pro in a video conference, a virtual recreation of their appearance can stand in for them. Apple also mentioned several other new features that draw on the company's advances in neural networks, such as identifying the fields that need to be filled in a PDF.
Is brain-computer interface technology hidden inside the headset?
On Tuesday, Sterling Crispin, who has worked in AR/VR for a decade and served as a neurotechnology researcher on Apple's AR team, posted a Twitter thread describing Apple's neurotechnology R&D.
According to Crispin, his work at Apple spanned foundational development for Vision Pro, mindfulness experiences, and more ambitious neurotechnology research. In general, much of it involved detecting users' mental states from data collected from their bodies and brains during immersive experiences.
In a mixed-reality or virtual-reality experience, AI models try to predict whether the user is curious, mind-wandering, scared, paying attention, recalling a past experience, or in some other cognitive state.
These states can be inferred from eye tracking, electrical activity in the brain, heartbeat and rhythm, muscle activity, blood density, blood pressure, skin conductance, and other measurements, making it possible to predict behavior.
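As a rough illustration of what such inference might look like, the sketch below trains a classifier to separate "focused" from "relaxed" windows using three synthetic features (pupil diameter, heart rate, skin conductance). The data, features, and labels are all made up; only the general approach, a supervised model over biosignal features, reflects what Crispin describes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch of inferring a cognitive state from biosignal features.
# Everything here is synthetic; a real system would use calibrated sensors.

rng = np.random.default_rng(0)

# One feature vector per time window: [pupil_diameter, heart_rate, skin_conductance]
focused = rng.normal([3.0, 65, 2.0], 0.3, size=(200, 3))
relaxed = rng.normal([4.5, 58, 1.2], 0.3, size=(200, 3))

X = np.vstack([focused, relaxed])
y = np.array([0] * 200 + [1] * 200)  # 0 = focused, 1 = relaxed

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Predict the state for a new window of sensor readings.
window = np.array([[3.1, 64, 1.9]])
print("relaxed" if clf.predict(window)[0] else "focused")  # -> "focused"
```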
According to Apple's patent filings and Crispin's account, Apple's neural technology can predict user behavior and adjust the virtual environment to match the user's state.
The coolest result is predicting what a user will click on before they actually click: the pupils tend to react before the click, because the user anticipates what will happen after it.
By monitoring the user's eye behavior and redesigning the interface in real time to elicit more of these anticipatory pupil responses, the system creates a biofeedback loop, a crude "brain-computer interface" that works through the eyes, no invasive brain surgery required.
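A minimal sketch of that loop, under entirely hypothetical thresholds and signals, might watch for a pupil diameter rising well above its recent baseline and pre-activate the element the user is looking at:

```python
from collections import deque

# Crude sketch of the eye-based biofeedback loop described above: watch for an
# anticipatory pupil response and pre-activate the gazed-at element before the
# pinch lands. The thresholds and signal values are hypothetical.

BASELINE_WINDOW = 30      # samples used to estimate the resting pupil size
DILATION_FACTOR = 1.15    # relative dilation treated as "about to select"

history: deque[float] = deque(maxlen=BASELINE_WINDOW)

def predict_click(pupil_diameter_mm: float) -> bool:
    """True if the pupil dilates well above its recent baseline."""
    baseline = sum(history) / len(history) if history else pupil_diameter_mm
    history.append(pupil_diameter_mm)
    return pupil_diameter_mm > baseline * DILATION_FACTOR

# Feed in a steady signal, then a sudden dilation: the prediction fires early.
for sample in [3.0] * 30 + [3.6]:
    if predict_click(sample):
        print("pre-highlight the gazed-at button")  # adapt the UI in advance
```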
Other techniques for inferring cognitive state include flashing images or playing sounds too quickly for the user to consciously register, then measuring the reaction.
Another patent details using machine learning and signals from the body and brain to predict how focused or relaxed a user is, or how well they are learning, and then updating the virtual environment to reinforce those states.
Imagine adaptive immersive environments that help you study, work, or relax by changing what you see and hear in the background.
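Closing the loop is then just a matter of mapping the predicted state to what the user sees and hears. A trivial, purely illustrative sketch:

```python
# Hypothetical closed loop: map the predicted state to ambience settings.
AMBIENCE = {
    "focused": {"background": "dim library", "audio": "low white noise"},
    "relaxed": {"background": "beach at dusk", "audio": "soft waves"},
}

def update_environment(predicted_state: str) -> dict[str, str]:
    """Choose what the user sees and hears to reinforce the predicted state."""
    return AMBIENCE.get(predicted_state,
                        {"background": "neutral room", "audio": "none"})

print(update_environment("relaxed"))
```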