iOS 18 and Apple Intelligence: Is the future of accessibility really here?

By MR.TheBlind, 10 June, 2024

Hello guys! I thought my first post on this site should be about the new release of iOS 18.
I just wanted to share some of my thoughts on today's keynote.
So, after hearing about the changes coming to iOS 18 in general: the first half of the event presented some genuinely good features, but in my personal opinion nothing really changed that much. The exception is mirroring the iPhone to the Mac. I'm really curious to see exactly how that would work with VoiceOver, since you have VoiceOver on the Mac and VoiceOver on the iPhone at the same time.
I wonder whether the iPhone's VoiceOver will take over the Mac when you're using that feature.
But the real jewel is Apple Intelligence.
What they presented today is really impressive, and it definitely sounds like it requires a lot of processing power too.
And although they did improve Siri, for those of you who saw the keynote, did you notice that the Siri voice sounded pretty much the same as what we already have?
I thought it was going to sound a little more natural, like GPT-4o, but then again, that was just a keynote preview.
We don't know yet. And they made that new deal for ChatGPT, which, by the way, definitely involves a lot of money between those two. lol. I'm also wondering if that's why they haven't released the new GPT-4o features to the regular ChatGPT app; maybe they were holding them back until this event.
Let's see what they bring to the table with that.
It sounds like a lot of what Apple presented today with this new AI involves visual elements, so I'm really curious to see how they incorporate it into accessibility and VoiceOver.
They didn't really describe, for example, how exactly it could work with the camera, or whether it's able to describe images, although that would be awesome.
So I'm really curious to see what you all think about today's announcements. Overall, I think it's a really good approach. I wish they would explain a little more about what its other capabilities are, but other than that, I'm pretty hopeful that this will be accessible to us.
I also wonder, when we create new emoji or new AI-generated images like in the example they presented today, whether we'll be able to read them with VoiceOver with no problems, and whether VoiceOver will also read things correctly when we use the ChatGPT integration.

I'm interested to see what you all think. Did you like it? Did you not like it? Please vent in the comments. Lol.

Comments

By Kevin Shaw on Sunday, June 9, 2024 - 07:13

Did you really need 19 GPUs in your Mac mini? Yes, yes you did.

The announcements today struck the right tone. I paid attention to the on-device processing and the private cloud integration into the OS. We probably didn't hear a lot from Siri because the video just kept going, but rest assured, the voice quality will improve, especially if GPT-4o is being integrated.

We'll see what WWDC cooks up for us this week.

By Holger Fiallo on Sunday, June 9, 2024 - 07:13

You know what they say over there. Show me.

By OldBear on Sunday, June 9, 2024 - 07:13

It will be a couple of years before I need to upgrade my phone, so I'm just watching from the outside. I've just started fooling with the LLM chat bots, and it's difficult to get an idea of what is currently possible. Keep posting lots and lots of threads with your explorations though!

By Matthew on Sunday, June 9, 2024 - 07:13

This looks great, but we'll see how many more bugs inevitably get introduced with this update.

By Chris on Sunday, June 9, 2024 - 07:13

Does this mean the old unsupported devices will still be stuck with the garbage version of Siri?

By Holger Fiallo on Sunday, June 9, 2024 - 07:13

Unless you have a 15 Pro or Pro Max, no AI for you.

By Chris on Sunday, June 9, 2024 - 07:13

I wonder if older devices will be able to take advantage of these features using Apple's servers. I'm a little confused about why they started talking about servers when the whole point was to run as much as possible locally.

By Gokul on Sunday, June 9, 2024 - 07:13

What exactly did they say about supported devices? Did they say anything at all?

By glassheart on Sunday, June 9, 2024 - 07:13

I'm thinking what's probably going on here, Chris, is that yes, it does store things locally on your phone as far as your data goes. Does Apple actually really not have access to it? (Shrugs) I'm not touching that one with a 10-foot pole! I have my feelings on the matter, but for now I'll keep them to myself. That said, regardless, I think the deal is that while the data it processes remains, let's say for argument's sake, locally on your device, keep in mind that a lot of these AI abilities probably use APIs that have to call back to a server to get tokens to process the requests. It's kind of like with ChatGPT: you have to send your request (notice I said request, not data) to the server, then a token handshake has to be sent back to the device. So that's probably where the "server" aspect came into play, if I had to guess.
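To make that concrete, here's a minimal Swift sketch of what such a round trip could look like. The endpoint, key, and JSON shape are hypothetical placeholders of mine, not Apple's or OpenAI's actual API; the point is just that the prompt has to travel to a server and the response has to come back.

import Foundation

// Hypothetical endpoint and key, for illustration only.
let endpoint = URL(string: "https://ai.example.com/v1/complete")!
let apiKey = "YOUR_API_KEY"

// Send a prompt to the provider's server and await the reply.
func sendPrompt(_ prompt: String) async throws -> Data {
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: ["prompt": prompt])

    // This is where the request leaves the device; the model runs server-side.
    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}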

By TJT 2001 on Sunday, June 9, 2024 - 07:13

Per a page on the Apple website:

"Apple Intelligence is designed to protect your privacy at every step. It’s integrated into the core of your iPhone, iPad, and Mac through on device processing. So it’s aware of your personal information without collecting your personal information. And with groundbreaking Private Cloud Compute, Apple Intelligence can draw on larger server-based models, running on Apple silicon, to handle more complex requests for you while protecting your privacy.">

By blindpk on Sunday, June 9, 2024 - 07:13

It of course remains to be seen how this works out in reality, but what Apple says makes me feel that they at least take privacy more seriously than other companies do. The on-device processing is not really different from what they already do with their machine learning, just more complex. The challenge is of course the server-side stuff, but it has been shown that you can send private queries to AI (via e.g. Brave and DuckDuckGo), so of course Apple can do it as well. They would of course very much like to have our data, but that is a different story.
What concerns me more about all this on-device processing is battery life, especially on an iPhone 15 Pro. This is going to take both processing power and RAM to accomplish, so how will that affect battery life? It will be interesting to see once people have been able to test it out more.
Accessibility-wise, I really hope we get integrated ways to have images or the screen described with AI; having that OS-integrated instead of relying on apps would be a huge step forward. The other AI features, like image generation, text summarization, etc., do not really interest me. A more natural Siri would be great, especially if I could ask questions like "take a picture; what does the weather look like here?", or ask it to change a number of settings in one go.

By kool_turk on Sunday, June 9, 2024 - 07:13

A couple of days ago, on the Double Tap podcast, they said this event will either be a 'wow' moment or a 'meh' moment, depending on one's situation. For me, it's a 'meh' moment. I'm on an iPhone 12, so the chances of me getting to use these new AI features are pretty slim.

If the older devices are stuck with the older Siri, then I'll just move on and look forward to playing with the BSI features that sound interesting.

By glassheart on Sunday, June 9, 2024 - 07:13

I don't wanna take us down a warpath here, so I won't go into detail and will leave it to anyone curious to do their own research, but DuckDuckGo is actually not as private as you might think. They got into some heat a while back. I'll just leave it there for curious minds.

As for getting Siri to do multiple things at once, you can already do this to a large degree through the Shortcuts app. You just make a shortcut, then assign a Siri command to it. I fully understand that Shortcuts has a learning curve and isn't for everyone, so that's fair, but my point isn't the ease of use so much as that, technically speaking, albeit in a somewhat roundabout way, it has been doable for a while now if you know what you're doing.

By blindpk on Sunday, June 9, 2024 - 07:13

Yes, both DuckDuckGo and Brave have had their share of controversies. There are very few alternatives indeed if you want something privacy-focused that hasn't had some doubt cast upon it, so sometimes you have to stick with the "more private" rather than the "absolutely private". Anyway, the point is, there are ways to channel AI requests in a more privacy-preserving manner than others do (not requiring a login is a nice start).
Yes, shortcuts are great, but they're rather tedious to set up, especially if you want a lot of info in them, with lots of variables and such. Asking Siri to do these steps instead of spending hours building and testing a shortcut will, at least for me, be much more efficient. Another alternative is of course to ask Siri to build the shortcut :)

By glassheart on Sunday, June 9, 2024 - 07:13

That's fair. I totally get what you're saying. It's all good. Just was making sure it wasn't overlooked, but you're right. Shortcuts can indeed be a bit tricky for those who just wanna get it done, and get it done quick. So trust me. I hear ya.

By Patrick Hurst on Sunday, June 9, 2024 - 07:13

As a VO user, I totally agree with this article. Many new features (including customizable icons, etc.) are visual effects, so I wonder how these will interact with VO, and what VO bugs we should expect.

By Patrick Hurst on Sunday, June 9, 2024 - 07:13

An interesting feature in conjunction with shortcuts would indeed be having generative AI set up customized shortcuts, such as "When I arrive at my office, read the latest email from my boss, if there is any"...

By Laurent on Sunday, June 9, 2024 - 07:13

Like many, I was not at all impressed by this year’s WWDC. However, I must be honest; it has been quite some time since Apple has inspired me. As an owner of an iPhone 13 Pro Max, I was hoping that this year I would finally benefit from a Siri with an IQ higher than that of a protozoan. Some seem to indicate that this will not be the case. In other words, only a tiny minority of iPhone users will have access to an assistant worthy of 2024. Bravo, Apple, and thank you again for taking such good care of your users. I thus simply pose the question: why does transferring a request made to Siri to ChatGPT require additional computing power in the iPhone? Finally, I would add that the argument of personal data protection is somewhat superficial. 90% of the requests made to Siri are general in nature and do not involve any personal data.

By Maldalain on Sunday, June 9, 2024 - 07:13

So the majority of new features are injected into accessibility. Name me one big upgrade for VoiceOver on the Mac that we can all agree has positively changed the way we use our Macs.

By Ann Marie B on Sunday, June 9, 2024 - 07:13

I am also curious how the revamped Siri will interact with my iPhone 12. If my phone isn't supported by the on-device AI, hopefully it will at least be supported by the new and improved Siri capabilities. It would be cool to just tell Siri to empty the junk email from both accounts... This is why I've been using the Outlook mail app for both accounts: it is more efficient than the native Mail app. (I digress.) I'm also glad to see that you can finally customize the lock screen according to personal preference. That should have been implemented long ago. I can't wait to replace the camera and flashlight with something like weather and Trivia Crack. :) As for the visual appeal of customizing the home screen into something other than a grid format, I can take that or leave it. I depend on VO for everything, so I'll have a play with the home screen.

By PaulMartz on Sunday, June 9, 2024 - 07:13

The ability to automatically do things when you reach a specific location was one of the decades-old promises of Bluetooth technology.

By PaulMartz on Sunday, June 9, 2024 - 07:13

I was hoping Apple would really leapfrog the rest of the AI industry. But the more I think about what was actually announced, the less impressed I become.

I think it's great that I'll be able to tell Siri to copy an address out of a text into Contacts. And while the presentations made a lot of noise about the App Intents interface and how many options will be available, it smacks of the typical "rules" API that Apple has relied on for decades. I can almost see the code making a call into the App Intents API with the parameter CopyAddressToContacts.
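
To illustrate what I mean by a "rules" API, here's a rough sketch of what exposing one of these operations through the App Intents framework looks like in Swift. The intent and parameter names are my own made-up example, not anything Apple announced:

import AppIntents

// Hypothetical intent, for illustration; not an actual Apple-provided one.
struct CopyAddressToContactsIntent: AppIntent {
    static var title: LocalizedStringResource = "Copy Address to Contacts"

    // The address Siri would extract from a text and hand to the app.
    @Parameter(title: "Address")
    var address: String

    func perform() async throws -> some IntentResult {
        // A real app would look up the contact and save the address here.
        return .result()
    }
}

Every capability has to be enumerated ahead of time like this, which is exactly why it feels like a fixed menu rather than a general-purpose agent.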

Copy an address into Contacts? Great. But how about: "Book me a flight to Detroit. Check multiple airline websites and find me the best deal leaving July 31 and returning August 4. Ask me if you need more info." What I would expect, and what I've been hoping for, is an AI that understands links and buttons and text fields - that interfaces with my computer in much the same way that I do. Such an AI would navigate airline websites with ease.

Sifting through texts and emails to find answers sounds cool and will be truly useful. But the strong implication from WWDC is that Siri will be limited to searching through data stored by Apple apps, and maybe the handful of apps that are customized to use Apple's proprietary API. What if the information I'm looking for is in a Scrivener file? Or a PDF? Am I just screwed?

Instead, how about monitoring which documents I open and keeping track of the data I've looked at and where it's stored? Maybe Siri still couldn't interface with Scrivener, but at least when I ask "Where did I put that note about CRISPR babies in China?", Siri would be able to point me to the Scrivener file where I stashed all those gene-editing notes.

What Apple has done here is quite nice. But in typical Apple fashion, they have created a proprietary API with a limited set of operations rather than providing a general solution capable of performing a wide variety of tasks. The result feels dated, like what Siri should have been if it had been done correctly 13 years ago. Apple has taken generative AI and made it feel heuristic.

My comments are based on the WWDC marketing vids. I haven't tried it yet because it's unavailable even in the iOS 18 beta. Once I give it a spin, I'll be happy to be pleasantly surprised, and watch for my post here at AppleVis where I eat my own words.

By TheBllindGuy07 on Sunday, June 9, 2024 - 07:13

I can't test it on my iPhone 14, which is my main device, but for the moment only the BSI feature makes me really excited, so I can dream of buying an iPad as my notebook and doing whatever VoiceOver's god on my Mac permits.
Or just iPhone mirroring to the Mac? I will always feel so sad when I do that, because I'll know I am using my overpowered Mac as a headless terminal for my iPhone because of how bad VoiceOver is on the Mac.
As for home screen reorganization, it couldn't be worse than widgets on Sonoma, where the widget rotor element requires you to already be on a widget before it will move your cursor to the next or previous widget (if any), could it? :)