Hi everyone,
I am a totally blind user from Poland. As you probably know, Apple Intelligence is not available in my country yet, but I am following all the updates very closely because I can't wait to see how it works for us. I am reaching out to those of you who already have access to it and use VoiceOver every single day.
I really want to know your honest opinions on how this works in real life:
1. The new Siri. Is it actually better for someone who can't see the screen? How does it handle everyday tasks now?
2. Notification summaries and writing tools. Does VoiceOver read the summaries correctly? Is the text editing easy to use when you type?
3. Visual Intelligence. For those who use the camera features, is it better or faster than things like Seeing AI or Be My AI?
I am personally so excited and I really can't wait for 2027, hoping that by then we will finally get full access to all of this here in Poland.
Let me know what you think and thanks for any answers!
By Kuba, 15 May, 2026
Comments
Greetings
Hi! Greetings from Indonesia. As a daily VoiceOver user, here is my honest, real-life take on Apple Intelligence right now, and how it compares to the tools we already rely on.
### 1. The New Siri
To be completely blunt, the "new" Siri is still very far behind dedicated LLMs like ChatGPT or Gemini. The animation looks cool, and it is slightly better at understanding context and at recovering when you stumble over your words, but its ability to handle complex everyday tasks smoothly with VoiceOver isn't a massive leap forward yet. For deep queries or advanced tasks, you will still find yourself opening standalone AI apps.
### 2. Notification Summaries & Writing Tools
VoiceOver does a decent job reading the notification summaries, and it can save you some time scanning through a clutter of messages. However, there are still occasional focus bugs where VoiceOver doesn't automatically land on the right text block or gets stuck on the summary buttons. As for the Writing Tools, they work, but formatting or editing text via VoiceOver feels a bit clunky compared to just pasting your text into ChatGPT or Gemini and asking them to clean it up.
### 3. Visual Intelligence vs. Seeing AI / Be My AI
This is where the biggest gap is. Visual Intelligence is nowhere near Be My AI or even Seeing AI at the moment. It feels like a basic visual search (mostly just pulling Google results or quick tags) rather than a true descriptive assistant for the blind. For actual daily independence—like describing a room, reading a complex layout, or detailed object identification—**Be My AI and Seeing AI remain light-years ahead.**
### Conclusion
For now, my best advice is to keep relying heavily on **Gemini, ChatGPT, Be My AI, and Seeing AI**. They are much more powerful and mature tools for our specific needs. Let's hope that by the time iOS 27 rolls around, Apple will have truly integrated these features to be genuinely useful and accessible for us!
Greetings, and I hope you get to try it out soon in Poland!
TLDR
Apple Intelligence is as good or bad for blind users as you think it is for everybody else. :)