Third-party screen readers on macOS

By paras shah, 14 August, 2023


Hi all,
I heard that third-party screen readers will be installable on the Mac. Is this true?
Has Apple opened up the API?
Thanks


Comments

By Devin Prater on Saturday, August 26, 2023 - 06:13

Maybe third-party TTS engines, but not screen readers.

By Jimmy on Saturday, August 26, 2023 - 06:13

I always thought such third-party apps would be practically feasible, especially with the introduction of the accessibility permission a couple of updates ago.
With VOCR, for example, as I understand it, it can still access content on the screen, perform OCR, and control the keyboard and mouse pointer, moving those two cursors to wherever you are as you navigate with the Control+Command+Arrow keys.
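
Roughly, as I understand it, that kind of pipeline could be built on Apple's public frameworks; the sketch below is only my guess at the approach, with made-up function names, and is definitely not taken from VOCR's actual code:

```swift
import Cocoa
import Vision

// Rough sketch of a VOCR-style pipeline: grab the screen, OCR it, and report
// what was found. All names here are invented for illustration.
func recognizeTextOnScreen() {
    // Capturing the screen requires the Screen Recording permission.
    guard let screenshot = CGWindowListCreateImage(
        .infinite, .optionOnScreenOnly, kCGNullWindowID, .bestResolution
    ) else { return }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        for observation in observations {
            if let candidate = observation.topCandidates(1).first {
                print(candidate.string)   // a real tool would hand this to TTS
            }
        }
    }
    request.recognitionLevel = .accurate

    try? VNImageRequestHandler(cgImage: screenshot, options: [:]).perform([request])
}

// Routing the hardware mouse pointer to a recognized item; converting from
// Vision's normalized coordinates to screen points is omitted here.
func movePointer(to point: CGPoint) {
    _ = CGWarpMouseCursorPosition(point)
}
```
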
I just thought the biggest thing preventing someone from writing an independent screen reader for the Mac is that it's simply too cumbersome a task, with no guaranteed return.
These are just my own thoughts. I don't know much about coding, so please correct me if I'm wrong.

By Enes Deniz on Saturday, August 26, 2023 - 06:13

Look at Narrator on Windows. What happens if third parties decide to make their own screen readers for macOS, and possibly other operating systems like iOS, and Apple gains yet another reason to ignore bug reports and feature requests, treating VoiceOver as a built-in screen reader that only needs to cover the basics? Could this also be why TalkBack is not as functional as VoiceOver is on iOS and iPadOS? After all, there are other Android screen readers, like Jieshuo/Commentary.

By paras shah on Saturday, August 26, 2023 - 06:13

I heard on one of the forums that it might happen, but I'm not even sure. Just wondering.

By Ash Rein on Saturday, August 26, 2023 - 06:13

I wouldn't mind using JAWS or NVDA on the Mac or iOS. It's probably inevitable, really.

By OldBear on Saturday, August 26, 2023 - 06:13

Many here seem to say macOS is not designed to be used with a keyboard but rather graphically, with eyes and a mouse, and that VO is basically tacked on as an afterthought.
So why would a third-party screen reader be much better than VoiceOver? It's kind of like how the web browsers on iOS all use the same underlying WebKit, so if there's a fundamental problem or irritant with WebKit, there's a problem with every browser on iOS.

By peter on Saturday, August 26, 2023 - 06:13

It is always good for a user to have options and choices to meet their individual needs and work styles. Also, competition is usually a good thing.

Also, as I pointed out in another forum thread, many accessibility bugs aren't identified until after a general update to iOS or macOS has shipped. Then, once such bugs are identified, users won't see a fix until the next general update of either OS.

With a third-party screen reader, however, fixing such bugs doesn't have to wait for the next general update to the OS. Of course, Apple could take this approach with VoiceOver, but they don't. Why should they? There aren't any other third-party screen readers that will fix the issues before Apple does!

--Pete

By Ash Rein on Saturday, August 26, 2023 - 06:13

It bothers me when people say something as silly as "competition makes things better." Did the competition between Android and iOS make things better? Has competition between Microsoft and Apple made things better? Did competition between NVDA and AMD make things better?

This is capitalism. Despite what people are reading, competition under these circumstances doesn't make things better. Technology has stagnated. It's generally stalled. Everything is coming out in trickles. Everything is expensive and getting more expensive. Instead of competing with each other, they are literally working together to set prices.

A third-party screen reader isn't about competition. It's about choice. If there is a screen reader that I have to pay for that improves accessibility, I will use it over VoiceOver. That is all. You will too. It won't incentivize Apple to make improvements. They have plans that extend over the next 10 years. They won't change because a new product comes onto the market. For example, ChatGPT has been on the market for months, and Apple's response has been to do almost nothing about it. Maybe some investment to eventually bring something out. But competition didn't faze them. iOS 17 is coming with no generative AI. iOS 18 will probably come out without any generative AI either.

Please stop making this about buzzwords. Keep it about the benefits of having a third-party screen reader. At this point, it's not about competition. It's about being able to use our products in a consistent and meaningful way. VoiceOver is problematic. JAWS is problematic. NVDA is problematic. They each have good points, and I would rather have all of them if it means getting greater use out of everything.

By peter on Saturday, August 26, 2023 - 06:13

We may not have seen anything about large language models from Apple yet, but there is no doubt that users will see enhancements to Apple's ecosystem along those lines. That type of response from developers *is* driven by competition.

Plus, as you say, with more companies looking for more and more market share, users do wind up with more choices.

--Pete

By Ash Rein on Saturday, August 26, 2023 - 06:13

Apple isn't competing. The iPhones in our hands are made in the same factories, with the same processes, as the Android phones. They generally use the same materials. They are both based on Unix. They are essentially priced the same, with even the same discounts. Market share is an illusion; it hasn't really changed in the past 15 years or so, maybe a few points up and down.

I want competition. I think it would be cool. But there is no incentive to actually have it under this economic system. I don't want a third-party screen reader because it will make Apple improve their screen reader. These companies' goal isn't to outdo each other. Their goal is to see what people want, repackage it, and sell it at a major profit. Apple isn't a software or hardware company; it is literally a marketing company. Its goal is to convince you that you want something. VoiceOver wasn't created to make things accessible. They do not care about us. We are not human beings to them; we are consumers. They did it because it gave us an incentive to purchase a $1200+ phone that is no different from three years ago. Microsoft followed that, Google followed it. None of them did anything to outdo the other. And they have so much money that it would be a cakewalk to outdo each other every year. But it would cut into profits.

GPT will come to Apple at some point. And it will literally be the same as what we see Microsoft and Google doing. Like I said, technology is purposely staggered. It is even stalled to squeeze out every last penny they can before moving a little bit forward.

China, on the other hand... well, they are starting to actually innovate and create products that make ours look archaic. I wouldn't mind a phone that can charge to 100% in 8 minutes, or a device that auto-translates on the fly.

I hear that a Chinese developer is actually creating a new kind of screen reader. I don’t remember where I read it. And I do not have details. It was just a blurb I read. And I have faith that the Chinese developer will actually create something that changes our lives for the better.

By OldBear on Saturday, August 26, 2023 - 06:13

My point had nothing to do with competition or economics. Some comments on this forum seem to suggest the accessibility API of macOS is... harebrained, because the Mac is a graphical system meant to be used with a mouse, not a keyboard. I have no idea, not being a macOS user, but if that is the case, it doesn't matter how many screen readers there are; they will all have to use the same underlying, harebrained accessibility API.
There is only one graphical screen reader on the OS I use, though I can also go command-line with a few other screen readers, and it is functional without having to compete with other graphical screen readers. I adjusted the way I used a computer, and my mindset toward it, when I left Windows. I would be interested in macOS, except I keep reading that macOS has some kind of core issue with half-baked accessibility, and that it's not just that VoiceOver has issues, but that macOS has an interface that is hostile to screen reader accessibility.
Other than some odd bugs that seem to only afflict specific devices, iOS seems very accessible to me with only VO as the screen reader.

By Chamomile on Saturday, August 26, 2023 - 06:13

Call me cynical towards macOS, but I don't know if it'll really help. I don't know the behind-the-scenes of it all, but Windows screen readers and VoiceOver operate very differently; VoiceOver requires a lot of jumping around. I really don't think having NVDA and JAWS would make accessibility on a Mac any better, but who knows?

By Tim on Saturday, August 26, 2023 - 06:13

It is unlikely we will see any additional screen readers for macOS. Creating a screen reader involves a major commitment of time and money. JAWS is unlikely to make that commitment because it is a commercial product, and its developer would need to believe they could sell enough copies on macOS to make it worth their time. That is unlikely on an OS with a well-established, free, built-in screen reader that works reasonably well for most people. NVDA is a free, open-source screen reader; however, they would need to acquire major funding before they could do something like this. I suspect that if we ever see any additional screen readers on macOS, they will need to start as open-source projects that will, unfortunately, have to rely heavily on volunteers. But time will tell if my thoughts on this topic end up being correct.

By Dominic on Saturday, August 26, 2023 - 06:13

It might take a while for the devs to get up the courage and work their arses off to make the screen reader, but then it could possibly work. Maybe 2025 or 2026 at the earliest.
It would take a while to write the code, though.
But I definitely think this is a real possibility, especially with AI, with Apple slowly getting more open source, and with the code Apple keeps releasing. With the speech API, you could make a screen reader out of that, even if it's just a basic screen reader driven by command-line processing.
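
As a bare-bones illustration of that last idea, a command-line tool that speaks whatever text is piped to it could look roughly like the sketch below. It uses the long-standing NSSpeechSynthesizer API and is only an illustration, not a real screen reader:

```swift
import AppKit

// Bare-bones sketch: speak every line piped to this tool using
// NSSpeechSynthesizer. A real screen reader would track focus and query the
// accessibility tree instead of reading stdin.
let synthesizer = NSSpeechSynthesizer()

while let line = readLine() {
    _ = synthesizer.startSpeaking(line)
    // startSpeaking(_:) is asynchronous, so pump the run loop until the
    // current utterance finishes before moving on to the next line.
    while synthesizer.isSpeaking {
        RunLoop.current.run(until: Date().addingTimeInterval(0.05))
    }
}
```

Piping the output of a shell command into it would read that output aloud, which is about as "command-line processing" as it gets.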

By Enes Deniz on Saturday, August 26, 2023 - 06:13

Ash, could that Chinese screen reader you mentioned possibly be ZDSR, also known as Zhengdu Screen Reader?

By Ash Rein on Saturday, August 26, 2023 - 06:13

I wish I could provide more information, but I don't remember. I just remember, late one night, reading about a possible new type of screen reader coming out, and that it was a Chinese developer. I honestly wish I had bookmarked the page, but you might be right.

By Jimmy on Saturday, August 26, 2023 - 06:13

No, the Chinese-origin screen reader that I am using at the moment is only known as either Jieshuo or its English equivalent, Commentary Screen Reader. As far as I know, it is not known as Zhengdu.

As to how a third-party screen reader could be better than Apple's own built-in VoiceOver, I believe commitment and flexibility are some of the key advantages. Take Jieshuo as an example: because its developer is dedicated to working exclusively on improving the app, with much of that work fed directly by users' daily experience, it has many extremely streamlined and useful functions. One typical example is the ability to assign app-specific gestures to quickly activate a certain element. When using YouTube, I can assign a three-finger swipe right to skip forward 10 seconds and a swipe left to skip back. Better still, I can assign a three-finger double-tap to first activate the More Options button, then Speed, then 1.75x, all with one single gesture.
That level of flexibility is something I don't expect from VoiceOver, which is the responsibility of a team already burdened with too much to build, test, and debug, at least not that quickly. Yes, somewhere far down the road they may eventually decide to do that. However, with shortcut customisation not even available for VoiceOver on the Mac, despite it already being available on iOS, I still have so much more to hope for, haha.

By Hmc on Tuesday, December 26, 2023 - 06:13

This was discussed a long time ago: whether FS would port JFW to other platforms, e.g. JFM (JAWS for Mac). But seriously, Apple hasn't truly cared about VoiceOver on macOS in a long, long time.
Microsoft does many things wrong, but their work on Narrator is pretty impressive, especially compared to where it was five or ten years back.
Apple has done this, but only with their mobile platforms. The VO for Mac updates are gimmicks, not QOL (quality of life) updates that expand its power and scope as a screen reader. In other words, Mac VO is the prototype and iOS VO is the true king here.
If macOS has shoddy accessibility work, why would changing the screen reader do much? Sure, they might be able to make it faster from key press to TTS and trim all the fat. But as far as functionality and behavior go, I don't know how much it would matter, unless you start doing screen scraping and OCR everywhere, or actually develop a more cohesive approach to app layouts and element information on screen. Unless that happens, we might just get another VO clone that uses different verbiage but offers generally the same crappy experience.

By kchro3 on Friday, January 5, 2024 - 06:13

Hi, I know this thread is a bit old, but I wanted to reach out and say that I'm working on Typeahead, a 3rd party screen reader that uses AI to help navigate apps and websites. It is very early in its development, but I'm looking for beta testers to try it out and give me feedback.

If you are looking for:

  • a smoother, natural interface to navigate your computer
  • fast product iterations
  • an open line of communication to the developer (me)
  • an experienced, hard-working software engineer ;) (linkedin)

I would love for you to try it out (download link here). If you'd like, you can email me at jeff@typeahead.ai to schedule an onboarding session with me, and please don't hesitate to ping me with any feedback or questions.

I think that I can deliver a better product experience than VoiceOver with your help.

By Hmc on Monday, February 12, 2024 - 06:13

Hello,

Typeahead sounds like it could fill the gaps in macOS, especially where VO is utterly silent or just says "unknown", etc.
Are Intel versions coming for High Sierra, or will a newer OS be required? I'd love to test on M1 but haven't lit money on fire to buy one yet lol.

From the little video of saving the PDF and opening Mail, this sounds more like a TTS narrating what's going on, but not the interactions to get there. E.g., I know Mail is opening, but what does the main screen have on it? Inbox, one message, one unread, blah blah, etc. Can Typeahead speak that info as well? That's pretty integral to a screen reader. :D

I'll most likely reach out via email to discuss further. There's no doubt I'd love to see a flexible screen reader that could bridge gaps that VO doesn't know how to cross.

Some potential brainstormy ideas or food for thought:

Macros sound like a good idea: recording steps like "click the green button, then wait 200 ms, then read some part of the screen." I'd say screen reading is more dynamic than a bunch of rigid steps, but that's where the AI thing comes in: being able to know the state of an app with certainty, or, if not, falling back to a perhaps more verbose but still accurate system until the new interfaces being shown can be trained on or learned from.
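
To make the macro idea concrete, a recording could be as simple as an ordered list of steps. Everything below is invented for illustration (the step names and executor are not from any shipping screen reader), and posting synthetic clicks needs the Accessibility permission:

```swift
import Cocoa

// Hypothetical shape of a recorded macro: an ordered list of steps.
enum MacroStep {
    case click(at: CGPoint)         // post a synthetic left click
    case wait(milliseconds: UInt32)
    case read(String)               // a real tool would send this to TTS
}

func run(_ macro: [MacroStep]) {
    for step in macro {
        switch step {
        case .click(let point):
            let down = CGEvent(mouseEventSource: nil, mouseType: .leftMouseDown,
                               mouseCursorPosition: point, mouseButton: .left)
            let up = CGEvent(mouseEventSource: nil, mouseType: .leftMouseUp,
                             mouseCursorPosition: point, mouseButton: .left)
            down?.post(tap: .cghidEventTap)
            up?.post(tap: .cghidEventTap)
        case .wait(let ms):
            usleep(ms * 1_000)
        case .read(let text):
            print(text)
        }
    }
}

// "Click the green button, wait 200 ms, then read some part of the screen."
run([.click(at: CGPoint(x: 120, y: 80)),
     .wait(milliseconds: 200),
     .read("Document saved.")])
```
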

Would Typeahead be able to do OCR of a screen and interact with its controls? E.g., scan the screen for text and images, present that info in a meaningful way, and click various controls, using only the keyboard of course?
The system should be able to make certain assumptions based on UI element positions, colors, or shapes of buttons, etc., so that even a control with no textual info or accessibility label could still be read correctly. X buttons would just say "close button", a clickable logo could perhaps say "menu", and any text in the logo, like brand names, could be read too.
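
As a purely invented sketch of that fallback heuristic (not any real product's logic, just the shape of the idea, and assuming a top-left coordinate origin):

```swift
import Cocoa

// Invented example of heuristic labeling: if a control exposes no
// accessibility label, guess a spoken name from its role, size, and position
// instead of staying silent.
func spokenLabel(role: String, accessibilityLabel: String?, frame: CGRect,
                 windowFrame: CGRect) -> String {
    if let label = accessibilityLabel, !label.isEmpty {
        return "\(label) \(role)"
    }
    // Small square button near the top of the window: probably a close button.
    if role == "button",
       frame.width < 30, frame.height < 30,
       frame.minY < windowFrame.minY + 40 {
        return "close button"
    }
    // A clickable image at the very top of a page: call it a logo or menu.
    if role == "image", frame.minY < windowFrame.minY + 60 {
        return "possible logo or menu button"
    }
    return "unlabeled \(role)"
}
```
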

Good keyboard commands are necessary for screen readers. That's a touchy subject with people, because we all come from various platforms and there's no one keymap to rule them all, so to speak. :D

It's mentioned there's also a Windows version coming. While this is great in its own right, I'm wondering how much underlying info Typeahead can discern from the OS itself. Windows and Mac are different beasts in how screen readers get their info: Windows uses a combination of UI Automation and IAccessible2, and probably many more newer APIs that I've fallen out of touch with, while macOS uses its own, different API. That said, there are basic similarities between them. E.g., all controls can have a type, state, and value, as well as a text label describing what they are.
So I guess the question is: is Typeahead strictly looking at screen information and building its control set for an app from that? Or is there a possibility of delving deeper into what an app exposes through the various accessibility APIs?
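
For reference, on the Mac side, pulling those basics out of the system accessibility API looks roughly like the sketch below. The attribute names are the real AX constants; everything else is a minimal illustration and assumes the calling app has been granted the Accessibility permission:

```swift
import ApplicationServices

// Minimal sketch: ask the macOS accessibility API (AXUIElement) for the role,
// title, and value of whatever control currently has keyboard focus.
// Requires the Accessibility permission (see AXIsProcessTrusted()).
func describeFocusedElement() {
    let systemWide = AXUIElementCreateSystemWide()

    var focusedRef: CFTypeRef?
    let err = AXUIElementCopyAttributeValue(systemWide,
                                            kAXFocusedUIElementAttribute as CFString,
                                            &focusedRef)
    guard err == .success, let focused = focusedRef else { return }
    let element = focused as! AXUIElement

    // Helper that fetches a single string-valued attribute from the element.
    func stringAttribute(_ name: String) -> String? {
        var value: CFTypeRef?
        guard AXUIElementCopyAttributeValue(element, name as CFString, &value) == .success else {
            return nil
        }
        return value as? String
    }

    let role = stringAttribute(kAXRoleAttribute) ?? "unknown role"
    let title = stringAttribute(kAXTitleAttribute) ?? ""
    let value = stringAttribute(kAXValueAttribute) ?? ""
    print("\(title) \(role) \(value)")   // e.g. "Send AXButton"
}
```
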

Thanks for reading, and apologies for the lengthy post. :)