Just a message to let everyone on the dev channel know that beta 2 is out. Enjoy, and report your findings.
By Dennis Long, 21 June, 2023
Your Feedback Is Important for Improving Accessibility on Apple Platforms
Don't assume that Apple is aware of a bug or that your report won't make a difference - submitting bug reports directly to the company helps them reproduce and resolve the issue faster. Your report may provide crucial information. The more reports they receive, the higher the priority they give to fixing the bug.
If you're using a beta version, use the Feedback Assistant on your device to submit feedback and bug reports. If you're using a public release, follow the instructions on this page.
Comments
Eloquence Dictionary is now in there!!!
Eloquence Dictionary is in there!!!
How! How does that work?
How does that work?
Anything else new?
Eloquence high sample rate
Hi. Does the Eloquence high sample rate still sound like it's got no front teeth?
Yes the high sample rate is…
Yes, the high sample rate is still missing teeth. I’m going to report it.
Say goodbye to image descriptions
Image descriptions no longer work in beta two. Going into the settings reveals that image descriptions are using zero KB. Turning off and on the descriptions does not fix the issue. On the bright side, screen recognition and text recognition still work.
First that is the wrong…
First, that is the wrong attitude to have! I reported it. The feedback ID is FB12411669.
Eloquence dictionary is on by default
It is under Per-Voice Settings. The thing you need is on by default. This is awesome.
Eloquence
What is the Eloquence dictionary?
Is the audio message problem fixed?
I hope so. If it is, I'm going to update.
Image descriptions
There is a workaround for the image descriptions issue. I went into the VoiceOver settings and turned off everything under VoiceOver Recognition. I then restarted my device and turned everything back on, and it now says that image descriptions are using 43 MB.
Sending a text with braille screen input remains broken
Hi,
In iOS 17 beta one, it became impossible to send a text message using a three finger swipe up when in Braille screen input, which is how I mainly type on my phone these days. As of beta two, this issue is still not fixed. I will definitely be reporting this.
Has anyone else seen this?
Sending messages with braille screen input
Yes, I have noticed this behavior. Braille screen input doesn't work as well as it used to in iOS 16. Not that I'm complaining; I understand that bugs will happen and are to be expected, just stating my observations. :) Have a great day.
Regarding: image descriptions
I tried your method of turning off all of the VoiceOver Recognition features and restarting the device. Although this did show that a download was required, once it had actually downloaded, it was only performing text recognition, like an iPhone 7 or 8 without the Neural Engine would.
I will report this later, because I’m going to be busy all day today.
So as of now, you only have text recognition.
More on dictionary support in Eloquence
So the big news of the day is dictionary support, or dictionary inclusion, in iOS 17 Beta 2. As many of you know, the Eloquence/IBMTTS community dictionary is at https://github.com/thunderdrop/IBMTTSDictionaries
Apple is now using this dictionary as part of its Eloquence offering in iOS 17 Beta 2, and the main dictionary support can't be turned off. However, what can be toggled is support for the abbreviation dictionary, which covers things like AM, WHO, NJ, LA, etc. Another point worth mentioning is that Apple is *not* using the latest release of the community dictionary on GitHub, so I urge you to contact Apple and encourage them to upgrade to the latest available release.
Image description
I'm about to submit feedback regarding the image description bug. I thought I'd fixed it by redownloading the files, but that doesn't seem to help. I'll submit feedback and comment with the number.
Eloquence Dictionary is awesome!
The Eloquence Dictionary is awesome! I'm glad Apple has finally added it. Now it pronounces things correctly, such as Gmail and YouTube. There is another option I found called Phrase Prediction, which is off by default but can be turned on if desired. Very cool.
What does phrase prediction do with Eloquence?
Hi all, can't we just have the Eloquence that Android has? Also, when reading quotes, are the apostrophes taken away? As said before, it seems that no proper user testing was done, or if it was, Apple didn't listen to it during the iOS 16 beta cycle. The voice in iOS 16 just sounds tinny and not as punchy as Eloquence on the PC with JAWS, or on Android. Just my thoughts; I hope that by the time 17 comes around, it will be worth using Eloquence, in my case to read Kindle.
Sound Curtain
This is new under audio. The description says the following:
"Sound Curtain ensures that your iPhone does not play sound from music or sound effects. Emergency alerts will still play sounds."
All sounds, including music and VoiceOver, are completely silenced. This appears to be useful for deaf or deaf-blind users who do not need sound at all and use a braille display. The setting is located next to Audio Ducking. If you need it, you can assign a touch gesture or keystroke to it. Unfortunately, there is no braille display command for Sound Curtain.
Different problem with image descriptions in iOS 17 beta
Hi everyone, long time lurker and first time poster here. I have an interesting problem with image descriptions that has not been discussed here yet. I’m wondering if anyone else is experiencing this. My descriptions are working, but now instead of hearing full sentences describing the photo, I’m hearing one-word descriptions of all the objects in the picture. For instance, there are a few photos of me petting a dog on the couch. In iOS 16, I would’ve heard a description like “a man sitting on a couch next to a dog”. Since the upgrade, I’m now hearing a description that sounds like this: “Couch, drapes, window, room, man”. This behavior is consistent across all photographs, not just the specific ones I mentioned here. Interestingly, image descriptions do seem to do a better job of recognizing and speaking text. What are your thoughts?
@JC
I completely agree with you.
This isn’t full image descriptions
What you’re describing isn’t image descriptions.
That is object detection. Remember how, before Apple released full image descriptions in iOS 12 or 13, descriptions were like that as well? There’s a problem with the image description data itself, which is why you are more likely to get one-word labels rather than full sentences.
As of now there is no fix, so have fun waiting the next two weeks for a fix in beta three.
Yes, I know it sucks, but that’s a risk you take when testing iOS beta software.
I honestly don’t mind.
If I find a fix, I will let you know.
Please keep reporting bugs to Apple, otherwise they may not fix your issues.
If you find any other features or bugs, please put them here for all others to see.
Eloquence phrase prediction
What is phrase prediction?
Exactly that, phrase prediction.
It predicts how phrases should sound and may add pauses or inflection where it thinks they should go. For example, say you had the phrase,
The quick brown fox jumped over the lazy dog, and then the dog chased a cat.
Eloquence will read that correctly regardless of whether phrase prediction is turned on or off. If that phrase had no punctuation marks like this,
the quick brown fox jumped over the lazy dog and then the dog chased the cat
Eloquence would try to predict where the pauses in that sentence would go.
From my testing, it’s not 100% accurate, but it is there if that floats your boat. Instead of getting the sentence correct, it might try to do this,
The quick brown fox jumped over the lazy dog and then, chased a cat.
It may get it right, but I have hardly done any testing with Eloquence in iOS 17, because I have been using the new Neural Engine Siri voices, which I honestly like.
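Phrase prediction itself happens inside the synthesizer and isn't exposed as an API, but if you're curious what "guessing where the pauses go" amounts to, here is a toy sketch using AVSpeechSynthesizer. The clause-splitting rule is completely made up for illustration; Eloquence's real model is not public.

```swift
import AVFoundation

// Toy illustration only: real phrase prediction is internal to the
// synthesizer. Here we fake it by guessing a clause boundary at a
// conjunction and speaking each chunk with a short trailing pause.
let unpunctuated = "the quick brown fox jumped over the lazy dog and then the dog chased the cat"
let guessedClauses = unpunctuated.components(separatedBy: " and then ")

let synthesizer = AVSpeechSynthesizer()  // keep a strong reference in real code
for clause in guessedClauses {
    let utterance = AVSpeechUtterance(string: clause)
    utterance.postUtteranceDelay = 0.3  // the "predicted" pause, in seconds
    synthesizer.speak(utterance)        // utterances queue up automatically
}
```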
Fun fact: this was in a very early beta of iOS 16, I think iOS 16.0 beta two, and there was no way to turn it on or off; it was just on. It was removed, I think in iOS 16 beta three, and has now returned, just one year later.
Hope this clears up any confusion.
P.S. Who remembers in iOS 16 beta one when Eloquence would say "question mark" instead of the apostrophe character? So instead of saying John’s iPhone, it would say "John question mark S iPhone." Or who remembers when Eloquence wasn’t speaking emojis?
`[1imo. jis
Eloquence still can’t pronounce the word emoji. If you’re using Eloquence, you might be able to add a pronunciation for the word as "ihmo je."
It’s not perfect, but it’s better than `[1imo. jis.
I’m going to report that to Apple.
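Outside of VoiceOver's pronunciation editor, apps that speak text themselves can hint a pronunciation with an IPA attribute on the string. A minimal sketch, assuming an en-US voice that honors the attribute (not all voices do, and the IPA value below is my own guess at the pronunciation):

```swift
import AVFoundation

// Attach an IPA pronunciation hint for the word "emoji".
let text = NSMutableAttributedString(string: "I love emoji")
let range = (text.string as NSString).range(of: "emoji")
text.addAttribute(
    NSAttributedString.Key(AVSpeechSynthesisIPANotationAttribute),
    value: "ɪ.ˈmoʊ.dʒi",  // guessed IPA; adjust to taste
    range: range
)

let utterance = AVSpeechUtterance(attributedString: text)
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")

let synthesizer = AVSpeechSynthesizer()  // keep a strong reference in real code
synthesizer.speak(utterance)
```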
Sometimes it’s very important to reset settings
Because of the nature of beta software, it can be very important to reset all settings. Sometimes you even have to go as far as setting everything back to factory defaults. I’m sorry about the frustration that might cause, but it is how things sometimes work: something might have gotten screwy in the update, or a file might have become corrupted. This is not the proper forum for comparing bugs; it’s simply a website for discussing basic accessibility, and the users here have varying levels of skill and awareness regarding VoiceOver and other accessibility features. Generally speaking, people are supposed to go to a developer website and discuss these issues on the forums there.
I would recommend following basic troubleshooting steps, such as resetting all settings or even resetting the phone to factory settings. Beta software is not meant for anything other than testing, and testing involves very tedious work. Filing a bug report without first working through the necessary troubleshooting steps can actually hurt efforts to improve accessibility.
Has anyone reported how the software keyboard is acting?
I updated to the beta, and when I started a text, I heard every letter spoken, which became annoying fast and caused me to make more mistakes than I usually do. Anyone else have this? Messages is the only place I've tried the software keyboard so far.
Software keyboard issue
Yes, I see the same behavior in my test environment.
same here, and it has been reported
Same here, and it has been reported. Hoping it gets fixed in future updates. FB12356834
The Eloquence features…
The Eloquence features appear to be good, but does Flo still speak faster than the other voices?
speech rate has been fixed
The speech rate has been fixed; all of the voices can now be adjusted in smaller increments. In iOS 16, the speech rate would stay stuck at 50%, which to me is slow, and the next step, 60%, would be too fast. The only workaround I found was to set the rate from the VoiceOver rotor; setting it in Settings > Accessibility > VoiceOver would cause the rate to remain slow, with no way to adjust it further. Now it has been fixed. I have my speech rate set at 55%, which is the default rate for me. On the Windows side, JAWS speaks at the same rate, except at 15%; 25% is the fastest, and for VoiceOver on iOS 17 it can be set at 60%. I always leave the speech rate in the rotor for quick access. Good job, Apple, for fixing the speech rate issue so quickly.
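Side note for developers: VoiceOver's percentage slider isn't something apps can read or set, but for comparison, when an app synthesizes speech itself, the rate runs on a 0.0 to 1.0 scale. A minimal sketch using the constants AVFoundation provides:

```swift
import AVFoundation

// In-app speech rate is a 0.0-1.0 value, independent of VoiceOver's
// percentage slider. The platform default sits at the midpoint.
let utterance = AVSpeechUtterance(string: "Testing the speech rate.")
utterance.rate = AVSpeechUtteranceDefaultSpeechRate
// utterance.rate = AVSpeechUtteranceMinimumSpeechRate  // slowest
// utterance.rate = AVSpeechUtteranceMaximumSpeechRate  // fastest

let synthesizer = AVSpeechSynthesizer()  // keep a strong reference in real code
synthesizer.speak(utterance)
```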
Personalized Spatial Audio improvements I think
Hello everybody. For those who use Personalized Spatial Audio, there are some greatly appreciated improvements to the sound.
For some background, Personalized Spatial Audio is a feature where your iPhone scans what your head looks like and uses that to tailor the audio to you using HRTFs (head-related transfer functions). Normal spatial audio, and any other virtual surround app, uses a generalized HRTF.
The problem with that approach is that it may work extremely well for some people while not working at all for others. That is what Personalized Spatial Audio tries to fix by creating an HRTF on your iPhone.
One slight improvement from previous versions of iOS was the virtual room.
For those who don’t know, whenever you use spatial audio, or any other virtual surround sound like Boom3D, on the iPhone or any other Apple device, there is a virtual room that makes the sound seem to come from speakers within the room rather than from your headphones.
Throughout the various versions of iOS 15 and 16 (without Personalized Spatial Audio), the room has changed ever so slightly. In iOS 15, the room sounded like a small living room/home theater type of room. In later versions of iOS 15, and in iOS 16 without Personalized Spatial Audio turned on, the room sounded smaller, with acoustic sound-dampening padding, and the reverb (dry/wet) mix was adjusted for less of the dry mix (the unaltered sound) and more of the wet mix (the virtual room sound). This meant the reverb was louder than the audio actually being spatialized. When Personalized Spatial Audio was turned on, the dry/wet mix returned to more reasonable levels, so you heard less of the reverb and more of the audio being spatialized.
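If you want to hear what a dry/wet balance change does in isolation, here's a minimal AVAudioEngine sketch. This is just the generic reverb unit, not Apple's actual spatializer, and the preset and mix value are arbitrary:

```swift
import AVFoundation

// Generic reverb demo: wetDryMix blends the reverberated ("wet") signal
// with the untouched ("dry") signal, on a 0-100 scale.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let reverb = AVAudioUnitReverb()

reverb.loadFactoryPreset(.mediumRoom)  // the "virtual room"
reverb.wetDryMix = 30                  // mostly dry; raise for more room sound

engine.attach(player)
engine.attach(reverb)
engine.connect(player, to: reverb, format: nil)
engine.connect(reverb, to: engine.mainMixerNode, format: nil)

do {
    try engine.start()
    // Schedule a file on `player` and call player.play() to hear the effect.
} catch {
    print("Engine failed to start: \(error)")
}
```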
In iOS 17 beta 1, I noticed that the room sounded like a hybrid between iOS 15.0 and iOS 16 with Personalized Spatial Audio turned on. In iOS 17 beta 2, the room sounds even more like iOS 15.0 and its early betas, while the virtual room without Personalized Spatial Audio remains the same as iOS 16, with a lot more reverb.
This is one of two changes I have noticed in this feature.
The other nice change has to do with the frequency response (the range of frequencies from 20 Hz to 20,000 Hz) of Personalized Spatial Audio. In previous versions of iOS 16, enabling the feature introduced a boost that made high frequencies, and especially S sounds, very sharp.
For the more technically inclined, it sounded like a boost from 8 kHz through 12 kHz with a more pronounced spike at 8,000 Hz; all the frequencies in that range were being boosted in one way or another.
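To make that concrete, the kind of boost being described is roughly what a parametric EQ band does. Here's a sketch with made-up numbers (a +6 dB band centered at 8 kHz), not a measurement of Apple's actual processing:

```swift
import AVFoundation

// One parametric band: boost frequencies around 8 kHz by 6 dB.
// Gain and bandwidth here are illustrative guesses only.
let eq = AVAudioUnitEQ(numberOfBands: 1)
let band = eq.bands[0]
band.filterType = .parametric
band.frequency = 8_000   // Hz, center of the boost
band.bandwidth = 1.0     // octaves, so the boost spreads toward 12 kHz
band.gain = 6            // dB
band.bypass = false
// Attach and connect `eq` in an AVAudioEngine graph, as with the reverb
// sketch above, to hear the kind of sharpened S sounds being described.
```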
I deleted my Personalized Spatial Audio profile to see if there was a difference from the regular spatial audio profile; I don’t think there was.
After I set up Personalized Spatial Audio again, I was pleasantly surprised to hear that the 8,000 to 12,000 Hz boost was mostly eliminated. It is still there, ever so slightly, but it's much more pleasant to listen to, as if someone reduced the gain of the boost; if you don’t pay attention to it, you don’t even notice it.
This is the best I’ve heard spatial audio of any kind on iOS for music and spatialized stereo for apps like YouTube ever since iOS 15 beta when I first tried the feature after its release.
The earliest versions of iOS 15 by far had the best implementation of spatial audio and spatialized stereo for 2 channel content.
When iOS 16 introduced Personalized Spatial Audio, I loved the feature because of the way it improved the audio quality. Although it had that boost between 8 and 12 kHz, I was honestly fine with it, because on older AirPods it wasn’t as prominent as on the AirPods Pro 2. In recent versions of iOS 16, though, the boost seemed to get a lot worse.
Glad to see Apple is listening to user feedback and making features I love even better!
I really do think iOS 17 is gonna be the best update we’ve seen in about two or three years.
I feel like iOS 15 was the best up until iOS 17.
I know we’re only on iOS 17 beta two, macOS 14 beta two, watchOS 10 beta two, etc., but this already feels like it could be iOS 17.0, or an iOS 17.1 beta, especially on Mac.
Although there are a few bugs (braille screen input, the software keyboard), I still feel that if those were fixed, these would be great releases.
Sorry for the 2 page article, but I wanted to write my thoughts about this feature.
I will be writing an article on how to use Personalized Spatial Audio and spatialized stereo in the upcoming days, so keep an eye out for that.
Hope you enjoyed and feel free to ask questions in the comments and I will respond as quickly as I can.
Pronunciations
Have VoiceOver pronunciations been fixed yet? They haven't worked at all for me since iOS 16 came out.
Pronunciation
Has anyone else noticed that in the latest betas of both iOS 17 and macOS Sonoma, Alex is pronouncing the word "Politics" strangely? There may be other mispronunciations of common words, but that's one that I've recently discovered.
The way Samantha pronounces seconds
Sometimes Samantha will say "seconds" strangely.
For example, in "five minutes and two seconds ago," it might say it strangely.
Will Personalized Spatial Audio work with other apps as well?
Will Personalized Spatial Audio work with other apps as well? For example, FaceTime or Zoom?
It depends on the app.
FaceTime has spatial audio built in, so it will use Personalized Spatial Audio whenever that is turned on. On the iPhone, Zoom and other third-party VoIP apps cannot use Personalized Spatial Audio, because they are third-party conference calling apps.
Whenever AirPods are connected, you can use personalized spatial audio with pretty much all media playing apps such as TikTok, YouTube, Instagram, Music, Files, etc.
To enable this, make sure you have AirPods (3rd generation) or later, AirPods Pro (1st generation) or later, or AirPods Max connected to your iPhone.
When media is playing in your preferred app, double tap and hold on the volume slider in Control Center. You will see your noise control modes and, below them, either Spatial Audio or Spatialized Stereo. If you’re playing from an app like YouTube, double tap on Spatialized Stereo to reveal a menu where you can choose between Off, Fixed, and Head Tracked.
Head Tracked makes the audio seem to move whenever you move your head while AirPods are connected; the feature tries to make the spatial audio follow your iPhone rather than your head. For example, if you set your iPhone on a table and turn 90 degrees to the right, the sound will seem to come from 90 degrees to your left. It’s a neat gimmick, but not that useful; to avoid it, select the Fixed option. If you turn off Follow iPhone in accessibility settings, double tapping Spatialized Stereo will simply toggle it rather than bring up a menu.
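For developers, the fixed-versus-head-tracked idea maps onto listener orientation in AVAudioEngine's environment node. The sketch below shows the general concept only; it is not the system-level Spatial Audio pipeline, which apps don't control directly:

```swift
import AVFoundation

// Place a mono source one meter in front of the listener, then "turn the
// listener's head" 90 degrees; the source now appears off to one side.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let environment = AVAudioEnvironmentNode()

engine.attach(player)
engine.attach(environment)

// 3D positioning requires a mono connection into the environment node.
let mono = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)
engine.connect(player, to: environment, format: mono)
engine.connect(environment, to: engine.mainMixerNode, format: nil)

player.renderingAlgorithm = .HRTFHQ                  // binaural rendering
player.position = AVAudio3DPoint(x: 0, y: 0, z: -1)  // 1 m in front

// Head tracking amounts to updating the listener orientation as the
// head turns; here we hard-code a 90-degree turn to the right.
environment.listenerAngularOrientation =
    AVAudio3DAngularOrientation(yaw: 90, pitch: 0, roll: 0)
```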
Be on the lookout for an article showing how to set up Personalized Spatial Audio and how to use it with spatialized stereo in third-party media apps and Apple Music.
If anyone has questions about it, please feel free to reach out.
I don't have AirPods anymore, but I do have Beats Flex
I don't have AirPods anymore, but I do have Beats Flex. I'm guessing it doesn't work with Personalized Spatial Audio, does it? Also, you said that apps such as FaceTime use Personalized Spatial Audio, so can it be used with the built-in speakers on the iPhone?
Built-in speakers?
Can you ever make use of just a pair of iPhone speakers to get spatial audio?
Yes, but it depends on which iPhone you have.
You can find a list of supported devices here: https://developer.dolby.com/platforms/apple/ios/device-support/
In order to play spatial audio content through your iPhone speakers, you will need one of the devices listed on the website above.
Okay, but how?
To be honest, I don't know much about these devices and how they work, but I don't quite get how we can hear 3D/spatial audio through a pair of little stereo speakers. It's an iPhone we're talking about, and as far as I know the speakers are only intended for stereo content. If you place your iPhone on a desk in front of you, you can tell what's on the left and what's on the right, but since the iPhone and its speakers are in front of you, everything you hear also comes from the front left or front right. Am I wrong?
It’s not exactly 3-D like with AirPods or headphones.
Apple's speaker virtualization is only applied when watching multichannel streams in surround sound formats, such as Dolby Atmos on Apple Music or movies on Apple TV.
On speakers, it’s not so realistic that you can tell a helicopter is above you or that someone fired a gun far behind you. It mainly gives stereo signals an expanded feeling, as if they are slightly wider than your iPhone speakers. There may be a bit of front-to-back processing, but it’s hard to hear.
If you want to know what I’m talking about, sign up for a free trial of Apple Music and listen to its spatial audio playlists on speakers versus your AirPods or any headphones. Hope this helps. To have this feature, you must have an iPhone 11 or later.
The definitive guide to Apple spatial audio
Hi.
As promised, I wrote a definitive guide to Apple's spatial audio, detailing everything about it.
You can go to it here: https://applevis.com/forum/ios-ipados/definitive-guide-apple-spatial-audio-including-personalized-spatial-audio
It's really long, three pages, so I hope you enjoy it.
Kinda got it
Well, I guess I'm not interested in messing around with Apple Music, but I think I kinda got what you meant. I don't have AirPods anyway.
Eloquence with a lower tone
It is noticeable that, after this update, Eloquence's Reed voice uses a lower speech tone. I proved this by comparing it directly with the existing version in iOS 16.5.
Eloquence higher sample rate distorting
Am I the only one noticing this? When you use the Eloquence higher sample rate with some voices, Reed for example, the voice clips. I mean clips as in it's so loud that it sounds a little distorted. With other variants this doesn't happen, but with Reed it is particularly noticeable for some reason. Has anyone else noticed this?
What exactly are you referring to as high sampling rate?
Do the Eloquence variants differ in their sampling rate, or do we have separate options for each with different sampling rates, like 16 kHz or 8 kHz? Are there multiple options like "Reed (16 kHz)", "Shelley (8 kHz)", etc.?
In response to the comment on speech rates
No, I didn't mean that. Go to the Speech section in VoiceOver settings, find Eloquence there, and then pick the rotor action "Speak Sample" on each of the Eloquence variants to listen to a short sample. You'll hear that Flo speaks faster than the other variants, at least on iOS versions up to 16.5.1. Not only that: select Flo as the VoiceOver voice in use and it will speak faster than any other built-in voice without you changing the speech rate. Switch back and forth between Flo and some other variant and you'll get my point. So my question is whether this has been fixed and Flo now speaks at the same rate as the other variants.
It has been fixed
It has been fixed; Flo speaks at the same rate as all of the other voices.
Thanks, got it. So does…
Thanks, got it. So does anyone have an answer to my question on sample rates?
The deal with sample rates for eloquence
There is no specific variant of Eloquence with a fixed sample rate. iOS 17 brings a VoiceOver option called Per-Voice Settings, where you can tweak different parameters of the speech synthesizer. For example, with Eloquence you can adjust the breathiness, head size, roughness, pitch base, and inflection. At the bottom of that screen is a switch titled Higher Sample Rate. Turning it on gives you the new, improved higher sample rate, which I think doesn’t sound as good as the lower sample rate version.
The lower setting uses a sample rate of about 11 kHz, while the higher setting uses about 16 kHz. I know for a fact the lower rate is 11 kHz, but don’t quote me on the higher one.
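If you're wondering why the higher rate should sound crisper, recall the Nyquist limit: a digital signal can only contain frequencies up to half its sample rate. Taking the figures above at face value (the 16 kHz number is unconfirmed):

```swift
// Nyquist limit: highest representable frequency = sample rate / 2.
// The rates below are the figures quoted above, not confirmed by Apple.
let lowRate = 11_000.0    // Hz, classic Eloquence
let highRate = 16_000.0   // Hz, the new Higher Sample Rate option
print("Low-rate ceiling: \(lowRate / 2) Hz")    // 5500 Hz
print("High-rate ceiling: \(highRate / 2) Hz")  // 8000 Hz
```

That extra headroom above 5.5 kHz is where a lot of sibilance lives, which is why the two settings differ most on S sounds.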
Wow, cool! So do we also…
Wow, cool! So do we also have per-voice pitch parameters?