Hello Brad, Oliver, Patrick (and Stoo because it was his article),
I am mirroring this message to all of you to answer the questions at once, so as not to tie up the comments section in Stoo’s article.
Brad’s TestFlight Question
I never thought about putting the app on TestFlight because I wanted to do controlled/formal user testing. The last thing I wanted was for someone to get all Daredevil and walk out into traffic (lawsuits and personal guilt can be pesky things). However, strange times call for strange methods, so maybe using TestFlight should be considered now, as long as I can ensure people will test it responsibly (and absolve me of responsibility for dumbass use on their part).
If I did put it on TestFlight, the testers would still need to procure stereo earphones and some sort of head mount for their phones. A head mount can be something like Google Cardboard with straps and a cutout for the camera view; that is what I used for about a year while developing version 2. The system is meant to be worn like a VISOR (like Geordi in Star Trek: TNG). I have entertained the idea of an electric cane method, but that is a future revision, if it happens at all.
Oliver’s “How Does It Work” Question
The experience is like an 'instant spatial vOICe'. One hears a blob of sound representing the approximate volume, color, and location of an object. It is an instantaneous representation; you do not have to wait for the left-to-right sweep found in vOICe or EyeMusic. Only objects recognized by ARKit’s scene recognition will get a unique sound.
Stairs, curbs, and ledges are not detected. (I have an idea for how to detect and represent those, and that was supposed to be the next stage of work. Unfortunately, a man has got to eat, so everything is on hold for now.)
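For anyone curious about the plumbing, the general shape of the pipeline is something like the sketch below. This is a simplified illustration, not the app’s actual code: the class and method names are made up, and it assumes ARKit mesh anchors (scene reconstruction on LiDAR devices) feeding an AVAudioEnvironmentNode for the spatial playback.

```swift
import ARKit
import AVFoundation

// Simplified sketch of the "instant spatial" idea. All names here are
// illustrative; this is not the app's actual implementation.
final class SpatialSonifier: NSObject, ARSessionDelegate {
    private let engine = AVAudioEngine()
    private let environment = AVAudioEnvironmentNode()
    private let player = AVAudioPlayerNode()

    override init() {
        super.init()
        engine.attach(environment)
        engine.attach(player)
        // Spatialized sources must be mono; the environment node pans
        // them around the listener based on their 3D position.
        let mono = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)
        engine.connect(player, to: environment, format: mono)
        engine.connect(environment, to: engine.mainMixerNode, format: nil)
        try? engine.start()
    }

    // With scene reconstruction enabled (LiDAR devices), ARKit delivers
    // mesh anchors whose geometry carries coarse per-face classifications
    // (wall, floor, door, seat, and so on).
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let mesh as ARMeshAnchor in anchors {
            // Place the sound at the anchor's position so the listener
            // hears it coming from the object's direction. A real app
            // would also pick a timbre per classification and scale
            // loudness by the object's approximate size.
            let t = mesh.transform.columns.3
            player.position = AVAudio3DPoint(x: t.x, y: t.y, z: t.z)
        }
    }
}
```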
Patrick’s Criteria Question
There is basic text detection, but it is turned off by default.
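(For anyone who wants the gist: on iOS, basic text detection is usually built on Vision’s VNRecognizeTextRequest. The sketch below shows one way an off-by-default text detector could look; it is illustrative only, not the app’s actual code.)

```swift
import Vision
import CoreGraphics

// Illustrative sketch only: basic text detection gated behind an
// off-by-default setting. The type and property names are hypothetical.
struct TextDetector {
    /// Mirrors the default: text detection starts disabled.
    var isEnabled = false

    func detectText(in image: CGImage, completion: @escaping ([String]) -> Void) {
        guard isEnabled else { completion([]); return }

        let request = VNRecognizeTextRequest { request, _ in
            let observations = request.results as? [VNRecognizedTextObservation] ?? []
            // Keep the single best candidate per detected text region.
            let strings = observations.compactMap { $0.topCandidates(1).first?.string }
            completion(strings)
        }
        request.recognitionLevel = .fast  // favor speed over accuracy

        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try? handler.perform([request])
    }
}
```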
Also, there is better control over the audio in the app (and it could still be improved some more). I had to overdrive the audio for the video to make it apparent what was happening. The irony I discovered from past demonstrations is that I have to amp up the audio so that normally sighted people can understand what is happening.
P.S. Unrelated, but how do you get apps listed in the Apps Section of AppleVis? I did all the accessibility work for AudioKit’s SynthOne, but I have never seen the app talked about on AppleVis.
Stanley ‘Staque’ Rosenbaum
Comments
Thanks for answering my question.
Ah, I'm not a fan of putting my iPhone on my face, so I will give that a miss.
Understandable. That is part of the reason I am very curious about what Apple Glass is going to be like.
Is it going to be an actual thing?
Or is it just guesswork for now?
It is going to be an actual thing. But that is also why building the system on an iPhone now is helpful.
Using head mounts might suck, but it gives you a jump start on development.
A chest mount would probably work, but someone would need to design it; that is outside my knowledge.
Preferably, the mount should hold the iPhone in landscape orientation; the cameras have a wider field of view that way.
The only concern with a chest mount is that the user's arms and hands could be detected when held up. Not an impossible issue to overcome, but it would have to be dealt with.
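One possible approach, purely hypothetical and not something the app does today, would be to find hands with Vision’s VNDetectHumanHandPoseRequest and ignore anything detected near them:

```swift
import Vision
import CoreGraphics

// Hypothetical sketch: locate the user's hands in a camera frame so
// nearby detections can be ignored. Not part of the app today.
func handCenters(in image: CGImage) -> [CGPoint] {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 2
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])

    let observations = request.results ?? []
    // Use the wrist landmark as a rough center for each hand, in
    // normalized image coordinates (0...1).
    return observations.compactMap { observation in
        try? observation.recognizedPoint(.wrist).location
    }
}
```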
Staque