Call to action! What do you want from the Meta Ray-Bans to make them the ideal device?

By Andy Lane, 1 May, 2024


Hi Everyone.

My question is simple: if you could design features and improvements for the Meta Ray-Ban smart glasses, what would you design?

How would you customise what Meta have already delivered to be your perfect device?

Given the hardware that currently exists, what would you do in software?

How do you imagine the hardware improving in future versions to offer the best experience possible for blind and low vision users?

If you were writing your answers directly to Meta, what would you ask them for?

Please be as thoughtful as possible and consider what you already know about how the device functions as an access aid and how improvements could be made.

It’s very early days for this technology, and having our requirements and wishes documented is going to be an important part of shaping where it goes.

Please don’t assume that Meta won’t see or care about suggestions made here.

Note to moderators: I put this post in the Apple section because the glasses relate to and work with Apple devices. I’d rather it stayed here for greater visibility and more contributions, but I understand if it needs moving.


Comments

By Brooke on Friday, May 3, 2024 - 10:09

Mine are ordered but not here yet, so I'm only going on what I've read. I'd love to be able to make video calls via FaceTime, similar to how I've heard they work with WhatsApp and Messenger calls. I'll come back here once I've gotten mine and explored them.

By CrazyEyez on Friday, May 3, 2024 - 10:09

If I could change anything, it would be the summarization of written text.
I cannot get mine to read an entire page, no matter how I phrase the request.
I'd like it to be able to describe people, just a little.
I also wish it had a continuous scene description like the Seleste glasses supposedly have.
As for hardware, maybe I'd change the placement of the camera to the center of the device, or put a second camera on the right side where the LED is.
I feel like my items aren't fully in the picture sometimes, and that leads to inaccurate results.

Despite all of that, the glasses are pretty good.
It's early days. Like others have said in other posts, it's only going to get better from here.

By Brian on Friday, May 3, 2024 - 10:09

Disclaimer: I do not have these, but I am writing here based on what I know about the device.

The following is a haphazard list of things I believe would help any smart glasses device, be it the Meta Ray-Bans or whatever's clever. These are in no particular order. 🙂

1. Multiple cameras working in concert, because no matter what, you can never truly line up something for a single camera if you cannot see. Period.
2. Incorporation of the Point and Speak feature found on later iPhone models. Obviously it could not necessarily be called that, but you get the idea.
3. As Lottie mentioned above, the ability to simply face the direction of printed material and have the device just read it. That is a glorified oversimplification of what is actually needed, but the end result would be the ability to 'look' at a street sign, a bus stop, a parking meter, a newspaper, or whatever, and have the device read the print.
4. World exploring. I believe Seeing AI has this. Ideally it is meant to describe random objects in your camera's line of sight, whether it's a chair, a cat, a shoe, or the personification of that voice in your head. Point is, if the camera can see it, it should be able to describe it.
5. People identification by face. Seeing AI also has this, if I recall: the ability to take a portrait picture of someone and attach a name to it, so that the next time the camera's line of sight points in their direction, the device says that person's name (see the sketch after this list).
6. More fun with Seeing AI. Honestly, I think a lot of the features in Seeing AI could be incorporated into smart glasses. Maybe not all of them, or rather, not as hit or miss as some of those features currently are.
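To make number 5 concrete, here is a rough Python sketch of the matching step using the open-source face_recognition library. The file names and the idea of a stored reference photo are my own assumptions; nobody outside Meta knows what their pipeline would actually look like.

import face_recognition

# Enroll: load one reference photo of a person and store the face encoding.
known_image = face_recognition.load_image_file("alex.jpg")  # hypothetical file
known_encoding = face_recognition.face_encodings(known_image)[0]

# Recognize: scan a frame from the glasses' camera for the stored face.
frame = face_recognition.load_image_file("camera_frame.jpg")  # hypothetical file
for encoding in face_recognition.face_encodings(frame):
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        print("Alex is in view")

The enrollment step is the important design choice: the user deliberately opts each person in, which would also address some of the privacy worries raised elsewhere in this thread.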

OK, I am done rambling. Let the trolling begin... 😝

By Gokul on Friday, May 3, 2024 - 10:09

The reading feature already mentioned should eventually cover a larger number of languages, as many vernaculars as possible, making the tech accessible to the less privileged among the blind. The people-detection feature could also include friends/followers on the connected Meta account; i.e., if I'm connected through my Facebook account, it should recognize and announce the names of any of my Facebook friends who appear in the line of sight. This particular feature could be limited to an accessibility-only option if there are privacy concerns surrounding it.

By mr grieves on Friday, May 3, 2024 - 10:09

I've really struggled to get the glasses to read any text at all. I tried a magazine and it only described the imagery. I tried it on signposts and I couldn't get it to tell me what text was on the sign or what direction they were pointing.

I think the image descriptions are fine, but sometimes it takes a bit of work to get the right answer. For example, I was walking through a forest and asked it to describe the scene, and it didn't even mention the trees until I asked if there were any. I can ask it to give me a lot of detail and it's a bit better, but I think it's maybe a bit too brief in any case. It does seem to help if you already know what you are looking at.

It might also be handy to be able to query the last photo you took on the camera. At the moment it only queries the one it takes for the purpose of the query, but if I wanted to save a photo and also query it, that would be good. I can see this being useful for the sighted too: you go on a walk, take a photo of a map or something, and then want to double-check something later without having to download the photo onto your phone and bring it up. Or imagine if it could understand the map and you could ask it "what direction is the car park?"

What would be cool is if it could attach an AI description of the image to the photo as it saves it, a bit like you can do with Pixiebot now, but automatically as you take the photos. Then when you browse in the Photos app, it just tells you what each one is.
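To illustrate the idea, here is a minimal Python sketch (using the Pillow library) that stamps a caption into a photo's standard EXIF ImageDescription field. The caption itself would have to come from whatever vision model the glasses use, so the example string below is made up.

from PIL import Image

EXIF_IMAGE_DESCRIPTION = 0x010E  # standard EXIF "ImageDescription" tag

def embed_caption(photo_path: str, caption: str) -> None:
    # Write the caption into the photo's metadata so the Photos app
    # (or a screen reader) could surface it later without re-analysing.
    image = Image.open(photo_path)
    exif = image.getexif()
    exif[EXIF_IMAGE_DESCRIPTION] = caption
    image.save(photo_path, exif=exif)

embed_caption("walk.jpg", "A gravel path through pine trees; wooden signpost ahead.")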

I'd like to be able to query the GPS information. For example, ask "Where am I?" or "What direction am I facing?"

In an ideal world I'd love some sort of mash-up with Voice Vista. For example, if I could ask it for directions to a building, it could direct me but then also help me locate the door, and it could maybe use the camera to alleviate the inaccuracy of GPS. Maybe I could set a marker where I am and later ask it to help get me back there.

Going on from this, indoor navigation: if I could ask it to help me find the reception desk, or a hotel room, or whatever.

I was also thinking it would be cool if you could ask it to remember stuff for you, either an audio-notes-type feature or it being able to create text notes from what you say.

Being able to call an Aira agent or a Be My Eyes volunteer would be great for a lot of people.

As has been mentioned, being able to describe people would be incredibly helpful.

Another nice feature would be if you could say "What can you hear?" and it could use the microphones to identify the noises, whether they're bird calls or whatever.
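As a rough sketch of what I mean, here is how the publicly available YAMNet audio-event classifier can label a chunk of microphone audio in Python. I have no idea what Meta's microphone pipeline exposes, so the 16 kHz mono input is an assumption.

import csv
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/yamnet/1")

def identify_sound(waveform: np.ndarray) -> str:
    # waveform: mono float32 samples at 16 kHz, scaled to [-1.0, 1.0].
    scores, _, _ = model(waveform)
    # YAMNet ships a CSV mapping class indices to names like "Bird" or "Speech".
    with tf.io.gfile.GFile(model.class_map_path().numpy().decode("utf-8")) as f:
        names = [row["display_name"] for row in csv.DictReader(f)]
    # Average the per-frame scores and report the most likely sound event.
    return names[int(np.mean(scores.numpy(), axis=0).argmax())]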

I think a lot of the above doesn't really have anything to do with accessibility specifically.

By mr grieves on Friday, May 3, 2024 - 10:09

I think it was me that shared the link earlier, lol. I need to try again with some text. I think most things are possible with the glasses, so maybe my suggestion is to make it a bit more intuitive. I'm probably repeating myself, but I think it doesn't always understand what the important details are. How can you not immediately notice the trees in a forest? I was literally right in the middle of it; if somehow I hadn't known I was there, I wouldn't have known to ask the follow-up question. Similarly, if I am looking at a signpost I shouldn't have to keep giving it clues. I wouldn't have known the sign was there if not for my wife (who I think was even more excited about the glasses than me). Maybe with more persistence I could have got it to tell me what was there, but if I hadn't known there was a sign at all, I might have just gone past it. As it happened, I just ended up feeling a bit embarrassed and walking on.

I think we have different requirements from someone who can see. Then again, if I could see, I probably wouldn't say "describe this scene"; I would be asking "what kind of tree is this?" or something more specific. But I think for us, text, signposts, shop names, and things like that are going to be really useful.

Anyway, I need to grab some sample text to practise with and have a bit of a session. I just think AI promises us the ability to talk to it without having to think too hard about it, whereas right now the glasses are maybe requiring me to think quite hard about how to phrase a question to get the right answer. The fact that I am planning a practice session means it's not quite fulfilling that promise. (I don't mean that as a big criticism; I'm so excited by what it can do already.)

I wonder if my problem with text is because I have hooked up Messages. It did seem to constantly think I was asking it to read my messages, and it kept telling me that was something it couldn't do.

By mr grieves on Friday, May 3, 2024 - 10:09

I'll put most of my comments elsewhere, but I tried a few other things with the glasses today and it was reading things a lot better, so maybe it was just the particular material I was using before.

I think it would be good if it could OCR text verbatim as well as using AI. For example, I asked it to read the ingredients of a packet of Thai green curry powder, and it basically just told me what tends to be in a Thai green curry powder, not what was in the specific one I was holding.
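To show the idea, here is a minimal Python sketch that tries verbatim OCR first (via the pytesseract library) and only falls back to a generative description when no printed text is found. The describe_with_ai function is a made-up stand-in for whatever vision model the glasses actually call.

from PIL import Image
import pytesseract

def describe_with_ai(image: Image.Image) -> str:
    # Hypothetical stand-in for the glasses' generative vision model.
    raise NotImplementedError

def read_packet(photo_path: str) -> str:
    image = Image.open(photo_path)
    # OCR pass: returns the exact printed text, e.g. the real ingredients list.
    text = pytesseract.image_to_string(image).strip()
    if text:
        return text
    # No legible print found, so fall back to a generative description,
    # which may generalise rather than quote the label verbatim.
    return describe_with_ai(image)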

Anyway I'll leave it there for now and come back when I am better informed.

By Holger Fiallo on Friday, May 3, 2024 - 10:09

What about a bill, like a phone or electric bill? Or even a card that someone might give you for a holiday? I heard that Meta wants others to use their OS; if so, they will need to let third-party apps onto the glasses.

By Ollie on Friday, May 3, 2024 - 10:09

Meta will only let third parties in if Meta has access to the data. They are not trying to sell a product here; they are trying to collect data through us sharing content. No doubt the Meta glasses are sold at cost, because the data they collect will be monetised in the future.

Sorry to sound really cynical, but that's what many of these companies are. They are rarely what they seem to be. Netflix is a data-analysis service that uses watch data to decide what to make. Apple, once a maker of technology, is designed to make money. Ocado, here in the UK, the company behind Waitrose, the grocery store, is a robotics company with a highly advanced picking system. The customer-facing aspect of a company is very rarely what the company is actually doing. I would appreciate more transparency on this, and I do think we get more of it these days. But Meta wants people wearing their glasses not for the sale of each unit but for the heaps of data that will be collected with an AI on your face: what you are asking, where you are looking. They will then leverage that data to extract money, whether through advertising to us or through subscription-based services.

Apple, on the other hand, if they do bring out glasses, are likely to open up the platform with certain rules in place. But I can't imagine that happening for at least another two years, and who knows what state they'll be in by then, after the walls of their garden have been pulled down.

By Holger Fiallo on Friday, May 3, 2024 - 10:09

If it is true that they want others to use their OS, they will need to do so. It cannot be a closed OS without getting into trouble in time, as Apple is in Europe. We will see. The glasses sound nice, but privacy is important to me. We will see what Apple does with this. Apple glasses sound nice.

By Louise on Friday, May 3, 2024 - 10:09

For the Meta glasses to be perfect for me, there would be a few changes.

1. I should be able to easily request that it read the entire content of what I put in front of it.
2. I could assign a two- or three-finger gesture to text recognition, so I could discreetly trigger it without looking like a dork in public or in a meeting.
3. I could query the AI by typing a request, for the same reason as above.

Seriously, if it did those things, it would be perfect for me, since it does the things it does so well.

By Gokul on Friday, May 3, 2024 - 10:09

The thing is that, unlike Be My AI or Seeing AI, the Meta glasses are geared towards the general sighted user. That's probably why, when in a forest, it doesn't feel like giving a detailed description of the forest, or recognizing a signpost while we pass by it. It's the same reason it isn't reading the full content of a page when it's shown one. Basically, someone forgot to think about the potential these glasses could have in the blindness industry while designing the software. Not a big problem, as I see it; Meta can easily remedy this if they're made aware of the issue. What we would want is an accessibility option which, when turned on, gives access to all the discussed features, including detailed descriptions. That way it doesn't inconvenience a general user with unnecessarily long-winded descriptions.
Oh, and also, like Louise said, an option to type in text requests. One cannot call out "Hey Meta!" all the time, especially in a professional setting.

By OldBear on Friday, May 3, 2024 - 10:09

Louise and others have already expressed my wishes, though. I'm not so tech-ish...
Meta's collection of data doesn't bother me at all, because I've always thought of it as how one pays for the services, along with attention/time and content production, bla, bla, bla. I tell myself every time I encounter an advertisement that it is a lie, as counter-preconditioning, bla, bla, bla.
I would like to know more, however, about what Meta is doing to monitor what their employees are poking around in, images or otherwise. That is my main concern.

By CrazyEyez on Friday, May 3, 2024 - 10:09

You can tap and hold on the touchpad to wake the glasses.
From the help in the Meta View app:
Wake: tap and hold, or three-finger tap and hold (if Spotify Tap is on).

You still have to speak to it, but you don't have to walk around yelling "Hey Meta!"
Hope that helps a little.