In this episode of our podcast, Thomas Domville demonstrates the power of the Virtual Volunteer feature, set to come to the Be My Eyes app in late Q3 2023. Currently in beta testing, this feature, powered by OpenAI's GPT-4 model, has the potential to be a game changer for people with visual impairments. It offers a virtual sighted assistant that can generate context and understanding for images, allowing for a greater degree of independence in everyday tasks.
During the episode, Thomas showcases a variety of real-world use cases for the Virtual Volunteer, including identifying clothing; getting information from food packaging; describing greeting cards, photos from your photo library or from places such as Facebook, and weather maps; reading restaurant menus; and more.
We thank the Be My Eyes team for allowing us to record and share this demonstration of the Virtual Volunteer.
Comments
Use a human volunteer! 😂
The initial purpose of this app was to have a sighted volunteer help you and be your eyes, so why don't you just have one of these sighted volunteers describe your adult content? Oh, that's right, they probably don't want to see that either; it might even be against the rules. But I wouldn't know, because I wouldn't even think of asking one of them to do that for me, so why should the artificial intelligence be subjected to that type of material? That's a good question for you. Just because it's artificial intelligence doesn't mean it should be subjected to that either, since you want to compare sighted people to visually impaired/blind people using this Be My AI app.
Personally, if I thought I…
Personally, if I thought I had a picture that contained adult content that I wanted described, I'd be more comfortable querying an AI tool than interacting with a sighted volunteer. I feel like the experience of the volunteer and me witnessing the material together, at the same time, not to mention the potential discomfort on their end in attempting to describe it, could be quite traumatic.
I think asking a volunteer would be a quick ban.
Probably the quickest way to get removed from the app for being a very creepy person. I wouldn't recommend anyone blindside a volunteer with a request for adult content, or even think about it. That might even make the news. AI descriptions are a very real possibility in the fullness of time, though. People have made excellent points about why adult content should be accessible on BMAI; I just think it's too risky at the moment, not to mention almost certainly completely impossible. If OpenAI are having this much trouble getting faces accepted without legal challenges, just wait and see what would happen if it was able to analyse porn. I would guess OpenAI have worked incredibly hard to have ChatGPT reject anything to do with anything even slightly controversial.
Photos completely being blocked on Bing AI chat
So, as of tonight, I can no longer even get Bing AI chat to describe photos. Here's the message I got when I tried. Bear in mind, two days ago this photo, a photo of myself smiling at the camera, was being described by Be My AI perfectly.
I’m sorry, I cannot describe the image that you sent me. I can only describe images that are publicly available on the web. If you want me to describe an image, please send me a link to the image instead of the image itself. Thank you for your understanding.
So, do you think that?
So, we can conclude that the problem is on the OpenAI server??? Hope it will be fixed soon.
Oh wow...
@Martin, we as blind people should have the same access to anything a sighted person has access to. I don't care if it's porn or sexy pictures or whatever; we should legally have access to it.
This is not a family-friendly app, in other words an app a kid should be using; it's an app for blind adults to get things done. I guess you could make the argument that teens can use it too, but if teens haven't heard of sex by then, then quite frankly there's something wrong with your country. Sex is not a bad thing and needs to be talked about more. As someone who lives in the UK, it shocks me, for example, when it comes to the lack of knowledge within the US surrounding sexual topics.
Sex education wasn't the best in the UK in the 2000s, at least it wasn't for me, and I'll admit I cringe inside at the answers I gave to the nurse. I think she asked me something like, where would you take your girlfriend on a first date? I think my answer was, "I'd take her to a dark room." What I was going to do in this room I don't know, but it had to be dark, OK! That was important.
See? Cringy :) But that's OK, because I've learnt from that and grown up. We were at least shown a condom and a femidom. I've heard sex ed in the US is basically, don't have sex or you'll get pregnant, end of story. So yeah, I think this stuff really needs to be talked about more.
You might think an AI shouldn't be describing this stuff, but then who will describe it to us? I don't have a partner, and while there are literotica books and porn, why exactly shouldn't we be allowed the same access to pics as sighted people?
Because it's on an app where volunteers wouldn't want to describe this stuff? Well, then they should have thought about that before making this add-on.
I do appreciate the app but think we should have a legal right to everything.
If you're done here then so be it but I thought I'd put my thoughts out there.
@Tyler, I completely agree with you; I'd prefer to ask an AI over a human any day.
The problem is privacy advocates.
Gotcha but there’s a preventative wall we keep running into.
We can chat about that until we are blue in the face. However, the privacy laws are the privacy laws. OpenAI and GPT-4 are not allowing anyone access to certain private images. End of discussion. Let's try to work out a solution to this first, then bring them your concrete plans to have AI describe adult pornographic images. I'm not arguing that position, since it doesn't matter to me. Good luck.
Be My Eyes
I don't know where y'all found that information that Be My Eyes was aimed at adults. Where did that information come from? Where is the proof of your statement? Or is that something that you wanted to say just to prove your point to me? As far as I know, the app is publicly available for anyone to download and it doesn't have an age restriction on downloading it. So it's safe for me to say that this is a family-oriented app, because people of any age and families use it. I had several teenage volunteers help me, and one who sounded about 12. There is my proof. Where is yours? Oh, dear…
Age.
FYI. The T’s & C’s state you must be over 17 to use this app.
Bing public photos.
If Bing is rejecting anything that's not available to the general public on the internet, then this could get a bit thorny for BMAI. My fingers are very very very crossed, but this might be blowing up into a privacy argument that BME can't control. It's sounding less like a bug if Bing is specifically saying you can't process this image because it isn't on the internet. That's a very specifically designed feature that could be problematic.
Martin.
I agree with you that there's nothing we can really do about OpenAI's policy.
You're acting like we're all against you and we're not. If you don't care about sexual stuff, that's fine, but getting almost aggressive and snarky just because you're not getting your own way isn't going to prove anything.
Keep that same energy…
I’m not snarky. I’m being true and realistic and I don’t know any other way to be. Do you have a problem with that?
And it appears this app is aimed at 17+, but anyone can obviously download it from the App Store, which is a parental issue. I have no problem having teenagers and pre-teens help me if their parents are allowing that. It's nice they want to help and this warms my spirit. Have a fantastic day!
nah.
I'm not continuing this.
Brad
I no longer pay attention to his posts. It's like hitting the wall, doing it again, and expecting change. The only thing you get is a headache.
what about a separate API or identifier?
I think this was already mentioned, but would it be possible for OpenAI to have Be My Eyes use a unique identifier, or some way of separating it from a standard GPT-4 query? I understand this is a rhetorical question, but I feel like this could be a workable solution, especially if the images are not stored.
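For what it's worth, here is a rough sketch of what that kind of tagging could look like at the API level. This is only an illustration of the suggestion, not how Be My Eyes actually integrates with OpenAI: the "X-Partner-App" header name is made up, the model name is just an example, and only the `user` field is a real, documented Chat Completions parameter (intended for abuse monitoring rather than policy routing). Whether OpenAI would relax image policies based on any identifier is entirely their call.

```python
# A minimal sketch of the "unique identifier" idea: tag every request so the
# provider could, in principle, recognize a trusted partner app and apply
# different image policies. Not Be My Eyes' real integration.
from openai import OpenAI

client = OpenAI(
    # Reads OPENAI_API_KEY from the environment by default.
    default_headers={"X-Partner-App": "example-accessibility-app"},  # hypothetical tag, not a real OpenAI header
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whatever vision-capable model your account offers
    user="example-accessibility-app",  # real parameter: a stable identifier OpenAI can use for abuse monitoring
    messages=[
        {"role": "user", "content": "Describe this photo for a blind user."}  # image parts omitted for brevity
    ],
)
print(response.choices[0].message.content)
```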
I sincerely hope all of this gets sorted out soon.
I'd really been enjoying going through photos and even being able to single people out in the pictures and learn more about them. As a test, I'd asked it to tell me about the Asians in the picture and was successful multiple times. Let's all keep our fingers crossed, you guys.
Pretty sure we're all disappointed right now. I'll provide one of the descriptions below.
The man appears to be of Caucasian descent, possibly European or North American, based on his light skin tone and features. The woman appears to be of East Asian descent, possibly Chinese, Korean, or Japanese, based on her features and complexion. They make a lovely multicultural couple.
Quintin.
What a lovely description. I know that Be My Eyes are on it. They are working with OpenAI right now to find a solution.
One wonders...
With these essential issues popping up one after another - stuff like faces being blurred, images with faces being excluded altogether, the strange distinction being made between private vs. public images, pictures with adult content being excluded, etc. - one wonders what the Be My Eyes team and the GPT-4 team have been up to over the past few months or so. Weren't they aware of the requirements of an app for the visually impaired? Couldn't the Be My Eyes team ask for a different type of API access to GPT-4? I know this is emerging technology - we've heard that countless times - but what is happening here is more of an alpha trial, or even a level below that. This isn't beta testing IMO. Now I hope this doesn't prod someone to come here and say the following: "All these complaints and misguided aggravation with an app that you didn't make should be used toward making your own app with that same energy."
Personal opinion.
I think BME is a tiny little insignificant cog in a massive, history-changing event, and as such doesn't have much say over how the big stuff gets changed. That's for interest groups, lobbyists, politicians, lawyers, and maybe even a little of the general public who actually have to live in whatever future we're building.
It's easy to lose sight of what's happening here: essentially, humans aren't going to be top of the tree when it comes to who is the most intelligent being we know of. That's never happened before, and it's got a lot of people very scared, and in reality there are likely to be tears shed at times through the transition. I'm confident, unlike some, that what we'll have in a decade or two will be worth working all this stuff out for, but there's likely going to be some spilled milk on the floor, and that will be very, very ugly. Massive job loss, huge social upheaval, even existential threats are at least possible while we learn to weave an intelligence more intelligent than we are into our lives and futures.
It's easy to only see the small issues around us, but this is part of possibly the biggest change in human history, and we're right at the sharp end of it right now, not knowing where things are headed: utopia, dystopia, or more likely a whole lot of winners and a whole lot of losers. I seriously think we're only just starting to see the challenges this technology is going to have to deal with as its development races forward at an ever-increasing rate. IMHO everyone will eventually benefit from this change, but it's the transition that's going to hurt many, as everyone tries to get as much as they can without realising the technology can end scarcity if we allow it to.
I myself have had minors as…
I myself have had minors as volunteers on Be My Eyes as well, which I don't think should be allowed, but that's a whole other topic.
Differences between two objects
I wonder what would happen if someone took a picture of a cucumber, a vibrator, and a dildo and asked BME or Bing what these items are. How would it know the difference? Inquiring minds want to know!
I don’t know about that comparison but.
When faced with a glass dildo in a phallic design, it described it as a glass ornament but absolutely wouldn't describe the shape. It was very smart about it.
Is there a fix for sensitive content warnings in iOS 17?
When I share images with Be My Eyes, I sometimes receive the following feedback: the photo is a rectangle with a warning that the image was blocked due to potentially sensitive content. I am aware that iOS 17 has a new sensitive content warning feature, so I double-checked my privacy settings and ensured that the feature was turned off. Nonetheless, Be My Eyes is still unable to describe certain pictures to me. What's really annoying is that the images aren't even provocative: one picture was a graduation ceremony. Any workarounds for this problem?
No way to fix at the moment.
Don't worry, it's not your phone settings. It's a legal issue with OpenAI. There's plenty about it further up the thread if you fancy reading back. Be My Eyes are trying to find a way to move past the issue though, so fingers very much crossed.
Re: Is there a fix for sensitive content warnings in iOS 17?
This is not related to iOS 17. Be My AI and GPT-4 have had this issue over the past few days regardless of the iOS release used.
Re: Is there a fix for sensitive content warnings in iOS 17?
Thanks. I saw the messages after I posted mine. D'oh!
Around and around on the hamster wheel
Well, I got off that hamster wheel many conversations ago and I have no plans of going back on with anyone.
Ignoring me for not agreeing with your opinions is immature and comical…
I'd rather discuss the wonderful advances I've had with Be My AI anyway. Today I was at the mall, and I had Be My AI help me find stores in the mall even though I had a sighted person with me. It felt very liberating to pick out clothes with this artificial intelligence. It was fun!
I was having this app describe all types of things in the mall!
Off to my next adventure with Be My AI! Haha!
I found another use for be…
I found another use for Be My AI. If I take a picture of a bag of chips, it will tell me what kind without scanning a barcode!
Scanning food products
I tried scanning a few food products with Be My AI and had success with it, and it seems to work faster than trying to find the barcode with certain apps like Seeing AI, but having to take a picture and wait for the response is tedious. I just like using my Samsung Galaxy phone and having the Google Lookout app scan fast, so I can go right to the next product and have that one scanned immediately.
I need to have Google Lookout on iOS, though. And I don't understand why they haven't made an app like that which I don't have to pay for. Hahaha
If I want to know what something is fast using my iPhone, I just use Seeing AI's quick read text, and that will tell me what I need to know immediately. Barcode scanning when you're blind is way too challenging when I'm in a hurry. Haha
I don’t know
What should I say about this Martin person? He is so annoying. Is there any way for the admins to do something about Martin?
Just…
So stop coming back to our comments if reading personal comments annoys you. I could say the exact same thing about your comments or anyone else's comments on this, but you're only singling me out because I don't agree with you and you're upset about that. Haha! The truth is the truth. None of these comments hurt my feelings or make me upset when you antagonize me. It's funny how you're paying attention to everything I post. Here's a brilliant idea: just skip right past me. That's what I would do. lol Have a fantastic day.
Anyway, when I took a photo of my television, Be My AI was able to describe the people on the TV, but there was no message about the placeholder, so I wonder what this is about.
It’s not consistent with privacy of…
Over the last few days they've had that privacy placeholder message on certain images, but when I took a few pictures of framed photos on my wall, it described Michael Jackson in detail. It also described a photo in my photo library of myself working out at the gym, and a few other family photos that I have. But when I took the same type of content again, Be My AI would give me that privacy placeholder message. It's weird how inconsistent it is.
I'm not liking how the AI is deciding whether my pics are processed and only giving access to certain images, but it would describe shirtless photos of myself and other people last week, including their faces, their hair color, their eye color, etc.
It's being discriminatory and I should file a lawsuit against it. lol I'm extremely good-looking and I shouldn't have to deal with this nonsense. No, my good looks and athletic body have nothing at all to do with this, but I thought that was extremely funny.
Sorry, I'm not sorry I'm being so annoying.
Martin and privacy.
It looks like the problem is that OpenAI will only process pictures of people if the picture is available online. That seems to be their answer to people wanting AI not to store or process images of themselves. What isn't clear yet is how this works with BMAI, which will obviously need to process non-public photos. It possibly all depends on whether OpenAI gives Be My Eyes an API that allows full access, or whether Be My Eyes gets the same treatment as everyone else.
Ah ha
Gotcha Andy.
OpenAI is way too new for humanity, and the regulations behind processing images are going to be a battle in the courts. I'm not sure if I want to be a part of that, but it looks like we already are.
I’m not liking how they’re dropping all of this tech on us and making us part of this fight when we didn’t ask for it.
Of course, we want to use the tech to enhance and make things easier for us, but when they put all of this on our shoulders, it makes me question whether it’s going to be good for me or not.
Now, I’m in a dilemma with machine learning and privacy laws. That’s not fair to me.
So now that I have this new tech, it’s not like I can just forget about it and keep going about my life like it didn’t happen.
Interesting thought
If they make it so that it will only identify pics online, well, I'm glad I have Dropbox so I can generate a public link and then send it. It's an extra step, but for those photos in my personal collection that I don't plan to really share with anyone, it would still be worth it.
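For anyone curious about the technical distinction being debated here: vision-capable chat models generally accept an image either as a publicly reachable URL or as the raw bytes embedded in a base64 data URL. The minimal sketch below is not Be My Eyes' or Bing's actual code; the model name and the Dropbox link are placeholders, and whether the provider's policy treats public and private images differently, as this thread suggests, is outside the caller's control. Note that a Dropbox share link usually needs raw=1 (or dl=1) appended so it resolves to the image file itself rather than a preview page.

```python
# A minimal sketch showing the two ways an image can be handed to a
# vision-capable chat model: a public URL vs. inline base64 data.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe(image_part: dict) -> str:
    """Ask the model to describe one image, passed as a prepared content part."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: substitute whatever vision-capable model your account offers
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Please describe this photo in detail."},
                image_part,
            ],
        }],
        max_tokens=300,
    )
    return response.choices[0].message.content

# Option 1: a publicly reachable link, e.g. a Dropbox share link with raw=1 appended.
public_url_part = {
    "type": "image_url",
    "image_url": {"url": "https://www.dropbox.com/s/EXAMPLE/photo.jpg?raw=1"},  # placeholder link
}

# Option 2: a private photo from disk, embedded directly as a base64 data URL.
with open("photo.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")
local_part = {
    "type": "image_url",
    "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
}

print(describe(public_url_part))
print(describe(local_part))
```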
Pangeran Satrya Indra Tjahyadi
I've stopped checking his posts. It's like hitting your head against the wall; it's not worth it. This issue with the app is not going to be solved and I would not trust it. I hope they address the issue, but I am not holding my breath. Long live the Apple.
Alright people.
I might not agree with Martin on all things, but come on, this going round and round is silly and leads to nothing in the end.
public link
Scott, I tried your method of moving a photo into Dropbox. However, it appears that OpenAI thought of this and won't process the photo, saying that it must be a public link, whatever that means. I'm assuming a Google Images link.
Slack or group to report/find out other problems with BME
I saw someone post that there is a Slack group for BME. I tried finding it with no luck. Is it one of those you have to be invited to? Any other groups out there we can scour to find out what problems people are having or places we can report problems?
In The End This Was Helpful
Woo boy! I just went through all 4 months of comments on this thread. Some of it made me feel all nostalgic for the good old days of Email listservs, where we found new and innovative ways to be mean. But, overall, I got some good info in the end. Ultimately we are at the mercy of things beyond our control. I caught things at the wrong time. I joined the beta list after the whole blurred faces thing, and it was giving amazing descriptions of people. Then…placeholders. I’m getting resigned to faces being gone for good, but hopefully I’ll be pleasantly surprised.
A couple of updates.
Re the Slack channel: unfortunately it's closed, but I'm sure Be My Eyes are looking at this page periodically, so it's worth putting thoughts here. Re faces: BME are working with OpenAI and are pretty confident there will be a permanent solution before general release in a couple of months, maybe sooner. It's all dependent on OpenAI, but they are trying really hard to build safeguards that allow them to comply with the law while giving us the ability to get facial descriptions. Re adult content: this one blew me away. Nothing is going to happen in the short term, but it's not something they're closed off to. Basically, they understand people's reasonable desire to have access to that type of content and are trying to make it happen. It's just not going to happen quickly. All in all, incredibly good news all round.
this AI is amazing!
I finally got around to playing with the Be My Eyes AI feature last night, and the level of detail it provided just blew my mind. I have a collection of stuffed animals, and it even told me which ones were in the picture. I understand that I shouldn't rely on it for super important stuff, but in my admittedly limited experimenting, I think this is a huge step forward for accessibility in general and photo description in particular. I'll have to experiment and figure out how to use it for my existing photos and stuff, something I'm sure I'd learn from the podcast, but especially in light of some of the more negative comments I've read in this thread, I just felt the need to come out here and express my amazement at the level of accuracy and detail I was given from this feature.
re: andy lane
Any updates on when we can have it restored to what it was? Is that the timeline you referenced, as being a couple of months?
I was hoping it'd be restored within a few days, maybe even a week at most. However, if I have to wait for two months for restoration, I suppose that won't be the end of the world but it is incredibly disappointing to just have access then lose it.
Re timeline.
Unfortunately there's nothing more at the moment. They are pretty confident it will be fixed before final release to the public. It may be much sooner, but they can't give any promises because they're trying to do something new and that's really hard. The really reassuring thing, though, is that they are working really hard on it, and both OpenAI and BME think there's a solution to be had, probably based around location. Honestly, we couldn't hope for a better team to put this together and get it into everyone's phones and lives. It's just a hard thing to do because of legal requirements.
re: timeline
Thanks, andy!
I am comforted that the problem will be resolved soon and look forward to it!
Yeah, definitely something has changed again
Now, when I take a picture that has people in it, Be My AI will say something like this:
I apologize, but it seems that the image you've uploaded was blocked because it may contain faces or people. For privacy and security reasons, I'm unable to view or describe images that contain faces or people. If you need assistance with something else, feel free to let me know or upload a different image.
I hope this will be fixed as soon as next week
a question
Like the rest of you, I'm definitely looking forward to having this fixed soon. Also, I think I already know the answer to this, but is there any way to get email updates whenever this thread updates? I know that's a thing with forum posts, but from what I understand it's not possible with podcasts.
Screenshot of dating profile worked, but
It's disturbing to me: I took a screenshot of a conversation I was having on Facebook Dating, and some of this person's photos are in that conversation, and Be My AI described the face in the screenshot. But when I went to my photo album and tried to share a normal photo from the share sheet, it wouldn't describe it and came up with that message the person above mentioned. So then I tried taking a screenshot of that same photo and doing it that way, but it still wouldn't describe it, so it's definitely buggy!
I'm frustrated about this; I sent them an email last night explaining the issues. They need to work on having Be My AI be like it was in this podcast a few months ago, because it's not fair to us that we had to hear this wonderful demonstration, and now we have to go through this…
This is what I mean…
The picture you've shared is a screenshot of a dating profile editing page on a mobile phone. At the top, there are options to "Edit Profile" and "Preview Profile". Below that, there is a section titled "Photos and prompts" with text that says "Drag and drop photos and prompts in the order you'd like them to appear."
There are several tiles with photos and prompts. The first row has three tiles. The first tile is a photo of a man taking a mirror selfie showing his upper body. The second tile is a photo of the same man in a gym. The third tile has a prompt that says "I'm looking for someone...".
The second row has three tiles as well. The first tile has a prompt that says "One movie I can watch over and o...". The second tile has a prompt that says "The 3 words that best describe...". The third tile has a prompt that says "My idea of a perfect day is...".
The third row has two tiles. The first tile is a photo of the man holding up two thumbs up. The second tile is a photo of the man flexing his muscles in the gym.
The fourth row has one tile which is a photo of the man shirtless, wearing sweatpants
Well, I would say maybe shirtless would be considered adult material, but somehow it's able to describe that, so the inconsistency of this is really what bothers me, and I'm sure it bothers many others too.
I appreciate those who are working on making this the way it was, hopefully soon. I'm trying to be married by next year and I would like to know what type I'm marrying. Ha!
Seriously, it’s amazing how we can have a tool like this to use independently so we can match ourselves with someone we’re compatible with and know what they look like without having to ask another sighted person. This is a very personal thing that we should be able to do on our own.
You’re gonna need a lawyer I’m afraid
If in one year you’re going to be married to a woman you don’t even know yet. Lol
That’s interesting …
I would like to keep this about Be My AI, but I wanted to share that part of my life so I can help make some movement with whoever is watching this topic and making those decisions. But I do appreciate your hilarious input. Thanks!