Welcome to a special episode of AppleVis Extra, where host Dave Nason is joined by Hans Wiberg, founder of Be My Eyes, and the new CEO of Be My Eyes, Mike Buckley, to discuss the recent announcement of Be My Eyes' new Virtual Volunteer feature powered by OpenAI's GPT-4 model.
Be My Eyes, a revolutionary app for the blind and low-vision community, has been connecting users with volunteers for assistance with everyday tasks since 2012. With the introduction of the Virtual Volunteer, powered by the advanced visual recognition capabilities of GPT-4, the app is set to take its power and value to new heights.
Hans and Mike share their excitement about the performance of GPT-4, stating that in the short time they've had access, it has shown unparalleled capabilities in image-to-text object recognition. The implications for global accessibility are profound, as this new feature has the potential to offer a greater degree of independence in the lives of blind and low-vision individuals.
The Virtual Volunteer stands out from other image recognition tools due to its ability to have conversations and offer comprehensive assistance with context and analysis. Users can send images of various tasks, such as identifying the contents of their fridge or reading a map, and the Virtual Volunteer not only identifies the objects but also provides additional information and suggestions, such as recipes that can be prepared with the ingredients.
The Virtual Volunteer tool is currently in beta and is anticipated to become available in Q3 2023. It will be free for all blind and low-vision community members using the Be My Eyes app. Don't forget to register in the Be My Eyes app to be placed on the waiting list for access to the Virtual Volunteer.
Comments
It's going to be a long few months
I was very disappointed to hear that the Virtual Volunteer feature won't be available to the public until late Q3. However, I suppose this means that I can take a break from constantly checking the status of my application 🤷‍♂️
Game-changer
I've wanted this for years. Apps like Seeing AI help, but are limited. I want an app that can look at a box of food and tell me the cooking directions without also reading the nutrition information and marketing hype. I want an app that can look at a receipt and give me a summary of what I bought and how much I paid, not read the restaurant address, website, phone, and coupon details. I want an app that can look at a menu and read the section headers (appetizers, sandwiches, entrees, desserts), and list the entries when I ask for more information about a section, and finally read the full description and price for any entry I express an interest in. In short, I want an app with nearly human intelligence.
Keep pushing this technology forward. It will change the lives of persons with disabilities.
I'm personally not a fan of the AI thing at all
I'm not a fan of AI doing everything, considering it's based on a completely new GPT server, which could be completely unreliable for all we know.
I've signed up for the beta to see how it goes, but if my computer gets a corrupt error or it does strange things on me, I'm never using it again, especially if it messes up my stuff on the computer even more.