Something different is coming: a progressive web app for you all.

By Stephen, 12 November, 2025


Something Different is Coming
A Progressive Web App for Blind and Visually Impaired Users | Works on All Smartphones

I need to tell you about something I built. Not because it's "the best" or "revolutionary" – everyone says that. But because it works in a way that genuinely surprised me when I tested it.

The Problem I Kept Running Into
You know the drill with most vision AI apps:
Point your camera → AI speaks a sentence → that's it.
"It's a living room with a couch and a table."
Cool. But where's the couch exactly? What color? How far? What else is there? Can you tell me about that corner again?
You have to point again. Ask again. Wait again. Listen again.
You're always asking. The AI is always deciding what matters. You never get to just... explore.

What If Photos Worked Like Books?
Stay with me here.
When someone reads you a book, you can say "wait, go back." You can ask them to re-read that paragraph. You can spend five minutes on one page if you want. You control the pace of information.
But photos? Someone gives you one description and that's it. Take it or leave it. They decided what's important. They decided what to mention. They decided when you're done.
We thought: What if photos worked like books?
What if you could explore them at your own pace? Go back to parts that interest you? Discover details the other person missed? Spend as long as you want?

The 6×6 Grid: Your Photo, Your Exploration
Here's what we built:
Upload any photo. Any photo at all.
The AI divides it into 36 zones – a 6×6 grid covering every inch of the image.
Now drag your finger across your phone screen like you're reading a tactile graphic.
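Mechanically, turning a finger position into one of the 36 zones is just integer division of the touch coordinates. A minimal sketch in TypeScript (function and variable names are mine, not necessarily what the app actually uses):

```typescript
// Map a touch point on the image to a zone in a 6x6 grid.
// (x, y) are pixels measured from the image's top-left corner.
function zoneIndex(
  x: number,
  y: number,
  width: number,
  height: number,
  cols = 6,
  rows = 6,
): number {
  // Clamp so a touch on the far edge still lands in the last zone.
  const col = Math.min(cols - 1, Math.max(0, Math.floor((x / width) * cols)));
  const row = Math.min(rows - 1, Math.max(0, Math.floor((y / height) * rows)));
  return row * cols + col; // 0..35, reading order: left-to-right, top-to-bottom
}

// As the finger drags, speak a zone's description only when the zone changes.
function describeOnDrag(
  x: number,
  y: number,
  width: number,
  height: number,
  lastZone: number,
  descriptions: string[],
): { zone: number; speak: string | null } {
  const zone = zoneIndex(x, y, width, height);
  return { zone, speak: zone === lastZone ? null : descriptions[zone] };
}
```

The clamping matters: without it, a touch exactly on the right or bottom edge would compute column or row 6 and fall outside the grid.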
What This Actually Feels Like:
You're exploring a photo of your living room:
Start in the top-left corner – drag your finger there:
"Smooth cream-colored wall with matte finish, cool to imagine touching, painted evenly"
Slide your finger right:
"Large window with soft natural light streaming through, sheer white curtains that would feel delicate and silky between your fingers"
Down a bit:
"Polished oak coffee table, glossy surface that would feel smooth and slightly cool, rich honey-brown color"
To the left:
"Plush beige carpet, deep pile that looks like it would feel soft and springy underfoot, slightly worn in the center from foot traffic"
Wait, go back to that window – drag back up:
"Large window with soft natural light streaming through, sheer white curtains..."
You're in control. You decide what to explore. You decide how long to spend. You decide what matters.
Go to the bottom-right corner – what's there?
"Wooden bookshelf against the wall, dark walnut finish with visible grain, would feel smooth with slight ridges"
Move to the zone right above it:
"Books lined up on shelf, various colored spines, some leather-bound that would feel textured and aged"
This Changes Everything
You're not being told about the photo.
You're exploring it.
You can go back to that window five times if you want. You can ignore the couch and focus on the corner. You can trace the room's perimeter. You can jump around randomly.
It's your photo. You explore it your way.
And here's the thing: the information doesn't disappear. It's not one-and-done. It stays there, explorable, for as long as you want.

Now Take That Same Idea and Put It in Physical Space
You walk into a hotel room at midnight. You're exhausted. Strange space. No idea where anything is.
Usually? You either stumble around carefully, or ask someone to walk you through, or just... deal with it till morning.
New option:
Point your camera. Capture one frame. The AI maps it into a 4×4 grid.
Now drag your finger across your screen:
• Top-left: "Window ahead 9 feet with heavy curtains"
• Slide right: "Clear wall space"
• Keep going: "Closet with sliding doors 8 feet on the right"
• Bottom-left: "Clear floor space"
• Center-bottom: "Bed directly ahead 5 feet, queen size"
• Bottom-right: "Nightstand right side 4 feet with lamp and alarm clock"
You just mapped the entire room in 30 seconds. Without taking a step. Without asking someone. Without turning on any lights.
Want to know what's on the left side again? Drag your finger back over there. Want to double-check the right? Drag there.
The information stays right there on your screen. You can reference it. You can re-explore it. You can take your time understanding the space.

The Core Difference
Most apps: Point → Wait → AI decides what to tell you → Move on → Repeat
This app: Explore → Control the pace → Discover what matters to YOU → Information persists → Return anytime
That's not a small difference. That's a fundamentally different interaction model.
You're Not a Passive Receiver
You're an active explorer.
You don't wait for the AI to decide what's important in a photo. You decide which zone to explore.
You don't lose the room layout the moment it's spoken. It stays mapped on your screen.
You don't get one chance to understand. You can explore as long as you want, go back, re-check.
This is what "accessible" should actually mean: Not just access to information, but control over how you receive and interact with it.
I have big plans to expand this feature as well.

Oh Right, It Also Does All The Normal Stuff
Because yeah, sometimes you just need quick answers.
Live Camera Scanning
Point anywhere, AI describes continuously:
• Quiet Mode: Only speaks for important stuff (people, obstacles, hazards)
• Detailed Mode: Rich ongoing descriptions
• Scans every 2-4 seconds
• Remembers what it already said (no repetition)
Voice Questions - Just Ask
No buttons. Just speak:
• "What am I holding?"
• "What color is this shirt?"
• "Read this label"
• "Is the stove on?"
• "Describe what you see"
• "What's on my plate?"
Always listening mode – ready when you are.
Smart Search (Alpha)
"Find my keys"
AI scans rapidly and guides you:
• "Not visible – turn camera left"
• "Turn right, scan the table"
• "FOUND! On counter, left side, about 2 feet away"
⚠️ Alpha: Still being worked on.
Face Recognition: Alpha
Save photos of people → AI announces when seen:
"I see Sarah ahead, about 8 feet away"
Totally optional. Enable only if wanted.
Object Tracking: Alpha
Tell AI to watch for items:
"Keep an eye out for my phone"
Later: "Where did you last see my phone?"
→ "On kitchen counter, 22 minutes ago"
Meal Assistance
Food positioned using clock face:
"Steak at 3 o'clock, potatoes at 9 o'clock, broccoli at 12 o'clock"
Plus descriptions: portion sizes, cooking level, colors, textures.
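The clock-face convention maps an item's direction from the plate's center to the nearest hour, with 12 o'clock pointing away from the diner. A rough sketch of that conversion (a helper of my own, not the app's actual code):

```typescript
// Convert an offset from the plate's center to a clock-face direction.
// dx is positive to the right; dy is positive toward the top of the plate
// (away from the diner), so (0, 1) is 12 o'clock and (1, 0) is 3 o'clock.
function clockPosition(dx: number, dy: number): string {
  // Angle measured clockwise from "straight ahead", in degrees.
  const degrees = (Math.atan2(dx, dy) * 180) / Math.PI;
  const normalized = (degrees + 360) % 360;
  // Each hour spans 30 degrees; round to the nearest hour.
  const hour = Math.round(normalized / 30) % 12;
  return `${hour === 0 ? 12 : hour} o'clock`;
}
```

For example, an item to the diner's right gives `clockPosition(1, 0)`, which is "3 o'clock".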
Reading Mode: Alpha
Books and documents:
• Voice commands: "Next page", "Previous page", "Repeat", "Read left page", "Read right page"
• Speed controls: "Read faster" / "Read slower" (instant adjustment)
• "Check alignment" (ensures full page visible)
• Auto-saves progress per book
• Resume exactly where you stopped
Social Cue Detection: Alpha
Optional feature detecting if people are:
• Making eye contact with you
• Waving or gesturing toward you
• Trying to get your attention
Fully Customizable
Pre-set profiles or build your own:
• Scanning frequency (2-4 seconds)
• Detail level (Basic / Standard / Maximum)
• Voice speed (0.5× to 2×)
• Auto-announce settings
• Feature toggles

Why This is a Web App, Not an App Store App
Honest reason: We want to ship features fast, not wait weeks for approval.
Better reason:
App stores are gatekeepers. Submit update → wait 1-2 weeks → maybe get approved → maybe get rejected for arbitrary reasons → users manually update → some users stuck on old versions for months.
Progressive Web Apps are different:
Bug discovered? Fixed within hours. Everyone has it immediately.
New feature ready? Live for everyone instantly.
AI model improved? Benefits everyone right away.
No approval process. No waiting. No gatekeepers.
Plus it works everywhere:
• iPhone ✓
• Android ✓
• Samsung ✓
• Google Pixel ✓
• Any modern smartphone ✓
Same features. Same performance. Same instant updates.
Installation takes 15 seconds:
1. Open browser
2. Visit URL
3. Tap "Add to Home Screen"
4. Appears like regular app
Done.
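The "Add to Home Screen" behavior comes from a standard web app manifest; it looks something along these lines (field values here are illustrative, not the app's actual manifest):

```json
{
  "name": "Vision AI Assistant",
  "short_name": "Vision AI",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#000000",
  "theme_color": "#000000",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

Pairing a manifest like this with a service worker is what lets the browser install the site like an app and pick up new versions on the next load, with no store review in between.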

Privacy (The Short Version)
• Camera images analyzed and discarded – not stored
• Voice processed only during active questions
• Face recognition optional
• Data encrypted
• Delete everything anytime
Critical Safety Disclaimer:
AI makes mistakes. This is NOT a replacement for your cane, guide dog, or O&M training. Never rely on this alone for safety decisions. It's supplementary information, not primary navigation.

When Does This Launch?
Soon.
Final testing in progress.
When we officially release, you will have access to all features, even though some parts of the app and its features will still be in beta.
The Real Point of All This
For years, accessibility apps have operated on this assumption:
"Blind people need information. We'll give it to them efficiently."
Fine. But also... what if I flipped it:
"Blind people want to explore. They want control. They want information that persists. They want to discover things their way."
That's what I built.
Not "here's a sentence about your photo" but "here's 36 zones you can explore for as long as you want."
Not "here's a description of this room" but "here's a touchable map that stays on your screen."
Information that persists. Exploration you control. Interaction you direct.
That's the difference.

One Last Thing
The photo grid gives you 36 descriptions per image. Detailed, sensory, rich descriptions.
So when it comes out, watch people explore single photos for 5-10 minutes.
Going back to corners. Discovering details. Building mental images. Creating memories of the image.
That's not just making photos accessible.
That's making photos explorable.
And I think that's better.

Coming Soon
Progressive Web App
Works on All Smartphones
Built for exploration, not just description

What do you think? Which feature interests you most? Questions? Thoughts? Comments below.

Comments

By Stephen on Thursday, November 13, 2025 - 17:21

I’ll take a look at that in a second here. Thanks for letting me know. I’m just adding the ability right now for users to choose which voice they want and then I’ll take a quick look at that. You can also now send me a direct message through the app 😊.

By Enes Deniz on Thursday, November 13, 2025 - 17:43

  1. I do know this is a universal web app and you posted about it under the non-Apple category, but AppleVis is still an Apple-focused forum, and besides, the other popular sign-in options like Microsoft, Google, and Facebook are all available while Apple is not, so this attracted my attention. Apple sign-in is likely the most convenient option on Apple devices, since you can sign in without entering your account credentials; all you have to do is biometric authentication, and that's all.
  2. Even though the app is primarily intended for mobile platforms despite being a web app, not everyone uses TalkBack on Android, unlike iOS or iPadOS, where Apple's built-in screen reader (VoiceOver) is the only option. So even if you have designed the app for mobile platforms and specifically refer to mobile operating systems in a certain message, you should probably replace "TalkBack" with something like "TalkBack or any other Android screen reader", as there are at least a couple of others out there (Jieshuo/Commentary and Prudence).

By Stephen on Thursday, November 13, 2025 - 17:46

I just pushed an update… Your bug should be fixed. Hopefully I squashed that little bugger lol. Also, you should now be able to choose which voice you use; if you're using Alex on your device, it should automatically detect that and respond in that voice. Same thing for Android.

By Amir Soleimani on Thursday, November 13, 2025 - 18:11

@Enes Deniz With all respect, could you please focus on the app instead of nit-picking?
Yes, AppleVis is still an Apple-focused forum, but this is the so-called non-Apple forum, and the dev has also indicated that we're dealing with the first beta. What's the point of repeating the point about Apple log-ins over and over again?

By Diego on Thursday, November 13, 2025 - 19:26

Hello guys!
First of all, I’d like to congratulate Stephen for the great app.

When I saw the post yesterday, I thought — probably because of all those glasses that got released but didn’t deliver what they promised — that it would be the same with this app.
It looked like it had lots of features, but I figured it would just be all talk and nothing practical.

I got excited when I saw the alpha release today! So, great job for shutting me up! Haha.

Now, on to the feedback.
I’m using a Galaxy S23 running Android 16.

Bugs:
The bug where you have to turn on the camera to use Live AI, as reported by another user here, also happens to me.
When I activate Live AI, the voice cuts off halfway and I can’t hear the description. The same happens in question mode.
It seems like the microphone keeps turning on and off constantly.
If I find more bugs, I’ll update you all.

Suggestions:
Instead of mentioning the names of screen readers like VoiceOver and TalkBack, why not use something more generic? That would work for Android, iOS, and even PCs if needed.
Something like: “Please turn off your screen reader before using the app.” I don’t remember the exact text, but it’s something along those lines.
It would also be nice to have buttons to increase and decrease speech speed. I know we have a slider, but at least on Android, having to double-tap, hold, and drag isn’t very precise.
Support for multiple languages — I’d love to have the descriptions in Portuguese.

Now, two questions:
I tried taking a photo of my room, but it seems like the objects are shown in the wrong places in the picture.
What’s the best camera position? For example, when I enter my room, my bed is on the left.

Another question:
I tried taking a picture of my dog, but double-tapping on him didn’t do anything.
Do I need to activate something to zoom in and analyze it better?

Thanks in advance, and keep up the good work!

By Brian on Thursday, November 13, 2025 - 20:01

Why do we need a head mount to sync the Meta smart glasses with this web application?

TIA

By Stephen on Thursday, November 13, 2025 - 20:20

Thank you so much for the kind words. As for your bug issues, I'm working on them as we speak. As for the photo layout, I'm going to change how it presents the information to you; also working on that now. This is why I released it as an alpha. The way it presents now, going from top to bottom is essentially what's furthest away from you down to what's closest to you. That will be changing here in a moment, and hopefully it will be easier to understand what you're feeling. As of now, the zoom-in features only work when you upload your photo, but in about five minutes that will also work when you take a photo of your room, for example. I'm hoping it will be easier… Personally, I liked how it presented before, but it's not just for me; it's for all of us, so I'm gonna try to make it as convenient as possible. I may even put two options in there so you can choose which layout you want.

By Brian on Thursday, November 13, 2025 - 21:00

Not to add to your workload, but I personally enjoy the layout where the closer an item is to the bottom of the screen, the closer it is to you in the picture.

By Enes Deniz on Thursday, November 13, 2025 - 21:06

I've suggested so many things, even without being able to sign in, and you never reacted to any of them like that. I do know this is the first beta, but it's because I've only signed in to my Apple account on my iPhone that I can't sign in and test the app at all. Okay? Besides, I would say the main problem was your response, in which you tried to justify the lack of an option to sign in with an Apple account rather than address it; but now I must say the main problem is your attitude, regardless of what I suggest. I already think you will begin to charge a subscription fee after some time, and I will probably not be able to use the app anyway, so that's all from me. I will only continue to use the app as long as it's free, and won't suggest anything else.

By Stephen on Thursday, November 13, 2025 - 21:12

Don’t sweat it :). I just changed the layout. Let me know what you think. If it doesn’t work for you, I’ll see what I can do about keeping both options so you can choose which type of layout you want. Also, there is a little surprise button on the Home Screen. I’m still dabbling with the feature, so it might be a little bit broken or buggy.

By Amir Soleimani on Thursday, November 13, 2025 - 21:18

@Enes Deniz First, please get your facts in order. Stephen didn't address your nit-picking - I did. So don't direct your anger at him. In fact, he remained silent and didn't find the nit-picking worthy of a reply. Second, yes, this service cannot remain free as he mentioned in his first message. What's wrong with that? Do you want all of that for free? Not doable. Sadly this attitude towards developers is something that may get under their skin eventually.

By Enes Deniz on Thursday, November 13, 2025 - 21:30

No, it's not unfair that the app will likely become paid at some point; it's just that I will probably be unable to use it then. Likewise, it's not unacceptable that Apple sign-in is currently unavailable; what's unacceptable is that the dev evaded my remarks on it and tried to justify not adding it, instead of telling me something like "I'm working on it.", "It's on my to-do list.", or even "Hey, seems like I just skipped that one. Thanks for pointing it out!". Now that I've got this response from him, I will continue to use the app only until or unless it becomes paid, and that is if the dev considers adding Apple sign-in. And it is because of this incident that I will stop using the app if it becomes paid, even if I could somehow pay the fee, not because I would be unable to pay a subscription fee anyway.
PS: Speaking of facts, the dev won't have to pay anything to Apple or Google to have his app published on the Apple App Store or Google Play.

By Amir Soleimani on Thursday, November 13, 2025 - 21:58

But, @Enes Deniz, the AI stuff behind the app isn't free. And how can it be? Is PiccyBot free with all of its features? Just see how JAWS 2026 makes a difference between Home/ Home Pro/ Pro users when it comes to new AI-oriented features. Yes, who doesn't like free apps? But it's a fact no matter how saddening it might be, and I haven't even considered the time and effort he's devoting to it. As for Apple sign-in, what can I add other than the fact that he's said he's working on it? You're dealing with an alpha or beta web app, and he could have excluded, say, Google sign-in if he had wanted to, depending on his priorities. I'm not in a position to provide advice, but this attitude will get us nowhere.

By Stephen on Thursday, November 13, 2025 - 22:06

Right now, Continue with Apple sign-in isn’t supported, but you can still sign up using your Apple email.

By Joshua on Thursday, November 13, 2025 - 23:07

This is awesome. I use Android, and one of the frustrating things is finding these really cool apps only to discover most are iOS-only, and I have to wait a long time for an Android version, if it comes at all.

By Joshua on Thursday, November 13, 2025 - 23:10

Don't know if I just missed it, but I didn't find a link to this.

can someone share it?

thanks

By Stephen on Thursday, November 13, 2025 - 23:19

Hello Joshua. My goal is to make this app universally accessible across both iOS and Android. You can visit the app here, and you can also save it to your home screen to use it like a native app.
http://visionaiassistant.com

By Joshua on Friday, November 14, 2025 - 00:50

thanks for the link

By Brian on Friday, November 14, 2025 - 01:20

So I gave my living room another go, using the newer layout. Honestly, I think I would be satisfied with whichever layout you decide to go with. They both give enough information, distance, and details of items in and around the camera's point of view.
I had a bit of enjoyment with this earlier. After scanning my room with the new layout, I zoomed in on my coffee table, then focused again on items on the coffee table, more specifically a water bottle that was roughly half full of water, and a television remote. Now, it could not tell me the label of the water bottle, I am thinking because the label was likely facing away from me. However, it did a fairly good job describing the buttons it could see from the initial picture of the room. This application has become quite impressive.
Kudos on creating such an intuitive and enjoyable interface.

By Stephen on Friday, November 14, 2025 - 01:28

Thanks so much. I’m super glad you’re enjoying it. I’m looking into developing social media type features where you can share your taken or uploaded photos with your friends, so they can explore them as well. This way you can actually share the tactile photos.

By Stephen on Friday, November 14, 2025 - 02:22

Now you guys can feel through each other‘s photos and share tactile photos with each other.

By quanchi on Friday, November 14, 2025 - 13:27

Thank you Stephen for this amazing app.
I tried the app and it's really really good at what it does and it's making looking through photos fun.

I have a suggestion. I know how expensive building a service that relies on AI can be because of the pricing of the models.
Can you allow users to put their own API keys for using the models?

People can either choose from a list of models or even connect a custom model that they're maybe locally hosting to process images.

A lot of tools that allow local model hosting use the OpenAI API protocol, so hopefully there won't be issues connecting the app to them, provided that the model supports images, of course.

Making the users able to put their own API keys for state of the art models or connect the app to their locally hosted LLMs will gear some users toward that option, thus cutting down on the costs you have to pay to keep the app going.
Second, if they decide to use their local LLMs, they'll be assured that their data won't leave their devices, so it's better for their privacy.

I know that the quality of local LLMs isn't as good as well known state of the art models, but I guess if someone is hosting one, they know about that downside already.
Plus, it doesn't hurt to give the users the option.

If it can be made that users can put their own API keys without creating an account, that would be better too.
Personally speaking, I wouldn't mind paying for the feature if you decide to implement it.

I would love to use an app like this but with my locally hosted models.
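For what it's worth, the request shape for an OpenAI-compatible vision endpoint is the same whether the model is in the cloud or hosted locally via something like LM Studio or Ollama. A rough TypeScript sketch (the endpoint URL, port, and model name are placeholders, not anything the app actually uses):

```typescript
// Build a chat-completions request body for an OpenAI-compatible
// vision model, with the image embedded as a base64 data URL.
function buildVisionRequest(
  base64Jpeg: string,
  prompt: string,
  model = "local-vision-model",
) {
  return {
    model,
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: prompt },
          {
            type: "image_url",
            image_url: { url: `data:image/jpeg;base64,${base64Jpeg}` },
          },
        ],
      },
    ],
  };
}

// Send it to a locally hosted server that speaks the OpenAI protocol.
// (http://localhost:1234/v1 is LM Studio's default; adjust for your setup.)
async function describeImage(base64Jpeg: string, prompt: string): Promise<string> {
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildVisionRequest(base64Jpeg, prompt)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Since the payload format is shared across providers, letting users swap in their own base URL and API key is mostly a matter of configuration rather than new protocol code.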

By Stephen on Friday, November 14, 2025 - 15:02

Thank you so much both for the feedback and the suggestion. Right now, the web app architecture presents some significant technical challenges, mainly around secure key management, CORS restrictions for localhost connections, and ensuring the consistent reliability that blind users depend on across different model configurations. Since web browsers block remote websites from connecting to localhost services for security reasons, supporting local LLMs would require users to install additional proxy software, which adds complexity that could compromise the accessibility and ease of use this app is built around. I am, however, keeping this on my radar for future exploration. Thank you again for the excellent suggestion!

By Stephen on Friday, November 14, 2025 - 15:18

I should probably rename this button, but go outside and try the explore your room feature. Zoom in to houses, cars, trees. Let me know if you were able to capture a photo of a bird and zoom into it, and share it through our social media spot on the app! Search for me at Stephen_Lovely. Would love to feel what you’ve captured!!!

By Geofilho Ferreira Moraes on Friday, November 14, 2025 - 15:56

Congratulations on the app!

The language is a problem.

When will you have support for other languages? Brazilian Portuguese?
Offering other languages will make the app more universal.

By Stephen on Friday, November 14, 2025 - 16:05

It’s next on my to-do list. I’ll be working on it throughout the day and over the weekend. My goal is to have it done by the weekend, but this is gonna take a little bit of time to make sure it doesn’t conflict with the app. I’ll announce it here when it’s officially rolled out. I’ve been building it in the back end.

By Stephen on Friday, November 14, 2025 - 16:53

I’m trying right now to add multiple languages, but it is proving to be more challenging than originally anticipated. I will be focusing on the language translation before pushing another update. Please continue to let me know if something‘s broken and then I can add it to the next big update but this language thing is gonna take a bit.

By Stephen on Friday, November 14, 2025 - 17:48

Ok so the app should now support nine languages including Portuguese, which I’ve had a couple recommendations for. Let me know if it works for you guys 😊.

By Brian on Friday, November 14, 2025 - 19:09

This has got to be the fastest turn-around on bug resolution, and feature requests, that I have ever experienced with an in-development application. Now, if only other developers worked like this... 😊

By Amir Soleimani on Friday, November 14, 2025 - 19:18

Thanks, Stephen, for adding multiple languages. But any chance of expanding on those?

By Stephen on Friday, November 14, 2025 - 21:58

Hey guys, you no longer have to turn your own device's screen reader on and off to access explorer mode. The entire app now has its own built-in screen reader. I’m still working on getting the Alex voice for you guys, but it seems to be giving me a bit of trouble, which is why that’s not there yet; for some reason, I can only get the voices that don’t really matter lol. But at least now it should be more user-friendly, and you can just keep your own device's screen reader off while in the app. The maps feature is a little bit broken, and the social media features are in the process of being fixed. Thank you so much Brian, I definitely appreciate it. As for other languages, right now I can only do the languages that your device supports, but I will work on trying to get more; I just can’t guarantee when, or if that’s possible. I also still have to implement dark mode too. Don’t worry y’all, I have the whole weekend to work on this.

By Stephen on Friday, November 14, 2025 - 23:25

Working on it for desktop users as well. I have some cool things in mind for y’all.

By Stephen on Saturday, November 15, 2025 - 01:52

You guys should be able to use a computer with a mouse or if you have a touchscreen computer, you should be able to use the app with that as well.

By Jurgen on Saturday, November 15, 2025 - 07:13

Hi Stephen,
great idea.
I registered successfully, only to find just one option: turn on the built-in screen reader.
After clicking on this the following message appeared:
404
Page Not Found
The page "gesturetutorial" could not be found in this application.
Go Home

What can I do to use your service?
Thanks and all the best
Jürgen

By Stephen on Saturday, November 15, 2025 - 08:18

Sorry about that I’m fixing it now.

By Stephen on Saturday, November 15, 2025 - 09:01

Should be working now for you. Sorry I broke things when tinkering with other things lol.

By mantanini on Saturday, November 15, 2025 - 10:00

What are the supported languages? Are Spanish, French, and Bulgarian among them? If Bulgarian is not, could you add it if possible? I am learning Bulgarian and think it would be useful, as I am thinking of visiting the country soon.
Thanks!

By Suriyan on Saturday, November 15, 2025 - 10:45

I love all the features you described, it's awesome. I'll definitely be waiting for your web app.

By Stephen on Saturday, November 15, 2025 - 10:53

Hello mantanini. You can check out all the languages we support in the settings portion of your app at:
http://visionaiassistant.com

By Stephen on Saturday, November 15, 2025 - 10:55

It’s out in alpha/beta mode so it’s free for now. Maybe I should make a second entry for folks now that it’s live? All you need to do is go here:
http://visionaiassistant.com

By Suriyan on Saturday, November 15, 2025 - 11:14

Oh, thanks for your reply. I was just reading through the other comments and realized you've already made it available for testing.

I registered successfully, only to find just one option: turn on the built-in screen reader.
After clicking on this the following message appeared:
404
Page Not Found
The page "gesturetutorial" could not be found in this application.

I just tried it a while ago.

By Stephen on Saturday, November 15, 2025 - 11:22

It’s still doing that? I’m on it now.

By Stephen on Saturday, November 15, 2025 - 11:28

“Should” be fixed now. Let me know if it is still giving you a hard time :).

By Brad on Saturday, November 15, 2025 - 12:54

Hi, I tried it on my laptop and it didn't work very well.

I tried the subreddit r/shitamericanssay and youtube.com, and both gave me 404 errors.

For the sub, it told me what the sub was, but that's as far as it went; for YouTube, it just told me there was a 404 error and that it doesn't exist in this app.

By Stephen on Saturday, November 15, 2025 - 12:59

Hey Brad, thanks for letting me know. I will look into this… I know a lot of sites don’t allow iframes, like YouTube for example, so that could be one of the reasons. When you say it doesn’t work very well on your computer, is that the only thing you mean? The more details I have, the better :).

By Suriyan on Saturday, November 15, 2025 - 12:59

I'm now logged in. The system itself is impressive to me.
One additional request is that the web app be available in Thai. I think blind Thais would really appreciate it if it were available.

By Stephen on Saturday, November 15, 2025 - 13:14

Yayyyy 👏🏼 big bug squashed! I can’t make any promises on that one, but I’m already looking into it. The language seems to be giving me a bit of difficulty. You can always DM me through the app now that you’re in, if you have issues :).

By Stephen on Saturday, November 15, 2025 - 13:21

So you now have spatial audio paired with voice feedback when exploring photos and navigating a room. New options for this feature are available in your settings. Photo exploration inside social media apps is currently broken. This is next on the fix list along with the new web browser feature. The built-in screen reader is acting up, although it does not interfere with your own device’s screen reader. Photo exploration and zoom modes still function with system feedback. Turn off your on-device screen reader for now when exploring the grid so you can double tap and zoom into each element.

In your settings you will also find a control that changes room explorer distance descriptions from feet to meters. This should make navigation easier regardless of where you live. Dark mode has been added for low-vision users as well.

By Suriyan on Saturday, November 15, 2025 - 13:55

While using the web app's social media page, I noticed that under the post content, where the like and comment buttons are, there are two unlabeled buttons. Without clicking them, it's impossible to tell what they do. When opened on a computer, the buttons appear as "Unlabeled 3 Button" and "Unlabeled 4 Button"; when opened on an iPhone, the text only appears as "Button".
While this isn't a major issue for me, it could impact the credibility of the web app, so I wanted to let you know.