A Guide to Privacy AI's New On-Device View Assistant for the Blind

By Aliènette, 13 April, 2026

Privacy AI is an app for your iPhone, iPad, or Mac that runs artificial intelligence directly on your device. Most AI apps send your data to the cloud, but this one works completely offline, so your conversations and photos stay private because they never leave your device. You can use it to write stories, translate between languages, and analyze documents.

The developer recently added a free feature called View Assistant, which uses your camera to continuously describe what is around you. It needs no internet connection, and there are no monthly fees. The layout can be tricky for VoiceOver users because it does not follow a standard, simple design. It takes a little patience to learn where everything is, but it is very useful once you understand it. This guide will walk you through the setup.
The Most Important Rule: The Menu Button
The Menu button is your best friend in this app. There are a lot of features, and it is easy to get confused about which screen you are on. If you ever feel lost, find and double tap the Menu button. This button always brings you back to the main list so you can start over.
Step 1: Getting the Vision Model Ready
The app comes with a few basic models already installed, but the View Assistant needs a specific vision model to work. You have to download this before you can use the camera features.
1. Open the app and tap the button that says Get Started.
2. Find the Menu button and double tap it.
3. Look for the option called Local Models and select it.
4. Select the option labeled GGUF Models.
5. You will see a long list of choices. Swipe past the options for importing files or using Hugging Face. Keep swiping past the recommended models until you hear the section for Vision Models.
6. Look for a model named LFM2.5-VL-450M. This is a small and fast model made for describing things in real time.
7. Tap the Download Model button. You will need to wait a few minutes for the download to finish.
8. When it is done, find the back button and tap it to return to the previous screen.
Step 2: Opening the View Assistant
Now that the model is on your device, you can find the View Assistant.
1. Double tap the Menu button again.
2. Select the option for Apps.
3. Find and select View Assistant.
Step 3: Picking Your Settings
When you open the View Assistant, you will see a button at the top to start the environment reading, but you should check your settings first.
Model Selection: Use the model picker to make sure the LFM2.5 model you just downloaded is the one being used.
Capture Interval: This is how often the camera takes a new picture to describe. You can set it to 1, 3, 5, or 10 seconds. Choosing 1 second will give you updates more quickly.
Prompt Edit Field: This is where you tell the AI how to talk to you. You can type a specific instruction like: Tell me what you see in one short sentence. You can change this whenever you want more or less detail.
Enhanced Text Reading: There is a toggle for this. Turn it on if you want the AI to focus on reading a label or a page of text. Turn it off if you just want to know what a room looks like.
Resolution Selection: This lets you choose the quality of the image. Higher quality gives more detail but it might drain your battery faster.
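If you are curious how these settings fit together behind the scenes, here is a minimal, purely hypothetical Swift sketch of a timer-driven capture loop like the one the View Assistant presumably runs. Every name here (`EnvironmentReader`, `captureFrame`, `describe`, `speak`) is an illustration of the technique, not Privacy AI's actual code:

```swift
import Foundation

// Hypothetical sketch: every `interval` seconds, grab a camera frame,
// ask the local vision model for a description, and speak it aloud.
// None of these identifiers come from Privacy AI itself.
final class EnvironmentReader {
    private var timer: Timer?
    private let interval: TimeInterval   // 1, 3, 5, or 10 seconds in the app

    init(interval: TimeInterval) { self.interval = interval }

    func start(captureFrame: @escaping () -> Data,
               describe: @escaping (Data) -> String,
               speak: @escaping (String) -> Void) {
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { _ in
            let frame = captureFrame()   // e.g. a JPEG from the back camera
            let text = describe(frame)   // run the on-device vision model
            speak(text)                  // hand the result to speech output
        }
    }

    func stop() {
        timer?.invalidate()              // what "Stop Environment Reading" would do
        timer = nil
    }
}
```

This also explains the trade-offs above: a shorter interval and a higher resolution both mean more work per minute for the camera and the model, which is why they cost battery.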
Step 4: Using the View Assistant
1. Go back to the top of the screen and double tap the button labeled Start Environment Reading.
2. Point your phone's camera at whatever is around you. The app uses the back camera by default, but you can switch to the front camera if you want.
3. The AI will take a picture based on the time interval you picked and will automatically speak the description.
4. When you want to stop, find the button that says Stop Environment Reading and double tap it.
5. You can always use the Menu button to go back to other parts of the app.
How to Change the Voice
The app uses the standard voices built into your phone. If you want to change how the voice sounds or pick a different language, you have to go into your main phone settings.
1. Open the Settings app on your iPhone.
2. Go to the Accessibility section.
3. Tap on the option for Spoken Content.
4. Select Voices to pick a new voice or a different language for the AI to use.
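This is why voice changes live in the system settings rather than in the app: iOS apps that speak on-device typically hand text to Apple's AVSpeechSynthesizer, which draws on the same pool of voices you configure there. A minimal sketch, assuming (without confirmation from the developer) that Privacy AI does something similar:

```swift
import AVFoundation

// Keep the synthesizer alive for the duration of speech; a function-local
// instance can be deallocated before it finishes speaking.
let synthesizer = AVSpeechSynthesizer()

func speakDescription(_ text: String, language: String = "en-GB") {
    let utterance = AVSpeechUtterance(string: text)
    // Picks a system voice for the language; users add or change these
    // voices in Settings > Accessibility, not inside the app itself.
    utterance.voice = AVSpeechSynthesisVoice(language: language)
    synthesizer.speak(utterance)
}
```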
Helpful Tips
Privacy AI is a very powerful app, and it is normal if the interface feels a bit overwhelming at first. Just remember that the Menu button is always there to help you start over if you get stuck. The developer is working on making the app easier to navigate for VoiceOver users in the future. Using this tool gives you a way to understand the world around you while keeping all your information completely private.
App Store link:
https://apps.apple.com/fr/app/privacy-ai-powerful-chatbot/id6738392421?l=en-GB

Disclaimer

The article on this page has generously been submitted by a member of the AppleVis community. As AppleVis is a community-powered website, we make no guarantee, either express or implied, of the accuracy or completeness of the information.

Comments

By Enes Deniz on Monday, April 13, 2026 - 09:00

  1. Local AI (Qwen3.5-0.8B) is already small and accurate enough, and it’s multilingual with support for over 200 languages and dialects, so you don’t have to download LFM2.5-VL-450M even on older devices. If you do care about speed over accuracy and wish to reduce the interval to 1 second, however, download LFM2.5-VL-450M in MLX format, not GGUF. Either way, it’s not an essential requirement at all, and the app definitely lets you pick other models.
  2. The latest beta updates add more preset prompts for the View Assistant and offer a more refined user experience, including the ability to double-tap with two fingers (also known as a “magic tap”) to start and stop recognition.

By Aliènette on Monday, April 13, 2026 - 09:28

In the MLX model list, the only vision LFM currently recommended is the 1.6B version. I suggested the GGUF instead because, while MLX models are generally faster on iOS, the GGUF model list gives access to the latest LFM2.5-VL-450M vision model. The Qwen model is definitely fast; in my own testing, the Liquid model feels even quicker. Both are excellent choices to experiment with, though.

By Enes Deniz on Monday, April 13, 2026 - 09:34

You can search for LFM2.5-VL-450M, SmolVLM2-500M-Video-Instruct or any other model not listed.

By Aliènette on Monday, April 13, 2026 - 09:42

That is definitely true, and you can certainly look for new models on Hugging Face. I was thinking more about the people who are just starting out with this app. The main hurdle with Hugging Face is that they will have to track down the specific mmproj file for any vision models that aren't already included in the app's standard list.

By Aliènette on Monday, April 13, 2026 - 10:22

Some of the latest models aren't very stable on MLX yet, but fortunately, people can just use the GGUF versions for now in case they run into issues.

By Enes Deniz on Monday, April 13, 2026 - 11:36

FYI, I already told the developer that Privacy AI should auto-detect and download projectors/vision encoders/vision adapters/mmproj files or whatever they're called, for a more user-friendly experience.