Giving accurate opinions when A.I. is involved

By Siobhan, 17 June, 2024

Forum
Assistive Technology

Hi all. I was taking a survey for a site a few moments ago, with a fictional scenario of someone needing clothing for a formal occasion next month. They were then shown a video with "Amanda" (name changed), so they could be guided on what they might like. Twice I failed the quality check because I could not identify the placement of Amanda (top left corner, right, etc.). What I'm getting at is: how is anyone using assistive technology going to be able to interact with a virtual assistant if, even in a fake circumstance, they are unable to know exactly where this assistant is? I realize this was simply a survey, but it irks me that I can't give valuable feedback (whether or not it was accessible can come later) because I was visually unable to know where the assistant was. Is it just me, or does this not bode well for what's coming? Full disclosure: I used a Mac to take this survey, but the original point still stands.

Comments

By Assistive Inte… on Thursday, June 20, 2024 - 01:46

Do you mean you tried using AI to describe the scene and it failed?

I have found myself using Be My Eyes more and more in the past few weeks. Of course, it just helps me do something where I know what I am doing, and I would never use it for something vitally important.

If I had to do that, I would use AIRA on my phone or laptop to get confirmation from a real person.

Computer vision has come so far, but we really need it to go all the way don't we?

By Brad on Thursday, June 20, 2024 - 01:46

The app on Windows, not the phone one; but you're on a Mac, so I don't know of anything that can scan your screen for you.

You could have called a volunteer and asked them.

There's VOICE, I think it's called, on the Mac, but I don't know how good it is with this stuff.

By Brian on Thursday, June 20, 2024 - 01:46

There is VOCR, and it is pretty clever. I would say it is comparable to some of the more advanced OCR add-ons for NVDA, or to what is included with JAWS. I would not necessarily compare it to Be My AI, though.

Regardless, I think Siobhan was asking how we, as in those of us who see very little to not at all, are expected to enjoy the same benefits of all this shiny new AI that is being, quite literally, spoon-fed to us, without requiring assistance.

Siobhan, feel free to correct me if I am wrong. . . 😇

By Siobhan on Thursday, June 20, 2024 - 01:46

Seriously, it's not about what app you use or what OS you use. I was simply saying that on any platform, we are gonna come up against this. I don't care so much that I failed the survey, annoying or not, Irish temper or not. Brad, don't... I get your theory. However, what if someone cannot use tech, whatever platform, whatever version it is, and wants to use A.I.? What if they trust Amanda, hoping her wardrobe doesn't malfunction even though her best buddy Joe says it's fine, until it happens later? In other words, if you don't have access to AIRA, or Be My Eyes, or Seeing A.I., do you trust it? This goes for all countries, not just the developed ones. No disrespect meant by my last statement; I'm sorry if it sounded horrible. Oh, and Brian, love teasing your name ;)

By Brian on Thursday, June 20, 2024 - 01:46

. . . beaten Irish brats for less. 😇😈

By Brad on Thursday, June 20, 2024 - 01:46

If they wanted to use AI but couldn't use tech, they'd not use AI, plain and simple.

By Brian on Thursday, June 20, 2024 - 01:46

I have those 3 races in my ancestry. Know what that makes me?

. . . a mutt.

True story.

By Gokul on Thursday, June 20, 2024 - 01:46

Are we saying here that AI can give us wrong info and therefore cannot be trusted? Or are we talking about the whole haves/have-nots thing? What's the whole Amanda thing? A little more context would be nice.

By AppleVis on Thursday, June 20, 2024 - 01:46

Member of the AppleVis Editorial Team

Hello everyone!

We understand the passion and enthusiasm that often fuels our discussions, but it's crucial that we maintain a respectful and constructive environment. Unfortunately, some comments in this thread breached our community guidelines, specifically:

  1. Please do not post replies which add no value to the existing discussion. For example, if somebody has asked for information on a specific app, replying to say that you have not used that app is of no value and only serves to get in the way of what might be helpful answers. Additionally, if someone asks a question about how to do a particular task, replying to the post and saying to do an internet search to find the answer is not at all helpful.
  2. Please stay on topic. Always stick to the original topic that was stated by the person who started the forum thread. If you have a suggestion or comment that is on a different topic, please start a new thread.
  3. Please be polite. The AppleVis forum is designed to build a positive, thriving community. Please give the same consideration and tolerance to others that you would like to receive from them. It is perfectly okay to have a differing viewpoint from someone else; please remember, though, to express your views in a respectful manner. Posts that are deliberately insulting to other members of the AppleVis community will be edited and/or removed.
  4. No flaming. Flaming is the act of posting messages that are deliberately hostile and insulting. These types of posts are absolutely not allowed and will result in the immediate removal of the offending post.
  5. No trolling. Examples of this type of behavior include:
    1. Creating threads for the sole purpose of causing unrest on the forums.
    2. Causing disturbances in forum threads, such as picking fights, making off-topic posts that ruin the thread, or insulting other posters.
    3. Making non-constructive posts.
    4. Excessively communicating the same phrase, similar phrases, or pure gibberish.
  6. Please do not post inappropriate or offensive content. This includes posting messages with insults or profanity; or messages which are religiously, racially, or sexually offensive. These types of posts will result in the post being edited or removed and could also lead to your posting privileges being revoked.

We kindly ask that everyone respect these guidelines, which are in place to foster a positive and welcoming community. While differing opinions are welcome, expressing them in a respectful manner is essential.

Please be aware that continued breaching of these guidelines after being warned may result in account suspension.

Your cooperation and understanding are greatly appreciated.

By Brad on Thursday, June 20, 2024 - 01:46

I knew this would happen; that's completely fine. I hope something was learnt.

Like I said, I'll just ignore comments about Irish tempers in the future. I can't change people; the only person I can change is myself.

Funny how I cared about something so much in the moment, but now I'm just... meh about it. Humans, eh? We're a strange bunch.

By Assistive Inte… on Thursday, June 20, 2024 - 01:46

I was surprised to get to the 'messages deleted' comment and to realise it hadn't been added while I was reading.

By Assistive Inte… on Thursday, June 20, 2024 - 01:46

I'm still not sure what the message is here. Is it that now Apple Intelligence is 'for the rest of us' that we need to make sure it is for all of us?

By Brad on Thursday, June 20, 2024 - 01:46

I think the OP wants to focus on something like this: what will people who can't use tech that well, or at all, do if they want to use AI? At least, I think that's what she wanted to ask with this post; I'm really not sure.

I only know this because of how she responded to me. I'm not sure what I was or wasn't supposed to have done, but that's what I'm getting from this.

I still stick to what I said: if they can't use tech/AI, then they can't use tech/AI. Are you wanting us to think of a voice-based platform or something? I'm truly confused here.

By Brian on Thursday, June 20, 2024 - 01:46

I still think the point of interest of this thread was that AI is going to be more "exclusive" than "inclusive" as more and more companies turn to AI for everyday tasks. Hell, in my networking cert program, we are using a version of Zoom called WebEx (it can be found on the Microsoft Store, FYI), and in recent updates there is already a built-in AI assistant.

AI is everywhere. It's like COVID, only digital.

Remember everyone, keep all of your apps at least 6" apart, and for God's sake wear a medical mask! 😷😩

By Magic Retina on Thursday, June 20, 2024 - 01:46

Ha! AI being like COVID is my new favorite saying; that sums up how I feel about it pretty nicely.

Honestly, I wish there was a way to get away from AI, period. I'm sure it's useful to someone somewhere, but I have yet to actually see it being so, and given the tremendous amount of harm it does to basically everything it's in, as well as the environment, it doesn't seem like any negligible benefits are worth the horrendous damage. It's just NFTs all over again, except way more rich tech bros fell for this scam. I am glad Apple told me, flat out, that I will not be buying a new model phone or iPad any time soon if it's all just going to be loaded up with AI junk I have no use for.

As for AI and accessibility, it does sound like what I said last year is coming true: it's just another way people will excuse ignoring accessibility and shoving us to the side, claiming their pet chatbot took care of the accessibility needs and it's our fault for their broken site or app not working.

(I'm aware I'm kind of an outlier for how much I full on hate AI at this point but I feel like one of us has to say these things where people in charge of the tech might read them.)

By Gokul on Thursday, June 20, 2024 - 01:46

Well, when the art of manually making fire was invented, I'm sure there must have been people who said we should not play with fire. And I'm almost equally sure that those people must have had to play catch-up later, or be altogether content with being left behind. The case must have been similar with the wheel, with the steam engine, with electricity, with computers, the internet, and what not.
And the people who embraced fire (in spite of maybe some early painful burns) must have discovered effective ways of containing it, using it safely, and harnessing its potential for growth and general well-being. The point being: even if humans had never embraced fire, fire would still have been around, but we wouldn't have been able to make sure fire rescue services are around... Just saying.

By Brad on Thursday, June 20, 2024 - 01:46

I'm all for AI when it comes to disability, but I do see the other side; we don't need to whack it into everything.

By Assistive Inte… on Thursday, June 20, 2024 - 01:46

This is turning into the same sort of stupid argument you get everywhere else online. I suppose it can't be stopped, but I used to quite like reading this site; it is just full of argumentative rubbish lately!

By Brian on Thursday, June 20, 2024 - 01:46

Which part is rubbish? I think I got a little lost along the way navigating all these replies. . . 😕

By Holger Fiallo on Thursday, June 20, 2024 - 01:46

AI is not smart, because the input comes from humans who have their own issues and may not be aware of accessibility. An AI that does not consider those with vision issues or blindness is not surprising. The AI should have taken the time to say the assistant is on the left or right, but like I stated, with AI it is garbage in and garbage out. Humans are the ones who program them. AI will still probably tell you that it's sorry if you tell it that you are blind.

By SeasonKing on Thursday, June 20, 2024 - 01:46

We do like to participate in civilized discussions, but the OP's post sounded like just that initial phase of frustration at not being able to do something with the available tools.
Not enough context was given on what the cause of the issue was, what tools were being used, where and how AI was involved exactly, etc.
If a video was displayed and the survey asked you to find a person's position in that video, then the video needs to be accessible with audio descriptions. No AI currently publicly available is able to describe videos that effectively. It would be possible to capture screenshots of the moment where Amanda shows up on screen and get them described with Be My Eyes or something, but that's also very inefficient.
In terms of using AI interfaces, it's similar to any other element on screen: with keyboard navigation, where the AI interface is placed simply wouldn't matter to the user. You would be able to navigate the UI the same as before. On touchscreens, it's possible to navigate and figure out the positions of objects, provided the UI is accessible.
We are posting here to get thoughts from others, and we need to be mindful of providing enough information for them to help us effectively. Otherwise it's just like IT support email chains, asking for more information.

By Brian on Thursday, June 20, 2024 - 01:46

I think the underlying issue, when it comes to the claim of "political commentary", is that there are two sides of the fence (for lack of a better description): people who understand that AI can be both a help and a hindrance, and those who believe that AI will become their very best friend forever, solve all of their problems, and generally enrich their lives the more they incorporate it into their personal lives.

Personally, I have experienced its helpfulness; in fact, I am sure we all have. It absolutely has its uses, I have no doubt about that. However, and I believe I have said this in another post, students are using AI to cheat on assignments, research, essays, and the like. Imagine being a professor and having to fail half a class for cheating and/or plagiarism because some students foolishly used AI to write essays as part of a final grade.

And sure, naysayers will say that is just one example, that it will not matter and will lead nowhere. Well, I say tell that to the parents of said students. The ones who rally against AI in schools, and then in general. Imagine if laws suddenly have to be passed to limit when and where AI is legally allowed.

What will we do, as a group of individuals who, for better or worse, will have come to absolutely depend on AI to function in this wacky, wide world of ours?

By Gokul on Thursday, June 20, 2024 - 01:46

Well, if laws are to be passed on the subject, I guess as far as accessibility is concerned, it would likely end up on the safe/legal side of the equation? I mean, the same political positions that advocate a total ban on AI ought to allow its use in exceptional circumstances like accessibility, since that's the same politics that generally shows increased awareness of and concern for matters such as inclusivity and accessibility.
That aside, I don't know if most of the people who in general vote for the 'sensible' use of AI would advocate for dependence as such. So...
As for the professor who imagines herself to be cheated because half of the class took the help of AI tools to craft essays: I guess at some point in the past she would also have felt cheated because kids used Google to do in hours what, in her university days, she had to scour library books for days on end to complete. The point being, just as we have tools that help kids craft good assignments, we also have tools that help teachers catch plagiarism, including AI-produced plagiarism.
Having said all that, if someone is using AI unethically for anything, and there are suddenly laws that prevent them from doing that, and because of it they're suffering, well, why should we care anyway?

By Brian on Thursday, June 20, 2024 - 01:46

Yes to the accessibility accommodation. And yes, one would certainly hope that any future laws passed regarding AI and its general use would fall on the legal side of things. I am not sure what you were trying to articulate with your comment about professors imagining students cheating?

This is not imagination. I left out a lot of information for privacy reasons, but the certification program I am in is dealing with students who are using AI, such as, but not limited to, ChatGPT, to work out their assignments and essays. Now, keep in mind, these are not necessarily students with blindness or any other limitation, but rather your everyday students who have access to your everyday AI, and instead of studying their course, they are asking AI for the answers.

Again, this is not my, nor anyone else's, imagination. This is fact. The prof of my current course warned us about this on day one, and said that we should refrain from using it to, as he put it, "get easy answers".

Finally, you are right that we should care less about somebody being barred from using a particular toolset, such as AI, if they are just going to use it for stupid and/or illegal reasons. My concern was more to the point of what happens when/if said limitations affect "us".

This is all just food for thought, and why people should not complain so much if someone has an alternate way of thinking where AI is concerned.

By Gokul on Thursday, June 20, 2024 - 01:46

Neither was I talking about imaginary facts; I was talking about a real-life situation indeed. I mean, why can't the said certification program modify itself by incorporating tools and safeguards to prevent the unethical use of AI, while at the same time ensuring that both the students and teachers are able to tap into the ethical potential of existing AI, instead of pining about not being able to keep up with emerging technology? The whole thing about imagination was that I imagine that some ten years ago, this professor must have issued the then students stern warnings against using Google to, as they put it, get easy answers. The point being, AI is here, and I believe it's here to stay, whether we accept it or not. So it'd make better sense to accept that fact, which will then allow us to use it in the most productive and least damaging way possible.

By Brian on Thursday, June 20, 2024 - 01:46

As for tools in place to determine and/or detect students using AI to cheat: it's a possibility that is in the works. I wouldn't know. I am sure something must already exist, or else they, the profs, have their own ways. Much like folks here can tell when someone has used ChatGPT to make a post, ya know?

By Assistive Inte… on Thursday, June 20, 2024 - 01:46

After an analysis of the post and comments, it ended up in the same position as me: it didn't really know what the point was, either of the initial post or of the various threads within the comments.

Glad to see it's not just me.

By Brian on Thursday, June 20, 2024 - 01:46

You are adorable. Never change.

By Tara on Thursday, June 20, 2024 - 01:46

I'm reading this post and am still flabbergasted. Siobhan, this makes no sense. That survey seems weird anyway, and isn't even a real-world example. Why would you need to know where the assistant is positioned on the screen anyway? And Siobhan, you say, what if someone cannot use tech, whatever platform, whatever version it is, and wants to use AI? If they can't use tech, then they won't be able to use AI anyway. And that's not these companies' problem. If someone doesn't want to learn tech, or can't get access to it, that's no one's problem except theirs for not wanting to learn, or that of governments and certain organisations in developing countries for not investing their time and money into helping blind people learn tech. That is not the responsibility of AI companies. You could argue that about any application or piece of tech. What if someone wants email but doesn't want to use tech? Well, they won't be able to access email then, will they?

This survey is clearly an example of someone using AI just for the sake of it, and not applying it to a real-world example. A real-world example would be a blind person getting the info that their taxi is coming, and we saw GPT-4o do that anyway. I agree with Gokul's points. The education system is broken anyway and needs a complete overhaul the world over. If AI will do that, then so be it. AI is better than some human teachers anyway. I could go on about how AI is better than some of the worst human teachers I've seen at explaining things and breaking things down, but that would be going wildly off topic. AI shouldn't write your essays for you, and stuff like that should be stamped out. But should AI explain things you're having trouble with? Hell yeah! Bring it on!

By Holger Fiallo on Thursday, June 20, 2024 - 01:46

It will be interesting when Apple releases their calculator that's supposed to give you the answer to an equation just by adding the equals sign. Can you say cheating? Homework will now be interesting. Math problems will be 100% right thanks to it.

By Siobhan on Thursday, June 27, 2024 - 01:46

Sorry I didn't give much context. The base point is simple: forget about whether this was a survey or a real-world example; we are being excluded, as there was no way to give valuable feedback. This had been about clothing, and of course I wasn't expecting to go through the entire survey, but look at the real-world aspects here. You'll have clothing stores use this to create whatever outfit you want, or they suggest. This technology is in its infancy, so we really should be able to make ourselves heard, and I was disappointed to learn that, yet again, we are on the outskirts of this emerging idea. As for audio descriptions, that wouldn't have been possible, as the virtual assistant was speaking. Now, as for everything else, I'll continue being myself, and those who are unhappy with my conduct can simply skip on, as I shall.

By Gokul on Thursday, June 27, 2024 - 01:46

Sorry to say this, but it's still not clear exactly what problem the original post is trying to address. If you are trying to say that the visually impaired community remains at the fringes of this emerging technology (all personal opinion on the statement itself aside), the only way to address it is continued and increased participation, so that the sidelined voices have more and more space to be heard. We don't solve the problem by ditching the tech just because we've been sidelined; in that case the sidelining will only escalate further.

By Brad on Thursday, June 27, 2024 - 01:46

I get what you're saying, I think: you're saying that AI will be able to create a visual representation of what the clothing will look like on the person, and I get how annoying that could be. But honestly, we've been left out of things before, and that's just the reality of being blind, so it's not a big deal to me.

I guess you could try to get the NFB involved and see if they'll do the good ol' American thing, suing, but apart from that, I don't see much else you can do.

By Assistive Inte… on Thursday, June 27, 2024 - 01:46

We could maybe, perhaps, sort of get involved? Use this technology and give feedback to the developers and communities involved? If they know AI-generated image descriptions are useful to blind and visually impaired people, they might put a little bit of effort into improving them.

My suggestion is to adopt an open-minded approach to AI and to explore how it can be an assistive technology; a sort of AI for the rest of us with special needs?

By Siobhan on Thursday, June 27, 2024 - 01:46

The whole point wasn't about how I got it done, or why it didn't work, apart from the fact that it didn't; it was about how to improve everything. So I can't see colors. Is that any reason for me not to pick out something nice for an interview, dinner, etc.? I want to make my own decisions, as do you all. However, this goes off topic, so I'll bring it back. If you in general do not see colors and trust friends, salesmen, etc., that's just cool. However, I was frustrated that we have no way to even put a foot in the door, as it were. As for the suing aspect, thanks, but the organizations here care so little about what looks good for them that they couldn't care less about the person at the other end of the line. Maybe some of this has to do with countries and how they view things, but those who don't feel like they should be heard just make me want to fight harder for those who do.

By Brad on Thursday, June 27, 2024 - 01:46

It could totally be a country thing, or perhaps it's just that I'm a guy and don't care much, if at all, for fashion.

I honestly can't think of anything else to suggest, apart from writing to the devs when this stuff happens.

Or better still, actually going to the shop and asking for assistance; this might be one of those cases where going online isn't actually the best thing to do.

By Gokul on Thursday, June 27, 2024 - 01:46

I guess there's no point talking about trying out AI to somebody who cannot trust their friends, much less salesmen. When people approach stuff with such attitudes, pointing out that today you can at least get a description of clothes or whatever, while some six months ago that was totally not the case, doesn't make a difference. As much as I believe that stuff should be more accessible, and that more stuff should be accessible, I'm also grateful for whatever I already have, and am hopeful that there are incredible things waiting around the corner.