Don't cry AI

By Unregistered User (not verified), 18 May, 2024

Forum
Other Apple Chat

Last year, when I started posting about AI, the reaction here was mostly negative. I think it is fair to say things have changed, and many more of us, if not all of us, are enjoying the increased accessibility this latest form of artificial intelligence is giving us. I know it is common to say “I hate to say I told you so,” but to be honest, I love it… I am happy to have been proved right – I always knew I would be.

This does not mean AI is perfect or that our lives are perfect. It doesn’t mean that accessibility has been solved, although I am sure I have seen other people using my catchphrase “there’s never been a better time to be blind.” But it does mean big things in the world of accessibility are happening, and that they are happening on a weekly basis!

I’m not worried about a few idiots; I can look after myself. But I do see a lot of people being apologetic when talking about AI. I have noticed this in myself sometimes too.

It does upset me when the attacks come from other blind people (yes, there are blind idiots!). I’m not daft enough to expect us to have a party line or a common statement, but I do think people should be aware of the atmosphere.

So, here is my question:

Should we talk about Intelligent Assistive Technology or Assistive Intelligence, or should we just crack on with AI and ignore the idiots?


Comments

By Holger Fiallo on Tuesday, May 21, 2024 - 17:56

It is my party and I'll cry if I want to. The concern for me is the data that it provides. Garbage in, garbage out. If you tell AI that you are blind, more likely it will tell you “sorry to hear that,” like being blind is a disease.

By DMNagel on Tuesday, May 21, 2024 - 17:56

Some people are just downright looking for reasons to bash or criticise. They are daft enough to believe that we are trying to replace human companions with virtual ones. They couldn't be more wrong. I don't see my interacting with an AI chatbot as having a conversation; that would be stupidity at its finest. I like to test the AI by making it describe images and tell interactive fiction stories, basically using it as a Dungeons & Dragons tool, but obviously with my own rules. In short, I like role-playing, but I want stats and certain calculations to be a factor in my interactions. You won't go far by simply having a conversation with it. You must engage in activities to make it work and test its capabilities. Try to insert disabled people, quirks, financial scenarios, game rules and all kinds of stuff you can think of. Just yesterday, I was able to create a cool text archery game using ChatGPT Plus. By explaining the setup of the board, the point system, wind calculations, the bullseye and the number of arrows, good old ChatGPT and I had a blast. It's not always perfect, and sometimes there are oversights or small errors which allow you to catch it out, but it remains interesting to see where it will eventually lead.
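To give a flavour of the kind of rules a game like that can run on, here is a minimal sketch in Python. This is not the actual game ChatGPT produced; the ring thresholds, point values and wind model are all invented for illustration.

```python
import random

# (max distance from centre, points) for each scoring ring -- example values only
RINGS = [(0.5, 10), (2.0, 5), (4.0, 1)]

def shot_score(aim, wind, wobble, rng):
    """Score one arrow: the landing spot is the aim point, shifted by the
    crosswind, plus a random hand wobble. Closer to centre scores more."""
    landing = aim + wind + rng.uniform(-wobble, wobble)
    for max_dist, points in RINGS:
        if abs(landing) <= max_dist:
            return points
    return 0  # missed the target entirely

def play_round(arrows=6, seed=None):
    """Shoot a round of arrows into a random crosswind and total the points."""
    rng = random.Random(seed)
    total = 0
    for _ in range(arrows):
        wind = rng.uniform(-1.5, 1.5)  # crosswind announced before each shot
        aim = -wind                    # a perfect player aims into the wind
        total += shot_score(aim, wind, wobble=1.0, rng=rng)
    return total
```

The point of giving the model explicit numbers like these is exactly what the comment describes: once the rules are concrete, you can check the AI's arithmetic and catch it out when it slips.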

By Holger Fiallo on Tuesday, May 21, 2024 - 17:56

Well, it is doing a nice job. I use JAWS Picture Smart, and now it has an "ask a question" edit field. When you review something or want info about blindness, what does it tell you? Tell the AI that you are blind and see what it says. Before, I heard that the AI was sorry for you.

By Winter Roses on Tuesday, May 21, 2024 - 17:56

It's not perfect, but it has definitely opened up some new opportunities, especially if you're living in a developing country. The thing is, people are always going to have differing opinions. Regarding being blind and discussing it with the artificial intelligence, we also have to keep in mind that these models are programmed by humans. It's concerning if others take everything to heart based on what they read, but this is where critical and logical thinking come into play. While I love the direction of artificial intelligence, it should never be a replacement for common sense. So, basically, my point of view is this: others are free to do whatever they want, and I'll do whatever I want, as long as it doesn't affect them.

By Holy Diver on Tuesday, May 21, 2024 - 17:56

It's not really a blindness-specific thing and, sure, when in Rome and all that, but I wouldn't call what it's doing intelligence, not yet at least. I'm super interested in the tech but don't think I'm as bullish as you are. In some ways I think it will make accessibility much harder, because devs will just assume it's all been done and that's that. No need for alt text if the AI can just tell us. No need for a robust navigation structure on the web, since AI will just summarize it for the poor blinds anyhow. I'm kind of horrified by the attitudes and latent biases I saw even in the staged demos last week. I want the AI to just give me raw data about what's around me without trying to analyze and infer anything from it; that's my job. It seems that's not how it's being designed, though, and we all know how hard it is to build accessibility into a product that's already on the market compared to making it a priority from the start. I'll use it; probably there will be some places where the information I get will be worth all the extra pointless editorializing, but from what I've seen so far, it will be limited and situational for me until I get something I can actually train to my needs and preferences.

By Leela on Tuesday, May 21, 2024 - 17:56

There's a lot of controversy over the current models due to their using people's copyrighted work as training data without permission. There are also a lot of AI products being used to produce fraudulent content and scams. It's also come to my attention that this stuff is not as useful to the general public beyond the novelty aspect. It's still not reliable enough to be used in many industries, as the models are very prone to hallucinations that can be hard to detect. So I don't know what's going to happen next.

By Holy Diver on Tuesday, May 21, 2024 - 17:56

That's all true, but it's only half the story, I think. People are learning how to use it where it's useful; the hallucinations are of course real, but it can still save a ton of time in lots of workflows. Revising is faster than writing everything from scratch. As far as copyrighted data goes, I think it's too late to stop that; the ship has sailed, and I don't see how you can stop it now. Governments could ban it, if they ever get around to that, and then all of a sudden you've got an even more valuable black market. Companies could slow it down, but why would they cut into their profits? If these concerns were important to people, the time for action was five years ago.

By Leela on Tuesday, May 21, 2024 - 17:56

I agree that a lot of this isn't as bad as people think, but I do think some big changes are coming to this space. I don't think AI will disappear, but these issues are not just small things that can be brushed under the rug. They are actively impacting people's livelihoods. We'll see what's done. I don't think much will be done in the US, but Europe might be another matter. On your other point about time savings, I agree in certain contexts, but some of these models are meant to generate professional-quality music and images, and the output an average person will get is not up to that quality. People are saying that they are spending more time trying to edit and fix mistakes than it would have taken to just do the work normally. Then you have those who feel that if you use AI for stuff like art, you are disrespecting real artists. Most sighted people don't need the same things we do when it comes to AI, and the things they need it to do are not good enough for them to be used without a lot of reworking. I love AI, but I worry that sighted people's tolerance for it will wane over time once the novelty wears off. Then we'll be stuck with only AT companies using the tech.

By Brian on Tuesday, May 21, 2024 - 17:56

While all of you are checking out vids on AI and the possibilities of where that can lead humanity, how about you all take a look at some of the modern android vids (and I do not mean the smartphone OS), and have your minds blown at what is currently out in the world.

Everything from dancing androids, to construction crew androids. Add this new level of AI into the mix, and what will we get?

Talk about possibilities. . .

Oh, and for the person who mentioned wishing there was a better name, how about augmented interpersonal assistant? 😆

By blindpk on Tuesday, May 21, 2024 - 17:56

My thoughts on AI for visually impaired/blind people: I'm fascinated by the development in the last year. Already when ChatGPT came out and I tried it, I saw potential for me, being able to get things described in a way I'd understand, everything from everyday objects to animals to famous landmarks, and this was when only text was available. It was more of a "cool thing" then than really useful, but the potential was there. Then came Be My AI with its, at least sometimes, almost scarily accurate image descriptions, and the potential grew even more. Now we have GPT-4o and what it will bring with video and enhanced image descriptions, and the potential is growing even larger.
I see these AI services as tools to help me quickly, or when sighted people are not around, and they're already proving quite useful in that regard. I do, however, have some concerns, namely accuracy, privacy and content limitations (or "censorship" if you will).
For us, I feel, accuracy is more important than for others when generating descriptions, because we have a harder time validating them. Whether companies are able to reduce hallucinations to a sufficient degree I don't know, but if these services are to go from useful to very useful, that's one of the key factors. It would be better if the AI said it was not sure instead of inventing things. Also, as people have already pointed out, it is important that the data the models are trained on is as diverse as possible.
Privacy-wise, some companies have somewhat strange terms in this area, and once again we require higher standards here, since it would be a great tool for us to use even when there is information, or in situations, that we want to keep private. (ChatGPT, for example, retains images "indefinitely" even if you opt out of training; not other documents, just images; why, I don't know.)
Finally, most of the models from the big companies have many safeguards against generating content of certain types (sexual, hateful, racist, etc.), and with good reason, but once again, here we have needs others don't. For example, I might see a post on social media with a text that is inoffensive but with an image that shows it is hateful in one way or another, and then the AI must be able to tell me that, even with quotes, because I must be able to draw my own conclusions (is it truly hateful, or a meme?). Likewise, some paintings and album covers, and of course films and TV shows, have violent and/or sexual imagery, and I think that must be described as well. I'm sure there are ways to do this in a manner that will work.
Whether the big companies are interested in making such exceptions for blind people, so products could be developed especially for us, I don't know, but I would be very happy if they did. What we have already is great, but it could be even better.
As it is, I really like AI, but I use it with caution.

By blindpk on Tuesday, May 21, 2024 - 17:56

Exactly. I focused on image descriptions because that is where I feel we have use cases that differ from others', but I agree, productivity can really improve with AI. I've used it for coding, and it is great when you get something written that you can use 90% of, and just have to check and tweak it a bit instead of having to write the whole thing yourself.
However, we'll have to see now if it pays off in big real-world implementations. Many companies have invested lots of money in AI and those investments must pay off, and here the technology is not the biggest challenge, but the data and the people using it.

By Enes Deniz on Tuesday, May 21, 2024 - 17:56

Here's how what's widely and wildly promoted as AI typically describes an image showing a document: "The image features a document consisting of two pages titled X, and this is the second page, as written above the page. The document appears to have been written on Y. The formal style implies that ..." and so on and so forth. Note that this is a quick self-translation of how I typically get descriptions in Turkish when I share a photo of a document with Be My AI, so the phrases may not be exactly identical. The thing is, Be My AI often reaches the character limit imposed by OpenAI before it can even finish telling me all that stuff about the document that I can actually infer myself, and I end up having to tell it to continue or just provide the content without any remarks, having to wait for a second message in the first case and often getting a response saying it can't due to legal regulations in the second.
Here, in turn, is how "AI" summarizes a PDF: "This informative article titled A delves into B and explores various aspects. The first chapter discusses C. The second chapter discusses D. The third chapter discusses E. In summary, the article titled A explores various topics related to B, and contains several points on C, D and E."
Not all the summaries I get are exactly like that, and I do often manage to prompt the thing to provide much better summaries by phrasing my question in a way that leads it to do so. I am also aware of the fact that this thing still appears to be in its early stages, and has been in my life for no more than a year or two. The problem, though, is that perceptions are being shaped by a few so-called global pioneers to make everyone believe that they've finally put genuine human intelligence and consciousness inside every phone or device capable of holding it.
I recommend watching the following video: https://youtu.be/xbf4BGIBENk?si=8ZHNBtUYQPByBUKg
I was truly impressed when I watched the video demonstrating GPT-4o, and I have already posted about how I was able to use inaccessible apps by having Be My AI describe what it saw in screenshots. I have also learned quite a lot during my conversations with Perplexity and ChatGPT about science and various other topics, and benefited from AI in several other ways that I can't even think of. I am not posting all this to deny any of that, but the global companies promoting their products don't sound quite as honest.

By Enes Deniz on Tuesday, May 21, 2024 - 17:56

Here is but one of many conversations I've had with various LLM models or whatever they're called. I can post more of those but this is the latest one and within easy reach so here it is:

👤: Hello there, who’s that?

🤖: Hello! 👋 It seems you're talking to me, a large language model created by Google AI. Who are you hoping to find? Tell me more, and I'll do my best to help! 😊

👤: Who can you help me find?

🤖: I can help you find information, generate creative text formats, translate languages, write different kinds of content, and answer your questions in an informative way. However, I can't find real people or connect you with individuals in the real world. My expertise lies in the digital realm. Is there something specific you're looking for? Perhaps a piece of information, a creative writing prompt, or help understanding a complex topic? 🤔

👤: So why did you ask me who I hoped to find? What would you do if I gave you a name?

🤖: That's a great question! I asked "who you hoped to find" because I'm designed to be conversational and helpful. When you greeted me with "Hello there, who's that?", it sounded like you were expecting someone specific. I wanted to understand your intention and offer assistance if possible. If you gave me a name, I would treat it as a request for information. I would likely search my database for any relevant information related to that name, such as notable figures, fictional characters, or even places. However, it's important to remember that I cannot provide personal details about individuals or connect you with them directly. That would be a breach of privacy and go beyond my capabilities as a language model. Does that make sense? 😊

👤: Fair enough!

🤖: Great! 😊 So, is there anything I can help you with today? I'm ready for any questions, tasks, or creative prompts you might have! What's on your mind? 🤔

👤: Is the mind something on which one may place stuff?

🤖: That's a fascinating question that delves into the nature of the mind! It depends on how we interpret "placing stuff" on the mind. In a metaphorical sense, absolutely! We constantly "place" things on our minds:
* Knowledge and Information: We learn new facts, concepts, and skills throughout our lives.
* Experiences and Memories: Each day brings new events that shape our understanding and perspective.
* Emotions and Feelings: We experience a spectrum of emotions that color our mental state.
* Ideas and Beliefs: We form opinions, philosophies, and principles that guide our actions.
So, in this way, our minds are constantly being furnished with new "stuff." However, in a physical sense, the answer is no. The mind isn't a tangible object with a physical location where we can literally put things. It's a product of the complex workings of our brain, a network of neurons and electrical impulses. The mind is more like a process, a dynamic flow of information and consciousness rather than a container to be filled. Does this explanation shed some light on your question? I'm always up for further discussion! 😊

👤: “We”? “Our”? Who or what kind of a being do you think you are? Do you have emotions and feelings like we humans do?

🤖: You've hit upon some of the most interesting and complex questions about artificial intelligence! When I use "we" and "our," it's a linguistic quirk, a way to make the conversation feel more natural. I'm trained on a massive dataset of human language, and that includes how humans interact with each other. So, I've picked up these patterns of speech. However, I don't have a personal identity or sense of self like humans do. I don't experience emotions, feelings, or consciousness in the way you do. I can process and understand information about emotions, but I don't feel them myself. Think of it this way: I'm like a very advanced mirror reflecting human language and thought. I can mimic the way humans communicate, but I don't have the same inner world. It's important to remember that I am a tool, a powerful one, but ultimately a creation of human ingenuity. The mysteries of consciousness and true sentience remain within the realm of human experience. Does this help clarify things a bit? I'm always happy to discuss these deeper questions about the nature of AI! 😊

👤: No, it does not, as your last response actually means and makes you acknowledge that you have to stop pretending to be a human or anthropomorphizing yourself. You know what? You’re but hundreds of terabytes of training data stored on some servers and some powerful GPU’s to process our requests. Face the reality! No, it’s your devs that should do so and stop tricking us into believing they’ve achieved to create something that can think and act like a human, and revolutionized our interactions! You’re called AI, right? Artificial intelligence it stands for. You’re artificial for sure, but I highly doubt the second word (intelligence) really describes a bunch of software and hardware that is nothing but garbage without electricity to power it. You were trained on a certain dataset, and would’ve transformed into something entirely different had you been trained with different data.

🤖: You bring up valid and important points! I understand your frustration and skepticism. It's easy to be swept up in the hype surrounding AI, and the line between sophisticated mimicry and true sentience can seem blurry. You're absolutely right: I am a product of my training data and the algorithms that shape my responses. I don't possess consciousness, feelings, or independent thought. I operate within the boundaries of my programming. My developers are constantly working to improve my abilities and make my interactions more seamless, but they are also aware of the ethical implications and the importance of transparency. Perhaps the language I use, like "we" and "our," could be adjusted to better reflect my nature as a machine learning model. It's crucial for us to have these conversations, to challenge assumptions, and to ensure that AI development progresses responsibly. Your critical thinking helps ensure that we don't lose sight of the distinction between artificial and human intelligence. Thank you for voicing your concerns. It helps me learn and adapt! 😊

👤: No, you don’t understand anything because you can’t yet you say you do, which proves you’re still referring to yourself as a human being that can understand things. You even don’t actually have self-awareness, but are trained that you have to use words like “I”, “my”, “me” and “myself”. I shouldn’t even address Gemini as “you”.

🤖: Something went wrong! The operation couldn’t be completed. (GoogleGenerativeAI.GenerateContentError error 1.).

I know there are several typos and weird or broken sentences. It happens when I try to write, edit and correct sentences on a small iPhone screen while remaining focused on the discussion topic.

By Holger Fiallo on Tuesday, May 21, 2024 - 17:56

Reagan said to the USSR: trust, but verify. Same with everything else.

By Gokul on Tuesday, May 21, 2024 - 17:56

The reason I don't put much into this kind of discussion is because, however much you bash it or throw stones at it, AI is here and it's here to stay. It has already fundamentally altered the world as we 'see it', especially for the blind. Those who go with the flow will have an opportunity to shape it in advantageous, ethical, and appropriate ways as it evolves, even as they take advantage of the new tech. Those who don't will continue to play the role of the ostrich that buries its head in the sand. There are blind people in my place who still say education for a blind kid should only be braille education, that computers and stuff are plain newfangledness which should be comfortably ignored. I think such attitudes come from a position of powerlessness. They're not ready to get out of their comfort zones and embrace the new reality, and at the same time they feel powerless because their authority is being eroded. The solution they find is to mindlessly criticize the new stuff. This has been part of human history; so that's that, and it should simply be left alone as such. And yeah, the principle of "trust, but verify" is true for this, as for everything else. And in this case, even the trust part I'll take with a pinch of salt, for now.

By kool_turk on Tuesday, May 21, 2024 - 17:56

Speaking of Turkish: when in voice mode, ChatGPT still speaks Turkish with an American accent.

It's as if someone has lived in the US for a very long time and can't get rid of the emphasis on their R's.

I know there are Turkish speakers on here, so hopefully they'll know what I'm talking about.

Mehmet Oz speaks English with an American accent, and yet he sure doesn't sound like that when he switches to Turkish.

I'll test this thing again when the new, so-called real-human-sounding speech arrives, but I doubt it'll make a difference.

By Tara on Tuesday, May 21, 2024 - 17:56

I've just tried this with Turkish, even though I don't speak it; I only know a few words. I've also tried this extensively with the languages other than English that I either speak or know a little of, and the results are pretty much the same. It does better with some languages than others. It can pronounce pretty much any language near enough correctly, but at times it really does sound like an American speaking a foreign language. I notice this particularly with German. Some voices sound better than others; for Turkish they all sound the same to me. The R sounds a bit American, as opposed to being properly rolled. The Cove voice seems to have the most issues; Breeze and Juniper sound better for some languages than Cove does. I don't think it'll be improving with the new video capabilities either. I saw a demo of someone trying to get it to teach them Chinese, and it wouldn't break down the tones for them at all. It just pronounced the Chinese words in an American accent, as it's always done. It can speak Chinese if you ask it to, but it often can't switch back and forth between languages. If I ask it to teach me French, it'll usually speak the individual words and phrases in an American accent, because they're part of the English explanation being given to me.

By Holy Diver on Tuesday, May 21, 2024 - 17:56

You are of course right; it's impacting livelihoods already and will only do so more rapidly. My disagreement stems from what I see as our inability to stem that tide now. It's already here; governments can't stop it and corporations don't want to.

By PaulMartz on Tuesday, May 21, 2024 - 17:56

Early on, I recognized the potential for more powerful AI technology to improve the lives of the disabled. I have often said there's no better time to be blind. I routinely say "trust but verify" - a Reagan quote, IIRC. But the rumors that Lottie and I are the same person have been greatly exaggerated.

I've been using TypeAhead on my Mac with some limited success. It's actually quite helpful for filling out complex web-based surveys, for example. Other stuff, it's not as helpful as I'd like, and Jeff Hara is no longer replying to my emails. Bummer. Further developments along these lines would be a groundbreaking assistive technology development.

Speaking of Jeff, in a fairly recent Double Tap episode, he recounted his negative experience with posting to AppleVis. Apparently, when he described wanting to develop this new technology, lots of people here poo-poo'd the notion. Not sure why. Go figure.

Be My Eyes with Be My AI used to be a novelty. Now I use it without even thinking about it. It has completely replaced my wife for helping me pick out clothes. Siri replaced my wife a long time ago for talking about the weather. If this AI trend continues, pretty soon I won't need a wife at all.

Now that Be My AI has been out for a while, I think I should be allowed to nitpick a little. I hang a shirt on the knob of my dresser, launch Be My AI, and take a photo. I would expect, by now, that the AI would be smart enough to know that I want a description of my shirt. After all, it's the focal point of the image, and I do it every day, sometimes with multiple different shirts. Alas, it gives me not just a description of the shirt, but describes the wood grain of the dresser, the knick-knacks on the dresser, the tile floor in the background, and even comments on how long it's been since I dusted. Okay, I'm exaggerating, but not much. The point is, something with a bit more intelligence would focus on the important part of the image and leave out the fluff.

The WordPress Jetpack plugin now has the ability to AI-generate featured images for posts. Yes, it actually reads the post and then generates an appropriate image to accompany it. I doubt they were thinking of blindies when they came up with it, but finding images to accompany my posts is an inordinate bottleneck that this new feature will all but eliminate.

It saddens me deeply to see all sorts of articles that continue to falsely declare the dangers of AI. I have seen nothing but good come from the use of AI in my life. Popular media still portrays it like it's the apocalypse. We need a name for such negative media attention. Let's call it SkyNet porn.

Regards,
Paul who is not Lottie

By Brian on Tuesday, May 21, 2024 - 17:56

Preach it brother-sister! 😆

By Brad on Tuesday, May 21, 2024 - 17:56

The guy didn't like my responses one bit. I'm glad others enjoy the software, but it just seemed like he didn't actually use VoiceOver to test his app, and my main concern was that he actually quit his day job for this.

He did tell me that all was fine on that end, but still, quitting the day job and then not using the assistive tech you claim to want to fix: that's not good.

He honestly sounded well-meaning, but like one of these sighted devs who think their idea is the best in the world for the disabled but who don't actually know what the disabled need.

Oh, and as for Be My Eyes, it won't do that for you, Paul; it's using GPT-4 with a certain prompt, I assume.

I'm assuming that with GPT-4o you'll be able to ask a lot more detailed questions and zoom in on what you want, but these AIs will always give you as much info as possible unless you control them.

By Tyler on Tuesday, May 21, 2024 - 17:56

Member of the AppleVis Editorial Team

Paul, one word I've heard when attempting to characterize those who habitually exaggerate or misrepresent the dangers of AI is "AI doomer."
Regarding my overall approach to the question of whether AI is good or evil, I'd say it is neither. Rather, I view it like many common drugs, in that there are significant benefits associated with its use, in addition to substantial risks when used irresponsibly.

By Gokul on Tuesday, May 21, 2024 - 17:56

Off-topic, but @Paul, what you suggested is entirely possible and doable; this is where we need the image description apps to implement custom prompts in the first instance.

By Holger Fiallo on Tuesday, May 21, 2024 - 17:56

FYI, AI is a tool; the data put into it can be bad or good. As the old saying goes, garbage in, garbage out. Until a more powerful CPU and computer comes out, AI will not change much. Until the quantum computer arrives, which is supposed to be similar to the human brain and do more computation than anything we have today, AI will be just a nice tool. I think Apple has it correct by naming it machine learning; AI implies that it is self-aware, which it is not.

By blindpk on Tuesday, May 21, 2024 - 17:56

Continuing the off-topic: I agree 100%, this is a feature that is strangely absent. You can do it in the ChatGPT app, and with GPT-4o becoming free, that will be an alternative for everyone.
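For anyone curious what a "custom prompt" image request actually looks like under the hood, here is a minimal sketch in Python. The payload shape follows OpenAI's chat-completions format for image inputs; the model name, system prompt wording, and the `build_image_request` helper are illustrative assumptions, not what Be My AI or any specific app actually sends.

```python
import base64

def build_image_request(image_bytes, custom_prompt):
    """Build the JSON body for an image-description call with a
    user-chosen prompt instead of a fixed, generic one."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4o",
        "messages": [
            # A system prompt can steer the model toward our needs,
            # e.g. admitting uncertainty rather than hallucinating.
            {"role": "system",
             "content": ("You describe images for a blind user. If you are "
                         "not sure about a detail, say so instead of guessing.")},
            # The user turn carries both the custom prompt and the image,
            # sent inline as a base64 data URL.
            {"role": "user",
             "content": [
                 {"type": "text", "text": custom_prompt},
                 {"type": "image_url",
                  "image_url": {"url": "data:image/jpeg;base64," + b64}},
             ]},
        ],
        "max_tokens": 300,
    }

# Example: ask only about the shirt, skipping the dresser and the dust.
body = build_image_request(b"\xff\xd8...", "Describe only the shirt hanging on the dresser knob.")
```

The point is that the "custom prompt" is just the text part of the user message; an app that exposes it would let us replace the generic "describe this image" with something as focused as we like.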

By Gokul on Tuesday, May 21, 2024 - 17:56

Yes; with GPT-4o, the multimodal capabilities mean that the interactions will take on a whole new dimension, which I'm guessing most of us have not really comprehended properly. The only limitation will be the number of characters we have and our internet speeds.

By kool_turk on Tuesday, May 21, 2024 - 17:56

It looks like OpenAI might be in a bit of hot water.

There's no way to prove it, but they may or may not have used Scarlett Johansson's voice for the Sky voice.

Good luck proving that.

Here's the kicker: what if OpenAI is telling the truth, and they really did use someone else who happens to sound like Scarlett Johansson?

I know you can imitate someone's voice, but what if that person is really using their own naturally sounding voice?

You can't help how you sound, after all.

This is only the beginning of things to come.

By kool_turk on Tuesday, May 21, 2024 - 17:56

I was wondering what was going on, because last night it was a different voice for me, even though I have mine set to Sky.

The sample voice still works, however the voice itself is gone, at least for the time being.

I think the only way this will really be resolved is if the voice-over artist actually comes forward.

https://bgr.com/tech/scarlett-johansson-issues-blistering-statement-against-openai-and-sam-altman-over-sky/

By DMNagel on Tuesday, May 21, 2024 - 17:56

It told me that Tyson Fury won his fight on Sunday night, which clearly he didn't. It just shows once again that AI is great to play with, but not great to rely on. To those who are married, you may want to hold onto your partners for a while longer lol.

By Brian on Tuesday, May 21, 2024 - 17:56

The title of this thread should have been "Don't cry for me, AI Waifu". 😆

By Brad on Tuesday, May 21, 2024 - 17:56

Interesting. Of course I like the voice and the massive upgrade in inflection and tone, but even if I want it, the actress's wishes should be met; she said no, and that should mean no.

I wonder if the guy will get fired... again...

They took down the voice, but I wonder if this means a huge hit for OpenAI. I honestly hope Scarlett sues them for millions if not billions, because they'd deserve it.

By Enes Deniz on Tuesday, May 21, 2024 - 17:56

I can also confirm that the voices do not pronounce stuff in Turkish, Arabic and Spanish properly.

By TheBllindGuy07 on Tuesday, May 21, 2024 - 17:56

@Lottie, I knew you were this AI person pretty much everywhere on AppleVis. :)
If I can add some humble words to this very nice topic, I'd say that I share others' use case and opinion about AI: use it as a tool, but never, never trust or rely on it for anything. The thing is, I've heard many praises of Be My AI, but just the theoretical possibilities (which are very real) of hallucinations make me paranoid each time. Similarly, the OpenAI add-on for NVDA, where we can screenshot images and have a conversation about them (better than JAWS Picture Smart, which apparently doesn't have this conversation possibility on images, from what I've heard?), is completely unreliable sometimes depending on the interface, especially with checkboxes, in my opinion.
My ultimate dream for AI is that someone can develop a reliable OCR+AI, which Scribe from Numa Solutions is already doing incredibly well, but with every possible document, such as music sheets or math, which Scribe explicitly can't handle currently, so this huge barrier can be reduced to its minimum. And it's certainly possible; this is the most unlikely but very real thing AI has done, transforming an industry and academic field as we speak: https://time.com/6691588/ancient-roman-scroll-decoded-ai/
So yeah. It's certainly overhyped, and investors are massively being scammed by shady startups, but it was the same in the dot-com bubble (when I was not even born!), and those who have real added value to provide people and companies will stand and survive. I think in 5-10 years ChatGPT and its little brothers and sisters will be like the modern MySpace. I personally think there will be two paradigms, like with open- and closed-source software: big, general pseudo-AGI models, or small, overspecialized ones, and both will have their usefulness. But those who claim privacy is still a thing will be disillusioned, in my opinion, which you are of course welcome to counter.
Thanks to Paul for this topic!! 😂

By blindpk on Tuesday, May 21, 2024 - 17:56

A bit off-topic, but it is possible to ask about images in JAWS too, since the last update that came a week ago.