Age verification

By Lee, 3 August, 2025

Forum
Accessibility Advocacy

Hi Guys,

As you know, in the UK we now have to verify our age for certain apps and web pages. Going forward this will supposedly include Spotify, Wikipedia, and other non-adult websites and apps. So, after looking into how we can do this, I discovered that a lot of apps like Spotify will use an app called Yoti. I installed it, and you have to go through a three-part process. Parts 1 and 2 went perfectly. Part 3 is where you have to take a picture, and as many of us know, getting your face into a small frame is almost impossible. The app itself seemed to realise I was using VoiceOver, because UK Daniel suddenly popped up saying to double tap anywhere on the screen for guidance. Great, I thought. No such luck; after 10 minutes I gave up. You only get four attempts before you have to start part 3 again, and roughly 95% of the time Daniel just says "face not on camera" or words to that effect. Occasionally I got "face too far right". So, has anyone tried to register with an app like this, or even this one, and if so, how in the world did you do it? I'm totally stumped, and if this does come in for more apps, then we need a way of doing it.

MODS: I put this here because it's to do with getting the app to work. If you think of a better place, feel free to move it.

Comments

By Ash Rein on Sunday, August 3, 2025 - 11:48

I don’t know that there’s any other way to resolve this than to reach out to government and make a complaint. AI might figure this out. But for now, there’s really no way around it unless you get somebody to help you, which essentially obliterates your privacy.

By Ash Rein on Sunday, August 3, 2025 - 11:50

A lot of people on this website jump to lawsuits. But in this case, we might actually have a worthwhile claim. We might have to go as far as getting a lawyer and suing the pants off the government(s). This might even eventually fall into class action territory.

By Holger Fiallo on Sunday, August 3, 2025 - 12:26

This is going to be an interesting topic. Some would want you to give a picture of your licence or, in the US, a state ID. I'm not doing that. My info would be all over.

By mr grieves on Sunday, August 3, 2025 - 12:41

I really don't like the way this is going. If you do manage to complete this successfully somehow, does it then act as a key for everywhere or do you need to authenticate for each service?

For something like Spotify, I subscribe anyway so they have my card information. Does this count or do I still need to do this? I think one of the ways you can authenticate is by holding up a credit card?

This feels to me like the cookie warnings all over again but 10 times worse. What happens to my face once I've scanned it? Does it get stored on someone's server there? Is it handed over to ChatGPT for training?

I get why this is coming into force, and there are good intentions behind it I'm sure, but it feels like it is going to be hugely detrimental for most people.

Assuming it isn't going to be overturned, what I hope is that Apple can build something that works across my devices and once I've proved myself to Apple they can just pass an assurance onto other companies. I trust Apple to handle my data much more than I would some company I've never heard of.

By mr grieves on Sunday, August 3, 2025 - 13:31

I was reading a little more about this in relation to Spotify. It sounds like they may use algorithms to decide if you need age verification at all, and then it's apparently only to access mature content. But if you fail the check, your account can be deactivated, and even deleted if you don't take action soon enough.

I'm still unsure as to where the line is. Is it anything with rude enough words that it can't be played on the radio? Or does it have to be more explicit than that?

I was certainly able to listen to some potty-mouthed lyrics from Mclusky without being hassled. So maybe that kind of thing is OK. Or maybe it's not rolled out everywhere yet. Or perhaps Premium subscribers get away without this.

I'd certainly be interested to hear from anyone that has had this forced upon them.

By PaulMartz on Sunday, August 3, 2025 - 13:34

In order to publish a book through Amazon Kindle Direct Publishing, you must verify your identity, and part of that process involves photographing your ID. I was unable to get through this process independently. Eventually my sighted spouse was able to help me through it, but it was a struggle even for her.

I contacted Amazon KDP support, told them I was blind, requested accessible accommodations, and was told sorry, this is the only way. I requested the issue be escalated, as it is a clear violation of the US ADA. Support thanked me and pledged to raise the issue internally. That was in May, and I haven't heard a peep since.

By Lee on Sunday, August 3, 2025 - 13:42

I think that different services use different methods, so you may have to do this a number of times. Even worse, I believe it isn't a case of doing it once and forgetting about it: every time you access the service, you have to verify. OK, once signed up this may not be so hard, but who knows. It is claimed that personal data, photos, etc. are not stored anywhere that can be hacked or accessed by third parties. I've heard that before, though, so who knows. Mind you, if we can't sign up without sighted help anyway, then we are being sidelined and, as per the norm with this stuff, we may have been forgotten. As I say, the Yoti app does suggest someone has at least mentioned accessibility, but I doubt anyone actually tried to sign up with VO turned on.

By mr grieves on Sunday, August 3, 2025 - 14:05

I don't see how this can work if you have to repeat it every time you use a single service.

That is going to make a lot of what we take for granted now a pretty miserable process. I'll need to get ready a few hours in advance if I want to watch something on my TV that might have a bit of coarse language on it.

It feels to me like this could result in a boom in online piracy as people try to find ways round it. VPN downloads are already going through the roof, although that won't last long, I'm sure.

This is making me very anxious. I guess some of this will come out in the wash, and it is early days.

By Holger Fiallo on Sunday, August 3, 2025 - 14:52

There is already a third-party app for this, but it got hacked and people still use it. I don't recall the name.

By Winter Roses on Sunday, August 3, 2025 - 16:08

I don’t understand where this whole ID verification process came from. Out of nowhere, companies are suddenly acting like verifying our identities is the new norm. What exactly are we doing here? You want my face now? You want my card? I’ve never used my real name or personal info online, and I rarely, if ever, use a credit card unless it’s for something absolutely necessary. If it’s a frivolous purchase, I use gift cards. Always have.

But apparently, this is all being rolled out under the excuse of “protecting kids from mature content.” Really? Come on. You and I both know kids today are way smarter than most adults give them credit for. Whatever you’re trying to hide from them, they’ll find it. They always do. What’s wild is that this shift happened practically overnight. YouTube, Spotify, Amazon: suddenly they all decided this was going to be the new model, like they all had a secret meeting while we were sleeping. Now you’re telling me I’m going to have to verify myself? I’m not even living in the U.S., so I’m not sure if it’s going to hit my region yet, but it’s starting to look like this is the way things are going to go from here on out.

Why? Why now? Why not 10 or 15 years ago? What do you think this policy is going to solve? Children have existed since the beginning of time. Which event triggered this? No warning; it came out of the blue. If the goal of this whole ID verification is to protect the babies, then what does that even mean? Should we ban fast food next? Kids can buy too much McDonald’s; too much fast food totally rots the brain. Maybe that should be regulated too. Or how about banning television altogether? After all, kids can watch content on TV that they’re not supposed to see. Sure, TVs come with restricted access settings, but so do online platforms. Should we also ban the radio? Someone could be listening to programs filled with government propaganda, or content not meant for their innocent little ears.

It feels more like they’re trying to figure out who is using which account. And once they have your name and personal information, it becomes harder for anyone to hide behind anonymity online. Maybe that’s what this is really about: surveillance under the mask of “safety.” Because right now, the logic doesn’t hold. That’s the excuse, but it’s a flimsy one at best.

Which brings up another question: if verification becomes mandatory, does that mean we’re all going to be forced to use our real names now? How exactly are they going to identify people using a screen name or a fake username? Nearly everyone I know online uses a nickname or a handle, especially on platforms like YouTube. So how is that even supposed to work? Supposedly, AI will be the one determining whether you’re an adult. Yeah, that inspires so much confidence. I saw the uproar on Twitter; people were furious, especially those who let their kids watch cartoons on their adult accounts. I don’t think it’s that deep. Would going incognito work? In this case, make a separate profile for your kids. That’s what YouTube Kids is for, right? Now, I get that not every service has a “kids version,” but come on, this isn’t going to protect anyone forever. And as someone who is blind, I’ve already had to give up far more privacy than I would like. It’s exhausting. Each platform requires a different verification method, and, according to them, your info is not stored anywhere. But we all know the government knows how much you eat, sleep, and shower long before even you know you’re going to do it, so it’s a moot point, I guess.

And don’t even get me started on that whole “we’re not going to monetize AI-generated content” policy. Do you honestly think it’s because they care about creativity or humanity? Please. It’s because they need humans to keep making original content. Why? So they can train their AI models. That’s what this is about. They want fresh, authentic, human-made data to feed back into ChatGPT, Gemini, and every other AI system. They don’t value your voice. They value your data. It’s not about protecting creativity; it’s about protecting their business model. Because AI can’t learn from content made by AI. It needs people, actual people, to keep the system growing. That’s why they’re trying to push people back into the content grind. So no, you’re not winning. You’re being harvested. I don’t know where all this is going, but one thing’s for sure: this isn’t a tech update. This is the beginning of a new world order. Guys, the end is near.

By Holger Fiallo on Sunday, August 3, 2025 - 16:25

Control disguised as concern.

By Winter Roses on Sunday, August 3, 2025 - 16:44

From my very limited understanding, this so-called protection policy doesn’t seem to be about shielding kids from explicit content. Based on what I’ve been reading, the way they determine whether you’re an adult or not has less to do with actual age verification and more to do with your content behavior. Let’s say you have a Spotify account or a YouTube profile, and you usually watch or listen to content that isn’t necessarily for kids—maybe animated videos or cartoons that are not intended for children. Then one day, you go to watch or listen to something explicit. That’s where the system steps in. The AI watches your activity, and based on patterns like “this account usually interacts with cartoon-style content,” it flags you as potentially underage. Then suddenly, you’re told to verify your age with ID.

But that logic makes no sense.

There’s a ton of content that falls into that gray area. Animation isn’t only for kids—adults watch it all the time. And what about adult artists who have a lot of young fans even though their content is not labeled explicit? Are their entire audiences going to be flagged? How exactly is this supposed to work? It seems backwards. They’re not protecting children—they’re monitoring your behavior. Watching what you consume. If it doesn’t line up with what they think an adult would normally watch or listen to, you get flagged. And if you don’t verify with government ID, your account could be deleted. That’s the new policy.

Some of these companies are getting too big for their boots. I don’t think this is about safety anymore. I think it’s about control. There’s obviously more going on behind the scenes. This feels less like a protection plan and more like the early stages of a new kind of surveillance state. A digital compliance system disguised as a child protection law. Where does it end? If you’re regulating what we do online, what about offline? Will we have to scan our ID to buy an album in a store? To hear a song on the radio? Are these policies going to bleed into real life too? Let’s be real: people will start using VPNs. Fake IDs will spike. It’ll create new problems while pretending to solve old ones. I’m not sure where this is going, but it’s starting to feel like a massive overreach. Maybe if enough of us stop using these platforms, or boycott the services enforcing this kind of monitoring, something will shift. Maybe new platforms will rise up that respect privacy again. But right now? I’m confused. I don’t know what this is turning into, and that scares me more than any so-called “explicit” song ever could.

By mr grieves on Sunday, August 3, 2025 - 16:46

One thing that confuses me is that you are only going to be asked by Spotify to do this if you try to access certain content. Yet at that point if you fail to complete the process your account may be deactivated and later deleted.

I don't understand why this is so heavy-handed. Sure, if I don't manage to verify, then I understand that the content I was trying to access will be blocked, but why should I be unable to access anything at all in that case?

If my Spotify account gets deleted, that will be the final straw. I am going to throw all my electronic equipment off the nearest bridge and go live in a cave.

By mr grieves on Sunday, August 3, 2025 - 17:00

Sorry if I'm being thick, but how exactly does this benefit a company like Spotify?

Before this law, all these companies like Spotify and YouTube have been constantly tracking our behaviour and storing who knows what data about us.

So what extra data are they collecting here? My worry is not that Spotify have my face or ID, more that some random other company does and I don't know who they are let alone trust them with a copy of my passport or face.

In Spotify's case, if they are going to end up deactivating and deleting accounts, not to mention the bad press this whole thing is going to get, then I can't see how this really benefits them. They will lose business from it, guaranteed.

I would imagine that at some point we will end up with some kind of virtual ID probably owned by the Government. So we prove ourselves to them, and then that's the only thing passed around. That makes more sense to me and feels less intrusive. However, we all know how good the Government is at keeping hold of sensitive data and not infringing on our civil liberties.

By Jonathan Candler on Sunday, August 3, 2025 - 18:33

I'm not in the UK, but if this reaches over to the US, which it's looking like it will: no, no, and more no! Nobody is getting my ID because AI claims that I'm not over 18. I'll be taking a stand to stop this, as many of us in the US will! I don't get why this even needs to be a thing in the first place! The internet is not responsible for children; parents should take full responsibility and lock down their kids' devices when needed! If online services and websites are gonna be doing this, and even one of them pulls it on me, I'll no longer use any of their stuff! I would say more of what I think, but I'm not about to get political, as much as this topic already is in the first place, lol.

By Brian on Sunday, August 3, 2025 - 18:33

Of course I have no concrete proof of this, but my thoughts are going like this:
• Use systems like this "Age Verification" to collect marketing data under the guise of "protecting children from illicit content".
• Sell data to shady marketing companies who will gladly pay top dollar for this information in order to turn around and spam you with marketing ads.
• Send out a ton of emails soliciting consumer goods and services in the form of marketing ads.
• Profit!

As someone who has a free Spotify account, I already get quite a bit of email adverts from Spotify about upcoming concerts, merch I absolutely, must, buy, etc. I imagine for those of you being forced into this new system, the adverts are going to double, at the very least.

AnyWho, just my 2¢.

By Winter Roses on Sunday, August 3, 2025 - 18:56

Privacy is important, sure—but let’s not pretend the government doesn’t already know everything about you, whether you gave them permission or not. These companies already know your information, and even if they don't, they will make something up once it fits their agenda. It's only because they choose to not act on it since it's not beneficial to them yet. It’s smart to protect yourself and your assets, absolutely, but beyond your face and your ID, what makes you think they don’t already have copies of everything about you stored somewhere? The truth is, we’ve already been sold into a form of modern slavery, and the scariest part is most people don’t even realize it. As someone who’s blind, I understand wanting more privacy than someone who can see. But the moment you go to the mall, the park, or any public space, cameras already have access to your face. You upload a photo, a video, or maybe you’re caught in the background of someone else’s post—you’ve been captured. Not to mention satellites. Good luck hiding from those. Companies say they care about consent, but really, it’s not a question of if you’ll comply with their rules—it’s a question of when compliance becomes mandatory. The investors and corporate heads don’t worry. They already have access to everything that matters: your time, your energy, your money, your attention, your data, and your decisions. Whether they know you or the version they think they’ve built from your digital trail, they don’t care. If you don’t give them what they want willingly, they’ll put you in a position where you don’t have a choice.

Sure, you could throw your phone into a lake, pack up your stuff, and run off to the middle of nowhere with a few alpacas—but let’s be honest: can you live without Spotify, Netflix, Uber Eats, Amazon Fresh, Instacart? If you’re not giving up your information today, they’ll corner you into doing it tomorrow. It’s only a matter of time before you’ll need to show ID in the form of a polygraph test to buy groceries, watch a movie, go swimming, or enter a club. Cancel all your subscriptions if you want, but they already know you won’t. And even if you do, they’ve still got millions who won’t. The people at the top already made their money. They don’t even want actual people anymore. Haven’t you noticed? Look around—more and more companies are pushing for artificial intelligence to take over. But if you’re replacing all your workers with AI, who exactly do you think is going to keep buying your products and services? Is money going to become a display item—a dusty dinosaur relic of the past that rich people collect and admire in glass cases, rather than actually spend?

We’re heading toward a high-tech barter system. Only now, the product being traded is human attention, human energy, human behavior. People themselves are the commodity. And for all our talk of leaving, disconnecting, and escaping—it’s all talk. These companies know that. They know most people will come crawling back. Even if they need us for their business, they need us in a way that we will never need them. And that’s the part that should scare you the most. Instead of moving forwards, the world is only moving backwards.

By Holger Fiallo on Sunday, August 3, 2025 - 19:19

If they want privacy protection, they can use the Face ID on our devices to confirm who we are, without anything leaving the device. Our contact cards have our info, age, and so on. If they want to confirm something, just scan our face or fingerprint and leave it at that. I am not going to upload any info about myself. I don't trust third-party apps, or Google, or whoever. This will not work, because many will complain about it. Long live America.

By Tara on Sunday, August 3, 2025 - 19:41

Hi,
If you need help taking a picture, set up Aira on your phone and do a free five-minute call. You can share your screen with the agent; they'll be able to see you and what's on your screen, and they can direct you. I did this to take a picture of my passport for the Revolut app. I haven't seen this age verification here for any apps or services yet. I'm on Spotify Premium, and I tried playing some explicit Limp Bizkit content the other day just to see if the age verification would come up, but nothing. I wasn't using a VPN, but you could always use one anyway. I use ExpressVPN on Windows and my iPhone sometimes. For those living outside the UK: we knew this was coming. The Online Safety Act has been on the cards for a long time. It's a long story and off topic.

By Holger Fiallo on Sunday, August 3, 2025 - 19:52

No. I would not share it. It has more than your photo.

By Joshua on Sunday, August 3, 2025 - 22:29

This shouldn't be a thing at all. It's the parents' job to see what their kids are doing online and to protect them; Screen Time and other parental controls exist for this. What the UK government is doing is about control. If this ever comes to Canada, I'm going to avoid it for as long as I can.

By Holger Fiallo on Monday, August 4, 2025 - 00:50

People are no longer responsible. They want others to do the job that they're supposed to do. Sad, but there it is. Curious what will happen to Google's incognito feature.

By Gokul on Monday, August 4, 2025 - 01:34

We already have this in one form or another. For example, one has to take a similarly impossible live selfie to get LinkedIn verified. I've never been able to crack that one. Sure, having to verify to use something like Spotify is going to affect more folks, and has other concerns associated with it, as discussed above... So yes, the problem is two-fold: accessibility and privacy.

By Winter Roses on Monday, August 4, 2025 - 02:43

I have a question, and maybe it sounds a little unintelligent—possibly even a bit stupid—but I’ve always believed that if you don’t ask questions, you never really learn. So here goes:

I understand that security and privacy are huge issues, especially when it comes to identity verification. But speaking from the perspective of someone who is blind, I figured companies working in tech, especially accessibility-aware ones, would consider this. My question is: why do we have to upload a live photo of our ID document? I mean, if I already have a clear picture of my ID saved on my phone—why can’t I upload that instead of having to take a new one through the app or website? Wouldn’t that image still contain all the same information?

Even more than that, why is uploading a full photo of the entire document required? Couldn’t I input the relevant number from my national ID, passport, or tax registration? Or if they need some image-based verification, couldn’t we upload a cropped version or only the part with the actual identifying number? I’m not entirely sure why an entire picture of the document is necessary for identity verification.

Then there’s the issue of selfies. Some services ask you to upload a selfie as proof of identity—but again, what if I already have a few saved selfies? Why can’t I choose from those instead of having to snap a new one every time? Is it really that risky to let users upload existing photos? Are people seriously out here using someone else’s pictures to sign up for services they’re not authorized to use? Which brings me to a related thought. What’s stopping someone—like a cheeky younger sibling or cousin—from picking up your phone, snapping a quick photo of your ID, and using that to sign up for a service behind your back? I know this might sound paranoid or sci-fi, but it’s not unrealistic. Blind users, or anyone really, could be vulnerable to that kind of misuse. Is there anything in place to prevent it?

And when it comes to technologies like Face ID—how do these systems verify that the person is who they say they are? Like, if someone registers an account with a fake screen name but verifies using a real face, is that enough? Is the legal name cross-referenced somewhere along the line? Or is that not even part of the process?

I remember hearing about a YouTube video ages ago (I think it was the Dolan twins or someone like that) where they tested whether Face ID could be tricked because they were identical twins. I never saw the full thing, but the idea stuck with me—if twins with nearly identical faces can confuse facial recognition, how secure is it really? Can it tell them apart by minor details like a mole or skin texture? And if someone altered their appearance—like with makeup or cosmetic changes—would that still fool the system?

And to stretch it a little: what about clones or doppelgängers? I know we’re getting into sci-fi territory now, but I’m genuinely curious—how do these systems work under the hood, and what are their limitations? From where I'm standing, this doesn't make any sense whatsoever.

By kool_turk on Monday, August 4, 2025 - 04:26

Hi! Just thought I'd chime in with an answer to the twin question.

I can't speak to the rest of your questions since I honestly don’t know, lol.

But regarding whether a twin can unlock your phone with Face ID — I can confirm that it’s possible on an iPhone. I have a twin, and he’s able to unlock my phone using Face ID.

That said, I *can’t* unlock his laptop with my face, so it looks like Microsoft is doing something a bit different.

In general, Face ID isn’t the most secure option. You’d probably be better off using a fingerprint instead. But really, every form of biometric ID has its own quirks and limitations.

By Brian on Monday, August 4, 2025 - 05:43

They can be exactly alike, down to the smallest detail, with the exception of fingerprints. No two fingerprints are exactly the same, even with identical twins.
Why Apple ever thought Face ID was more secure than Touch ID is beyond me...

@Winter Roses,

I may have an answer to your first question, the one about why you can't just use a saved photo and have to take a new one for verification purposes. I believe it has to do with the metadata embedded in the image when you use the verification software to take the picture: a fresh, in-app capture carries information (like when, and on what device, it was taken) that a saved or screenshotted photo wouldn't.
Don't quote me on this; it's just my belief and understanding, based on logic and things I've read around the Interwebs.
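For anyone curious what that embedded metadata looks like at the file level, here's a rough illustrative sketch, purely an assumption about how a check could work and not anything Yoti or Amazon actually runs. In a JPEG, capture details like the timestamp and device model live in an EXIF block inside an APP1 segment, and a verification service could at minimum check that such a block is present:

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF (APP1) segment.

    EXIF metadata (capture time, device model, etc.) is stored in an APP1
    segment (marker 0xFFE1) whose payload begins with the ASCII tag
    b"Exif\x00\x00". A photo saved without metadata, or a screenshot,
    typically lacks this segment.
    """
    # Every JPEG starts with the SOI (start of image) marker 0xFFD8.
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        return False
    i = 2
    # Walk the marker segments: each is 0xFF, a marker byte, then a
    # big-endian 2-byte length that covers the length field plus payload.
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not a valid segment boundary; stop scanning
        marker = jpeg_bytes[i + 1]
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length  # skip marker bytes plus the whole segment
    return False
```

Real services would of course go further (reading and validating the actual timestamp, or using signals the camera API provides directly), but the general idea is that a live capture produces metadata a reused image can't easily fake.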

By João Santos on Monday, August 4, 2025 - 06:34

I am personally in the “there are no dumb questions” camp. We are born ignorant, so having questions is pretty normal; and even if it weren't, I would still attribute more value to people who ask honest, unframed questions than to people who just make assumptions or ask loaded questions. Yours are totally unloaded, so in my opinion they are perfectly legitimate and provide a lot of value in a public debate.

As for your questions, the answer unfortunately is that the people in power tend to be quite ignorant, and they tend to surround themselves with advisors who are supposed to know better but are either almost as ignorant or are motivated by some kind of agenda. The bottom line is that you simply cannot trust any information coming from a user's device, because it can be easily manipulated; and since software is easy to replicate, there's really no way to prevent even people who lack the proper knowledge to work around the protections from using solutions created by those who possess that knowledge. Certain problems are simply not worth attempting to solve, because the only possible solutions end up creating more problems than the one they're supposed to address, and often aren't even that effective in addressing the original problem anyway. You might only be noticing this now, but in the tech field we've been trying to inform people about these things for many years, as many politicians tend to believe that somehow there's a way to create an encryption backdoor that cannot be leaked or misused.

This kind of totally disproportionate solution is, in my opinion, a huge red flag about the health of a democracy, because it shows a tendency towards taking sovereignty away from the people. Unfortunately most people are not sensitive to any of this, only waking up to reality long after the situation has passed a point of no return, at which point they no longer have the ability to fight back, because they let the legal instruments they could have used either get dismantled or turned into domestic surveillance that quickly squashes any dissident movement before it has a chance to gain momentum. This is especially concerning now because, with the help of AI, it's easy to automate the job of keeping tabs on everyone, so mass surveillance has never been easier; after a regime finishes its transition to full-blown authoritarianism in the modern era, the only chance to fight back might be if the people in power demonstrate an astronomical level of incompetence.

By Ash Rein on Monday, August 4, 2025 - 11:56

I hate the word should. It's a fantasy word. It doesn't exist in real life. Everybody on this website should see. We should all be very wealthy. We should all have great jobs. The world shouldn't have to deal with microplastics. It should be a nice day today. It's a terrible, lazy word, and it lets the people who use it alleviate responsibility from themselves. Literally everything we do in the world requires a license, except one thing. You have to have a license to cut hair, to do nails, to be a doctor, to drive a car. But anybody and everybody can have as many children as they want, and nobody even blinks. And the vast majority of people make terrible, terrible parents, even if they're good people.

I've said my piece in terms of what's needed. If this were something that affected the Deaf community, they would be in an uproar, and completely unified. This so-called community, however... it seems like all you guys can do is complain about what should be, as opposed to really considering what is, and what we can do together to fight it. I'm not even necessarily against the idea of verification, as long as it's accessible. But, repulsively, it's the same old same old here.

Out of all the comments, only one of you had any shred of awareness to even provide a possible solution. And that makes me so disgusted.

You did absolutely nothing when it came to the discriminatory policies put into place regarding guide dogs and airlines in the US and Canada. Let's see what you're going to do when it comes to this.

By Holger Fiallo on Monday, August 4, 2025 - 14:02

There you go. How about that? Privacy is the thing today. We'll see how this affects those who are blind.

By Tara on Monday, August 4, 2025 - 15:17

Hi,
If you absolutely must verify your age, and you don't want to use a VPN or whatever, then here in the UK, using Aira for a free five-minute call is the next best thing if you haven't got a sighted friend or family member around. I suggested Aira because this law affects the UK only; no other countries are having to do this. I won't be advocating for an accessible age verification system, because I don't believe age verification should be a thing to start with. I can't advocate for accessibility in a service I don't agree with. I can either use a VPN, vote for a certain political party who say they'll repeal the Online Safety Act if they get into power, or not use certain services at all. If people want this to be accessible, either write to your local MP, set up a social media campaign, or write to the RNIB Tech Talk podcast and they can highlight the issue.

By Igna Triay on Monday, August 4, 2025 - 16:22

Okay… and you think your rant changed anything?

You call out this community for “doing nothing” about the guide dog policy — so let’s start there. Did you? If not, then using that issue to shame others is pure hypocrisy. And even if you did act — good. But don’t assume everyone else could. Not every blind person uses a guide dog. And even among those who do, not everyone lives in the U.S. or Canada, where the issue took place. In fact, most people on AppleVis aren’t even from the U.S. or Canada. So if they didn’t have a dog, and didn’t live in those regions, what exactly were they supposed to do?

And before you even try the next move — saying, “Well, even if someone didn’t have a guide dog, they should’ve taken action out of principle” — that argument fails, too. People can’t always take meaningful action on issues they aren’t personally affected by, don’t fully understand, or literally don’t have access to. As an example, someone who’s used a cane their entire life, maybe has a phone and a few mobility tools, but has never used a guide dog — how do you expect them to be informed on this issue, much less to advocate effectively? Yes, research exists — but research only goes so far, especially when it comes to something as involved and high-stakes as guide dog handling. You can’t expect people to mobilize around something they have zero firsthand knowledge of and no lived context for.

Your line about the Deaf community being more unified? That’s not just a weak comparison — it’s condescending and manipulative as hell. Every community — Deaf, blind, disabled, or otherwise — has internal disagreements, divisions, and friction. Pretending they’re perfectly united just to guilt-trip this community into action isn’t solidarity. It’s pressure posing as insight.

And that “only one person here had awareness” claim? No. One person agreed with you. That’s not awareness — that’s alignment. Everyone else had different views, and your disgust doesn’t make them less valid. And just to be clear — no, you’re not the only one here capable of critical thinking. What you’re doing isn’t insight — it’s entitlement masked as superiority. The way you talk down to everyone else, as if disagreement equals ignorance, is arrogant, condescending, and self-important. You’re not pushing for change. You’re just lashing out and pretending it’s leadership. There’s a difference.

Now — to everyone else:

This push for digital ID and AI verification isn’t new. None of it is. Facial recognition? If you’ve traveled internationally, you’ve already had to put your face up to a kiosk. Shopping malls, public spaces, movie theaters — you’re already being recorded constantly. And if you’ve ever used Seeing AI, you know how easily expression, lighting, or facial hair throws off those so-called “smart” age estimates. The idea that AI can verify age accurately is just marketing fluff with a badge on it.

And this whole panic about privacy? That ship sailed a long time ago.

If you have a Google account, your data’s already archived. Your phone number, your IP, your location, your behavior. Use a credit card? Tracked. Pay with PayPal? Tracked. Apply for a passport? Tracked. Fly internationally? Your biometric data is scanned and matched against databases. Order food from Uber Eats? Tracked. Subscribe to Netflix? Tracked. Activate your phone? That too. Even gift cards and cash purchases are logged through store surveillance, metadata, and transaction history.

And even if you just step outside? You’re still being tracked. In today’s world, there’s almost no public space without a camera. Stores, intersections, building lobbies, transportation hubs — sooner or later, you’re going to pass in front of one without even realizing it. You don’t have to “give consent” to be recorded anymore. Just existing in public is enough.

Whether you wanted it or not, whether you’re aware of it or not — your data is already out there. That’s not theory. That’s fact. And in 100% of cases? It was handed over willingly, just to exist in modern life. The moment you participate — digitally, financially, socially — you’re in the system.

So when Spotify or YouTube eventually ask for ID or a selfie? Don’t act like that’s the breaking point. You’ve already given that same data to immigration agencies, banks, streaming platforms, government registries, food delivery services, app stores, phone carriers, and every street camera you’ve ever walked past — probably hundreds of times without a second thought.

And if you think that’s where it ends? It doesn’t. Your phone calls can be logged. Your text messages can be scanned, stored, and flagged. Metadata — who you contacted, when, for how long — is already being tracked by telecom infrastructure. Your voice assistants are always listening for wake words. Your location history is mapped down to the meter.

And to be clear — no, this level of deep surveillance isn’t applied indiscriminately or without legal pretext. It’s not “normal,” and your calls or texts aren’t being monitored arbitrarily. Typically, access to that kind of data happens through law enforcement requests, subpoenas, or fraud investigations — like tracking a stolen phone, identifying credit card abuse, or locating a suspect. But the infrastructure to extract and analyze that data — including metadata, device identifiers, location history, and usage patterns — is already in place and actively used. So if you think you have privacy, you don’t. You never did. Not in this system.

By Tara on Monday, August 4, 2025 - 16:51

I agree with you about advocacy. Fighting your corner takes time and patience, and it takes even longer for change to happen. Patience is something I don't have regarding these issues, I'd rather use something like Aira or use a VPN in this case. At least it gets the job done. What I was saying is that there are ways round this particular issue. And I wouldn't advocate for anything to do with guide dogs because I haven't got one and don't want one. My agreement with Ash was more to do with taking proactive immediate action to solve your issue. Use Aira or get a VPN rather than advocacy if you can't or don't want to advocate. And yep your data is out there. But a bill like this makes it even easier for a government to track what you're doing.

By mr grieves on Monday, August 4, 2025 - 16:51

I think regarding the privacy aspect, I mostly agree with you. But I think there are a few considerations.

1. Are we consenting to that data being taken? In a shopping mall, the answer would be no. Here we are being asked to hand it over freely.
2. Do we trust who we give it to? I think at an airport security check, we tend to trust the authority figure. Or at least we are scared enough of them not to question it. But it's maybe a different matter when you are talking about some random third party on the internet where you have no idea who they are or what their intentions are.

I think what will happen is that this will be normalised - it's just another little slip down that already slippery slope. And random popups on the internet asking us not only to accept cookies but now also to hand over our ID will just be something people do without thinking because it will be everywhere.

As I said before, I would be a lot happier if I were entrusting this to Apple, who could then simply answer a website's yes-or-no question in a trusted way, i.e. "am I of a particular age or not?" Sites don't need my birthday or my passport number or anything else; they just need a yes or a no with a bit of assurance behind it.
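To make that concrete, here's a rough hypothetical sketch of what such a yes/no attestation could look like. None of this is a real API; it uses a shared HMAC key for brevity, where a real system would use public-key signatures so the site never holds the verifier's key. The point is simply that the site only ever sees a signed true/false, never the birth date itself.

```python
import hmac, hashlib, json, secrets

# Hypothetical: the verifier (e.g. an OS vendor) holds the user's birth
# year privately and only ever releases a signed yes/no answer.
VERIFIER_KEY = secrets.token_bytes(32)  # verifier's signing key (HMAC for brevity)

def issue_attestation(birth_year: int, current_year: int, nonce: str) -> dict:
    """Verifier side: answer 'over 18?' without revealing the birth year."""
    over_18 = (current_year - birth_year) >= 18
    payload = json.dumps({"over_18": over_18, "nonce": nonce}, sort_keys=True)
    sig = hmac.new(VERIFIER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def site_checks(attestation: dict, expected_nonce: str) -> bool:
    """Site side: verify the signature and nonce; learns only yes/no."""
    expected = hmac.new(VERIFIER_KEY, attestation["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["sig"]):
        return False  # tampered or forged attestation
    claims = json.loads(attestation["payload"])
    return claims["nonce"] == expected_nonce and claims["over_18"]

nonce = secrets.token_hex(8)   # site's one-time challenge, prevents replay
att = issue_attestation(1990, 2025, nonce)
print(site_checks(att, nonce))  # True: the site learns age >= 18, nothing else
```

The nonce is what stops someone reusing an old attestation, and the site never needs, or gets, the actual date of birth.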

I think the Aira solution is OK for those who know about it and can cope with it, but that will be the minority. And if you have to do it more than once in a month, you'll start having to pay a further blind tax just to work the system.

There's no reason why accessibility should be a problem here. We have plenty of tech that can tell us if we are looking at a camera or not. It's difficult at this stage when most of us haven't had to confront the problem yet. I'm not sure I want to send off my face just for the sake of testing an app's accessibility out. It does feel sensible to try to get some podcasts onto the subject. I think if genuinely we are being blocked then even mainstream media might be persuaded to get involved, the Verge springs to mind but I'm sure there are others.

By Igna Triay on Monday, August 4, 2025 - 17:10

@Tara, I get where you’re coming from. Using Aira or a VPN as a quick workaround? Makes sense, and I respect the pragmatism. But here’s the catch: shortcuts get the job done for now—they don’t fix the system. And every time we lean on a temporary solution, it gets easier for the system to justify keeping the barrier in place. So while I totally understand not wanting to advocate, especially when you’re not directly affected, the problem doesn’t disappear—it just waits for someone else to deal with it.

@MrGrieves, your points are well thought out. But here’s the issue: this “slippery slope”? We’ve already slid halfway down it. The idea that this bill “makes it easier” for governments to track you assumes it was hard before. It wasn’t. As I mentioned earlier—credit cards, PayPal, phone metadata, GPS, facial recognition at airports, surveillance cameras in public, even basic app usage—this data’s already flowing in a dozen directions. What this bill does is drop the mask and formalize what was already happening in the background.

On consent: most of the systems you mentioned never asked for consent. No one asks you if you want to be tracked in a mall, or if you’re okay with your face showing up on ten security feeds just walking to a bus stop. The consent argument falls apart when the entire structure is built on assumed compliance.

And about trusting Apple to answer “yes or no” about your age—that’s ideal in theory. But the second they become the middleman? That trust will erode too. And then we’re just back where we started, only with prettier branding.

This doesn’t mean we throw our hands up—but it does mean we stop pretending we were ever standing on solid ground to begin with.

By mr grieves on Monday, August 4, 2025 - 17:17

The problem with this argument is that it says we have already admitted defeat, so what’s the harm in just going that little bit further.

(Edit - messed up first time. This is why I don’t tend to use my iPhone to write anything! lol)

By mr grieves on Monday, August 4, 2025 - 17:22

Sorry just going back to the Apple thing - right now companies like Spotify etc are deciding which companies to use for the verification systems. I think it would be much better if I could make that decision. Maybe I trust Apple, or - don’t laugh - Google or someone else. I think it would be better if that was up to me, not someone else. As right now we are going to have loads of these companies providing these services and not all of them are even going to be genuine in the first place, let alone trustworthy going forwards. A bit like choosing a password manager.