One of Europe's most controversial digital policy proposals promises to protect children, but the cure may be worse than the disease.
Every few years, a policy idea comes along that is almost impossible to argue against in public. Mandatory age verification for the internet is one of those ideas. Who could possibly object to protecting children online? The framing is genius, politically speaking, because anyone who raises concerns about how it actually works gets to sound like they are against child safety. That framing is also, I think, the main reason these proposals keep advancing despite being fundamentally broken.
Age verification mandates are spreading across Europe and beyond. The UK's Online Safety Act is already law. The EU has similar proposals in motion. Australia has moved to ban minors from social media outright. The political momentum is real. But political momentum and good policy are not the same thing, and if you look past the messaging and into the mechanics, what you find is a system that fails at its stated goal, creates serious new risks, and shifts the burden entirely onto the wrong people.
I am against these policies. Not because I think children don't need protection online, but because age verification does not actually protect them, and the collateral damage is enormous.
Why people support it
The arguments in favor are not stupid, and I want to be honest about that.
Kids are online too much and too young. They are exposed to pornography, to violent content, to adults who should not have access to them. Parents feel powerless. Schools are overwhelmed. Tech companies have spent years promising to fix these problems and have delivered very little. The frustration is legitimate.
Against that backdrop, age verification feels like an obvious solution. We already require ID to buy alcohol, to gamble, to see certain films. Why should the internet be exempt? If a child cannot walk into a strip club, why can they access one on their phone? A legal duty of care, the argument goes, would force platforms to finally take responsibility for their users. Biometric checks or ID verification could confirm that the person on the other end is actually old enough. Platforms would have to curate content appropriately for different age groups. The rules would be consistent across countries and companies instead of the current patchwork of half-measures.
Some proponents go further. They argue that minors simply are not mature enough for social media at all, that the evidence on mental health, attention, and development points clearly toward keeping young children off these platforms entirely. And they argue that age verification can be done in a privacy-friendly way, through third-party services that confirm your age without revealing your full identity.
I understand all of this. I think some of the underlying concerns are correct. But the proposed solution does not match the problem, and in several ways it makes things worse.
It does not work
This is the part that should matter most and somehow gets the least attention in public debate.
Age verification does not keep minors off the internet. It does not meaningfully reduce their exposure to harmful content. We already have evidence for this. Around 80% of minors encounter pornography online despite the age restrictions that currently exist, and 68% of children under 13 use social media even though virtually every platform's terms already prohibit under-13 accounts. These numbers should embarrass anyone claiming that verification is the missing piece.
The evasion routes are trivially easy. VPNs are free or nearly free. Older siblings' or parents' credentials work fine. Platforms hosted outside the regulating country's jurisdiction do not care about its laws. A moderately tech-literate thirteen-year-old will bypass an age gate faster than a legislator can explain how it is supposed to work. This is not a theoretical concern; it is already happening under the restrictions we have today.
So who does get affected? Adults. Law-abiding adults who want to access legal content and are now required to hand over government-issued ID or submit to biometric checks to do so. The policy does not filter out minors. It filters out people who follow rules, which is exactly the population that was never the problem.
The data problem
Every age check requires data. Someone has to see your ID, your face scan, your date of birth, and confirm that you are who you say you are. That information has to exist in a system somewhere, even if only briefly.
Proponents like to say this can be done anonymously through third-party services. But think about what "anonymous age verification" actually means. A service confirms your age. To do that, it first needs to know your real identity. Even if it discards that information after the check, it possessed it. And "we delete it afterwards" is a policy promise, not a technical guarantee. Policies change. Companies get acquired. Servers get breached.
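The structural problem is easy to see in a schematic sketch. This is a deliberately simplified, hypothetical flow (the names and data structures are illustrative, not any real provider's API): the platform receives only an age token, but the verification service necessarily handles the full identity document to produce it.

```python
from dataclasses import dataclass

# Hypothetical, minimal model of third-party age verification.
# Illustrative only; real systems add cryptography and auditing,
# but the verifier still handles the identity at check time.

@dataclass
class IDDocument:
    full_name: str       # the verification service necessarily sees this
    birth_year: int      # simplified to a year for the sketch
    document_number: str

@dataclass
class AgeToken:
    over_18: bool        # all the platform ever receives

class VerificationService:
    def __init__(self):
        # Even a "we delete it afterwards" policy means the data
        # existed here first; retention is a choice, not a guarantee.
        self.handled = []

    def verify(self, doc: IDDocument, current_year: int) -> AgeToken:
        # To compute the token, the service must process the full document.
        self.handled.append(doc)
        return AgeToken(over_18=(current_year - doc.birth_year >= 18))

service = VerificationService()
token = service.verify(IDDocument("Jane Doe", 1990, "X123"), 2025)
print(token.over_18)         # the platform learns only this boolean
print(len(service.handled))  # but the service possessed the whole document
```

The platform's view is genuinely minimal, which is what the "privacy-preserving" label refers to. The vulnerability sits one step upstream, at the service that performed the check.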
This is not a hypothetical risk. Centralized collections of identity data get hacked with depressing regularity. The more people are required to verify, the bigger and more attractive the target becomes. We are talking about building, by law, continent-scale databases linking real identities to the platforms people use and the content they access. The history of data security tells us exactly how that ends.
Proponents frame verification as "enhancing security" and "protecting platform integrity." I think the opposite is true. You are taking sensitive personal data that did not previously exist in one place and forcing its creation and concentration. That is not security. That is a new vulnerability, built by mandate.
The chilling effect
There is a cost to age verification that gets very little attention because it is hard to quantify, but it is real and it matters.
When people know their real identity is tied to what they read, watch, and say online, they change their behavior. They avoid content that might look bad if someone found out. They hold back opinions. They do not click on articles about sensitive topics. They stay away from support forums for things they are not ready to discuss publicly.
This pattern is well-documented in research on surveillance and self-censorship. It has a name: the chilling effect. And it does not fall equally on everyone. The people most affected are those with the most to lose from exposure: journalists protecting sources, lawyers communicating with clients, abuse survivors seeking help, LGBTQ+ individuals in environments where being outed is dangerous, political dissidents, whistleblowers.
These are not edge cases. These are the people privacy exists to protect. An identity-linked internet is a less free internet, not because of some abstract principle, but because real people will not seek help, will not speak up, and will not access information they need.
For authoritarian governments watching Europe lead the way on mandatory identity verification, this is not an unfortunate side effect. It is exactly the capability they want. When a European democracy says "this is how we protect children," it provides cover for every government that wants the same infrastructure for very different reasons.
The offline analogy does not hold up
The comparison to pubs, casinos, and cinemas is the most popular argument for age verification, and it is also the most misleading.
A pub is a single physical venue with a single entrance. A bartender glances at your ID, confirms you look old enough, and hands it back. They do not photocopy it. They do not store it in a database. They do not link it to a record of everything you drink that evening.
The internet is not a single venue. It is a global network with billions of access points, run by millions of operators across every jurisdiction on earth. Checking ID at one door accomplishes nothing when there are a thousand unguarded windows. And the "checking" does not look like a bartender's glance. It looks like uploading your passport or submitting to a face scan, with that data flowing through systems you do not control and cannot audit.
There is also the question of who pays for all those doors. Compliance costs money: building verification systems, contracting with third-party providers, handling the legal liability. Large platforms can absorb this. Smaller sites, independent platforms, open-source projects, and individual creators cannot. The practical result is a regulatory barrier that protects incumbents and makes the internet less diverse, not safer.
Answering the rebuttals
"Age verification can be privacy-preserving." This is the most common defense and it does not survive contact with how verification actually works. To verify your age, someone must first establish your identity. Whether that check takes one second or one hour, your identity was in someone else's hands. "Privacy-preserving" is a label, not a technical property of these systems. The data either existed or it did not, and if it existed, it could be intercepted, stored, leaked, or subpoenaed.
"Filters and parental controls are not enough." Enough for what? Filters and parental tools already exist, and they work about as well as verification does, which is to say imperfectly. The argument for verification assumes it solves the gap that filters leave open. It does not. Kids bypass verification the same way they bypass filters: VPNs, shared credentials, alternative platforms. You have not closed the gap. You have added a second system that fails in the same way the first one does, except this one also collects your ID.
"Platforms should have a duty of care." I agree, in principle. But "duty of care" in practice means platforms must decide what content is appropriate for which audiences and enforce those decisions at scale. History tells us what happens: they over-censor. Legal content gets blocked. Edge cases get resolved in favor of removal because the legal risk of under-blocking is worse than the cost of over-blocking. Duty of care sounds like responsibility. In practice, it is a license for platforms to restrict legal speech, and a legal obligation to do so.
"Minors are not mature enough for social media." This might be true, and it is a conversation worth having honestly. But the question is not whether children should spend less time on social media. It is whether age verification is the tool that achieves that. And the evidence says no. Kids are already on social media in massive numbers despite rules that are supposed to keep them off. Blanket bans enforced through verification do not account for the reality of how young people actually use the internet. They ignore the role of education, parental involvement, and platform design in shaping that use. And they treat a complicated developmental question as if it can be solved with a login screen.
"The current system is failing, so we need to try something new." This is the most emotionally compelling argument, and the most dangerous one. The current system is failing, in many ways. But "something must be done, this is something, therefore we must do this" is not reasoning. It is frustration dressed up as logic. A new policy has to actually work better than what it replaces. If it does not, you have spent political capital, created new infrastructure, imposed new burdens, and left the original problem exactly where it was, except now there is also a continent-scale identity database with no clear purpose.
What would actually help
If the goal is genuinely to protect minors online, there are things that would actually move the needle.
Fund law enforcement properly. The agencies investigating online child exploitation are understaffed and under-resourced in almost every country. Give them what they need to do targeted, warrant-backed investigations instead of asking them to sift through millions of automated flags.
Fix platform design. The features that actually harm young people are not the absence of an age gate. They are algorithmic recommendation systems that push progressively extreme content, engagement mechanics designed to be addictive, and default settings that expose children to contact from strangers. Regulate those. Force transparency about how recommendation algorithms work. Require platforms to make their safest settings the default for young users.
Invest in education. Digital literacy programs that teach children and parents how to navigate the internet safely, how to recognize predatory behavior, and how to use the parental controls that already exist would do more good than any verification mandate. It is less politically satisfying than passing a sweeping law, but it actually addresses the problem.
Improve parental tools. The controls available to parents today are often clunky, hard to find, and inconsistent across platforms. Making them better, more accessible, and more standardized is a concrete improvement that respects both children's safety and everyone else's privacy.
None of these solutions are as simple as "require ID at the door." They require sustained effort, real funding, and political patience. But they address the actual mechanisms of harm instead of performing a security check that determined users will always get around.
The real question
Age verification is popular because it is easy to explain and hard to oppose in a soundbite. But policy should not be evaluated by how well it polls. It should be evaluated by whether it works.
These mandates do not reduce minors' access to harmful content in any measurable way. They create new risks by forcing the collection of sensitive identity data at massive scale. They chill legal speech and disproportionately harm the most vulnerable users. They impose costs that entrench large platforms and squeeze out smaller ones. They provide a template for authoritarian governments to follow. And they drain attention and resources from the approaches that actually have a chance of making children safer.
The kids these laws are supposed to protect deserve a serious response to a serious problem. Age verification is not that response. It is the appearance of action, and we should stop pretending otherwise.
AI Transparency Disclosure: Portions of this work were refined with the help of AI tools, used to improve clarity, structure, and readability. All ideas, arguments, and original insights remain my own. The technology served only as an aid to polish expression and presentation, not as a source of creative or intellectual content.