Shoshana Weissmann's Techdirt Profile

Shoshana Weissmann

About Shoshana Weissmann

Posted on Techdirt - 23 May 2025 @ 11:02am

Kids Don’t Have IDs And Age-Estimation Tech Is Frequently Very Wrong

Lawmakers continue to propose new bills that would require social media companies and app stores to segment users by age and obtain parental consent for minors. Laws like Utah’s new App Store Accountability Act and an identically named piece of federal legislation are being pushed across the country. Other bills, such as the Rhode Island Social Media Regulation Act, would require specific social media sites to verify user age.

The goal of these bills is to restrict minors’ use of social media and other apps by enabling parents to either consent to or revoke their child’s use of these services. Unfortunately, these proposals fail to account for the difficulty of verifying—or even estimating—a child’s age online.

No effective age-verification solutions for younger users currently exist, and there are real barriers to verifying parental consent. As in past posts from this series, we will examine these issues briefly and explain the main problems with each proposed solution.

Many children don’t have (and can’t get) government identification

The first issue is that many proposals attempt to age-gate social media by requiring users to upload government documents to prove their age. The laws themselves (or the accompanying regulations) often specify the need for a government ID card, and therein lies the first problem: Children generally don’t have government IDs and often can’t obtain them to begin with. For example, in Washington, D.C. only those aged 15 or older can acquire a limited purpose non-driver identification card; in New Jersey and Massachusetts, the minimum age is 14.

Additionally, fewer teens are choosing to get driver’s licenses. A study by the Congressional Research Service (CRS) found that just over 1 percent of 14- and 15-year-olds, 25 percent of 16-year-olds, 43 percent of 17-year-olds, and 60 percent of 18-year-olds have a driver’s license. The CRS was unable to find similar data for non-driver ID cards. This suggests that a significant number of young legal adults would be unable to access social media under these bills. And this is no small First Amendment problem—if the bills go into effect, millions of adults could be denied access to a popular medium for sharing and receiving speech.

Existing age-estimation technology is inaccurate

The second issue is that some proposals allow for age-estimation technology, which uses biometric means (e.g., face scans) to estimate user age. Unfortunately, these methods are plagued with inaccuracies, often failing particularly badly where it matters most: along the margins of age and age categories. Authors of an ongoing National Institute of Standards and Technology (NIST) study evaluating existing age-estimation algorithms go so far as to say, “[W]e do not have any evidence (yet) that an age-verification classifier can outperform a regression-like estimator on the same task.” (Classifiers attempt only to place a user above or below a given age threshold, whereas regression-like estimators predict a specific numeric age—a key distinction between the two.)

Notably, the Federal Trade Commission (FTC) voted unanimously to deny applications by the Entertainment Software Rating Board, Yoti, and SuperAwesome for use of a new verifiable parental consent mechanism under the Children’s Online Privacy Protection Rule. The FTC’s rejection letter specifically cites comments in the Federal Register that “raised concerns about privacy protections, accuracy, and deepfakes.”

Various pieces of legislation attempt to treat minors differently according to age—an approach that presents particular problems when it comes to age-estimation technology. While it’s important to recognize developmental differences between 10-year-olds and 17-year-olds, the algorithms fail significantly when attempting to differentiate adults aged 18 and over from minors aged 16 to 17, 13 to 15, and under 13. According to the NIST study, even Yoti—the best age-estimation software currently available—has an average error of 1.0 years, while other software options err by 3.1 years on average. Yoti demonstrates a true positive rate of 0.57 for children aged 13 to 16, which means it can correctly place someone in that age range slightly more than half the time—only somewhat better odds than flipping a coin.

This error is understandable in that the task isn’t an easy one—after all, we’re talking about differentiating between months (or even days) of life. But if age verification were required by law, 19- and 20-year-olds would routinely be classified as underage, requiring parental consent to use social media and/or other apps. Even as the technology evolves, a fraction of a percent of error can still amount to millions of people needing to use a different verification method.
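To make the stakes concrete, here is a rough, illustrative Python sketch. It is my own back-of-the-envelope assumption, not NIST’s methodology: if we treat the cited 1.0-year average error as the standard deviation of a normally distributed error, we can estimate how often young adults near the 18-year threshold would be flagged as minors.

```python
import math

def share_flagged_underage(true_age: float, threshold: float = 18.0,
                           sd: float = 1.0) -> float:
    """P(estimated age < threshold) under an assumed normal error model
    with the given standard deviation (illustrative only)."""
    z = (threshold - true_age) / sd
    # Standard normal CDF evaluated at z, via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for age in (18, 19, 20):
    print(f"{age}-year-olds flagged as minors: {share_flagged_underage(age):.0%}")
```

Under that assumed model, roughly one in six 19-year-olds and about one in fifty 20-year-olds would be misclassified as under 18, which illustrates why the margins around age thresholds are exactly where these tools fail most.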

Other proposed documents can’t accurately convey age, identity, or familial relationships

Legislators have also suggested using Social Security numbers (SSNs) to establish age and identity under these bills; however, SSNs can’t confirm identity, age, or familial relationships. The absence of photographs on Social Security cards means minors can easily use someone else’s SSN to access apps. Additionally, although SSNs are commonly used as identifiers absent the physical cards, they don’t actually convey identity—which is why synthetic identity fraud is not only possible, but extremely common.

None of these methods can effectively establish parental consent

Every one of these methods fails at providing parental consent for children to access social media or other apps. As already explored in this series, ensuring guardians and their children share last names is insufficient, as they may be different due to divorce, foster care, care by a non-parent family member, or something else. Four percent of children don’t even live with their parents. Because most children don’t have government photo IDs, age-estimation technology is flawed, and SSNs can be used fraudulently (and because none of these methods establish familial relationships), all are useless for the purpose of verifying parental consent.

Establishing familial relationships is important to verifying proper parental consent; in fact, the CRS explored the use of birth certificates for this purpose. However, there are two major issues with this method. First, several million Americans lack access to their birth certificate; second, birth certificates alone can’t prove identity because they don’t include a photograph. Unfortunately, there is no obvious way to combine birth certificates and face scans for children without a photo ID. This is a problem because verifying identity is crucial to determining the status of a child’s parent or guardian and ensuring minors can’t use other people’s identities to overcome age-gating.

Some bills, including the federal App Store Accountability Act, simply require that the parental account holder is at least 18 years of age and “affiliated with one or more account of a user or prospective user who is a minor.” While this approach would reduce burdens on everyone involved, it would also allow non-guardian adults to approve a child’s online behavior.

Conclusion

Much of the age verification and estimation debate fails to account for two simple truths: 1) children don’t have government identification; and 2) estimation tools are not quite where they need to be. Lawmakers must grapple with these realities in order to implement sound policy.

Republished with permission from R-Street. You can read the original here.

Posted on Techdirt - 30 January 2025 @ 01:21pm

No, Conscripting The App Stores Doesn’t Solve The Problems With Age Verification

Lawmakers show no sign of slowing down with laws to limit minors’ use of social media. State and federal legislation mandating that sites verify users’ age and adjust their social media experiences accordingly remains popular, despite having repeatedly failed court challenges. As of late, policymakers have turned to a different model where parents have to consent to app store downloads made by their children. But this new approach is just as doomed as the others because of inescapable functional and constitutional issues.

Many advocacy groups that have supported other attempts to block youth access to social media have grabbed hold of the app store mandate as an easier, more technologically feasible approach than prior proposals like mandating content filtering at the mobile-device level. In practice, however, these proposals run into all of the basic concerns with constitutionality, efficacy, and cybersecurity that R Street and others have warned are inherent in any attempt to age-gate access to general-purpose digital services.

The idea of using the app store as a checkpoint for restricting youth access to harmful content gained steam after Meta endorsed it in 2023. An early attempt to enact the proposal into law was blocked in Louisiana late last year, and a similar amendment at the federal level was introduced by Rep. John James (R-Mich.) but did not advance. Rep. James and Sen. Mike Lee (R-Utah) subsequently introduced similar bills, H.R. 10364 and S. 5364, at the very end of 2024. These two federal bills share a title (the “App Store Accountability Act”) but use different mechanisms to require age verification, although the net result of either would be largely the same.

Presumably, these last-minute federal bills anticipate a more serious effort in the coming year, and similar legislation is expected to pop up in a number of states in 2025. South Carolina’s H. 3405, also called the “App Store Accountability Act,” is the first to have been actually filed, followed by Alaska’s H.B. 46 and Utah’s S.B. 142.

App-store-level versus social-media-level age verification doesn’t change anything

The primary, unavoidable problem with these proposals is that they would require the app stores to use some method to verify the age of every person in the United States who wishes to use any of the large app stores on their mobile devices and to obtain “verifiable parental consent” when a minor wishes to access an app or in-app purchases.

South Carolina’s H. 3405, for example, forces age verification on every person in the state as soon as the law takes effect. “Beginning January 1, 2026, using commercially available methods, app store providers shall determine the age category for every individual located in this State that purchases or uses apps from their app store and verify that user’s age,” reads the proposal.

This would result in the same problems as any other age verification and parental consent scheme, with no better chance of being found constitutional. Specifically, full age verification requires documentation like government IDs, Social Security numbers, or credit card information. And even if the laws are tailored or interpreted to merely allow age estimation, the most widely available commercial technologies to accomplish this would still force every app store customer to submit to an intrusive—and, for younger adults, error-prone—biometric scan.

In either case, obtaining parental consent requires some form of documentation proving that the permission-giver is both an adult and a legal parent or guardian of the specific teenager or child. There is no way around this, meaning the bills require not just age verification, but identity verification. This poses a host of problems. For example, not all children and parents share the same last name; some minors have nontraditional families and multiple legal guardians, and some children might simply obtain their parents’ information and impersonate them. And whatever personal information parents have to provide about themselves or their children becomes vulnerable to hackers, as proven by recent leaks of these very age and identity verification systems.

Because other, easily accessible tools already exist for parents to restrict their children’s access to apps and the app stores, it is very unlikely that courts would agree that universal age verification constitutes the least-restrictive means of achieving the goal of protecting children from harmful content on their devices. Being unable to pass the least-restrictive-means test under First Amendment precedent sets these laws up for failure.

Indeed, there have never been more options for parents to lock down and monitor what their children access online. Virtually all modern cell phones and devices have easy-to-use parental controls at the device, app store, and browser levels, and there are websites dedicated to walking parents through their use. There is also a thriving marketplace for third-party software that can be downloaded to provide even more granular and comprehensive restrictions on what online services children can access on their devices, and when. This does not mean the task is easy for parents, but they do have an abundance of options that are more effective than this type of legislation would be.

Even if these bills could work, gating at the app-store level is a poor fit

Another oddity of specifying the app stores as the means to protect children from harm is that the internet’s worst material cannot even be accessed through the app store. For example, one of the primary justifications for the App Store Accountability Act is to protect minors from sexually explicit content, yet adults-only apps with this type of content are not even allowed in the Android or Apple app stores, and other similar sites like OnlyFans do not even have an app. Sometimes app updates attempt to evade these prohibitions, or apps find other ways around the rules initially, but the app stores eventually find and remove them. In the past, this evasion also happened through programs meant for distributing employee-only apps. This means that the most concerning “app” for children is whichever web browser they prefer, through which they can already directly access adult content.

Of note, app ratings are also designed to help parents and caregivers. The social media apps that allow adult content, such as Reddit and X (formerly Twitter), are rated M for mature, whereas other social media apps like Instagram, TikTok, and Snapchat are rated T for teen, meaning caregivers can block children’s access with simple parental controls.

Once again, age-gating access to general-purpose platforms is unconstitutional

The main effect of these app store mandates is to shift liability from companies that own individual apps to the owners of the app stores themselves. Sen. Lee’s version allows any parent a full private right of action to sue app store owners for exposure to harmful content, using this as the teeth to coerce app store owners into enacting age verification by providing a safe harbor from liability if they do. South Carolina’s version appears to lack any explicit safe harbor, meaning that the app stores could be liable in spite of their best efforts to comply. The lack of safe harbor creates a perverse incentive for app store owners to block access to any apps that might host content that could get them sued. In general, laws that cause this sort of “collateral censorship” of otherwise-protected speech have failed First Amendment scrutiny in the courts, with California’s Age-Appropriate Design Code appearing destined to fall for just such an infringement.

Aside from questions about their practicality or efficacy, all of these bills contain other constitutional problems. Sen. Lee’s bill seeks to limit minors’ access to content that is “any graphic image or video of real or simulated violence.” This would not only implicate apps that might house content like disturbing war and true crime footage but also potentially important historical content. Such a provision would not stand up in court. Plus, it runs into similar under-inclusivity constitutional issues as past laws that were found unconstitutional, in that it seeks to limit access to violent content on apps, but not on websites, in movies, or in video games.

Shockingly, every bill like this mentioned in this post also requires that app developers verify user age or age category and obtain parental consent, regardless of the app type. This means that parents would have to consent to their children using calculator apps, Bible apps, history apps, or anything else innocuous and rated for all ages. As Justice Antonin Scalia wrote in Brown v. Entertainment Merchants Association, in which the Supreme Court found that laws cannot condition children’s access to non-obscene speech on parental permission, “we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.”

This has no effect on laptops

A final and important point is that most of these proposals really don’t have much impact on content access on laptop computers. Sure, the text of the bills accounts for them, but downloading apps outside of device app stores is a common practice on laptops. Parents can already prevent their minors from downloading external software by adjusting the laptop settings, which gets to the original issue—these bills create constitutional problems in an effort to force certain companies to give parents features that already exist.

Conclusion

The benefits parents stand to gain from this type of legislation are severely limited, duplicating features that already abound. If these requirements were put into law and forced parental consent, they would create massive cybersecurity and privacy problems, including heightened opportunities for identity theft. They would also violate the First Amendment many times over. Instead of continually searching for new and creative ways to force age-gating mandates onto online services, legislators should focus on strategies to help parents understand the power they already have to manage their children’s access to online content.

Originally posted to R Street’s series on “The Fundamental Problems with Social Media Age-Verification Legislation.”

Posted on Techdirt - 28 June 2023 @ 12:05pm

Social Media Was Useful For Me, As An Ill, Nerdy Teenager

Lately, concerns about allowing minors to use social media have been front and center in the public discourse, but the ways in which minors use it productively have been absent from the conversation. My own anecdotal experience does not constitute comprehensive data, but it’s important to understand how social media can be extremely useful for teens who are sick and who have unique interests. Current proposals and laws range from restricting minors on social media to outright bans. Many of these laws would have prevented me from finding medical answers, friends, and career options.

My autoimmune disease count is up to eight, and always growing. Last year it was a solid seven, and six the year prior. Autoimmune diseases tend to come in groups and can mimic each other, making diagnosis difficult. Ever seen the Simpsons episode with Mr. Burns’ doctor explaining how his many diseases work together by pushing a bunch of fuzzy toys through a door? (See: Three Stooges Syndrome.) My first diagnosis was at the age of 13: Crohn’s Disease. It’s only the beginning of the story.

Current laws and proposals would endanger minors’ access to health information

A key piece of how I found my answers both as a teenager and an adult was social media. As I tell my story, it’s important to keep in mind that support forums for people with various medical conditions would be covered by many new social media laws and proposals intended to protect kids online. Multiple proposals completely ban minors below various ages from accessing social media. Even for the laws and proposals that don’t ban minors from using platforms but instead require parental consent, that minor will then have their and their guardian’s personally identifiable information tied to a forum about their condition forever.

The most effective age verification tools currently in use require very personal information, from government IDs to face scans. This is troubling for many reasons, a key one being that platforms regularly suffer from data breaches and hacking. While I am open about most of my health struggles, many people understandably want to keep their medical information private. Not only would tying one’s identity to sensitive medical information discourage speech, it would also discourage support networks and medical information sharing.

My diagnosis process before social media, and then with social media

It took well over a year, eight gastroenterologists, five gynecologists, two colonoscopies, endless CAT scans, blood tests, and more to get my endometriosis diagnosis. My father had suggested that exact diagnosis early on after some research, but multiple doctors dismissed the possibility. Being very sick as a female teenager led to more rolled eyes than genuine concern. Seven gastroenterologists, two gynecologists, one school psychologist, and various other medical professionals completely dismissed me. Sure, they would find an inflamed colon or an ovarian cyst or fluid in my pelvis, but that wasn’t enough to overcome the fact that I was too “young” to have any condition. This period of my life was hell. I was very sick, I didn’t know why, and with enough doctors calling me “crazy” or telling me I should “stop lying” or “go to the mall,” I questioned my own sanity.

The last gastroenterologist was very helpful and agreed I had Crohn’s Disease—a diagnosis I received from my fourth gastroenterologist after a colonoscopy—but believed my most severe issues might be caused by something else, beyond his expertise. After a visit with a third gynecologist, a fourth confirmed the suspected diagnosis: endometriosis.

Until recent years, this disease was often written off by doctors as “in your head” even when internal bleeding and other clear signs were visible. Later, I learned it took some women as much as 20 years to get a diagnosis. My endometriosis was successfully removed during a laparoscopic surgery when I was 14 years old. The surgeon noted the severity of my endometriosis. No wonder I was prescribed Vicodin by a pain management specialist, and the pain was still pretty bad.

During the next few years, the medical tests slowed, but did not stop. I still had unresolved symptoms and wanted to figure out what was causing them. And by now, I was understandably skeptical of medical professionals and began more research of my own. After years of my primary care doctors writing off endless colds and infections as “a bad season,” I searched something like “endometriosis” and “getting sick all the time.” I ended up on a message board—a form of social media—where someone described the same problem and said they had been diagnosed with fibromyalgia. After finding rheumatologists in my area and their reviews, I settled on someone a bit far from me because he was so highly rated. He instantly recognized my symptoms as those of fibromyalgia. Since he prescribed me Cymbalta, I spend a tiny fraction of the time sick that I did before, and my seasonal allergies no longer induce asthma.

These days, my favorite doctor who has helped me identify and treat several smaller diseases will actually conduct internet research in front of me. But in addition to this, most successful supplements and diagnosis recommendations for me began on various forms of social media. A random follower of mine on Twitter recommended a supplement that I began taking after checking its interactions with my other medications and supplements, and it has greatly increased my energy and overall well being.

Another person on Twitter discovered the likely source of one of my strangest symptoms, one I had been trying to figure out for years; it works a lot like erythromelalgia. I’ll be talking to my doctor about it soon. Without the use of social media as a teenager, I would not have my fibromyalgia diagnosis, and I would not have been able to learn how to use social media to find answers to my many medical problems. And although I am very open about my experiences now, it wasn’t always that way. Plus, age verification as a barrier to accessing these forums may have given me pause in asking for help and researching my medical problems.

Social media gave me a career and a hobby

And as one can imagine, with severe pain, nausea, and exhaustion comes a lot of time on the couch. When I was at my worst at ages 13 and 14, I watched a lot of news, and it grew my passion for politics. I decided at the time that I wanted to be a U.S. senator (hence “Senator Shoshana”). My father found the local Young Republicans club, and I became fast friends with many people in their 20s and 30s who treated me like a younger sister. Over the following years during high school, I would keep in touch with these new friends on social media. My friends from school didn’t care much about politics, and it was a good outlet for me to talk with likeminded people. Under several current legislative proposals, this would not have been possible.

Local elected officials and government staffers would often add me as a friend on Facebook. It was an incredible way for me to network and acquire opportunities to intern as a teenager, in addition to the in-person events. Unfortunately, many laws and legislative proposals limiting the use of social media by teenagers specifically prohibit people who are not already friends with the minor from finding them in search results or messaging them. If this had been law during my teen years, it would have prevented me from growing my career by talking to elected officials online, even with parental consent.

This list isn’t exhaustive. I made friends on social media with a talk radio host whose ideas resonated with me—we’re still friends today. I have been sewing since I was eight years old, and loved Project Runway. I added my favorite designer from the show and his model on MySpace, and was able to converse with and learn from the model. I also made friends with a model from a vintage clothing store who was very supportive of my sewing and taught me about vintage clothing. All of this and more showed me how big and how small the world was, and all the opportunities at my keyboard that I could use to grow as a person and as a professional. My thinking regarding laws about social media isn’t so much about protecting what I had, but about not closing the door on all this opportunity for future generations.

Outright government bans are the wrong strategy

Not all minors will use social media safely and productively. Parents have an important role to play by talking to their children, using filters, or maybe banning their children from social media where appropriate. But outright government bans undermine parental and child choice. And legally-mandated age verification creates tethers to one’s online information where anonymity would be preferable, in addition to security concerns for children and adults alike.

As is the case with all government action, well-intended proposals can have negative consequences and stifle people’s opportunity. If these proposed social media restrictions had existed when I was younger, it’s possible I’d have missed out on formative pieces of my career, not to mention hobbies. And who knows, I may have never gotten the health diagnoses that drastically improved my quality of life. I implore policymakers to think before they take these social media ban leaps and unintentionally hold back generations to come.

Shoshana Weissmann is the digital director and a policy fellow at the R Street Institute.