KOSA Won’t Make The Internet Safer For Kids. So What Will?

from the let's-think-this-through dept

I’ve been asked a few times now what to do about online safety if the Kids Online Safety Act is no good. I will take it as a given that not enough is being done to make the Internet safe, especially for children. I think there is enough evidence to show that while the Internet can be a positive for many young people – especially marginalized youth who find support online – there are also significant negatives that correlate with real-world harms and lead to suffering.

As I see it, there are three separate but related problems:

  1. Most Internet companies make money off engagement, so there can be misaligned incentives, especially when toxic content drives engagement.
  2. Trust & Safety is the linchpin of efforts to improve online safety, but it represents a significant cost to companies without a direct connection to profit.
  3. The tools used by Trust & Safety, like content moderation, have become a culture war football and many – including political leaders – are trying to work the refs.

I think #1 tends to be overstated, but X/Twitter is a natural experiment on whether this model is successful in the long run, so we may soon have a better answer. I think #2 is understated, but it’s a bit hard to find government solutions here – especially ones that don’t run into First Amendment concerns. And #3 is a confounding problem that taints all proposed solutions. There is a tendency to want to use “online safety” as an excuse to win culture wars, or at least to tack culture war battles onto legitimate attempts to make the Internet safer. These efforts run headfirst into the First Amendment, because they are almost exclusively about regulating speech.

KOSA’s main gambit is to discourage #1 and maybe even incentivize #2 by creating a sort of nebulous duty of care that basically says if companies don’t have users’ best interests at heart in six described areas then they can be sued by the FTC and State AGs. The problem is that the duty of care is largely directed at whether minors are being exposed to certain kinds of content, and this invites problem #3 in a big way. In fact, we’ve already seen politically connected anti-LGBTQ organizations like Heritage openly call for KOSA to be used against LGBTQ content and Senator Blackburn, a KOSA co-author, connected the bill with protecting “minor children from the transgender.” This also means that this part of KOSA is likely to eventually fall to the First Amendment, as the California Age Appropriate Design Code (a bill KOSA borrows from) did.

So what can be done? I honestly don’t think we have enough information yet to really solve many online safety problems. But that doesn’t mean we have to sit around doing nothing. Here are some ideas of things that can be done today to make the Internet safer or prepare for better solutions in the future:

Ideas for Solving Problem #1

  • Stronger Privacy: Having a strong baseline of privacy protections for all users is good for many reasons. One of them is breaking the ability of platforms to use information gathered about you to keep you on the platform longer. Many of the recommendation engines that set people down a bad path are algorithms powered by personal information and tuned to increase engagement. These algorithms don’t really care about how their recommendations affect you, and can send you in directions you don’t want to go but have trouble turning away from. I experienced some of this myself when using YouTube to get into shape during the pandemic. I was eventually recommended videos that body shamed and recommended pretty severe diets to “show off” your muscles. I was able to reorient the algorithm towards more positive and health-centered videos, but it took some degree of effort and understanding how things worked. If the algorithm wasn’t powered by my entire history, and instead had to be more user directed, I don’t think I’d be offered the same content. And if I did look for that content, I’d be able to do so more deliberately and carefully. Strong privacy controls would force companies to redesign in that way.
  • An FTC 6(b) study: The FTC has the authority to conduct wide-ranging industry studies that don’t need a specific law enforcement purpose. In fact, they’ve used their 6(b) authority to study industries and produce reports that help Congress legislate. This 6(b) authority includes subpoena power to get information that independent researchers currently can’t. KOSA has a section that allows independent researchers to better study harms related to the design of online platforms, and I think that’s a pretty good idea, but the FTC can start this work now. A 6(b) study doesn’t need Congressional action to start, which is good considering the House is tied up at the moment. They can examine how companies work through safety concerns in product design, look for hot docs that show they made certain design decisions despite known risks, or look for mid docs that show they refused to look into safety concerns.
  • Enhance FTC Section 5 Authority: The FTC has already successfully obtained a settlement based on the argument that certain harmful design choices violate Section 5’s prohibition of “unfair or deceptive” business practices. The settlement required Epic to turn off voice and text chat in the game Fortnite for children and teens by default. Congress could enhance this power by clarifying that Section 5 includes dangerous online product design more generally and require the FTC to create a division for enforcement in this area (and also increase the FTC’s budget for such staffing). A 6(b) study would also lay the groundwork for the FTC to take more actions in this area. However, any legislation should be drafted in a way that does not undercut the FTC’s argument that it already has much of this authority, as doing so would discourage the FTC from pursuing more actions on its own. This is another option that likely does not need Congressional action, but budget allocations and an affirmative directive to address this area would certainly help.
  • NIH/other agency studies: Another way to help the FTC to pursue Section 5 complaints against dangerous design, and improve the conversation generally, is to invest in studies from medical and psychological health experts on how various design choices impact mental health. This can set a baseline of good practices from which any significant deviation could be pursued by the FTC as a Section 5 violation. It could also help policy discussions coalesce around rules concerning actual product design rather than content. The NTIA’s current request for information on Kids Online Health might be a start to that. KOSA’s section on creating a Kids Online Safety Council is another decent way of accomplishing this goal. Although, the Biden administration could simply create such a Council without Congressional action, and that might be a better path considering the current troubles in the House. I should also point out that this option is ripe for special interest capture, and that any efforts to study these problems should include experts and voices from marginalized and politically targeted communities.
  • Better User Tools: I’ve written before on concerns I had with an earlier draft of KOSA’s parental tools requirements. I think that section of the bill is in a much better place now. Generally, I think it’s good to improve the resources parents have to work with their kids to build a positive online social environment. It would also be good to have tools for users to help them have a say in what content they are served and how the service interacts with them (i.e. turning off nudges). That might come from a law establishing a baseline for user tools. It might also come from an agency hosting discussions on and fostering the development of best practices for such tools. I will again caution though that not all parents have their kids’ best interests at heart, and kids are entitled to privacy and First Amendment rights. Any work on this should keep that in mind, and some minors may need tools to protect themselves from their parents.
  • Interoperability: One of the biggest problems for users who want to abandon a social media platform is how hard it is to rebuild their network elsewhere. X/Twitter is a good example of this, and I know many people who want to leave but have trouble rebuilding the same engagement elsewhere. Bluesky and Mastodon are examples of newer services that offer some degree of interoperability and portability of your social graph. The advantages of that are obvious: more competition and more user choice. This is again something the government could support by encouraging standards or requiring interoperability. However, as Bluesky and Mastodon have shown, interoperable platforms struggle with content moderation because it’s a large cost not directly related to profit. This remains a problem to be solved. Ideally a strong market for effective third-party content moderation would develop, but this is not something the government can be involved in because of the obvious First Amendment problems.
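To make the contrast in the privacy bullet above concrete, here is a toy sketch of the difference between an engagement-tuned recommender, which scores content against a behavioral profile the platform inferred about you, and a user-directed one, which scores only against interests you explicitly chose. All topics, items, and weights here are invented for illustration:

```python
# Toy contrast: engagement-tuned vs. user-directed ranking.
# All topics, items, and weights are invented for illustration.

items = [
    {"title": "Balanced home workout", "topics": {"fitness"}},
    {"title": "Extreme cutting diet", "topics": {"fitness", "extreme-diet"}},
]

# Engagement-tuned: weights come from a behavioral profile the
# platform inferred, including topics that merely held attention.
inferred_profile = {"fitness": 1.0, "extreme-diet": 0.9}

# User-directed: only interests the user explicitly opted into.
chosen_interests = {"fitness"}

def engagement_score(item):
    # Sum the inferred weights for every topic the item touches.
    return sum(inferred_profile.get(t, 0.0) for t in item["topics"])

def user_directed_score(item):
    # Reward chosen topics, penalize topics the user never asked for.
    return len(item["topics"] & chosen_interests) - len(item["topics"] - chosen_interests)

engagement_ranked = max(items, key=engagement_score)["title"]
user_ranked = max(items, key=user_directed_score)["title"]
print(engagement_ranked)  # the extreme-diet video wins on engagement
print(user_ranked)        # the balanced video wins when the user directs
```

The point isn’t the arithmetic – it’s that in the first model a profile the platform built (including whatever merely kept you watching) decides what you see, while in the second the user’s stated choices do.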

Ideas for Solving Problem #2

  • Information sharing: When I went to TrustCon this year, the number one thing I heard was that T&S professionals need better information sharing – especially between platforms. This makes perfect sense: it lowers the cost of enforcement and improves its quality. The kind of information we are talking about is emerging threats and the most effective ways of dealing with them – for example, coded language people adopt to get around filters meant to catch sexual predation on platforms with minors. There are ways the government can foster this information sharing at the agency level by, for example, hosting workshops, roundtables, and conferences on online safety geared towards T&S professionals. It would also be helpful for agencies to encourage “open source” information for T&S teams to make things easier for smaller companies.
  • Best Practices: Related to the solutions above, a government agency could engage the industry and foster the development of best practices (as long as they are content-agnostic), and a significant departure from those best practices could be challenged as a violation of Section 5 of the FTC Act. Those best practices should include some kind of minimum for T&S investment and capabilities. I think this could be done under existing authority (like the Fortnite case), although that authority will almost certainly be challenged at some point. It might be better for Congress to affirmatively task agencies with this duty and allocate appropriate funding for them to succeed.

Ideas for Solving Problem #3

  • Keeping the focus on product design: Problem #3 is never going away, but the best way to minimize its impact AND lower the risk of efforts getting tossed on First Amendment grounds is to keep every public action on online safety firmly grounded in product design. That means every study, every proposed rulemaking, and every introduced bill needs to be examined first with a basic question: “does this directly or indirectly create requirements based on speech, or suggest the adoption of practices that will impact speech?” Having a good answer to this question is important, because the industry will challenge laws and regulations on First Amendment grounds, so any laws and regulations must be able to survive those challenges.
  • Don’t Undermine Section 230: Section 230 is what enables content moderation work at scale, and online safety is mostly a content moderation problem. Without Section 230 companies won’t be able to experiment with different approaches to content moderation to see what works. This is obviously a problem because we want them to adopt better approaches. I mention this here because some political leaders have been threatening Section 230 specifically as part of their attempts to work the refs and get social media companies to change their content moderation policies to suit their own political goals.

Matthew Lane is a Senior Director at InSight Public Affairs.



Comments on “KOSA Won’t Make The Internet Safer For Kids. So What Will?”

28 Comments
Anonymous Coward says:

All these problems existed well before the Internet did, as influencer magazines caused body image problems and the ‘Anarchist Cookbook’ was circulating, along with porn and various underground magazines with all sorts of extremist views. Nor was it much harder to find such material if that was what was desired. As ever, the answer is supervision of young children, along with education as they become more and more independent.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

'Protecting the kids' starts at home

Unless I missed it the most basic ‘protection’ seems to have been left out: Parents teaching their kids how to properly deal with questionable content online.

‘This is the sort of stuff you might encounter online; if you have any questions or you run across something that bothers you, you should feel comfortable coming to me, your parent, and I will talk to you about your concerns/questions and how you might deal with and/or avoid that sort of stuff in the future.’

PaulT (profile) says:

Re:

I have a feeling that the parents who do such things aren’t the ones who have that problem, and are way less vocal…

I saw a story today about a woman in the UK complaining about condoms/minor sex toys being available in a major chain store. The thing is, she’s 20 years younger than me and I’m sure I saw those things in the same stores when I was a kid. So, these people apparently have real problems with things they saw as children themselves. I can understand not wanting to explain the worst of online porn to a 5 year old, but if they’re losing their minds over things they saw as children, well…

This comment has been deemed insightful by the community.
PaulT (profile) says:

I mean, the only thing that will make things safer for kids online is the realisation that the internet isn’t for kids. If you let your child go online unsupervised, it’s the same as if you let your kid wander round 42nd Street in New York in the late 70s. They might come back OK, but chances are high that they’ll see something you don’t want them to see.

Despite the attempts of various people over the years, it’s still not safe to let your kid wander around NY alone after dark, and it’s also not safe for them to be online without you doing something. Almost every solution consists of “don’t let your kid go down to NY after dark” or “chaperones are necessary”.

I understand that the internet and social media are attractive to kids and that parents have pressures that are hard to deal with, but solutions that consist of telling everyone else to take the responsibility won’t work.

At the very least, any workable solution seems to depend on telling American citizens that they have to have a government ID that can track them, and that every other country on the planet has to honour it… which doesn’t seem realistic.

I mean, FTA:

a government agency could engage the industry and foster the development of best practices

OK… but the internet is global. How does this agency enforce it elsewhere?

One of the biggest problems for users who want to abandon a social media platform is how hard it is to rebuild their network elsewhere

Yes. But “network” in this case isn’t the computer network, it’s the people. Lots of people want to move away from Xitter but they stay because their audience is there and there are problems with the other major venues (Bluesky is invite only, Threads isn’t in the EU, people have problems understanding Mastodon). As we saw when people left MySpace for Facebook years ago, it’s possible for audiences to go somewhere else, but it has to be established where they go.

Technical interoperability can help, but it’s really just a case of where the crowd decides to congregate. That’s not something that any tech platform can help with. Interoperability will go a long way to make it easier to switch, but if your favourite people are on one platform, the audience won’t easily switch – and the content providers won’t switch to places that don’t have an audience.

Cowardly and Anonymous says:

Re:

How do you square this view with the fact that things like YouTube Kids and Disney’s plethora of children’s content exist? What about school websites and academic content and tutorials meant to help kids learn? Correct me if I’m wrong, but I don’t recall any amusement parks or daycares along ‘that’ section of 42nd Street.

The difficulty is that the Internet DOES contain things suitable for children, and sometimes intended primarily for children; but it also contains every kind of porn you could imagine, and all sorts of violence from every conflict in the last 20 years, and suicide-baiting, and pro-ana sites, and things you wouldn’t even imagine. How do we help children in an environment where these two VERY different types of content are both available on the same Internet?

Cowardly and Anonymous says:

Re: Re: Re:

I absolutely agree! You can implement some tools to try and make it easier to help parents and children work together to negotiate the Internet, but as unsatisfying and difficult as this answer might be, the real solution comes in the form of good parenting. It isn’t easy to generalize, certainly not in this day and age. But that’s what actually works.

We can pretend all we want that the Internet isn’t suited for children, or that every adult must stop doing anything that isn’t child-safe in case, somewhere, a child somehow becomes involved. But it won’t solve the core problems putting kids at risk. All we end up with are the same risks for kids, and a Disneyland Internet that refuses to give any space to anything potentially ‘objectionable’.

And I don’t envy the kids! Just one example: in my day, the adult content was all far away on other servers, on other websites somewhere. Now there are jihadist preachers and neo-nazis on YouTube, and other bad actors on the other platforms they’re using. It makes it challenging to actually stay away from the stuff at times. There are multiple complex and interwoven problems, and there is no one solution that is going to resolve all of the problems children face online. But one thing that at least helps with these problems is a solid education in how to use the Internet as a child. With ongoing leadership from a parent they trust, that child can share their concerns and continue to grow in both their ability to use the Internet and their ability to stay safe in the process.

Nimrod (profile) says:

Safe

There is one thing, and one thing alone, that will keep children safe. It’s the same thing that has protected children throughout history. It’s called ADEQUATE PARENTAL INVOLVEMENT. Too bad nobody has the time for that any more. They feel the need to outsource THEIR RESPONSIBILITIES to anyone who will shoulder the load, then it all hits the fan when things don’t turn out the way they had anticipated.

Cowardly and Anonymous says:

Or, bear with me here, we could get rid of all the nasty stuff and leave the benign content alone. Just automate everything to not allow anything that includes CSAM, or grooming, or anything illegal, and leave everything else alone. Do like we used to do and find a way to make the “impossible” happen, instead of making excuses.

/s

This comment has been flagged by the community.

MindParadox (profile) says:

I remember in the 80s as a kid we had those commercials that were literally “Don’t share your name over the phone” (by the late 80s, changed to “online”), “Don’t tell people if you are home alone,” “Never give out your address to strangers,” “Never give your real name or any information online.”

I mean, those things stuck with me, seriously.

And then Myspace/Facebook/Twitter/whatever and all the other “Share all of your most personal details with the entire world!” sites happened, and the “solution” to making kids safe online was to make everyone under 13 or so lie about their age, cause ya know, making them do everything online behind the website’s back is SOOOOO much better 😛

Have kids-only spaces. Lego does this, so it obviously can be done. Keep the adults out of em (the Lego ecosystem is actually pretty self-policing, cause the kids don’t want adults there so they report em, and the adults get blocked).

Have actual serious deterrents for grooming and such. Note, I didn’t just say online. It should be YEARS in prison at a minimum first offense for that crap.

Sum Wun says:

Re:

A brain exercise.

Due to corrupt corporate copyright control, private organizations such as Steam/Valve are legally allowed to place a shit-ton of unexamined code on the computers of children, purportedly to police and prevent the possession and use of pirated game software.

Nobody is examining this code or even trying to determine precisely what it does while it is on the computers of millions of children nationwide, and it has full access to the child’s computer and its native software.

Now pretend that you’re a criminally corrupt company that wants to utilize this utterly unsupervised and completely legal ability to store and run code on children’s computers for nefarious purposes. What possible shenanigans could you pull off?

That One Guy (profile) says:

Re:

Great, now come up with a way to do that that doesn’t result in a massive stockpile of personally identifiable data that will be hacked on a regular basis, doesn’t negatively impact individuals or groups that might not be able/willing to provide the credentials to verify their age, doesn’t chill speech by imposing a mandatory ID check on any ‘questionable’ content that people might not want to have their ID attached to, can’t be trivially bypassed via faking credentials…

ECA (profile) says:

Child safety?

There REALLY isn’t such a thing.
You have to prove to me that every family is a safe haven for children FIRST. They AREN’T.
Then you have to prove to me that children AREN’T being taught to BE SAFE while outside, while WALKING TO SCHOOL, WHILE IN A MALL, WHILE playing in a park.
90% of this has to deal with WHO IS RESPONSIBLE, WHO IS DEMANDED, WHO IS REQUIRED to TEACH/HELP/ASSIST children as they GROW UP. AND IT’S NOT THE GOV., NOT THE INTERNET, NOT your NEIGHBORS, NOT THE SCHOOL.

PARENTS ARE SUPPOSED TO BE RESPONSIBLE, but who is at home? The Capitalists don’t give a Flying Doodle wop, BECAUSE THEY AREN’T supposed to be responsible.

CAN you give the ability to PARENTS to restrict things, FOR A REASONABLE PRICE OR FREE? Yes and NO. There are NOW phone restrictions that can be applied, and computer restrictions, so what else is NEEDED?

Is there anyone in here that used to look at the CARTOON SITES? Cartoon Network and others, in the past and RECENTLY…
#1 they are CRAP. #2 they WANT your money. #3 they want your KID to PLEAD for your money.

There is NOWHERE, you can drop your kids off to have a FREE DAY. Daycare costs per hour about as much as min+ wage. A Baby sitter? $10 per hour??
I hope you still have family that LOVES you, and YOU can help them as THEY help you, to get 1 day off from your kids.

Who remembers the commercial, “What’s in your wallet?”

NoahVail (profile) says:

Unhelpful reply of the day

Our problem mostly solved itself.

We couldn’t afford phones for the kids and kept computers in the common area. They didn’t frequent social media because it wasn’t interesting to them.

I did run a reverse proxy at home to protect my kids from ads and malware. It also filtered particularly egregious content, but I mostly set up the filtering and forgot about it. I didn’t monitor. I did teach some “A can lead to B. Here’s how B might harm you in the long run,” and left them to it.

They do a fair job of governing themselves.
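The filtering half of a setup like that doesn’t require anything exotic – at its core, a home proxy or DNS filter just checks each requested hostname against a blocklist, including subdomains. A minimal sketch of that check (the blocked domains are made-up placeholders, not a real blocklist):

```python
# Core check a filtering proxy or DNS filter performs per request.
# The blocked domains below are illustrative placeholders.

BLOCKLIST = {"ads.example", "malware-host.example"}

def is_blocked(hostname: str) -> bool:
    """True if hostname or any parent domain is on the blocklist."""
    hostname = hostname.lower().rstrip(".")  # normalize case and trailing dot
    parts = hostname.split(".")
    # Walk up the labels: a.b.c -> a.b.c, b.c, c
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))
```

A real deployment would hang this off an actual proxy or resolver and use a maintained blocklist, but the matching logic is most of the trick.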
