Andy Jung's Techdirt Profile


Posted on Techdirt - 12 March 2026 @ 12:16pm

Don’t Ban Kids From Using Chatbots

Laws prohibiting minors from accessing AI-powered chatbots like ChatGPT would violate the First Amendment. But that’s not stopping lawmakers from trying.

Senator Josh Hawley has introduced the Guidelines for User Age-verification and Responsible Dialogue Act of 2025 (GUARD Act), which would require AI companies to “prohibit” minors under “18 years of age” from “accessing or using” AI chatbots that “produce[] new expressive content” in response to “open-ended natural-language or multimodal user input.” Earlier this year, Virginia and Oklahoma introduced similar bills, as did California last September. The crux is the same: to prohibit minors from accessing chatbots capable of producing human-like speech.

If passed, these bills will get struck down in court for violating the First Amendment, which prohibits laws “abridging the freedom of speech.” Specifically, minors have a First Amendment right to receive information. The Supreme Court has explained, “minors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them.” This right applies to the Internet with full force.

When analyzing these laws under the First Amendment, a court would start by asking whether the government is regulating speech. Speech is a broad concept, including written and spoken words, photos, music, and other forms of expression like computer code and video games. Chatbot outputs are speech; they comprise all these forms of expression. Laws prohibiting minors from accessing chatbots regulate speech by cutting off young users from the ideas and information communicated in outputs.

Next, a court would assess whether minor chatbot bans regulate protected or unprotected speech. The vast majority of outputs are protected speech: Teens use chatbots to search for information, get help with schoolwork, find entertainment, and get news. Here, the only relevant category of unprotected speech is content that is obscene to minors. The GUARD Act, for example, states that “chatbots can generate and disseminate harmful or sexually explicit content to children,” and the Virginia bill would block chatbots “capable of … [e]ngaging in erotic or sexually explicit interactions with the minor user.” Sexually explicit outputs to minors are likely unprotected speech, but the bills go much further by blocking all youth access to chatbots.

Because these bills regulate a mix of protected and unprotected speech, the court would then assess whether the prohibition on teen usage is content-based or content-neutral. Content-based restrictions target speech based on its viewpoint, subject matter, topic, or substantive message. On the other hand, content-neutral laws regulate nonsubstantive aspects of speech, like its time, place, or manner.

These bills are content-based because they prohibit access based on the subject matter of chatbot outputs. The GUARD Act would prohibit minors from accessing chatbots capable of “interpersonal or emotional interaction, friendship, companionship, or therapeutic communication.” The Oklahoma bill would block chatbots that “express[] or invit[e] emotional attachment” or “form ongoing social or emotional bonds with users, whether or not such systems also provide information.” Similarly, the Virginia bill would ban minors from accessing chatbots “capable of … offering mental health therapy.” Regardless of the pros and cons of minors accessing such information, the prohibitions are based on the content of the outputs — not on merely nonsubstantive aspects of the speech.

Because these bills are content-based, the court would apply strict scrutiny. The government would have to prove the bills are narrowly tailored to advance a compelling governmental interest and that they are the least restrictive means of serving that interest. Banning minors from accessing chatbots arguably advances “a compelling interest in protecting the physical and psychological well-being of minors” by “shielding minors from the influence of” obscene outputs.

Strict scrutiny, however, requires lawmakers to use a less restrictive means than bans to protect minors. Lawmakers could, for example, require AI companies to provide parental controls or strict safeguards preventing their models from engaging in sexually explicit conversations with young users. In fact, AI companies already have policies and features to protect minor users. Because these bills aren’t narrowly tailored, a court would strike them down for violating the First Amendment.

Banning minors from using chatbots is also bad policy. Last October, California Governor Gavin Newsom vetoed the state’s proposed ban, stating, “AI is already shaping the world, and it is imperative that adolescents learn how to safely interact with AI systems … We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether.”

Most U.S. teens use AI chatbots. These young users have a First Amendment right to receive the information the AIs output, which is generally protected speech. Prohibiting access to chatbots would violate minors’ constitutional rights and deprive them of the vast benefits of AI.

Andy Jung is associate counsel at TechFreedom, a nonprofit, nonpartisan think tank focused on technology law and policy.

Posted on Techdirt - 5 June 2024 @ 01:34pm

Drake vs. Kendrick Lamar Proves AI Music Is Regulated

In the last year, the Canadian rap artist Drake has embroiled himself in several high profile controversies involving AI-generated music. The ongoing saga underscores how existing laws apply to artificial intelligence, dispelling the myth that AI, including AI music, is unregulated.

In April 2023, TikTok user ghostwriter977 released “Heart on My Sleeve,” featuring AI-generated vocals of Drake and The Weeknd. The song went viral, racking up millions of listens. In response, Drake’s record label filed a takedown notice, and streaming services removed the song.

Bloomberg disparaged “Heart on My Sleeve” as “unregulated AI music, which has driven a wedge through multiple intellectual property rights.” In fact, intellectual property law clearly applies to AI-generated music. The current beef between Drake and Kendrick Lamar proves it.

This April, Drake released “Taylor Made Freestyle,” featuring AI-generated impersonations of Tupac and Snoop Dogg. The irony was palpable. The following week, Tupac’s estate sent Drake a cease-and-desist letter alleging “unauthorized use of Tupac’s voice and personality” and “a flagrant violation of Tupac’s publicity and the estate’s legal rights.” Drake removed the song.

Intellectual property law and state law are at play in Drake’s ongoing AI feud. Last year, when internet users uploaded songs featuring AI-generated vocals of Drake, Universal Music Group used copyright law — specifically, the DMCA notice and takedown process — to remove the allegedly infringing content. Universal also contacted streaming platforms like Spotify and Apple, demanding the services block AI companies from scraping musical elements like melodies from copyrighted songs.

The AI content creators could have filed DMCA counter-notices contesting Universal’s copyright claims, perhaps arguing, for example, that “Heart on My Sleeve” is fair use. In response, to maintain the takedown, the label would have had to file a copyright infringement suit in court. But the creators did not contest, and the songs were removed.

A year later, Drake himself released an allegedly illegal AI-generated song, and Tupac’s estate threatened to sue. The estate invoked Tupac’s right of publicity, an IP right protecting against the misappropriation of a person’s likeness — in this case, the late rapper’s voice — for commercial benefit. Drake could have left the song up and forced the estate to litigate; instead, he removed it, probably at the behest of his lawyers. Meanwhile, Kendrick Lamar waived copyright claims on his diss tracks aimed at Drake, allowing content creators to monetize reaction videos and remixes.

Ultimately, the extent to which existing laws apply to AI music depends on the jurisdiction of the legal challenge. California, for example, has strict publicity rights favoring artists. Law Professor Mark Bartholomew indicated that Drake likely violated the law “because the rights holders [Tupac’s estate] are in California, and California has a pretty vigorous right to your identity in various forms that extends years after death.” But “if we were talking about a celebrity who is from a different state, we’d have a different analysis.”

How exactly an artist uses AI to craft a song is also relevant to the legal analysis, especially under copyright law. Copyright applies to both the melody and lyrics of a song. ghostwriter977, for example, declined to clarify which elements of “Heart on My Sleeve” were AI-generated versus self-written. Although the beat and lyrics appear original, the song featured a producer tag from Metro Boomin, which Universal considered an unauthorized sample.

Record labels would love to see more regulation of AI music. Last July, for example, UMG urged the Senate Judiciary Committee “to enact a federal Right of Publicity statute.” But stricter IP laws would hurt content creators, handing record labels yet another tool to squash creative, fair uses. If anything, Congress should consider legislation clarifying how the fair use doctrine applies to AI.

Unfortunately, Congress appears receptive to the labels’ pleas. Earlier this spring, Senator Thom Tillis (R-NC) opened his testimony before a Senate subcommittee on IP by playing Drake’s AI-Tupac verse. Tillis called for “legislation addressing the misuse of digital replicas” in order to ensure AI-generated music is “under control.”

Everything is under control. This April, just as last April, existing law was sufficient to resolve Drake’s AI-related legal disputes, providing concrete remedies despite relatively novel facts involving new technologies. The saga underscores the legal system’s ability to cleanly manage fact patterns involving AI. There may be gaps in the law, but the fact remains: AI music is already regulated.

Posted on Techdirt - 19 July 2022 @ 10:46am

California’s Social Media Bill Flies In The Face Of The First Amendment

California has officially joined the growing list of states attempting to regulate how social media companies run their platforms. The state’s proposed legislation, however, faces a major legal obstacle: the Constitution.

California lawmakers are marching ahead with AB 2408, the Social Media Platform Duty to Children Act. On June 28, the Judiciary Committee unanimously passed an amended version of the bill, tweaking several provisions. Next, AB 2408 must pass the Senate Appropriations Committee and the California Senate before Governor Gavin Newsom may sign the bill into law.

AB 2408 would impose a duty on social media platforms to avoid addicting minor users. Although protecting minors is a noble cause, regulating how social media platforms design their services likely violates the First Amendment, which protects platforms’ right to curate content based on their editorial discretion.

As with most bills, the devil’s in the details. AB 2408’s structure and prohibitions would limit platforms’ abilities to arrange and moderate content for minors.

AB 2408 defines “Addict” as the act of “knowingly or negligently caus[ing] addiction through any act or omission.” The bill defines “Addiction” as “use of one or more social media platforms” resulting in “preoccupation or obsession with, or withdrawal or difficulty to cease or reduce use” in addition to “physical, mental, emotional, developmental, or material harms to the user.”

The bill allows the Attorney General to sue social media platforms for implementing “a design, feature, or affordance” which leads to addiction. To prevail under AB 2408, a plaintiff must prove that a minor “became addicted and was therefore harmed,” that a design or feature on the platform “was a substantial factor” in the addiction, and that it “was reasonably foreseeable” that the design or feature would lead to addiction.

A recent amendment removed a private right of action which would have allowed minor users and parents to sue platforms directly. Lawmakers also changed the definition of “social media platform.” The amendments, however, do little to change the bill’s constitutionality.

In short, AB 2408 aims to prohibit social media platforms from building features which the platforms know, or ought to know, will result in “addiction” for minors.

In general, social media platforms design features to make their platforms more useful or enjoyable. For example, any internet platform worth its salt uses algorithms to display, recommend, and tailor content based on a user’s browsing activity and interests. By restricting how social media companies build and use these features, AB 2408 interferes with their editorial discretion by limiting how platforms display and amplify content.

AB 2408 appears less objectionable than the social media regulations currently brewing in Texas and Florida, which are geared towards forcing platforms to host conservative content. Ultimately, however, all three bills seek to regulate how social media platforms moderate content. It’s unlikely these bills will withstand First Amendment challenges.

Texas’s and Florida’s social media bills are already running into trouble in court. On May 31, the Supreme Court suspended Texas’s HB20, reimposing a preliminary injunction on enforcement of the legislation.

Just eight days earlier, the U.S. Court of Appeals for the Eleventh Circuit held that Florida’s social media bill violates the First Amendment. Circuit Judge Kevin Newsom explained: “Put simply, with minor exceptions, the government can’t tell a private person or entity what to say or how to say it.”

The court concluded that social media platforms’ “‘content-moderation’ decisions constitute protected exercises of editorial judgment, and that the provisions of the new Florida law that restrict large platforms’ ability to engage in content moderation unconstitutionally burden that prerogative.”

Proponents of AB 2408 argue that the bill only regulates business conduct, not speech. But limiting platforms’ abilities to build features used to display content implicates their constitutionally protected editorial judgment.

In Reno v. ACLU, the Supreme Court applied the First Amendment to the Internet, striking down provisions of the 1996 Communications Decency Act which criminalized the intentional transmission of “obscene or indecent” messages and information depicting or describing “sexual or excretory activities or organs” in an “offensive” manner. The Court found “no basis for qualifying the level of First Amendment scrutiny that should be applied to” the Internet.

More than two decades earlier, in Miami Herald v. Tornillo, the Supreme Court held that the government cannot regulate a newspaper’s “choice of material” or “the decisions made as to limitations on the size and content of the paper.”

Social media features designed to display content to users are analogous to newspaper editors dictating the size and content of their paper. Just as it protects newspapers, the First Amendment likely limits California’s authority to punish Internet platforms for their editorial decisions related to displaying and arranging content on their services. Consequently, AB 2408 faces the same First Amendment roadblocks as the Texas and Florida bills.

Protecting children is important. That’s undeniably true. Lawmakers, however, must pursue these policy objectives within the confines of the Constitution.

Andy Jung is a Legal Fellow at TechFreedom, a non-profit, non-partisan think tank focused on technology law and policy. Andy received his law degree from Antonin Scalia Law School in Arlington, VA. Before law school, Andy worked for software startup companies in California.

Posted on Techdirt - 7 April 2022 @ 03:31pm

Shifting Sands In The Tech Sector

In the U.S., politicians are itching to disrupt Big Tech. In January, the Senate Judiciary Committee approved the American Innovation and Choice Online Act, introduced by Senator Klobuchar in October 2021, which would prohibit large technology companies like Amazon, Apple, Facebook, and Google from preferencing their own products and services.

Senator Grassley celebrated the bill for helping “level the playing field for small businesses and entrepreneurs.” But what if Congress could offer small businesses an even better deal than a “level playing field”?

Enter the regulatory sandbox.

Regulatory sandboxes are legal frameworks that allow qualifying companies to sell products and services without complying with the red tape governing their industry. Sandbox companies are not exempt from all regulations; a sandbox may, for example, preserve consumer protections like product liability. Sandboxes may also expire after a set period of time, and companies exit the sandbox once they outgrow the qualifying criteria.

Legislatures create regulatory sandboxes to reduce legal pressure on growing businesses in order to encourage experimentation and innovation. Regulators then collaborate with sandbox companies to collect data on the industry. In turn, lawmakers use the sandbox data to inform legislative changes and better serve the business community.

In 2018, Arizona created the first successful regulatory sandbox in the U.S. Since then, Wyoming, Utah, Kentucky, Nevada, Vermont, Hawaii, Florida, South Dakota, West Virginia, and North Carolina have developed sandboxes of their own in a variety of industries.

Arizona’s regulatory sandbox for fintech companies defines the sandbox as a program “that allows a person to temporarily test an innovation on a limited basis without otherwise being licensed or authorized to act under the laws of this state.” Through this simple definition, Arizona’s sandbox lifts licensing and authorization requirements, lowering barriers to entry for innovative companies.

The federal government has already experimented with a regulatory sandbox for drones. Under the Unmanned Aircraft Systems Integration Pilot Program, the Federal Aviation Administration (FAA) and Department of Transportation allowed state, local, and tribal governments to partner with the private sector to test low-altitude drone operations. The FAA currently allows drone pilots to apply for Part 107 Waivers, which allow qualified pilots to perform certain flight activities prohibited to the general public.

A regulatory sandbox for technology startups could mirror the Part 107 waiver system, with the federal government or an agency issuing waivers to growing and innovative tech companies. To specify which companies qualify, Congress could repurpose the term used by current antitrust bills: “covered platform.”

In the context of the sandbox, “covered platform” would apply to online platforms or services with fewer than a specified number of users and below a set market capitalization. Lawmakers could tailor the definition as needed by adjusting the user and market-cap thresholds.

A federal regulatory sandbox for technology could include provisions covering consumer protection, data privacy, and environmental considerations. Additionally, Congress could work with sandbox companies to collect data on content moderation and disinformation. In doing so, lawmakers would receive insight into hot button issues.

Through this bottom-up approach, the government would encourage more competitors to enter the technology marketplace, promoting competition in a very literal sense. Congress should prioritize the creation of new technology companies rather than attempting to dismantle successful incumbent firms.
