Highlights From Former Rep. Chris Cox's Amicus Brief Explaining The History And Policy Behind Section 230

from the future-reference dept

The Copia Institute was not the only party to file an amicus brief in support of Airbnb and Homeaway’s Ninth Circuit appeal of a district court decision denying them Section 230 protection. For instance, a number of Internet platforms, including those like Glassdoor, which hosts specialized user expression, and those like eBay, which hosts transactional user expression, filed one pointing out that a ruling against Airbnb and Homeaway would effectively deny Section 230 protection to far more platforms, hosting far more kinds of user speech, than just the platforms behind the instant appeal.

And then there was this brief, submitted on behalf of former Congressman Chris Cox, who, with then-Representative Ron Wyden, had been instrumental in getting Section 230 on the books in the first place. With this brief the Court does not need to guess whether Congress intended for Section 230 to apply to platforms like Airbnb and Homeaway; the statute’s author confirms that it did, and why.

In giving insight into the statutory history of Section 230, the brief addresses the two main issues raised by the Airbnb appeal, issues that continue to come up over and over again in Section 230-related litigation in state and federal courts all over the country: does Section 230 apply to platforms intermediating transactional user expression, and does Section 230’s pre-emption language preclude efforts by state and local authorities to hold these platforms liable for intermediating the consummation of that transactional speech? Cox’s brief describes how Congress intended both questions to be answered in the affirmative, and thus may be relevant to these other cases. With that in mind, we are archiving, and summarizing, the brief here.

To illustrate why Section 230 should apply in these situations, first the brief explains the historical context that prompted the statute in the first place:

In 1995, on a flight from California to Washington, DC during a regular session of Congress, Representative Cox read a Wall Street Journal article about a New York Superior Court case that troubled him deeply. The case involved a bulletin board post on the Prodigy web service by an unknown user. The post said disparaging things about an investment bank. The bank filed suit for libel but couldn’t locate the individual who wrote the post. So instead, the bank sought damages from Prodigy, the site that hosted the bulletin board. [page 3]

The Stratton Oakmont v. Prodigy decision alarmed Cox for several reasons. One, it represented a worrying change in judicial attitudes towards third party liability:

Up until then, the courts had not permitted such claims for third party liability. In 1991, a federal district court in New York held that CompuServe was not liable in circumstances like the Prodigy case. The court reasoned that CompuServe “ha[d] no opportunity to review [the] contents” of the publication at issue before it was uploaded “into CompuServe’s computer banks,” and therefore was not subject to publisher liability for the third party content. [page 3-4]

It had also resulted in a $200 million damage award against Prodigy. [page 4]. Damage awards like these can wipe technologies off the map. If platforms had to fear the crippling effect that even one such award, arising from just one user, could have on their developing online services, it would dissuade them from being platforms at all. As the brief observes:

The accretion of burdens would be especially harmful to smaller websites. Future startups, facing massive exposure to potential liability if they do not monitor user content and take responsibility for third parties’ legal compliance, would encounter significant obstacles to capital formation. Not unreasonably, some might abjure any business model reliant on third-party content. [page 26]

Then there was also a third, related concern: according to the logic of Stratton Oakmont, which had distinguished the earlier Cubby v. CompuServe decision, Prodigy, unlike CompuServe, had “sought to impose general rules of civility on its message boards and in its forums.” [page 4].

The perverse incentive this case established was clear: Internet platforms should avoid even modest efforts to police their sites. [page 4]

The essential math was stark: Congress was worried about what was happening on the Internet. It wanted platforms to be an ally in policing it. But without protection for platforms, they wouldn’t be. They couldn’t be. So Cox joined with then-Representative Wyden to craft a bill that would supersede the Stratton Oakmont holding. The result was the Internet Freedom and Family Empowerment Act, H.R. 1978, 104th Cong. (1995), which, by a 420-4 vote reflecting significant bipartisan support, became an amendment to the Communications Decency Act (Congress’s attempt to address the less desirable material on the Internet), which then came into force as part of the Telecommunications Act of 1996. [page 5-6]. The Supreme Court later gutted the indecency provisions of the CDA in Reno v. ACLU, but the part of the CDA codified at Section 230 has stood the test of time. [page 6 note 2].

The statutory language provided necessary relief to platforms in two important ways. First, it included a “Good Samaritan” provision, meaning that “[i]f an Internet platform does review some of the content and restricts it because it is obscene or otherwise objectionable, then the platform does not thereby assume a duty to monitor all content.” [page 6]. Because keeping platforms from having to monitor was the critical purpose of the statute:

All of the unique benefits the Internet provides are dependent upon platforms being able to facilitate communication among vast numbers of people without being required to review those communications individually. [page 12]

The concerns were practical. As other members of Congress noted at the time, “There is no way that any of those entities, like Prodigy, can take the responsibility [for all of the] information that is going to be coming in to them from all manner of sources.” [page 14]

While the volume of users [back when Section 230 was passed] was only in the millions, not the billions as today, it was evident to almost every user of the Web even then that no group of human beings would ever be able to keep pace with the growth of user-generated content on the Web. For the Internet to function to its potential, Internet platforms could not be expected to monitor content created by website users. [page 2]

Thus Section 230 established a new rule expressly designed to spare platforms from having to attempt this impossible task in order to survive:

The rule established in the bill […] was crystal clear: the law will recognize that it would be unreasonable to require Internet platforms to monitor content created by website users. Correlatively, the law will impose full responsibility on the website users to comply with all laws, both civil and criminal, in connection with their user-generated content. [But i]t will not shift that responsibility to Internet platforms, because doing so would directly interfere with the essential functioning of the Internet. [page 5]

That concern for the essential functioning of the Internet also explains why Section 230 was not drawn narrowly. If Congress had only been interested in protecting platforms from liability for potentially defamatory speech (as was at issue in the Stratton Oakmont case) it could have written a law that only accomplished that end. But Section 230’s language was purposefully more expansive. If it were not more expansive, while platforms would not have to monitor all the content they intermediated for defamation, they would still have to monitor it for everything else, and thus the law would have accomplished nothing:

The inevitable consequence of attaching platform liability to user-generated content is to force intermediaries to monitor everything posted on their sites. Congress understood that liability-driven monitoring would slow traffic on the Internet, discourage the development of Internet platforms based on third party content, and chill third-party speech as intermediaries attempt to avoid liability. Congress enacted Section 230 because the requirement to monitor and review user-generated content would degrade the vibrant online forum for speech and for e-commerce that Congress wished to embrace. [page 15]

Which returns us to why Section 230 was intended to apply to transactional platforms. Congress didn’t want to be selective about which types of platforms could benefit from liability protection. It wanted all of them to benefit:

[T]he very purpose of Section 230 was to obliterate any legal distinction between the CompuServe model (which lacked the e-commerce features of Prodigy and the then-emergent AOL) and more dynamically interactive platforms. […] Congress intended to “promote the continued development of the Internet and other interactive computer services” and “preserve the vibrant and competitive free market” that the Internet had unleashed. Forcing web sites to a CompuServe or Craigslist model would be the antithesis of the congressional purpose to “encourage open, robust, and creative use of the internet” and the continued “development of e-commerce.” Instead, it will slow commerce on the Internet, increase costs for websites and consumers, and restrict the development of platform marketplaces. This is just what Congress hoped to avoid through Section 230. [page 23-24]

And it wanted them all to be protected everywhere because Congress also recognized that they needed to be protected everywhere in order to be protected at all:

A website […] is immediately and uninterruptedly exposed to billions of Internet users in every U.S. jurisdiction and around the planet. This makes Internet commerce uniquely vulnerable to regulatory burdens in thousands of jurisdictions. So too does the fact that the Internet is utterly indifferent to state borders. These characteristics of the Internet, Congress recognized, would subject this quintessentially interstate commerce to a confusing and burdensome patchwork of regulations by thousands of state, county, and municipal jurisdictions, unless federal policy remedied the situation. [page 27]

Congress anticipated that states and local authorities would be tempted to impose liability on platforms, and in doing so interfere with the operation of the Internet by forcing platforms to monitor after all and thus cripple their operation:

Other state, county, and local governments would no doubt find that fining websites for their users’ infractions is more convenient than fining each individual who violates local laws. Given the unlimited geographic range of the Internet, unbounded by state or local jurisdiction, the aggregate burden on an individual web platform would be multiplied exponentially. While one monitoring requirement in one city may seem a tractable compliance burden, myriad similar-but-not-identical regulations could easily damage or shut down Internet platforms. [page 25]

So, “[t]o ensure the quintessentially interstate commerce of the Internet would be governed by a uniform national policy” of sparing platforms the need to monitor, Congress deliberately foreclosed the ability of state and local authorities to interfere with that policy with Section 230’s pre-emption provision. [page 10]. Without this provision, the statute would be useless:

Were every state and municipality free to adopt its own policy concerning when an Internet platform must assume duties in connection with content created by third party users, not only would compliance become oppressive, but the federal policy itself could quickly be undone. [page 13]

This pre-emption did not make the Internet a lawless place, however. Laws governing offline analogs to the services starting to flourish on the web would continue to apply; Section 230 simply prevented platforms from being held derivatively liable for user generated content that violated them. [page 9-10].

Notably, none of what Section 230 proposed was a controversial proposition:

When the bill was debated, no member from either the Republican or Democratic side could be found to speak against it. The debate time was therefore shared between Democratic and Republican supporters of the bill, a highly unusual procedure for significant legislation. [page 11]

It was popular because it advanced Congress’s overall policy to foster the most beneficial content online, and the least detrimental.

Section 230 by its terms applies to legal responsibility of any type, whether under civil or criminal state statutes and municipal ordinances. But the fact that the legislation was included in the CDA, concerned with offenses including criminal pornography, is a measure of how serious Congress was about immunizing Internet platforms from state and local laws. Internet platforms were to be spared responsibility for monitoring third-party content even in these egregious cases.

A bipartisan supermajority of Congress did not support this policy because they wished to give online commerce an advantage over offline businesses. Rather, it is the inherent nature of Internet commerce that caused Congress to choose purposefully to make third parties and not Internet platforms responsible for compliance with laws generally applicable to those third parties. Platform liability for user-generated content would rob the technology of its vast interstate and indeed global capability, which Congress decided to “embrace” and “welcome” not only because of its commercial potential but also “the opportunity for education and political discourse that it offers for all of us.” [page 11-12]

As the brief explains elsewhere, Congress’s legislative instincts appear to have been borne out, and the Internet today is replete with valuable services and expression. [page 7-8]. Obviously not everything the Internet offers is necessarily beneficial, but the challenges the Internet’s success poses don’t negate the policy balance Congress struck. Section 230 has enabled those successes, and if we want its commercial and educational benefits to continue to accrue, we need to make sure that the statute’s critical protection remains available to all who depend on it to realize that potential.



Comments on “Highlights From Former Rep. Chris Cox's Amicus Brief Explaining The History And Policy Behind Section 230”

Anonymous Coward says:

Is Section 230 CDA to benefit The Public or corporations?

Obviously The Public by providing speech outlets. You could not find a single politician who’d say it’s to benefit corporations. — Though it’s entirely possible that was and is the intent: a stealthy form of censorship.

Corporations have PR departments with large budgets to get their message out. It’s only individual "natural" persons who need a way to state their views.

Masnick (or this alleged lawyer) highlights that CDA 230 has an immunity clause that allows corporations to HOST content without the liability of PUBLISHING it:

"No provider or user of an interactive computer service shall be held liable on account of-

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…"

Note first that it states causes valid in common law. It’s a requirement for simple decency. Not controversial so far…

Anonymous Coward says:

Corporatists try to twist CDA into control of OUR PUBLISHING!

Yes, I’m aware this is slightly off-topic. But the ultimate purpose of the above EFF (funded by Google) blather is to sweep on to:

They claim that corporations can use that "restrict access or availability of" clause to, whenever they wish — even explicitly over YOUR First Amendment Rights — step in and EDIT comments, to point of becoming THE publisher, even to PREVENT we "natural" persons from publishing on "their" platforms at all!

But those are OUR platforms, The Public’s, NOT theirs.

Corporations are allowed merely to operate the machinery which is to convey The Public’s views. Corporations are NOT to control who gets on, nor WHAT The Public publishes, except under OUR clear common law terms.

But Masnick is for corporations CONTROLLING the speech and outlets of "natural" persons. That’s repeated often here, can’t be mistaken.

Such control (by ANY entity) to remove First Amendment Rights from The Public CANNOT be purpose of ANY statute. It’d be null and void because directly UN-Constitutional.

It’s NOT law, only the assertion of corporatists: no court has yet supported what Masnick claims. Corporatists are reaching for the moon without even a step-ladder. It’s simply a trick Fascists are trying to pull.

The provisions over-riding First Amendment Rights ONLY apply if done in "good faith" for The Public’s purposes. A corporation intent on wiping out YOUR publishing in favor of its own views CANNOT be "good faith" for The Public, but only de facto tyranny and censorship.

John Roddy (profile) says:

Re: Corporatists try to twist CDA into control of OUR PUBLISHING!

Funded by GOOGLE? It’s ALWAYS funded by “Google,” isn’t it? What about Microsoft? They have a search engine too, and they’re particularly bad about violating the natural common persons law thing you keep referencing (I think). You really need to broaden your views and realize that Google does actually have competition. Slag off Yahoo! every now and then or something.

And what about Ask Jeeves? That one has been kinda off the radar for a while. In fact, they would be quite angry at Google specifically, wouldn’t they? And their inferior search engine would lead to less useful results to reference when trying to call out Google’s corporate practices…

Are…are you that butler? That would explain A LOT.

Anonymous Coward says:

So WHY assert that corporations operating "platforms"

are empowered to control the speech of "natural" persons? That’s EXACT OPPOSITE INTENT OF ALL LAW.

Only Mitt Romney and Mike Masnick will say that corporations are "persons" — Romney got roundly hooted for it, and Masnick only does it here in this little walled garden where he’s cultivated vegetables who don’t question him.

Masnick is a total corporatist. Only the mistaken presumption that he acts in "good faith" and shares YOUR views gives him any credibility. — Take away that presumption for a week, and READ what he writes: he’s very open about particularly that "platforms" have an alleged First Amendment Right to arbitrarily control access of we "natural" persons. Masnick believes not only that The Public can be denied access, but since Google controls search, that it can effectively "hide" speech even on alternative smaller outlets you’re forced to use. — Masnick uses "hiding" right here to disadvantage dissenters until they give up and quit commenting. He can thereby claim doesn’t censor.

Corporatists are going for TOTAL control over "natural" persons, period.

Stephen T. Stone (profile) says:

Re: So WHY assert that corporations operating "platforms"

Technically, it is the owners of those platforms that have the First Amendment right of association. The owners can—and often do—delegate the power of deciding who can keep the privilege of posting on a given platform to administrators and moderators.

And if you cannot stand how the Techdirt comments section can hide your comments thanks to the flagging system, maybe find a better platform for your speech than the comments section of a blog you sincerely hate but keep reading because you apparently like to self-harm.

Bergman (profile) says:

Re: So WHY assert that corporations operating "platforms"

Corporations are not governments. The rights-protecting prohibitions in the Constitution do not apply to anyone but the government.

Suppose I were to go to your house, and paint political slogans across the front of it. Do you have a right to paint over them? If a corporation has no right to censor the speech of a person, then you have no right to paint over my speech even though it’s your property.

That One Guy (profile) says:


I mean really, what does one of the people who wrote it know as far as its intent? Clearly politicians in the here and now know much better what the previous politicians really meant.

As such it’s obviously a horrendous misreading, or due to a pile of typos that the protections have been considered to be broad and encompassing, rather than very narrowly tailored and extremely limited in scope, such that attempts to re-write/interpret the law more recently in a much more narrow scope are in fact merely attempts to match what the original politicians actually meant, as opposed to the abomination that the courts have thought it meant.

Anonymous Coward says:

Re: So WHY assert that corporations operating "platforms"

I’ve no idea what you’re going on about, but my take is that the real intent here of highlighting benefits is solely to gain credibility to sweep on to the clearly UN-Constitutional assertion that corporations are authorized like royalty to determine what and whether persons of The Public publish.

How exactly would The Public get any benefit from Section 230 if mega-corporations have unlimited control over our publishing? Cannot be fitted into the body of law.

Stephen T. Stone (profile) says:

Re: Re: Re:

If you don’t like Section 230, instead of whining endlessly about it, suggest something better. For all your childish criticism of 230 and corporations, you have never suggested any change to the law that would, in your opinion, improve said law without gutting the legal protections it upholds. Critics want things to be better; without that mindset, you’re just a whiny asshole.

That One Guy (profile) says:

Re: Re: Re: There's shooting your own foot, and then there's C4-ing it...

That really is the ultimate punchline to their ranting on this subject, that what they are arguing for would hand over immense power to the very groups they claim to hate so much.

If platforms were liable for user submitted content then they will, by necessity, either shut down said content, or act as gatekeepers for it all, either of which puts them in a very powerful position and decimates the ability of the public to speak and/or post their creations.

Bergman (profile) says:

Re: Re: So WHY assert that corporations operating "platforms"

Having a right to speak/publish does not grant you an unlimited right to do so anywhere, any time, with any content.

If I own a building and put a bulletin board on the outside wall of it for people to use as an analogue version of Craigslist, and you use it to post political screeds, your rights are not violated if I take them down.

It’s not clearly unconstitutional because your right to speak does not include a right to compel me to associate with you or to provide you a forum to use.

Anonymous Coward says:

Re: Oh, by the way "That One Guy": weren't you cancelled by Twitter?

I seem to recall a piece here on that exact topic.

And wasn’t that a corporation controlling your speech, even whether you had an outlet? YOU of all here should definitely agree with me that the purpose of Section 230 is NOT to empower corporations.

That One Guy (profile) says:

Re: Re: Nope

You really are wrong on every level. That was That Anonymous Coward, and despite the fact that Twitter blocked him temporarily even he didn’t make the claim that he was owed a platform by Twitter, and in fact when you tried the ‘Platforms have no right to block people’ shtick he was pretty clear that he still believed that they absolutely did.

Anonymous Coward says:

Public Knowledge's Approach to Dominant Online Platforms

Rather than immediately wading into a troll-o-rama by responding directly to some of the other comments here, I’ll just point out that some of the commenters here appear to shout loudly for simple answers in a very complex policy space.

A more thoughtful approach has recently been suggested by Public Knowledge:
“Due Process, and Our Approach to Dominant Online Platforms” (Public Knowledge blog, May 24, 2018)

Today [May 24, 2018], Public Knowledge released a paper, “Even Under Kind Masters,” that recommends that dominant internet platforms provide users with due process. . . .

The fundamentals of due process — that users should have notice of and an opportunity to challenge actions that are proposed to be taken against them, and to have their challenge heard by a truly impartial tribunal — are the best way to ensure that arbitrary actions from dominant internet platforms do not inadvertently (or deliberately) cause serious harm to individual users. . . .

Anonymous Coward says:

Re: Re: Re:

… prefers to argue …

And Stephen T. Stone — what does he prefer to discuss?

The paper, for instance, touches upon Munn v. Illinois (1876), Marsh v. Alabama (1946), and Red Lion Broadcasting v. FCC (1969). None of those cases stands for the proposition that mere title to property confers an unassailable, ironclad right to override public interests.

Stephen T. Stone (profile) says:

Re: Re: Re: Re:

How, then, should a service the size of Twitter confer “due process” rights vis-à-vis moderation decisions in a way that balances a Twitter admin’s ability to moderate the service as they see fit with the opportunity for users who have been shadowbanned/suspended/full-on banned to have their appeals addressed in a timely manner? And how should we determine when a service must begin to strike that balance—userbase size, activity levels, net worth, any combination of the three, or other factors not listed above?

I agree that letting one service grow “too big to fail” or too “dominant” on the Internet risks a lot for everyone who uses that service. The right to speak one’s mind, however, is not one of those things. I voluntarily left Twitter a month ago and I don’t feel like my voice has been silenced as a result. If anything, I feel more free to speak my mind now that I don’t have to worry about The New Internet Hate Machine gnawing at my legs for daring to say anything.

Anonymous Coward says:

Re: Re: Re:2 Re:

And how should we determine when a service must begin to strike that balance…?

The Public Knowledge paper begins to address that question at pp.25-9 (pp.26-30 in the PDF):

One starting point for determining “dominance” can simply be “market power”—an antitrust-like analysis . . .

Another approach is to have a bright-line rule—some clearly articulated standard that a particular platform meets or does not . . .

Yet another method is to have some sort of body that makes determinations with respect to particular platforms—determinations that may change, but which are binding until they do. . . .

The best approach that takes into account these considerations might be a hybrid one. . . .

Fwiw, I don’t necessarily prefer the Public Knowledge approach. But they’re presenting another policy alternative to both traditional FCC common carrier regulation and to traditional antitrust regulation, as well as to current § 230’s laissez-faire approach.

Stephen T. Stone (profile) says:

Re: Re: Re:3 Re:

CDA 230 encourages hands-on moderation, though. The issue here is the exact point at which a service becomes “big enough” to warrant some form of intervention—possibly even government intervention—in how admins moderate that service.

The flagship instance of the Mastodon protocol, mastodon.social, has over 160k users who have authored more than 6m posts. How many more users/posts must that particular instance have before someone needs to step in and tell the admins “you do this moderation thing our way now”? Who would do the stepping in? How long would they stay to “ensure compliance”? What punishment should be handed out for any failure of adherence to the new moderation standards? How would these standards apply to smaller Mastodon instances that are federated with mastodon.social and/or do not have servers in the United States?

Asking for a change in how a service like Twitter moderates itself is fine. Getting into the details of that change puts you in a situation where decisions get made and things change and nothing can be like it was before. If you cannot hash out the details before you push for the change, nothing will end well for anyone.

Anonymous Coward says:

Re: Re: Re:4 Re:

Getting into the details of that change…

Current 47 USC § 230(f)(2)’s definition of “interactive computer service” makes no distinctions along the lines of market dominance, essential facilities, or telecommunications service layer. It doesn’t distinguish Comcast from Twitter, AT&T from Facebook.

The term "interactive computer service" means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.

Public Knowledge’s paper, pp.8-11 (pp.9-12 in PDF) recognizes “the difficulty in pinning down with exactitude what a ‘platform’ is”.

Looking at things from another side, over in Karl Bode’s adjacent article discussing California’s proposed SB 822, he mentions that “some legal experts” are prepared to argue that the FCC has abandoned its preemptive authority.

Notwithstanding the D.C. Circuit’s observation, yet again, that Congress tends not to “hide elephants in mouseholes,” there’s no indication whatsoever that Congress has abandoned its § 230(e)(3) preemption authority over § 230(c)(2)(A) restrictions.

You don’t really need me to tell you this — the ISPs, as they have in the past, will undoubtedly bring this issue up again.

Anonymous Coward says:

Re: Re: Re:8 Re:

I didn’t ask the paper—I asked you.

If you’d read the Public Knowledge paper, then you might discover that on pp.10-11 (pp.11-12), they motivate their approach, saying:

The approach proposed by this paper may offer another path out of these initial definitional difficulties, since it proposes only that due process (and potentially other requirements) be provided by dominant platforms. . . . [P]ractically speaking, dominance may be an easier concept to pin down than “platform,” with this initial criterion also limiting the universe of entities to consider.

I read that as a fairly straightforward acknowledgement from Public Knowledge that defining “platforms” in statutory language is not quite a trivial exercise.

So Public Knowledge is responding to a question that’s been raised.

Stephen T. Stone (profile) says:

Re: Re: Re:11

When I asked you those questions, I didn’t ask you to quote Public Knowledge’s paper. I asked with the intent of hearing your personal opinions on the matter. If you are either unable or unwilling to answer from your perspective—if all you want to do is keep quoting someone else’s words as if you cannot form an opinion of your own—stop wasting both my time and yours.

Anonymous Coward says:

Re: Re: Re:12 Re:

… hearing your personal opinions on the matter.

Well then, listen very carefully for the personal opinions that I’ve provided in my preceding comments.

It’s no secret that in the last mile, at bottom, my first choice is municipal FTTx.

Beyond that, but again at the base infrastructure layers of the stack, I’m fairly strongly in favor of amended Title II common carrier regulation. Not really current Title II with forbearance, but a comprehensive 21st century update to the Communications Act of 1934 as currently amended by the Telecommunications Act of 1996.

Oh, and supplementing that, I’m actually in favor of enforcing the Sherman and Clayton Acts and Hart-Scott-Rodino as necessary.

So, while right now those are some of my primary telecommunications policy preferences, I’m listening with solid interest to other approaches and points of view. But I’m not committing yet to support this particular approach put forward by Public Knowledge.

Finally, though, I do welcome their paper as advancing the discussion. In my opinion, it does respond, at least in some part, to a significant question that I’ve been raising publicly for a while now.

Anonymous Coward says:

Re: Re: Re:12 Re:

stop wasting both my time and yours.

Incidentally, if you get impatient rather easily, then I’d personally recommend that you might consider whether public policy development is really your cup of tea. You might consider an activity that moves more quickly, proverbially speaking: for instance, watching paint dry.

Anonymous Coward says:

The Internet back in the time of the dotcom bubble was this exciting sexy new thing. Everyone wanted to jump on it, convinced it’d make them a millionaire. Why wouldn’t the government try to protect it?

But nowadays, that bubble has burst. The Internet is synonymous with Google, Facebook is synonymous with identity theft, and startup is synonymous with bankrupt. Few people became millionaires. Internet defenders are known only as shills, trolls, or worse. To the government, the Internet is now this scary thing used by lonely men in their mother’s basement to hack the FBI and post details on Wikileaks. Of course the government wants to control it or cripple it.

Anonymous Coward says:

Re: Re:

The Internet back in the time of the dotcom bubble was this exciting sexy new thing.

Wikipedia dates the dotcom bubble to the years “roughly from 1997 to 2001”. That is, just after the Telecommunications Act of 1996.

Just before the dotcom bubble, the ‘net had a lot of interesting people, many receiving access through their employer or university, most with a fairly high level of education, and possessing a decent technical acumen.

Sure, the trolls were there, and a bit of widespread cynicism, too. But, in the right places, it was relatively easy to have a good public policy discussion.

Some of us thought we were changing the world. We most likely did.
