If cops screw up enough, they may get blacklisted by prosecutors. These lists — known as “Brady” or “Giglio” lists (depending on jurisdiction) — inform prosecutors that they may not want to ask these officers to testify due to their long histories of misconduct.
Most lawsuits generated by cops appearing on these lists deal with the lists themselves. Defense attorneys are often denied access to these lists, preventing them from challenging testimony from cops even prosecutors feel are too untrustworthy to provide their insight in normal criminal prosecutions.
This lawsuit is an anomaly. It’s being brought by Manuel Adams, an officer who managed to ascend to the position of captain over his 18 years with the Harahan (Louisiana) Police Department. Louisiana is especially forgiving of its worst cops. So is the Fifth Circuit Court of Appeals, the appellate circuit most likely to award qualified immunity to rights-violating officers.
This decision [PDF] is an exception that proves multiple rules. First, there’s Captain Adams himself, who did enough bad things during his 18-year career that his previously unblemished record was blemished by Chief Robert Walker, who discovered a great many details about Cpt. Adams’ public service after taking office.
HPD Chief of Police Robert Walker (“Chief Walker”) determined that Adams was guilty of numerous offenses, including: (1) Conduct Unbecoming an Officer; (2) Unsatisfactory Performance; and (3) False Statement.
Captain Adams took full advantage of his rights as a law enforcement officer. He appealed this determination as allowed by state law. But this didn’t happen until Chief Walker forwarded Adams’ name to the parish district attorney’s office, which added it to its “Giglio list,” marking Captain Adams as someone whose testimony wasn’t to be trusted.
Adams sued, claiming his inclusion on this list meant the end of his law enforcement career.
Adams alleges that an officer’s inclusion on the Giglio list is effectively a “death knell to a career in law enforcement.” Because the Giglio list is at JPDA’s discretion, a successful appeal by Adams would not force JPDA to remove his name from the list. Faced with no guaranteed way to get his name off of the Giglio list, Adams sued the City.
Adams overstates his case. First, being included on a prosecutor’s no-call list does not often end officers’ careers. Instead, it possibly encourages them to commit more rights violations, seeing as they won’t be called to testify and, given prosecutors’ steady refusal to turn over these lists to defense attorneys, their lack of input won’t make much difference to prosecutions.
Second, there shouldn’t be any “guaranteed” way to remove your name from a Giglio list. Nothing but several years of pristine public service should allow an officer to clear their name from these lists. Trust is easily destroyed. That it takes years to rebuild this destroyed trust isn’t a cause for action.
The lower court made a mess of this. While it ruled that the PD did not violate Adams’ due process rights by forwarding his name to the DA, it did rule that there was some sort of “right to have the job one prefers.”
It rejected Adams’s assertions that the City unconstitutionally violated his property interest because he was afforded due process when he exercised his right to appeal Chief Walker’s determinations. It then evaluated whether the City violated his liberty interests. Notably, it recognized Adams’s “liberty interest in his occupation as a law enforcement officer.” It reasoned that the Supreme Court supported its conclusion that Adams has a right “to engage in any of the common occupations of life.” Kerry v. Din, 576 U.S. 86, 94 (2015). It then held that the City violated his right by failing to provide him the “opportunity to be heard at a meaningful time and in a meaningful manner” before reporting his disciplinary charges to JPDA.
This is all wrong, says the Fifth Circuit. While there may be some common law right to not be prevented from obtaining a job, there’s no right guaranteeing your current employment, much less an eternally greased wheel ensuring you never face any difficulty whatsoever in carrying out your duties.
Here, Adams fails to establish that he has a liberty interest in his continued employment in law enforcement that is protected by procedural due process.
That’s a fact. There is no constitutional right to employment. There are constitutional rights that prevent people from withholding employment, but there’s nothing in the law that guarantees a job for everyone.
Furthermore, there’s nothing on the record that suggests Captain Adams was unable to continue being employed as a law enforcement officer. He simply claimed that being placed on the Giglio list was a “death knell” for his career. And he made that claim despite not being fired by his employer, which decided to retain his services despite his utter uselessness in criminal court.
Captain Adams made his “right to be a cop without being censured” pitch in the cop-friendliest court in the land. And he walks away empty-handed.
We have never held that an individual has a liberty interest in his right to engage in a specific field of employment that is protected by procedural due process. Accordingly, we decline to recognize such an interest now.
If you’re looking for more anecdotal proof that cops think they’re on a higher plane than the people they serve, read that conclusion again. This cop argued he was constitutionally entitled to be a cop, even though it has been clear since the beginning of the free market system that jobs are not guaranteed by the Bill of Rights. The Constitution, its amendments, and several federal laws may provide solace when you’ve been fired or denied employment for numerous reasons, but none of those things guarantee that public servants can sail through their careers without being disciplined or criticized.
This case was never going to go anywhere. That this cop persisted suggests there’s a sense of entitlement innate to law enforcement officers — one that actively encourages good cops to be bad, bad cops to be worse, and officers like this one to sue when they’re told they’re no longer worthy of the public’s trust.
Back in March, we discussed a fairly silly request, made by several film producers who are suing RCN for not being their copyright police, that the court subpoena Reddit to unmask 9 users of that site. There were several aspects of the request that made it all very dumb: half the Reddit users never mentioned RCN, most referenced Comcast being their ISP, most of the remaining users never mentioned anything about piracy, and the one user who did mention RCN and piracy in context together had done so nearly a decade prior to the lawsuit. Given the First Amendment implications and hurdles involved in a request like this, the desire for the subpoena seemed doomed to fail.
Reddit doesn’t have to identify eight anonymous users who wrote comments in piracy-related threads, a judge in the US District Court for the Northern District of California ruled on Friday. US Magistrate Judge Laurel Beeler quashed a subpoena issued by film studios in an order that agrees with Reddit that the First Amendment protects the users’ right to speak anonymously online.
Reddit has no involvement in the underlying case, which is a copyright lawsuit in a different federal court against cable Internet service provider RCN. Bodyguard Productions, Millennium Media, and other film companies sued RCN in the US District Court in New Jersey over RCN customers’ alleged downloads of 34 movies such as Hellboy, Rambo: Last Blood, Tesla, and The Hitman’s Bodyguard.
It’s the right decision, to be sure. While the studios’ assertions were generally questionable, the standard the court applied here essentially weighed whether the anonymous comments (and, by extension, the commenters) were the primary or only source of the information sought for the RCN trial. The court then went through, user by user, analyzing whether that was the case, finding in every instance that it was not. Below is one example.
The user “compypaq” said that RCN would sometimes remotely reset his modem. The plaintiffs contend that this comment helps show that RCN can monitor and control its customers’ conduct, because the ability to reset a modem implies the ability to turn off a modem. This argument only reinforces that the plaintiffs can obtain the information they seek from RCN. It isn’t necessary to subpoena the identities of RCN customers from a third party to determine whether RCN can disable its customers’ internet access.
In other words, the request only makes sense as a fishing expedition, in which the plaintiffs aren’t actually after the information they claim to be. And because of that, the court quashed the subpoena.
If those plaintiffs want the actual information they sought to enter into evidence from these Reddit users, they will have to get it through the normal discovery process at the RCN trial.
U.S. telecom monopolies like AT&T and Comcast spent millions of dollars and several decades quite literally buying shitty, protectionist laws in around twenty states that either banned or heavily hamstrung towns and cities from building their own broadband networks, even in areas where AT&T and Comcast have repeatedly failed to upgrade or expand their broadband networks.
This dance of dysfunction was particularly interesting in Colorado. While lobbyists for Comcast and CenturyLink managed to convince state leaders to pass such a law (SB 152) in 2005, the legislation contained a provision that let individual Colorado towns and cities ignore the measure with a simple referendum. So they’ve been doing exactly that, en masse, for years. Now, with the passage of SB23-183, the state has scrapped the 2005 restrictions entirely:
“Today, the state took a big step in establishing a competitive economy for generations to come. SB23-183 removes the biggest obstacle to achieving the Governor’s goal to connect 99% of Colorado households by the end of 2027,” said Colorado Broadband Office Executive Director Brandy Reitter. “Each local government is in a unique position or different phase of connecting residents to high-speed internet, and this bill allows them to establish broadband plans that meet the needs of their communities.”
That leaves sixteen states with laws banning community broadband, after both Arkansas and Washington state eliminated their restrictions in 2021. Such laws are almost always ghostwritten by heavily taxpayer-subsidized telecom monopolies, which have routinely tried to portray grassroots annoyance at monopolization and market failure as some form of vile socialist boondoggle.
It’s a scenario where giant telecoms get to have their cake and eat it too; often refusing to upgrade their networks in underserved or minority areas (particularly true among telcos offering DSL), while simultaneously ghostwriting shitty laws preventing these underserved towns from doing anything about it — even if local voters, long struggling to gain access to broadband, vote in favor of it.
These communities wouldn’t be building broadband networks if they weren’t annoyed by decades of market failure. Or decades of federal regulators too captured to embrace policies that competitively challenge the nation’s biggest telecom monopolies.
Telecom monopolies could have responded to this by providing better, cheaper service, but given the nature of widespread U.S. corruption, it was simply easier to buy terrible state laws.
The home schooling and telecommuting boom of peak COVID put the counterproductive stupidity of these bills in very stark relief. As has the $42 billion in looming broadband funds made possible by the recent passage of the infrastructure bill. If that money is to be spent effectively, eliminating pointless, monopoly-backed restrictions on creative broadband alternatives is a good first step.
It seems like every day this week a new bill has been introduced in Congress with the grandstanding politicians behind the bill insisting that it’s necessary to protect the children online. It seems like no elected official wants to be left behind on this particular moral panic train. The latest, from Senators Ed Markey and Bill Cassidy, is being called COPPA 2.0, in that it updates the original Children’s Online Privacy Protection Act, which was passed in 1998.
And… look, COPPA has long had problems, many of which we’ve called out in the past. COPPA is the main reason why lots of parents teach their kids it’s okay to lie about their age, because many websites say you have to be over 13 to use them. That’s not because websites do a careful assessment of what age is the right age to use a website, but because COPPA applies stringent regulations to any website that targets those under the age of 13.
As danah boyd explained well over a decade ago, COPPA fails parents, educators, and (most importantly) the kids themselves.
While many parents do not believe that social network sites like Facebook and MySpace are suitable for young children, they often want their children to have access to other services that have age restrictions (email, instant messaging, video services, etc.). Often, parents cite that these tools enable children to connect with extended family; Skype is especially important to immigrant parents who have extended family outside of the US. Grandparents were most frequently cited as the reason why parents created accounts for their young children. Many parents will create accounts for children even before they are literate because the value of connecting children to family outweighs the age restriction. When parents encourage their children to use these services, they send a conflicting message that their kids eventually learn: ignore some age limitations but not others.
By middle school, communication tools and social network sites are quite popular among tweens who pressure their parents for permission to get access to accounts on these services because they want to communicate with their classmates, church friends, and friends who have moved away. Although parents in the wealthiest and most educated segments of society often forbid their children from signing up to social network sites until they turn 13, most parents support their children’s desires to acquire email and IM, precisely because of familial use. To join, tweens consistently lie about their age when asked to provide it. When I interviewed teens about who taught them to lie, the overwhelming answer was parents. I interviewed parents who consistently admitted to helping their children circumvent the age restriction by teaching them that they needed to choose a birth year that would make them over 13. Even in households where an older sibling or friend was the educator, parents knew their children had email and IM and social network sites accounts. Interestingly, in households where parents forbid Facebook but allow email, kids have started noting the hypocritical stance of their parents. That’s not a good outcome of this misinterpretation.
When I asked parents about how they felt about the age restrictions presented by social websites, parents had one of two responses. When referencing social network sites, parents stated that they felt that the restrictions were justified because younger children were too immature to handle the challenges of social network sites. Yet, when discussing sites and services that they did not believe were risky environments or that they felt were important for family communication, parents often felt as though the limitations were unnecessarily restrictive. Those who interpreted the restriction as a maturity rating did not understand why the sites required age confirmation. Some other parents felt as though the websites were trying to tell them how to parent. Some were particularly outraged by what they felt was a paternal attitude by websites, making statements like: “Who are they to tell me how to be a good parent?”
That was written 12 years ago. We’ve had a dozen years to make COPPA better, and nothing has been done.
So, instead, Senators Markey and Cassidy have decided to make COPPA worse. As laid out by the sponsors, the bill would do the following:
Build on COPPA by prohibiting internet companies from collecting personal information from users who are 13 to 16 years old without their consent;
Ban targeted advertising to children and teens;
Revise COPPA’s “actual knowledge” standard, covering platforms that are “reasonably likely to be used” by children and protecting users who are “reasonably likely to be” children or minors;
Create an “Eraser Button” for parents and kids by requiring companies to permit users to eliminate personal information from a child or teen when technologically feasible;
Establish a “Digital Marketing Bill of Rights for Teens” that limits the collection of personal information of teens; and
Establish a Youth Marketing and Privacy Division at the FTC.
So… again, out of the best intentions danger lies. This bill would require age verification, and all the assorted problems we’ve discussed about that. Which is kind of ironic, given that the bill is pitched as protecting privacy. But since it has special rules for different age groups, that’s going to require websites to determine how old their visitors are, which is privacy invasive, and potentially dangerous as well.
Getting rid of the “actual knowledge” standard raises some big First Amendment issues (you can’t be punished for speech you don’t know about), and switching to a “reasonably likely to be used by…” standard is massively destructive, especially when combined with the increase in age to 16. Again, we write about issues that impact high schoolers here on Techdirt, so we’d switch from a site that is currently not covered by COPPA (we’re not targeting children under 13) to one that might now be covered, because some high schoolers may be “reasonably likely” to want to come to Techdirt to read about how their Senators are trying to destroy the internet and the ways in which they communicate with friends and family today.
Banning “targeted advertising” is one of those ideas that sounds good if you don’t know what you’re talking about, have no experience with internet advertising, and don’t know how anything works. Again, we already have the issue of age verification associated with this, but also, in many ways all advertising is targeted in some form or another. The bill purports to exclude “contextual” advertising from the definition of “targeted marketing,” but just the fact that they have to do that suggests they haven’t really thought this through and had to slot in this random exclusion at the end to avoid breaking basically everything.
The “consent” part also gets tricky, as we’ve discussed with various state bills that require parental consent for kids. Not all kids have a good relationship with their parents. Their parents may be estranged. Or a child may want to visit a website that the parent disagrees with but is important to that child. Perhaps a child is LGBTQ and wishes to find a welcoming community, while their parents disapprove.
There are so many problems with bills like this, and literally no evidence whatsoever that any of this will actually help or protect children. It’s all built on the moral panic that the internet is — full stop — bad for kids. Even as the evidence says that’s not true at all, and it’s actually quite useful for most kids.
For all the talk of internet companies “experimenting” on our children, why do we keep letting politicians do these experiments on children not backed up by any understanding or evidence?
Forget all of that. North Carolina is going to go its own way, following the mandate laid out by 70s coke icons Fleetwood Mac back in the day when getting a wiretap warrant meant someone actually had to do something beyond click “ACCEPT” on the law enforcement end user license agreement.
Police could track people’s cell phones in real time — without a warrant — under a bill that passed a state House committee Wednesday.
The bill is intended to help law enforcement more quickly to find kidnapping victims or runaway children.
Ah. THE CHILDREN. The non-voters who always seem to play a part in government expansions of power. Too young to voice their opinion but young enough to be exploited by adults for their own ends. You know, adults like this child exploitation expert:
“This just gives the SBI another tool in the toolbox,” said Republican Rep. Dudley Greene, the retired sheriff of McDowell County who is leading the push for the bill. “But it’s not just a tool. It’s an emergency tool, in very limited circumstances.”
“SBI” is the State Bureau of Investigation. The ex-cop points to a single state agency, insinuating the law is limited to a single law enforcement entity when it actually isn’t. And if you think this will be limited to only the most serious of crimes, well, then you probably helped Rep. Greene get elected. Mission creep is a thing. So is the natural tendency to abuse power that demands we, the governed, throw our voting wrenches into the government machinery every couple of years.
As both proponents and opponents note, the bill would not allow for warrantless wiretaps. What it would do is allow cops to track cell phones in real time, as well as obtain information about cell phones their targets interact with.
The latter is usually covered by pen register orders, which require less probable cause than warrants because the Third Party Doctrine leaves information “voluntarily” shared with third parties (read: telcos, cell service providers) unprotected. But location data is something else entirely, seeing as it gives the government the power these legislators want to codify: the ability to track anyone at any time in real time without a warrant.
The state’s court system already appears to be completely wrong about this:
As for cell phone tracking, North Carolina’s appellate courts have already signed off on police getting people’s historical location data from phone companies without a warrant. But real-time warrantless tracking has not been included.
The Supreme Court’s Carpenter decision explicitly forbade long-term tracking of individuals via historical cell site location data. And its reading of the Third Party Doctrine and the Fourth Amendment suggested real-time acquisition of location data might run afoul of the Fourth Amendment if this tracking went on for long enough.
According to this reporting, the state’s courts have decided the Carpenter decision doesn’t apply to North Carolina law enforcement. And they have yet to arrive at a decision one way or the other about real-time tracking. An absence of contrary case law is a permission slip for law enforcement. Hell, even precedential decisions are rarely enough to deter law enforcement from engaging in rights violations. This law, which has sailed through the state House with almost zero opposition, encourages further abuses of tech that has yet to be fully addressed by courts covering this jurisdiction.
And the mission creep has already begun. The state rep quoted above claimed the law would help cops track down the worst of the worst criminals: those targeting children for nefarious ends. But the revamped law — which at least now requires law enforcement to make a warrant sales pitch to a judge within 48 hours of engaging in real-time location tracking — has already been rewritten to ensure cops can use it whenever, wherever. It’s not just pedophiles and kidnappers. It’s the proverbial fast food thief (NOT A HYPOTHETICAL!) that can expect to be tracked in real time by cops with plenty of tech but no probable cause and no warrants.
It would allow a judge to find probable cause or reasonable suspicion that the suspect had committed any felony, or more minor crimes like a class 1 or A1 misdemeanor.
So, if passed intact, this law would allow cops to engage in real-time tracking of anyone suspected of almost any crime. Within 48 hours, they might need to make a probable cause showing in front of a judge. But even then, a judge could decide the pervasive surveillance is justified by assertions made after the fact by cops with a two-day head start. And if an arrest is effected before the clock runs out on the warrantless surveillance, there’s no need to ask the court for a second opinion on this codified interpretation of the Third Party Doctrine. No harm (that will be recognized by a NC court), no foul.
Hopefully this bill will die the death it deserves. But if legislators and the state’s courts have deluded themselves into thinking location info wants to be free (of warrant requirements), it seems unlikely this proposal will get kicked to the curb by the governor. After all, the rest of the government thinks it’s a good idea. And they know what’s best for everybody, even if the “everybody” they’re supposed to represent disagrees with them.
A few weeks ago I wrote about an interview that Substack CEO Chris Best did about his company’s new offering, Substack Notes, and his unwillingness to answer questions about specific content moderation hypotheticals. As I said at the time, the worst part was Best’s unwillingness to just own up to what he was saying were the site’s content moderation plans, which was that they would be quite open to hosting the speech of almost anyone, no matter how terrible. That’s a decision that you can make (in the US at least), but if you’re going to do that, you have to be willing to own the decision that you’re making and be clear about it, which Best was unwilling to do.
I compared it to the “Nazi bar” problem that has been widely discussed on social media in the past, where if you own a bar and don’t kick the Nazis out up front, you get the reputation as a “Nazi bar” that is difficult to get rid of.
It was interesting to see the response to this piece. Some people got mad, claiming it was unfair to call Best a Nazi, even though I was not doing that. As in the story of the Nazi bar, no one is claiming that the bar owner is a Nazi, just that the public reputation of his bar would be that it’s a Nazi bar. That was the larger point. Your reputation is what you allow, and if you’re taking a stance that you don’t want to get involved at all, and you want to allow such things, that’s the reputation that’s going to stick.
I wasn’t calling Best a Nazi or a Nazi sympathizer. I was saying that if he can’t answer a straightforward question like the one that Nilay Patel asked him, Nazis are going to interpret that as he’s welcoming them in, and they will act accordingly. So too will people who don’t want to be seen hanging out at the Nazi bar. The vaunted “marketplace of ideas” includes the ability for a large group of people to say “we don’t want to be associated with that at all…” and to find somewhere else to go.
And this brings us to Bluesky. I’ve written a bunch about Bluesky going back to Jack Dorsey’s initial announcement which cited my paper among others as part of the inspiration for betting on protocols.
As Bluesky has gained a lot of attention over the past week or so, there have been a lot of questions raised about its content moderation plans. A lot of people, in particular, seem confused by its plans for composable moderation, which we spoke about a few weeks ago. I’ve even had a few people suggest to me that Bluesky’s plans represented a similar kind of “Nazi bar” problem as Best’s interview did, in particular because their initial reference implementation shows “hate speech” as a toggle.
I’ve also seen some people claim (falsely) that Bluesky would refuse to remove Nazis based on this. I think there is some confusion here, and it’s important to go deeper on how this might work. I have no direct insight into Bluesky’s plans. And they will likely make big mistakes, because everyone in this space makes mistakes. It’s impossible not to. And, who knows, perhaps they will run into their own Nazi bar problem, but I think there are some differences that are worth exploring here. And those differences suggest that Bluesky is better positioned not to be the Nazi bar.
The first is that, as I noted in the original piece about Best, there’s a big difference between a centralized service and its moderation choices, and a decentralized protocol. Bluesky is a bit confusing to some as it’s trying to do both things. Its larger goal is to build, promote, and support the open AT Protocol as an open social media protocol for a decentralized social media system with portable identification. Bluesky itself is a reference app for the protocol, showing how things can be done — and, as such, it has to do content moderation tasks to avoid Bluesky itself running into the Nazi bar problem. And, at least so far, it seems to be doing that.
The team at Bluesky seems to recognize this. Unlike Best, they’re not refusing to answer the question, they’re talking openly about the challenges here, but so far have been willing to remove truly disruptive participants, as CEO Jay Graber notes here:
But, they definitely also recognize that content moderation at scale is impossible to do well, and believe that they need a different approach. And, again, the team at Bluesky recognizes at least some of the challenges facing them:
But, this is where things get potentially more interesting. Under a traditional centralized social media setup, there is one single decision maker who has to make the calls. And then you’re in a sort of benevolent dictator setup (or at least you hope so, as the malicious dictator threat becomes real).
And this is where we go on a little tangent about content moderation: again, it’s not just difficult. It’s not just “hard” to do. It’s impossible to do well. The people who are moderated, with rare exceptions, will disagree with your moderation decisions. And, while many people think that there are a whole bunch of obvious cases and just a few that are a little fuzzy, the reality (this is part of the scale part) is that there are a ton of borderline cases that all come down to very subjective calls over what does or does not violate a policy.
To some extent, going straight to the “Nazi” example is unfair, because there’s a huge spectrum between the user who is a hateful bigot, deliberately trying to cause trouble, and the good helpful user who is trying to do well. There’s a very wide range in the middle and where people draw their own lines will differ massively. Some of them may include inadvertent or ignorant assholery. Some of it may just include trolling. Or sometimes there are jokes that some people find funny, and others find threatening. Sometimes people are just scared and lash out, out of fear or confusion. Some people feel cornered, and get defensive when they should be looking inward.
Humans are fucking messy.
And this is where the protocol approach with composable moderation becomes a lot more interesting. The most extreme calls, the ones where there are legal requirements, such as child sexual abuse material and copyright infringement, can be removed at the protocol level. But as you start moving up into the murkier areas, where many of the calls are subjective (not so much “is this person a Nazi” but more along the lines of “is this person deliberately trolling, or just uninformed…”), the composable moderation system begins to (1) let end users make their own rules and (2) enable any number of third parties to build tools to work with those rules.
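That layering can be sketched in code. This is a purely hypothetical illustration of the idea, not the actual AT Protocol or Bluesky API; every name below (the label sets, the labeler functions, and the preference map) is invented for the sketch. The point is the shape: one small set of hard removals applied to everyone, then any number of pluggable third-party labelers, with each user’s own preferences deciding what to hide, warn on, or show.

```python
# Hypothetical sketch of composable moderation. None of these names come
# from the real AT Protocol; they exist only to illustrate the layering.

# Protocol level: hard removals for content with legal requirements.
PROTOCOL_REMOVED_LABELS = {"csam", "copyright-infringement"}

def label_service_a(post):
    """A third-party labeler with its own policy (toy heuristic)."""
    return {"hate-speech"} if "slur" in post["text"].lower() else set()

def label_service_b(post):
    """A different labeler, with a different (equally toy) policy."""
    return {"trolling"} if post["text"].isupper() else set()

def moderate(posts, labelers, prefs):
    """Apply protocol removals, then user-chosen labelers and preferences.

    prefs maps a label to "hide", "warn", or "show". Each user picks their
    own labelers and their own mapping -- that per-user choice is what
    makes the moderation "composable".
    """
    visible = []
    for post in posts:
        if post.get("labels", set()) & PROTOCOL_REMOVED_LABELS:
            continue  # removed for everyone, at the protocol level
        labels = set().union(*(labeler(post) for labeler in labelers))
        # Take the most restrictive action any matching preference demands.
        action = max((prefs.get(label, "show") for label in labels),
                     key=["show", "warn", "hide"].index, default="show")
        if action != "hide":
            visible.append({**post, "warn": action == "warn"})
    return visible
```

One user might set `"hate-speech": "hide"` while another sets it to `"warn"` or drops that labeler entirely; the protocol-level removals apply regardless of what any user configures, which mirrors the split between the AT Protocol and the services built on top of it.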
Some people may (for perfectly good reasons, bad reasons, or no reasons at all) just not have any tolerance for any kind of ignorance. Others may be more open to it, perhaps hoping to guide ignorance to knowledge. Just as an example, outside of the “hateful” space, we’ve talked before about things like “eating disorder” communities. One of the notable things there was that when those communities were on more mainstream services, people who had gotten over an eating disorder would often go back to those communities and provide help and support to those who needed it. When those communities were booted from the mainstream services, that actually became much more difficult, and the communities became angrier and more insulated, and there was less ability for people to help those in need.
That is, there will still need to be some decision making at the protocol level (this is something that people who insist on “totally censorship proof” systems seem to miss: if you do this, eventually the government is going to shut you down for hosting CSAM), but the more of the decision making that can be pushed to a different level and the more control put in the hands of the user, the better.
This allows for more competition for better moderation, first of all, but also allows for the variance in preferences, which is what you see in the simple version that Bluesky implemented. The biggest decisions can be made at the protocol level, but above that, let there be competitive approaches and more user control. It’s unclear exactly where Bluesky the service will come down in the end, but the early indications from what’s been said so far are that the service level “Bluesky” will be more aggressive in moderating, while the protocol level “AT Protocol” will be more open.
And… that’s probably how it should be. Even the worst people should be able to use a telephone or email. But enabling competition at the service level AND at the moderation level creates more of the vaunted “marketplace of ideas” where (unlike what some people think the marketplace of ideas is about), if you’re regularly a disruptive, disingenuous, or malicious asshole, you are much more likely to get less (or possibly no) attention from the popular moderation services and algorithms. Those are the consequences of your own actions. But you don’t get banned from the protocol.
To some extent, we’ve already seen this play out (in a slightly different form) with Mastodon. Truly awful sites like Gab, and ridiculously pathetic sites like Truth Social, both use the underlying ActivityPub and open source Mastodon code, but they have been defederated from the rest of the fediverse. They still get to use the underlying technology, but they don’t get to use it to be obnoxiously disruptive to the main userbase who wants nothing to do with them.
With AT Protocol, and the concept of composable moderation, this can get taken even further. Rather than just having to choose your server, and be at the whims of that server admin’s moderation choices (or the pressure from other instances which keeps many instances in check and aligned), the AT Protocol setup allows for a more granular and fluid system, where there can be a lot more user empowerment, without having to resort to banning certain users from using the technology entirely.
This will never satisfy some people, who will continue to insist that the only way to stop a “bad” person is to ban them from basically any opportunity to use communications infrastructure. However, I disagree for multiple reasons. First, as noted above, outside of the worst of the worst, deciding who is “good” and who is “bad” is way more complicated and fraught and subjective than people like to note, and where and how you draw those lines will differ for almost everyone. And people who are quick to draw those lines should realize that… some other day, someone who dislikes you might be drawing those lines too. And, as the eating disorder case study demonstrated, there’s a lot more complexity and nuance than many people believe.
That’s why a decentralized solution is so much better than a centralized one. With a decentralized system, you don’t have to worry about getting cut out yourself, either. Everyone gets to set their own rules and their own conditions and their own preferences. And, if you’re correct that the truly awful people are truly awful, then it’s likely that most moderation tools and most servers will treat them as such, and you can rely on that, rather than having them cut off at the underlying protocol level.
It’s also interesting to see how the decentralized social media protocol nostr is handling this. While it appears that some of the initial thinking behind it was the idea that nothing should ever be taken down, it appears that many are recognizing how impossible that is, and they’re now having really thoughtful discussions on “bottom up content moderation” specifically to avoid the “Nazi bar” problem.
Eventually in the process, thoughtful people recognize that a community needs some level of norms and rules. The questions are how those are created, how they are implemented, and how they are enforced and by whom. A decentralized system allows for much greater control by end users to have the systems and communities that more closely match their own preferences, rather than requiring that a centralized authority handle everything and somehow live up to everyone’s expectations.
As such, you may end up with results like Mastodon/ActivityPub, where “Nazi bar” areas still form, but they are wholly separated from other users. Or you may end up with a result where the worst users are still there, shouting into the wind with no one bothering to listen, because no one wants to hear them. Or, possibly, it will be something else entirely as people experiment with new approaches enabled by a composable moderation system.
I’ll add one other note on that, because there are times when I’ve discussed this that people highlight that there are other forms of harassment or other kinds of risks beyond direct harassment. And just blocking a user does not stop them from harassing, or from encouraging or directing harassment against another. This is absolutely true. But this kind of setup also allows for better tooling for potentially monitoring such a thing without having to be exposed to it directly. This could take the form of Block Party’s “lockout folder,” where you can have a trusted third party review the harassing messages you’ve been receiving rather than having to go through them yourself. Or, conceivably, other monitoring and warning services could pop up that could track people who are doing awful things, try to keep them from succeeding, and alert the proper people if things require escalation.
In short, decentralizing things, and allowing many different approaches, and open systems and tooling doesn’t solve all problems, but it presents some creative ways to handle the Nazi Bar problem that seem likely to be a lot more effective than living in denial and staring blankly into the Zoom screen as a reporter asks you a fairly basic question about how you’ll handle racist assholes on your platform.
The cable and broadband industry spent the better part of a decade pretending that “cord cutting” (ditching traditional television in favor of streaming or antenna-based alternatives) either didn’t actually exist or was a fad that would end when Millennials started procreating.
Now they like to pretend they saw the trend coming all along, even as most of them still haven’t competently adapted to shifting consumer demands.
The latest data from both Leichtman Research Group and Samba TV shows that 5.9 million Americans canceled cable TV in 2022, the equivalent of roughly 16,000 American residents canceling cable TV every single day. Most are going to streaming, some are going to over-the-air antennas, and many are just getting bored with television entirely and spending more time on TikTok or YouTube.
While cord cutting storms forth, the cable industry has responded with some modest adaptation, but simply can’t help jacking up prices via an assortment of obnoxious and sneaky fees, accelerating the trend even further:
With the growth of fees and cable TV bills, it is easy to understand why. Recently it was announced that Comcast would be raising the fees on a wide range of plans. Comcast’s Xfinity Broadcast TV Fee is going up 11% this year to $21.30. Back in 2016, these fees were just $5 a month.
RSN fees are also going up in Philadelphia to $13.35 a month, a 5% jump. This is up from just $3 a month back in 2016.
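The single-year jumps in that quote undersell it. As a quick sanity check, here’s a throwaway snippet computing the cumulative increases since 2016 from the figures cited above (the figures themselves come from the quote; only the math is mine):

```python
# Back-of-the-envelope check of the fee figures cited above.
broadcast_2016, broadcast_now = 5.00, 21.30   # Comcast Broadcast TV Fee, $/month
rsn_2016, rsn_now = 3.00, 13.35               # Philadelphia RSN fee, $/month

def pct_increase(old, new):
    """Cumulative percentage increase from old to new."""
    return (new - old) / old * 100

print(f"Broadcast TV fee: up {pct_increase(broadcast_2016, broadcast_now):.0f}% since 2016")
print(f"RSN fee:          up {pct_increase(rsn_2016, rsn_now):.0f}% since 2016")
# Roughly +326% and +345%, respectively
```

In other words, both fees have more than quadrupled in about seven years, on top of the base subscription price.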
Traditional cable TV is an unsustainable mess and has been for a long while.
As Cord Cutters News notes, 95.2 million Americans paid for a pay-TV subscription in 2012. Even including live TV streaming services, that number has dropped to 70.2 million in 2023. And while some cable execs claimed the trend would slow down dramatically by now, that’s not only not the case, it’s been accelerating, in part due to their continued bad decisions.