Stop Blaming Algorithms For Misinformation And Threats To Democracy; The Real Problem Is Societal

from the fixing-the-wrong-thing dept

For quite some time now, we’ve pointed out that we should stop blaming technology for problems that are actually societal. Look deeper at nearly every “big tech problem” and you tend to find it has to do with people, not technology. And “fixing” the technology isn’t really going to fix anything when it’s not the real problem. Indeed, many proposals to “fix” the tech industry seem likely to exacerbate the very problems we’re discussing.

Of course, the “techlash” narrative is incredibly powerful, and the media has really run with it of late (as have politicians). So it’s nice to see that at least Wired is starting to push back on it. A new cover story by Gideon Lewis-Kraus makes it clear that “Bad Algorithms Didn’t Break Democracy.” It’s a great article. It acknowledges the techlash narrative, and even concedes that it’s appealing at a surface level:

It’s easy to understand why this narrative is so appealing. The big social media firms enjoy enormous power; their algorithms are inscrutable; they seem to lack a proper understanding of what undergirds the public sphere. Their responses to widespread, serious criticism can be grandiose and smarmy. “I understand the concerns that people have about how tech platforms have centralized power, but I actually believe the much bigger story is how much these platforms have decentralized power by putting it directly into people’s hands,” said Mark Zuckerberg, in an October speech at Georgetown University. “I’m here today because I believe we must continue to stand for free expression.”

If these corporations spoke openly about their own financial interest in contagious memes, they would at least seem honest; when they defend themselves in the language of free expression, they leave themselves open to the charge of bad faith.

But as the piece goes on to highlight, this doesn’t really make much sense — and despite many attempts to support it with actual evidence, the evidence is completely lacking:

Over the past few years, the idea that Facebook, YouTube, and Twitter somehow created the conditions of our rancor—and, by extension, the proposal that new regulations or algorithmic reforms might restore some arcadian era of “evidential argument”—has not stood up well to scrutiny. Immediately after the 2016 election, the phenomenon of “fake news” spread by Macedonian teenagers and Russia’s Internet Research Agency became shorthand for social media’s wholesale perversion of democracy; a year later, researchers at Harvard University’s Berkman Klein Center concluded that the circulation of abjectly fake news “seems to have played a relatively small role in the overall scheme of things.” A recent study by academics in Canada, France, and the US indicates that online media use actually decreases support for right-wing populism in the US. Another study examined some 330,000 recent YouTube videos, many associated with the far right, and found little evidence for the strong “algorithmic radicalization” theory, which holds YouTube’s recommendation engine responsible for the delivery of increasingly extreme content.

The article has a lot more in it — and you should read the whole thing — but it’s nice to see it recognizes that the real issue is people. If there’s a lot of bad stuff on Facebook, it’s because that’s what its users want. You have to be incredibly paternalistic to assume that the best way to deal with that is to have Facebook deny users what they want.

In the end, as it becomes increasingly untenable to blame the power of a few suppliers for the unfortunate demands of their users, it falls to tech’s critics to take the fact of demand—that people’s desires are real—even more seriously than the companies themselves do. Those desires require a form of redress that goes well beyond “the algorithm.” To worry about whether a particular statement is true or not, as public fact-checkers and media-literacy projects do, is to miss the point. It makes about as much sense as asking whether somebody’s tattoo is true. A thorough demand-side account would allow that it might in fact be tribalism all the way down: that we have our desires and priorities, and they have theirs, and both camps will look for the supply that meets their respective demands.

Just because you accept that preferences are rooted in group identity, however, doesn’t mean you have to believe that all preferences are equal, morally or otherwise. It just means our burden has little to do with limiting or moderating the supply of political messages or convincing those with false beliefs to replace them with true ones. Rather, the challenge is to persuade the other team to change its demands—to convince them that they’d be better off with different aspirations. This is not a technological project but a political one.

Perhaps it’s time for a backlash to the techlash. And, at the very least, it’s time that instead of just blaming the technology, we all take a closer look at ourselves. If it’s a political or societal problem, we’re not going to fix it (at all) by blaming Facebook.



Comments on “Stop Blaming Algorithms For Misinformation And Threats To Democracy; The Real Problem Is Societal”


This comment has been deemed insightful by the community.
Wendy Cockcroft (profile) says:

Re: Section 230 puts the blame on posters, not platforms

Section 230 is not responsible for that; the gangs and their members are. Litmus test: who uploads and posts the internet fighting items — the platform or the gang members?

If you’re blaming the platform for not taking the posts down quickly enough you’re basically asking for every internet user to have their comments checked before their posts go live. That means yours. Is that what you want?

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

Look up "cyber gangbanging" for why Section 230 is killing people.

They did that before the internet. Why is the internet and Section 230 suddenly responsible for it? As Wendy pointed out, it’s not the platforms posting that stuff online, it’s the gangs. If the gangs stop, the problem goes away.

Gangs now fight on the internet and then kill each other IRL.

Uh, what? Fighting online has no direct effect on real life. The only thing that would get someone killed in real life is if someone steps away from their computer and goes out and kills someone. That’s their choice; the internet has nothing to do with it.

This comment has been deemed insightful by the community.
Thad (profile) says:

If there’s a lot of bad stuff on Facebook, it’s because that’s what its users want.

But there’s certainly a feedback loop at work, too. Stories gain prominence because a lot of people are clicking on them, but a lot of people are clicking on them because they’re displayed prominently.
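That feedback loop is easy to make concrete. Here’s a minimal, purely illustrative Python simulation (every number is invented, and nothing here reflects any real platform’s ranking code) of placement driving clicks and clicks driving placement:

```python
import random

# Toy model of the prominence/click feedback loop. All numbers invented.
random.seed(42)
scores = {f"story_{i}": 1.0 for i in range(10)}  # ten identical stories

for _ in range(10_000):
    # An item's chance of being displayed is proportional to its score.
    shown = random.choices(list(scores), weights=list(scores.values()))[0]
    # Flat 10% click-through for every story once it's actually seen,
    # so any divergence below comes from placement alone, not quality.
    if random.random() < 0.10:
        scores[shown] += 1.0  # each click buys more future prominence

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, int(score))
```

Even though every story here is equally “clickable,” the scores diverge sharply: a few early random clicks compound into lasting prominence, which is exactly the loop described above.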

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

The whole premise that if it’s on Facebook, people must want it, is questionable. (Ads would be an obvious counterexample.) Facebook’s algorithms are designed to show people whatever will make them interact with FB as much as possible, not what will provide them useful knowledge or improve society. They’re also flawed like many algorithms, in that they only work with data that’s easy to collect. FB can easily know whether you clicked. They don’t know whether it provided enlightenment, or what the long-term effect on society will be.

FB et al. aren’t the problem, but their algorithms likely play a part. And there’s no fundamental reason why everyone’s Facebook interactions need to be governed by the same algorithm, or even by an algorithm written by FB.
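That last point can be sketched concretely. Below is a hypothetical, simplified feed-ranking interface; none of these names correspond to any real Facebook API. It just illustrates that “the algorithm” is a swappable component rather than a fact of nature:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    timestamp: float
    predicted_engagement: float  # the easy-to-collect signal platforms optimize

# A ranker is just a function from posts to reordered posts.
Ranker = Callable[[List[Post]], List[Post]]

def engagement_ranker(posts: List[Post]) -> List[Post]:
    """Roughly what an ad-funded platform optimizes by default."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def chronological_ranker(posts: List[Post]) -> List[Post]:
    """One of many alternatives a user or outside researcher might supply."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def build_feed(posts: List[Post], ranker: Ranker) -> List[Post]:
    # The pipeline doesn't care which ranker runs; letting users choose
    # one is an interface decision, not a technical impossibility.
    return ranker(posts)

feed = build_feed(
    [Post("alice", "long essay", 1.0, 0.2), Post("bob", "outrage bait", 2.0, 0.9)],
    chronological_ranker,  # the user's choice, not the platform's
)
```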

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:

What algorithm would you write that would solve the problem of not knowing whether a bit provided enlightenment or would improve society? How would you implement that?

Stating a problem doesn’t automatically make that problem solvable with a computer. Some problems cannot be solved by a computer.

This comment has been deemed insightful by the community.
crade (profile) says:

Re: Re: Re:2 Re:

Which is exactly what happened. People chose and continue to choose. That’s how the current market leaders became and remain such. As soon as someone else does it better, Google adapts or becomes Yahoo or Lycos, and Facebook adapts or becomes GeoCities or MySpace.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:3 Re:

Which is exactly what happened. People chose and continue to choose. That’s how the current market leaders became and remain such.

That’s "choice" in the same way as choosing an ISP or a politician. A thin veneer of competition is not the same as meaningful control. This case is actually worse—unlike ISPs, I can’t even switch platforms unless the people I communicate with also switch.

Anonymous Coward says:

Re: Re: Re: Re:

Stating a problem doesn’t automatically make that problem solvable with a computer. Some problems cannot be solved by a computer.

That’s the point. Determining what people want (or more abstractly what they should want) is not solvable by computer. So we shouldn’t be claiming that Facebook shows people what they want, as if that is a solved problem.

As for the more general "what would you do" question, it’s an open research question I’m not qualified to write an algorithm for. But right now, only Facebook employees even have the chance. Letting university researchers provide new algorithms for people to try on their Facebook data could produce valuable research. Allowing interface tweaks, like feedback more nuanced than "like" (e.g., ranking education and humor separately), could help.
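The “more nuanced than like” idea is also easy to sketch. A toy example, with invented rating axes and weights, in which posts are rated on education and humor separately and each reader weights those axes to taste:

```python
from dataclasses import dataclass
from typing import Dict

# Invented axes and weights, purely to illustrate multi-axis feedback.
@dataclass
class Post:
    text: str
    ratings: Dict[str, float]  # axis name -> average community rating, 0..1

def score(post: Post, weights: Dict[str, float]) -> float:
    # Each reader's own weights decide how the axes combine.
    return sum(weights.get(axis, 0.0) * value
               for axis, value in post.ratings.items())

posts = [
    Post("explainer thread", {"education": 0.9, "humor": 0.2}),
    Post("meme",             {"education": 0.1, "humor": 0.8}),
]
scholar = {"education": 3.0, "humor": 1.0}
joker   = {"education": 1.0, "humor": 3.0}
print(max(posts, key=lambda p: score(p, scholar)).text)  # explainer thread
print(max(posts, key=lambda p: score(p, joker)).text)    # meme
```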

This comment has been deemed insightful by the community.
Rocky says:

Re: Re:

The root of the problem is NOT people clicking and believing stupid stuff. The root IS societal, since we as a society don’t seem to be able to educate people in factual and critical thinking.

As long as a large part of the population doesn’t get the "mental" tools to dismiss bullshit, this problem will persist regardless of any algorithms deployed.

Rocky says:

Re: Re: Re: Re:

False equivalence, Thad.

To get a driving license you actually have to take a test, which weeds out the people who are totally unqualified to use a vehicle. And if you do something particularly stupid and cause an accident you can lose your license or even your freedom.

The worst that can happen on the internet is that someone moderates your rant or kicks you off their platform, all the while you are consuming junk science and fake news.

Thad (profile) says:

Re: Re: Re:2 Re:

False equivalence, Thad.

It’s not equivalence, it’s an analogy.

To get a driving license you actually have to take a test, which weeds out the people who are totally unqualified to use a vehicle. And if you do something particularly stupid and cause an accident you can lose your license or even your freedom.

All of which is true but completely irrelevant to the point of the analogy. Which is that just because something is some kind of intermediary step and not the root cause of a problem doesn’t mean that there’s no value in anticipating and mitigating problems that occur at that intermediary step.

Do you have any criticisms of that statement, which was the entire purpose of the analogy? Or are you just going to nitpick ways in which the two things are dissimilar but which are completely irrelevant to the point of comparison? Because if all you can address is the latter, that suggests you can’t think of any criticisms of the former.

The worst that can happen on the internet is that someone moderates your rant or kicks you off their platform, all the while you are consuming junk science and fake news.

Wait, are you arguing that platforms should moderate, or are you arguing that they shouldn’t bother because the problem is human nature, not social media?

Scary Devil Monastery (profile) says:

Re: Re: Re:3 Re:

"Which is that just because something is some kind of intermediary step and not the root cause of a problem doesn’t mean that there’s no value in anticipating and mitigating problems that occur at that intermediary step."

There is no such value when the two media can’t be held up as analogues of one another.

Example: car manufacturers can build ABS brakes, air bags, seat belts, parking cameras, collision detectors, and GPS into their cars.

Let me know when a social platform has the ability to tell you when you’re getting too close to another poster, about to collide with another, and can mitigate or prevent the damage from the resulting impact without a massive amount of 3rd-party interference.

OTOH, a social platform can decide you won’t be posting on their platform anymore. Let me know when Ford can sell you a car and then tell you you aren’t going to drive it any longer because they think you’re a twit.

In the one case you’re operating a device you bought and own with a transparent set of safeguards included.
In the other case you’re walking into another party’s living room and using a bullhorn to trash talk assorted 3rd parties.

So let’s be clear about this: comparing car OEMs with online social platforms is not a good analogy.

"Wait, are you arguing that platforms should moderate, or are you arguing that they shouldn’t bother because the problem is human nature, not social media?"

Why would this be an either/or from your view?

Yes, it’s a human problem, not the problem of social media.
Yes, platforms should probably moderate.

If anything a more proper analogy is that of a bar where people sometimes brawl in the corners and the bouncers are kept busy trying to keep the guy who keeps taking off his pants and shitting on the dance floor out.

Depending on the type of bar there are varying degrees of corners, brawls, bouncers, and dance floor shitters.

This comment has been deemed insightful by the community.
Comboman says:

... and guns don't kill people.

While technically true, this argument is about as intellectually honest as the "Guns don’t kill people; People kill people" argument that the gun lobby uses against gun regulation. Every tool has the ability to be misused, but some are far more dangerous and prone to misuse than others. The answer for both tech and guns is more oversight and regulation.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: ... and guns don't kill people.

While technically true, this argument is about as intellectually honest as the "Guns don’t kill people; People kill people" argument that the gun lobby uses against gun regulation.

The fundamental idea of the argument is sound. The gun is just a tool. It doesn’t choose what to shoot at. The person with the murderous intent is the actual problem. This article is making a similar point. The algorithm is the tool. The person with the malicious intent is the actual problem.

That said, I don’t agree with how the gun lobby chooses to use the argument, because it seems they only proceed in a half-measure. "Guns don’t kill people, so guns should be unrestricted." Unfortunately, this ignores the "People kill people" side of the argument, for which (as possible examples) we could identify and correct cultural/societal issues that contribute to violent crime, try to provide a troubled person with the help they need so that they never resort to killing, and/or recognize that a particular person would simply be far too dangerous in possession of a gun, so they shouldn’t be allowed to have one.

urza9814 (profile) says:

Re: Re: Re: ... and guns don't kill people.

…are you implying that people wouldn’t be killing each other if nobody had guns?

The gun analogy seems perfect to me. People kill each other. Some of them use guns, some of them don’t. Nobody buys a gun and goes "Well, I wasn’t planning on killing anyone before I bought this, but I guess I’ll have to now!"

People believe awful things. Some of them get/post those awful things from/to social media, some of them don’t. Nobody gets on social media and goes "Well, I wasn’t a white supremacist before, but now that I’ve got Facebook I sure will be!"

This comment has been deemed insightful by the community.
crade (profile) says:

Re: Re: Re:2 ... and guns don't kill people.

No. From what I can tell from the summary in this post, the article seems to be contending that people are NOT being radicalized through social media. It seems to be claiming that not only is social media not "to blame," it isn’t much of a factor at all: it isn’t really having much effect on whether people are radicalized or not, so it’s not even a tool that people are using for radicalization (the way a gun is a tool used to kill people). Instead it’s just a tool that reveals how people are.

"abjectly fake news “seems to have played a relatively small role in the overall scheme of things"

"A recent study by academics in Canada, France, and the US indicates that online media use actually decreases support for right-wing populism in the US"

Anonymous Coward says:

the real problem, in my opinion, is governments! they are so scared of the people finding out what the lying, cheating, self-interested, two-faced fuckers in them are up to and whose bidding they are doing, who they are helping instead of the people, simply to keep a certain few in total control of the planet, not just countries! nothing other than greed, control and the fear of change is responsible for the crap we’re in now, with a world getting quicker at destroying itself and us along with it! what good is it going to do being the richest guy on the Planet, having billions of dollars, when there is NO PLANET? and what makes things worse is that not only do we have the means to hold the Planet together, for the good of all, but those governments and ‘friends’ have employed the world’s security services to help them blame everyone else but them for all the world’s troubles, and then continue to fuck things up for all! where the hell is the sense in that? unless aliens have handed over the tech to enable fleeing to another Planet (so we can fuck that one up as well!) those who will leave need to remember: having all the money there is won’t clean the crappers out, grow the food or mend the sick. with none of us ‘plebs’ with them, they’ll have to get their own hands dirty for a change!!

Koby (profile) says:

Blame

With regard to the 2016 election, not blaming Facebook for allowing a few memes published by Russian trolls means that Democrats would blame themselves for their poor candidates and ideas. In the future, not blaming Facebook for engaging in censorship will mean that Republicans must blame themselves for their poor candidates and ideas. These two things seem so unlikely that I just don’t foresee it happening. It’s just too easy to "blame Facebook".

Anonymous Coward says:

Re: Blame

With regard to the 2016 election, not blaming Facebook for allowing a few memes published by Russian trolls means that Democrats would blame themselves for their poor candidates and ideas.

Exactly! They took perhaps the most toxic candidate of all time — someone so abjectly awful that she ended up losing to freaking Donald Trump of all people! — allowed her to flat-out steal a primary that Bernie Sanders was the clear winner of before the DNC put its thumb on the scale, and then were somehow surprised when she lost. And they’ve spent the last 3 and a half years blaming anything and everything else, frantically trying to avoid the truth: they lost because they ran a horrible candidate who never should have been there in the first place.

Stephen T. Stone (profile) says:

Re: Re:

You can’t blame only Hillary and the DNC for the failures of 2016. Doing so implies that things like the Republicans running interference for Trump with the investigations into Clinton, the Russian mis- and disinformation campaigns, and the James Comey “October surprise” had absolutely no influence in the results of that election. To believe such implications is irresponsible at best and willfully ignorant at worst.

I’m no fan of Hillary Clinton or the DNC. They ran a bad campaign against a candidate that should have been easy to defeat. And yes, those failings were a big factor in Clinton’s loss — but they were not the only factor.

This comment has been flagged by the community.

Zof (profile) says:

Re: Re: Re: Re:

The biggest factor in Hillary Clinton losing the most winnable election in history, I think, is all the lies: like when CNN got caught lying for her as half the DNC convention walked out, and tried to pretend it wasn’t happening despite it being all over social media with video and picture evidence. Oh, and she would just be President right now if she had removed her head from her huge ass and made Sanders her VP pick. No question.

Scary Devil Monastery (profile) says:

Re: Re: Blame

"They took perhaps the most toxic candidate of all time — someone so abjectly awful that she ended up losing to freaking Donald Trump of all people!"

Not that bad. Hillary wasn’t worse than most other presidential candidates in the last twenty years…but certainly not much better either.

The primary problem was that there are plenty of young, dedicated, skilled female politicians in the democratic party they could have pushed, but instead they went for the one most guaranteed never to challenge party policy.

She was pushed, in other words, because the dems had nothing except the vague idea that since a black president worked they might as well go for a repeat performance with a woman.

Bernie would have won them the White House for two terms without issue, but the man’s incorruptible and stubborn, which means that to the Democratic party he’s the choice which must never be allowed to win.

Anonymous Coward says:

Technology amplifies human actions, including actions that cause harm. The fallibility of human nature, and our capacity for evil has proven to be an intractable problem that we’ve been dealing with for centuries — and human nature hasn’t really changed that much, for all that effort. But it’s certainly rhetorically handy to insist that we fundamentally improve human behavior before making any other attempt to reduce harm. But apparently harm reduction is completely off the table until we figure out how to make people not be greedy, selfish, and dishonest.

Ultimately, that’s what baffles me about Masnick’s constant drumbeat of do-nothingism: his utter and complete dismissal of any attempt to reduce the harm that results from the technological amplification of bad actions by bad actors. Why promote this view? Why insist on digging in to preserve a harmful status quo? Just what is your game, Masnick? Simple motivated reasoning? An attempt to salvage a techno-optimist philosophy left in tatters by the world’s steady decline into a (technologically-enabled) dystopian future?

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

his utter and complete dismissal of any attempt to reduce the harm that results from the technological amplification of bad actions by bad actors

If you can think of a way to reduce said harm that doesn’t…

  1. cause more harm than it prevents
  2. infringe upon civil rights
  3. attack a symptom instead of the disease

…well, champ, now would be the time to offer your thoughts.

Anonymous Coward says:

Re: Re: Re:

I should probably have mentioned that I generally agree in principle with the idea that over-reacting can go badly. My chief problem is with Masnick’s obstinacy on this specific subject.

In terms of what specific actions I’d take (assuming you’re asking in good faith, and this isn’t some attempt to draw me into some round-and-round of deflection and nitpicking) — well, that’s really a better question for someone with access to a thinktank, but off the top of my head, it might be worth revisiting Facebook’s 2011 consent decree and the many privacy violations they’ve committed since then. They’ve done a lot of wrong (as Masnick has repeatedly stated) but they have never faced anything approaching a serious consequence for any of it. Tiny fines (relative to their revenues anyway) can be easily brushed off; you’ve heard of nuisance lawsuits? Well, Facebook is big enough to think in terms of "nuisance regulations". But we’ve got to start somewhere — Masnick seems not to even want to start.

What’s so galling about this is that Techdirt has historically been so good about holding ISPs, Telcos, and copyright monopolists to account, but social media gets a huge shrug for some reason. I hate to see a valuable source of news and insight periodically interrupted by these plaintive defenses of massive companies that already have armies of lawyers and PR flacks working on that job.

bob says:

Re: Re: Re: Re:

Doing something just for the sake of doing something is a waste of resources, especially when your limited resources have to be spread between multiple issues. So doing nothing might be a better choice: doing nothing allows you to instead concentrate on other, bigger issues with your limited resources.

I like your idea of actually having meaningful punishment for companies that abuse their power/position. But from what I’ve seen there isn’t currently anyone in a regulatory position to hand out those meaningful punishments.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Re: Re:

In terms of what specific actions I’d take (assuming you’re asking in good faith, and this isn’t some attempt to draw me into some round-and-round of deflection and nitpicking) — well, that’s really a better question for someone with access to a thinktank, but off the top of my head, it might be worth revisiting Facebook’s 2011 consent decree and the many privacy violations they’ve committed since then.

The 2019 consent decree literally did exactly that. I wrote about it and did a two-part podcast series literally walking through the new consent decree (which is an update to the earlier consent decree, directly calling out the privacy violations since the original one). So, uh… done.

They’ve done a lot of wrong (as Masnick has repeatedly stated) but they have never faced anything approaching a serious consequence for any of it. Tiny fines (relative to their revenues anyway) can be easily brushed off; you’ve heard of nuisance lawsuits? Well, Facebook is big enough to think in terms of "nuisance regulations". But we’ve got to start somewhere — Masnick seems not to even want to start.

$5 billion is not a tiny fine. It is more than all other FTC fines of that nature combined. Separately, the new consent decree has many, many more parameters that are very much in the vein of "serious consequences." We went over this line by line in the podcast.

What’s so galling about this is that Techdirt has historically been so good about holding ISPs, Telcos, and copyright monopolists to account, but social media gets a huge shrug for some reason.

Because there needs to be actual evidence — and so far I haven’t seen evidence, or an explanation that makes sense, of how any of these approaches actually reduces harm rather than locking in the giants.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re:

Technology amplifies human actions, including actions that cause harm.

Agreed.

The fallibility of human nature, and our capacity for evil has proven to be an intractable problem that we’ve been dealing with for centuries — and human nature hasn’t really changed that much, for all that effort. But it’s certainly rhetorically handy to insist that we fundamentally improve human behavior before making any other attempt to reduce harm.

I did not say that we should improve human nature before we reduce harm. I questioned how much "harm" is actually caused by tech.

But apparently harm reduction is completely off the table until we figure out how to make people not be greedy, selfish, and dishonest.

That is not what I said, nor implied. I said that the problem is that we blame technology for human problems — and if you focus "harm reduction" on technology, it will inevitably fail, as it is not targeting the actual problem.

Ultimately, that’s what baffles me about Masnick’s constant drumbeat of do-nothingism

I have never, ever suggested "do-nothingism."

his utter and complete dismissal of any attempt to reduce the harm that results from the technological amplification of bad actions by bad actors

I have not "dismissed" such attempts. I have pointed out why they will often cause more damage than good. Present me with a plan that ACTUALLY reduces harm, and I’m eager to hear it out. I’ve talked, for example, about why I think the protocols approach would reduce any harm. I also, similarly, appreciate Cory Doctorow’s idea of adverse interoperability.

My complaint is that most "harm reduction" seems to actually be designed in a manner to simply lock up the market for Facebook and Google.

Why insist on digging in to preserve a harmful status quo?

I have never done this.

Scary Devil Monastery (profile) says:

Re: Re:

"But apparently harm reduction is completely off the table until we figure out how to make people not be greedy, selfish, and dishonest."

As usual, Baghdad Bob, all you’ve got here is a wordwall wrapped around the conceptual turd that free speech is bad and people shouldn’t be trusted with it.

No, harm reduction is very much ON the table, but your idea of having the platforms be responsible for everything 3rd parties might say isn’t it.

The analogy to your oft-touted suggestions would be to have people selectively banned from speaking unless they could find someone to verify that what they intended to say wasn’t offensive to a vested interest.

You really think you will EVER try to post that garbage around here and not have saner minds correcting you immediately, Bobmail? Or are you wearing the Jhon Smith jacket today?

This comment has been deemed insightful by the community.
That Anonymous Coward (profile) says:

An entire generation raised with the ideals of nothing is your fault, someone else should fix it, & they should pay for your idiocy.

Well, that man wouldn’t have mailed bombs to congresspeople if FB had limited his exposure to all the hateful rhetoric!!
Then they launch into screaming fits when someone suggests their bleach-enemas-cure-autism group gets flagged.

No one is coming to save you, stop expecting it.
No one is responsible for you, except you.

Yes, it’s hard to learn to filter all the information coming at you, but refusing to try & demanding tech magically fix it for you leads to more stupid people.

People scream at Twitter b/c they let the nazis, racists, etc etc roam free… they mass report them… then are SHOCKED just shocked when they end up mass reported.

It is not a game you can "win". You don’t have to crush your enemies underfoot and hear the lamentations of their women.
But human nature is "me me me me me" (disbelieve? watch people trying to merge into traffic & the battle of wills over someone feeling that someone getting ahead of them means they lost… so they will cause larger traffic snarls to "win". They end up making themselves even later/slower but inside they think they won b/c they stayed ahead of the other guy)

It is easier to blame tech than to admit some people suck & move on.
It’s a war & winning is all that matters… even if winning means creating more tools that can be used against us in ways we never considered & then whine about.

Stop abdicating responsibility outside of yourself for yourself.

This comment has been deemed insightful by the community.
That Anonymous Coward (profile) says:

Re: Re: Get Off My Lawn

Except I have the receipts….

It’s not MY fault I drove recklessly.

https://www.techdirt.com/articles/20180613/16333340033/section-230-cant-save-snapchat-lawsuit-involving-speed-filter.shtml

Apple has a patent, they should pay us.

https://www.techdirt.com/articles/20161229/10472736367/victims-car-crash-sue-apple-not-preventing-distracted-driver-hitting-their-vehicle.shtml

Happy Meals made my kids fat, not me for refusing to parent them and say no.

https://www.latimes.com/archives/la-xpm-2010-nov-02-la-fi-happy-meals-20101103-story.html

You have the deep pockets!!! PAY ME for what my crazy Ex did!!

https://www.techdirt.com/articles/20190118/01175141419/herrick-v-grindr-section-230-case-thats-not-what-youve-heard.shtml

It’s not MY fault I slept with the 13 yr old!!!

https://www.techdirt.com/articles/20150316/17500630332/no-you-cant-sue-grindr-because-it-hooked-you-up-with-13-year-old-sex.shtml

Its your fault, pay us!!!!!!!!!

https://www.techdirt.com/articles/20191209/20241743537/losing-streak-continues-litigants-suing-social-media-companies-over-violence-committed-terrorists.shtml

Any Backpage lawsuit. We were trafficked!!!!!
No no one from the company did it, but they have money pay us!!

Let’s sue the hotels!! They didn’t pimp us out, but our former pimp doesn’t have lots of cash. We never asked for help, but they should have known we needed it.

https://www.nhpr.org/post/victim-sex-trafficking-files-federal-suit-against-hotel-chains#stream/0

ECA (profile) says:

FB

FB is interesting, and there are options in the program to NOT see certain adverts.. Even YT has them.
Throw out a few adverts to see the response, and if it’s passed around, then see if others like it.

The biggest thing about the net comes with the idea that it’s interactive. And if there is an address to send comments, DO IT..
Those sites that lock down when you go to them and demand you register and pay money are idiots. And sites that don’t listen to ‘customers’ or give them access to email or phone are even more idiotic.

This comment has been deemed insightful by the community.
urza9814 (profile) says:

The obvious question

I’m a bit curious about the methodology here and what precisely they’re looking at.

Facebook’s algorithm is obviously tuned to provide whatever will keep you on Facebook. That’s profitable for them. In a sense, that’s giving you what you want. But what you want in order to keep browsing Facebook is not necessarily the same as what you want in life in general. Someone with a strong enough compulsion to try to correct idiots will stay on Facebook forever if you keep feeding them posts from idiots, but that probably isn’t actually how they want to spend the rest of their life. If you start from an assumption that what people want is exactly what they click and spend time viewing, then you’re already measuring it wrong.

So, are the studies mentioned measuring what content people actually desire to consume, or are they measuring what content will keep people tethered to their current activity? I don’t think these are the same thing, and if they’re measuring the same (incorrect) value that social media optimizes for, then obviously their research would indicate that social media isn’t the issue.

People can have conflicting desires. People like to be lazy; people also like the sense of fulfillment that comes from being productive. People like to eat double bacon cheeseburgers but also want to be fit and healthy. These things don’t have to create desires out of nothing in order to be harmful; they can do plenty of damage simply by amplifying the parts of yourself that you’d rather suppress.

That’s not to say I’m in favor of banning social media or anything, although that IS why I haven’t really touched it myself in 2-3 years. In my ideal world, we’d all be using diaspora* or something, and could experiment a lot more in terms of what truly makes a good social networking platform. But if we’re going to stick with these monopolistic walled gardens, there does need to be some regulation. They’re looking very much like a drug to me right now, so maybe we ought to regulate them as one. Not Schedule I — nothing should be regulated like Schedule I — but maybe more like Advil.
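The measurement gap described here is simple to illustrate. A toy example with invented data, showing how the same interaction log gives two different answers to "what do users want?" depending on which column you optimize:

```python
# Invented log entries: (content, minutes_spent, self_reported_satisfaction)
log = [
    ("arguing with idiots",   45, 1),
    ("outrage bait",          30, 2),
    ("friend's baby photos",   5, 5),
    ("local news roundup",     8, 4),
]

by_engagement = max(log, key=lambda row: row[1])    # what keeps you tethered
by_satisfaction = max(log, key=lambda row: row[2])  # what you say you wanted

print("engagement metric says users want:", by_engagement[0])
print("satisfaction metric says users want:", by_satisfaction[0])
# An algorithm tuned on minutes_spent "gives people what they want" only
# under a definition of "want" that the users themselves might not endorse.
```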

This comment has been flagged by the community.

laminar flow (profile) says:

If people weren’t so credulous and gullible there wouldn’t be a problem. Politicians are attacking tech because they don’t want to acknowledge this reality. The reason is obvious: politicians, their enablers in the media, and the government in general need such an unquestioning public to swallow the endless deluge of bullshit they produce with as little resistance as possible.

Christenson says:

Forbidden Planet Movie

All this reminds me of the 1956 movie Forbidden Planet…

… in which the Krell mind machine gave its owners unlimited power to do anything… without moral consequences… and the Krell were destroyed overnight.

The internet has the same problem… it amplifies lots of communication and lots of biases, both good and bad, and we haven’t yet figured out how to deal with that as the net transitions from a small elite to the entire world. On top of that, we have a generation (at least in the US) growing up poorer than its baby boomer parents, so this is a recipe for trouble.

Sometimes it’s hard to tell if the cops are getting worse or our information is getting better, for one simple example…

This comment has been flagged by the community.

Edward Bernays says:

what its users "want"

"If we understand the mechanisms and motives of the group mind, it is now possible to control and regiment the masses according to our will without their knowing it In almost every act of our daily lives, whether in the sphere of politics or business, in our social conduct or our ethical thinking, we are dominated by the relatively small number of persons who understand the mental processes and social patterns of the masses. It is they who pull the wires which control the public mind."
-Edward Bernays

"there’s a lot of bad stuff on Facebook, it’s because that’s what its users want"

-Mike Masnick
