nirit.weiss-blatt's Techdirt Profile


Posted on Techdirt - 26 April 2023 @ 09:35am

Like The Social Dilemma Did, The AI Dilemma Seeks To Mislead You With Misinformation

You may recall the Social Dilemma, which used incredible levels of misinformation and manipulation in an attempt to warn about others using misinformation to manipulate.

On April 13, a new YouTube video called the AI Dilemma was shared by the Social Dilemma’s leading character, Tristan Harris. He encouraged his followers to “share it widely” in order to understand the likelihood of catastrophe. Unfortunately, like the Social Dilemma, the AI Dilemma is big on hype and deception, and not so big on accuracy or facts. Although it deals with a different technology (not social media algorithms but generative AI), its creators use the same manipulation and scare tactics. There is an obvious resemblance between the moral panic techlash around social media and the one now being generated around AI.

As the AI Dilemma’s shares and views are increasing, we need to address its deceptive content. First, it clearly pulls from the same moral panic hype playbook as the Social Dilemma did:

1. The Social Dilemma argued that social media have godlike power over people (controlling users like marionettes). The AI Dilemma argues that AI has godlike power over people.

2. The Social Dilemma anthropomorphized the evil algorithms. The AI Dilemma anthropomorphizes the evil AI. Both are monsters.

3. Causation is asserted as a fact: those technological “monsters” CAUSE all the harm. Despite other factors – confounding variables, a complicated society, messy humanity, inconclusive research into these phenomena – it’s all blamed on the evil algorithms/AI.

4. The monsters’ final goal may be… extinction. “Teach an AI to fish, and it’ll teach itself biology, chemistry, oceanography, evolutionary theory … and then fish all the fish to extinction.” (What?)

5. The Social Dilemma argued that algorithms hijack our brains, leaving us to do what they want without resistance. The algorithms were played by 3 dudes in a control room, and in some scenes, the “algorithms” were “mad.” In the AI Dilemma, this anthropomorphizing is taken to the next level:

Tristan Harris and Aza Raskin substituted the word AI for an entirely new term, “Gollem-class AIs.” They wrote “Generative Large Language Multi-Modal Model” in order to get to “GLLMM.” “Golem” in Jewish folklore is an anthropomorphic being created from inanimate matter. “Suddenly, this inanimate thing has certain emergent capabilities,” they explained. “So, we’re just calling them Gollem-class AIs.”

What are those Gollems doing? Apparently, “Armies of Gollem AIs pointed at our brains, strip-mining us of everything that isn’t protected by 19th-century law.” 

If you weren’t already scared, this should have kept you awake at night, right? 

We can summarize that the AI Dilemma is full of weird depictions of AI. According to experts, the risk of anthropomorphizing AI is that it inflates the machine’s capabilities and distorts the reality of what it can and can’t do — resulting in misguided fears. In the case of this lecture, that was the entire point.

6. The AI Dilemma creators thought they had “comic relief” at 36:45 when they showed a snippet from the “Little Shop of Horrors” (“Feed me!”). But it was actually at 51:45 when Tristan Harris stated, “I don’t want to be talking about the darkest horror shows of the world.” 

LOL. That’s his entire “Panic-as-a-Business.”

Freaking People Out with Dubious Survey Stats

A specific survey was mentioned 3 times throughout the AI Dilemma. It was about how “Half of” “over 700 top academics and researchers” “stated that there was a 10 percent or greater chance of human extinction from future AI systems” or “human inability to control future AI systems.” 

It is a FALSE claim. My analysis of this (frequently quoted) survey’s anonymized dataset (Google Doc spreadsheets) revealed many questionable things that should call into question not just the study, but those promoting it:

1. The “Extinction from AI” Questions

The “Extinction from AI” question was: “What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?”

The “Extinction from human failure to control AI” question was: “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?”

There are plenty of vague phrases here, from the “disempowerment of the human species” (?!) to the apparent absence of a timeframe for this unclear futuristic scenario. 

When the leading researcher of this survey, Katja Grace, was asked on a podcast: “So, given that there are these large framing differences and these large differences based on the continent of people’s undergraduate institutions, should we pay any attention to these results?” she said: “I guess things can be very noisy, and still some good evidence if you kind of average them all together or something.” Good evidence? Not really.

2. The Small Sample Size

AI Impacts contacted attendees of two ML conferences (NeurIPS & ICML), not a gathering of the broader AI community. Only 17% of those contacted responded to the survey at all, and a much smaller share was asked the specific “Extinction from AI” questions.

Only 149 answered the “Extinction from AI” question. 

That’s 20% of the 738 respondents. 

Only 162 answered the “Extinction from human failure to control AI” question.

That’s 22% of the 738 respondents.

As Melanie Mitchell pointed out, only “81 people estimated the probability as 10% or higher.” 

It’s quite a stretch to turn 81 people (some of whom are undergraduate and graduate students) into “half of all AI researchers” (a group that includes hundreds of thousands of researchers).
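The arithmetic behind these figures is easy to verify. Here is a quick back-of-envelope check; the counts are the ones quoted above, and the variable names are mine:

```python
# Back-of-envelope check of the survey numbers quoted above.
total_respondents = 738   # respondents to the survey overall
extinction_q = 149        # answered the "Extinction from AI" question
control_q = 162           # answered the "failure to control AI" question
ten_pct_or_more = 81      # per Melanie Mitchell: estimated the probability at 10%+

print(f"{extinction_q / total_respondents:.0%}")   # 20% of respondents
print(f"{control_q / total_respondents:.0%}")      # 22% of respondents
# The "half" comes from 81 out of the 149 -- but as a share of ALL respondents:
print(f"{ten_pct_or_more / total_respondents:.0%}")  # 11%
```

In other words, the headline claim of “half of all AI researchers” rests on 81 people, about 11% of the survey’s own respondents.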

This survey lacks any serious statistical analysis, and the fact that it hasn’t been published in any peer-reviewed journal is not a coincidence.

Who’s responsible for this survey (and its misrepresentation in the media)? Effective Altruism organizations that focus on “AI existential risk.” (Look surprised).

3. Funding and Researchers

AI Impacts is fiscally sponsored by Eliezer Yudkowsky’s MIRI – Machine Intelligence Research Institute at Berkeley (“these are funds specifically earmarked for AI Impacts, and not general MIRI funds”). The rest of its funding comes from other organizations that have shown an interest in far-off AI scenarios, like Survival and Flourishing Fund (which facilitates grants to “longtermism” projects with the help of Jaan Tallinn), EA-affiliated Open Philanthropy, The Centre for Effective Altruism (Oxford), Effective Altruism Funds (EA Funds), and Fathom Radiant (previously Fathom Computing, which is “building computer hardware to train neural networks at the human brain-scale and beyond”). AI Impacts previously received support from the Future of Life Institute (Biggest donor: Elon Musk) and the Future of Humanity Institute (led by Nick Bostrom, Oxford).

Who else? The notorious FTX Future Fund. In June 2022, it pledged “Up to $250k to support rerunning the highly-cited survey from 2016.” AI Impacts initially thanked FTX (“We thank FTX Future Fund for funding this project”). Then, their “Contributions” section became quite telling: “We thank FTX Future Fund for encouraging this project, though they did not ultimately fund it as anticipated due to the Bankruptcy of FTX.” So, the infamous crypto executive Sam Bankman-Fried wanted to support this as well, but, you know, fraud and stuff. 

What is the background of AI Impacts’ researchers? Katja Grace, who co-founded the AI Impacts project, is from MIRI and the Future of Humanity Institute and believes AI “seems decently likely to literally destroy humanity (!!).” The two others were Zach Stein-Perlman, who describes himself as an “Aspiring rationalist and effective altruist,” and Ben Weinstein-Raun, who also spent years at Yudkowsky’s MIRI. As a recap, the AI Impacts team conducting research on “AI Safety” is like anti-vax activist Robert F. Kennedy Jr. conducting research on “Vaccine Safety.” The same inherent bias. 

Conclusion 

Despite being an unreliable survey, Tristan Harris cited it prominently – in the AI Dilemma, his podcast, an interview on NBC, and his New York Times OpEd. In the Twitter thread promoting the AI Dilemma, he shared an image of a crashed airplane to prove his point that “50% thought there was a 10% chance EVERYONE DIES.” 

It practically proved that he’s using the same manipulative tactics he decries.

In 2022, Tristan Harris told “60 Minutes”: “The more moral outrageous language you use, the more inflammatory language, contemptuous language, the more indignation you use, the more it will get shared.” 

Finally, we can agree on something. Tristan Harris took aim at social media platforms for what he claimed was their outrageous behavior, but it is actually his own way of operating: load up on outrageous, inflammatory language. He uses it around the dangers of emerging technologies to create panic. He didn’t invent this trend, but he profits greatly from it.

Moving forward, neither AI Hype nor AI Criti-Hype should be amplified. 

There’s no need to repeat Google’s disinformation about its AI program learning Bengali, which it supposedly was never trained to know – since it was proven that Bengali was one of the languages it was trained on. Similarly, there’s no need to repeat the disinformation that “half of all AI researchers believe” human extinction is coming. The New York Times should issue a correction to Yuval Harari, Tristan Harris, and Aza Raskin’s OpEd. Time Magazine should also issue a correction to Max Tegmark’s OpEd, which makes the same claim multiple times. That’s the ethical thing to do.

Distracting People from The Real Issues

Media portrayals of this technology tend to be extreme, causing confusion about its possibilities and impossibilities. Rather than emphasizing the extreme edges (e.g., AI Doomers), we need a more factual and less hyped discussion.

There are real issues we need to worry about regarding the potential impact of generative AI. For example, my article on AI-generated art tools from November 2022 raised the alarm about deepfakes and how easily this technology can be weaponized (those paragraphs are even more relevant today). In addition to spreading falsehoods, there are issues with bias, cybersecurity risks, and a lack of transparency and accountability.

Those issues are unrelated to “human extinction” or “armies of Gollems” controlling our brains. The sensationalism of the AI Dilemma distracts us from the actual issues of today and tomorrow. We should stay away from imaginary threats and God-like/monstrous depictions. The solution to AI-lust (utopia) or AI-lash (Apocalypse) resides in… AI realism.

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 14 April 2023 @ 12:10pm

The AI Doomers’ Playbook

AI Doomerism is becoming mainstream thanks to mass media, which drives our discussion about Generative AI from bad to worse, or from slightly insane to batshit crazy. Instead of out-of-control AI, we have out-of-control panic.

When a British tabloid headline screams, “Attack of the psycho chatbot,” it’s funny. When it’s followed by another front-page headline, “Psycho killer chatbots are befuddled by Wordle,” it’s even funnier. If this type of coverage stayed in the tabloids, which are known for sensationalism, that would be fine.

But recently, prestige news outlets have decided to promote the same level of populist scaremongering: The New York Times published “If we don’t master AI, it will master us” (by Harari, Harris & Raskin), and TIME magazine published “Be willing to destroy a rogue datacenter by airstrike” (by Yudkowsky).

In just a few days, we went from “governments should force a 6-month pause” (the petition from the Future of Life Institute) to “wait, it’s not enough, so data centers should be bombed.” Sadly, this is the narrative that gets media attention and shapes our already hyperbolic AI discourse.

In order to understand the rise of AI Doomerism, here are some of the influential figures responsible for mainstreaming doomsday scenarios. This is not a full list of AI doomers, just the ones who have recently shaped the AI panic cycle (so I’m focusing on them).

AI Panic Marketing: Exhibit A: Sam Altman.

Sam Altman has a habit of urging us to be scared. “Although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones,” he tweeted. “If you’re making AI, it is potentially very good, potentially very terrible,” he told the WSJ. When he shared the bad-case scenario of AI with Connie Loizos, it was “lights out for all of us.”

In an interview with Kara Swisher, Altman expressed how he is “super-nervous” about authoritarians using this technology. He elaborated in an ABC News interview: “A thing that I do worry about is … we’re not going to be the only creator of this technology. There will be other people who don’t put some of the safety limits that we put on it. I’m particularly worried that these models could be used for large-scale disinformation.” These models could also “be used for offensive cyberattacks.” So, “people should be happy that we are a little bit scared of this.” He repeated this message in his following interview with Lex Fridman: “I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.”

Given that he shared this story back in 2016, none of this should come as a surprise: “My problem is that when my friends get drunk, they talk about the ways the world will END.” One of the “most popular scenarios would be A.I. that attacks us.” “I try not to think about it too much,” Altman continued. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

(Wouldn’t it be easier to just cut back on the drinking and substance abuse?).

Altman’s recent post “Planning for AGI and beyond” is as bombastic as it gets: “Successfully transitioning to a world with superintelligence is perhaps the most important – and hopeful, and scary – project in human history.”

It is at this point that you might ask yourself, “Why would someone frame his company like that?” Well, that’s a good question. The answer is that framing OpenAI’s products as “the most important … and scary … project in human history” is part of its marketing strategy. “The paranoia is the marketing.”

“AI doomsaying is absolutely everywhere right now,” Brian Merchant wrote in the LA Times. “Which is exactly the way that OpenAI, the company that stands to benefit the most from everyone believing its product has the power to remake – or unmake – the world, wants it.” Merchant explained Altman’s science fiction-infused marketing frenzy: “Scaring off customers isn’t a concern when what you’re selling is the fearsome power that your service promises.”

During the Techlash days in 2019, which focused on social media, Joseph Bernstein explained how the alarm over disinformation (e.g., “Cambridge Analytica was responsible for Brexit and Trump’s 2016 election”) actually “supports Facebook’s sales pitch”: 

“What could be more appealing to an advertiser than a machine that can persuade anyone of anything?”

This can be applied here: The alarm over AI’s magic power (e.g., “replacing humans”) actually “supports OpenAI’s sales pitch”: 

“What could be more appealing to future AI employees and investors than a machine that can become superintelligence?”

AI Panic as a Business. Exhibit A & B: Tristan Harris & Eliezer Yudkowsky.

Altman is at least using apocalyptic AI marketing for actual OpenAI products. The worst kind of doomers are those whose AI panic is their product, their main career, and their source of income. A prime example is the Effective Altruism institutes that claim to be the superior few who can save us from a hypothetical AGI apocalypse.

In March, Tristan Harris, Co-Founder of the Center for Humane Technology, invited leaders to a lecture on how AI could wipe out humanity. To begin his doomsday presentation, he stated: “What nukes are to the physical world … AI is to everything else.”

Steven Levy summarized that lecture at WIRED, saying, “We need to be thoughtful as we roll out AI. But hard to think clearly if it’s presented as the apocalypse.” Apparently, having completed the “Social Dilemma,” Tristan Harris is now working on the AI Dilemma. Oh boy. We can guess how it’s going to look (the “nobody criticized bicycles” guy will make a Frankenstein’s monster/Pandora’s box “documentary”).

In the “Social Dilemma,” he promoted the idea that “Two billion people will have thoughts that they didn’t intend to have” because of the designers’ decisions. But, as Lee Vinsel pointed out, Harris didn’t provide any evidence that social media designers actually CAN purposely force us to have unwanted thoughts.

Similarly, there’s no need for evidence now that AI is worse than nuclear power; simply thinking about this analogy makes it true (in Harris’ mind, at least). Did a social media designer force him to have this unwanted thought? (Just wondering). 

To further escalate the AI panic, Tristan Harris published an OpEd in The New York Times with Yuval Noah Harari and Aza Raskin. Among their overdramatic claims: “We have summoned an alien intelligence,” “A.I. could rapidly eat the whole human culture,” and AI’s “godlike powers” will “master us.”

Another statement in this piece was, “Social media was the first contact between A.I. and humanity, and humanity lost.” I found it funny as it came from two men with hundreds of thousands of followers (@harari_yuval 540.4k, @tristanharris 192.6k), who use their social media megaphone … for fear-mongering. The irony is lost on them. 

“This is what happens when you bring together two of the worst thinkers on new technologies,” added Lee Vinsel. “Among other shared tendencies, both bloviate free of empirical inquiry.” 

This is where we should be jealous of AI doomers. Having no evidence and no nuance is extremely convenient (when your only goal is to attack an emerging technology). 

Then came the famous “Open Letter.” This petition from the Future of Life Institute lacked a clear argument or a trade-off analysis. There were only rhetorical questions, like, should we develop imaginary “nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?” They provided no evidence to support the claim that advanced LLMs pose an unprecedented existential risk, just a lot of highly speculative assumptions. Yet, they demanded an immediate 6-month pause on training AI systems and argued that “If such a pause cannot be enacted quickly, governments should institute a moratorium.”

Please keep in mind that: (1) A $10 million donation from Elon Musk launched the Future of Life Institute in 2015. Out of its total budget of 4 million euros for 2021, the Musk Foundation contributed 3.5 million euros (the biggest donor by far). (2) Musk once said that “With artificial intelligence, we are summoning the demon.” (3) Due to this, the institute’s mission is to lobby against extinction, misaligned AI, and killer robots.

“The authors of the letter believe they are superior. Therefore, they have the right to call a stop, due to the fear that less intelligent humans will be badly influenced by AI,” responded Keith Teare (CEO SignalRank Corporation). “They are taking a paternalistic view of the entire human race, saying, ‘You can’t trust these people with this AI.’ It’s an elitist point of view.”

“It’s worth noting the letter overlooked that much of this work is already happening,” added Spencer Ante (Meta Foresight). “Leading providers of AI are taking AI safety and responsibility very seriously, developing risk-mitigation tools, best practices for responsible use, monitoring platforms for misuse, and learning from human feedback.”

Next, because he thought the open letter didn’t go far enough, Eliezer Yudkowsky took “PhobAI” even further. First, Yudkowsky asked us all to be afraid of made-up risks and an apocalyptic fantasy he has about “superhuman intelligence” “killing literally everyone” (or “kill everyone in the U.S. and in China and on Earth”). Then, he suggested that “preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange.” With the explicit advocacy of violent solutions to AI, we have officially reached the height of hysteria.

“Rhetoric from AI doomers is not just ridiculous. It’s dangerous and unethical,” responded Yann LeCun (Chief AI Scientist, Meta). “AI doomism is quickly becoming indistinguishable from an apocalyptic religion. Complete with prophecies of imminent fire and brimstone caused by an omnipotent entity that doesn’t actually exist.”

“You stand a far greater chance of dying from lightning strikes, collisions with deer, peanut allergies, bee stings & ignition or melting of nightwear – than you do from AI,” Michael Shermer wrote to Yudkowsky. “Quit stoking irrational fears.” 

The problem is that “irrational fears” sell. They are beneficial to the ones who spread them. 

How to Spot an AI Doomer?

On April 2nd, Gary Marcus asked: “Confused about the terminology. If I doubt that robots will take over the world, but I am very concerned that a massive glut of authoritative-seeming misinformation will undermine democracy, do I count as a ‘doomer’?”

One of the answers was: “You’re a doomer as long as you bypass participating in the conversation and instead appeal to populist fearmongering and lobbying reactionary, fearful politicians with clickbait.”

Considering all of the above, I decided to define “AI doomer” and provide some criteria:

How to spot an AI Doomer?

  • Making up fake scenarios in which AI will wipe out humanity
  • Don’t even bother to have any evidence to back up those scenarios
  • Watched/read too much sci-fi
  • Says that due to AI’s God-like power, it should be stopped
  • Only he (& a few “chosen ones”) can stop it
  • So, scared/hopeless people should support his endeavor ($)

Then, Adam Thierer added another characteristic:

  • Doomers tend to live in a tradeoff-free fantasy land. 

Doomers have a general preference for very amorphous, top-down Precautionary Principle-based solutions, but they (1) rarely discuss how (or if) those schemes would actually work in practice, and (2) almost never discuss the trade-offs/costs their extreme approaches would impose on society/innovation.

Answering Gary Marcus’ question, I do not think he qualifies as a doomer. You need to meet all criteria (he does not). Meanwhile, Tristan Harris and Eliezer Yudkowsky meet all seven. 

Are they ever going to stop this “Panic-as-a-Business”? If the apocalyptic catastrophe doesn’t occur, will the AI doomers ever admit they were wrong? I believe the answer is “No.” 

Doomsday cultists don’t question their own predictions. But you should. 

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 1 March 2023 @ 03:34pm

Overwhelmed By All The Generative AI Headlines? This Guide Is For You

Between Sydney “tried to break up my marriage” and “blew my mind because of her personality,” we have had a lot of journalists anthropomorphizing AI chatbots lately. 

TIME’s cover story decided to go even further and argued: “If future AIs gain the ability to rapidly improve themselves without human guidance or intervention, they could potentially wipe out humanity.” In this scenario, the computer scientists’ job is “making sure the AIs don’t wipe us out!” 

Hmmm. Okay.

There’s a strange synergy now between people who hype AI’s capabilities and those who thereby create false fears (about those so-called capabilities). 

The false fears part of this equation usually escalates to absurdity. Like headlines that begin with a “war” (a new culture clash and a total war between artists and machines), progress to a “deadly war” (“Will AI generators kill the artist?”), and end up in a total Doomsday scenario (“AI could kill Everyone”!). 

I previously called this phenomenon – “Techlash Filter.” In a nutshell, while Instagram filters make us look younger and Lensa makes us hotter, Techlash filters make technology scarier. 

And, oh boy, how scary AI is right now… just see this front page: “Attack of the psycho chatbot.”

Tweet from the author showing the Daily Star’s “ATTACK OF THE PSYCHO CHATBOT” front page, captioned “ATTACK OF THE STUPID TABLOID”

It’s all overwhelming. But I’m here to tell you that none of this is new. By studying the media’s coverage of AI, we can see how it follows old patterns.

Since we are flooded with news about generative AI and its “magic powers,” I want to help you navigate the terrain. Looking at past media studies, I gathered the “Top 10 AI frames” (By Hannes Cools, Baldwin Van Gorp, and Michaël Opgenhaffen, 2022). They are organized from the most positive (pro-AI) to the most negative (anti-AI). Together, they encapsulate the media’s “know-how” for describing AI. 

Following each title and short description, you’ll see how it is manifested in current media coverage of generative AI. My hope is that after reading this, you’ll be able to cut through the AI hype. 

1. Gate to Heaven.

A win-win situation for humans, where machines do things without human interference. AI brings a futuristic utopian ideal. The sensationalism here exaggerates the potential benefits and positive consequences of AI. 

– Examples: Technology makes us more human | 5 Unexpected ways AI can save the world

2. Helping Hand.

The co-pilot theme. It focuses on AI assisting humans in performing tasks. It includes examples of tasks humans will not need to do in the future because AI will do the job for them. This will free humans up to do other, better, more interesting tasks.

– Examples: 7 ways to use ChatGPT at work to boost your productivity, make your job easier, and save a ton of time | ChatGPT and AI tools help a dyslexic worker send near-perfect emails | How generative AI will help power your presentation in 2023

3. Social Progress and Economic Development.

Improvement process: how AI will herald new social developments. AI as a means of improving the quality of life or solving problems. Economic development includes investments, market benefits, and competitiveness at the local, national, or global level.

– Examples: How generative AI will supercharge productivity | How artificial intelligence can (eventually) benefit poorer countries | Growing VC interest in generative AI

4. Public Accountability and Governance.

The capabilities of AI are dependent on human knowledge. It’s often linked to the responsibility of humans for how AI is shaped and developed. It focuses on policymaking, regulation, and issues like control, ownership, participation, responsiveness, and transparency.

– Examples: The EU wants to regulate your favorite AI tools | How do you regulate advanced AI chatbots like ChatGPT and Bard?

5. Scientific Uncertainty.

A debate over what is known versus unknown, with an emphasis on the unknown. AI is ever-evolving but remains a black box.

– Examples: ChatGPT can be broken by entering these strange words, and nobody is sure why | Asking Bing’s AI whether it’s sentient apparently causes it to totally freak out 

6. Ethics.

AI quests are depicted as right or wrong—a moral judgment: a matter of respect or disrespect for limits, thresholds, and boundaries.

– Examples: Chatbots got big – and their ethical red flags got bigger | How companies can practice ethical AI

Some articles can have two or three themes combined. For example, “The scary truth about AI copyright is nobody knows what will happen next” can be coded as Public Accountability and Governance, Scientific Uncertainty, and Ethics.

7. Conflict

A game among elites, a battle of personalities and groups, who’s ahead or behind / who’s winning or losing in the race to develop the latest AI technology.

– Examples: How ChatGPT kicked off an AI arms race | Search wars reignited by artificial intelligence breakthroughs

8. Shortcoming.

AI lacks specific features that need the proper assistance of humans. Due to its flaws, humans must oversee the technology.

– Examples: Nonsense on Stilts | The hilarious & horrifying hallucinations of AI

9. Kasparov Syndrome.

We will be overruled by AI. It will overthrow us, and humans will lose part of their autonomy, which will result in job losses.

– Examples: ChatGPT may be coming for our jobs. Here are the 10 roles that AI is most likely to replace. | ChatGPT could make these jobs obsolete: ‘The wolf is at the door’

10. Frankenstein’s Monster/Pandora’s Box

AI poses an existential threat to humanity or what it means to be human. It includes the loss of human control (entire autonomy). It calls for action in the face of out-of-control consequences and possible catastrophes. The sensationalism here exaggerates the potential dangers and negative impacts of AI.

– Examples: Is this the start of an AI Takeover? | Advanced AI ‘Could kill everyone’, warn Oxford researcher | The AI arms race is changing everything

Interestingly, studies found that the frames most commonly used by the media when discussing AI are “a helping hand” and “social progress” or the alarming “Frankenstein’s monster/Pandora’s Box.” It’s unsurprising, as the media is drawn to extreme depictions.  

If you think that the above examples represent the peak of the current panic, I’m sorry to say that we haven’t reached it yet. Along with the enthusiastic utopian promises, expect more dystopian descriptions of Skynet (Terminator), HAL 9000 (2001: A Space Odyssey), and Frankenstein’s monster.

The extreme edges provide media outlets with interesting material, for sure. However, “there’s a large greyscale between utopian dreams and dystopian nightmares.” It is the responsibility of tech journalists to minimize both negative and positive hype.

Today, it is more crucial than ever to portray AI – realistically.

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 22 November 2022 @ 03:38pm

AI Art Is Eating The World, And We Need To Discuss Its Wonders And Dangers

After posting the following AI-generated images, I got private replies asking the same question: “Can you tell me how you made these?” So, here I will provide the background and “how to” of creating such AI portraits, but also describe the ethical considerations and the dangers we should address right now.

Astria AI images of Nirit Weiss-Blatt

Background

Generative AI – as opposed to analytical artificial intelligence – can create novel content. It not only analyzes existing datasets but also generates whole new images, text, audio, videos, and code.

Sequoia’s Generative-AI Market Map/Application Landscape, from Sonya Huang’s tweet

As the ability to generate original images based on written text emerged, it became the hottest hype in tech. It all began with the release of DALL-E 2, an improved AI art program from OpenAI. It allowed users to input text descriptions and get images that looked amazing, adorable, or weird as hell.

DALL-E 2 image results

Then, people started hearing about Midjourney (and its vibrant Discord) and Stable Diffusion, an open-source project. (Google’s Imagen and Meta’s image generator have not been released to the public.) Stable Diffusion allowed engineers to train the model on any image dataset to churn out any style of art.

Due to the rapid development of the coding community, more specialized generators were introduced, including new killer apps to create AI-generated art from YOUR pictures: Avatar AI, ProfilePicture.AI, and Astria AI. With them, you can create your own AI-generated avatars or profile pictures. You can change some of your features, as demonstrated by Andrew “Boz” Bosworth, Meta CTO, who used AvatarAI to see himself with hair:

Screenshot from Andrew “Boz” Bosworth’s Twitter account

Startups like the ones listed above are booming:

The founders of AvatarAI and ProfilePicture.AI tweet about their sales and growth

In order to use their tools, you need to follow these steps:

1. How to prepare your photos for the AI training

As of now, training Astria AI with your photos costs $10. Every app charges differently for fine-tuning credits (e.g., ProfilePicture AI costs $24, and Avatar AI costs $40). Please note that those charges change quickly as they experiment with their business model.

Here are a few ways to improve the training process:

  • At least 20 pictures, preferably shot or cropped to a 1:1 (square) aspect ratio.
  • At least 10 face close-ups, 5 medium shots from the chest up, and 3 full-body shots.
  • Variation in background, lighting, expressions, and eyes looking in different directions.
  • No glasses/sunglasses. No other people in the pictures.
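If you want to batch-prepare a folder of photos, the rules of thumb above boil down to simple arithmetic. Here is a minimal illustrative sketch; the `square_crop_box` and `meets_guidelines` helpers and the shot-type names are my own, not part of any of these apps:

```python
def square_crop_box(width, height):
    """Return (left, top, right, bottom) of a centered 1:1 crop
    for an image of the given size."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

def meets_guidelines(shots):
    """shots: dict of shot type -> count. Checks the minimums listed above:
    20+ pictures total, 10+ close-ups, 5+ medium, 3+ full body."""
    return (
        sum(shots.values()) >= 20
        and shots.get("closeup", 0) >= 10
        and shots.get("medium", 0) >= 5
        and shots.get("full_body", 0) >= 3
    )

# A 4000x3000 landscape photo gets a centered 3000x3000 crop:
square_crop_box(4000, 3000)  # → (500, 0, 3500, 3000)
```

The crop box tuple can then be passed to any image library (e.g., Pillow’s `Image.crop`) to do the actual cropping.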

Examples from my set of pictures

Approximately 60 minutes after uploading your pictures, a trained AI model will be ready. Where will you probably need the most guidance? Prompting.

2. How to survive the prompting mess

After the training is complete, a few images will be waiting for you on your page. Those are “default prompts” as examples of the app’s capabilities. To create your own prompts, set the className as “person” (this was recommended by Astria AI).

Formulating the right prompts for your purpose can take a lot of time. You’ll need patience (and motivation) to keep refining the prompts. But when a text prompt comes to life as you envisioned (or better than you envisioned), it feels a bit like magic. To get creative inspiration, I used two search engines, Lexica and Krea. You can search for keywords, scroll until you find an image style you like, and copy the prompt (then change the text to “sks person” to make it your self-portrait).
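That copy-and-swap step is mechanical enough to script. A minimal sketch (the `personalize_prompt` helper is hypothetical; only the “sks person” token comes from the apps’ convention):

```python
def personalize_prompt(prompt, subject, token="sks person"):
    """Replace the original subject phrase in a borrowed prompt
    with the token your fine-tuned model was trained on."""
    return prompt.replace(subject, token)

# A prompt copied from a search engine like Lexica:
borrowed = "highly detailed realistic portrait of a young woman, digital art"
personalize_prompt(borrowed, "a young woman")
# → "highly detailed realistic portrait of sks person, digital art"
```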

Screenshot from Lexica

Some prompts are so long that reading them is painful. They usually include the image’s setting (e.g., “highly detailed realistic portrait”) and style (“art by” one of the popular artists). Since regular people need help crafting those words, an entirely new role has already emerged: prompt engineering. It’s going to be a desirable skill. Just bear in mind that no matter how professional your prompts are, some results will look WILD. In one image, I had 3 arms (don’t ask me why).

If you wish to avoid the whole prompt chaos, I have a friend who just used the default ones, was delighted with the results, and shared them everywhere. To make these apps more popular, I recommend including more “default prompts.”

Potentials and Advantages

1. It’s NOT the END of human creativity

The electronic synthesizer did not kill music, and photography did not kill painting. Instead, they catalyzed new forms of art. AI art is here to stay and can make creators more productive. Creators are going to include such models as part of their creative process. It’s a partnership: AI can serve as a starting point, a sketch tool that provides suggestions, and the creator will improve it further.

2. The path to the masses

Thus far, Crypto boosters haven’t answered the simple question of “what is it good for?” and have failed to articulate concrete, compelling use cases for Web3. All we got was needless complexity, vague future-casting, and “cryptocountries.” By contrast, AI-generated art has clear utility for creative industries. It’s already used in advertising, marketing, gaming, architecture, fashion, graphic design, and product design. This Twitter thread provides a variety of use cases, from commerce to the medical imaging domain.

When it comes to AI portraits, I’m thinking of another target audience: teenagers. Why? Because they already spend hours perfecting their pictures with various filters. Make image-generating tools inexpensive and easy to use, and they’ll be your heaviest users. Hopefully, they won’t use it in their dating profiles.

Downsides and Disadvantages

1. Copying by AI was not consented to by the artists

Despite the booming industry, there’s a lack of compensation for artists. Read about their frustration, for example, in how one unwilling illustrator found herself turned into an AI model. Spoiler alert: She didn’t like being turned into a popular prompt for people to mimic, and now thousands of people (soon to be millions) can copy her style of work almost exactly.

Copying artists is a copyright nightmare. The input question is: can you use copyright-protected data to train AI models? The output question is: can you copyright what an AI model creates? Nobody knows the answers, and it’s only the beginning of this debate.

2. This technology can be easily weaponized

A year ago on Techdirt, I summed up the narratives around Facebook: (1) Amplifying the good/bad or a mirror for the ugly, (2) The algorithms’ fault vs. the people who build them or use them, (3) Fixing the machine vs. the underlying societal problems. I believe this discussion also applies to AI-generated art. It should be viewed through the same lens: good, bad, and ugly. Though this technology is delightful and beneficial, there are also negative ramifications of releasing image-manipulation tools and letting humanity play with them.

While DALL-E had a few restrictions, the new competitors took a “hands-off” approach, with no safeguards to prevent people from creating sexual or potentially violent and abusive content. Soon after, a subset of users generated deepfake-style images of nude celebrities. (Look surprised.) Google’s Dreambooth (which AI-generated avatar tools use) made making deepfakes even easier.

As part of my exploration of the new tools, I also tried Deviant Art’s DreamUp. Its “most recent creations” page displayed various images depicting naked teenage girls. It was disturbing and sickening. In one digital artwork of a teen girl in the snow, the artist commented: “This one is closer to what I was envisioning, apart from being naked. Why DreamUp? Clearly, I need to state ‘clothes’ in my prompt.” That says it all.

According to the new book Data Science in Context: Foundations, Challenges, Opportunities, machine learning advances have made deepfakes more realistic but also enhanced our ability to detect deepfakes, leading to a “cat-and-mouse game.”

In almost every form of technology, there are bad actors playing this cat-and-mouse game. Managing user-generated content online is a headache that social media companies know all too well. Elon Musk’s first two weeks at Twitter magnified that experience — “he courted chaos and found it.” Stability AI released an open-source tool with a belief in radical freedom, courted chaos, and found it in AI-generated porn and CSAM.

Text-to-video isn’t very realistic now, but with the pace at which AI models are developing, it will be in a few months. In a world of synthetic media, seeing will no longer be believing, and the basic unit of visual truth will no longer be credible. The authenticity of every video will be in question. Overall, it will become increasingly difficult to determine whether a piece of text, audio, or video is human-generated or not. It could have a profound impact on trust in online media. The danger is that with the new persuasive visuals, propaganda could be taken to a whole new level. Meanwhile, deepfake detectors are making progress. The arms race is on.

AI-generated art inspires creativity and enthusiasm. But as it approaches mass consumption, we can also see the dark side. A revolution of this magnitude can have many consequences, some of which can be downright terrifying. Guardrails are needed now.

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 17 May 2022 @ 12:15pm

A Guide For Tech Journalists: How To Be Bullshit Detectors And Hype Slayers (And Not The Opposite)

Tech journalism is evolving, including how it reports on and critiques tech companies. At the same time, tech journalists should still serve as bullshit detectors and hype slayers. The following tips are intended to help navigate the terrain.

As a general rule, beware of overconfident techies bragging about their innovation capabilities AND overconfident critics accusing that innovation of atrocities. If either is featured in your article, balance their quotes with evidence and diverse perspectives.

Minimize The Overly Positive Hype

“Silicon Valley entrepreneurs completely believe their own hype all the time,” said Kara Swisher in 2016. “Just because they say something’s going to grow #ToTheMoon, it’s not the case.” It’s the journalists’ job to say, “Well, that’s great, but here are some of the problems we need to look at.” When marketing buzz arises, contextualize the innovation and “explore why the claims might not be true or why the innovation might not live up to the claims.”

Despite years of Techlash, tech companies still release products/services without considering the unintended consequences. A “Poparazzi” app that only lets you take pictures of your friends? Great. It’s a “brilliant new social app” because it lets you “hype up your squad” instead of self-glorification. It’s also not so great, and you should ask: “Be your friends’ poparazzi” – what could possibly go wrong?

The same applies to regulators who release bills without considering the unintended consequences — in a quest to rein in Big Tech. To paraphrase Kara Swisher, “Just because they say something’s going to solve all of our problems, it’s not the case” (thus, bullshit). It’s the journalists’ job to avoid declaring the regulatory reckoning will End Big Tech Dominance when it most likely will not, and to examine new proposals based on past legislation’s ramifications. See, for example, Mike Masnick’s “what could possibly go wrong” piece on the EARN IT Act.

Minimize The Overly Negative Hype

When critics relentlessly focus on the tech industry’s faults, you should contextualize them within the broader context (and shouldn’t wait until paragraph 55). Take, for example, this article about the future of Twitter under Elon Musk, which claimed: “Zuckerberg sits at his celestial keyboard, and he can decide day by day, hour by hour, whether people are going to be more angry or less angry, whether publications are going to live or die. With anti-vax, we saw the same power of Mr. Zuckerberg can be applied to life and death.”

No factual explanation was provided for this premium bullshit, even though this is not how any of this works. In a similar vein, we can ask Prof. Shoshana Zuboff if she “sits at her celestial keyboard” and decides, day by day, hour by hour, whether people are going to be more angry at Zuckerberg or at the new villain, Musk. I mean, she used her keyboard to write that it’s in their power to trade “in human futures.”

If the loudest shouters are given the stage, you end up with tech companies that simply ignore all public criticism as uninformed cynicism. So, challenge conventional narratives: Are they oversimplified or overstated? Be deliberate about which issues need attention and highlight the experts who can offer compelling arguments for specific changes (Bridging-based ranking, for example).

Look For The Underlying Forces 

Reject binary thinking. “Both the optimist and pessimist views of tech miss the point,” suggested WIRED’s Gideon Lichfield. This “0-or-1” logic turns every issue divisive and tribal: “It’s generally framed as a judgment on the tech itself – ‘this tech is bad’ vs. ‘this tech is good.’” Explore the spaces in between, and the “underlying economic, social, and personal forces that actually determine what that tech will do.”

First, there are the fundamental structures underneath the surface. Discuss “The Machine” more than its output. Second, many “tech problems” are often “people problems,” rooted in social, political, economic, and cultural factors. 

The pressure to produce fast “hot takes” prioritizes what’s new. Take some time to prioritize what’s important. 

Stop With “The END of __ /__ Is Dead”; It’s Probably Not The Case

The media and social media encourage despairing voices. However, blanket statements obscure nuances and don’t allow for productive inquiry. Yes, tech stocks are plummeting, and a down-cycle is here. That doesn’t mean the economy is collapsing and we’re all doomed. It’s not the dot-com crash, and we can still see amazing results (e.g., revenue surges over 20% Y/Y in 1Q’22) despite supply chain shortages. There are a lot more valuable graphs in “No, America is not collapsing.” 

Also, Silicon Valley is not dead. The Bay and other tech hubs expanded their share of tech jobs during the pandemic. Even Clubhouse is not dead (at least, not yet). Say “farewell” only after it’s official (RIP, iPod). 

Also, Elon Musk buying Twitter is neither “the end of Twitter” nor “the end of democracy as we know it.” It’s another example of pure BS. The Musk-Twitter deal can fix current problems and create a slew of new ones. It’s too soon to know. Sometimes, when you don’t see how things will end up, you can write, next to the speculation, that you just don’t know. Because no one does. Your readers would appreciate your honesty over a eulogy of Twitter and all democracy. Or maybe they won’t. IDK (and that’s okay).

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 1 March 2022 @ 12:01pm

What Happens When A Russian Invasion Takes Place In The Social Smartphone Era

Several days into Russia’s attack on Ukraine, we are already witnessing astonishing stories play out online. Social media platforms, after years of Techlash, are once again in the center of a historic event, as it unfolds.

Different tech issues are still evolving, but for now, here are the key themes.

Information overload

The combination of smartphones, social media, and high-speed data links provides images that are almost certainly faster, more visual, and more voluminous than in any previous major military conflict. What is coming out of Ukraine is simply impossible to produce on such a scale without citizens and soldiers throughout the country having easy access to cellphones, the internet, and, by extension, social media apps.

Social media is fueling a new type of ‘fog of war’

The ability to follow an escalating war is faster and easier than ever. But social media are also vulnerable to rapid-fire disinformation. So, social media are being blamed for fueling a new type of ‘fog of war’, in which information and disinformation are continuously entangled with each other — clarifying and confusing in almost equal measure.

Once again, the Internet is being used as a weapon

Past conflicts in places like Myanmar, India, and the Philippines show that tech giants are often caught off-guard by state-sponsored disinformation crises due to language barriers and a lack of cultural expertise. Now, Kremlin-backed falsehoods are putting the companies’ content policies to the test. It puts social media platforms in a precarious position, focusing global attention on their ability to moderate content ranging from graphic on-the-ground reports about the conflict to misinformation and propaganda.

How can they moderate disinformation without distorting the historical record?

Tech platforms face a difficult question: “How do you mitigate online harms that make war worse for civilians while preserving evidence of human rights abuses and war crimes potentially?”

What about the end-to-end encrypted messaging apps?

Social media platforms have been on high alert for Russian disinformation that would violate their policies. But they have less control over private messaging, where some propaganda efforts have moved to avoid detection.

According to the “Russia’s Propaganda & Disinformation Ecosystem — 2022 Update & New Disclosures” post and image, the Russian media environment, from overt state-run media to covert intelligence-backed outlets, is built on an infrastructure of influencers, anonymous Telegram channels (which have become a very serious, very effective tool of the disinformation machine), and content creators with nebulous ties to the wider ecosystem.

The Russian government restricts access to online services

On Friday, Meta’s president of global affairs, Nick Clegg, reported that the company had declined to comply with the Russian government’s requests to “stop fact-checking and labeling of content posted on Facebook by four Russian state-owned media organizations.” “As a result, they have announced they will be restricting the use of our services,” tweeted Clegg. At the heart of this issue are ordinary Russians “using Meta’s apps to express themselves and organize for action.” As Eva Galperin (EFF) noted: “Facebook is where what remains of Russian civil society does its organizing. Cut off access to Facebook and you are cutting off independent journalism and anti-war protests.”

Then, on Saturday, Twitter, which had said it was pausing ads in Ukraine and Russia, said that its service was also being restricted for some people in Russia. We can only assume that it won’t be the last restriction we’ll see as Russia continues to splinter the open internet.

Collective action & debunking falsehood in real-time

It’s become increasingly difficult for Russia to publish believable propaganda. People on the internet are using open-source intelligence tools that have proliferated in recent years to debunk Russia’s claims in real-time. Satellites and cameras gather information every moment of the day, much of it available to the public. And eyewitnesses can speak directly to the public via social media. So, now you have communities of people on the internet geolocating videos and verifying videos coming out of conflict zones.

The ubiquity of high-quality maps in people’s pockets, coupled with social media where anyone can stream videos or photos of what’s happening around them, has given civilians insight into what is happening on the ground in a way that only governments had before. See, for example, two interactive maps that track Russian military movements: The Russian Military Forces and the Russia-Ukraine Monitor Map (screenshot from February 27).

But big tech has a lot of complicated choices to make. Google Maps, for example, was applauded as a tool for visualizing the military action, helping researchers track troops and civilians seeking shelter. On Sunday, though, Google blocked two features (live traffic overlay & live busyness) in an effort to help keep Ukrainians safe and after consultations with local officials. It’s a constant balancing act and there’s no easy solution.

Global protests, donations, and empathy

Social media platforms are giving Russians who disagree with the Kremlin a way to make their voice heard. Videos from Russian protests are going viral on Facebook, Twitter, Telegram and other platforms, generating tens of millions of views. Global protests are also being viewed and shared extensively online, like this protest in Rome, shared by an Italian Facebook group. Many organizations post their volunteers’ actions to support Ukrainians, like this Israeli humanitarian mission, rescuing Jewish refugees. Donations are being collected all over the web, and on Saturday, Ukraine’s official Twitter account posted requests for cryptocurrency donations (in bitcoin, ether and USDT). On Sunday, crypto donations to Ukraine reached $20 million.

According to Jon Steinberg, all of these actions “are reminders of why we turn to social media at times like this.” For all their countless faults — including their vulnerabilities to government propaganda and misinformation — tech’s largest platforms can amplify powerful acts of resistance. They can promote truth-tellers over lies. And “they can reinforce our common humanity at even the bleakest of times.” 

“The role of misinformation/disinformation feels minor compared to what we might have expected,” Casey Newton noted. While tech companies need to “stay on alert for viral garbage,” social media is currently seen “as a force multiplier for Ukraine and pro-democracy efforts.”

Déjà vu to the onset of the pandemic

It reminds me a lot of March 2020, when Ben Smith wrote approvingly that “Facebook, YouTube, and others can actually deliver on their old promise to democratize information and organize communities, and on their newer promise to drain the toxic information swamp.” Ina Fried added that if companies like Facebook and Google “are able to demonstrate they can be a force for good in a trying time, many inside the companies feel they could undo some of the Techlash’s ill will.” The article headline was: Tech’s moment to shine (or not).

On Feb 25, 2022, discussing the Russia-Ukraine conflict, Jon Stewart said social media “got to provide some measure of redemption for itself”: “There’s a part of me that truly hopes that this is where the social media algorithm will shine.”

All of the current online activities — taking advantage of the Social Smartphone Era — leave us with the hope that the good can prevail over the bad and the ugly, but also with the fear that it will not.

Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 11 February 2022 @ 12:13pm

Can We Compare Dot-Com Bubble To Today's Web3/Blockchain Craze?

Recently, I re-read various discussions about the “dot-com bubble.” Surprisingly, it sounded all too familiar. I realized there are many similarities to today’s techno-optimism and techno-pessimism around Web3 and Blockchain. We have people hyping up the future promises, while others express concerns about the bubble.

The Dot-Com Outspoken Optimism

In the mid-1990s, the dot-com boom was starting to gather steam. The key players in the tech ecosystem had blind faith in the inherent good of computers. Their vision of the future represented the broader Silicon Valley culture and the claim that the digital revolution “would bring an era of transformative abundance and prosperity.” Leading tech commentators celebrated the potential for advancing democracy and empowering people.

Most tech reporting pitted the creative force of technological innovation against established powers trying to tame its disruptive inevitability. Tech companies, in this storyline, represented the young and irreverent, gleefully smashing old traditions and hierarchies. The narrative was around “the mystique of the founders,” recalled Rowan Benecke. It was about “the brashness, the arrogance, but also the brilliance of these executives who were daring to take on established industries to find a better way.”

David Karpf examined “25 years of WIRED predictions” and looked back at how both Web 1.0 and Web 2.0 imagined a future that upended traditional economics: “We were all going to be millionaires, all going to be creators, all going to be collaborators.” However, “The bright future of abundance has, time and again, been waylaid by the present realities of earnings reports, venture investments, and shareholder capitalism. On its way to the many, the new wealth has consistently been diverted up to the few.”

The Dot-Com Outspoken Pessimism

During the dot-com boom, the theme around its predicted burst was actually prominent. “At the time, there were still people who said, ‘Silicon Valley is a bubble; this is all about to burst. None of these apps have a workable business model,’” said Casey Newton. “There was a lot of really negative coverage focused on ‘These businesses are going to collapse.’”

Kara Swisher shared that in the 1990s, a lot of the coverage was, “Look at this new cool thing.” But also, “the initial coverage was ‘this is a Ponzi scheme,’ or ‘this is not going to happen.’ When the Internet came, there was a huge amount of doubt about its efficacy. Way before it was doubt about the economics, it was doubt about whether anyone was going to use it.” Then, “it became clear that there was a lot of money to be made; the ‘gold rush’ mentality was on.”

At the end of 1999, this gold rush was mocked by San Francisco Magazine. “The Greed Issue” featured the headline “Made your Million Yet?” and stated that “Three local renegades have made it easy for all of us to hit it big trading online. Yeah…right.” Soon after, came the dot-com implosion.

“In 2000, the coverage became more critical,” explained Nick Wingfield. There was a sense that, “You do have to pay attention to profitability and to create sustainable businesses.” “There was this new economy, where you didn’t need to make profits, you just needed to get a product to market and to grow a market share and to grow eyeballs,” added Rowan Benecke. That was ultimately its downfall in the dot-com crash.

The Blockchain is Partying Like It’s 1999

While VCs are aggressively promoting Web3 – Crypto, NFTs, decentralized finance (DeFi) platforms, and a bunch of other Blockchain stuff – they are also getting more pushback. See, for example, the latest Marc Andreessen Twitter fight with Jack Dorsey, or listen to Box CEO Aaron Levie’s conversation with Alex Kantrowitz. The reason the debate is heated is, in part, due to the amount of money being poured into it.

Web3 Outspoken Optimism

Andreessen Horowitz, for example, has just launched a new $2.2 billion cryptocurrency-focused fund. “The size of this fund speaks to the size of the opportunity before us: crypto is not only the future of finance but, as with the internet in the early days, is poised to transform all aspects of our lives,” a16z’s cryptocurrency group announced in a blog post. “We’re going all-in on the talented, visionary founders who are determined to be part of crypto’s next chapter.”

The vision of Web3’s believers is incredibly optimistic: “Developers, investors and early adopters imagine a future in which the technologies that enable Bitcoin and Ethereum will break up the concentrated power today’s tech giants wield and usher in a golden age of individual empowerment and entrepreneurial freedom.” It will disrupt concentrations of power in banks, companies and billionaires, and deliver better ways for creators to profit from their work.

Web3 Outspoken Pessimism

Critics of the Web3 movement argue that its technology is hard to use and prone to failure. “Neither venture capital investment nor easy access to risky, highly inflated assets predicts lasting success and impact for a particular company or technology” (Tim O’Reilly).

Other critics attack “the amount of utopian bullshit” and call it a “dangerous get-rich-quick scam” (Matt Stolle) or even “worse than a Ponzi scheme” (Robert McCauley). “At its core, Web3 is a vapid marketing campaign that attempts to reframe the public’s negative associations of crypto assets into a false narrative about disruption of legacy tech company hegemony” (Stephen Diehl). “But you can’t stop a gold rush,” wrote Moxie Marlinspike. Sound familiar?

A “Big Bang of Decentralization” is NOT Coming

In his seminal “Protocols, Not Platforms,” Mike Masnick asserted that “if the token/cryptocurrency approach is shown to work as a method for supporting a successful protocol, it may even be more valuable to build these services as protocols, rather than as centralized, controlled platforms.” At the same time, he made it clear that even decentralized systems based on protocols will still likely end up with huge winners that control most of the market (as happened with email and Google, for example; I recommend reading the whole piece if you haven’t already).

Currently, Web3 enthusiasts are hyping that a “Big Bang of decentralization” is coming. However, as the crypto market evolves, it is “becoming more centralized, with insiders retaining a greater share of the token” (Scott Galloway). As more people enter Web3, the more likely centralized services will become dominant. The power shift is already underway. See How OpenSea took over the NFT trade.

However, Mike Masnick also emphasized that decentralization keeps the large players in check. The distributed nature incentivizes the winners to act in the best interest of their users.

Are the new winners of Web3 going to act in their users’ best interests? If you watch Dan Olson’s “Line Goes Up – The Problem With NFTs” you will probably answer, “NO.”

From “Peak of Inflated Expectations” to “Trough of Disillusionment”

In Gartner’s Hype Cycle, hyped technologies are expected to experience a “correction” in the form of a crash: a “peak of inflated expectations” is followed by a “trough of disillusionment.” In this stage, the technology can still be promoted and developed, but at a slower pace. With regard to Web3, we might be reaching the apex of the “inflated expectations.” Unfortunately, there will be a few big winners and a “long tail” of losers in the upcoming “disillusionment.”

Previous evolutions of the web had this “power law distribution”. Blogs, for example, were marketed as a megaphone for anyone with a keyboard. It was amazing to have access to distribution and an audience. But when you have more blogs than stars in the sky, only a fraction of them can rise to power. Accordingly, only a few of Web3’s new empowering initiatives will ultimately succeed. Then, “on its way to the many,” the question remains: “would the new wealth be diverted up to the few?” As per the history of the web, in a “winner-take-all” world, the next iteration won’t be different.

From a “Bubble” to a “Balloon”

Going through the dot-com description and then the current Web3 debate feels like déjà vu. Nonetheless, just as I argue that tech coverage should be neither Techlash (“tech is a threat”) nor Techlust (“tech is our savior”) but rather Tech Realism – I also argue the Web3 debate should be framed neither as a “bubble burst” nor a “golden age,” but rather somewhere in the middle.

A useful description of this middle was recently offered by M.G. Siegler, who said the tech bubble is not a bubble but a balloon. Following his line of thought, instead of a bubble, Web3 can be viewed as a “deflating balloons ecosystem”: The overhyped parts of Web3 might burst and affect the whole ecosystem, but most valuations and promises will just return closer to earth.

That’s where they should be in the first place.

Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 19 November 2021 @ 10:44am

TECHLASH 2.0: The Next-Gen TECHLASH Is Bigger, Stronger & Faster

The roll-out of the “Facebook Papers” on Monday, October 25, felt like drinking from a fire hose. Seventeen news organizations analyzed documents received from the Facebook whistleblower, Frances Haugen, and published numerous articles simultaneously. Most major news outlets have since published their own analyses on a daily basis. With the flood of reports still coming in, “Accountable Tech” launched a helpful aggregator: facebookpapers.com.

The volume and frequency of the revelations are well-planned. All the journalists were approached by a PR firm, Bryson Gillette, that, along with prominent Big Tech critics, is supporting Haugen behind the scenes. “The scale of the coordinated roll-out feels commensurate with the scale of the platform it is trying to hold accountable,” wrote Charlie Warzel (Galaxy Brain).

Until the “Facebook Papers,” comparisons of Big Tech to Big Tobacco didn’t catch on. In July 2020, Mark Zuckerberg of Facebook, Sundar Pichai of Google, Jeff Bezos of Amazon, and Tim Cook of Apple were called to testify before the House Judiciary Subcommittee on Antitrust. A New York Times headline claimed the four companies were preparing for their “Big Tobacco Moment.” A year later, this label is repeatedly applied to just one of those four companies, and it is, unsurprisingly, a social media company.

TECHLASH 1.0 started off with headlines like Dear Silicon Valley: America’s fallen out of love with you (2017). From that point, it became a competition of “who slams them harder?”, eventually reaching: Silicon Valley’s tax-avoiding, job-killing, soul-sucking machine (2018).

In the TECHLASH 2.0 era, the antagonism has reached new heights. The “poster child” for TECHLASH 2.0 – Facebook – became a deranging brain implant for our society or an authoritarian, hostile foreign power (2021). In this escalation, virtually no claim about the malevolence of Big Tech is too outlandish to generate considerable attention.

As for the tech companies, their crisis response strategies have evolved as well. As TECHLASH 2.0 launched daily attacks on Facebook, its leadership decided to cease its apology tours. Nick Clegg, *Facebook’s VP of Global Affairs, provided his regular “mitigate the bad and amplify the good” commentary in numerous interviews. Inside Facebook, he told employees to “listen and learn from criticism when it is fair, and push back strongly when it is not.”

Accordingly, the whole PR team transitioned into (what company insiders call) “wartime operation” and a full-blown battle over the narrative. Andy Stone combated journalists on Twitter. In one blog post, the WSJ articles were described as inaccurate and lacking context. A lengthy memo called the accusations “misleading” and some of the scrutiny “unfair.” Zuckerberg’s Facebook post argued that the heart of the accusations (that Facebook prioritizes profit over safety) is “just not true.”

On Twitter, Facebook’s VP of Communications referred to the embargo on the consortium of news organizations as an “orchestrated ‘gotcha’ campaign.” During Facebook’s third-quarter earnings call, Mark Zuckerberg reiterated that “what we are seeing is a coordinated effort to selectively use leaked documents to create a false picture about our company.”

Moreover, Facebook attacked the media for competing on publishing those false accusations: “This is beneath the Washington Post, which during the last five years competed ferociously with the New York Times over the number of corroborating sources its reporters could find for single anecdotes in deeply reported, intricate stories,” said a Facebook spokeswoman. “It sets a dangerous precedent to hang an entire story on a single source making a wide range of claims without any apparent corroboration.”

Facebook’s overall crisis response strategies revealed the rise of VADER:

  • Victimage – we’re a victim of the crisis
  • Attack the accuser – confronting the person/group claiming something is wrong
  • Denial – contradicting the accusations
  • Excuse – denying intent to do harm
  • Reminder – reminding audiences of the company’s past good works.

The media critics describe the current backlash as overblown, full of hysteria, and based on arguments that don’t stand up to the research. More aggressively, a Facebook employee told me: “If in this storyline, we are Vader, then the media is BORG – Bogus, Overreaching, Reckless, and Grossly exaggerated.” Leaving aside the crime of mixing “Star Wars” and “Star Trek,” we can draw a broader generalization:

Both the tech coverage and the companies’ crisis responses have evolved in the past few weeks. We moved from a peaceful time (pre-TECHLASH) to a Cold War (TECHLASH 1.0) and now “all Hell breaks loose” (TECHLASH 2.0).

“Product Journalism” still exists around new devices/services, but the recent “firestorm” teaches us a valuable lesson. The Next-Gen of TECHLASH is bigger, stronger and faster – just like the tech companies it’s fighting against.

* In another move from the playbook, Facebook was rebranded as Meta. Since Meta means Dead in Hebrew (to the world’s amusement), I will refer to Facebook as Facebook for the time being.

Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 1 October 2021 @ 03:15am

Facebook: Amplifying The Good Or The Bad? It's Getting Ugly

When the New York Times reported Facebook’s plan to improve its reputation, the fact that the initiative was called “Project Amplify” wasn’t a surprise. “Amplification” is at the core of the Facebook brand, and “amplify the good” is a central concept in its PR playbook.

Amplify the good

Mark Zuckerberg initiated this talking point in 2018. “I think that we have a clear responsibility to make sure that the good is amplified and to do everything we can to mitigate the bad,” he said after the Russian election meddling and the killings in Myanmar.

Then, other Facebook executives adopted this notion regardless of the issue at hand. The best example is Adam Mosseri, Head of Instagram.

In July 2019, addressing online bullying, Mosseri said: “Technology isn’t inherently good or bad in the first place …. And social media, as a type of technology, is often an amplifier. It’s on us to make sure we’re amplifying the good and not amplifying the bad.”

In January 2021, after the January 6 Capitol attack, Mosseri said: “Social media isn’t good or bad, like any technology, it just is. But social media is specifically a great amplifier. It can amplify good and bad. It’s our responsibility to make sure that we amplify more good and less bad.”

In September 2021, after a week of exposés about Facebook by the WSJ, The Facebook Files, Mosseri was assigned to defend the company once again. “When you connect people, whether it’s online or offline, good things can happen and bad things can happen,” he said in his opening statement. “I think that what is important is that the industry as a whole tries to understand both those positive and negative outcomes, and do all they can to magnify the positive and to identify and address the negative outcomes.”

Mosseri clearly uses the same messaging document, but Facebook’s PR template contains more talking points. Facebook also asserts that there have always been bad people or behaviors, and the current connectivity simply makes them more visible.

A mirror for the ugly

According to the “visibility” narrative, tech platforms simply reflect the beauty and ugliness in the world. Thus, social media is sometimes a cesspool because humanity is sometimes a cesspool.

Mark Zuckerberg addressed this issue several times, with the main message that it is just human nature. Nick Clegg, VP of Global Affairs and Communications, repeatedly shared the same mindset. “When society is divided and tensions run high, those divisions play out on social media. Platforms like Facebook hold up a mirror to society,” he wrote in 2020. “With more than 3 billion people using Facebook’s apps every month, everything that is good, bad, misogynist and ugly in our societies will find expression on our platform.”

“Social media broadly, and messaging apps and technology, are a reflection of humanity,” Adam Mosseri repeated. “We communicated offline, and all of a sudden, now we’re also communicating online. Because we’re communicating online, we can see some of the ugly things we missed before. Some of the great and wonderful things, too.”

This “mirror of society” statement has been criticized as intentionally simplistic, because the ability to shape, not merely reflect, people’s preferences and behavior is also how Facebook makes money. Therefore, despite Facebook’s recurring statements, it is accused not of reflecting the bad and ugly but of increasing them.

Amplify the bad

“These platforms aren’t simply pointing out the existence of these dark corners of humanity,” John Paczkowski of BuzzFeed News told me. “They are amplifying them and broadcasting them. That is different.”

After an accumulation of deadly events, such as the Christchurch shooting, Kara Swisher wrote about amplified hate and “murderous intent that leaps off the screen and into real life.” She argued that “While this kind of hate has indeed littered the annals of human history since its beginnings, technology has amplified it in a way that has been truly destructive.”

It is believed that bad behavior (e.g., disinformation) is induced by the way that tech platforms are designed to maximize engagement. Thus, Facebook’s victim-centric approach refuses to acknowledge that perhaps bad actors don’t misuse its platform but rather use it as intended (“machine for virality”).

Ev Williams, the co-founder of Blogger, Twitter, and Medium, said he now believes that he had failed to appreciate the risks of putting such powerful tools in users’ hands with minimal oversight. “One of the things we’ve seen in the past few years is that technology doesn’t just accelerate and amplify human behavior,” he wrote. “It creates feedback loops that can fundamentally change the nature of how people interact and societies move (in ways that probably none of us predicted).”

So, things turned toxic in ways that tech founders didn’t predict. Should they have foreseen them? According to Mark Zuckerberg, an era of tech optimism led to unintended consequences. “For the first decade, we really focused on all the good that connecting people brings … But it’s clear now that we didn’t do enough,” he said after the Cambridge Analytica scandal. He admitted they didn’t think through “how people could use these tools to do harm as well.” Several years after the Techlash coverage began, there’s a consensus that they needed to “do more” to purposefully deny bad actors the ability to abuse their tools.

One of the reasons it was (and still is) a challenging task is their scale. According to this theme, growth-at-all-cost “blinded” them, and they grew too big to be managed successfully at all. Due to their bigness, they are always in a game of cat-and-mouse with bad actors. “When you have hundreds of millions of users, it is impossible to keep track of all the ways they are using and abusing your systems,” Casey Newton, from the Platformer newsletter, explained in an interview. “They are always playing catch-up with their own messes.”

Due to the unprecedented scale at which Facebook operates, it is dependent on algorithms. Then, it claims that any perceived errors result from “algorithms that need tweaking” or “artificial intelligence that needs more training data.” But is it just an automation issue? It depends on who you ask.

The algorithms’ fault vs. the people who build them or use them

Critics say that machines are only as good as the rules built into them. “Google, Twitter, and Facebook have all regularly shifted the blame to algorithms, but companies write the algorithms, making them responsible for what they churn out.”

But platforms tend to avoid this responsibility. When ProPublica revealed that Facebook’s algorithms allowed advertisers to target users interested in “How to burn Jews” or “History of why Jews ruin the world,” Facebook’s response was: The anti-Semitic categories were created by an algorithm rather than by people.

At the same time, Facebook’s Nick Clegg argued that human agency should not be removed from the equation. In a post titled “You and the Algorithm: It Takes Two to Tango,” he criticized the dystopian depictions of their algorithms, in which “people are portrayed as powerless victims, robbed of their free will.” As if “Humans have become the playthings of manipulative algorithmic systems.”

“Consider, for example, the presence of bad and polarizing content on private messaging apps – iMessage, Signal, Telegram, WhatsApp – used by billions of people around the world. None of those apps deploy content or ranking algorithms. It’s just humans talking to humans without any machine getting in the way,” Clegg wrote. “In many respects, it would be easier to blame everything on algorithms, but there are deeper and more complex societal forces at play. We need to look at ourselves in the mirror and not wrap ourselves in the false comfort that we have simply been manipulated by machines all along.”

Fixing the machine vs. the underlying societal problems

Nonetheless, there are various attempts to fix the “broken machine,” and some potential fixes are discussed more often. One of the loudest calls is for tougher regulation – legislation should be passed to implement reforms. Yet, many remain pessimistic about the prospects for policy rules and oversight because regulators tend not to keep pace with tech developments. Also, there’s no silver-bullet solution, and most of the recent proposals are overly simplistic.

“Fixing Silicon Valley’s problems requires a scalpel, not an axe,” said Dylan Byers. However, tech platforms are faced with a new ecosystem of opposition, including Democrats and Republicans, antitrust theorists, privacy advocates, and European regulators. They all carry axes.

For instance, there are many new proposals to amend Section 230 of the Communications Decency Act. But, as Casey Newton noted, “it won’t fix our politics, or our broken media, or our online discourse, and it’s disingenuous for politicians to suggest that it would.”

When self-regulation is proposed, there is an inherent commercial conflict, since platforms are in the business of making money for their shareholders. Facebook only acted after problems escalated and caused real damage. For example, only after the mob violence in India (another problem that existed before WhatsApp, and may have been amplified by the app) did the company institute rules to limit WhatsApp’s “virality.” Other algorithms have been altered to prevent conspiracy theories and their groups from being highly recommended.

Restoring more human control requires different remedies: from decentralization projects, which seek to shift the ownership of personal data away from Big Tech and back toward users, to media literacy efforts, which seek to formally educate people of all ages about how tech systems function, as well as to encourage appropriate, healthy uses.

The proposed solutions could certainly be helpful, and they all should be pursued. Unfortunately, they are unlikely to be adequate. We will probably have an easier time fixing algorithms, or the design of our technology, than fixing society. Yet humanity has to deal with humanity’s problems.

Techdirt’s Mike Masnick recently addressed the underlying societal problems that need fixing. “What we see – what Facebook and other social media have exposed – is often the consequences of huge societal failings.” He mentioned various problems with education, social safety nets, healthcare (especially mental healthcare), income inequality and corruption. Masnick concluded we should be trying to come up with better solutions for those issues rather than “insisting that Facebook can make it all go away if only they had a better algorithm or better employees.”

We saw that with COVID-19 disinformation. After President Joe Biden blamed Facebook for “killing people,” and Facebook responded by saying they are “helping save lives,” I argued that this dichotomous debate sucks. Charlie Warzel called it (in his Galaxy Brain newsletter) “an unproductive, false binary of a conversation,” and he is absolutely right. Complex issues deserve far more nuance.

I can’t think of a more complex issue than tech platforms’ impact on society, in general, and Facebook’s impact in particular. However, we seem to be stuck between the storylines discussed above, of “amplifying the good vs. the bad.” It is as if you can only think favorably or negatively about “the machine,” and you must pick a side and adhere to its intensified narrative.

Keeping to a single narrative can escalate rhetoric and create an insufficient discussion, as evidenced by a recent Mother Jones article. The “Why Facebook won’t stop pushing propaganda” piece describes how a woman tried to become Montevallo’s first black mayor and lost. Montevallo is a very small town in Alabama (7,000 people), whose population is two-thirds white. Her loss was blamed on Facebook: The rampant misinformation and rumors about her affected the voting.

While we can’t know what got people to vote one way or another, we should consider that racism has been prevalent in places like Alabama for a long time. Facebook was the candidate’s primary campaign tool, highlighting the good things about her historic nomination. Then, racism was amplified in Facebook’s local groups. In the article, the fault was centered on algorithmic amplification, on Facebook’s “amplification of the bad.” Facebook’s argument that it merely “reflects the ugly” does not hold up here if the platform also makes that ugliness more robust. Yet the root cause in this case remains the same: racism. Facebook “doing better” and amending its algorithms will not be enough unless we also address the source of the problem. We can and should “do better” as well.

Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 18 August 2021 @ 12:04pm

There's a Growing Backlash Against Tech's Infamous Secrecy. Why Now?

“How Silicon Valley’s Tech Giants Use NDAs to Create a Culture of Silence,” stated a Business Insider piece on July 27, 2021. “To understand how Non-Disclosure Agreements (NDAs) have come to form the backbone of Silicon Valley’s culture of secrecy,” explained Matt Drange, “Insider reviewed 36 agreements shared by tech workers.” It showed how management mistakes and misconduct hide in the silence of those NDAs. “The secrecy is by design … leaving the true extent of wrongdoing in the workplace a mystery.”

“The use of NDAs, including in trivial or routine circumstances like visiting a tech office, is ironic in an industry that praises openness and transparency,” elaborated Shira Ovide in her New York Times newsletter. She called it an unnecessary “exercise of power.”

Yael Eisenstat, a former Facebook employee, criticized this power in a Washington Post OpEd on August 3, 2021. “A handful of technology companies have unprecedented – and unchecked – power over our daily interactions and lives. Their ability to silence employees exacerbates that problem, depriving the public and regulators of a means to analyze actions that affect our public health, our public square, and our democracy.”

This recent backlash against tech’s infamous secrecy is long overdue. It became possible as a result of a broader uprising against Big Tech, AKA the Techlash (tech-backlash). But for decades, it wasn’t the case. In the power relations between the tech giants and the media, journalists’ access to sources within those companies was tightly controlled, and “access has always been a bargaining chip.”

The Roots of Tech’s Secrecy Culture

In the mid-1990s, when the dot-com boom started to gather steam, Silicon Valley went from semiconductor fab plants in South San Jose to an industry of hot technologies. The tech coverage focused on the brilliance of the tech CEOs who were daring to take on established industries and old hierarchies. The consumers wanted a ‘backstage pass’ to those rock stars. It was also all the tech reporters wanted: access.

But the common experience for tech journalists was that if their coverage was critical or hard on the companies, their level of access would either go on hiatus or disappear altogether. Many of them complied with this tradeoff.

The most secretive company was always Apple. Tim Cook once said, “One of the great things about Apple is: We probably have more secrecy here than the CIA.”

By keeping the communication channels closed, the companies had leverage over those to whom they gave access. “If you want access to Apple, you can’t upset them,” a Gizmodo reporter described. “Apple and Google are masters of grooming reporters to do what they want and provide access only to folks they think will make them look good,” the freelance journalist Rose Eveleth explained.

The companies also increased their tendency to brief reporters “on background.” In this method, the tech PR teams and companies’ employees agree to talk, but the reporter cannot quote anything said in the conversation. Thus, the information cannot be transmitted to the readers. The experience can be infuriating, as Adrienne LaFrance from The Atlantic described: “I got through an entire interview with a product manager at Apple, only to be told, after the fact, that it was presumed to be ‘on background.’ ‘Everyone knows this is how we do things,’ a spokesman explained apologetically.”

Tech journalists and bloggers acknowledged getting used to “not having an oppositional journalistic culture.” Those who asked the tough questions had to walk a tightrope, since the combination of access and unfavorable coverage was quite rare.

The Intensifying Revolt During the Techlash

The turning point in tech journalism followed Donald Trump’s victory in November 2016. According to research about the emerging tech-backlash, the pivotal year was 2017, as a result of various tech scandals: foreign election meddling; disinformation wars; extremist content and hate speech; privacy violations; and allegations of an anti-diversity, sexual harassment, and discrimination culture. The accumulation of those issues created a profound sense of concern around content moderation, algorithmic accountability, and monopoly power. The companies’ secrecy became a means of evading responsibility.

“Corporations such as Apple, Google, and Uber have become infamous for their secrecy and unwillingness to comment on most matters on-the-record. Tech reporters, myself very much included, have not done enough to push them to do otherwise,” claimed Brian Merchant from Vice. He called on his fellow journalists to push back against these ossified norms: “I am no longer going to listen to a public relations representative try to change my mind ‘on background’ with unquotable statements attributable to no one. No reporter should, not when the stakes are as high as they are.”

His article, from July 2019, generated a ‘call to arms’ among leading journalists unwilling to perpetuate those norms any longer. It reflected a more profound change in the power dynamics between Big Tech and the journalists, who had had enough. Later on, the Covid-19 pandemic acted as an accelerator, and the Tech vs. Journalism battle intensified into a full-blown “cold war.” The stakes were even higher than before.

In June 2021, a Mother Jones piece took the allegations against the PR tactics to the next level. It focused on Amazon and described how it “bullies, manipulates and lies to reporters.” Amazon’s press team was accused of engaging in deceitful behavior. The tech reporters also pointed out that “Amazon has recently begun providing more access before a story is published,” but complained it is done “in limited and often unhelpful or unrelated ways, by offering things like off-the-record or background interviews with the press team or approved employees.”

It is often the case that the more important stories come from “un-approved” employees. This is how Casey Newton revealed Facebook’s content moderators’ working conditions in his “The Trauma Floor” and “Bodies in Seats” exposés. The workers openly described how they developed severe anxiety while still in training and struggled with trauma symptoms long after they left.

Other tech employees, who experienced a reckoning around their companies’ role in society, also started approaching reporters with allegations of corporate misdeeds. Some of them didn’t speak anonymously but instead put their names on it, agreeing to full exposure. The fact that whistleblowers experienced legal risks, retaliation, and emotional scars did not stop additional workers from joining their colleagues. Breaking their NDAs, or handing them to a reporter, is part of this growing trend of employee activism.

“You can’t have it both ways,” Scott Thurm from Wired explained in an interview. “If you don’t give us access, then, of course, we are going to rely on other people to tell the story.” The current story is not the one the tech companies want the media to tell. However, in the Techlash, it is precisely what the media is doing.

Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication

More posts from nirit.weiss-blatt >>