Training your brain
I am a college educator who studies social media systems and related communication scholarship, and I teach several classes on the subject. The critique here on the basis of free speech and persuasion is a legitimate argument. I, too, do not believe that the majority of consumers are so gullible (by now) that they simply inject every message into their brains as if it were true. They don't, save for, perhaps, TruthSocial members (I speculate). Congress is complaining about the wrong thing, IMO.
But here is where I part ways on the question of caution. TikTok, like YouTube Shorts and Facebook's short-video equivalent, is "attention crack" that one simply cannot put down easily once immersed. The pattern of engagement with these media affects the brain much like practicing mindfulness, but to the opposite effect. Here is what the most recent research indicates:
1 - Attention span and focus: Studies indicate that frequent consumption of short-form video content may contribute to shorter attention spans and more fragmented focus. The rapid pace and constant novelty of these videos may condition the brain to expect constant stimulation and to have difficulty maintaining concentration on longer-form content. Ask any school teacher.
2 - Dopamine response: The recommendation algorithms on these platforms are designed to keep users engaged by providing an endless scroll of content tailored to their interests. This can trigger dopamine release in the brain's reward pathways, potentially leading to addiction-like behaviors and making it harder to disengage.
3 - Cognitive overload: The fast-paced, visually stimulating nature of short-form content may contribute to cognitive overload, making it more difficult for the brain to process information and retain memories effectively.
4 - Reduced reflection and deep thinking: The bite-sized, often superficial nature of short-form content may discourage the kind of deep, reflective thinking associated with longer-form media like books or in-depth articles.
None of these issues is a justification for banning TikTok et al. Still, I view any opportunity to reduce the potential negative impact of these systems on teens and pre-teens as a protective factor for the well-being of minors - the same as if we were talking about vaping. If Congress succeeds, it will be a nominal strike against free speech, which would be bad. But let's not pretend these systems are benign, either.
Detectability
Having been an instructor for fully online college courses for the past 12 years, I am certain that ChatGPT has already infringed on the integrity of my classes - I can tell. Students just don't write with the kind of clarity that comes out of ChatGPT - and I use ChatGPT a lot. It's so obvious!
The real problem will come when students figure out how to make their ChatGPT output sound like their own writing. Until then, I have candid discussions with my students about whether they are using ChatGPT, and I draw the lines where it is and is not permissible. So far, it's been collegial, but I admit that it is a hopeless effort.
I agree with the other poster here who said that, when asked, students do not admit to cheating because they don't think using ChatGPT is cheating.
As I tell my colleagues, the challenge for us college educators is to admit when we are beaten by this thing, change the forms of assessment we use, and let go of the parts of the instructional narrative we cannot control.
Cue the moral panic over Replika in three... two... one...
If you try the same swapparoo of "addictive" with "entertaining" in virtual companion apps, a funny thing happens.
The grim reality is that humans are emotionally vulnerable, but too many blame external forces for their own pathological responses to that vulnerability.
From The New Yorker article: https://www.newyorker.com/news/the-political-scene/what-the-twitter-files-reveal-about-free-speech-and-social-media
"The most eyebrow-raising revelations in the Twitter Files, documented mostly by Matt Taibbi and Lee Fang, concern the extent to which the F.B.I. and the Pentagon were interested in controlling what was seen on the platform. According to Taibbi’s reporting, there were more than a hundred and fifty e-mails between Roth and the F.B.I. from January, 2020, to November, 2022. Some of these seem to have been more or less normal investigative queries, but many were requests that the company take action to restrict accounts that the F.B.I. had flagged for supplying misinformation. As Taibbi pointed out, some of these requests were absurd—one concerned a parody account of the pro wrestler the Undertaker, which primarily tweeted about soiling himself. (It was banned the same day.) The F.B.I. also flagged cases where the “misinformation” was obviously a joke: “I want to remind republicans to vote tomorrow, Wednesday November 9,” @fromma, the subject of an F.B.I. request to Twitter, tweeted. Down a different archival tunnel, Fang discovered that Twitter had long been coöperating with the Pentagon to help the U.S. government amplify accounts (often in Arabic or Russian) with friendly, and sometimes manufactured, perspectives. You don’t have to be an especially cynical reader of American history to realize that, if there is a new tool that allows for “centralized content moderation” of political information, the F.B.I. is going to take an interest in it. Still, in this context, “centralized content moderation” sounds downright Orwellian."
Luxury Surveillance
Looks like the Luxury Surveillance constituency (sarcasm) has had an asymmetric influence on the use of this stuff in the public sphere. Chris Gilliard writes about this:
"Luxury Surveillance - People pay a premium for tracking technologies that get imposed unwillingly on others"
https://reallifemag.com/luxury-surveillance/
Hello "Hyman," or whoever they say you are.
On behalf of the caterwauling lefties (haha), I'd like to state that if you do not have direct experience knowing and living with a person with actual gender dysphoria, keep your snark to yourself. The sh*t is real. I also dislike the whole pronoun argument, so don't think all lefties are SJWs on that topic. I couldn't care less about pronouns.
As for NPR being "far-left," do you even know what that means? The far left calls for the abolition of private property, the takeover of the means of production by the state, and a state-controlled planned economy where prices are not subject to market forces. Where in the world do you get the notion that NPR, or anyone who listens to it, has any interest in this??
As a left-leaning voter, I am staunchly capitalist and pro-business, but mindful of the needs of my community through taxation and the development of the common good (education, public space, health care, etc.). I have no problem with guns, with the exception of the maniacal obsession with assault weapons. And y'all call me a Communist! Pffft! Will you please cut the crap!?
Now then...
The issue here is NOT as simplistic as whether the government is silencing lies. The issue is the misuse of the public airwaves to knowingly mislead an audience into believing information that is contrary to democracy. That is a legitimate basis for rescinding a license to broadcast. There are plenty of ways a broadcaster can broadcast lies - but who cares whether anyone says there are Martians living among us, or that the Da Vinci Code means anything consequential? That's not what we are talking about.
When a broadcaster knowingly and deliberately promotes misinformation over several years in a way that undermines the perceived integrity of the most important feature of our democracy, that is a threat to democracy itself. I consider that a legitimate case for arguing against the privilege to broadcast under FCC rules.
... of course I didn't bother to read the other comments first. Can't delete this post either.
Twitter links are dead
Just an FYI that Twitter no longer allows public non-member access to tweets, so linking out to them is a bummer for those of us who have cut the bird. I'm not sure what that means for you as an author, but I thought you should know.
"Bloated woke jobs"? Such as...
"...The cuts hit product managers, data scientists and engineers who worked on machine learning and site reliability, which helps keep Twitter’s various features online. The monetization infrastructure team, which maintains the services through which Twitter makes money, was reduced to fewer than eight people from 30, a person familiar with the matter said."
https://www.nytimes.com/2023/02/26/technology/twitter-layoffs.html
"Twitter employees from departments including ethical AI, marketing and communication, search, public policy, wellness and other teams had tweeted about having been let go. Members of the curation team, which help elevate reliable information on the platform, including about elections, were also laid off, according to employee posts....Other members of Twitter’s human rights team had been laid off."
https://www.cnn.com/2022/11/03/tech/twitter-layoffs/index.html
"Recession accelerates"?? Hardly, given that the job numbers are stable and rising, unemployment is still down, inflation is in decline, etc.
Idiocracy. This legislation has electrolytes.
Thank you for your sanctimony, but I have read all of it. My positions are rooted in a decade of research related to several courses I teach on the subject.
You are correct that people create moral panics. Except this is not (just) a moral panic. It is a cultural shift, driven by business models, that causes psychological effects influencing the perception of reality. It creates an endemic distortion field of misinformation on a scale of hundreds of millions.
I do not disagree with the evidence presented by the author. It is not, however, the complete picture.
Sorry, no. This is different.
Yeah, but this time it's different. J/K, but in all seriousness, it actually is different because, unlike all of the other examples you list...
- social media is global, with billions of participants.
- it has become a substantial part of the digital economy and the stock market.
- it is a consideration in virtually every business model for marketing.
- it is leveraged as a career platform.
- the pressure to participate in it is unrelenting because everyone else is on it.
- a person's life and career can literally be destroyed by it.
- it was instrumental in getting Donald Trump elected.
- it is a focal point for superpower governments fomenting global social and political disruption.
- it gathers a seemingly infinite amount of data that profiles nearly every human being not living in a cave.
- it is either a direct or indirect means of government surveillance.
- our government is woefully incapable of understanding what it does, how it does it, and how it matters in issues of public interest.
- it has become more relevant than actual journalism to the extent that journalism is frequently just an aggregation of social media content.
- algorithmically calculated content subjectively determines what a person sees, which causes a distortion in the objective perception of reality.
This is not just a moral panic in the conventional sense. (Yes, too many people are overreacting to it.) But it is also a leviathan controlled by an elite few extraordinarily powerful men whom millions of people worship as geniuses. That, in itself, is enough to panic about.
This is all good, and I don't argue with the findings here. But here is where I depart from the author's commentary.
The thing that differentiates social media from TV (and other media) is its level of penetration. There is simply no historical comparison for the effects of social media at the same level of ubiquity, given that it functions as a parasite on a society saturated with mobile communication. The intensity of social media's presence, and its relevance to minors' lives, has no precedent. Thus, even if it "might" be bad for some kids, that is saying a lot more than if TV or comic books "might" be bad for some kids.
Second, and apologies in advance for getting all McLuhan-y on unsuspecting tech blog readers, the argument about the effects of social media versus other similarly satanic media (video games, TV, Tamagotchis, etc.) is not so much about it being "bad" in some objective sense, as if social media causes MORE psoriasis or scurvy. Rather, it is about how sensory perception is reorganized, given that social media/mobile phone technology is a uniquely designed extension of some native form of cognition (algorithmically optimized for a business model). It carries with it its own psychological grammar.
Thus: total immersion in a form of communication that replaces in-person non-verbal sensitivity with mechanized "like" signals in a semantically slim context changes something. Whether anyone can quantify that as bad is up to the scientists to determine, but it's changing something, profoundly.
My Spidey sense tells me that several generations of social media users will have gained something valuable, as the research above states. But they will also have lost something, whether through atrophy or from never having developed it in the first place. Thus, most of us don't know how to milk a cow because we can just buy milk in a bottle down the street. No particular loss there. But humanistic interpersonal skills are another matter. That is what I am most concerned about.
Stone: Those left behind will become instructional designers, just like everyone else who hates teaching or got booted out of the video post-production industry.
Quality control
I'm too lazy, and it's too late, to read all the comments here before I post this, but I'll add it hoping someone else hasn't already done it better.
I sense that the WGA is not so much worried about making their jobs easier - they are more concerned that people who do not have their talent will make them irrelevant. It will no longer be a matter of who can write well - it will be a matter of who can write the best prompt, which is a specialized skill that current writers do not have (yet).
I offer this because I spent my main career as a video editor in the TV advertising business in NYC between 1990 and 2007, a period during which two major changes occurred: (1) computers got more powerful and cheaper to the point where anyone could afford them and do studio-quality work at home, and (2) the invention of streaming media as a zero-cost entry point for marketing meant the end of traditional budgets for TV commercials. Two lesser factors were the acceptance of low-quality UGC on YouTube, which made professionals irrelevant, and the 9/11 attacks, which got all the experienced producers fired (there was no work for a year).
tl;dr: high-paid professional video editors in fancy NYC boutiques became irrelevant. There was less TV commercial work than ever before, yet, paradoxically, there was never more video editing going on in human history - we just weren't the ones doing it.
Thus, with ChatGPT et al., the WGA realizes that they, too, will become irrelevant, and yet there will be more "writing" going on than ever before - they just won't be the ones doing it. I do agree with their demand to forbid the use of their prior work as ML training data.
I wonder what Pigmeat Markham would have to say about this? (Some of y'all are too young for that one.) https://youtu.be/Ohq9h6QIcHY
Your comment is so spot-on and more succinct than mine. I am humbled.
A floor wax and a dessert topping...
Since I work in higher education as both an instructional designer (think: an architect who builds courses according to standards for how humans learn) and as adjunct faculty, I think I can offer some nuance to this "compelled" thing.
First, as a college student in the '80s, '90s, and '10s, I was "compelled" to align with a few professors' points of view about certain things, but only as a matter of getting through the course. Unpacked: the basis of assessment was subject to the professor's POV. In one extreme example, in a Politics & Sex course, deviation from the orthodoxy of the professor's POV got you a lower grade - period. That's an unfortunate by-product of tenure and the general tendency of the Academy to attract know-it-all academics. But as stated, these geeks are the exceptions, not the rule. And even so, I would never claim that I was harmed by it.
Second, a college worth its salt has academic standards for scholarly discourse that are based on the quality of one's argument regardless of one's position - not on whether one aligns with a professor. Colleges that allow orthodoxy to trump argument deserve all the Conservative grievances they get (as well as mine).
This legislation will fail simply because no one will be able to define when a student has "adopted" such beliefs. (When they wrote the paper? When they voted for AOC? Had an abortion? When they joined ANTIFA?) At what point does the "harm" occur?
Sounds pretty snowflaky to me.
Make my funk the P-funk!
Oh, this is too good!! The desired effect is what you get when you improve your interplanetary funksmanship. Dig! Dig!
The data
First, this is the most sharply concise confrontation with a CEO I have ever seen on this issue, and I appreciate the author for having curated it.
What upsets me here, however, is Best's cynical claim that efforts to censor hate speech are pointless:
"And my read is that that hasn’t actually worked. That hasn’t been a success. It hasn’t caused those ideas not to exist. It hasn’t built trust. It hasn’t ended polarization. It hasn’t done any of those things. And I don’t think that taking the approach that the legacy platforms have taken and expecting it to have different outcomes is obviously the right answer the way that you seem to be presenting it to be."
First, he is missing the point completely - it's not about "ending" bad ideology - it's about preventing it from metastasizing. There will always be Nazis. The least we can do is limit their reach through some kind of corporate ethics.
Second, he cannot make any claim that "censorship doesn't work" unless there is some legitimate study that measures some degree of influence (don't ask me how) under conditions "X" versus conditions "Y." He's just being lazy.