JT is on target.
Don't let the Elites fool you. A Ph.D. doesn't imply that you
- Know anything useful.
- Have any idea about what you don't know.
- Have any common sense.
- Have any respect for those without Ph.D.s.
- Can refrain from trying to force others to live as you see fit.
- Are not a threat to a free society.
--------------------------------------------------------
In one of the most reckless and chilling attacks on free speech, the former chair of the Federal Election Commission (FEC) and Berkeley lecturer Ann Ravel is pushing for a federal crackdown on “disinformation” on the Internet — a term that she conspicuously fails to concretely define. Ravel is pushing a proposal that she laid out in a paper co-authored with Abby K. Wood, an associate professor at the University of Southern California, and Irina Dykhne, a student at USC Gould School of Law. To combat “fake news,” Ravel and her co-authors would undermine the use of the Internet as a forum for free speech. The regulation would include the targeting of people who share stories deemed fake or disinformation by government regulators. The irony is that such figures are decrying Russian interference with our system and responding by curtailing free speech — something Vladimir Putin would certainly applaud.
In addition to new rules on paid ads, Ravel wants fake news to be regulated under her proposal, titled “Fool Me Once: The Case for Government Regulation of ‘Fake News.’” If adopted, a “social media user” would be flagged for sharing anything deemed false by regulators:
“after a social media user clicks ‘share’ on a disputed item (if the platforms do not remove them and only label them as disputed), government can require that the user be reminded of the definition of libel against a public figure. Libel of public figures requires ‘actual malice,’ defined as knowledge of falsity or reckless disregard for the truth. Sharing an item that has been flagged as untrue might trigger liability under libel laws.”
Without clearly defining “disinformation,” Ravel would give bureaucrats the power to label postings as false and harass those who share such information. Of course, this would also involve massive government databanks collecting ads and discussions.
The authors of the proposal see greater government regulation as the solution to what they describe as “informational deficits” in the largely free exchanges of the Internet. There is a fair dose of doublespeak in the article. Rather than refer to the new regulation as guaranteeing greater government control, the authors insist that “government regulations . . . improve transparency.” Rather than talk of government controls over speech, the authors talk about the government “nudging” otherwise ignorant readers and commentators. Here is the worrisome section:
Government regulations to help voters avoid spreading disinformation
Educate social media users. Social media users can unintentionally spread disinformation when they interact with it in their newsfeeds. Depending on their security settings, their entire online social network can see items that they interact with (by “liking” or commenting), even if they are expressing their opposition to the content. Social media users should not interact with disinformation in their feeds at all (aside from flagging it for review by third party fact checkers). Government should require platforms to regularly remind social media users about not interacting with disinformation.
Similarly, after a social media user clicks “share” on a disputed item (if the platforms do not remove them and only label them as disputed), government can require that the user be reminded of the definition of libel against a public figure. Libel of public figures requires “actual malice”, defined as knowledge of falsity or reckless disregard for the truth. Sharing an item that has been flagged as untrue might trigger liability under libel laws.
Nudge social media users to not view disputed content. Lawmakers should require platforms to provide an opt-in (or, more weakly, opt-out) system for viewing disputed content and periodically remind users of their options. We think the courts should uphold this as a constitutional regulation of political speech, but we acknowledge that it is a closer question than the more straightforward disclosure regulations above. The most analogous cases are to commercial speech cases (AdChoices and Do Not Call Registry, which was upheld). Commercial speech receives less protection than political speech.
I have been writing about the threat to free speech coming increasingly from the left, including Democratic politicians. The implications of such controls are being dismissed in the pursuit of new specters of “fake news” or “microaggressions” or “disinformation.” The result has been a comprehensive assault on free speech from college campuses to the Internet to social media. What is particularly worrisome is the targeting of the Internet, which remains the single greatest advancement of free speech of our generation. Not surprisingly, governments see the Internet as a threat while others seek to control its message.
What Ravel and her co-authors are suggesting is a need to label certain views as “false” while giving not-so-subtle threats of legal action to those who share such information. Once the non-threatening language of “nudges” and “transparency” is stripped away, the proposal’s true meaning is laid bare as a potentially radical change in government regulation of free speech and association.