✊ Martin Gurri on the commonalities of the global wave of uprisings, from the Yellow Vest protests and the revolt in Hong Kong to Chile, Sudan, and Iraq: Revolt as Consumer Backlash.
The message of revolt of 2019, mediated by random factors, evidently has met a profound need of the network. In more concrete terms: when the whole world is watching, a local demand for political change can start to go global in an instant. At a certain point, the process becomes self-sustaining and self-reinforcing: that threshold may have been crossed in November, when at least eight significant street uprisings were rumbling along concurrently (Bolivia, Catalonia, Chile, Colombia, Hong Kong, Iraq, Iran, and Lebanon – with France, the Netherlands, Nicaragua, and Venezuela simmering in the background). Whether local circumstances are democratic or dictatorial, prosperous or impoverished, the fashion for revolt is felt to be almost mandatory. The public is now competing with itself in the rush to say No.
💩 Casey Newton on Google's content moderators: The Terror Queue. I repeat myself: content moderator is probably the most important job in the online industry right now; they are almost literally the plumbers of the information sanitation system, clearing our feeds of dirt and excrement. For this they are mercilessly underpaid and struggle with PTSD in droves.
– Google created a dedicated queue for videos believed to contain violent extremism and staffed it with dozens of low-paid immigrants from the Middle East. Moderators make $18.50 an hour — about $37,000 a year — and have not received a raise in two years.
– Austin moderators are required to view five hours of gruesome video per day. This comes despite the fact that YouTube CEO Susan Wojcicki promised to reduce their burden to four hours per day last year.
– Workers on the site describe feeling anxiety, depression, night terrors, and other severe mental health consequences after doing the job for as little as six months.
🙃 Kyle Chayka on Vox about what I call The Big Flat Now around here: a monoculture built out of particular micro-tastes, which in turn are algorithmically piled up into one monolithic uniform mush. Spotify fuels niche tastes with endless playlists delivering the same sound for hours, and Netflix offers shows for every taste for hours of binge-watching the same formats over and over. The result is a parallel movement of fragmentation and homogenization.
Rather than the monoculture dictated by singular auteurs or industry gatekeepers, we are moving toward a monoculture of the algorithm. Recommendation algorithms — on Netflix, TikTok, YouTube, or Spotify — are responsible for much of how we move through the range of on-demand streaming media, since there’s too much content for any one user to parse on their own. We can make decisions, but they are largely confined to the range of options presented to us. The homepage of Netflix, for example, offers only a window into the platform’s available content, often failing to recommend what we actually want. We can also opt out of decision-making altogether and succumb to autoplay. […]
We thought the long tail of the internet would bring diversity; instead we got sameness and the perpetuation of the oldest biases, like gender discrimination. The best indicator of what gets recommended is what’s already popular, according to the investor Matthew Ball, a former head of strategy at Amazon Studios. “Netflix isn’t really trying to pick individual items from obscurity and get you to watch it,” Ball said. “The feedback mechanisms are reiterating a certain homogeneity of consumption.”
Instead of discrete, brand-name cultural artifacts, monoculture is now culture that appears increasingly similar to itself wherever you find it. It exists in the global morass of Marvel movies designed to sell equally well in China and the United States; the style of K-Pop, in music and performance, spreading outside of Korea; or the profusion of recognizably minimalist indie cafes from Australia to everywhere else. These are all forms of monoculture that don’t rely on an enforced, top-down sameness, but create sameness from the bottom up. Maybe the post-internet monoculture is now made up of what is aesthetically recognizable even if it is not familiar — we quickly feel we understand it even if we don’t know the name of the specific actor, musician, show, or director.
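Ball's point about "feedback mechanisms reiterating a certain homogeneity of consumption" can be sketched as a toy Pólya-urn simulation: if each recommendation is drawn in proportion to past consumption, early popularity compounds into concentration. This is a deliberately minimal model with made-up parameters, not a description of how any real platform's recommender works.

```python
import random

random.seed(42)

# Toy popularity-feedback loop: each round, the "recommended" item is
# sampled in proportion to how often it has been consumed so far, so
# early hits attract more recommendations, which attract more views.
# Item count and round count are illustrative assumptions.
NUM_ITEMS = 50
NUM_ROUNDS = 10_000

counts = [1] * NUM_ITEMS  # start every item with one pseudo-view

for _ in range(NUM_ROUNDS):
    # recommend in proportion to existing popularity (rich get richer)
    item = random.choices(range(NUM_ITEMS), weights=counts, k=1)[0]
    counts[item] += 1  # the user watches the recommendation

# measure concentration: what share of all views go to the top 5 items?
top5_share = sum(sorted(counts, reverse=True)[:5]) / sum(counts)
print(f"top 5 of {NUM_ITEMS} items capture {top5_share:.0%} of all views")
```

Under a uniform random recommender, the top 5 of 50 items would hover around 10% of views; with popularity-weighted sampling, the share typically ends up far higher, even though every item started out identical.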
✔️ Übermedien on the dilemmas of fact-checking on Facebook, part 1: Faktencheck mit Haken: Das Facebook-Dilemma von Correctiv.
✔️ Übermedien on the dilemmas of fact-checking on Facebook, part 2: Wieso Gretas Bahn-Foto von Facebook als Fake markiert wurde.
🤝 Old wisdom newly confirmed: a study has identified a brain region that is activated by confirmation bias. For debates, the researchers suggest first finding common ground with one's discussion partner.
A Nature Neuroscience study looked at participants’ brains as they made choices while considering a partner’s decisions. The researchers found that a small region toward the front of the brain called the posterior medial prefrontal cortex, associated with judging performance and mistakes, was more active during the task. Specifically, it was active when individuals were processing someone’s agreement with their opinion, but not when they were dealing with an opposing view.
🤖 Symantec has analyzed an archive of 10 million tweets from 3,836 Twitter accounts attributed to the Russian "Internet Research Agency" ("Putin's troll army"): Twitterbots: Anatomy of a Propaganda Campaign. The analysis offers few new insights: the campaigns were planned well in advance, with accounts created on average six months before their first tweet; a handful of manually operated accounts posing as authentic users received automated retweets and likes from swarms of bots; and the linked target content, often from fake-news outlets, contained both conservative and progressive talking points.
🤜 Inside the hate factory: how Facebook fuels far-right profit: Guardian investigation reveals a covert plot to control some of Facebook’s largest far-right pages and harvest Islamophobic hate for profit
🙃 Facebook May Face Another Fake News Crisis in 2020. And in 2021. And 2022.: Facebook’s fight against misinformation, like its struggle with content moderation, is one that it is unlikely to truly win without fundamental changes to the platform.
☝️ Steven Shapin in a (too) long piece on the epistemological crisis of truth and the loss of trust in academic institutions, which in his view arose not from too little science in public discourse but from a surplus of pseudo-scientific method and an ignorance of socially generated knowledge (that is: "Whom can I trust?", "Where and from whom do I get trustworthy information?", and so on).
The problem we confront is better described not as too little science in public culture but as too much. Given the absurdities and errors abroad in the land, it may seem crazy to say this, yet the point can be pressed. Consider, again, the climate change deniers, the anti-vaxxers, and the creationists. They’re wrong-headed of course, but, like the Moon-landing deniers and the Flat-Earthers, their rejection of Right Thinking is not delivered as anti-science. Instead, it comes garnished with the supposed facts, theories, approved methods, and postures of objectivity and disinterestedness associated with genuine science. Wrong-headedness often advertises its embrace of officially cherished scientific values — skepticism, disinterestedness, universalism, the distinction between secure facts and provisional theories — and frequently does so more vigorously than the science rejected. The deniers’ notion of science sometimes seems, so to speak, hyperscientific, more royalist than the king. And, if you want examples of hyperscientific tendencies in so-called pseudoscience, there are now sensitive studies of the biblical astronomy craze instigated in the 1950s by the psychiatrist Immanuel Velikovsky, or you can consider the meticulous methodological attentiveness of parapsychology, or you can reflect on why it might be that students of the human sciences are deluged with lessons on The Scientific Method while chemists and geologists are typically content with mastering just the various methods of their specialties. The Truth-Deniers find scientific facts and theories shamefully ignored by the elites; they embrace conceptions of a coherent, stable, and effective Scientific Method that the elites are said to violate; they insist on the necessity of radical scientific skepticism, universal replication, and openness to alternative views that the elites contravene. On those criteria, who’s really anti-scientific? Who are the real Truth-Deniers?
If you follow the claims of the Truth-Deniers, you can’t but recognize this surfeit of science — so many facts and theories unknown at elite universities, such an abundance of scientific papers and institutions, such a cacophonous chorus of scientific voices. This is a world in which the democratic “essence” of science is taken very seriously and scientific aristocracy and elitism are condemned. Why should such institutions as Oxford, Harvard, and their like monopolize scientific Truth? It’s hard to fault the principle of scientific democracy, but, as a normal practice, it’s faulted all the time.
Today such companies as Apple, Facebook, Google, Microsoft, and Twitter play an increasingly important role in how users form and express opinions, encounter information, debate, disagree, mobilize, and maintain their privacy. What are the human rights implications of an online domain managed by privately owned platforms? According to the Guiding Principles on Business and Human Rights, adopted by the UN Human Rights Council in 2011, businesses have a responsibility to respect human rights and to carry out human rights due diligence. But this goal is dependent on the willingness of states to encode such norms into business regulations and of companies to comply. In this volume, contributors from across law and internet and media studies examine the state of human rights in today’s platform society.
The contributors consider the “datafication” of society, including the economic model of data extraction and the conceptualization of privacy. They examine online advertising, content moderation, corporate storytelling around human rights, and other platform practices. Finally, they discuss the relationship between human rights law and private actors, addressing such issues as private companies’ human rights responsibilities and content regulation.
👾 The game Metal Gear Solid 2 predicted our brave new internet world pretty accurately: We are living in Hideo Kojima’s dystopian nightmare. Can he save us?
Metal Gear Solid 2 was about everything else passed on: memetics, cultural traits and social norms, and how social evolution is threatened by junk data crowding the Internet’s discourse. Crowding caused by what the game derisively called a “sea of garbage you people produce.”
Fast forward to today: Kojima’s dystopian future has become our current reality.
It’s a reality where studies show Americans even struggle to find common understanding around what caused the Civil War. Social media, a parallel digital society, has a reputation for being self-absorbed and mean. Some of those who built that space, like Facebook’s Mark Zuckerberg, fear the “erosion of truth,” but won’t take action against lies and manipulated facts spread on their platform. Governments and media constantly call into question the actuality of our lived experiences.