Lecture 5 - The work in keeping communities civil Flashcards
Reasons for pseudonymity
- Internet culture: e.g. Twitch streamers
- Potential threat to livelihood: risk of political or economic retaliation
- Risk of discrimination: e.g., being deadnamed
- Risk of violence
Real name fallacy
- US victims of online harassment already know who their attacker is
- Conflict, harassment and discrimination are social and cultural problems
- Revealing personal information exposes people to greater levels of harassment and discrimination
- Companies storing personal information for business purposes also expose people to potential serious risks
- Identity protections are first line of defense for people facing serious risks online
- People manage identity across different social contexts
- Social norms can reduce problems
- People may reveal their identity when it helps them increase their influence and approval from others
- Hate groups operate openly to seek legitimacy
Automated content moderation
- Remove problematic content without direct, consistent human intervention
- Matching –> new content is matched against known, previously seen content
- Classification –> assessing new content with no previous examples, using machine learning
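The two automated approaches above can be sketched in code. This is a minimal illustration, not any platform's actual system: matching is shown as a hash lookup against known banned content, and classification as a toy keyword score standing in for a trained machine-learning model. All data and thresholds here are hypothetical.

```python
import hashlib

# Hypothetical database of hashes of previously banned content.
BANNED_HASHES = {hashlib.sha256(b"known banned image bytes").hexdigest()}

def moderate_by_matching(content: bytes) -> bool:
    """Matching: flag content identical to known banned content."""
    return hashlib.sha256(content).hexdigest() in BANNED_HASHES

# Toy stand-in for a classifier's vocabulary; a real system would
# use a trained model, not a word list.
TOXIC_WORDS = {"hate", "attack"}

def moderate_by_classification(text: str, threshold: float = 0.5) -> bool:
    """Classification: score unseen content and flag it above a threshold."""
    words = text.lower().split()
    if not words:
        return False
    score = sum(w in TOXIC_WORDS for w in words) / len(words)
    return score >= threshold
```

Note the trade-off this sketch makes visible: matching only catches exact re-uploads of known content, while classification generalizes to new content but can produce false positives and negatives.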
Algorithmic commercial content moderation
“Systems that classify user-generated content based on either matching or prediction, leading to a decision and governance outcome (e.g., removal, geoblocking, account takedown).”
Content moderation
“The detection of, assessment of, and interventions taken on content or behaviour deemed unacceptable by platforms or other information intermediaries, including the rules they impose, the human labour and technologies required, and the institutional mechanisms of adjudication, enforcement, and appeal that support them.”
Artisanal approach
- Moderation type on small scale platforms
- Vimeo, Medium, Patreon, Discord
- Case by case governance
- 5–200 in-house moderators
- Limited use of automated content moderation
- More time available per moderation report
Community-reliant approach
- Wikipedia and Reddit
- Federated system of governance
- Site wide governance by formal policies with community input
- Platform administrators sit above volunteer moderators
- Day-to-day content moderation is left to volunteer moderators
- Balance of power
Industrial approach of moderation
- Large-scale moderation
- Facebook, Google, Twitter
- Formal policies created by policy team
- Mass moderation operation
- Highly consistent enforcement of rules
- Loss of context
- Thousands of workers moderating content
- High use of automated content moderation
Free speech vs. safe space results (Gibson, 2019)
- In the safe space, moderators removed significantly more comments, and users also removed more of their own comments –> higher self-censorship
- Language in the safe space is more positive and discussions are more about leisure
- Language in the free-speech space is more negative and angry
- Suggests that differences in moderation policies may affect self-censorship and language in online space
Deplatforming hate
- Banning individuals or groups from a social media platform
–> They will migrate to other platforms
The efficacy of Reddit's 2015 ban examined through hate speech (2017)
- Study of the ban of r/fatpeoplehate and r/CoonTown
- The ban worked for Reddit
- More accounts than expected discontinued using the site; those that stayed drastically decreased their hate speech usage—by at least 80%
- Though many subreddits saw an influx of r/fatpeoplehate and r/CoonTown “migrants,” those subreddits saw no significant changes in hate speech usage
Results of deplatforming far-right celebrities on Twitter (Jhaver et al., 2021)
- Communication around the celebrities is reduced
- Spread of offensive ideas associated with the celebrities is reduced
- Activity and toxicity of supporters are reduced