razz
2-28-17, 4:13pm
This is a positive step, but I'm not sure that we all understand and accept that it is possible. I hope that it works. Is it workable for small sites such as SLF?
From: http://www.cbc.ca/news/opinion/online-toxicity-1.4001767
The misanthrope's view of the internet is that it's a hotbed for hate speech and angry trolling, and that it will forever be so.
That's because, according to this view, the online world is basically a lawless frontier where we're free from the structure, confines and civility of real life. Another perspective is that the internet simply acts as a massive floodlight, exposing the ugliest parts of human nature.
But new approaches to taming trolls show that the current state of online toxicity may just be an issue of bad design. Companies such as Google and Riot Games — the makers of the massive multiplayer game League of Legends — are implementing new strategies to tackle poisonous speech, and these solutions might also prove successful in taming trolls on news sites and other online communities...
Beyond that, a number of studies show that anonymity might not be driving online toxicity after all. Rather, it could very well be the lack of repercussions and real-life consequences — coupled with anonymity — that fuel nasty behaviour online. Indeed, anonymity might set the foundation for aggression, but the lack of consequences is arguably what keeps the harassment...
Now Google is trying a similar approach. The company's tech incubator, Jigsaw, along with its Counter Abuse Technology team, recently launched Perspective, a public API that uses artificial intelligence to automatically flag toxic online speech. By comparing new comments with a large data set of archived comments, previously flagged as toxic, from sources such as Wikipedia or online news comment sections, Jigsaw believes it can positively identify hateful speech. As a result, a user's commenting privileges may be revoked, or else, he or she might be subject to "shadowbanning," whereby comments appear invisible to other members of the community.
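For the curious, Perspective is queried as a plain web API: a client sends a comment and asks for a score on one or more attributes (such as TOXICITY). The sketch below builds such a request body in Python; the endpoint URL and field names reflect my understanding of the public Comment Analyzer API, so treat the exact schema as an assumption and check Google's documentation before using it. An API key and an HTTP POST would be needed to actually call the service.

```python
import json

# Assumed endpoint for Google's Perspective (Comment Analyzer) API.
# Verify against the official docs; a real call also needs ?key=YOUR_API_KEY.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(comment_text, attributes=("TOXICITY",)):
    """Build the JSON body for a comments:analyze request.

    The service compares the comment against its training data
    (flagged comments from sources like Wikipedia and news sites)
    and returns a probability-style score per requested attribute.
    """
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {attr: {} for attr in attributes},
    }

payload = build_analyze_request("You are a wonderful person.")
print(json.dumps(payload, indent=2))
# A moderation system could then act on the returned score, e.g.
# holding high-scoring comments for review or shadowbanning repeat offenders.
```

A site like SLF would still need its own policy layer on top: the API only scores text; decisions such as revoking commenting privileges or shadowbanning are left to the community's own tooling.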