Forum Moderators: rogerd
Google "Perspective" Machine Learning To Hide Toxic Comments
The system learns by seeing how thousands of online conversations have been moderated, then scores new comments by assessing how "toxic" they are and whether similar language has led other people to leave conversations. The aim is to improve the quality of debate and make sure people aren't put off from joining in. Google "Perspective" Machine Learning To Hide Toxic Comments [bbc.co.uk]
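For anyone curious what "scoring" a comment actually looks like, here is a minimal sketch of a request to Perspective's public comments:analyze endpoint. The API key placeholder, the 0.8 hide threshold, and the helper names are illustrative assumptions, not Google's implementation.

```python
import json
import urllib.request

# Perspective API endpoint (requires a real API key; placeholder below is
# an assumption for illustration).
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(comment_text):
    """Build the JSON payload asking Perspective for a TOXICITY score."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def should_hide(toxicity_score, threshold=0.8):
    """Hypothetical moderation rule: hide comments scored above threshold."""
    return toxicity_score >= threshold

payload = build_request("You are a wonderful person.")

# To actually score the comment, POST the payload with a valid key:
# req = urllib.request.Request(
#     API_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# response = json.load(urllib.request.urlopen(req))
# score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
# print(should_hide(score))
```

The score that comes back is a probability-like value between 0 and 1; what a site does with it (hide, flag, or just sort) is up to the publisher, not the API.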
This isn't about spam; this is about censorship, plain and simple.