Forum Moderators: rogerd
"The system learns by seeing how thousands of online conversations have been moderated and then scores new comments by assessing how 'toxic' they are and whether similar language has led other people to leave conversations. What it's doing is trying to improve the quality of debate and make sure people aren't put off from joining in."
Google "Perspective" Machine Learning To Hide Toxic Comments [bbc.co.uk]
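As a rough illustration of the idea described above (learning from past moderation decisions, then scoring new comments), here is a minimal toy sketch in Python. This is not Google's actual model or the Perspective API; the labeled examples and the word-frequency scoring are hypothetical stand-ins for what a real system would learn from thousands of moderated conversations.

```python
# Toy sketch (NOT Google's actual model): learn per-word "toxicity" rates
# from a small set of moderator-labeled comments, then score new comments.
from collections import defaultdict

def train(labeled_comments):
    """labeled_comments: list of (text, is_toxic) pairs from past moderation."""
    toxic_counts = defaultdict(int)
    total_counts = defaultdict(int)
    for text, is_toxic in labeled_comments:
        for word in set(text.lower().split()):
            total_counts[word] += 1
            if is_toxic:
                toxic_counts[word] += 1
    # Per-word fraction of labeled comments containing the word that were toxic.
    return {w: toxic_counts[w] / total_counts[w] for w in total_counts}

def toxicity_score(model, text):
    """Average learned toxicity of the comment's known words (0.0 to 1.0)."""
    words = [w for w in text.lower().split() if w in model]
    if not words:
        return 0.0
    return sum(model[w] for w in words) / len(words)

# Hypothetical training data standing in for moderator decisions.
labeled = [
    ("you are an idiot", True),
    ("total idiot comment", True),
    ("thanks for the helpful answer", False),
    ("great discussion here", False),
]
model = train(labeled)
print(toxicity_score(model, "what an idiot"))
print(toxicity_score(model, "thanks for the answer"))
```

A real system would use far richer features and a trained classifier rather than word counts, but the pipeline shape is the same: past moderation outcomes in, a 0-to-1 toxicity score out, with comments above some threshold hidden or flagged.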
This isn't about spam; this is about censorship, plain and simple.
Extremely bad and manipulative, especially Jigsaw's "projects" like the Syrian one, which is deeply worrying.