Forum Moderators: not2easy


Social Media Moderation via AI?

Algorithms in charge now


not2easy

4:51 pm on Mar 24, 2020 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



Governments around the world are asking people to work from home, but most of Facebook's human reviewers are not in locations where they can access their work remotely. These contractors are currently on paid leave, which means Facebook has to handle content issues in a less reliable way.

From an article (possible paywall): [washingtonpost.com...]
The people who do that sensitive work — nearly 15,000 contractors at 20 sites globally — continued to come to the office until last Monday, when public pressure, internal protests and quarantine measures around the world pushed Facebook to make a drastic move to shutter its moderation offices.

With so much of its review workforce sidelined, Facebook is currently relying on automated methods, which it admits are not perfect. AI and algorithms have led both to delays in removing misinformation and to accidental removal of valid content. Neither outcome is beneficial, and the company says it is making adjustments to improve accuracy.

Still, chief executive Mark Zuckerberg said on a media call Wednesday that Facebook will be forced during the pandemic to rely more heavily on artificial intelligence software to make those judgment calls. The company also will train full-time employees to devote “extra attention” to highly sensitive content, such as any involving suicide, child exploitation and terrorism. Users should expect more mistakes while Facebook triages the process, he said, in part because a fraction of the humans will be involved and because software makes more blunt decisions than humans.

Of course, Facebook is not alone in this situation. Twitter [blog.twitter.com] as well as YouTube [youtube-creators.googleblog.com] and other social media sites face the same accuracy limitations.

Facebook shares details on their blog:
https://about.fb.com/news/2020/03/coronavirus/#content-review

Facebook shares enforcement status details in its transparency report:
https://transparency.facebook.com/community-standards-enforcement#bullying-and-harassment

(Note: Link contains # so please copy to visit)

lammert

6:08 pm on Mar 24, 2020 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Hmmm, Facebook not able to create a work-at-home environment for something as computer-bound as moderating content? I have reviewed Google Maps additions for many years and never worked from an office location.

It seems Facebook wants to pull all workers inside their offices, just as they try to pull all users inside their facebook.com eco-system.

not2easy

6:53 pm on Mar 24, 2020 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



From the article, it appears that this work is done by contractors so that the 'test environment' is known and controlled for unexpected consequences. Large segments of the workforce are in the eastern hemisphere; the article specifically mentions their center in Manila.

engine

8:54 pm on Mar 24, 2020 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



Very interesting.
I wonder how the AI will do, and if it really will learn.

lammert

9:19 pm on Mar 24, 2020 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I assume this type of AI is trained by feeding it existing manual moderation decisions on historic content. With the manual workforce currently unavailable to give input on actual content, AI results may deteriorate as new sensitive topics arise. Regarding COVID-19, for example, the currently available manual moderation training set may be too small to train the AI system to moderate content around this subject properly.
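The dynamic described above — a classifier trained on past human decisions behaving unpredictably on a topic it has never seen — can be sketched with a toy naive Bayes text classifier. Everything here is hypothetical illustration (the training examples, labels, and model are invented for the sketch), not Facebook's actual system:

```python
import math
from collections import Counter, defaultdict

# Hypothetical stand-in for historical human moderation decisions:
# (post text, label), where the label is the reviewer's verdict.
TRAINING_DATA = [
    ("buy cheap pills now", "remove"),
    ("miracle cure guaranteed overnight", "remove"),
    ("click here for free money", "remove"),
    ("photos from our family trip", "allow"),
    ("great recipe for banana bread", "allow"),
    ("watching the game tonight", "allow"),
]

def train(data):
    """Count word frequencies per label (a minimal naive Bayes model)."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in data:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest smoothed log-probability."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, count in label_counts.items():
        score = math.log(count / total)  # prior from label frequency
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            # Laplace smoothing: unseen words get a tiny uniform probability
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAINING_DATA)
# Vocabulary the model has seen before: a confident, sensible call.
print(classify("free miracle pills", word_counts, label_counts))
# A post about an entirely new topic: no word matches the training set,
# so the decision is driven by smoothing noise rather than evidence.
print(classify("novel virus advice thread", word_counts, label_counts))
```

For a query whose words all fall outside the training vocabulary, every per-word probability collapses to the same smoothed constant, so the verdict is effectively arbitrary — which is exactly the failure mode one would expect when a model trained on pre-pandemic decisions first meets COVID-19 content.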