Facebook Developing Artificial Intelligence To Flag Offensive Live Videos

Hamilton Nwosa
Writer

Facebook Inc (FB.O) is working on automatically flagging offensive material in live video streams, building on a growing effort to use artificial intelligence to monitor content, said Joaquin Candela, the company’s director of applied machine learning.

The social media company has been embroiled in a number of content moderation controversies this year, from facing international outcry after removing an iconic Vietnam War photo due to nudity, to allowing the spread of fake news on its site.

Facebook has historically relied mostly on users to report offensive posts, which are then checked by Facebook employees against company “community standards.” Decisions on especially thorny content issues that might require policy changes are made by top executives at the company.

Candela told reporters that Facebook was increasingly using artificial intelligence to find offensive material. It is “an algorithm that detects nudity, violence, or any of the things that are not according to our policies,” he said.

The company already had been working on using automation to flag extremist video content, as Reuters reported in June.

Now the automated system is also being tested on Facebook Live, the company’s service that lets users broadcast video in real time.

Using artificial intelligence to flag live video is still at the research stage, and has two challenges, Candela said. “One, your computer vision algorithm has to be fast, and I think we can push there, and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down.”
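A rough illustration of that second challenge, prioritizing flagged streams so the riskiest ones reach a human reviewer first, might look like the sketch below. The scoring threshold, class names and queue design are assumptions made for illustration, not details of Facebook’s actual system.

```python
import heapq
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(order=True)
class FlaggedStream:
    priority: float                      # negated model score, so the riskiest stream pops first
    stream_id: str = field(compare=False)

class ReviewQueue:
    """Hypothetical queue that surfaces the riskiest live streams to human reviewers."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold       # only escalate confident detections
        self._heap: List[FlaggedStream] = []

    def flag(self, stream_id: str, violation_score: float) -> None:
        if violation_score >= self.threshold:
            heapq.heappush(self._heap, FlaggedStream(-violation_score, stream_id))

    def next_for_review(self) -> Optional[str]:
        return heapq.heappop(self._heap).stream_id if self._heap else None

queue = ReviewQueue()
queue.flag("live-123", violation_score=0.95)   # likely policy violation
queue.flag("live-456", violation_score=0.40)   # below threshold, not escalated
print(queue.next_for_review())                  # -> live-123
```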

Facebook said it also uses automation to process the tens of millions of reports it gets each week, to recognize duplicate reports and route the flagged content to reviewers with the appropriate subject matter expertise.
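In broad strokes, that report-handling step could resemble the sketch below; the categories, team names and fingerprinting scheme are illustrative assumptions rather than anything Facebook has disclosed.

```python
import hashlib
from collections import defaultdict

# Hypothetical mapping from report category to the reviewer team with the right expertise.
ROUTES = {"nudity": "graphic-content-team", "hate_speech": "policy-team", "spam": "spam-team"}

seen_fingerprints = set()                 # fingerprints of reports already queued
review_queues = defaultdict(list)         # reviewer team -> content awaiting review

def handle_report(content_id: str, category: str) -> None:
    """Drop duplicate reports about the same content, then route the rest by category."""
    fingerprint = hashlib.sha256(f"{content_id}:{category}".encode()).hexdigest()
    if fingerprint in seen_fingerprints:
        return                            # duplicate report; the content is already queued
    seen_fingerprints.add(fingerprint)
    review_queues[ROUTES.get(category, "general-review-team")].append(content_id)

handle_report("post-42", "nudity")
handle_report("post-42", "nudity")        # duplicate report, ignored
print(dict(review_queues))                # {'graphic-content-team': ['post-42']}
```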

Chief Executive Officer Mark Zuckerberg in November said Facebook would turn to automation as part of a plan to identify fake news. Ahead of the Nov. 8 U.S. election, Facebook users saw fake news reports erroneously alleging that Pope Francis endorsed Donald Trump and that a federal agent who had been investigating Democratic candidate Hillary Clinton was found dead.

However, determining whether a particular comment is hateful or bullying, for example, requires context, the company said.

Yann LeCun, Facebook’s director of AI research, declined to comment on using AI to detect fake news, but said that, in general, improvements to the news feed raised questions about tradeoffs between filtering and censorship, and between freedom of expression, decency and truthfulness.

“These are questions that go way beyond whether we can develop AI,” said LeCun. “Tradeoffs that I’m not well placed to determine.”
