Facebook To Expand Artificial Intelligence To Help Prevent Suicide

Express News


World | Reuters | Updated: November 28, 2017


SAN FRANCISCO: Facebook Inc will expand its pattern-recognition software to other countries after successful tests in the U.S. to detect users with suicidal intent, the world’s biggest social network said on Monday.

Facebook began testing the software in the United States in March, when the company started scanning the text of Facebook posts and comments for phrases that could be signs of an imminent suicide.

Facebook has not disclosed many technical details of the program, but the company said its software searches for specific phrases that could be clues, such as the questions “Are you ok?” and “Can I help?”

If the software detects a potential suicide, it alerts a team of Facebook employees who specialize in handling such reports. The system suggests resources to the user or to friends of the person, such as a telephone help line. Facebook workers sometimes call local authorities to intervene.
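Facebook has not published how its detection works, so as a rough illustration only, the phrase-scanning step described above could be sketched as a naive substring filter over a post's comments. The phrase list, function name, and flagging logic below are invented for this example and are not Facebook's actual method:

```python
# Illustrative sketch only: a naive phrase filter, assuming simple
# substring matching. Facebook's real system is undisclosed; the
# phrase list and threshold here come from the two example phrases
# mentioned in the article.
CONCERN_PHRASES = ["are you ok", "can i help"]


def flag_for_review(comments):
    """Return True if any comment contains a concern phrase.

    A real system would use far richer signals; this only shows
    the general idea of scanning text for clue phrases.
    """
    return any(
        phrase in comment.lower()
        for comment in comments
        for phrase in CONCERN_PHRASES
    )


print(flag_for_review(["Hey, are you OK? I'm worried."]))  # True
print(flag_for_review(["Great photo!"]))                   # False
```

A flagged result would then be routed to human reviewers, mirroring the escalation flow the article describes.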

Guy Rosen, Facebook’s vice president for product management, said the company was starting to roll out the software outside the United States because the tests had been successful. During the past month, he said, first responders checked on people more than 100 times after Facebook software detected suicidal intent.

Facebook said it aims to have specialists available at any hour to call authorities in local languages.

“Speed really matters. We have to get help to people in real time,” Rosen said.

Last year, when Facebook launched live video broadcasting, videos of violent acts, including suicides and murders, proliferated, posing a risk to the company’s image. In May Facebook said it would hire 3,000 more people to monitor videos and other content.

Rosen did not name the countries where Facebook was rolling out the software, but he said it would eventually be used worldwide except in the European Union, due to sensitivities there, which he declined to discuss.

Other tech companies also work to prevent suicide. Google’s search engine displays the phone number for a suicide hotline in response to certain searches.

Facebook knows a great deal about its 2.1 billion users – data that it uses for targeted advertising – but the company has not previously been known to systematically scan conversations for patterns of harmful behavior.

One exception is its effort to detect suspicious conversations between children and adult sexual predators. When its automated screens pick up inappropriate language, Facebook sometimes contacts authorities.

It might be harder for tech firms to justify scanning conversations in other situations, said Ryan Calo, a University of Washington law professor who writes about technology.

“Once you open the door, you might wonder what other kinds of things we would be looking for,” Calo said.

Rosen declined to say whether Facebook was considering pattern-recognition software in other areas, such as non-sexual crimes.