Artificial Intelligence Could Help Keep You From Embarrassing Yourself On Facebook

Chuck Bednar for redOrbit.com – Your Universe Online
Good news for those who tend to post drunken selfies: Facebook is reportedly developing an advanced AI program that can detect when people attempt to post pictures in which they appear inebriated and warn them before the photo goes up.
According to Herald Sun technology correspondent Harry Tucker, the social media company is currently developing software that, instead of just being able to identify people’s faces, will actually be capable of determining what is taking place in a picture.
Yann LeCun of Facebook’s Artificial Intelligence research lab told Wired the new AI program would act like “an intelligent digital assistant” that would “mediate your interaction with your friends, and also with content on Facebook.” If you tried to post a photo that it determined could show you in an embarrassing state, it would advise you against uploading it.
The program could also help protect users from their friends. LeCun, a researcher and machine learning expert who is now in charge of Facebook’s AI lab, said that this type of virtual assistant could alert users if someone else was attempting to post an embarrassing photo of them without their permission, Tucker said.
“In a virtual way, he explains, this assistant would tap you on the shoulder and say: ‘Uh, this is being posted publicly. Are you sure you want your boss and your mother to see this?’” said Wired’s Cade Metz. He added that the concept “is more than just an idle suggestion” and that LeCun’s team is “laying the basic groundwork” for the AI assistant.
Creating this type of software primarily involves developing image recognition technology capable of distinguishing between a person’s sober appearance and what they look like when inebriated, Metz said. This requires a form of AI known as deep learning, which the social network already uses to identify faces that can be tagged in photos.
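To make the idea concrete, here is a minimal sketch, in PyTorch, of the kind of deep learning image classifier the article describes: a small convolutional network that scores an uploaded photo and flags it if it looks likely to be embarrassing. This is not Facebook’s actual model; the class name, network size, input resolution, and decision threshold are all illustrative assumptions.

```python
# Illustrative sketch only -- not Facebook's system.
# A tiny convolutional network that outputs the probability a photo is "embarrassing".
import torch
import torch.nn as nn

class EmbarrassmentClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB photo -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 224x224 -> 112x112
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 112x112 -> 56x56
        )
        self.classifier = nn.Linear(32 * 56 * 56, 1)      # assumes 224x224 input images

    def forward(self, x):
        x = self.features(x)
        x = x.flatten(1)
        return torch.sigmoid(self.classifier(x))          # probability in [0, 1]

model = EmbarrassmentClassifier()
photo = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed photo upload
if model(photo).item() > 0.5:         # illustrative threshold
    print("Are you sure you want your boss and your mother to see this?")
```

In practice such a model would have to be trained on labeled examples of each class; the sketch only shows the shape of the approach, a convolutional network producing a single probability that the assistant could act on.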
At the same time, BGR writer Chris Smith said that LeCun “wants to protect the online identity of a person, even though having intelligent machines analyzing personal data might not sound too thrilling to some Facebook users.” The researcher also believes that this type of digital assistant “would initially be able to answer simple questions, but in time, it’ll be able to analyze a lot more data than just photos posted on Facebook,” Smith added.
Tuesday marked the one-year anniversary of LeCun’s Facebook lab, which is known within company circles as FAIR, according to Metz. Among the work it has completed thus far are the deep learning algorithms currently used to examine a person’s Facebook activity and determine what types of links, posts and photos he or she is more likely to click on, so that similar content appears more prominently in his or her news feed.
Those algorithms will “soon analyze the text you type into status posts, automatically suggesting relevant hashtags,” the Wired reporter added. The ultimate goal, however, is to develop AI systems capable of understanding Facebook data “in more complex ways,” thus enabling the social media site to provide a full-on digital assistant to its members.
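The feed-ranking idea described above can be pictured as a simple click-probability model: score each candidate item by how likely the user is to click it, then show the highest-scoring items first. The toy example below is an assumption-laden illustration of that concept, not Facebook’s ranking system; the feature names and weights are invented for the example.

```python
# Toy illustration of click-probability ranking -- not Facebook's algorithm.
import math

# Hypothetical learned weights for a few engagement features
WEIGHTS = {"is_photo": 1.2, "from_close_friend": 2.0, "prior_clicks_on_author": 0.4}
BIAS = -1.5

def click_probability(item):
    """Logistic model: higher score means the item is more likely to be clicked."""
    z = BIAS + sum(WEIGHTS[f] * item.get(f, 0) for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

candidates = [
    {"id": "status_update", "is_photo": 0, "from_close_friend": 1, "prior_clicks_on_author": 2},
    {"id": "photo_album",   "is_photo": 1, "from_close_friend": 0, "prior_clicks_on_author": 5},
]

# Rank the feed: most-likely-to-be-clicked items first
for item in sorted(candidates, key=click_probability, reverse=True):
    print(item["id"], round(click_probability(item), 2))
```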
“For some, this is a harrowing proposition. They don’t want machines telling them what to do, and they don’t want machines identifying their faces and storing them in some distant data center, where they can help Facebook, say, target ads,” Metz said, adding that LeCun and his colleagues insist that their research “is about giving you more control over your online identity, not less.”