
New Face Recognition Technology Produces Better Selfies

May 29, 2014
Image Credit: Thinkstock.com

Brett Smith for redOrbit.com – Your Universe Online

Photo-stylizing applications like Instagram allow users to quickly generate filtered images for upload to social media, but these filters often produce poor results on selfies, because the human face is made up of a wide range of textures and surfaces.

MIT researchers have unveiled a solution to this issue by leveraging off-the-shelf face-recognition technology. The new method is scheduled for presentation at Siggraph, the premier graphics conference being held in Vancouver this year.

“Most previous methods are global: From this example, you figure out some global parameters, like exposure, color shift, global contrast,” said project researcher YiChang Shih, an MIT graduate student in electrical engineering. “We started with those filters but just found that they didn’t work well with human faces. Our eyes are so sensitive to human faces. We’re just intolerant to any minor errors.”

Instead of focusing on global tweaks, Shih said, the MIT team devised a process called “local transfer.” The process starts by identifying a stylized photo to use as a reference for the desired output.

“We then find a dense correspondence — like eyes to eyes, beard to beard, skin to skin — and do this local transfer,” Shih said.
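The idea of a part-to-part transfer can be sketched in a few lines. The snippet below is a rough illustration only, not the MIT team's actual algorithm: it assumes a precomputed dense correspondence field (in practice this would come from facial landmarks) and simply matches local brightness statistics patch by patch on a grayscale image.

```python
import numpy as np

def warp(ref, corr):
    """Warp the reference image onto the input using a dense
    correspondence field corr[y, x] = (y_ref, x_ref), assumed to be
    precomputed (e.g. from matched facial landmarks)."""
    ys = corr[..., 0].clip(0, ref.shape[0] - 1).astype(int)
    xs = corr[..., 1].clip(0, ref.shape[1] - 1).astype(int)
    return ref[ys, xs]

def local_transfer(inp, ref_warped, size=8):
    """Match the local mean and spread of the input to the warped
    reference, patch by patch -- a crude stand-in for local transfer."""
    out = inp.astype(float).copy()
    h, w = inp.shape[:2]
    for y in range(0, h, size):
        for x in range(0, w, size):
            p = out[y:y + size, x:x + size]
            r = ref_warped[y:y + size, x:x + size].astype(float)
            p_std = p.std() + 1e-6  # avoid division by zero on flat patches
            out[y:y + size, x:x + size] = (p - p.mean()) / p_std * r.std() + r.mean()
    return out.clip(0, 255).astype(np.uint8)
```

Because the reference is warped into alignment first, eyes land on eyes and skin on skin before any statistics are copied, which is the point of the "dense correspondence" step.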

To refine the filtering method, the team added another element to the process called “multiscale matching.”

“Human faces consist of textures of different scales,” Shih said. “You want the small scale — which corresponds to face pores and hairs — to be similar, but you also want the large scale to be similar — like nose, mouth, lighting.”
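Multiscale matching can be illustrated with a simple band-split. The sketch below is an assumption-laden stand-in for the paper's method, not its implementation: it builds a crude Laplacian-style pyramid using block averaging, rescales each frequency band of the input so its energy matches the corresponding band of the reference (fine bands carry pores and hair, coarse bands carry lighting and large features), then recombines.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (simple pyramid step)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample(img, shape):
    """Nearest-neighbour upsample back to the given shape."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    """Split an image into frequency bands plus a coarse residual."""
    img = img.astype(float)
    bands = []
    for _ in range(levels):
        low = downsample(img)
        bands.append(img - upsample(low, img.shape))
        img = low
    bands.append(img)  # low-frequency residual
    return bands

def reconstruct(bands):
    """Invert the pyramid: add the bands back from coarse to fine."""
    img = bands[-1]
    for band in reversed(bands[:-1]):
        img = upsample(img, band.shape) + band
    return img

def multiscale_match(inp, ref, levels=3):
    """Rescale each band of the input so its energy matches the same
    band of the (assumed same-size, aligned) reference."""
    bi, br = laplacian_pyramid(inp, levels), laplacian_pyramid(ref, levels)
    out = [b * ((r.std() + 1e-6) / (b.std() + 1e-6))
           for b, r in zip(bi[:-1], br[:-1])]
    # Match the mean of the coarse residual to the reference.
    out.append(bi[-1] - bi[-1].mean() + br[-1].mean())
    return reconstruct(out)
```

The per-band gains are what let small-scale texture and large-scale lighting be matched independently, rather than with one global adjustment.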

The researchers found that their method could also be applied to video. The process improves on global filters in that it can compensate for extreme glare – caused, for example, when a person wearing glasses turns their head. Because the new algorithm considers the eyes separately from the rest of the face, the overall image is less disrupted.
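Handling the eyes separately can be sketched as a masked composite: the eye region is processed on its own and blended back over the stylized face. The blend below is a hypothetical illustration of that idea, not the paper's actual eye-handling step; the soft mask is assumed to come from a face detector.

```python
import numpy as np

def composite_eyes(face_result, eye_result, eye_mask):
    """Blend a separately processed eye region back into the stylized
    face. eye_mask is a soft matte: 1.0 inside the eyes, 0.0 elsewhere
    (hypothetical -- in practice derived from detected landmarks)."""
    m = np.clip(eye_mask, 0.0, 1.0)
    return face_result * (1.0 - m) + eye_result * m
```

Glare in the eye region then only affects the eye branch of the pipeline, leaving the rest of the stylized face untouched.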

The study team noted that their filtering method caused distortions in a person’s eye color in certain cases, so they developed an optional feature that eliminates the distortion.

Shih said the filter produces the best results when the source and reference pictures are closely matched. When the two aren’t a good match, the output can look bizarre – such as a baby with wrinkled features. The MIT team tested their filter on 94 images culled from the photo-sharing site Flickr and found that it regularly produced good results.

“We’re looking at creating a consumer application utilizing the technology,” said Robert Bailey, currently with Adobe’s Disruptive Innovation Group. “One of the things we’re exploring is remixing of content.”

Bailey said the MIT filter is a significant improvement on the photo filters that are currently available.

“You can’t get stylizations that are this strong with those kinds of filters,” he said. “You can increase the contrast, you can make it look grungy, but you’re not going to fundamentally be able to change the lighting effect on the face.”

“You can take a photo that has relatively flat lighting and bring out portrait-style pro lighting on it and remap the highlights as well,” he added.




