No need to fake-enhance the pictures, sorry CSI

“Researchers at the University of Texas at Austin and Cornell Tech say that they’ve trained a piece of software that can undermine the privacy benefits of standard content-masking techniques like blurring and pixelation by learning to read or see what’s meant to be hidden in images—anything from a blurred house number to a pixelated human face in the background of a photo. And they didn’t even need to painstakingly develop extensive new image uncloaking methodologies to do it.”

Source: wired.com

This piece of software isn’t even cutting edge; it is based on a rather simple facial recognition algorithm. What does this mean for security?
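To make this concrete, here is a minimal sketch of the general approach, assuming PyTorch: instead of trying to reconstruct the hidden pixels, the attacker trains a completely ordinary classifier directly on obfuscated photos of the candidate identities. The network architecture, image sizes, and training data below are illustrative stand-ins, not the researchers’ actual setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObfuscatedFaceClassifier(nn.Module):
    """A small, standard CNN -- nothing specialized for deblurring."""
    def __init__(self, num_identities: int):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc1 = nn.Linear(64 * 16 * 16, 256)
        self.fc2 = nn.Linear(256, num_identities)

    def forward(self, x):                              # x: (batch, 3, 64, 64)
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)     # -> 32x32
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)     # -> 16x16
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)                             # logits over identities

def pixelate(images: torch.Tensor, block: int = 8) -> torch.Tensor:
    """Standard mosaic pixelation: average each block x block patch."""
    small = F.avg_pool2d(images, block)
    return F.interpolate(small, scale_factor=block, mode="nearest")

# Training-loop sketch: the attacker pixelates their *own* labeled
# photos of the candidate identities and fits the classifier to them.
model = ObfuscatedFaceClassifier(num_identities=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):                         # illustrative number of steps
    clean = torch.rand(16, 3, 64, 64)        # stand-in for real photos
    labels = torch.randint(0, 10, (16,))     # stand-in identity labels
    logits = model(pixelate(clean))          # train on the obfuscated input
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Nothing here is specialized for “uncloaking”: the network simply learns what each known identity looks like after pixelation, which is exactly why no painstaking new deblurring methodology was needed.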

The success rate (40-80%) is not yet high enough to pose a serious threat to a single individual, but that can (and WILL) change over the next few years. Right now most online services, Google Street View among them, blur faces and license plates. All of that could be for nothing in the future. But that’s not even the most serious concern.
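One reason blurring offers less protection than it seems: it is a deterministic transformation, so anyone who can guess the candidate faces or plates can blur their own copies the same way and compare. Below is a toy illustration of that principle, assuming Pillow is installed; it ignores real-world alignment and lighting issues, and the file names and the `obfuscate`/`pixel_distance` helpers are hypothetical placeholders.

```python
from PIL import Image, ImageFilter

def obfuscate(path: str, radius: int = 8) -> Image.Image:
    """The kind of Gaussian blur a mapping service might apply."""
    return Image.open(path).convert("RGB").filter(
        ImageFilter.GaussianBlur(radius))

def pixel_distance(a: Image.Image, b: Image.Image) -> float:
    """Mean absolute pixel difference over a common 64x64 grid."""
    pa = a.resize((64, 64)).tobytes()
    pb = b.resize((64, 64)).tobytes()
    return sum(abs(x - y) for x, y in zip(pa, pb)) / len(pa)

# The published crop is already blurred by the service; the attacker
# blurs each candidate plate the same way and picks the closest match.
published = Image.open("street_view_crop.jpg").convert("RGB")  # hypothetical
candidates = ["plate_ABC123.jpg", "plate_XYZ789.jpg"]          # hypothetical
best = min(candidates,
           key=lambda p: pixel_distance(published, obfuscate(p)))
print("closest candidate:", best)
```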

What are the limits of future technologies like this one? Will they be able to identify a person from hijacked, blurred CCTV footage? Will they be able to easily bypass most kinds of CAPTCHA challenges?

Now combine software like this with the poorly secured feeds of CCTV cameras, and whoever wants to know where a person is, or was, can easily get at that data. As a countermeasure, researchers suggest that instead of just blurring a face, one should partially cover it with a different face, confusing the recognition algorithm into locking onto the wrong identity.
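A rough sketch of that countermeasure, assuming Pillow is available; the file paths, coordinates, and the `mask_with_decoy` helper are hypothetical placeholders.

```python
from PIL import Image

def mask_with_decoy(photo_path: str, decoy_path: str,
                    face_box: tuple) -> Image.Image:
    """Paste the upper half of a decoy face over the original face region."""
    photo = Image.open(photo_path).convert("RGB")
    left, top, right, bottom = face_box
    decoy = Image.open(decoy_path).convert("RGB").resize(
        (right - left, bottom - top))
    # Cover only part of the region: the photo still looks masked,
    # but a classifier now sees the decoy's features, not the subject's.
    half = decoy.crop((0, 0, decoy.width, decoy.height // 2))
    photo.paste(half, (left, top))
    return photo

# Hypothetical inputs: a group photo and a stock decoy face.
masked = mask_with_decoy("group_photo.jpg", "decoy_face.jpg",
                         face_box=(120, 80, 220, 200))
masked.save("group_photo_masked.jpg")
```

The point is that the masked region still contains face-like features, just the wrong ones, so a classifier confidently returns the decoy’s identity instead of the subject’s.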

Within ten or twenty years, once we have developed far more sophisticated AI-based software and our processing capabilities have risen to heights that are hard to imagine today, most of the privacy we still have will probably be gone.