Protecting your visual identity in the digital age

This article was written by Tanweer Ali, Senior Developer at AVG’s Innovation Labs in Amsterdam. Tanweer is leading research into Computer Vision with a focus on enhancing user privacy. He is an experienced developer with a background in Scientific Computing, 3D Graphics and Numerical Methods.

Introduction

With the increasing use of cellphone cameras in public places, the chance of your picture or video being taken without your knowledge and ending up on the internet keeps growing. Big Data technologies then allow not only for the storage of large amounts of image data but also for analyzing it in real time. Take Google’s StreetView, which has captured random people in objectionable situations (vomiting, being arrested, etc.) or filmed inside their home fences. In some countries, Google is even allowed to keep the un-censored images for six months. Couple that with other advances in face-recognition technology, such as Facebook’s DeepFace (see here and here), and a private corporation will soon have the power not only to recognize you in a crowded place but also to gain instant access to your most private information.

How much of our visual identity can we protect from unwanted privacy invaders? Can we hide ourselves from cameras or interfere with the face-recognition technologies?

As we look inside the camera’s hardware, it turns out that the hardware itself has limitations that, in theory at least, can be exploited. Based on these, a number of privacy advocates are already selling wearables that claim to protect your identity. How effective are these techniques, and can they really promise us anonymity in the face of prying digital eyes? These are the kinds of questions we try to answer in this article.

We start with some background on the camera’s sensor and the suppliers in the cellphone image-sensor market. We then focus on two techniques for distorting captured images: using infra-red lights and using retro-reflective materials. Finally, we present our own findings in the last section.

Cellphone Cameras & Image Sensors

At the core of any camera are the lens and the image sensor. The lens focuses light onto the sensor, and there can be multiple lens elements inside the lens assembly, as shown in figure (2). The focal length of these lenses determines the field of view and the magnification of the scene.

Fig 1: Galaxy S5; a Samsung S5K2P2XX 1/2.6” sensor with an f/2.2 lens. Photo: iFixit


Fig 2: Typical Camera Module on Mobile Phones


The image sensor is what converts light into electrical signals. When an image is captured, the light passes through the lens and falls on the sensor. The sensor consists of a large number of very small photo-detectors, or picture elements, called pixels. Each pixel registers the amount of light that falls on it and converts it to a corresponding voltage and then a digital signal. It is the number of these pixels on the sensor that we measure in Megapixels. The size of each pixel is typically measured in µm (micrometres). A sensor with a larger pixel size will perform better in low-light conditions as it will be more sensitive.
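As a back-of-the-envelope illustration of how pixel count and pixel size relate, here is a minimal sketch; the resolution figures used are examples for illustration, not measurements:

```python
# Illustrative arithmetic relating pixel count, pixel pitch and sensor size.
# The numbers below are example values, not vendor specifications.

def megapixels(width_px, height_px):
    """Total pixel count expressed in megapixels."""
    return width_px * height_px / 1_000_000

def sensor_width_mm(width_px, pixel_pitch_um):
    """Approximate active sensor width given the pixel pitch in micrometres."""
    return width_px * pixel_pitch_um / 1000

mp = megapixels(5312, 2988)          # an example 16 MP still-image mode
width = sensor_width_mm(5312, 1.12)  # 1.12 µm pixels, as in the table below

print(f"{mp:.1f} MP, ~{width:.2f} mm sensor width")
```

This also shows why pixel size matters: packing more megapixels into the same sensor area necessarily shrinks each pixel, reducing its light-gathering ability.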

There are two main types of image-sensor technology: CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide Semiconductor). Smartphone cameras almost all use CMOS sensors, which require less circuitry outside the sensor and consume roughly 100 times less power than their CCD counterparts. They also have a faster readout, higher noise immunity, and a smaller system size.

The sensor itself cannot distinguish between different wavelengths of light and hence captures no color information on its own. A filter in front of the sensor allows color values to be assigned to each pixel, in either RGB or CMYG format. The Bayer array is one example of an RGB filter, with alternating rows of red-green and green-blue filters.

Fig 3: Electromagnetic wavelengths ranges


An infrared filter is included in almost every mobile camera, shown as IR-Glass in figure (2). It keeps infrared light from interfering with the output of the sensor. The visible light spectrum ranges from about 380nm to around 780nm; infrared light extends from 780nm to about 1mm (see figure 3).

There are other features of CMOS sensors that improve image quality under varying light conditions, e.g. Backside Illumination (BSI) and Dynamic Range. The dynamic range of a sensor, defined as the ratio between the maximum and minimum possible signal, also determines the image quality when the scene contains both dark shadows and high-intensity reflections.
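The ratio definition above is usually quoted in decibels or photographic stops; a small sketch, using hypothetical sensor figures, makes the conversion concrete:

```python
import math

def dynamic_range_db(max_signal, noise_floor):
    """Dynamic range: ratio of largest to smallest usable signal, in decibels."""
    return 20 * math.log10(max_signal / noise_floor)

def dynamic_range_stops(max_signal, noise_floor):
    """The same ratio expressed in photographic stops (powers of two)."""
    return math.log2(max_signal / noise_floor)

# Hypothetical sensor: full-well capacity of 10,000 electrons, read noise of 5.
print(f"{dynamic_range_db(10000, 5):.1f} dB, "
      f"{dynamic_range_stops(10000, 5):.1f} stops")
```

A scene whose brightest and darkest details span more stops than the sensor can record will lose detail at one end or the other, which is exactly what the retro-reflective technique later in this article tries to provoke.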

CMOS Image Sensor Market

The top manufacturers of CMOS sensors are Sony, OmniVision and Samsung, each leading the market with roughly a 20% share, while others such as Aptina Imaging, Toshiba, Nikon and STMicroelectronics compete for the remainder of the CMOS sensor market.

There are also differences in the strengths of sensors from different manufacturers. Backside Illumination, for example, was first introduced in CMOS sensors by Sony and improves image quality in low-light conditions. Samsung later introduced its ISOCELL technology to add BSI to its sensors, which additionally reduces crosstalk between pixels. There can also be differences in the dynamic range of the sensors.

The following table looks at various CMOS sensor products found in some of the most popular mobile devices:

Device Name          Sensor              Manufacturer   Resolution (MP)   Pixel size (µm)
iPhone 4             OV5650              OmniVision     5                 1.75
iPhone 4S            IMX145              Sony           8                 1.4
iPhone 5             IMX145-Derivative   Sony           8                 1.4
iPhone 5S                                Sony           8                 1.5
iPhone 6             ISX014              Sony           8                 1.5
LG Nexus 4           IMX111PQ            Sony           8                 1.12
LG Nexus 5           IMX179              Sony           8                 1.4
Samsung Galaxy S3    S5K2P1              Samsung        16
Samsung Galaxy S4    S5K3L2              Samsung        13
Samsung Galaxy S5    (ISOCELL based)     Samsung        16                1.12

Table 1: CMOS sensors found in popular mobile devices

Techniques to hide from Cameras

We will focus here on two main techniques that are used to hide a subject from cameras or to break face-detection algorithms. One uses infrared lighting (see here and here) and the other is based on the use of retro-reflective materials.

1. Infrared Lighting to Break Face Detection

This method was introduced by Isao Echizen of the National Institute of Informatics in Tokyo. It uses goggles with infra-red LEDs placed around the eyes and nose. Since infra-red light is completely invisible to the human eye, it is only detectable by cameras that are sensitive to the wavelengths of these LEDs. The authors claim the goggles break face detection when the lights are switched on:


Infrared lights break facial detection

However, the drawback of this approach, as some of our own experiments will show, is that many cellphone camera sensors have an IR filter strong enough to cut off any wavelengths beyond the visible spectrum. The design of the glasses also has to be socially acceptable to wear. Finally, if the subject is photographed with a flash, interference from the flash will drown out the infrared lights.

2. Retro-reflective Materials

Most rough surfaces reflect light by diffusing or scattering it in all directions, which minimizes the intensity of the reflection. Mirror surfaces, on the other hand, reflect light specularly: the angle of reflection equals the angle of incidence, on the opposite side of the surface normal.


Retro-reflective surfaces, however, are specially designed to reflect light back toward its source, along the direction it arrived from. These materials therefore appear brightest to an observer located near the light source.
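The geometry behind this is simple to sketch. A common retro-reflector design is a corner cube of three mutually perpendicular mirrors: each mirror reflects a ray specularly (r = d − 2(d·n)n), and the three reflections together negate every component of the direction vector, sending the ray straight back. A minimal illustration:

```python
# Sketch: why a corner-cube retroreflector sends light straight back.
# A single mirror reflects a direction d as r = d - 2(d.n)n; three mutually
# perpendicular mirrors each flip one component, so the ray exits reversed.

def reflect(d, n):
    """Specular reflection of direction vector d off a surface with unit normal n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

ray = (0.5, -0.3, -0.8)                            # arbitrary incoming direction
for normal in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:   # three orthogonal mirror faces
    ray = reflect(ray, normal)

print(ray)  # direction reversed: every component negated
```

Real retro-reflective fabrics achieve the same effect with tiny glass beads or micro-prisms rather than literal mirror corners, but the returned-to-source geometry is the same.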

So how can this material be used in our defense against cameras? It turns out that if the flash of the capturing camera is located near the image sensor itself, the material reflects most of the flash light straight back to the sensor. The result is an image that puts the dynamic range of the camera sensor to the test.
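A toy model, with purely illustrative numbers, shows why this darkens the subject: auto-exposure must scale the whole image so that the extremely bright retro-reflected patch stays below the sensor's maximum signal, leaving the face with only a handful of levels:

```python
# Toy model (illustrative numbers only): a retroreflective patch returns far
# more flash light than the face, so an exposure chosen to keep the patch
# below the sensor's maximum leaves the face underexposed.

SENSOR_MAX = 255  # 8-bit output

def expose(scene, gain):
    """Apply an exposure gain and clip each region to the sensor's maximum."""
    return {k: min(SENSOR_MAX, v * gain) for k, v in scene.items()}

scene = {"face": 40, "retro_patch": 4000}   # relative flash light returned
gain = SENSOR_MAX / scene["retro_patch"]    # auto-exposure protects highlights
image = expose(scene, gain)

print(image)  # the face records only a few levels out of 255
```

A sensor with a wider dynamic range (more bits, lower noise floor) can tolerate a larger ratio between patch and face, which is exactly the countermeasure noted below.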

Retroreflection explained in 3M's product brochure


However, the drawback of this approach is that without a flash there is no distortion in the image, unless another light source happens to sit near the camera. Secondly, a camera with a higher dynamic range can minimize the darkening of the subject.

Betabrand's flashback clothing


Experiments & Our Approach

We decided to put all these different techniques to the test at AVG-Labs.

First, we set up a simple experiment with infrared lights and tested the sensor response of different cellphone cameras, using LEDs from 765nm to 940nm. We found that most iPhones had a very strong infrared filter, so any light beyond 830nm was not visible in the final image, while almost all the Android devices we tested were sensitive to wavelengths up to 940nm, though with varying intensities. The results are shown in the figures below (compare this to Table 1 with the CMOS sensors):

 

Note that the 765nm LED at the bottom is part of the visible red spectrum and could be seen with the naked eye. The LEDs at 830nm and beyond were not visible to the naked eye when turned on.
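These findings can be summarised in a small helper; the cut-off values here are rough approximations from our own tests, not vendor specifications, and the function name is ours:

```python
# Sketch based on our measurements: whether an LED of a given wavelength would
# register on a sensor with a given IR cut-off. Cut-offs are rough
# approximations from our tests, not vendor specifications.

VISIBLE_MAX_NM = 780  # upper edge of the visible spectrum

def led_visible_to_sensor(wavelength_nm, ir_cutoff_nm):
    """True if the sensor would record light at this wavelength."""
    return wavelength_nm <= ir_cutoff_nm

leds = [765, 830, 880, 940]  # the LED wavelengths we tested
for cutoff, label in [(830, "typical iPhone tested"),
                      (940, "typical Android tested")]:
    seen = [nm for nm in leds if led_visible_to_sensor(nm, cutoff)]
    print(f"{label}: records {seen}")
```

The practical consequence is that an IR-based wearable must pick LED wavelengths below the strictest filter it hopes to defeat, at the cost of becoming faintly visible to the human eye near 765nm.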

We then decided to test our findings on a prototype of AVG’s Invisibility Glasses. The results look quite promising when we photographed our test subjects with cellphone and laptop cameras that were sensitive to IR. We used the online face-detection services of Facebook, Face++ and the BetaFace API. As the results below show, face detection on Facebook did fail when the lights on the glasses were turned on:


We know that a camera flash can reduce the relative intensity of the infrared lighting. That is why our glasses are also covered with a retro-reflective material. This is the first time anyone has attempted to combine the two techniques in a single wearable. Secondly, our approach results in glasses that are more fashionable and acceptable to wear in public.

The results of using a flashlight on our test subject can be seen in the figure below.

Invisibility Glasses darken the subject when a flashlight is used

 


Another approach that we are working on employs projecting infra-red patterns, or "makeup", onto the face of the subject. This is a completely new approach, and more on it will be published in a later article.

Conclusion

Our experiments in the previous section show that there is potential to exploit the limitations of camera hardware to our advantage and protect our identity. The variations in dynamic range and IR filters across different cameras offer us a way to do this. A universal technique that works for all cameras, under all lighting conditions, is still open to exploration. In our future attempts, we hope to make progress towards exactly that goal.

 

PS: We presented the glasses yesterday at the PEPCOM event (part of Mobile World Congress 2015) and everybody was excited to play with them and try them on. Here is what the media had to say about the Invisibility Glasses:

Disclaimer: All opinions are those of the author and not those of AVG or Innovation Labs.
