Google Project Soli. You may not have heard of it, so let me recap it in a few words. Google is working on many interesting projects, and Project Soli is one of them. It is an extremely small radar device that will most likely be possible to integrate into mobile devices and robots.

As Google explains it: ‘Soli sensor technology works by emitting electromagnetic waves in a broad beam.

Objects within the beam scatter this energy, reflecting some portion back towards the radar antenna. Properties of the reflected signal, such as energy, time delay, and frequency shift capture rich information about the object’s characteristics and dynamics, including size, shape, orientation, material, distance, and velocity.

Soli tracks and recognizes dynamic gestures expressed by fine motions of the fingers and hand. In order to accomplish this with a single chip sensor, we developed a novel radar sensing paradigm with tailored hardware, software, and algorithms. Unlike traditional radar sensors, Soli does not require large bandwidth and high spatial resolution; in fact, Soli’s spatial resolution is coarser than the scale of most fine finger gestures. Instead, our fundamental sensing principles rely on motion resolution by extracting subtle changes in the received signal over time. By processing these temporal signal variations, Soli can distinguish complex finger movements and deforming hand shapes within its field.’
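
The physics behind this is easy to make concrete with a toy example. The sketch below is not Soli’s actual processing chain; it simulates a single moving reflector seen by a generic FMCW-style radar in Python, with every parameter value assumed, and shows how the time delay of the reflection maps to range and the chirp-to-chirp frequency shift maps to velocity.

```python
# A toy FMCW-style radar frame (not Soli's real pipeline): one point target,
# range recovered from the beat frequency, velocity from the phase shift
# across chirps. All parameter values are illustrative assumptions.
import numpy as np

c = 3e8                  # speed of light (m/s)
fc = 60e9                # carrier frequency (Hz); Soli works in the 60 GHz band
wavelength = c / fc
B = 2e9                  # chirp bandwidth (Hz), assumed
T = 100e-6               # chirp duration (s), assumed
S = B / T                # chirp slope (Hz/s)
n_samples, n_chirps = 256, 64
fs = n_samples / T       # sample rate implied by samples-per-chirp

true_range = 0.15        # a fingertip-like reflector 15 cm away
true_velocity = 0.5      # radial velocity in m/s

# Dechirped ("beat") signal: the time delay shows up as a constant beat
# frequency within a chirp, motion as a phase progression from chirp to chirp.
f_beat = 2 * S * true_range / c
f_doppler = 2 * true_velocity / wavelength
t = np.arange(n_samples) / fs
m = np.arange(n_chirps)[:, None]
frame = np.exp(2j * np.pi * (f_beat * t[None, :] + f_doppler * m * T))

# Range FFT along fast time, Doppler FFT along slow time (across chirps).
rd_map = np.fft.fftshift(np.fft.fft(np.fft.fft(frame, axis=1), axis=0), axes=0)

# Find the strongest reflection and convert its bin indices to physical units.
dopp_bin, range_bin = np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape)
est_range = (range_bin * fs / n_samples) * c / (2 * S)
est_velocity = ((dopp_bin - n_chirps // 2) / (n_chirps * T)) * wavelength / 2

print(f"estimated range    ~ {est_range:.3f} m")      # ~0.15 m
print(f"estimated velocity ~ {est_velocity:.2f} m/s") # ~0.4 m/s (coarse Doppler bins)
```

Soli’s contribution, per the quote above, is mostly in what happens after this step: tracking how such signatures change over time to resolve fine finger motion, rather than relying on spatial resolution.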

Sachi (the St Andrews Computer Human Interaction research group) is working on a complementary technology that combines the Project Soli sensor with their own recognition software to train on and classify different materials and objects in real time, with very high accuracy. They call it RadarCat.

‘RadarCat (Radar Categorization for Input & Interaction) is a small, versatile radar-based system for material and object classification which enables new forms of everyday proximate interaction with digital devices. In this work we demonstrate that we can train and classify different types of objects which we can then recognize in real time. Our studies include everyday objects and materials, transparent materials and different body parts,’ writes Sachi on its page.
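
To make the ‘train and classify’ part concrete: the published RadarCat work feeds features derived from the radar signal into a machine-learning classifier. The sketch below only illustrates that general recipe, with synthetic data, an assumed feature layout, and a scikit-learn random forest standing in for whatever features and model RadarCat actually uses.

```python
# A minimal sketch of the RadarCat idea: treat a frame of radar samples as a
# feature vector and train a classifier to label the material on the sensor.
# Feature layout, material list, and the random-forest model are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
materials = ["air", "glass", "wood", "steel", "skin"]
n_features = 64            # e.g. flattened range bins from one radar frame
n_frames_per_class = 200

# Fake training data: each material gets a characteristic "signature" plus
# noise, standing in for real frames captured with objects resting on the chip.
signatures = rng.normal(size=(len(materials), n_features))
X = np.vstack([sig + 0.3 * rng.normal(size=(n_frames_per_class, n_features))
               for sig in signatures])
y = np.repeat(np.arange(len(materials)), n_frames_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# "Real time" then simply means calling predict() on every incoming frame:
new_frame = signatures[2] + 0.3 * rng.normal(size=n_features)
print("predicted material:", materials[clf.predict(new_frame.reshape(1, -1))[0]])
```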

And why is this interesting? Fast and accurate real-life 3D object recognition is definitely an area where we need improvements if we ever want robots to help humans efficiently in their daily tasks. Identifying the objects people use, their facial expressions, or their body language is a hard and painstaking process, and technology like this puts human-to-machine interaction in a new and promising perspective. It will give machines deeper insight into social mechanisms and, through that, even into group dynamics.

Lying, cheating, misuse: any negative action is surrounded by accompanying human behavioral patterns that computers cannot recognize. They only understand what is told to them (or through them) in their own language. A computer able to recognize these patterns, complete with object identification, would be tomorrow’s perfect security monitoring device. Even the comprehension level of a dog would be a good start for spotting bad intentions before they develop into something worse.

There are two other questions that have come to my mind.

  1. Do computers really want to understand us or recognize our stuff? They can’t really opt out for now, but after a while the question of whether we should learn their language (and the special logic that comes with it) or they should learn ours will develop into a painful dilemma for humanity.
  2. Why wouldn’t we develop systems that would not just “feel our touch” but also give a humanly comprehensible physical response to it? For example, when we touch a file on our screen that has malware in it, the surface of the file could become rough. Or new updates could simply be represented by warmer symbols; system files could be cold and encrypted folders a bit spiky (a toy version of this mapping is sketched after the list).
    Biological creatures have been developing the sense of touch AND the feedback that comes with it for millions of years. Why wouldn’t we just build on this extraordinary source of information in our interaction with computers?
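
Just as a thought experiment, the mapping from point 2 can be written down directly. Everything in this sketch is hypothetical: the file attributes, the texture vocabulary, and the assumption that some tactile surface could render the returned parameters.

```python
# A hypothetical mapping from file state to a haptic "texture" that an
# imagined tactile display could render. Attribute names and texture
# parameters are made up for illustration only.
from dataclasses import dataclass

@dataclass
class FileState:
    has_malware: bool = False
    is_update: bool = False
    is_system: bool = False
    is_encrypted: bool = False

def haptic_texture(state: FileState) -> dict:
    """Translate file state into parameters for a tactile surface."""
    if state.has_malware:
        return {"texture": "rough", "roughness": 0.9, "temperature": "neutral"}
    if state.is_update:
        return {"texture": "smooth", "roughness": 0.1, "temperature": "warm"}
    if state.is_system:
        return {"texture": "smooth", "roughness": 0.1, "temperature": "cold"}
    if state.is_encrypted:
        return {"texture": "spiky", "roughness": 0.6, "temperature": "neutral"}
    return {"texture": "smooth", "roughness": 0.2, "temperature": "neutral"}

print(haptic_texture(FileState(has_malware=True)))
```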

Anyway, equipping computers with knowledge of humans is a double-edged sword. In the short term it will be entirely beneficial, but in the long run it will bring major challenges. However good it may sound to a cybersecurity futurist to have a computer that completely understands and interprets human behavior or the objects we are using, certain countermeasures (such as programming empathy into it) should be considered before we put this computer in charge of anything that concerns our safety.