Using off-the-shelf smart devices, the new system supports an unobtrusive, flexible and lightweight communication channel between screens and cameras.
The system, called HiLight, will enable new context-aware applications for smart devices, researchers said.
Such applications include smart glasses communicating with screens to realise augmented reality or acquire personalised information without affecting the content that users are currently viewing.
The system also has far-reaching implications for new security and graphics applications.
The idea is simple: information is encoded into a visual frame shown on a screen, and any camera-equipped device can turn to the screen and immediately fetch the information, researchers said.
Operating on the visible light spectrum band, screen-camera communication is free of electromagnetic interference, offering a promising alternative for acquiring short-range information.
But existing approaches commonly require displaying visible coded images, which interfere with the content the screen is playing and create an unpleasant viewing experience.
The team at Dartmouth College studied how to enable screens and cameras to communicate without the need to show any coded images such as QR codes, the machine-readable barcodes typically scanned with mobile phones.
In the HiLight system, screens display content as they normally do and the content can change as users interact with the screens.
At the same time, screens transmit dynamic data behind the scenes, unobtrusively and in real time, to any devices equipped with cameras.
HiLight supports communication atop any screen content, such as an image, movie, video clip, game, web page or any other application window, so that camera-equipped devices can fetch the data by turning their cameras to the screen.
HiLight leverages the alpha channel, a well-known concept in computer graphics, to encode bits into changes in pixel translucency.
HiLight overcomes the key bottleneck of existing designs by removing the need to directly modify pixel colour values: it decouples the communication layer from the screen content layer.
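As a rough illustration of the alpha-channel idea, the sketch below is not the authors' implementation; the grid size, the alpha levels and the comparison against a known reference frame are simplifying assumptions made for the example. It blends a nearly transparent black overlay onto whatever the screen is showing, so the content's colour values are never edited directly, and it recovers the bits from small block-level brightness differences.

```python
# Illustrative sketch of alpha-channel encoding (hypothetical parameters,
# not HiLight's actual design). Assumption: a tiny change in the
# translucency of a black overlay (alpha stepping between 0.00 and 0.02)
# shifts a block's rendered brightness enough for a camera to detect
# while remaining imperceptible to viewers.
import numpy as np

ALPHA_ZERO = 0.00   # overlay translucency for bit 0 (fully transparent)
ALPHA_ONE  = 0.02   # overlay translucency for bit 1 (slightly dimmed)
GRID = (4, 4)       # screen divided into 4x4 blocks, one bit per block

def encode_frame(content, bits):
    """Blend a black overlay onto the content frame.

    content: HxWx3 float array in [0, 1] -- whatever the screen is showing.
    bits:    GRID-shaped array of 0/1 -- the data carried by this frame.
    Returns the frame actually sent to the display.
    """
    h, w, _ = content.shape
    bh, bw = h // GRID[0], w // GRID[1]
    out = content.copy()
    for i in range(GRID[0]):
        for j in range(GRID[1]):
            alpha = ALPHA_ONE if bits[i, j] else ALPHA_ZERO
            # Alpha-blend a black layer over the block: only the overlay's
            # translucency changes, never the underlying pixel colours.
            out[i*bh:(i+1)*bh, j*bw:(j+1)*bw] *= (1.0 - alpha)
    return out

def decode_frame(captured, reference):
    """Recover bits by comparing block brightness with a reference frame.

    Simplification: a real receiver has no reference frame and instead
    tracks brightness changes across consecutive captured frames.
    """
    h, w, _ = captured.shape
    bh, bw = h // GRID[0], w // GRID[1]
    bits = np.zeros(GRID, dtype=int)
    for i in range(GRID[0]):
        for j in range(GRID[1]):
            cap = captured[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
            ref = reference[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
            # A block slightly dimmer than the reference carries a 1.
            bits[i, j] = 1 if cap < ref * (1.0 - ALPHA_ONE / 2) else 0
    return bits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    content = rng.uniform(0.2, 1.0, size=(240, 320, 3))  # stand-in screen content
    data = rng.integers(0, 2, size=GRID)
    shown = encode_frame(content, data)
    recovered = decode_frame(shown, content)
    print("sent:     ", data.flatten())
    print("recovered:", recovered.flatten())
```

In this toy version each frame carries 16 bits; the translucency shift is deliberately kept small so the overlay stays invisible to viewers while the camera can still pick up the per-block brightness difference.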
“Our work provides an additional way for devices to communicate with one another without sacrificing their original functionality,” said senior author Xia Zhou, an assistant professor of computer science and co-director of the DartNets (Dartmouth Networking and Ubiquitous Systems) Lab.
“It works on off-the-shelf smart devices. Existing screen-camera work either requires showing coded images obtrusively or cannot support arbitrary screen content that can be generated on the fly. Our work advances the state-of-the-art by pushing screen-camera communication to the maximal flexibility,” said Zhou.
PTI