Hand microgestures may be just about the perfect input method for casual interactions with portable electronics and augmented reality glasses. But what platform is going to dominate if this style of user interface goes mainstream? The Meta Neural Band looks very promising, but perhaps a more compact smart ring will ultimately prove to be more acceptable to consumers. Whichever direction the technology goes, one thing is certain: intuitive, always-available hand microgestures would be a very welcome alternative to touchscreens and voice recognition.
If a group of engineers at Cornell University is right, hand gesture recognition systems may face a rocky road to consumer acceptance. The reason is that existing devices generally require the user's hands to be empty. But when you are on the go, how often do you find yourself with a cup, phone, or bag in hand? Under these conditions, traditional solutions cannot reliably detect gestures.
An overview of the approach (📷: C. Lee et al.)
That could change in the future, however. The team has developed a device they call Grab-n-Go that can recognize a wide range of hand microgestures even when the hands are otherwise occupied. This compact wristband leverages active acoustic sensing to get a clear picture of the wearer's hand position, even when an object in the hand, such as a cup, stands in the way.
Rather than relying on cameras or EMG sensors, the system uses two tiny speakers and two microphones embedded in a wristband. The speakers emit inaudible sound waves (between 18 and 24.5 kilohertz), which bounce off the user’s hand and the object being held. The microphones then pick up the reflected signals. By analyzing these acoustic reflections, the system can infer the shape of the hand, the grasping pose, and the object’s geometry.
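To make the idea concrete, here is a minimal sketch of this kind of active acoustic sensing in Python. It assumes a repeating linear chirp and matched-filter (cross-correlation) processing, both common choices for echo-based sensing; the team's actual signal design and frame timing are assumptions here, not details from the article.

```python
import numpy as np
from scipy.signal import chirp, correlate

FS = 50_000              # sample rate in Hz (assumed; must exceed 2 x 24.5 kHz)
F0, F1 = 18_000, 21_000  # one speaker's band, per the article
FRAME = 0.012            # 12 ms chirp frame (assumed)

def tx_chirp():
    """Inaudible linear chirp sweeping one speaker's band."""
    t = np.arange(int(FS * FRAME)) / FS
    return chirp(t, f0=F0, f1=F1, t1=FRAME, method="linear")

def echo_profile(recorded, reference):
    """Cross-correlate the mic input with the transmitted chirp.
    Peaks correspond to reflections at different path lengths
    (fingers, palm, held object)."""
    return np.abs(correlate(recorded, reference, mode="valid"))

# Toy usage: simulate a single echo delayed by ~40 samples.
ref = tx_chirp()
rx = np.zeros(len(ref) + 200)
rx[40:40 + len(ref)] += 0.5 * ref            # reflection off the hand
profile = echo_profile(rx, ref)
print("strongest echo at sample", profile.argmax())  # ~40
```

In a real pipeline, these echo profiles would be computed frame by frame, so changes in finger position show up as shifting peaks over time.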
These reflections are complex, influenced by finger position, object material, and hand movement. But through the use of a deep learning framework, Grab-n-Go can sort through the signal patterns to identify what the wearer is doing. The system recognizes 30 distinct microgestures, divided into six gestures for each of five grasping poses — cylindrical, spherical, palmar, tip, and hook — drawn from Schlesinger’s classic grasp taxonomy.
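The article does not describe the network architecture, but a small convolutional classifier over stacked echo profiles gives a feel for the shape of the problem. Everything below (the CNN layout, input dimensions, and the `GestureNet` name) is a hypothetical sketch, not the team's model.

```python
import torch
import torch.nn as nn

N_GESTURES, N_POSES = 6, 5        # per the article: 6 x 5 = 30 classes
N_CLASSES = N_GESTURES * N_POSES
N_PATHS = 4                       # speaker-to-microphone travel paths

class GestureNet(nn.Module):
    """Toy CNN over stacked echo profiles: one channel per acoustic
    path, time on one axis, echo distance on the other. The real
    Grab-n-Go model is likely deeper and more carefully tuned."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_PATHS, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, N_CLASSES)

    def forward(self, x):          # x: (batch, 4, time, distance)
        return self.head(self.features(x).flatten(1))

# Toy usage: a batch of 8 windows, 64 frames x 128 echo bins each.
logits = GestureNet()(torch.randn(8, N_PATHS, 64, 128))
print(logits.shape)                # torch.Size([8, 30])
```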
An early hardware prototype (📷: C. Lee et al.)
The hardware itself is very compact. Built into a flexible silicone wristband, the device houses the speaker-microphone pairs on small, custom printed circuit boards. Each pair sits in a 3D-printed case that can slide along the band to suit different wrist sizes. A microcontroller drives the system, powered by a small LiPo battery. An onboard amplifier boosts the acoustic signal, while the data is either stored on a microSD card or transmitted wirelessly over Bluetooth Low Energy to a smartphone for real-time processing.
The two speakers each operate in slightly different frequency ranges (one at 18–21 kHz and the other at 21.5–24.5 kHz), allowing the microphones to distinguish between their echoes using band-pass filters. By combining these signals along four unique travel paths between speakers and microphones, the wristband builds a rich acoustic map of both the hand and the object it’s holding.
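Separating the two speakers' echoes at each microphone is a straightforward filtering job. The sketch below uses the band edges quoted in the article; the Butterworth design, filter order, and sample rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 50_000  # assumed sample rate (must exceed 2 x 24.5 kHz)

def bandpass(low_hz, high_hz, fs=FS, order=6):
    """Butterworth band-pass as second-order sections for stability."""
    return butter(order, [low_hz, high_hz], btype="bandpass",
                  fs=fs, output="sos")

# One filter per speaker, using the bands from the article.
SOS_SPK1 = bandpass(18_000, 21_000)    # speaker 1: 18-21 kHz
SOS_SPK2 = bandpass(21_500, 24_500)    # speaker 2: 21.5-24.5 kHz

def separate_echoes(mic_signal):
    """Split one microphone's recording into the two speakers'
    contributions. With 2 speakers and 2 mics, this yields the four
    speaker-to-mic travel paths the article describes."""
    return (sosfiltfilt(SOS_SPK1, mic_signal),
            sosfiltfilt(SOS_SPK2, mic_signal))

# Toy usage on white noise standing in for a real recording.
mic = np.random.randn(FS // 10)        # 100 ms of audio
from_spk1, from_spk2 = separate_echoes(mic)
```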
In testing, the system performed quite well. Across 10 participants and 25 everyday objects, Grab-n-Go achieved an average recognition accuracy of 92%. In a follow-up study with 10 deformable objects, such as soft containers and flexible materials, the system maintained nearly the same accuracy, demonstrating its robustness.
Grab-n-Go solves a big problem in hand microgesture recognition, but it remains unclear what the future holds for these interfaces. The field is still wide open for innovation.