You.0

When guests enter Mizuho Bank's central branch in Tokyo, they see something unexpected in the lobby.

A 4-foot-tall robot named Pepper greets patrons with a pleasant voice and a touch screen on her chest. Pepper also has a secret trick: She can understand users’ emotions, according to her manufacturer, French firm Aldebaran. She scans users’ eyes and facial movements, cross-compares them with voice-recognition technology, and applies an algorithm to determine their mood. Pepper then adjusts her behavior to fit whether a person is happy, sad, fascinated by the robot, annoyed, in a hurry, and so on.
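
Conceptually, that kind of mood detection is a fusion problem: combine confidence scores from a face analyzer and a voice analyzer, then pick the most likely emotional state. A minimal sketch of the idea (hypothetical function and label names, not Aldebaran's actual software) might look like this:

```python
# Hypothetical sketch of multimodal mood fusion, not Aldebaran's actual code.
from collections import Counter

def fuse_mood(face_scores: dict, voice_scores: dict, face_weight: float = 0.6) -> str:
    """Combine per-mood confidence scores from face and voice analysis.

    face_scores / voice_scores map mood labels (e.g. "happy", "annoyed",
    "in_a_hurry") to confidence values between 0 and 1.
    """
    combined = Counter()
    for mood, score in face_scores.items():
        combined[mood] += face_weight * score
    for mood, score in voice_scores.items():
        combined[mood] += (1 - face_weight) * score
    return combined.most_common(1)[0][0]

# Example: eye/face analysis leans "happy", voice analysis leans "in_a_hurry".
mood = fuse_mood({"happy": 0.7, "annoyed": 0.1},
                 {"happy": 0.4, "in_a_hurry": 0.5})
print(mood)  # -> "happy"; the robot would then pick a cheerful greeting
```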

As we become increasingly demanding of our gadgets, and as more of them fight for our dollars and attention, the ones that create emotional connections with us are the most likely to become indelible parts of our lives. As a result, all sorts of interfaces are becoming more personal: keyboards, mice, touch screens, voice recognition, and even cute humanoid robots. Pepper is at the cutting edge of human-machine interfaces (HMIs), the fancy label for any device that allows humans to interact with machines. “I think that people want more intuitive user interfaces, and technology has changed to allow us to interact differently from how we have in the past,” says Greg McNeil, vice president and general manager of the Innovation Lab at global electronics design, engineering, and manufacturing company Flex. “We want more and more capability and a more natural experience. Put a computer mouse in front of a 1-year-old and they won’t know what to do with it other than put it in their mouth. Put a mobile phone in front of a 1-year-old with a screen full of icons and they’ll learn quickly to tap open the app that shows the pretty picture of the elephant.”

Sometimes even a touch screen is too complicated, though. At the Flex Innovation Lab, for example, McNeil and his team realized that simple lights can act as an HMI in wearable devices, the broad family of technological innovations that includes everything from fitness bands to sensor-enabled clothing. “Light itself can be a powerful HMI,” McNeil says. The lights on some activity trackers, for example, offer at-a-glance encouragement that the wearer is getting close to her personal goal, all without draining precious battery power. The best HMIs not only convey information but also do so through the most efficient means. That’s one reason some wearables use such a rudimentary interface instead of, say, a touch screen. “Lighting five LEDs in sequence takes a lot less power than lighting a display up,” McNeil says.
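
The logic behind a five-LED progress bar is simple enough to fit in a few lines, which is part of why it is so power-efficient. Here is a minimal sketch (illustrative numbers only, not Flex's actual firmware):

```python
# Hypothetical sketch: map goal progress onto a five-LED bar, the kind of
# minimal, low-power HMI McNeil describes. Values are illustrative.
NUM_LEDS = 5

def leds_to_light(steps_today: int, daily_goal: int) -> int:
    """Return how many of the five LEDs should be lit."""
    if daily_goal <= 0:
        return 0
    progress = min(steps_today / daily_goal, 1.0)
    return round(progress * NUM_LEDS)

def render(lit: int) -> str:
    # On real hardware this would toggle GPIO pins; here we just draw it.
    return "".join("●" if i < lit else "○" for i in range(NUM_LEDS))

print(render(leds_to_light(7600, 10000)))  # ●●●●○  -> "almost there"
```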

HMIs on the body will take even more surprising forms over the next few years. Designer Gradinar Razvan predicts that the aesthetic of smartphones, centered around blinking lights, haptic alerts, and real-time pings, will migrate into articles of clothing. Writing for the industry site UsabilityGeek, he foresees innovations such as watches that guide users through shopping malls and bracelets that project a graphical display onto the nearest secondary device, such as a tablet. Someday it may even be possible to watch streaming video with your friends by projecting it from your watch onto the wall. All of these forms of interaction are designed to fit the ever-changing and increasingly bespoke (and often mobile) needs of users. The more seamlessly technology fits into everyday forms, the more personal the interface must become.

While HMIs in wearables are just beginning to migrate into the mainstream, we can learn a lot about how they might revolutionize consumer and enterprise markets by studying the adoption of an HMI that has penetrated public consciousness, thanks mostly to smartphones and tablets: touch screens.

People want more intuitive user interfaces, and technology has changed to allow us to interact differently from how we have in the past, McNeil says.

Once upon a time, touch screens were a novelty, one we were reluctant to embrace in everyday life. We tethered a mouse to our Macs rather than use built-in touch pads, for example. In the 2008 New York Times article “Dreaming of an iPhone Keyboard as Good as a BlackBerry’s,” Ralph de la Vega, then chief executive at AT&T Mobility, predicted an ideal future where phone makers would be sophisticated enough to offer users both virtual and physical keyboards. Asked about the iPhone keyboard at the time, when AT&T was Apple’s exclusive cellular service provider, de la Vega said, “I have mixed emotions about it. I would prefer they have a better way of typing on the keyboard.”

Today even the automotive world has embraced touch screens as the preferred mediator between human and high-speed machine. Elektrobit, a software vendor that works with the auto industry, noted in a recent white paper that because cars can only offer a limited number of buttons on the dashboard, manufacturers have increasingly turned to touch screens in order to offer a more efficient driving experience. There’s a generation gap; younger drivers prefer touch screens while many older drivers are more comfortable with conventional buttons. Nonetheless, touch screens are becoming the favored method to access a wide range of navigation and infotainment functions inside the car.

HMIs are ever changing, though. According to Elektrobit, the company and other vendors are increasingly looking at voice-recognition technology managed by cloud-based services, which can devote massive amounts of server power to deciphering natural human language in the noisy environment of the car. Even though efforts in this sphere will take a long time to reach fruition, the end goal is to have different systems, such as voice recognition and touch screens, working side by side, so drivers can use the options that best suit their personal needs. (AT&T’s de la Vega was onto something.)

Touch screens today are being used for unexpected purposes. Illinois-based Maverick Technologies, which specializes in industrial automation engineering consulting, develops touch-screen systems that allow construction cranes to be operated remotely from tablets. It is one of a number of companies that sees the utility of touch screens in settings like these: the screens have begun to replace traditional joystick-style controllers for crane operators because they provide better real-time data and seem to function better overall.

The most cutting-edge HMIs seemingly borrow their means of personalized user interfaces from science fiction. “There’s iris recognition,” Flex’s McNeil says. “When you walk up to a particular display or device, or you get in your car, it knows who you are. It doesn’t matter which key fob you have with you. It sets your seat in the right spot. It changes all the mirrors. It sets your radio stations the right way. All those kinds of things, all by scanning your iris.”

Iris recognition might be an innovation right out of Mission: Impossible or Minority Report, but eye tracking is already a very real part of the way we interact with technology. Many smartphones are equipped with eye-tracking technology that automatically scrolls a page when we finish reading from top to bottom, or pauses video playback when we look away. There are also newer, even more personalized use cases. Because emotional states can, in some contexts, be inferred from eye tracking, the technology is of increasing interest to marketers. One company, Tobii, has even launched an eye-tracking reader for the consumer market with the goal of integrating it into video games; Tobii’s primary use for eye tracking, though, is to allow paraplegic and quadriplegic users to operate computers.
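
The "pause when you look away" behavior, for example, comes down to tracking how long the gaze has been off the screen. A simplified sketch (assuming gaze samples arrive as screen coordinates; this is not any particular phone vendor's implementation):

```python
# Hypothetical sketch of pausing video when the viewer looks away, based on
# gaze samples reported as (x, y) screen coordinates, or None when no gaze
# is detected.
import time

class GazePauser:
    """Pause playback when the viewer's gaze leaves the screen for too long."""

    def __init__(self, width: int = 1080, height: int = 1920, grace: float = 1.0):
        self.width, self.height, self.grace = width, height, grace
        self.last_on_screen = time.monotonic()

    def update(self, gaze) -> bool:
        """Feed one gaze sample; return True if the video should be paused."""
        now = time.monotonic()
        on_screen = (gaze is not None
                     and 0 <= gaze[0] < self.width
                     and 0 <= gaze[1] < self.height)
        if on_screen:
            self.last_on_screen = now
            return False
        # Tolerate brief glances away before pausing.
        return now - self.last_on_screen > self.grace
```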

No matter their form, whether it be tracking a user’s eyes, pressing the glass on a touch screen, or talking to a robot, human-machine interfaces depend on a loop of input and output. Users need ways to give machines directions and feed them data, to request actions, or to modify a machine’s current course of action. Today, the most common HMI inputs are keyboards, toggles, switches, and touch screens that send commands to a machine.

The loop runs the other way, too: output lets the machine relay information about the state of its processing back to the user. Screens frequently display that information, but it can also be read aloud by a mechanized voice or delivered as haptic feedback on a smartwatch or other wearable device.
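
Stripped to its essentials, that input/output loop is the same whether the input arrives from a keyboard, a toggle, or a gaze tracker. A bare-bones sketch (the machine and the input reader are hypothetical stand-ins for real hardware):

```python
# Minimal sketch of the HMI input/output loop described above. The `machine`,
# `read_user_input`, and `show_output` arguments are hypothetical stand-ins
# for whatever hardware and display a real device would use.
def hmi_loop(machine, read_user_input, show_output):
    while True:
        command = read_user_input()      # input: keypress, touch, gaze, voice...
        if command == "quit":
            break
        state = machine.apply(command)   # the machine acts on the command
        show_output(state)               # output: screen, voice, or haptics

# Example usage (all toy objects):
#   hmi_loop(thermostat, read_dial, update_display)
```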

HMIs tie into a larger philosophy known as “product as a platform,” which holds that products are no longer just physical objects but vessels through which information passes to users. A smart car, for instance, is no longer just a car but the tool by which we unlock the garage door, dial a coworker’s number on our smartphone, and turn on the television for the kids. According to Capgemini Consulting, this thinking leads to innovations such as programmable LED lightbulbs that can be dimmed or configured from a smartphone, timed to coordinate with your lifestyle, or used to aggregate usage trends for utility companies. Thinking of the product as a platform opens up many new opportunities.
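
Viewed as a platform, even a lightbulb becomes a small API: something a phone app can dim, schedule, and query for usage data. A hypothetical sketch of that framing (illustrative class and method names, not any specific vendor's product):

```python
# Hypothetical sketch of a "product as a platform" lightbulb: the physical
# object exposes an interface that apps, schedules, and utilities can build on.
from datetime import time

class SmartBulb:
    def __init__(self, name: str):
        self.name = name
        self.brightness = 0      # 0-100 percent
        self.schedule = {}       # label -> (time of day, brightness level)
        self.usage_hours = 0.0   # running total a utility might aggregate

    def dim(self, level: int) -> None:
        """Set brightness from a phone app (0 = off, 100 = full)."""
        self.brightness = max(0, min(100, level))

    def add_schedule(self, label: str, at: time, level: int) -> None:
        """Coordinate the bulb with the owner's daily routine."""
        self.schedule[label] = (at, level)

    def usage_report(self) -> dict:
        """The kind of aggregate data a utility company might collect."""
        return {"bulb": self.name, "hours": self.usage_hours}

bulb = SmartBulb("kitchen")
bulb.dim(40)
bulb.add_schedule("wake", time(6, 30), 60)
```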

Sometimes the information fed into a product doesn’t have to be actively fed at all. In the popular Fitbit fitness trackers, for instance, a user’s body motion itself generates data that the tracker uploads to a server to provide metrics on physical activity, sleep, and other health concerns. But several Fitbit models go even further. The high-end Fitbit Surge includes a continuous heart-rate tracker that works through an unusual method: bouncing light off the skin.

So-called optical heart-rate sensors work by shining an almost invisible green light onto the skin and then measuring blood flow. According to The Wall Street Journal’s Joanna Stern, however, the technology is still in its infancy and often puzzles or concerns users with inaccurate results. Even so, optical heart-rate sensors are improving rapidly, with accuracy and efficiency edging closer to that of EKG-style monitors.
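
Under the hood, turning that reflected light into a number is mostly a matter of counting pulses over a known time window. The simplified sketch below (which ignores the motion and ambient-light filtering real devices need) estimates beats per minute from a series of brightness samples:

```python
# Simplified sketch of estimating heart rate from an optical (PPG) signal:
# each pulse of blood changes how much green light the skin reflects, which
# shows up as a peak in the sampled brightness values. Real sensors also
# filter out motion artifacts and ambient light, which this sketch omits.
def estimate_bpm(samples, sample_rate_hz):
    """Count local maxima above the signal's mean and convert to beats/min."""
    if len(samples) < 3 or sample_rate_hz <= 0:
        return 0.0
    mean = sum(samples) / len(samples)
    peaks = sum(
        1
        for i in range(1, len(samples) - 1)
        if samples[i] > mean and samples[i - 1] < samples[i] > samples[i + 1]
    )
    duration_minutes = len(samples) / sample_rate_hz / 60
    return peaks / duration_minutes if duration_minutes else 0.0
```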

Some of the HMIs in wearables are even more exotic. Last year, manufacturer OMSignal debuted new wearable T-shirts with built-in sensors that track a wearer’s motion as they exercise. The products, which made it to market in an early incarnation, transmit information on users’ body motion to phones via Bluetooth. Rather than merely spewing data, they’re analyzing and mapping out future action. “The real potential is when wearables help you predict and look forward,” OMSignal CEO and Founder Stéphane Marceau told The Observer.

While Fitbits have a handful of sensors, OMSignal’s device features a wider array that captures a much fuller picture of what the wearer is doing—how deeply he’s breathing or bending, for example. “Currently, trackers on our wrists are taking biometrics,” says Murad Kurwa, senior vice president of Flex’s Advanced Engineering Group, with whom OMSignal partnered to make its device. “But these are embedded into garments and other convenient apparel. So if someone needs to monitor their heart rate and they have a biosensor that is embedded into their clothing, it’s touching the skin and absorbing some amount of sweat. And that sweat can be a way of measuring the person’s biometrics.”

HMI products that, amazingly, use sweat as an input are already in the early stages of development. In 2011, a collaboration between the University of Oslo and the National Hospital of Norway created a sweat reader that uses small electrical currents to determine when a user is at risk of developing clinically significant hypoglycemia. The product is currently being tested on diabetic patients.
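
The underlying idea can be expressed with very simple logic: hypoglycemia tends to trigger sweating, so a sharp rise in the skin's electrical conductance relative to a baseline can serve as an early warning. The sketch below is a hypothetical illustration of that thresholding idea, not the Oslo team's actual device:

```python
# Hypothetical illustration of a sweat-based alarm: compare current skin
# conductance against a rolling baseline and flag a sharp rise, the kind of
# change sweating from hypoglycemia would produce. Not the actual Oslo device.
from collections import deque

class SweatAlarm:
    def __init__(self, window: int = 30, rise_factor: float = 1.5):
        self.readings = deque(maxlen=window)  # recent conductance samples
        self.rise_factor = rise_factor        # how sharp a rise triggers an alert

    def update(self, conductance: float) -> bool:
        """Feed one sensor reading; return True if the wearer should be alerted."""
        if len(self.readings) == self.readings.maxlen:
            baseline = sum(self.readings) / len(self.readings)
            if conductance > baseline * self.rise_factor:
                return True
        self.readings.append(conductance)
        return False
```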

When it comes to personalizing HMIs, one of the most important design factors isn’t the sensor; it’s the display. Displays matter in any technological consumer product a person will interact with: computers, smartphones, tablets, e-readers, smart televisions, or any of hundreds of other devices. Designers of these screens need to keep in mind navigation, aesthetic appeal, and the reliability and usefulness of output, in addition to unique uses in new contexts. Balancing all of those demands does not require a brand-new interface that forces users to learn a whole new set of behaviors; it requires an evolution of an HMI we’ve already embraced: multi-touch, capacitive touch screens that can sense simultaneous taps in more than one location and, in their most advanced uses, recognize multi-finger gestures. Adopting multi-touch screens lets HMI designers dispense with ungainly menus and submenus and replace them with easily navigable symbols.
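
A two-finger pinch, for instance, reduces to watching the distance between two touch points change from one frame to the next. A bare-bones sketch (assuming touches arrive as coordinate pairs; real gesture frameworks handle far more edge cases):

```python
# Bare-bones sketch of recognizing a two-finger pinch on a multi-touch screen.
# Touch points are assumed to arrive as (x, y) tuples for each frame.
from math import hypot

def pinch_scale(prev_touches, curr_touches):
    """Return a zoom factor (>1 fingers spread apart, <1 pinched together),
    or None if two fingers aren't down in both frames."""
    if len(prev_touches) != 2 or len(curr_touches) != 2:
        return None
    (ax, ay), (bx, by) = prev_touches
    before = hypot(ax - bx, ay - by)
    (ax, ay), (bx, by) = curr_touches
    after = hypot(ax - bx, ay - by)
    return after / before if before else None

print(pinch_scale([(100, 100), (200, 200)], [(80, 80), (220, 220)]))  # ~1.4 -> zoom in
```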

Currently, trackers on our wrists are taking biometrics. But these are now being embedded into garments and other convenient apparel, Kurwa says.

Some of the most advanced multi-touch screens, which are slowly migrating from corporate R&D labs and universities to consumer products, go even further. A new gadget called the Touchjet Wave, debuting in early 2016, is an Android-based dongle that uses infrared signals to detect finger motions, essentially simulating a multi-touch screen on an ordinary television. Other advances in the space will apply multi-touch screens to even more personalized uses. Several manufacturers are working on touch screens that respond to a user’s motions with real-time haptic feedback. These touch screens “touch back,” and Apple, among other companies, has filed patents for the technology.

In a recent story in Wired, David J. Linden, a professor of neuroscience at the Johns Hopkins University School of Medicine, said, “In the future, when you are using your iPhone 12 (or perhaps your Samsung Galaxy S10), it will be possible for your smartphone to deliver synthetic touch with such fidelity that you would use that tactile information to inform your online purchase of a smooth, buttery leather case for that very same device. A few years later, your iPhone 14 could send signals to control dynamic tactile displays embedded in your clothing, opening up a whole new world of marketing (and other) possibilities.”

Haptic feedback has also found its way into the ultimate personal experience: surgery. Researchers at Stanford University’s Collaborative Haptics and Robotics in Medicine (CHARM) Lab are experimenting with its uses in robots that perform surgery, as well as in tele-operated surgical devices. In one experimental project, they equipped styluses with artificial “skin” that would offer feedback to the surgeon as he performed an operation.

Then there’s the furthest cutting edge of HMIs: brain-computer interfaces (BCIs), which allow users to control a computer program using their thoughts. While used mainly for novelty consumer applications and experimental medical projects these days, BCIs show important potential for the future.

The Food and Drug Administration is considering a wireless device that is implanted in a user’s skull and sends commands to a central server. Dubbed BrainGate, the system is a collaboration between researchers at Brown University and a small startup called Blackrock Microsystems. It is designed for the disabled (a test group includes late-stage ALS patients) and could potentially let them turn household devices and appliances on or off with thoughts alone.
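
The final hop of such a system, turning an already decoded intent into an action on a household device, is conceptually simple; the hard part is the decoding itself. A hypothetical sketch of that last step (illustrative names only, not BrainGate's software):

```python
# Hypothetical sketch of the final step of a BCI pipeline: mapping an already
# decoded intent (with a confidence score) onto household-device commands.
# Illustrative only, not BrainGate's actual software.
CONFIDENCE_THRESHOLD = 0.9  # act only on high-confidence decodes

ACTIONS = {
    "lights_on":  ("living_room_lights", "on"),
    "lights_off": ("living_room_lights", "off"),
    "tv_on":      ("television", "on"),
}

def dispatch(intent: str, confidence: float, send_command) -> bool:
    """Forward a decoded intent to the home-automation layer if confident enough."""
    if confidence < CONFIDENCE_THRESHOLD or intent not in ACTIONS:
        return False
    device, state = ACTIONS[intent]
    send_command(device, state)
    return True

# Example: dispatch("lights_on", 0.95, lambda device, state: print(device, "->", state))
```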

Though experimental, BCIs such as BrainGate hint at how interface designers are thinking less about conventional interaction with machines and more about methods of interaction so seamless we don’t even realize we are interacting.

In the meantime, Pepper will still be here, asking in a remarkably appealing tone how she might make your day more pleasant.