Takeaway: The race is on to develop the next frontier of AI: humanlike robots that display high levels of emotional intelligence.

TREND WATCH: What’s Happening? Empathic robots—machines able not only to interact with you, but to acknowledge and respond to your emotions—are making inroads in the business world. These machines have enormous commercial potential not only as customer service agents, but also as caregivers for an aging population.

Our Take: But are we ready for a future of living alongside social androids that may end up squeezing out service-sector workers? And can these robots truly relate to humans on an emotional level? While it may be decades before both of these questions can be answered affirmatively, the future looks bright thanks to Millennials, who already shy away from deep interpersonal interactions with other humans.

Meet Rose, a cheeky concierge for The Cosmopolitan of Las Vegas. She’ll recommend restaurants, arrange for pillows to be delivered to your room, flirt with you, and even play games like Would You Rather. But you can’t thank her for the great service: She’s a robot.

According to a recent piece published in Advertising Age, Rose isn’t alone. Businesses of all kinds are increasingly using similar chatbots to connect with customers and lift sales. These efforts are part of an ongoing quest to develop ever-more humanlike AI, with scientists refining machine-learning algorithms to create robots that recognize emotions, display empathy, and communicate in a socially natural manner.

THE PRESENT: EMOTIONALLY INTELLIGENT MACHINES

An increasing number of AI companies are developing robots that possess emotional intelligence—that is, the ability to detect and react to social signals. (See: “The Age of Artificial Intelligence.”) This is possible in part thanks to hardware improvements: Sensors like Microsoft Kinect allow robots to track a variety of emotional cues, such as facial expressions, words used, tone of voice, and body language. Improved Big Data capabilities and machine learning (including so-called “neural network” software) then enable these robots to analyze massive amounts of human interaction and discern behavioral patterns.
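
To make that pipeline concrete, here is a minimal, purely illustrative sketch of the pattern-recognition step: a small neural-network classifier that maps pre-extracted cues to one of a few emotion labels. The feature names, the tiny training set, and the choice of scikit-learn are all assumptions for illustration, not any vendor’s actual system.

```python
# Illustrative sketch only: classify an emotion label from hypothetical,
# pre-extracted interaction cues (these features and values are invented).
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row: [smile_intensity, brow_raise, voice_pitch, speech_rate]
X_train = np.array([
    [0.9, 0.2, 0.7, 0.6],   # a joyful interaction
    [0.1, 0.8, 0.9, 0.9],   # an angry one
    [0.3, 0.9, 0.8, 0.4],   # a surprised one
    [0.1, 0.1, 0.3, 0.2],   # a sad one
])
y_train = ["joy", "anger", "surprise", "sadness"]

# A tiny feed-forward "neural network" stands in for the far larger models
# real systems train on millions of labeled interactions.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print(model.predict([[0.8, 0.3, 0.6, 0.5]]))  # most likely "joy"
```

Real deployments swap the four hand-picked numbers for thousands of features streamed from cameras and microphones, but the learn-then-predict loop is the same.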

For some robots, it stops there. What the robot learns is hardwired into an algorithm that guides its future behavior. The most cutting-edge incarnations, however, continue to learn and evolve with each new interaction on the job. Pepper, one such robot released in 2015 by the Japanese company SoftBank Robotics, can recognize four emotions: joy, anger, surprise, and sadness. If you’re happy, Pepper will share in your delight by dancing. If you’re down, it’ll cheer you up with your favorite song. Powered by what developers call an “emotion engine,” Pepper will, in a sense, get to know you based on repeated interactions.
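
A toy version of that emotion-to-behavior mapping might look like the sketch below. This is not SoftBank’s actual interface; the behavior names and the “favorite song” profile field are invented to suggest how repeated interactions could personalize a response.

```python
# Hypothetical sketch, not SoftBank's API: map a detected emotion to a
# supportive behavior, using a remembered preference to personalize it.
def respond(emotion: str, user_profile: dict) -> str:
    responses = {
        "joy": "start_dance_routine",
        "sadness": "play_song:" + user_profile.get("favorite_song", "default_tune"),
        "anger": "speak_calming_phrase",
        "surprise": "ask_follow_up_question",
    }
    return responses.get(emotion, "make_small_talk")

profile = {"favorite_song": "Here Comes the Sun"}  # learned over past visits
print(respond("sadness", profile))  # play_song:Here Comes the Sun
```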

Ex Machina: Empathy - chart2

Effective machine-learning models, of course, require enormous amounts of data—enough to encompass a wide range of phenomena and thereby reveal useful patterns. Within the AI world, the competition to create and provide superior data sets has turned into an arms race, with startups like TwentyBN at one end and tech giants like Facebook and Google at the other. In fact, Google just launched a new program for machine-learning startups that will provide access to data sets and simulation resources.

HOW MACHINE LEARNING AND EMPATHIC AI ARE BEING USED TODAY

While the concept of a sentient, ever-evolving robot may sound futuristic (and disquieting), machine learning and empathic AI are flooding into every corner of modern life.

Internet apps. The phone in your pocket is already a hotbed for platforms and apps that improve via machine learning—even if you don’t realize it. If Facebook has ever automatically tagged your photos, you’ve seen machine learning at work. Ditto for e-mail filtering: Services like Gmail are continuously learning what spam e-mail looks like so that they know which incoming messages to send directly to your spam folder.
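
As a rough illustration of how such a filter learns (Gmail’s real system is vastly more elaborate, and the messages below are made up), a bag-of-words classifier can be trained on a handful of labeled e-mails and then asked to judge a new one:

```python
# Minimal spam-filter sketch: learn word patterns from labeled messages,
# then classify an unseen message. Purely illustrative training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click here",        # spam
    "Cheap meds, limited time offer",          # spam
    "Lunch tomorrow at noon?",                 # ham
    "Here are the meeting notes from today",   # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Turn each message into word counts, then fit a Naive Bayes classifier.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["Click here for a free offer"]))  # most likely "spam"
```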

It doesn’t end there. Many banks that offer mobile check deposit services use machine learning to decipher your handwriting. The same goes for CAPTCHA tests on websites, whereby machines incorporate millions of human responses into their own algorithms in order to constantly improve their text-recognition capabilities. (Of course, this self-improvement gives rise to a cat-and-mouse game in which it becomes ever-tougher to distinguish machines from humans.)
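
That “incorporate millions of responses” step can be pictured as online learning: the model folds in new human-labeled examples as they arrive instead of being retrained from scratch. The sketch below is purely illustrative; the two-number “pixel” features and the character classes are invented.

```python
# Illustrative online-learning sketch: update a recognizer incrementally
# as batches of human responses arrive (features and labels are made up).
import numpy as np
from sklearn.linear_model import SGDClassifier

recognizer = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # two characters the model must tell apart

# First batch of labeled human responses
recognizer.partial_fit(np.array([[0.9, 0.1], [0.2, 0.8]]),
                       np.array([0, 1]), classes=classes)

# A later batch updates the same model without starting over
recognizer.partial_fit(np.array([[0.85, 0.2], [0.1, 0.9]]),
                       np.array([0, 1]))

print(recognizer.predict([[0.88, 0.15]]))  # most likely class 0
```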

Ex Machina: Empathy - chart3

Customer service. Many brands, in an attempt to beef up their customer service capabilities, are deploying empathic robots on the front lines. Chatbots like Rose or Amazon’s Alexa, which are built with distinctive personalities, are an upgrade from the sterile automated prompts that customers are used to encountering.

A 2016 survey from Forrester Research found that one-quarter of companies were either piloting chatbots or using them regularly, while another 32% planned to try them in 2017.

Ex Machina: Empathy - chart4

Unlike their more sophisticated counterparts, most chatbots don’t learn; instead, they dispense preprogrammed answers. But they’re becoming more sensitive thanks to startups like Koko, which last year raised $2.5 million to help provide chatbots with an “empathy layer” that will offer more thoughtful responses to users who mention issues such as depression or stress.
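
Very loosely, an “empathy layer” can be pictured as a screening pass over the user’s message before the preprogrammed answer goes out. The keyword list and wording below are invented for illustration and are not drawn from Koko’s product:

```python
# Hypothetical "empathy layer" sketch: soften a canned reply when the
# user's message contains distress-related terms (list is illustrative).
DISTRESS_TERMS = {"depressed", "depression", "stressed", "anxious", "overwhelmed"}

def empathy_layer(user_message: str, canned_reply: str) -> str:
    words = set(user_message.lower().split())
    if words & DISTRESS_TERMS:
        return ("That sounds really hard, and I'm sorry you're dealing with it. "
                + canned_reply +
                " Would you like me to connect you with a person?")
    return canned_reply

print(empathy_layer("I'm so stressed about this bill",
                    "Your current balance is $120."))
```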

Plenty of high-profile names are betting on these digital assistants. Capital One nabbed a Pixar character development expert to help give its new chatbot a distinct, playful personality. Other financial companies like MasterCard and Bank of America have released chatbots of their own. At Facebook’s recent F8 developer conference, a company executive said that Facebook Messenger now features 100,000 bots—up from 33,000 last September. And last year, 1-800-Flowers attracted a slew of new customers (mostly young) with its virtual concierge GWYN, which is powered by IBM’s Watson platform and offers gift recommendations in a friendly, conversational style.

It’s easy to see why chatbots are gaining ground. They cost roughly one-third less to build than apps, are easier to maintain, and free up support staff for more complex interactions. They could even eliminate the need for human workers altogether: A Business Insider analysis of OPM data shows that, if used to their full potential, chatbots could save billions of dollars in human labor costs. (As for all of the workers who could be rendered obsolete by chatbots…we’ll get to that later.)

Ex Machina: Empathy - chart5

Standalone robots in retail locations are showing some early promise as well. After Pepper was tested for a week assisting customers in a tech store in Santa Monica last year, revenue rose by 13% and sales of a featured product grew sixfold. At a clothing store where Pepper was deployed, foot traffic increased by 20%.

Could robotic concierges help reverse the long slide of brick-and-mortar retail? That’s a tall order—but retailers with nothing to lose may try to find out the answer.

Social work. Empathic robots are also gaining ground as companions and caretakers. Some, like Mayfield Robotics’ Kuri, assist with household tasks such as keeping track of family members. Kuri responds to touch, smiles when it sees a face, and changes the color of its glowing “heart” according to its emotional state.

Other robots, like IBM and Rice University’s MERA (a customized version of the Pepper robot), are aimed at the elderly. MERA monitors vital signs and can call for help if it senses distress. With millions of Boomers entering their golden years, these robots could be a valuable source of support for decades to come. The United States may very well follow in the footsteps of Japan, where nursing facilities expect to adopt “carebots” en masse to serve the country’s ballooning elderly population.

Ex Machina: Empathy - chart6

Artificial empathy is even being used in therapy. Ellie, from the University of Southern California Institute for Creative Technologies, interviews military personnel who have recently returned from deployment and looks for signs of PTSD.

In many ways, humanoid robots could be an ideal addition to a business or a family. “Machines are infinitely patient,” roboticist Maja Matarić pointed out in Scientific American. “They have [fewer] biases to begin with, and they have no expectations.” This quality has made them particularly well-suited for psychological support. In one study, patients offered Ellie more information about themselves than they did a human therapist because they felt more comfortable.

REMAINING ROADBLOCKS

Think the whole world is being taken over by feeling machines? Not so fast. Key obstacles remain between where we are today and the widespread adoption of truly empathic, humanlike AI.

Not every business is embracing empathic AI. Just because the technology exists for high-tech chatbots and digital assistants doesn’t mean that every company wants to use it. The fashion retailer Everlane, known for its transparency with customers, recently dropped its chatbot in favor of more personalized e-mail support.

Issues like brand safety (see: “Do Brands Need Safe Spaces?”) could also persuade companies to implement less-than-full-throttle AI. Skeptics can already point to evidence that, in some cases, chatbots do more harm than good. Just look at what happened with Microsoft’s Tay, a supposedly “smart” chatbot the company took offline less than 24 hours after it started mindlessly parroting racist and offensive language from users.

Simple nonlearning algorithms might be bland, but they don’t risk causing offense.

Likewise, many consumers are turned off by robots. As mentioned earlier, one of the most promising uses of emotionally intelligent robotics is in the realm of senior care. But the Boomers now aging into elderhood are fiercely independent, are highly skeptical of Big Brother, and would much prefer the human touch of a family member to the cold embrace of a robot caregiver.

In retail, many shoppers would rather deal with a human staffer than a robot—however humanlike. A recent UPS survey of digital shoppers found that more than half of respondents age 18+ would prefer interacting with a person over a robot when entering a physical store. While machine learning and AI are somewhat expected nowadays when shopping through digital channels, they are still an unwelcome sight to many brick-and-mortar shoppers.

Ex Machina: Empathy - chart7

Millennials are a notable exception. Their preference for live chat agents shows that they don’t need a human voice when it comes to sales support: A 2015 survey from Software Advice found that nearly half of Millennials would rather receive live chat support than phone support for queries about finance (hardly an impersonal subject). Among older consumers, the split is roughly 80-20 in favor of phone support.

Ex Machina: Empathy - chart8

Yes, Millennials are comfortable with digital technology, but that’s just part of the explanation. This is a generation of straight-talkers that views conversation as transactional. For many Millennials, having to intuit the speaker’s emotion distracts from their ultimate goal: getting the information they need. For older consumers, on the other hand, making an emotional connection and establishing trust is just as valuable as having their question answered.

Techno-pessimists are worried. Plenty of labor economists are sounding the alarm about the impact that advanced AI will have on business-to-consumer jobs—particularly retail, which constitutes a major share of the workforce.

In short, humanoid sellers would further squeeze a sector that’s already suffering and shedding employees. With sales tanking, the pressure to reduce costs is high. Of course, friendly robots could replace other types of workers, too. Virtually any routine service that’s offered at the other end of an 800 number could be automated, as could middle-skill, middle-wage jobs that rely on following a set of rules. (See: “The Age of the Intelligent Machine.”)

Techno-pessimists like Martin Ford (who penned 2015’s Rise of the Robots: Technology and the Threat of a Jobless Future) argue that robotics is a zero-sum game, where every job gained by a robot constitutes one lost by a human. While the truth is likely more complicated (techno-optimists believe that most AI systems will augment, not replace, human workers), there is no question that advanced robotics will render at least some human jobs obsolete.

The technology just isn’t there yet. The final—and perhaps most daunting—hurdle that developers must overcome is technological. The emotions that robotic AI can currently process come nowhere near the depth of what humans actually experience. Today’s bots can mirror your emotions, but they cannot intuit your hopes, dreams, or fears. For these reasons, Forrester Senior Analyst Xiaofeng Wang predicted in Advertising Age that it could be at least five years until chatbots are able to consult on more intricate products like life insurance.

Empathy is a lot more than emotional recognition and mimicry. It also includes the ability to imagine and “feel” yourself in another person’s shoes, which would require robots to have a humanlike knowledge of self. AI developers are nowhere near tackling this challenge.

WHERE WE’RE HEADED

While many consumers are still wary of robots invading their everyday lives, Xers and Millennials are gradually acclimating to an existence surrounded by ambient AI. Reflecting on the success of GWYN, 1-800-Flowers CEO Chris McCann told Digiday that, to his surprise, “[Most] customers, especially Millennials, would rather interact with a robot than a human.” As a generation that would rather send a text or e-mail than pick up the phone (see: “Don’t Call Us, We’ll Text You”), Millennials represent hope for advanced robotics.

Some futurists predict that empathic robots could eventually morph from personal assistants into personal stand-ins—not just devices that can help you, but devices that could be you. Imagine a self-simulacrum able to converse with family members after you die. Hossein Rahnama, a visiting scholar at the MIT Media Lab, calls this concept “augmented eternity” and is developing a system that collects digital data produced by living persons (like videos, e-mails, and text messages) in order to replicate their expressive styles. The ethical quandaries posed by such a development would be immense (see this episode of Black Mirror)—and, if early skepticism is any indication, so would the backlash.

Of course, this future is a long way off. Truly human-seeming robots—ones with real self-awareness—remain a speculative vision for all but (perhaps) the Homeland Generation late in the 21st century. Once they arrive, however, the possibilities are endless.