Computer-Patient Relationship

Meet Ellie. She’s a therapist at the forefront of research on post-traumatic stress disorder. Between 11% and 12% of American military veterans who have served since the Gulf War have some form of PTSD. The U.S. Department of Veterans Affairs reports that 7% to 8% of civilians will experience the condition at some point in their lives. With Ellie, these patients have hope for recovery.  

Ellie is funded by the Department of Defense. Sitting on a magenta armchair wearing casual clothing and a delicate gold necklace, she asks patients questions about their family relationships and sleep habits while encouraging them to continue with one of some 200 versions of a reassuring “uh-huh.” She can track and analyze a patient’s every twitch, wince, and smile. Ellie is supportive and perceptive, but she’s not human. Her multiple sensor systems detect subtle changes in facial expressions, body movements, and speech, sending reams of data about real human interaction to her digital brain.  

Developed at the University of Southern California’s Institute for Creative Technologies, Ellie is a “virtual human,” an on-screen avatar that uses advanced artificial intelligence to mimic the ways actual therapists engage with patients. Ellie received funding from the Defense Advanced Research Projects Agency as a way to potentially screen soldiers for depression and PTSD. Although Ellie is still a prototype, she illustrates how cutting-edge AI techniques can transform many aspects of our lives, as well as our connections to computers and the world.  

As one of two scientists who created her, Albert “Skip” Rizzo, a psychologist and research professor at the University of Southern California, said in an interview with NPR, “What computers [like Ellie] offer is the ability to look at massive amounts of data and begin to look at patterns, and that, I think, far outstrips the mere mortal brain.”  

More and more, AI is playing an influential role in everything we do. Netflix and Pandora defined the “curation” phase. Siri defined voice as an interface, and voice searches now account for 20% of Google Android mobile searches. Alexa and other intelligent virtual assistants have ushered in an era of AI with specific, task-focused skills.  

We’re in the midst of what’s been called AI’s “great awakening,” fueled by superfast computers, powerful software, greater connectivity, and the Internet of Things. At the same time, AI advancements such as deep learning and neural networks, computational models loosely inspired by biological brains, are expanding the capacity of computers to be, well, more like us.   

During the 1970s, scientists had to explicitly program computers for each task. Today, computers can teach themselves. They learn about the world through trial and error, as a child does, and build their own knowledge bases. At stake is control “over what could represent an entirely new computational platform: pervasive, ambient artificial intelligence,” technology journalist Gideon Lewis-Kraus wrote in The New York Times Magazine last December.  

Advanced AI is already being applied in several industries. It helps driverless cars navigate city streets, and a machine called LettuceBot distinguishes weeds from crops and automatically thins fields, increasing production for farmers. Meanwhile, doctors in Boston hope to detect cancers and diseases earlier with help from an AI system that parses a database of 10 billion medical images. 

AI can also handle strategic games like chess and Go. In a recent test, a program designed by computer scientists at Carnegie Mellon University took on poker professionals in a three-week tournament of Heads-Up No-Limit Texas Hold’em. Here, the computer not only acquired the rules and logic of the game but also learned to bluff, bet, and adopt a winning strategy when confronted with face-down cards and stone-faced players. Data scientists reckoned that the program analyzed poker “information sets” numbering 10 to the 160th power—a feat of machine learning that earned the computer more than $1.7 million in chips. But mastering games is just one part of the overall machine-learning equation. 

Infinite Investment

Companies are scrambling to invest in AI. Nearly 140 private companies developing such technologies have been acquired since 2011—and 40 were bought in 2016 alone, according to venture capital database CB Insights. Google has adopted an AI-first strategy for its business categories. Apple, Amazon, Facebook, and Salesforce are jumping in, as are other giants such as General Electric and Samsung. AI startups raised $5.02 billion worldwide in 2016, a five-year high, Nikkei Asian Review reported in January. In May 2016, the Chinese government announced plans to invest $15 billion in AI by 2018.  

AI is even changing the drinks we consume. London-based IntelligentX Brewing Co. utilizes machine-learning algorithms to automatically analyze customer feedback on its bottled beers, which, in turn, influences how its human brewers create new products targeted to drinkers’ rapidly changing tastes.  

Health care presents another burgeoning area for AI. Researchers at Stanford University are training an algorithm to identify potential skin cancer, one of the most common types of cancer in humans. The Stanford scientists loaded an algorithm with nearly 130,000 skin-lesion images representing more than 2,000 diseases to test whether the computer could distinguish harmless moles from malignant melanomas and carcinomas. It did so with surprising accuracy, performing as well as a panel of 21 board-certified dermatologists. The team would like to put its system on smartphones in the future. “Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic,” the study’s researchers wrote in Nature this year.  
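
The Stanford team’s actual system is a deep convolutional network, but the underlying idea, learning class boundaries from labeled examples, can be sketched with something far simpler. Below is a toy nearest-centroid classifier; the feature names and every number in it are invented for illustration and bear no relation to the real study’s data.

```python
# Toy sketch: classify lesion feature vectors by nearest class centroid.
# This is NOT the Stanford team's deep network -- just a minimal stand-in
# for the general idea of learning class boundaries from labeled data.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Return the label whose centroid is closest (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical features, e.g. [asymmetry, border irregularity, color variance]
benign = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2], [0.15, 0.15, 0.1]]
malignant = [[0.8, 0.7, 0.9], [0.9, 0.8, 0.7], [0.7, 0.9, 0.8]]

centroids = {"benign": centroid(benign), "malignant": centroid(malignant)}
print(classify([0.85, 0.75, 0.8], centroids))  # → malignant
```

A real diagnostic model learns its features from raw pixels rather than relying on hand-picked measurements, which is what makes deep networks so much more powerful than this sketch.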

AI’s impact is not just limited to interactions between humans and machines. Machines also talk to other machines via the Internet of Things. Cars can communicate with other cars as part of self-driving technology, sharing data about the road infrastructure and traffic conditions around them and easing congestion by maintaining the ideal distance from one another.  
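
That gap-keeping behavior can be sketched as a simple feedback loop, where each car reacts to the broadcast position and speed of the car ahead. This is a toy simulation, not how any production self-driving stack works; the gains, spacing, and update rule are invented for illustration.

```python
# Toy sketch of V2V-style gap keeping: each follower receives the position
# and speed of the car ahead (as if broadcast over a vehicle network) and
# nudges its own speed toward a target spacing. All constants are invented.

K_GAP = 0.2        # speed correction per meter of gap error
K_VEL = 0.5        # damping on closing speed
TARGET_GAP = 20.0  # desired spacing, meters
DT = 1.0           # simulation step, seconds

def step(positions, speeds):
    """Advance all cars one time step; positions[0] is the lead car."""
    new_speeds = [speeds[0]]  # lead car cruises at constant speed
    for i in range(1, len(positions)):
        gap_error = (positions[i - 1] - positions[i]) - TARGET_GAP
        closing = speeds[i - 1] - speeds[i]
        correction = (K_GAP * gap_error + K_VEL * closing) * DT
        new_speeds.append(max(0.0, speeds[i] + correction))
    positions = [p + v * DT for p, v in zip(positions, new_speeds)]
    return positions, new_speeds

positions, speeds = [100.0, 60.0, 10.0], [15.0, 15.0, 15.0]
for _ in range(50):
    positions, speeds = step(positions, speeds)
gaps = [positions[i] - positions[i + 1] for i in range(len(positions) - 1)]
print([round(g, 1) for g in gaps])  # gaps settle near the 20 m target
```

Without the closing-speed damping term, this simple controller oscillates indefinitely; real vehicles use far more sophisticated control and sensing.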

In a similar way, technology is transforming manufacturing and warehouse logistics. Last holiday season, Amazon employed 45,000 robots alongside human workers in 20 fulfillment centers, up from 30,000 in 2015. At the same time, Samsung and Volkswagen signed on to use driverless vehicles made by startup RoboCV in their respective Russian factories. Employing vision sensors and obstacle-avoidance technology, the autopilot systems in these factory vehicles build mathematical models that choose the best route, arriving at their destinations more quickly.  

Companies are also using machine-learning software to produce songs, logos, video games, and even clothing and industrial designs, including proof-of-concept car parts and Danish modern chairs, The Wall Street Journal reported this year. The San Francisco–based personalized clothing e-tailer Stitch Fix, for instance, uses software to design apparel. Its system processes trillions of possible combinations of patterns, cuts, and colors, factoring in customer purchasing behavior and information about the latest fashion trends.  

The Workplace Revolution

Indeed, as machines become smarter and more jobs get automated, many wonder if robots will soon do all the work. What is certain is that AI-enabled systems with machine-learning smarts have the potential to cause major workforce disruptions in traditional industries and revolutionize the nature of work in many sectors. The World Economic Forum forecasts that automation could lead to more than 5 million lost jobs by 2020. 

Equally controversial is the growing reach and decision-making power of AI, especially in the workplace. AI software can now screen thousands of résumés from job applicants and weed out many candidates before a human ever sees them. That may benefit busy HR departments, but it could be problematic because machine-learning algorithms are still created by humans and trained on historical data. That means these algorithms can reflect their creators’ biases, or biases embedded in past hiring decisions, which may lead to mistakes and misinterpretations that wrongfully exclude certain candidates. 
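
To see how that can happen, consider a deliberately crude screening model that scores a résumé by its word overlap with past hires. The résumés below are invented, and real screening software is far more sophisticated, but the failure mode is the same: patterns in historical decisions become the yardstick for new candidates.

```python
# Toy illustration of how a screening model inherits bias from training data.
# The "model" scores a résumé by word overlap with past hires' résumés.
# All résumé text below is invented for illustration.

past_hires = [
    "captain of lacrosse team, finance internship, ivy league",
    "lacrosse club president, finance degree, ivy league honors",
]
vocabulary = set(" ".join(past_hires).replace(",", "").split())

def screen(resume):
    """Fraction of résumé words that also appear in past hires' résumés."""
    words = resume.replace(",", "").split()
    return sum(w in vocabulary for w in words) / len(words)

# Two equally qualified candidates described in different vocabularies:
a = screen("lacrosse captain, finance internship, ivy league degree")
b = screen("night-shift supervisor, community college accounting honors")
print(a > b)  # the candidate who echoes past hires scores higher
```

The second candidate may be just as qualified, but because past hires happened to share a background and vocabulary, the model penalizes anyone who describes themselves differently.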

Moreover, AI systems are already being used to track an employee’s whereabouts and activities at work. Software can monitor all kinds of actions on an employee’s computer—from recording keystrokes to analyzing the words and phrases written in emails—looking for subtle changes, patterns, and comments. This data may reveal that a worker is unproductive, or wishes to leave the company. “The AI systems’ thirst for data can push the boundaries of workers’ privacy,” Wall Street Journal reporter Ted Greenwald pointed out this year, cautioning, “It is incumbent on managers to use them wisely.” 

Gideon Mann, head of data science at global finance, media, and tech company Bloomberg LP, wants to see more safeguards in both the workplace and the regulatory environment to ensure privacy and avoid possible abuses. New AI technologies such as voice and video synthesis are worrying, he says, because they could persuasively mimic someone’s words. “We can imagine a world where the question of what actually happened—the truth, that is—becomes a very slippery concept,” he argues. “Data scientists must recognize the impact of their work and realize that data science has consequences.”  

Sensing Sentiment 

There are limits to AI’s capabilities, however. On creative projects, human designers are often still needed to finish the job. 

At the 2017 Mobile World Congress in Barcelona, IBM’s intelligent supercomputer system, Watson, designed a “thinking sculpture” that evoked the style of Antoni Gaudí. To inspire the artist in Watson, the supercomputer was fed hundreds of images of Gaudí’s creations, along with volumes of related literary works, historical articles, and even song lyrics related to the famed Catalan architect. Watson then identified Gaudí-like motifs, shapes, patterns, and colors, but it was up to the New York design studio SOFTlab to convert the findings into a large structural aluminum sculpture with laser-cut petals and lights. 

For Jonas Nwuke, IBM Watson platform manager, the piece offered a framework for machines and humans to benefit from each other. “We don’t believe we’re marching toward a world where the machine is making decisions and providing directions—it’s just providing a little boost, and in this case a bit of inspiration that may or may not have come about naturally,” he told Forbes. As for the “thinking” part of the artwork, Watson’s linguistic-analysis technology monitored social chatter among conference attendees on Twitter, and as the buzz built, the artwork shifted its shape in real time. 

Analytics that can detect and measure sentiment on social media have made their way into deployed systems. For example, financial traders depend on social media to gauge the impact of news and events on the markets, but until recently it was difficult to analyze social platforms and draw meaningful conclusions from them. Bloomberg LP addressed this problem by designing a function on its terminals that continually sifts through millions of social media conversations, interprets whether the content is negative or positive, and overlays this information with a visualization of stock-price movements. The result: detailed insights into how Twitter and other social media may affect stocks and other financial instruments. 
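
A stripped-down version of this kind of pipeline can be sketched in a few lines. Bloomberg’s actual models are proprietary and far more capable; the word lists, posts, and scoring rule below are invented purely for illustration.

```python
# Minimal sketch of sentiment aggregation: a tiny word list scores each post,
# and scores are averaged per hour so the series could be overlaid on price
# data. The lexicon and all posts below are invented for illustration.

POSITIVE = {"beat", "surge", "upgrade", "strong", "record"}
NEGATIVE = {"miss", "plunge", "downgrade", "weak", "recall"}

def score(post):
    """Positive minus negative word count, clamped to [-1, 1] per post."""
    words = post.lower().replace(",", " ").split()
    raw = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, raw / 2))

def hourly_sentiment(posts):
    """Average score per hour bucket; posts is a list of (hour, text) pairs."""
    buckets = {}
    for hour, text in posts:
        buckets.setdefault(hour, []).append(score(text))
    return {hour: sum(s) / len(s) for hour, s in sorted(buckets.items())}

posts = [
    (9, "Earnings beat, strong guidance"),
    (9, "Analyst upgrade after record quarter"),
    (10, "Product recall announced, shares plunge"),
]
print(hourly_sentiment(posts))  # hour 9 scores positive, hour 10 negative
```

Real systems replace the hand-built word list with trained language models and must also handle sarcasm, negation, and spam, which is where most of the difficulty lies.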

“As AI technologies evolve and surpass the goals and benchmarks we set, it is increasingly clear that there is much value to be gained from all we know how to do,” says Bloomberg’s Mann. Just a few years ago, he recalls, it would have been preposterous to even imagine driverless cars and highly accurate machine-learning speech recognition. “It’s pretty remarkable what has been achieved, and the ramifications will be enormous,” he adds. 

Those ramifications will likely be overwhelmingly positive as well. Accenture forecasts that artificial intelligence could double annual economic growth rates by 2035 and boost labor productivity by up to 40%. AI also promises to shrink time and distance, as well as human limitations, allowing people to spend more time on creative work. Cognitive robotics is on track to endow machines with emotional intelligence and the ability to perceive, remember, plan, and reason. 

Military vets who have spoken with Ellie, the AI-supported virtual therapist, have already glimpsed this future. Initial research suggests that study participants spoke more freely and displayed more emotion with a virtual interviewer than with a human one, feeling less fear of being judged by the computer.  

For humans and AI technology, this could well be the beginning of a beautiful friendship.