
Teaching Machines to Learn on Their Own

Stephen Hoover, CEO of Xerox’s Palo Alto Research Center, talks with Scientific American tech editor Larry Greenemeier about the revolution underway in machine learning, in which the machine eventually programs itself
 


Steve Mirsky: Welcome to Scientific American’s Science Talk, posted on November 10, 2015. I'm Steve Mirsky. It's a short episode today, so I'll turn it over now to Scientific American’s associate tech editor, Larry Greenemeier.

Larry Greenemeier: Computers have always been good at doing things that are really complicated for us humans. Things like crunching insane numbers and running complex algorithms. On the other hand, computers have a really hard time recognizing a particular voice or face in a crowd; something most kids learn to do before they're even out of diapers. But things are changing fast. Over the next decade or so, machines will more easily mimic inherently human abilities. And they'll learn to do it much the same way we do: through experience.

                           Experience in this case means computers will be fed data patterns over and over again until they're able to automatically identify a particular sound or image on their own. This process is called machine learning. To better understand the dawn of intelligent machines, and what it means for our daily lives, I spoke with Stephen Hoover, CEO of Xerox's Palo Alto Research Center, at a recent intelligent assistants conference in New York City. Here's an edited version of our conversation. We start by talking about the ongoing rapid change of machine learning.


Stephen Hoover: What's really changed over the last five years is that computers now have the ability to understand, in much deeper ways, what it is that we're asking and trying to do. You're actually starting to see it embedded in products. Many of your readers will be familiar with the Nest Thermostat. The Nest Thermostat is a product with an intelligent agent built into it. So I don't program a thermostat now, right? It learns from my behaviors what it is that I'm doing, understanding my context and then delivering to me the experience that I want. So it is an intelligent agent that's built into the product. And more and more you're gonna see these kinds of capabilities built into products.

Greenemeier: What role does machine learning play?

Hoover: With machine learning you don't so much program a computer the way you used to, where you broke a task into a series of steps. Machine learning actually learns the right answer from the data. The machine programs itself.

Greenemeier: Kind of like the way humans learn as kids.

Hoover: You show your daughter a car and a book. You show her another one and you say car, car, car, and she says car. They learn from labeled data. Did we program our child to recognize a car? In some sense you did, but that's what machine learning is. You're gonna show the computer a bunch of instances and you're gonna label them. And it's gonna learn how to do it. There's a core code, which is that learning algorithm, and then that's applied to multiple contexts. We're switching from computers helping people to people helping computers.
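
For readers who want to see the labeled-data idea Hoover describes in concrete terms, here is a minimal sketch in Python using scikit-learn. The "car" and "book" feature encoding and the choice of library are illustrative assumptions for this sketch, not anything discussed in the interview.

```python
# A toy sketch of learning from labeled examples ("car, car, car").
# The features (has_wheels, has_pages) are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

X = [
    [1, 0],  # car: has wheels, no pages
    [1, 0],  # car
    [0, 1],  # book: no wheels, has pages
    [0, 1],  # book
]
y = ["car", "car", "book", "book"]  # the labels we "say" to the learner

model = DecisionTreeClassifier()
model.fit(X, y)                      # the algorithm finds the rule itself

print(model.predict([[1, 0]]))       # -> ['car'] for a new, unseen example
```

The same few lines of learning code could be pointed at photos of thermostat settings or spoken words instead of toy features; only the labeled data changes, which is the "core code applied to multiple contexts" point above.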

Greenemeier: You can't talk about machine learning without also mentioning the hardware that makes it possible.

Hoover: Computers can now do things that used to be hard for computers but are easy for humans, like recognizing a smile. That's because Moore's law has enabled it. And Moore's law means the doubling of computational power every 18 months. Obviously that means I can write more and more complex software.
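
Taking the 18-month doubling figure Hoover cites at face value, a short Python sketch shows how quickly that rate compounds; the year spans chosen below are illustrative, not from the conversation.

```python
# Compounding the 18-month doubling of computational power Hoover cites.
# The year spans are illustrative only.
def moores_law_factor(years, doubling_period_years=1.5):
    """Multiplier in computational power after `years` at one doubling per period."""
    return 2 ** (years / doubling_period_years)

for span in (5, 10, 20):
    print(f"after {span} years: roughly {moores_law_factor(span):,.0f}x")
# after 5 years: ~10x, after 10 years: ~100x, after 20 years: ~10,000x
```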

Greenemeier: Like apps on a smartphone?

Hoover: Your iPhone, by the way, is as powerful as a Cray supercomputer from 1998 that modeled the weather for the entire world. Right, that's what you carry around in your pocket. But one of the amazing things in there – think about it, right – it's like a $15 chip that has GPS, an accelerometer, a pressure sensor. GPS, right – you can tell where you're at in the world to within a meter, anywhere in the world. For $15, one-time buy. It's phenomenal. So hardware that senses the world, to then help software make better decisions, is really important.

                           That's this whole idea of the Internet of Things. The Internet of Things – the analogy I make for people is, you think about Google. What Google and search did is, it enlarged the human memory, right. I used to have to know everything, right, if I wanted to access it in less than glacial time. Or else I had to make a trip to the library and look up the card catalogue. I don't have to know anything today from a fact viewpoint; when I say know anything, I mean I don't have to memorize a set of facts. I just go on Google – my mind has been expanded to be as large as all human knowledge.

                           The Internet of Things is about googling reality. What I mean is, think about it as if my body is now as big as the globe. I wanna know what the temperature is in Augusta, Maine. I wanna know what the state of pollution is in Beijing. I wanna know if there's fresh fish today at Whole Foods. Sensors are gonna be out in the world that are gonna tell me those things. And so hardware not only begets the capability to create new kinds of software, like machine learning, but is also creating new ways to sense, measure and control the world. And that feedback loop is again one of the big changes that we're gonna see coming.

Mirsky: That's it for this short Science Talk. In the coming days, we'll have interviews with the authors of three new books, about math, horses and Parkinson's disease, and lots more. Meanwhile, get your science news at our website, www.scientificamerican.com, or check out the November issue of the magazine, including a long-planned article about how the construction of Egypt's Great Pyramid changed civilization. Who knows, it could come up in a presidential debate. Although that might go against the grain. And follow us on Twitter, where you'll get a tweet whenever a new item hits the website. Our Twitter name is @sciam. For Scientific American Science Talk, I'm Steve Mirsky. Thanks for clicking on us.

[End of Audio].
