An AI Pioneer Explains the Evolution of Neural Networks

Google's Geoff Hinton was a pioneer in researching the neural networks that now underlie much of artificial intelligence. He persevered when few others agreed.

Geoffrey Hinton is one of the creators of deep learning, a winner of the 2018 Turing Award, and an engineering fellow at Google. Last week, at the company’s I/O developer conference, we discussed his early fascination with the brain, and the possibility that computers could be modeled after its neural structure—an idea long dismissed by other scholars as foolhardy. We also discussed consciousness, his future plans, and whether computers should be taught to dream. The conversation has been lightly edited for length and clarity.

Nicholas Thompson: Let’s start with when you wrote some of your early, very influential papers. Everybody said, “This is a smart idea, but we're not actually going to be able to design computers this way.” Explain why you persisted and why you were so confident that you had found something important.

Geoffrey Hinton: It seemed to me there's no other way the brain could work. It has to work by learning the strength of connections. And if you want to make a device do something intelligent, you’ve got two options: You can program it, or it can learn. And people certainly weren't programmed, so we had to learn. This had to be the right way to go.

NT: Explain what neural networks are. Explain the original insight.

GH: You have relatively simple processing elements that are very loosely models of neurons. They have connections coming in, each connection has a weight on it, and that weight can be changed through learning. And what a neuron does is take the activities on the connections, multiply each by its weight, add them all up, and then decide whether to send an output. If it gets a big enough sum, it sends an output. If the sum is negative, it doesn't send anything. That’s about it. And all you have to do is just wire up a gazillion of those with a gazillion squared weights, and just figure out how to change the weights, and it'll do anything. It's just a question of how you change the weights.
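
In code, the unit he describes is only a few lines. This is a minimal sketch in Python; the inputs, weights, and threshold are invented for illustration.

```python
import numpy as np

def artificial_neuron(activities, weights, threshold=0.0):
    """Weighted sum of incoming activities; send an output only if the sum is big enough."""
    total = np.dot(activities, weights)         # activities on the connections times the weights, added up
    return total if total > threshold else 0.0  # otherwise send nothing

# Hypothetical unit with three incoming connections
inputs = np.array([0.9, 0.2, 0.4])
weights = np.array([1.5, -0.8, 0.3])            # learned connection strengths
print(artificial_neuron(inputs, weights))       # 1.31, big enough, so it sends an output
```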

NT: When did you come to understand that this was an approximate representation of how the brain works?

GH: Oh, it was always designed as that. It was designed to be like how the brain works.

NT: So at some point in your career, you start to understand how the brain works. Maybe it was when you were 12; maybe it was when you were 25. When do you make the decision that you will try to model computers after the brain?

GH: Sort of right away. That was the whole point of it. The whole idea was to have a learning device that learns like the brain, like people think the brain learns, by changing connection strengths. And this wasn't my idea; [British mathematician Alan] Turing had the same idea. Even though Turing invented a lot of the basis of standard computer science, he believed that the brain was this unorganized device with random weights, and that it would use reinforcement learning to change the connections, and it would learn everything. And he thought that was the best route to intelligence.

NT: And so you were following Turing’s idea that the best way to make a machine is to model it after the human brain. This is how a human brain works, so let's make a machine like that.

GH: Yeah, it wasn't just Turing’s idea. Lots of people thought that.

NT: When is the darkest moment? When is the moment where other people who've been working on this, who agreed with this idea from Turing, start to back away, and yet you continue to plunge ahead?

GH: There were always a bunch of people who kept believing in it, particularly in psychology. But among computer scientists, I guess in the ’90s, what happened was data sets were quite small and computers weren't that fast. And on small data sets, other methods, like things called support vector machines, worked a little bit better. They didn't get confused by noise so much. So that was very depressing, because in the ’80s we developed back propagation. We thought it would solve everything. And we were a bit puzzled about why it didn't solve everything. And it was just a question of scale, but we didn't really know that then.

NT: And so why did you think it was not working?

GH: We thought it was not working because we didn't have quite the right algorithms, we didn’t have quite the right objective functions. I thought for a long time it was because we were trying to do supervised learning, where you have to label data, and we should have been doing unsupervised learning, where you just learned from the data with no labels. It turned out it was mainly a question of scale.

NT: That's interesting. So the problem was, you didn't have enough data. You thought you had the right amount of data, but you hadn't labeled it correctly. So you just misidentified the problem?

GH: I thought just using labels at all was a mistake. You do most of your learning without making any use of labels, just by trying to model the structure in the data. I actually still believe that. I think as computers get faster, for any given size data set, if you make computers fast enough, you're better off doing unsupervised learning. And once you've done the unsupervised learning, you'll be able to learn from fewer labels.

NT: So in the 1990s, you're continuing with your research, you’re in academia, you are still publishing, but you aren't solving big problems. Was there ever a moment where you said, you know what, enough of this. I'm going to go try something else? Or did you just say, we're going to keep doing deep learning?

GH: Yes. Something like this has to work. I mean, the connections in the brain are learning somehow, and we just have to figure it out. And probably there's a bunch of different ways of learning connection strengths; the brain’s using one of them. There may be other ways of doing it. But certainly you have to have something that can learn these connection strengths. I never doubted that.

NT: So you never doubt it. When does it first start to seem like it's working?

GH: One of the big disappointments in the ’80s was, if you made networks with lots of hidden layers, you couldn't train them. That's not quite true, because you could train them for fairly simple tasks like recognizing handwriting. But most of the deep neural nets, we didn't know how to train them. And in about 2005, I came up with a way of doing unsupervised training of deep nets. So you take your input, say your pixels, and you'd learn a bunch of feature detectors that were just good at explaining why the pixels were even like that. And then you treat those feature detectors as the data, and you learn another bunch of feature detectors, so you could explain why those feature detectors have those correlations. And you keep learning layers and layers. But what was interesting was, you could do some math and prove that each time you learned another layer, you didn't necessarily have a better model of the data, but you had a bound on how good your model was. And you could get a better bound each time you added another layer.

NT: What do you mean, you had a bound on how good your model was?

GH: Once you've got a model, you can say, “How surprising does the model find this data?” You show it some data and you say, “Is that the kind of thing you believe in, or is that surprising?” And you can sort of measure something that says that. And what you'd like is a good model, one that looks at the data and says, “Yeah, yeah, I knew that. It's unsurprising.” It's often very hard to compute exactly how surprising the model finds the data. But you can compute a bound on that. You can say that this model finds the data less surprising than that one. And you could show that as you add extra layers of feature detectors, you get a model, and each time you add a layer, the bound on how surprising it finds the data gets better.
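
The layer-by-layer recipe he describes can be sketched as follows. Hinton's actual method trained each layer as a restricted Boltzmann machine; the stand-in here is a tiny autoencoder, and the data and layer sizes are invented, so treat it as an illustration of the greedy stacking rather than the real algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_feature_layer(x, n_hidden, lr=0.05, epochs=200):
    """Learn one layer of feature detectors by asking hidden units to reconstruct
    their input. Hinton used restricted Boltzmann machines for this step; the small
    autoencoder here is only an illustrative stand-in."""
    W = rng.normal(0.0, 0.1, (x.shape[1], n_hidden))   # data -> features
    V = rng.normal(0.0, 0.1, (n_hidden, x.shape[1]))   # features -> data
    for _ in range(epochs):
        h = np.tanh(x @ W)                             # feature-detector activities
        err = h @ V - x                                # reconstruction error
        grad_V = h.T @ err / len(x)
        grad_W = x.T @ ((err @ V.T) * (1 - h ** 2)) / len(x)
        V -= lr * grad_V
        W -= lr * grad_W
    return W

def greedy_pretrain(data, layer_sizes):
    """Learn layers one at a time: train feature detectors on the data, then treat
    those detectors' activities as the data for the next layer."""
    weights, x = [], data
    for n_hidden in layer_sizes:
        W = train_feature_layer(x, n_hidden)
        weights.append(W)
        x = np.tanh(x @ W)                             # features become the next layer's "data"
    return weights

# Hypothetical stand-in for handwritten-digit pixels: 256 random 8x8 "images"
pixels = rng.random((256, 64))
stack = greedy_pretrain(pixels, layer_sizes=[32, 16])
print([w.shape for w in stack])                        # [(64, 32), (32, 16)]
```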

NT: That's around 2005, when you come up with that mathematical breakthrough. When do you start getting answers that are correct? And what data are you working on? Is it speech data where you first have your breakthrough?

GH: This was just handwritten digits. Very simple. And then, around the same time, they started developing GPUs [graphics processing units]. And the people doing neural networks started using GPUs in about 2007. I had one very good student who started using GPUs for finding roads in aerial images. He wrote some code that was then used by other students for using GPUs to recognize phonemes in speech. So they were using this idea of pretraining. And after they'd done all this pretraining, just stick labels on top and use back propagation. And that way, it turned out, you could have a very deep net that was pretrained. And you could then use back propagation, and it actually worked. And it sort of beat the benchmarks for speech recognition. Initially, just by a little bit.
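
The "stick labels on top" step might look roughly like this hypothetical sketch: a single softmax layer trained on features that stand in for the pretrained stack. In practice back propagation would also fine-tune the pretrained layers below, and the data and labels here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical "pretrained" features: in the real pipeline these would come from
# the unsupervised stack of feature detectors, not from a random projection.
pixels = rng.random((512, 64))
pretrained_W = rng.normal(0.0, 0.1, (64, 32))
features = np.tanh(pixels @ pretrained_W)

labels = rng.integers(0, 10, size=512)                     # invented digit labels
onehot = np.eye(10)[labels]

# "Stick labels on top": a softmax layer trained by gradient descent.
# In practice back propagation would also fine-tune the pretrained layers below.
W_out = np.zeros((32, 10))
for _ in range(200):
    probs = softmax(features @ W_out)
    grad = features.T @ (probs - onehot) / len(features)   # cross-entropy gradient
    W_out -= 0.5 * grad

print("training accuracy:", round(float((softmax(features @ W_out).argmax(1) == labels).mean()), 3))
```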

NT: It beat the best commercially available speech recognition? It beat the best academic work on speech recognition?

GH: On a relatively small data set called TIMIT, it did slightly better than the best academic work, and also better than work done at IBM.

And very quickly, people realized that this stuff—since it was beating standard models that had taken 30 years to develop—would do really well with a bit more development. And so my graduate students went off to Microsoft and IBM and Google, and Google was the fastest to turn it into a production speech recognizer. And by 2012, that work, first done in 2009, came out in Android. And Android suddenly got much better at speech recognition.

NT: So tell me about that moment where you've had this idea for 40 years, you've been publishing on it for 20 years, and you're finally better than your colleagues. What did that feel like?

GH: Well, back then I had only had the idea for 30 years!

NT: Correct, correct! So just a new idea. Fresh!

GH: It felt really good that it finally got to the state of the art on a real problem.

NT: And do you remember where you were when you first got the revelatory data?

GH: No.

NT: All right. So you realize it works on speech recognition. When do you start applying it to other problems?

GH: So then we started applying it to all sorts of other problems. George Dahl, who was one of the people who did the original work on speech recognition, applied it to predicting whether a molecule would bind to something and act as a good drug. And there was a competition. And he just applied our standard technology designed for speech recognition to predicting the activity of drugs, and it won the competition. So that was a sign that this stuff sort of felt fairly universal. And then I had a student who said, “You know, Geoff, this stuff is going to work for image recognition, and Fei-Fei Li has created the right data set for it. And there’s a public competition; we have to do that.”

And we got results that were a lot better than standard computer vision. That was 2012.

NT: So those are three areas where it succeeded: modeling chemicals, speech, and vision. Where was it failing?

GH: The failure is only temporary, you understand?

NT: Well, what distinguishes the areas where it works the most quickly and the areas where it will take more time? It seems like visual processing and speech recognition, the sort of core human things we do with our sensory perception, were the first barriers to clear. Is that correct?

GH: Yes and no, because there are other things we do, like motor control. We're very good at motor control. Our brains are clearly designed for that. And only just now are neural nets beginning to compete with the best other technologies that are out there. They will win in the end, but they're only just winning now.

I think things like reasoning, abstract reasoning, they’re the kind of last things we learn to do, and I think they'll be among the last things these neural nets learn to do.

NT: And so you keep saying that neural nets will win at everything eventually.

GH: Well, we are neural nets. Anything we can do they can do.

NT: Right, but the human brain is not necessarily the most efficient computational machine ever created.

GH: Certainly not.

NT: Certainly not my human brain! Couldn't there be a way of modeling machines that is more efficient than the human brain?

GH: Philosophically, I have no objection to the idea that there could be some completely different way to do all this. It could be that if you start with logic, and you try to automate logic, and you make some really fancy theorem prover, and you do reasoning, and then you decide you're going to do visual perception by doing reasoning, it could be that that approach would win. It turned out it didn't. But I've no philosophical objection to that winning. It's just we know that brains can do it.

NT: But there are also things our brains can't do well. Are those things that neural nets also won't be able to do well?

GH: Quite possibly, yes.

NT: And then there's a separate problem, which is, we don't know entirely how these things work, right?

GH: No, we really don't know how they work.

NT: We don't understand how top-down communication in neural networks works. That’s a core element of how neural networks work that we don't understand. Explain that, and then let me ask the obvious follow-up, which is, if we don't know how these things work, how can those things work?

GH: If you look at current computer vision systems, most of them are basically feed-forward; they don't use feedback connections. There's something else about current computer vision systems, which is that they're very prone to adversarial errors. You can change a few pixels slightly, and something that was a picture of a panda, and still looks exactly like a panda to you, suddenly gets called an ostrich. Obviously, the way you change the pixels is cleverly designed to fool it into thinking it's an ostrich. But the point is, it still looks like a panda to you.

Initially we thought these things worked really well. But then, when confronted with the fact that they're looking at a panda and are confident it’s an ostrich, you get a bit worried. I think part of the problem there is that they're not trying to reconstruct from the high-level representations. They're trying to do discriminative learning, where you just learn layers of feature detectors, and the whole objective is just to change the weights so you get better at getting the right answer. And recently in Toronto, we've been discovering, or Nick Frosst has been discovering, that if you introduce reconstruction, then it helps you be more resistant to adversarial attack. So I think in human vision, to do the learning, we're doing reconstruction. And also because we're doing a lot of learning by doing reconstructions, we are much more resistant to adversarial attacks.
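
The kind of perturbation he describes can be sketched on a toy classifier: nudge each pixel a tiny amount in the direction that pushes the prediction toward a chosen wrong class, a gradient-sign attack in the spirit of FGSM. The classifier, data, and step sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy stand-in: a single softmax layer over 64 "pixels". Real adversarial
# examples target deep image classifiers, but the principle is the same.
n_pixels, n_classes = 64, 3
W = rng.normal(0.0, 0.5, (n_pixels, n_classes))

image = rng.random(n_pixels)                        # the "panda"
clean_pred = int((W.T @ image).argmax())
target = (clean_pred + 1) % n_classes               # the "ostrich"

adv = image.copy()
for _ in range(20):                                 # a few tiny gradient-sign steps
    probs = softmax(W.T @ adv)
    grad = W @ (probs - np.eye(n_classes)[target])  # gradient of the loss toward the target class
    adv = np.clip(adv - 0.01 * np.sign(grad), 0.0, 1.0)

print("clean prediction:", clean_pred)
print("adversarial prediction:", int((W.T @ adv).argmax()))
print("largest pixel change:", round(float(np.abs(adv - image).max()), 3))
```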

NT: You believe that top-down communication in a neural network is designed to let you test how you reconstruct something. How do you test and make sure it's a panda and not an ostrich?

GH: I think that's crucial, yes.

NT: But brain scientists are not entirely agreed on that, correct?

GH: Brain scientists are all agreed that if you have two areas of the cortex in a perceptual pathway, there'll always be backwards connections. They're not agreed on what it's for. It could be for attention, it could be for learning, or it could be for reconstruction. Or it could be for all three.

NT: So we don't know what the backwards communication is for. Yet you're building backwards communication that does reconstruction into your neural networks, even though we're not sure that's how the brain works?

GH: Yes.

NT: Isn’t that cheating? I mean if you're trying to make it like the brain, you're doing something we're not sure is like the brain.

GH: Not at all. I'm not doing computational neuroscience. I'm not trying to make a model of how the brain works. I'm looking at the brain and saying, “This thing works, and if we want to make something else that works, we should sort of look to it for inspiration.” So this is neuro-inspired, not a neural model. The neurons we use are inspired by the fact that neurons have a lot of connections and that they change their strengths.

"The whole idea was to have a learning device that learns like the brain," says Geoffrey Hinton.


NT: It's interesting. So if I were in computer science, and I was working on neural networks, and I wanted to beat Geoff Hinton, one option would be to build in top-down communication and base it on other models of brain science. So you'd base it on learning, not on reconstruction.

GH: If they were better models, then you'd win. Yeah.

NT: That's very, very interesting. Let's move to a more general topic. So neural networks will be able to solve all kinds of problems. Are there any mysteries of the human brain that will not, or cannot, be captured by neural networks? For example, could emotion …

GH: No.

NT: So love could be reconstructed by a neural network? Consciousness can be reconstructed?

GH: Absolutely. Once you've figured out what those things mean. We are neural networks. Right? Now consciousness is something I'm particularly interested in. I get by fine without it, but … people don't really know what they mean by it. There's all sorts of different definitions. And I think it's a prescientific term. So 100 years ago, if you asked people what life is, they would have said, “Well, living things have vital force, and when they die, the vital force goes away. And that's the difference between being alive and being dead, whether you’ve got vital force or not.” And now we don't have vital force, we just think it's a prescientific concept. And once you understand some biochemistry and molecular biology, you don't need vital force anymore; you understand how it actually works. And I think it's going to be the same with consciousness. I think consciousness is an attempt to explain mental phenomena with some kind of special essence. And this special essence, you don't need it. Once you can really explain it, then you'll explain how we do the things that make people think we're conscious, and you'll explain all these different meanings of consciousness, without having some special essence called consciousness.

NT: So there's no emotion that couldn't be created? There's no thought that couldn't be created? There's nothing that a human mind can do that couldn't theoretically be recreated by a fully functioning neural network once we truly understand how the brain works?

GH: There’s something in a John Lennon song that sounds very like what you just said.

NT: And you're 100 percent confident of this?

GH: No, I'm a Bayesian, and so I'm 99.9 percent confident.

NT: Okay, then what is the 0.1?

GH: Well, we might, for example, all be part of a big simulation.

NT: True, fair enough. So what are we learning about the brain from our work in computers?

GH: So I think what we've learned in the last 10 years is that if you take a system with billions of parameters, and an objective function—like to fill in the gap in a string of words—it works much better than it has any right to. It works much better than you would expect. You would have thought, and most people in conventional AI thought, take a system with a billion parameters, start them off with random values, measure the gradient of the objective function—that is, for each parameter, figure out how the objective function would change if you change that parameter a little bit—and then change it in the direction that improves the objective function. You would have thought that would be a kind of hopeless algorithm that gets stuck. But it turns out it's a really good algorithm. And the bigger you scale things, the better it works. And that's just an empirical discovery, really. There's some theory coming along, but it's basically an empirical discovery. Now, because we've discovered that, it makes it far more plausible that the brain is computing the gradient of some objective function, and updating the weights, the strengths of synapses, to follow that gradient. We just have to figure out how it gets the gradient and what the objective function is.
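
The recipe he is describing (start with random parameters, measure the gradient of the objective, move each parameter a little in the direction that improves it) is plain gradient descent. Here is a minimal sketch on an invented toy objective.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny stand-in objective: squared error of a linear predictor on made-up data.
# (The real systems have billions of parameters; the recipe is the same.)
X = rng.random((200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.01 * rng.normal(size=200)

w = rng.normal(0.0, 1.0, 5)                 # start the parameters off with random values
for step in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # how the objective changes if each parameter moves a little
    w -= 0.1 * grad                         # move each parameter in the direction that improves things
print(np.round(w, 2))                       # ends up close to true_w
```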

NT: But we didn't understand that about the brain? We didn't understand the reweighting?

GH: It was a theory. A long time ago, people thought that's a possibility. But in the background, there were always sort of conventional computer scientists saying, “Yeah, but this idea of everything's random, you just learn it all by gradient descent—that's never going to work for a billion parameters. You have to wire in a lot of knowledge.” And we know now that's wrong; you can just put in random parameters and learn everything.

NT: So let's expand this out. As we run these massive tests on models, based on how we think the human brain functions, we'll presumably continue to learn more and more about how the brain actually does function. Does there come a point where we can essentially rewire our brains to be more like the most efficient machines?

GH: If we really understand what's going on, we should be able to make things like education work better. And I think we will. It will be very odd if you could finally understand what's going on in your brain and how it learns, and not be able to adapt the environment so you can learn better.

NT: A couple of years from now, how do you think we will be using what we've learned about the brain and about how deep learning works to change how education functions? How would you change a class?

GH: In a couple of years, I'm not sure we’ll learn much. I think to change education is going to be longer. But if you look at it, assistants are getting pretty smart. And once assistants can really understand conversations, assistants can have conversations with kids and educate them.

NT: And so theoretically, as we understand the brain better, you will program the assistants to have better conversations with the children based on how we know they'll learn.

GH: Yeah, I haven't really thought much about this. It's not what I do. But it seems quite plausible to me.

NT: Will we be able to understand how dreams work?

GH: Yes, I'm really interested in dreams. I'm so interested I have at least four different theories of dreams.

NT: Let's hear them all—one, two, three, four.

GH: So a long time ago, there were things called Hopfield networks, and they would learn memories as local attractors. And Hopfield discovered that if you try and put too many memories in, they get confused. They'll take two local attractors and merge them into an attractor sort of halfway in between.
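
A Hopfield network of the kind he mentions fits in a few lines: memories are stored with a Hebbian rule and recalled by letting the network settle into a nearby attractor. The sizes, patterns, and corruption level here are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 100                                           # binary (+1/-1) neurons
memories = rng.choice([-1, 1], size=(5, n))       # a few patterns to store

# Hebbian storage: strengthen the connection between units that are active together.
W = sum(np.outer(m, m) for m in memories) / n
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Let the network settle: each unit lines up with its weighted input."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

cue = memories[0].astype(float)
cue[:20] *= -1                                    # corrupt a fifth of the pattern
restored = recall(cue)
print("overlap with the stored memory:", int(restored @ memories[0]))  # 100 means perfect recall
# Cramming in many more patterns than about 0.14 * n makes attractors merge and recall fail.
```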

Then Francis Crick and Graeme Mitchison came along and said, we can get rid of these false minima by doing unlearning. So we turn off the input, we put the neural network into a random state, we let it settle down, and we say that's bad, change the connections so you don't settle into that state, and if you do a bit of that, it will be able to store more memories.

And then Terry Sejnowski and I came along and said, “Look, if we have not just the neurons where you’re storing the memories, but lots of other neurons too, can we find an algorithm that will use all these other neurons to help restore memories?” And it turned out in the end we came up with the Boltzmann machine learning algorithm, which had a very interesting property: I show you data, and it sort of rattles around the other units until it's got a fairly happy state, and once it's done that, it increases the strength of a connection if the two units it joins are both active.

You also have to have a phase where you cut it off from the input, you let it rattle around and settle into a state it’s happy with, so now it's having a fantasy, and once it’s had the fantasy you say, “Take all pairs of neurons that are both active and decrease the strengths of the connections between them.”

So I'm explaining the algorithm to you just as a procedure. But actually, that algorithm is the result of doing some math and saying, “How should you change these connection strengths so that this neural network with all these hidden units finds the data unsurprising?” And it has to have this other phase, what we call the negative phase, when it's running with no input, and it's unlearning whatever state it settles into.
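
The two-phase rule he is describing, raise the strength of connections between units that are co-active on data and lower it for units that are co-active in free-running fantasies, can be sketched as follows. Hidden units and biases are omitted to keep the example short, and the data is invented.

```python
import numpy as np

rng = np.random.default_rng(5)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n = 8                                                      # binary (0/1) units, biases omitted
data = rng.integers(0, 2, size=(100, n)).astype(float)     # invented binary "data"
W = np.zeros((n, n))                                       # symmetric connection strengths

def settle(state, steps=20):
    """Let the network rattle around: each unit turns on with a probability
    set by its weighted input from the other units."""
    for _ in range(steps):
        for i in range(n):
            state[i] = float(rng.random() < sigmoid(W[i] @ state))
    return state

for epoch in range(50):
    # Positive phase: pairwise statistics with the data on the units.
    pos = data.T @ data / len(data)
    # Negative (fantasy) phase: cut off the input and let the network settle.
    fantasies = np.array([settle(rng.integers(0, 2, n).astype(float)) for _ in range(20)])
    neg = fantasies.T @ fantasies / len(fantasies)
    # Raise connections between pairs co-active on data; lower pairs co-active in fantasies.
    W += 0.05 * (pos - neg)
    np.fill_diagonal(W, 0)                                 # no self-connections

print("learned connection strengths:\n", np.round(W, 2))
```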

We dream for many hours every night. And if I wake you up at random, you can tell me what you were just dreaming about because it’s in your short-term memory. So we know you dream for many hours, but when you wake up in the morning, you can remember the last dream but you can't remember all the others—which is lucky, because you might mistake them for reality. So why is it we don't remember our dreams at all? And Crick’s view was, the whole point of dreaming is to unlearn those things. So you put the learning all in reverse.

And Terry Sejnowski and I showed that, actually, that is a maximum-likelihood learning procedure for Boltzmann machines. So that's one theory of dreaming.

NT: I want to go to your other theories. But have you actually set any of your deep learning algorithms to essentially dream? Study this image data set for a period of time, reset, study it again, reset.

GH: So yes, we had machine learning algorithms. Some of the first algorithms that could learn what to do with hidden units were Boltzmann machines. They were very inefficient. But then later on, I found a way of making approximations to them that were efficient. And those were actually the trigger for getting deep learning going again. Those were the things that learned one layer of feature detectors at a time, and it was an efficient form of a restricted Boltzmann machine. And so it was doing this kind of unlearning. But rather than going to sleep, that one would just fantasize for a little bit after each data point.
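
The efficient approximation he mentions is contrastive divergence on a restricted Boltzmann machine: after each data point the network "fantasizes for a little bit", one reconstruction step, instead of settling all the way. A minimal sketch with invented data, biases omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(6)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 64, 16
W = rng.normal(0.0, 0.01, (n_visible, n_hidden))  # connections between "pixels" and feature detectors

def cd1_update(W, v, lr=0.05):
    """One contrastive-divergence step: a brief fantasy after a data point
    instead of letting the network settle all the way."""
    h_prob = sigmoid(v @ W)                        # feature detectors respond to the data
    h = (rng.random(n_hidden) < h_prob).astype(float)
    v_recon = sigmoid(W @ h)                       # brief fantasy: reconstruct the data
    h_recon = sigmoid(v_recon @ W)                 # and the features of that fantasy
    # Raise data-driven pairwise statistics, lower fantasy-driven ones.
    return W + lr * (np.outer(v, h_prob) - np.outer(v_recon, h_recon))

data = (rng.random((500, n_visible)) > 0.5).astype(float)   # invented binary "images"
for v in data:
    W = cd1_update(W, v)
print("weight scale after training:", round(float(np.abs(W).mean()), 4))
```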

NT: OK, so Androids do dream of electric sheep. So let's go to theories two, three, and four.

GH: Theory two was called the wake-sleep algorithm. And you want to learn a generative model. So you have the idea that you're going to have a model that can generate data: it has layers of feature detectors, and it activates the high-level ones, then the low-level ones, and so on, until it activates pixels, and that's an image. You also want to learn the other way. You also want to recognize data.

And so you're going to have an algorithm that has two phases. In the wake phase, data comes in, it tries to recognize it, and instead of learning the connections that it’s using for recognition, it’s learning the generative connections. So data comes in, I activate the hidden units. And then I learn to make those hidden units be good at reconstructing that data. So it's learning to reconstruct at every layer. But the question is, how do you learn the forward connections? So the idea is, if you knew the forward connections, you could learn the backward connections, because you could learn to reconstruct.

Now, it also turns out that if you use the backward connections, you can learn the forward connections, because what you could do is start at the top and just generate some data. And because you generated the data, you know the states of all the hidden layers, and so you could learn the forward connections to recover those states. So that would be the sleep phase. When you turn off the input, you just generate data, and then you try and reconstruct the hidden units that generated the data. And so if you know the top-down connections, you learn the bottom-up ones. If you know the bottom-up ones, you learn the top-down ones. So what happens if you start with random connections and try alternating the two phases? It works. Now to make it work well, you have to do all sorts of variations of it, but it works.
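
Here is a stripped-down wake-sleep sketch for a single hidden layer, with a simplified top-level prior and invented binary data; the real algorithm handles many layers and learns the prior as well.

```python
import numpy as np

rng = np.random.default_rng(7)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sample = lambda p: (rng.random(p.shape) < p).astype(float)   # stochastic binary units

n_v, n_h = 32, 8
R = rng.normal(0.0, 0.05, (n_v, n_h))    # recognition (bottom-up) connections
G = rng.normal(0.0, 0.05, (n_h, n_v))    # generative (top-down) connections
lr = 0.05

data = (rng.random((300, n_v)) > 0.5).astype(float)          # invented binary data

for v in data:
    # Wake phase: recognize the data, then train the top-down connections
    # to reconstruct the layer below from the states above it.
    h = sample(sigmoid(v @ R))
    G += lr * np.outer(h, v - sigmoid(h @ G))

    # Sleep phase: cut off the input, generate a fantasy from the top, then
    # train the bottom-up connections to recover the states that produced it.
    h_dream = sample(np.full(n_h, 0.5))                      # simplified top-level prior
    v_dream = sample(sigmoid(h_dream @ G))
    R += lr * np.outer(v_dream, h_dream - sigmoid(v_dream @ R))
```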

NT: All right, do you want to go through the other two theories? We only have eight minutes left, so maybe we should jump through some other questions.

GH: If you give me another hour, I could do the other two things.

NT: So let's talk about what comes next. Where's your research headed? What problem are you trying to solve now?

GH: Eventually, you're going to end up working on something that you don't finish. And I think I may well be working on the thing I never finish, but it's called capsules, and it's the theory of how you do visual perception using reconstruction, and also how you route information to the right places. In standard neural nets, the information, the activity in the layer, just automatically goes somewhere; you don't decide where to send it. The idea of capsules was to make decisions about where to send information.

Now, since I started working on capsules, some other very smart people at Google invented transformers, which are doing the same thing. They're deciding where to route information, and that's a big win.
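
The routing that transformers do is scaled dot-product attention: each position computes weights that decide how much information to pull from every other position, rather than sending activity along fixed wiring. A minimal sketch with invented sizes follows; capsule routing-by-agreement differs in its details.

```python
import numpy as np

rng = np.random.default_rng(8)

def attention(queries, keys, values):
    """Scaled dot-product attention: each position decides, by similarity of its
    query to every key, how much information to pull from each other position."""
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # softmax routing weights
    return weights @ values

# Hypothetical activities for 5 positions, 16 features each
x = rng.normal(size=(5, 16))
Wq, Wk, Wv = (rng.normal(0.0, 0.1, (16, 16)) for _ in range(3))
routed = attention(x @ Wq, x @ Wk, x @ Wv)
print(routed.shape)                                          # (5, 16)
```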

The other thing that motivated capsules was coordinate frames. So when humans do visual, they're always using coordinate frames. If they impose the wrong coordinate frame on an object, they don't even recognize the object. So I'll give you a little task: Imagine a tetrahedron; it’s got a triangular base and three triangular faces, all equilateral triangles. Easy to imagine, right? Now imagine slicing it with a plane, so you get a square cross section.

That's not so easy, right? Every time you slice, you get a triangle. It’s not obvious how you get a square. It's not at all obvious. Okay, but I'll give you the same shape described differently. I need your pen. Imagine the shape you get if you take a pen like that, another pen at right angles like this, and you connect all points on this pen to all points on this pen. That's a solid tetrahedron.

OK, you're seeing it relative to a different coordinate frame, where two of the edges of the tetrahedron line up with the coordinate frame. And if you think of the tetrahedron that way, it's pretty obvious that at the top you've got a long rectangle this way, at the bottom you've got a long rectangle that way, and there’s a square in the middle. So now it's pretty obvious how you can slice it to get a square, but only if you think of it with that coordinate frame.

So it's obvious that for humans, coordinate frames are very important for perception.

NT: But how is adding coordinate frames to your model not the same as the error you were making in the ’90s, where you were trying to put rules into the system as opposed to letting the system learn them on its own?

GH: It is exactly that error. And because I'm so adamant that that's a terrible error, I'm allowed to do a tiny bit of it. It's sort of like Nixon negotiating with China. Actually, that puts me in a bad role.

NT: So is your current task specific to visual recognition, or is it a more general way of improving by coming up with a rule set for coordinate frames?

GH: It could be used for other things, but I'm really interested in the use for visual recognition.

NT: Deep learning used to be a distinct thing. And then it became sort of synonymous with the phrase AI, and now AI is a marketing term that basically means using a machine in any way whatsoever. How do you feel about the terminology as the man who helped create this?

GH: I was much happier when there was AI, which meant you were logic-inspired and you did manipulations on symbol strings, and there were neural nets, which meant you wanted to do learning in a neural network. They were different enterprises that really didn't get along too well and fought for money. That's how I grew up. And now I see people who spent years saying neural networks are nonsense now saying, “I'm an AI professor, so I need money.” And it’s annoying.

NT: So your field succeeded, kind of ate or subsumed the other field, which then gave them an advantage in asking for money, which is frustrating.

GH: Yeah, now it's not entirely fair, because a lot of them have actually converted.

NT: Well, I've got time for one more question. In one interview, talking about AI, you said, well, think of it like a backhoe—a machine that can dig a hole or, if not constructed properly, can wipe you out. And the key, when you work on your backhoe, is to design it in such a way that it's best at digging the hole and not at clocking you in the head. As you think about your work, what are the choices you make like that?

GH: I guess I would never deliberately work on making weapons. I mean, you could design a backhoe that was very good at knocking people's heads off. And I think that would be a bad use of a backhoe, and I wouldn't work on it.

NT: All right. Well, Geoffrey Hinton, that was an extraordinary interview. All kinds of information. We will be back next year to talk about dream theories three and four.

Corrected, 6-3-19, 6:40pm: An earlier version of this article misspelled the name of researcher Nick Frosst.

