
Nobel laureate in physics for solving problems between fields

John Hopfield began his career in physics and moved on to study problems in chemistry and biology. Photo: Denise Applewhite, Princeton University.

John Hopfield, one of this year’s Nobel Prize laureates in physics, is a true polymath. His career began with research in solid-state physics during the field’s heyday in the 1950s, before he moved to hemoglobin chemistry in the late 1960s and the study of DNA synthesis in the following decade.

In 1982, he developed a brain-like network in which neurons, which he modeled as interacting particles, formed a kind of memory. The “Hopfield network”, for which he was awarded the prize, is now widely seen as a building block of the machine learning that underlies modern artificial intelligence (AI). Hopfield shared the award with artificial-intelligence pioneer Geoffrey Hinton of the University of Toronto in Canada.

Hopfield, now 91 and a professor emeritus at Princeton University in New Jersey, spoke with Nature about whether his prizewinning work was really physics, and about why we should care about AI.

Some have argued that your award-winning work wasn’t really physics, but computer science. What do you think?

My definition of physics is that physics is not what you work on, but how you work on it. If you approach a problem from the standpoint of someone who came from physics, then it is a physics problem. Having a father and mother who were both physicists biased my view of what physics was. Everything in the world that was interesting happened because you understood the physics of how such things were put together. I grew up with puzzles and wanted puzzles to solve.

In 1981, I was giving a talk at a meeting, and Terry Sejnowski, who had been my graduate student in physics, was sitting next to Geoff Hinton. (Sejnowski now directs the computational neuroscience group at the Salk Institute in La Jolla, California.) It was clear that Geoff knew how to make the kind of system I work with, the mechanics I do, express computer science. The two got talking and eventually wrote their first paper together. Terry recalled this one day: it was a story of how they came from physics to computer science.

You started your career in physics. How did you get into biology?

Solid-state physics was the basis of the new technologies of the time. But it became harder and harder to find a good problem that I was both capable of solving and interested in solving. I had a friend at Bell Laboratories, where I was working at the time, Bob Shulman, who had recently moved from chemistry to biology, and he started talking about how biological molecules were beginning to be studied in detail. I got the idea that maybe it was time to apply the methods we had used to study solids to large molecules.

What do you think your physics approach brings to biology?

I was trying to build up an understanding of smaller systems and then see whether I could use that to understand larger systems. Could we move from physics at one end to biology at the other? There were problems whose solutions I could visualize because of my understanding of a physical system that was abstractly related to them.

In the late 1970s, you turned to neuroscience and tried to model the brain using artificial neurons. How did the Hopfield network come about?

I began writing down simple equations that described how the activity of the nervous system would change over time, based on the system’s interactions with itself and with the outside world. You can write down similar equations for interacting spin systems in magnetism. That is really what motivated me to try to bring together the equations of motion from the one field and the other.
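To make the spin analogy concrete, here is a minimal sketch of a Hopfield network in Python. It is a textbook rendering of the idea, not code from Hopfield’s 1982 paper, and the function names are illustrative: binary neurons act like spins, a Hebbian weight matrix stores patterns, and asynchronous updates can only lower a spin-glass-style energy, so a corrupted input relaxes back toward the nearest stored memory.

import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    # Hebbian learning: each weight accumulates the correlation
    # between a pair of units across the stored +/-1 patterns.
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def energy(W, s):
    # The spin-glass-style energy that each update can only decrease.
    return -0.5 * s @ W @ s

def recall(W, s, steps=500):
    # Asynchronous dynamics: update one randomly chosen unit at a time,
    # aligning it with the "field" exerted on it by the other units.
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store two random +/-1 patterns, corrupt one, and let the dynamics restore it.
patterns = rng.choice([-1, 1], size=(2, 64))
W = store(patterns)
noisy = patterns[0].copy()
noisy[:12] *= -1  # flip 12 of the 64 bits
recovered = recall(W, noisy)
print("energy before:", energy(W, noisy), " after:", energy(W, recovered))
print("bits matching the stored pattern:", int((recovered == patterns[0]).sum()), "of 64")

With only two patterns stored across 64 units, the network is well below the model’s storage capacity (roughly 0.14 patterns per neuron), so the corrupted input is reliably pulled back to the stored memory; because every update lowers the energy, that convergence is guaranteed rather than accidental.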

Hinton has been vocal about his concerns about the potential harms of AI. Are you worried?

I’m worried. Think about nuclear technology, which allowed people to build bombs of arbitrary size, and which can also be enormously useful. People began to worry as soon as they understood what a chain reaction was. Fast-forward to the 1970s: biologists were very worried about genetic engineering. If you engineer a virus in the right way, you can come close to wiping out a population. Essentially, it’s a chain reaction. It wouldn’t surprise me if that kind of danger could be realized in AI, by writing programs in such a way that they are self-replicating.

The world doesn’t need unlimited speed in AI development. Until we understand more about the limitations of the systems one can create, and where they stand on that ladder of danger, I worry.

What advice do you have for today’s graduate students?

Where two fields pull apart, look to see whether there is anything interesting in the crack between them. I’ve always been interested in these interfaces, because they contain interesting people with different motivations, and listening to them argue is quite educational. It tells you what they really value and how they are trying to solve a problem. If they don’t have the tools to solve it, maybe there’s room for me.

Are you still an active researcher?

I don’t teach. I have one collaborator, Dmitry Krotov (at the MIT–IBM Watson AI Lab in Cambridge, Massachusetts), who works in theoretical physics, and I have fun talking with him. I never do the mathematics myself now. But I certainly enjoy interacting with people who are trying to ask and answer important questions. It’s nice to be reminded of the breadth of problems people are working on. When I taught, there were always young people around, with different views; that’s how you stay young.

This interview has been edited for length and clarity.