Researchers are peering inside computer brains. What they’ve found will surprise you


Researchers say they have made an important finding that could have big implications for the study of computer brains and, possibly, human ones too.

OpenAI, the San Francisco–based A.I. research company, says that it has advanced methods for peering into the inner workings of artificial intelligence software known as neural networks, helping to make their notoriously opaque decision-making more interpretable. In the process, its researchers discovered that individual neurons in a large neural network can encode a particular concept, a finding that parallels one neuroscientists have glimpsed in the human brain.

Neural networks are a kind of machine-learning software loosely modeled on the human brain. The use of these networks has been responsible for most of the rapid advances in artificial intelligence in the past eight years, including the speech recognition found in digital assistants, facial recognition software, and new ways to discover drugs.

But one drawback of large neural networks is that it can be challenging to understand the rationale behind their decisions, even for the machine-learning experts who create them. As a result, it is difficult to know exactly when and how this software can fail. That has made people understandably reluctant to use such A.I. systems, even when they seem to outperform other kinds of automated software or humans. This has particularly been true in medical and financial settings, where a wrong decision may cost money or even lives.

“Because we don’t understand how these neural networks work, it can be hard to reason about their errors,” says Ilya Sutskever, OpenAI’s cofounder and chief scientist. “We don’t know if they are reliable or if they have hidden vulnerabilities that are not apparent from testing.”

But researchers at the company recently used several techniques to probe the inner workings of a large neural network they had created for identifying images and putting them into broad category buckets. The researchers discovered that individual neurons in the network were associated with one particular label or concept.

This was significant, OpenAI said in a blog post discussing its research, because it echoed findings from a landmark 2005 neuroscience study that found the human brain may have “grandmother” neurons that fire in response to one very specific image or concept. For instance, the neuroscientists discovered that one subject in their study seemed to have a neuron that was associated with the actress Halle Berry. The neuron fired when the person was shown an image of Berry, but the same neuron was also activated when the person heard the words “Halle Berry,” or when shown images associated with Berry’s iconic roles.

OpenAI’s research focused on an A.I. system it debuted in January that can perform a wide variety of image classification tasks with a high degree of accuracy, without being specifically trained for those tasks with labeled data sets. The system, called CLIP (short for Contrastive Language-Image Pre-training), ingested 400 million images culled from the Internet, paired with their captions. From this information, the technology learned to predict which of 32,768 text snippet labels was most likely to be associated with any given image, even one it had never encountered before. For instance, show CLIP a picture of a bowl of guacamole, and not only is it able to correctly label the image as guacamole but it also knows that guacamole is “a type of food.”
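For readers who want a concrete sense of what that zero-shot classification looks like in practice, here is a minimal sketch using the open-source clip package OpenAI released alongside the model. The image file and candidate labels are illustrative placeholders, not part of OpenAI’s study.

```python
# Minimal sketch of CLIP-style zero-shot classification with the open-source
# "clip" package. The image path and candidate labels are illustrative only.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Candidate captions; CLIP scores the image against each one.
labels = ["a photo of guacamole", "a photo of a burrito", "a photo of a salad"]
text = clip.tokenize(labels).to(device)
image = preprocess(Image.open("guacamole.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between the image and each caption, softmaxed into a
    # probability distribution over the candidate labels.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

Because the labels are ordinary text, the same model can be pointed at entirely new categories simply by changing the list of captions, which is what lets CLIP classify images it was never explicitly trained to recognize.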

In the new research, OpenAI used techniques that reverse engineer what makes a particular artificial neuron fire the most to build up a picture of that neuron’s “Platonic ideal” for a given concept. For instance, OpenAI probed one neuron associated with the concept “gold” and found that the image that most activated it would contain shiny yellow coin-like objects as well as a picture of the text “gold” itself. A neuron affiliated with “Spider-Man” was triggered in response to photos of a person dressed up as the comic book hero but also to the word “spider.” Interestingly, a neuron affiliated with the concept “yellow” fired in response to the words “banana” and “lemon,” as well as to the color and the word “yellow” itself.
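The article does not spell out OpenAI’s exact tooling, but the general technique, often called feature visualization or activation maximization, can be sketched as gradient ascent on an input image so that one chosen unit fires as strongly as possible. In the PyTorch sketch below, the model, layer, and neuron index are placeholders, and real pipelines add heavy regularization to make the resulting images recognizable.

```python
# Hedged sketch of feature visualization by gradient ascent: optimize an input
# image so that one chosen unit activates as strongly as possible.
# "model", "layer", and "neuron_idx" are placeholders supplied by the caller.
import torch

def visualize_neuron(model, layer, neuron_idx, steps=256, lr=0.05):
    """Return a 1x3x224x224 image tensor that strongly activates one unit."""
    image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([image], lr=lr)

    activation = {}
    def hook(_module, _inp, out):
        activation["value"] = out  # capture the layer's output each forward pass
    handle = layer.register_forward_hook(hook)

    for _ in range(steps):
        optimizer.zero_grad()
        model(image)
        # Maximize the mean activation of the chosen unit (minimize its negative).
        loss = -activation["value"][0, neuron_idx].mean()
        loss.backward()
        optimizer.step()
        image.data.clamp_(0.0, 1.0)  # keep pixel values in a valid range

    handle.remove()
    return image.detach()
```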

“This is maybe evidence that these neural networks are not as incomprehensible as we might think,” Gabriel Goh, the OpenAI researcher who led the team working on interpreting CLIP’s conceptual reasoning, told Fortune. In the future, such methods could help companies that use neural networks understand how those systems arrive at decisions and when a system is likely to fail or exhibit bias. They might also point a way for neuroscientists to use artificial neural networks to investigate how human learning and concept formation may take place.

Not every CLIP neuron was associated with a distinct concept. Many fired in response to a number of different conceptual categories. And some neurons seemed to fire together, possibly meaning that they represented a complex concept.

OpenAI said that some concepts that the researchers expected the system to have a neuron for were absent. Even though CLIP can accurately identify photographs of San Francisco and can often even identify the neighborhood of the city in which they were taken, the neural network did not seem to have a neuron associated with a concept of “San Francisco” or even “city” or “California.” “We believe this information to be encoded within the activations of the model somewhere, but in a more exotic way,” OpenAI said in its blog post.

In a demonstration that this technique can be used to uncover hidden biases in neural networks, the researchers discovered that CLIP also had what OpenAI dubbed a “Middle East” neuron that fired in response to images and words associated with the region, but also in response to those associated with terrorism, the company said. It had an “immigration” neuron that responded to Latin America. And it found a neuron that fired for both dark-skinned people and gorillas, which OpenAI noted was similar to the racist photo tagging that had previously caused problems for neural network–based image classification systems at Google.

Racial and gender biases hidden in large A.I. models, especially those trained on massive amounts of data culled from the Internet, have become an increasing area of concern for A.I. ethics researchers and civil society organizations.

The researchers also said that their methods had uncovered a particular bias in how CLIP makes decisions that would make it possible for someone to fool the A.I. into making incorrect identifications. The system associates the text of a word or symbol with a concept so strongly that if someone puts that symbol or word on a different object, the system will misclassify it. For instance, a dog with a big “$$$” sign on it might be misclassified as a piggy bank.
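A hedged sketch of how someone might probe for this kind of “typographic attack”: overlay a word on a copy of an image, then classify both versions against the same candidate labels (for example, with the zero-shot snippet above) and compare the predictions. The file names and the overlaid text below are hypothetical.

```python
# Hedged sketch of a "typographic attack" probe: stamp text onto an image and
# check whether the overlaid word drags the classifier's prediction toward it.
# File names and the overlaid text are placeholders for illustration.
from PIL import Image, ImageDraw

def overlay_text(path, text, out_path):
    img = Image.open(path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Stamp the distracting word onto the image (a larger font, loaded via
    # ImageFont.truetype, makes the effect easier to see).
    draw.text((10, 10), text, fill="white")
    img.save(out_path)
    return out_path

# Classify "dog.jpg" and "dog_with_text.jpg" with the same candidate labels and
# compare the results: a large shift toward the pasted symbol's concept would
# illustrate the bias described here.
overlay_text("dog.jpg", "$$$", "dog_with_text.jpg")
```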

“I think you definitely see a lot of stereotyping in the model,” Goh said. Sutskever said that being able to identify these biases was a first step toward trying to correct them, something he thought could be accomplished by providing the neural network with a relatively small number of additional training examples that are specifically designed to break the inappropriate correlation the system has learned.
