Inside each neural network are layers of artificial neurons connected like webs.
In the neural network that Google and OpenAI's researchers tested for this work, these categories were wide-ranging: everything from wool to Windsor ties, from seat belts to space heaters.
Later research has taken this same basic approach and fine-tuned it: first targeting individual neurons within the network to see what excites them, then clusters of neurons, then combinations of neurons in different layers of the network.
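The neuron-level probing described above is often done by gradient ascent: start from a near-blank input and nudge it, step by step, in whatever direction most increases a chosen neuron's activation. Here is a minimal toy sketch of that idea, with a single linear layer and random stand-in weights rather than anything resembling GoogLeNet; the layer sizes, step count, and learning rate are arbitrary choices for illustration.

```python
import numpy as np

# Toy sketch of neuron-level feature probing: gradient ascent on an
# input vector to maximize one neuron's activation. The weights are
# random stand-ins, not parameters from a real trained network.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))      # one linear layer: 16 inputs -> 8 neurons
x = rng.normal(size=16) * 0.01    # start from a near-blank input
neuron = 3                        # the neuron we want to excite

for _ in range(100):
    grad = W[neuron]              # d(activation[neuron]) / dx for a linear layer
    x += 0.1 * grad               # step uphill in activation
    x /= np.linalg.norm(x)        # keep the input on the unit sphere

print(W[neuron] @ x)              # final activation of the chosen neuron
```

For a linear neuron constrained to unit-norm inputs, this converges to the input aligned with the neuron's weight vector; in a real deep network the same loop uses backpropagated gradients and the resulting inputs are the dream-like images that atlases tile together.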
What do Activation Atlases actually show us about the inner workings of algorithms? Well, you can start by just navigating around Google and OpenAI’s example here, built to unspool the innards of a well-known neural network called GoogLeNet or InceptionV1.
Scrolling around, you can see how different parts of the network respond to different concepts, and how these concepts are clustered together.
In the image below, you can see the various activations the neural network uses to identify labels like these.
“I find it almost awe inducing to look through these atlases at higher resolutions and just see the giant space of things these networks can represent.”