For better deep neural network vision, just add feedback (loops)

While models of primate object recognition in the visual cortex have revolutionized artificial visual recognition systems, current deep learning systems are simplified approximations of that circuitry, and they fail to recognize some objects that are child’s play for primates such as humans.

In findings published in Nature Neuroscience, McGovern Institute investigator James DiCarlo and colleagues have found evidence that feedback improves recognition of hard-to-recognize objects in the primate brain, and that adding feedback circuitry also improves the performance of artificial neural network systems used for vision applications.

Deep convolutional neural networks (DCNNs) are currently the most successful models for accurately recognizing objects on a fast timescale. Their general architecture is inspired by the primate ventral visual stream, a series of cortical regions that progressively build an accessible, refined representation of viewed objects.
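As a rough illustration of that feedforward architecture, here is a minimal sketch, assuming PyTorch, of a purely feedforward convolutional stack; the layer sizes are arbitrary choices for illustration, not taken from the article or from any specific model:

```python
import torch
import torch.nn as nn

class FeedforwardDCNN(nn.Module):
    """A purely feedforward convolutional stack: pixels flow one way to a
    category readout, with no recurrent or feedback connections."""

    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),  # early stage, V1-like
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),          # intermediate, V2/V4-like
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(128, 256, kernel_size=3, padding=1),         # late stage, IT-like
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x):
        x = self.features(x)                  # a single feedforward sweep
        return self.classifier(x.flatten(1))  # category scores
```

The key property is that information flows strictly one way: each stage transforms the previous stage’s output exactly once, with no loops.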

Rather than trying to guess why deep learning has problems recognizing an object, the authors took an unbiased approach that turned out to be critical.

They presented the same images to the deep learning system, as well as to monkeys and humans, and homed in on “challenge images”, where the primates could easily recognize the objects but a feedforward DCNN ran into problems.
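To make the selection criterion concrete, here is a minimal sketch of that logic; the accuracy values and the 0.9/0.5 cutoffs are hypothetical illustrations, not data or thresholds from the study:

```python
import numpy as np

# Hypothetical per-image recognition accuracies over the same image set.
primate_acc = np.array([0.98, 0.95, 0.40, 0.97, 0.92])
dcnn_acc    = np.array([0.96, 0.30, 0.35, 0.25, 0.91])

# "Challenge images": easy for primates, hard for the feedforward model.
# The 0.9 and 0.5 cutoffs are illustrative assumptions, not the authors'.
challenge = (primate_acc > 0.9) & (dcnn_acc < 0.5)
print(np.flatnonzero(challenge))  # indices of challenge images -> [1 3]
```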

“What the computer vision community has recently achieved by stacking more and more layers onto artificial neural networks, evolution has achieved through a brain architecture with recurrent connections,” says Kohitij Kar, the study’s lead author.

“Since entirely feedforward deep convolutional nets are now remarkably good at predicting primate brain activity, it raised questions about the role of feedback connections in the primate brain. This study shows that, yes, feedback connections are very likely playing a role in object recognition after all.”
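To make the architectural contrast concrete, here is a hedged sketch, again assuming PyTorch with arbitrary sizes, of a convolutional block with a local recurrent connection, unrolled for a few timesteps so the representation can be iteratively refined; it is an illustrative stand-in, not the model from the paper:

```python
import torch
import torch.nn as nn

class RecurrentConvBlock(nn.Module):
    """A conv layer with a lateral recurrent connection, unrolled in time.

    Each timestep combines the fixed feedforward drive with a recurrent
    transform of the block's own previous output, so the representation is
    iteratively refined. An illustrative stand-in, not the paper's model.
    """

    def __init__(self, channels=64, steps=4):
        super().__init__()
        self.feedforward = nn.Conv2d(channels, channels, 3, padding=1)
        self.recurrent = nn.Conv2d(channels, channels, 3, padding=1)
        self.steps = steps

    def forward(self, x):
        drive = self.feedforward(x)        # computed once, as in a feedforward net
        h = torch.relu(drive)
        for _ in range(self.steps - 1):    # extra timesteps = extra processing
            h = torch.relu(drive + self.recurrent(h))
        return h

# Same input/output shape as a plain conv layer, but with recurrent refinement.
block = RecurrentConvBlock()
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```

Setting `steps=1` recovers an ordinary feedforward layer, which is exactly the contrast the study draws: extra timesteps buy additional processing without additional layers.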

In race for better batteries, Japan hopes to extend its lead

As their name implies, solid-state batteries use solid rather than liquid materials as an electrolyte.

Because they do not leak or give off flammable vapor, as lithium-ion batteries are prone to do, solid-state batteries are safer.

Solid-state batteries are a promising power source for internet-of-things devices, which are coming into wider use, and for electric cars, where they could offer greater range than the stacks of lithium-ion cells that power such vehicles today.

A Japanese public-private project aims to develop technology to make solid-state automotive batteries practical and to mass-produce them.

It hopes to come up with designs and develop manufacturing processes and testing methods for automotive solid-state batteries by the end of March 2023.

Meanwhile, a leading Chinese battery maker is working with Japan’s Honda Motor to develop batteries for electric vehicles aimed at the Chinese market, and is set to start supplying batteries to Germany’s Volkswagen.

Japan still has the edge in solid-state batteries, with its companies holding nearly half the patents in the world for related technologies.

A new tool from Google and OpenAI lets us better see through the eyes of artificial intelligence

Inside each neural network are layers of artificial neurons connected like webs.

The neural network that Google and OpenAI’s researchers tested for this work sorts images into categories, and those categories are wide-ranging: everything from wool to Windsor ties, from seat belts to space heaters.

Later research has taken this same basic feature-visualization approach and fine-tuned it: first targeting individual neurons within the network to see what excites them, then clusters of neurons, then combinations of neurons in different layers of the network.
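The basic move behind this line of work is often called activation maximization: start from noise and run gradient ascent on the input pixels until the image strongly excites a chosen unit. Here is a minimal sketch, assuming PyTorch and torchvision’s pretrained GoogLeNet; the layer and channel choices are arbitrary, and real feature-visualization tools add preprocessing, regularizers, and transformations omitted here:

```python
import torch
from torchvision import models

# Load a pretrained GoogLeNet (InceptionV1) and freeze its weights.
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture the output of one intermediate layer via a forward hook.
activations = {}
model.inception4a.register_forward_hook(
    lambda module, inputs, output: activations.update(value=output)
)

# Gradient ascent on the input image to excite one channel of that layer.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)
channel = 0  # which unit to visualize; an arbitrary illustrative choice

for _ in range(100):
    optimizer.zero_grad()
    model(image)
    loss = -activations["value"][0, channel].mean()  # maximize activation
    loss.backward()
    optimizer.step()
```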

What do Activation Atlases actually show us about the inner workings of algorithms? Well, you can start by just navigating around Google and OpenAI’s published example, built to unspool the innards of a well-known neural network called GoogLeNet, also known as InceptionV1.

Scrolling around, you can see how different parts of the network respond to different concepts, and how these concepts are clustered together.

Zooming in, you can see the particular activations the network uses to identify individual labels.
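Under the hood, an atlas of this kind is built by collecting activation vectors from one layer across a large image set, projecting them down to two dimensions (the published atlases use UMAP), binning the projection into a grid, and averaging the vectors in each cell; each averaged vector is then rendered with feature visualization. Here is a hedged sketch of the aggregation steps, assuming the umap-learn package and using random numbers as a stand-in for real network activations:

```python
import numpy as np
import umap  # the umap-learn package

# Stand-in for activation vectors collected from one layer over many images;
# in a real atlas these come from forward passes through the network.
rng = np.random.default_rng(0)
acts = rng.normal(size=(2000, 128))  # (num_samples, channels)

# Project the high-dimensional activations to 2-D, then bin into a grid.
coords = umap.UMAP(n_components=2, random_state=0).fit_transform(acts)
grid_size = 20  # arbitrary atlas resolution
cell = np.floor(
    (coords - coords.min(axis=0)) / (np.ptp(coords, axis=0) + 1e-9) * grid_size
).astype(int).clip(0, grid_size - 1)

# Average the activation vectors landing in each cell; each average is what
# would then be rendered with feature visualization (omitted here).
cells = {}
for key, vec in zip(map(tuple, cell), acts):
    cells.setdefault(key, []).append(vec)
atlas = {key: np.mean(vecs, axis=0) for key, vecs in cells.items()}
print(len(atlas), "atlas cells with averaged activation vectors")
```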

“I find it almost awe inducing to look through these atlases at higher resolutions and just see the giant space of things these networks can represent.”
