Quanta Magazine

These “convolutional neural networks” have proved surprisingly adept at learning patterns in two-dimensional data – especially in computer vision tasks like recognizing handwritten words and objects in digital images.

“The same idea that there’s no special orientation – they wanted to get that into neural networks,” said Kyle Cranmer, a physicist at New York University who applies machine learning to particle physics data.

Michael Bronstein, a computer scientist at Imperial College London, coined the term “geometric deep learning” in 2015 to describe nascent efforts to get off flatland and design neural networks that could learn patterns in nonplanar data.

A convolutional neural network slides many of these “windows” over the data like filters, with each one designed to detect a certain kind of pattern in the data.
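
To make the sliding-window picture concrete, here is a minimal sketch in Python (using NumPy and SciPy; the tiny stripe image and the 1×2 edge-detecting window are illustrative choices, not taken from the article):

```python
import numpy as np
from scipy.ndimage import correlate

# A tiny grayscale image: all dark except one bright vertical stripe.
image = np.zeros((6, 6))
image[:, 3] = 1.0

# A 1x2 window that responds where brightness jumps from left to right,
# i.e. a crude vertical-edge detector.
window = np.array([[-1.0, 1.0]])

# Slide the window over every pixel position; each output value records
# how strongly the pattern appears at that location.
feature_map = correlate(image, window, mode='constant')
print(feature_map)  # nonzero only along the stripe's left and right edges
```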

Convolutional networks became one of the most successful methods in deep learning by exploiting a simple example of this principle called “translation equivariance.” A window filter that detects a certain feature in an image – say, vertical edges – will slide over the plane of pixels and encode the locations of all such vertical edges; it then creates a “feature map” marking these locations and passes it up to the next layer in the network.
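
That equivariance can be checked numerically. Below is a short sketch that assumes periodic (wrap-around) borders, so a circular shift of the input corresponds exactly to a circular shift of the output; real convolutional layers satisfy this only approximately near image borders:

```python
import numpy as np
from scipy.ndimage import correlate

image = np.random.rand(8, 8)
edge_filter = np.array([[-1.0, 1.0]])  # the same crude vertical-edge detector

def feature_map(x):
    # mode='wrap' makes the filtering periodic, so circular shifts
    # commute with it exactly.
    return correlate(x, edge_filter, mode='wrap')

def shift(x):
    return np.roll(x, 2, axis=1)  # slide the image 2 pixels to the right

# Translation equivariance: filtering a shifted image gives the same
# result as shifting the filtered image.
assert np.allclose(feature_map(shift(image)), shift(feature_map(image)))
```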

“The point about equivariant neural networks is [to] take these obvious symmetries and put them into the network architecture so that it’s kind of free lunch,” Maurice Weiler said.
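
As a toy illustration of that free lunch, here is a sketch of the “lifting” step used in group-equivariant convolutional networks (the function name is ours, and this is not any particular published implementation): the same learned filter is applied in all four 90-degree rotations, so rotating the input merely rotates and reshuffles the output channels instead of scrambling them.

```python
import numpy as np
from scipy.ndimage import correlate

def rotation_lifted_conv(image, kernel):
    """Apply the same filter in all four 90-degree rotations,
    producing one feature map per orientation."""
    return [correlate(image, np.rot90(kernel, k), mode='wrap')
            for k in range(4)]

image = np.random.rand(8, 8)   # square, so 90-degree rotations are exact
kernel = np.random.rand(3, 3)  # odd size keeps the filter centered

maps = rotation_lifted_conv(image, kernel)
rotated_maps = rotation_lifted_conv(np.rot90(image), kernel)

# Rotating the input doesn't scramble the output: it just rotates each
# feature map and cyclically permutes the four orientation channels.
for k in range(4):
    assert np.allclose(rotated_maps[k], np.rot90(maps[(k - 1) % 4]))
```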

“We’re now able to design networks that can process very exotic kinds of data, but you have to know what the structure of that data is” in advance, he said.
