ottokart/punctuator2: A bidirectional recurrent neural network model with attention mechanism for restoring missing punctuation in unsegmented text

Demos are available (DEMO and DEMO2). The model restores missing inter-word punctuation in unsegmented text.

The model can be trained in two stages (the second stage is optional). The first stage is trained on punctuation-annotated text.

The second stage, which incorporates pause durations, can be used, for example, to restore punctuation in automatic speech recognition system output.

The optional second stage can be trained on punctuation- and pause-annotated text.

In this stage the model learns to combine pause durations with textual features and adapts to the target domain.
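The combination of pause durations with textual features can be pictured with a small NumPy sketch. All dimensions, weights, and the pause value below are invented for illustration; the real model uses recurrent layers with attention rather than a single dense layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch only: one way a second-stage model can combine a
# word's textual features with the pause duration that follows it.
embedding_dim = 8
word_embedding = rng.standard_normal(embedding_dim)  # textual features
pause_duration = np.array([0.35])                    # seconds of silence after the word

# Concatenate text and pause features into one input vector...
features = np.concatenate([word_embedding, pause_duration])

# ...and feed them through a single dense layer (tanh) standing in for
# the recurrent layers of the full model.
W = rng.standard_normal((4, embedding_dim + 1))
hidden = np.tanh(W @ features)
print(hidden.shape)  # (4,)
```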

With default settings, an optimal Theano installation, and a modern GPU, training speed should be around 10,000 words per second.

Example of punctuation-annotated text:

    to be ,COMMA or not to be ,COMMA that is the question .PERIOD

(Optional) Pause-annotated text files are used for training and validation of the second-stage model.
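As a sketch of how such training data could be produced, the following hypothetical helper converts ordinary punctuated text into the annotation style shown above. The token names simply mirror the example; the repository's actual preprocessing scripts may differ:

```python
# Hypothetical helper: convert ordinary punctuated text into the
# ",COMMA" / ".PERIOD" annotation style. Token names mirror the example.
PUNCT_TOKENS = {",": ",COMMA", ".": ".PERIOD", "?": "?QUESTIONMARK"}

def annotate(text):
    out = []
    for word in text.split():
        # Separate one trailing punctuation mark, if present.
        if word and word[-1] in PUNCT_TOKENS:
            stripped = word[:-1]
            if stripped:
                out.append(stripped.lower())
            out.append(PUNCT_TOKENS[word[-1]])
        else:
            out.append(word.lower())
    return " ".join(out)

print(annotate("To be, or not to be, that is the question."))
# to be ,COMMA or not to be ,COMMA that is the question .PERIOD
```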

This article was summarized automatically with AI / Article-Σ ™/ BuildR BOT™.


For better deep neural network vision, just add feedback (loops)

While modeling primate object recognition in the visual cortex has revolutionized artificial visual recognition systems, current deep learning systems are simplified and fail to recognize some objects that are child’s play for primates such as humans.

In findings published in Nature Neuroscience, McGovern Institute investigator James DiCarlo and colleagues have found evidence that feedback improves recognition of hard-to-recognize objects in the primate brain, and that adding feedback circuitry also improves the performance of artificial neural network systems used for vision applications.

Deep convolutional neural networks (DCNNs) are currently the most successful models for accurately recognizing objects on a fast timescale. Their general architecture is inspired by the primate ventral visual stream, a series of cortical regions that progressively build an accessible and refined representation of viewed objects.

Rather than trying to guess why deep learning was having problems recognizing an object, the authors took an unbiased approach that turned out to be critical.

They presented the deep learning system, as well as monkeys and humans, with the same images, homing in on “challenge images” where the primates could easily recognize the objects but a feedforward DCNN ran into problems.

“What the computer vision community has recently achieved by stacking more and more layers onto artificial neural networks, evolution has achieved through a brain architecture with recurrent connections,” says Kar.

“Since entirely feedforward deep convolutional nets are now remarkably good at predicting primate brain activity, it raised questions about the role of feedback connections in the primate brain. This study shows that, yes, feedback connections are very likely playing a role in object recognition after all.”
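Kar’s point about recurrence versus depth can be pictured numerically: a recurrent layer applied T times unrolls into a computation as deep as a T-layer feedforward stack, but with a single shared weight matrix. All sizes and weights below are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy contrast: T distinct feedforward layers vs. one recurrent layer
# applied T times (shared weights).
d, T = 5, 3
x = rng.standard_normal(d)

# Feedforward: T separate weight matrices, one per layer.
feedforward_weights = [rng.standard_normal((d, d)) for _ in range(T)]
h_ff = x
for W in feedforward_weights:
    h_ff = np.tanh(W @ h_ff)

# Recurrent: a single weight matrix reused at every step, so the
# unrolled computation is just as deep but has T times fewer parameters.
W_rec = rng.standard_normal((d, d))
h_rec = x
for _ in range(T):
    h_rec = np.tanh(W_rec @ h_rec)

print(h_ff.shape, h_rec.shape)  # (5,) (5,)
```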


Inside the ‘Black Box’ of a Neural Network

Neural networks have proven tremendously successful at tasks like identifying objects in images, but how they do so remains largely a mystery.

On Wednesday, Carter’s team released a paper that offers a peek inside, showing how a neural network builds and arranges visual concepts.

Olah’s team taught a neural network to recognize an array of objects with ImageNet, a massive database of images.

Neural networks are composed of layers of what researchers aptly call neurons, which fire in response to particular aspects of an image.
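The idea of a neuron firing in response to a particular aspect of an image can be pictured with a toy example; the “image patch” and weights below are made up for the illustration:

```python
import numpy as np

# Minimal sketch of "neurons firing": a layer is a weight matrix, and a
# neuron's activation is large when its weights match a pattern in the
# input.
patch = np.array([1.0, 0.0, 1.0, 0.0])      # a tiny image patch

# One neuron tuned to this alternating pattern, one tuned to its inverse.
weights = np.array([[1.0, -1.0, 1.0, -1.0],
                    [-1.0, 1.0, -1.0, 1.0]])

activations = np.maximum(0.0, weights @ patch)  # ReLU: fire or stay silent
print(activations)  # [2. 0.] -- the matching neuron fires, the other does not
```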

Researchers trying to understand how neural networks function have been fighting a losing battle, he points out, as networks grow more complex and rely on ever-larger amounts of computing power.

As an illustration, Olah pulls up an ominous photo of a fin slicing through turbid waters: Does it belong to a gray whale or a great white shark? As a human inexperienced in angling, I wouldn’t hazard a guess, but a neural network that’s seen plenty of shark and whale fins shouldn’t have a problem.

Neural networks are generally excellent at classifying objects in static images, but slip-ups are common, such as misidentifying humans of some races as gorillas rather than as humans.
