In 2012, artificial intelligence researchers demonstrated a dramatic improvement in computers’ ability to recognize images by feeding a neural network millions of labeled images from a database called ImageNet.
Feed a neural network a billion words of raw text, as Peters’ team did, and this pretraining approach turns out to be quite effective: predicting the next word in a sentence requires no hand-labeling, so the text supplies its own training signal.
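The core trick of language-model pretraining can be sketched with a toy example. The snippet below is a minimal illustration, not anything from Peters’ work: real models like ELMo use deep neural networks trained on billions of words, but the supervision principle is the same, since each next word in the text acts as a free label. The function names and tiny corpus here are purely hypothetical.

```python
from collections import Counter, defaultdict

def train_bigram_lm(text):
    """Count word-pair frequencies: each next word serves as a free label."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the word most frequently observed after `word`."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Toy "corpus"; a real language model would see billions of words.
corpus = "the cat sat on the mat and the cat slept"
lm = train_bigram_lm(corpus)
print(predict_next(lm, "the"))  # prints "cat" ("the cat" occurs twice)
```

No human annotation was needed: the labels came from the text itself, which is why unlabeled web-scale corpora are enough to pretrain such models.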
The most widely tested model so far is called Embeddings from Language Models, or ELMo.
But recent research from fast.ai, OpenAI, and the Allen Institute for AI suggests a potential breakthrough, with more robust language models that can help researchers tackle a range of unsolved problems.
For languages other than English, researchers often don’t have enough labeled data to accomplish even basic tasks.