Google’s AI language model Reformer can process the entirety of novels

Whether it’s language, music, speech, or video, sequential data isn’t easy for AI and machine learning models to comprehend, particularly when it depends on extensive surrounding context.

Extracting features from such data and learning to make predictions from them is how all AI models work, but the Transformer is unique in relying on attention, in which every output element is connected to every input element and the weightings between them are computed dynamically.
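For intuition, here is a minimal NumPy sketch (not from the article) of the dense self-attention at the heart of the Transformer. The function name and toy dimensions are illustrative choices; the point is that the seq_len × seq_len score matrix is what ties every output element to every input element, and it is also why cost grows quadratically with sequence length, the bottleneck Reformer is designed to ease.

```python
import numpy as np

def full_self_attention(x, w_q, w_k, w_v):
    """Dense self-attention: every output position attends to every input position."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # project inputs to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len) score matrix: quadratic in length
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over every input position
    return weights @ v                               # each output mixes information from all inputs

# Toy usage: a 16-token sequence with 8-dim embeddings and a 4-dim attention head
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 4)) for _ in range(3))
print(full_self_attention(x, w_q, w_k, w_v).shape)  # (16, 4)
```

Doubling the sequence length quadruples the size of that score matrix, which is why full attention becomes impractical for novel-length inputs.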

As my colleague Khari Johnson notes, one of the biggest machine learning trends of 2019 was the continued growth and proliferation of natural language models based on this Transformer design.

Google open-sourced BERT, a Transformer-based model, in 2018.

The research team experimented with Reformer-based models on images and text, using them to generate missing details in images and process the entirety of Crime and Punishment.
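As a rough illustration of that long-text capability, the sketch below samples from a publicly released Reformer checkpoint trained on Crime and Punishment. The Hugging Face transformers port of Reformer, the google/reformer-crime-and-punishment checkpoint name, and the prompt are assumptions on my part, not details taken from the article.

```python
# Sketch: sampling from a Reformer language model trained on Crime and Punishment.
# Assumes the Hugging Face `transformers` Reformer port and the publicly hosted
# "google/reformer-crime-and-punishment" checkpoint (not mentioned in the article).
from transformers import ReformerModelWithLMHead, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment")

prompt = "A few months later"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Reformer's memory-efficient attention keeps long generations tractable.
output_ids = model.generate(input_ids, max_length=120, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0]))
```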

Kaiser and Kitaev, Reformer’s creators, leave applying the model to even longer sequences and improving its handling of positional encodings to future work. “We believe Reformer gives the basis for future use of Transformer models, both for long text and applications outside of natural language processing,” they added.

In an interview late last year, Google AI chief Jeff Dean told VentureBeat that larger context would be a principal focus of Google’s work going forward.
