Meena is Google’s attempt at making true conversational AI

Talk to any of the best-known AI assistants today – Alexa, Siri, Google Assistant – and they’re not exactly conversational.

To share its progress toward deep learning models that can carry a conversation, Google today introduced Meena, a neural network with 2.6 billion parameters.

Meena can handle multiturn dialogue, and Google claims it’s better than other AI agents built for conversation and available online today.

Google today also released the Sensibleness and Specificity Average (SSA), a metric created by Google researchers to measure a conversational agent's ability to give responses that both make sense and are specific.
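As a rough illustration of the idea, SSA averages the fraction of responses human raters label sensible with the fraction they label specific. The sketch below uses illustrative names and a made-up label format, not Google's actual evaluation code:

```python
# Hedged sketch of an SSA-style score, assuming each response has
# been given boolean (sensible, specific) labels by human raters.
# Function and variable names are illustrative, not from Google.

def ssa(labels):
    """labels: list of (sensible: bool, specific: bool), one per response."""
    sensibleness = sum(s for s, _ in labels) / len(labels)
    specificity = sum(p for _, p in labels) / len(labels)
    return (sensibleness + specificity) / 2

# Three rated responses: two sensible, only one of them also specific,
# giving sensibleness 2/3 and specificity 1/3.
print(ssa([(True, True), (True, False), (False, False)]))
```

A response that doesn't make sense can't be specific, so in practice specificity is only judged for sensible responses; this toy version skips that detail.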

The SSA standard Google proposes differs from the benchmark other AI assistant makers have set for assessing truly conversational AI. Now in its third year, the Alexa Prize is a challenge for teams of student developers to create AI that can hold a conversation for up to 20 minutes.

Alexa's Conversations feature, for its part, packages voice app recommendations into conversational multiturn dialogue.

AI assistants that can maintain a conversation may be able to forge closer bonds with humans – for example, by providing emotional support, or even helping cure the loneliness epidemic, as former Alexa Prize head and current Google Research director Ashwin Ram put it in 2017.

This article was summarized automatically with AI / Article-Σ ™/ BuildR BOT™.


ProBeat: Why Google is really calling for AI regulation

On Sunday, the Financial Times published an op-ed penned by Sundar Pichai titled “Why Google thinks we need to regulate AI.” Whether he wrote it himself or merely signed off on it, Pichai clearly wants the world to know that as the CEO of Alphabet and Google, he believes AI is too important not to be regulated.

Yes, Google wants “AI regulation.” But it’s not for the same reasons you or I might.

It’s telling that the only concrete example Pichai offers in his op-ed is that Europe’s General Data Protection Regulation “can serve as a strong foundation.” Remember, while privacy advocates don’t scoff at what GDPR has achieved, many also point out that it has been a boon for Google and Facebook.

That’s exactly where I suspect Google’s lobbying dollars will go next – ensuring any upcoming “AI regulation” helps Google more than anything else.

What if Google spent its lobbying money educating the U.S. government about the pros and cons of AI instead? I doubt the Financial Times op-ed cost much, all things considered.

If Google did the work, it wouldn’t need to try to convince the public with the power of the pen.

Plus, journalists would spend their Fridays writing about Google’s efforts to outline what “AI regulation” might look like rather than critiquing an op-ed.


Google’s AI language model Reformer can process the entirety of novels

Whether it’s language, music, speech, or video, sequential data isn’t easy for AI and machine learning models to comprehend – particularly when there’s dependence on extensive surrounding context.

Extracting features and learning to make predictions from them is how all AI models work, but Transformers uniquely employ attention, such that every output element is connected to every input element.
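That all-to-all connection pattern can be sketched in a few lines. Below is a minimal, toy NumPy version of scaled dot-product self-attention – an illustration of the mechanism, not Google's implementation – where every output row is a weighted mix of all input rows:

```python
# Minimal sketch of scaled dot-product self-attention (the core of
# the Transformer). Toy dimensions and random data for illustration.
import numpy as np

def attention(Q, K, V):
    """Each output row is a softmax-weighted combination of ALL rows
    of V, so every output element depends on every input element."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over inputs
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))    # 5 input tokens, 8-dim embeddings
out = attention(x, x, x)       # self-attention: Q = K = V = x
print(out.shape)               # one 8-dim output per input token
```

The quadratic cost of those pairwise scores over long sequences is exactly the bottleneck Reformer is designed to reduce.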

As my colleague Khari Johnson notes, one of the biggest machine learning trends of 2019 was the continued growth and proliferation of natural language models based on this Transformer design.

Google open-sourced BERT, a Transformer-based model, in 2018.

The research team experimented with Reformer-based models on images and text, using them to generate missing details in images and process the entirety of Crime and Punishment.

The researchers leave applying Reformer to even longer sequences, and improving its handling of positional encodings, to future work. “We believe Reformer gives the basis for future use of Transformer models, both for long text and applications outside of natural language processing,” added Kaiser and Kitaev.

In an interview late last year, Google AI chief Jeff Dean told VentureBeat that larger context would be a principal focus of Google’s work going forward.
