Elon Musk-backed AI Company Claims It Made a Text Generator That’s Too Dangerous to Release

The OpenAI researchers found that GPT-2 performed remarkably well on tasks it wasn't explicitly designed for, such as translation and summarization.
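As a rough illustration of that zero-shot behavior, the sketch below prompts a GPT-2 checkpoint with a "TL;DR:" suffix, the trick the GPT-2 paper describes for coaxing summaries out of the model without any summarization training. The library choice (Hugging Face transformers), the small publicly released `gpt2` checkpoint, the sample text, and the generation settings are all assumptions for illustration, not OpenAI's own tooling.

```python
# A rough sketch of zero-shot summarization with the small, publicly
# released GPT-2 checkpoint. The "TL;DR:" prompt trick comes from the
# GPT-2 paper; the library and generation settings here are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

article = (
    "OpenAI trained a large language model, GPT-2, on text scraped from "
    "millions of web pages and found it could attempt tasks such as "
    "translation and summarization without task-specific training."
)

# Appending "TL;DR:" nudges the model to continue with a summary.
prompt = article + "\nTL;DR:"
result = generator(prompt, max_new_tokens=40, do_sample=True, top_k=50)

# Print only the newly generated continuation, not the prompt itself.
print(result[0]["generated_text"][len(prompt):].strip())
```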

The researchers used 40GB of data pulled from 8 million web pages to train the GPT-2 software.

Researchers at the non-profit AI research group OpenAI just wanted to train their new text generation software to predict the next word in a sentence.

That 40GB of training data is ten times the amount used for the first iteration of GPT.

Elon Musk has been clear that he believes artificial intelligence is the “biggest existential threat” to humanity.

The dataset was pulled together by trawling through Reddit and selecting links to articles that had more than three upvotes.
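That description amounts to a simple link filter, sketched in miniature below. The `Submission` class, the sample posts, and the threshold handling are hypothetical stand-ins for illustration; OpenAI's actual WebText collection pipeline is not reproduced here.

```python
# A minimal sketch of the kind of filter described above: keep outbound
# links from Reddit posts that cleared a small upvote threshold.
# The Submission structure and sample data are hypothetical.
from dataclasses import dataclass

@dataclass
class Submission:
    url: str        # outbound link the Reddit post points to
    upvotes: int    # score the post received

def select_training_links(submissions, min_upvotes=3):
    """Return URLs whose posts received more than `min_upvotes` upvotes."""
    return [s.url for s in submissions if s.upvotes > min_upvotes]

posts = [
    Submission("https://example.com/article-a", upvotes=12),
    Submission("https://example.com/article-b", upvotes=2),
]
print(select_training_links(posts))  # only article-a passes the filter
```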

Rather than releasing the fully trained model, OpenAI is releasing a smaller model for researchers to experiment with.
