Elon Musk-backed AI Company Claims It Made a Text Generator That’s Too Dangerous to Release

Researchers at the non-profit AI research group OpenAI just wanted to train their new text generation software to predict the next word in a sentence.

The researchers used 40GB of data pulled from 8 million web pages to train the GPT-2 software. That's ten times the amount of data they used for the first iteration of GPT. The dataset was pulled together by trawling through Reddit and selecting links to articles that had more than three upvotes.

The OpenAI researchers found that GPT-2 performed very well when it was given tasks that it wasn't necessarily designed for, like translation and summarization.

Elon Musk has been clear that he believes artificial intelligence is the "biggest existential threat" to humanity.

Rather than releasing the fully trained model, OpenAI is releasing a smaller model for researchers to experiment with.
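The Reddit-based collection step described above can be sketched in a few lines. This is an illustrative reconstruction only, not OpenAI's actual code: the `posts` structure and the `url`/`upvotes` field names are assumptions, and the only fact taken from the article is the "more than three upvotes" threshold.

```python
def filter_links(posts, min_upvotes=4):
    """Keep outbound article URLs from posts with more than three upvotes
    (i.e. at least min_upvotes=4), per the filtering rule described above."""
    return [p["url"] for p in posts if p["upvotes"] >= min_upvotes]

# Hypothetical example input; real collection would page through Reddit posts.
posts = [
    {"url": "https://example.com/a", "upvotes": 12},
    {"url": "https://example.com/b", "upvotes": 2},
]
print(filter_links(posts))  # only the first link passes the upvote filter
```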

This article was summarized automatically with AI / Article-Σ ™/ BuildR BOT™.

Original link

How We Can Prepare for Catastrophically Dangerous AI—and Why We Can’t Wait

Fueled primarily by the powers of machine learning, we've entered a golden era of AI research, with no apparent end in sight.

"If you think about it, what happens to chimpanzees is no longer up to them, because we humans control their environment by being more intelligent."

What makes ASI particularly dangerous is that it will operate beyond human levels of control and comprehension.

"We do not yet know how to control ASI, because, traditionally, control over other entities seems to require the ability to out-think and out-anticipate—and, by definition, we cannot out-think and out-anticipate ASI."

AI ethics boards are starting to become commonplace, for example, along with standards projects to ensure safe and ethical machine intelligence. We also need to develop standards for safe AI system design, he added.

"AI researchers can organize to uphold ethical and safe AI development procedures, and research organizations can set up processes for whistle-blowing." Action is also required at the international level.
