How We Can Prepare for Catastrophically Dangerous AI—and Why We Can’t Wait

Fueled primarily by advances in machine learning, we have entered a golden era of AI research, with no apparent end in sight.

“If you think about it, what happens to chimpanzees is no longer up to them, because we humans control their environment by being more intelligent.”

We also need to develop standards for safe AI system design, he added.

“AI researchers can organize to uphold ethical and safe AI development procedures, and research organizations can set up processes for whistle-blowing.”

Action is also required at the international level.

“We do not yet know how to control ASI, because, traditionally, control over other entities seems to require the ability to out-think and out-anticipate—and, by definition, we cannot out-think and out-anticipate ASI.”

What makes ASI particularly dangerous is that it will operate beyond human levels of control and comprehension.

AI ethics boards are starting to become commonplace, for example, along with standards projects to ensure safe and ethical machine intelligence.
