As AI continues to revolutionize how we interact with technology, there’s no denying that it’s going to have an incredible impact on our future. There’s also no denying that AI has some pretty serious risks if left unchecked.
Enter a new team of experts assembled by OpenAI.
Designed to help fight what it calls “catastrophic” risks, the team of experts at OpenAI — called Preparedness — plans to evaluate current and projected future AI models for several risk factors. Those include individualized persuasion (or matching the content of a message to what the recipient wants to hear), overall cybersecurity, autonomous replication and adaptation (or an AI changing itself on its own), and even extinction-level threats like chemical, biological, radiological, and nuclear attacks.
If AI starting a nuclear war seems a little far-fetched, remember that it was just earlier this year that a group of top AI researchers, engineers, and CEOs including Google DeepMind CEO Demis Hassabis ominously warned, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
How could AI possibly cause a nuclear war? Computers are already central to determining when, where, and how military strikes happen, and AI will almost certainly become involved. But AI is prone to hallucinations and doesn’t necessarily share the values or judgment a human would bring to such a decision. In short, AI might decide it’s time for a nuclear strike when it’s not.
“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models,” a statement from OpenAI read, “have the potential to benefit all of humanity. But they also pose increasingly severe risks.”
To help keep AI in check, OpenAI says, the team will focus on three main questions:
- When purposefully misused, just how dangerous are the frontier AI systems we have today and those coming in the future?
- If frontier AI model weights were stolen, what exactly could a malicious actor do?
- How can a framework that monitors, evaluates, predicts, and protects against the dangerous capabilities of frontier AI systems be built?
Heading this team is Aleksander Madry, Director of the MIT Center for Deployable Machine Learning and a faculty co-lead of the MIT AI Policy Forum.
To expand its research, OpenAI also launched what it’s calling the “AI Preparedness Challenge” for catastrophic misuse prevention. The company is offering up to $25,000 in API credits to as many as 10 top submissions that describe probable, but potentially catastrophic, misuses of OpenAI’s models.