The AI giant predicts human-like machine intelligence could arrive within 10 years, so it wants to be ready to control it within four.
OpenAI is seeking researchers to work on containing super-smart artificial intelligence with other AI. The end goal is to mitigate the threat of human-like machine intelligence, a prospect that may or may not be science fiction.
“We need scientific and technical breakthroughs to steer and control AI systems much smarter than us,” wrote OpenAI Head of Alignment Jan Leike and co-founder and Chief Scientist Ilya Sutskever in a blog post.
OpenAI’s Superalignment team is now recruiting
The Superalignment team will devote 20% of OpenAI’s total compute power to training what they call a human-level automated alignment researcher to keep future AI products in line. Toward that end, OpenAI’s new Superalignment group is hiring a research engineer, research scientist and research manager.
OpenAI says the key to controlling an AI is alignment, or making sure the AI performs the job a human intended it to do.
The company has also stated that one of its objectives is the control of “superintelligence,” or AI with greater-than-human capabilities. It’s important that these science-fiction-sounding hyperintelligent AI “follow human intent,” Leike and Sutskever wrote. They anticipate the development of superintelligent AI within the next decade and want to have a way to control it within the next four years.
SEE: How to build an ethics policy for the use of artificial intelligence in your organization (TechRepublic Premium)
AI trainer may keep other AI models in line
Today, AI training requires a lot of human input. Leike and Sutskever propose that a future challenge of developing AI might be adversarial in nature: namely, “our models’ inability to successfully detect and undermine supervision during training.”
Therefore, they say, training an AI that can outthink its creators will require a specialized AI of its own. This automated alignment researcher, which will train other AI models, will also help OpenAI stress-test and reassess the company’s entire alignment pipeline.
Changing the way OpenAI handles alignment involves three major goals:
- Creating AI that assists in evaluating other AI and understanding how those models interpret the kind of oversight a human would usually perform.
- Automating the search for problematic behavior or internal data within an AI.
- Stress-testing this alignment pipeline by intentionally training “misaligned” AI models to ensure that the alignment AI can detect them.
Personnel from OpenAI’s previous alignment team and other teams will work on Superalignment along with the new hires. The creation of the new team reflects Sutskever’s interest in superintelligent AI. He plans to make Superalignment his primary research focus.
Superintelligent AI: Real or science fiction?
Whether “superintelligence” will ever exist is a matter of debate.
OpenAI frames superintelligence as a tier above generalized intelligence, a human-like class of AI that some researchers say will never exist. However, some Microsoft researchers argue that GPT-4’s high scores on standardized tests put it close to the threshold of generalized intelligence.
Others doubt that intelligence can really be measured by standardized tests, or wonder whether the very idea of generalized AI poses a philosophical rather than a technical challenge. Large language models can’t interpret language “in context” and therefore don’t approach anything like human-like thought, a 2022 study from Cohere for AI pointed out. (Neither of these studies is peer-reviewed.)
SEE: Some high-risk uses of AI could be covered under the laws being developed in the European Parliament. (TechRepublic)
OpenAI aims to get ahead of the speed of AI development
OpenAI frames the threat of superintelligence as possible but not imminent.
“We have a lot of uncertainty over the speed of development of the technology over the next few years, so we choose to aim for the more difficult target to align a much more capable system,” Leike and Sutskever wrote.
They also point out that improving safety in existing AI products like ChatGPT is a priority, and that discussion of AI safety should also include “risks from AI such as misuse, economic disruption, disinformation, bias and discrimination, addiction and overreliance, and others” and “related sociotechnical problems.”
“Superintelligence alignment is fundamentally a machine learning problem, and we think great machine learning experts — even if they’re not already working on alignment — will be critical to solving it,” Leike and Sutskever said in the blog post.
This post originally appeared on TechToday.