Over 1,400 tech leaders and academics, including Elon Musk, Apple co-founder Steve Wozniak and Pinterest co-founder Evan Sharp, have signed an open letter calling on artificial-intelligence developers to pause training of some of their more sophisticated AI experiments, citing risks they claim could result in “loss of control of our civilization.”
The letter, published by the Future of Life Institute on Wednesday, comes at an unprecedented period of growth for AI, which has garnered significant attention in recent months thanks to chatbots like ChatGPT, image generators like DALL-E 2 and Midjourney, and voice-cloning software. The letter suggests that the current AI development race may be reckless and lack consideration of unintended consequences for society, such as the spread of misinformation or the displacement of human jobs. Developers, the letter asks, should pause training of any system more powerful than the recently released GPT-4 for at least six months, and they should also create more robust safety protocols that would ensure their AI developments are “rigorously audited and overseen by independent outside experts.”
“It comes down to remaining in control of systems that are very powerful, maybe more powerful than we are. We don’t know how to control these models or how to exploit them fully,” says Mark Nitzberg, executive director at UC Berkeley’s Center for Human-Compatible AI and a signatory of the open letter. “There are all kinds of very enterprising innovations coming out every day, moving very quickly. It’s worth pausing beyond this point. The cat’s already out of the bag with GPT-4, but we could reasonably say this is a good time to take stock and make sure we’re being safe.”
AI has the potential to impact nearly every aspect of human society. ChatGPT has proven powerful enough to write essays and lines of code, while Midjourney managed to fool millions of people with a viral deepfake of Pope Francis wearing a Balenciaga puffer jacket earlier this week. AI is making its way into the music industry as well; AI songwriting tools are growing increasingly sophisticated, able to churn out melodies or beats in seconds. Earlier in March, some of the most powerful music trade organizations in the business started a coalition advocating for responsible use of AI to protect and assist artists.
The letter also calls upon AI developers to work with governments to develop more robust governance systems for oversight and enforcement around artificial intelligence. It’s easy to interpret the letter merely as a sounding of the alarm; it presents ominous and dystopian hypothetical scenarios in which, in a worst case, “machines flood our information channels with propaganda and untruth” or “we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us.” However, as the letter also acknowledges, if properly harnessed, AI could also lead to rapid societal advancements.
“Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society,” the letter said. “We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”
Nitzberg similarly acknowledges the potential benefits AI offers, but says developing such a system requires more consideration of factors like unintended consequences, along with assurances that humans remain in control of whatever function the AI serves. “To rebuild all of AI on these principles is going to take more time,” he says. “And meanwhile black-box systems are getting more and more powerful. And it is very difficult at this time to understand them.”
The sentiment Nitzberg wants readers to take from the letter comes down to ensuring humans keep autonomy rather than surrender it in the race to advance tech.
“We’ve seen societally shifting technology before. We introduced the automobile and it reshaped the countryside and the cities entirely; there wasn’t really a thought about ‘do we want this’ at the time and ‘how do we want it,’” he says. “But we should be deciding what we want to do with AI, instead of letting it sort of make the changes as they happen.”