Don’t be fooled. The call to pause AI research is purely symbolic.

More than 1,000 people, including Elon Musk, Andrew Yang, and Steve Wozniak, signed a statement calling for a six-month pause in the development of large-language-model-based artificial intelligence systems like ChatGPT. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the statement says.

It’s a head-fake. The only impact of this toothless statement will be the publicity it generates.

Why the statement will cause no pause

While the potential harm from unpredictable generative AI systems is real, it’s hard to take the statement seriously. Google, Microsoft, and Apple will not unilaterally slow down and disarm. Researchers in other countries will not slow their pursuit of AI. Venture capitalists will not stop funding the hottest research area in tech. There’s a lot of money to be made here from automating tasks that humans currently do or could never do — writing, making pictures, analyzing patterns, identifying trends, predicting stock movements, and so on ad infinitum.

US lawmakers are unlikely to be able to get their heads around what AI is and what it does, let alone agree on what to do about it. And even if they did, they’re not about to cede leadership to other countries by attempting to regulate AI. It would likely take a decade for there to be any international agreement, and would you really trust China, Russia, and Saudi Arabia to put the brakes on any AI efforts?

So the statement is purely symbolic. It raises the visibility of an important issue; that’s all. Until an actual AI-generated disaster takes place, expect no action.

Analyzing the language in the statement

As I traditionally do, I’ll take apart the language in the statement so you can see where it is weak and toothless. Pay particular attention to the passive-voice statements about things that should happen, with no indication of who should do them. To see the numbered references, go to the original statement.

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

This is impotent hand-wringing, as revealed by the passive “should be planned for and managed with commensurate care and resources.” I’m not denying the problem; the potential for damage from unmanaged AI is real. But this is the reality of technology and capitalism in America and worldwide. Saying “something should be done” does very little to make anything actually happen, even if 1,000 people sign a pledge to support it. Notice that none of the signatories are senior executives from major players here like Apple, Google, Microsoft, or OpenAI.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

After a litany of increasingly alarming rhetorical questions (“Should we risk loss of control of our civilization?” Gee, I dunno, should we?), we arrive once again at the passive, hope-based non-plan for solving the problem. Specifically:

  • “Such decisions must not be delegated to unelected tech leaders.” Who delegated them? Seems to me the tech leaders make the decisions, in the absence of any legislation. That’s capitalism.
  • “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” Translated into the active voice, this would read: “Developers should develop powerful AI systems only once we are confident that their effects will be positive and their risks will be manageable.” How are we going to gain such confidence? And how will you restrain the developers? I don’t mean just one or two: how will you restrain every AI researcher on the planet? Given the speed of computing advances, soon you’ll be able to run a powerful AI large language model on a few rented servers. So how will you restrain a thousand curious developers?
  • “This confidence must be well justified and increase with the magnitude of a system’s potential effects.” How will it be justified, and by whom? More wishful thinking with no responsibility assigned to anyone.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

Finally, active voice statements reveal what the statement actually intends. Labs should pause. Labs should offer public verification. It should include “all key actors” (which is likely a very long list). And if such a pause cannot be enacted (by zombies?), governments should regulate. Even putting aside the political challenge, that’s a pretty broad statement with a lot of niggling details that are going to be tough to resolve.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

This is clear and written in the active voice, although I’m a bit vague on how to ensure a new, unexplored technology is safe “beyond a reasonable doubt,” which is a standard from US criminal law. (And no, footnote 4 does not offer much in the way of specifics.)

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

Outrageously meaningless. Passive (“should be refocused”). Vague (what does it mean to “refocus” research and development, which inherently veers into unknown spaces?). The “more” in this statement is a weasel word (How much more?). How do you measure accuracy? Safety? Interpretability (what does that mean)? Transparency? Robustness (one of the vaguest words in all of technology)? Aligned with what? Trustworthy, by what measure? Loyal to whom??? This is a soft-brained whine in the wilderness — a vain hope for things to be “better” on a bunch of indefinable scales.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

This is the most useful part of the statement. It’s relatively clear and active with a bunch of moderately specific suggestions. It’s still not clear to me who is going to pay for this. And expecting AI developers to work hand-in-hand with policymakers is, to say the least, a statement of sheer optimism.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5]  We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

Sounds like socialism.

Medical research into human cloning — the precedent cited here — had limited potential for profit and, crucially, a small pool of expert practitioners who cooperated across national boundaries. It was (relatively) easy to regulate. AI research has vast profit potential, thousands of practitioners, no international regime of cooperation, and an unpredictable set of outcomes, many hugely beneficial and others surely pernicious.

“Stop doing dangerous AI” is an excellent bumper sticker. The open letter will have a significant symbolic impact. But as a blueprint for action, it’s vacuous.

I’d like to offer a solution to this problem. Regrettably, I’m not anywhere near expert enough to do so. I just know that this statement is not that solution.


11 Comments

  1. The people who signed this pathetic attempt at exonerating themselves must be hoping that history, and their descendants, will be gentler with them in the future as their contributions to the “profound risks to society and humanity” become evident. Methinks they doth protest too much. And too late. Interesting times indeed. Humans’ hubris and greed trump humility and generosity to our peril.

  2. I really enjoy and learn from the breakdowns of corporate messaging that you have done over the years. I was wondering if an organization has ever contacted you for your editing skills before they release one of these “communications”. (Better Call Josh)

  3. To me this letter comes across as a thinly-disguised propaganda piece; it’s very notable that the big-4 AI companies are absent as signatories. My opinion: all these B-list level companies here are hoping to invigorate a public backlash against the big-4 in order to give themselves more time to catch up in the space. But maybe my well-aged cynicism is just too strong today?

  4. The problem of safety and alignment is pretty intractable. I don’t know what pausing would do except help the non-abiding competition. The cat is simply out of the bag at this stage.