The IAEA is not the model AI luminaries should follow if they are serious about the risks of artificial intelligence. Multilateral cooperation is slow and could not respond effectively to a technology moving at such breakneck speed. Indeed, nuclear arsenals grew dramatically in the first decade of the IAEA's existence. The onus for AI safety is on the developers themselves, who cannot shrug off this burden onto others. AI developers must work with one another and with governments to protect humanity from AI risks.
While international organizations are far from perfect, they are our best chance to get ahead of the worst consequences of unchecked AI development. With such fierce global competition in the digital world, a patchwork, country-by-country model simply would not suffice. The risks of AI are comparable to those of nuclear war and infectious disease, and they could shake our world to its core. Countries around the world are treating this issue with the gravity it deserves as they get the ball rolling on international guidelines.