Japanese Firms Warn AI Could Collapse Social Order


The Facts

  • Nippon Telegraph and Telephone (NTT), Japan's largest telecom company, and Yomiuri Shimbun Group Holdings, publisher of the country's biggest newspaper, have jointly called for legislation to curb the unrestrained use of artificial intelligence (AI).

  • They warned that democracy and social order could be in peril in the face of unhindered AI development.

The Spin

Narrative A

AI systems struggle with hallucinations, confidently generating inaccurate information. Even with guardrails in place, these errors persist and carry real consequences. Fully eradicating the problem may prove difficult, and perfect accuracy remains a distant goal. With no immediate fix in sight, trust in AI responses must remain sparing.

Narrative B

AI errors ought to be viewed as creative experimentation. Rather than aiming for specific outcomes, the focus should be on embracing AI's unpredictable nature. While hallucinations are a genuine concern in fields like finance and healthcare, ways to leverage them for creative endeavors should be explored. Proper context is key to managing the risk while reaping innovative results.

Narrative C

AI chatbots' hallucinations act as a buffer, requiring human verification before AI-generated content can be fully relied upon. The debate continues over whether these hallucinations can ever be eliminated entirely. For now, they offer a balance with some upside: preventing complete automation and keeping humans involved in critical decision-making.

Nerd Narrative

There's an 81% chance that by June 30, 2025, OpenAI will release an LLM product or API that hallucinates 5x less than GPT-4 did when it was released, according to the Metaculus prediction community.
