New York University emeritus professor Gary Marcus believes that the existential threats posed by AI are currently exaggerated. He says he is not personally worried about extinction risk; his concern is that building and deploying AI systems we cannot reliably control could cause serious harm.
Marcus voiced his concerns in March, after OpenAI, the creator of ChatGPT, deepened its partnership with Microsoft and released GPT-4, a more powerful AI model. He signed an open letter calling for a global pause in AI development, along with Elon Musk and more than 1,000 others. However, Marcus did not sign the later statement endorsed by business leaders and specialists, including OpenAI CEO Sam Altman, which emphasized the need to address AI's extinction risk.
He suggests that society should pay attention to more realistic dangers instead of focusing on far-fetched scenarios where no one survives. He points out, for example, that malicious actors could use AI to manipulate markets or provoke geopolitical conflicts, which could in turn raise the risk of nuclear war. In Marcus's view, the priority should be addressing these tangible risks rather than speculating about extinction-level scenarios.
While Marcus is optimistic about AI's long-term potential in fields like science, medicine, and elder care, he believes society is unprepared for its impact in the short term. He acknowledges that harm may occur along the way and stresses the need for serious regulation to mitigate those risks.