The Center for AI Safety, a non-profit organization, has issued a concise warning about the dangers of AI, emphasizing that mitigating the risk of human extinction from AI should be a global priority. More than 370 AI experts and industry professionals have signed the declaration, including the CEOs of OpenAI and Google DeepMind. Three Chinese AI experts have also endorsed the statement: Ya-Qin Zhang and Xianyuan Zhan of Tsinghua University, and Zeng Yi, a professor at the Institute of Automation of the Chinese Academy of Sciences. Notably, Zeng Yi co-authored a 2020 article on overcoming barriers to cross-cultural cooperation in AI ethics and governance. [Center for AI Safety]