Yi Zeng (AI researcher)

From Wikipedia, the free encyclopedia

Yi Zeng (Chinese: 曾毅) is a Chinese artificial intelligence researcher and professor at the Chinese Academy of Sciences. He is the founding director of the Center for Long-term AI and a member of the United Nations Advisory Body on AI.[1][2][3][4]

Career

On May 25, 2019, Zeng led the team that published the Beijing Artificial Intelligence Principles, proposed as an initiative for the long-term research, governance and planning of AI, and the "realization of beneficial AI for mankind and nature".[5][6]

He was named on the Time 100 AI list, a list featuring the hundred most influential figures in artificial intelligence of the year, in 2023.[7]

In July 2023, Zeng addressed the United Nations Security Council in a meeting on the risks posed by recent strides in artificial intelligence. He said that AI models “cannot be trusted as responsible agents that can help humans to make decisions,” and warned of the risk of extinction posed by both near-term and long-term AI, arguing that “in the long term, we haven’t given superintelligence any practical reasons why they should protect humans”. Zeng stated that humans should always be responsible for final decision-making on the use of nuclear weapons, and that the United Nations must produce an international framework on AI development and governance, to ensure global peace and security.[8][9]

In October 2023, UN Secretary-General António Guterres announced the creation of an advisory body on issues surrounding the international governance of AI, of which Zeng would be a member.[10][11]

References

  1. ^ "Professor Yi Zeng". The Alan Turing Institute. Retrieved 2024-11-11.
  2. ^ "Yi Zeng". UNIDIR. 2024-05-07. Retrieved 2024-11-11.
  3. ^ "Berggruen Institute". www.berggruen.org. Retrieved 2024-11-11.
  4. ^ "Members". United Nations. Retrieved 2024-11-11.
  5. ^ "Why does Beijing suddenly care about AI ethics?". MIT Technology Review. Retrieved 2024-11-12.
  6. ^ "Beijing Artificial Intelligence Principles". International Research Center for AI Ethics and Governance. 2022-01-10. Retrieved 2024-11-11.
  7. ^ "TIME100 AI 2023: Yi Zeng". Time. 2023-09-07. Retrieved 2024-10-21.
  8. ^ "International Community Must Urgently Confront New Reality of Generative, Artificial Intelligence, Speakers Stress as Security Council Debates Risks, Rewards". press.un.org. Retrieved 2024-10-21.
  9. ^ Nichols, Michelle (2023-07-18). "UN Security Council meets for first time on AI risks". Reuters. Retrieved 2024-11-12.
  10. ^ Mukherjee, Supantha (2023-10-27). "United Nations creates advisory body to address AI governance". Reuters. Retrieved 2024-11-11.
  11. ^ "Members". United Nations. Retrieved 2024-11-11.