P(doom)
P(doom) is a term in AI safety that refers to the probability of catastrophic outcomes (or "doom") as a result of artificial intelligence.[1][2] The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence.[3]
Originating as an inside joke among AI researchers, the term came to prominence in 2023 following the release of GPT-4, as high-profile figures such as Geoffrey Hinton[4] and Yoshua Bengio[5] began to warn of the risks of AI.[6] In a 2023 survey, AI researchers were asked to estimate the probability that future AI advancements could lead to human extinction or similarly severe and permanent disempowerment within the next 100 years. The average response was 14.4%, with a median of 5%.[7][8]
Sample P(doom) values
Name | P(doom) | Notes |
---|---|---|
Dario Amodei | 10-25%[6] | CEO of Anthropic |
Elon Musk | 10-20%[9] | Businessman and CEO of X, Tesla, and SpaceX |
Paul Christiano | 50%[10] | Head of research at the US AI Safety Institute |
Lina Khan | 15%[6] | Chair of the Federal Trade Commission |
Emmett Shear | 5-50%[6] | Co-founder of Twitch and former interim CEO of OpenAI |
Geoffrey Hinton | 10-50%[6][Note 1] | AI researcher, formerly of Google |
Yoshua Bengio | 20%[3][Note 2] | Computer scientist and scientific director of the Montreal Institute for Learning Algorithms |
Jan Leike | 10-90%[1] | AI alignment researcher at Anthropic, formerly of DeepMind and OpenAI |
Vitalik Buterin | 10%[1] | Co-founder of Ethereum |
Dan Hendrycks | 80%+ [1][Note 3] | Director of the Center for AI Safety |
Grady Booch | 0%[1][Note 4] | American software engineer |
Casey Newton | 5%[1] | American technology journalist |
Eliezer Yudkowsky | 95%+ [1] | Founder of the Machine Intelligence Research Institute |
Roman Yampolskiy | 99.9%[11][Note 5] | Latvian computer scientist |
Marc Andreessen | 0%[12] | American businessman |
Yann LeCun | <0.01%[13][Note 6] | Chief AI Scientist at Meta |
Toby Ord | 10%[14] | Australian philosopher and author of The Precipice |
Criticism
There has been some debate about the usefulness of P(doom) as a term, in part because a given estimate often leaves unclear whether it is conditional on the existence of artificial general intelligence, what time frame it assumes, and what precisely would count as "doom".[6][15]
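As an illustration of why the conditioning matters (a hypothetical decomposition, not one attributed to any of the figures listed above): if catastrophe in the absence of artificial general intelligence is treated as negligible, an unconditional estimate factors as

P(doom) ≈ P(doom | AGI) × P(AGI within the stated time frame)

On this reading, a forecaster who puts the conditional risk at 50% but the chance of AGI arriving in the relevant time frame at 20% is reporting an unconditional P(doom) of about 10%, whereas the same 50% figure stated unconditionally expresses a much more pessimistic view.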
In popular culture
- In 2024, Australian rock band King Gizzard & the Lizard Wizard launched their new label, named p(doom) Records.[16]
See also
- Existential risk from artificial general intelligence
- Statement on AI risk of extinction
- AI alignment
- AI safety
Notes
- ^ Conditional on A.I. not being "strongly regulated", time frame of 30 years.
- ^ Based on an estimated "50 per cent probability that AI would reach human-level capabilities within a decade, and a greater than 50 per cent likelihood that AI or humans themselves would turn the technology against humanity at scale."
- ^ Up from ~20% 2 years prior.
- ^ Equivalent to "P(all the oxygen in my room spontaneously moving to a corner thereby suffocating me)".
- ^ Within the next 100 years.
- ^ "Less likely than an asteroid wiping us out".
References
- ^ a b c d e f g Railey, Clint (2023-07-12). "P(doom) is AI's latest apocalypse metric. Here's how to calculate your score". Fast Company.
- ^ Thomas, Sean (2024-03-04). "Are we ready for P(doom)?". The Spectator. Retrieved 2024-06-19.
- ^ a b "It started as a dark in-joke. It could also be one of the most important questions facing humanity". ABC News. 2023-07-14. Retrieved 2024-06-18.
- ^ Metz, Cade (2023-05-01). "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead". The New York Times. ISSN 0362-4331. Retrieved 2024-06-19.
- ^ "One of the "godfathers of AI" airs his concerns". The Economist. ISSN 0013-0613. Retrieved 2024-06-19.
- ^ a b c d e f Roose, Kevin (2023-12-06). "Silicon Valley Confronts a Grim New A.I. Metric". The New York Times. ISSN 0362-4331. Retrieved 2024-06-17.
- ^ Piper, Kelsey (2024-01-10). "Thousands of AI experts are torn about what they've created, new study finds". Vox. Retrieved 2024-09-02.
- ^ "2023 Expert Survey on Progress in AI [AI Impacts Wiki]". wiki.aiimpacts.org. Retrieved 2024-09-02.
- ^ Tangalakis-Lippert, Katherine. "Elon Musk says there could be a 20% chance AI destroys humanity — but we should do it anyway". Business Insider. Retrieved 2024-06-19.
- ^ "ChatGPT creator says there's 50% chance AI ends in 'doom'". The Independent. 2023-05-03. Retrieved 2024-06-19.
- ^ Altchek, Ana. "Why this AI researcher thinks there's a 99.9% chance AI wipes us out". Business Insider. Retrieved 2024-06-18.
- ^ Marantz, Andrew (2024-03-11). "Among the A.I. Doomsayers". The New Yorker. ISSN 0028-792X. Retrieved 2024-06-19.
- ^ Wayne Williams (2024-04-07). "Top AI researcher says AI will end humanity and we should stop developing it now — but don't worry, Elon Musk disagrees". TechRadar. Retrieved 2024-06-19.
- ^ "Is there really a 1 in 6 chance of human extinction this century?". ABC News. 2023-10-08. Retrieved 2024-09-01.
- ^ King, Isaac (2024-01-01). "Stop talking about p(doom)". LessWrong.
- ^ "GUM & Ambrose Kenny-Smith are teaming up again for new collaborative album 'III Times'". DIY. 2024-05-07. Retrieved 2024-06-19.