# AI timelines: What do experts in artificial intelligence expect for the future?

*Our World in Data presents the data and research to make progress against the world's largest problems. This article draws on data and research discussed in our entry on **[Artificial Intelligence](https://ourworldindata.org/artificial-intelligence)**.*

Artificial intelligence (AI) that surpasses our own intelligence sounds like the stuff of science-fiction books or films. What do experts in the field of AI research think about such scenarios? Do they dismiss these ideas as fantasy, or are they taking such prospects seriously?

A human-level AI would be a machine, or a network of machines, capable of carrying out the same range of tasks that we humans are capable of. It would be a machine that is "able to learn to do anything that a human can do", as Russell and Norvig put it in their textbook on AI.{ref}Stuart Russell and Peter Norvig (2021) – Artificial Intelligence: A Modern Approach. Fourth edition. Published by Pearson.{/ref}

It would be able to choose actions that allow the machine to achieve its goals and then carry out those actions. It would be able to do the work of a translator, a doctor, an illustrator, a teacher, a therapist, a driver, or an investor.

In recent years, several research teams have contacted AI experts and asked them about their expectations for the future of machine intelligence. Such expert surveys are one of the pieces of information we can rely on to form an idea of what the future of AI might look like.

The chart shows the answers of 352 experts. This is from the most recent study by Katja Grace and her colleagues, conducted in the summer of 2022.{ref}A total of 4,271 AI experts were contacted; 738 responded (a 17% response rate), of whom 352 provided complete answers to the human-level AI question.

It's possible that the respondents were not representative of all the AI experts contacted – that is, that there was "sample bias." There is not enough data to rule out all potential sources of sample bias. After all, we don't know what the people who didn't respond to the survey, or others who weren't even contacted, believe about AI. However, there is evidence from similar surveys to suggest that at least some potential sources of bias are minimal.

In similar surveys (e.g., [Zhang et al. 2022](https://arxiv.org/pdf/2206.04132.pdf); [Grace et al. 2018](https://jair.org/index.php/jair/article/view/11222)), the researchers compared the group of respondents with a randomly selected, similarly sized group of non-respondents to see if they differed on measurable demographic characteristics, such as where they were educated, their gender, how many citations they had, their years in the field, etc.

In these similar surveys, the researchers found some differences between respondents and non-respondents, but they were small. So while other, unmeasured sources of sample bias couldn't be ruled out, large bias due to the demographic characteristics that were measured could be ruled out.{/ref}

Experts were asked when they believe there is a 50% chance that human-level AI exists.{ref}Much of the literature on AI timelines focuses on the 50% probability threshold. I think it would be valuable if this literature additionally focused on higher thresholds, say an 80% probability for the development of a particular technology. In future updates of this article we will aim to broaden the focus and include such higher thresholds.{/ref} Human-level AI was defined as unaided machines being able to accomplish every task better and more cheaply than human workers. More information about the study can be found in the fold-out box at the end of this text.{ref}A discussion of the two most widely used concepts for thinking about the future of powerful AI systems – human-level AI and transformative AI – can be found in this [companion article](https://ourworldindata.org/ai-impact).{/ref}

Each vertical line in this chart represents the answer of one expert. The large differences in answers make it clear that experts do not agree on how long it will take until such a system might be developed. A few believe that this level of technology will never be developed. Some think that it's possible, but that it will take a long time. And many believe that it will be developed within the next few decades.

As highlighted in the annotations, half of the experts gave a date before 2061, and 90% gave a date within the next 100 years.

![When do experts expect artificial general intelligence? Individual answers from the Grace et al. (2022) survey](https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-grace.png)
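The summary statistics in these annotations are simple percentiles over the individual answers. Here is a minimal sketch of that aggregation, using made-up stand-in years rather than the survey's actual 352 responses:

```python
import statistics

# Hypothetical stand-ins for each expert's "50% chance of human-level AI" year;
# the real data are the 352 responses from Grace et al. (2022).
expert_years = [2035, 2042, 2050, 2061, 2075, 2090, 2140, 2200]

median_year = statistics.median(expert_years)  # half of the experts answer earlier
share_within_century = sum(y <= 2022 + 100 for y in expert_years) / len(expert_years)

print(f"Median 50%-chance year: {median_year}")
print(f"Share of experts within 100 years of the survey: {share_within_century:.0%}")
```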
Other surveys of AI experts come to similar conclusions. In the following visualization, I have added the timelines from two earlier surveys, conducted in 2018 and 2019. It is helpful to look at different surveys, as they differ in how they asked the question and how they defined human-level AI. You can find more details about these studies at the end of this text.

In all three surveys, we see large disagreement between experts, and they also express large uncertainties about their own individual forecasts.{ref}The visualization shows when individual experts gave a 50% chance of human-level machine intelligence. The surveys also include data on when these experts gave much lower chances (e.g., ~10%) as well as much higher ones (~90%), and the spread between the respective dates is often considerable, expressing each expert's range of individual uncertainty. For example, the average across individual experts in the Zhang et al. study gave a 10% chance of human-level machine intelligence by 2035, a 50% chance by 2060, and a 90% chance by 2105.{/ref}

![When do experts expect artificial general intelligence? Timelines from three expert surveys](https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-surveys.png)
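The 10%/50%/90% answers quoted in the footnote above can be read as three points on a cumulative belief curve. A minimal sketch of reading off intermediate years, assuming linear interpolation between the anchors (my simplification for illustration, not a method used in the surveys):

```python
import numpy as np

# Anchor points from the average expert in Zhang et al.: (year, cumulative probability).
years = np.array([2035, 2060, 2105])
probs = np.array([0.10, 0.50, 0.90])

# Linearly interpolate the implied probability of human-level AI by a given year.
for query in (2040, 2050, 2080):
    p = np.interp(query, years, probs)
    print(f"Implied chance of human-level AI by {query}: {p:.0%}")
```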
#### What should we make of the timelines of AI experts?

Expert surveys are one piece of information to consider when we think about the future of AI, but we should not overstate their results. Experts in a particular technology are not necessarily experts in making predictions about the future of that technology.

Experts in many fields do not have a good track record in making forecasts about their own field, as researchers including Barbara Mellers, Phil Tetlock, and others have shown.{ref}Mellers, B., Tetlock, P., & Arkes, H. R. (2019) – Forecasting tournaments, epistemic humility and attitude depolarization. Cognition, 188, 19–26.

Tetlock, P. (2005) – Expert Political Judgment: How Good Is It? How Can We Know? Princeton, NJ: Princeton University Press.

Philip E. Tetlock and Dan Gardner (2015) – Superforecasting: The Art and Science of Prediction.{/ref} The history of flight includes a striking example of such a failure. Wilbur Wright is quoted as saying, "I confess that in 1901, I said to my brother Orville that man would not fly for 50 years." Two years later, 'man' was not only flying, but it was these very men who achieved the feat.{ref}Another example is Ernest Rutherford, the father of nuclear physics, calling the possibility of harnessing nuclear energy "moonshine." The research paper by John Jenkin discusses why: John G. Jenkin (2011) – Atomic Energy is "Moonshine": What did Rutherford Really Mean? Published in Physics in Perspective. DOI 10.1007/s00016-010-0038-1{/ref}

Additionally, these studies often find large 'framing effects': two logically identical questions get answered in very different ways depending on how exactly they are worded.{ref}This is discussed in some more detail for the study by Grace et al. in the appendix.{/ref}

What I do take away from these surveys, however, is that the majority of AI experts take the prospect of very powerful AI technology seriously. It is not the case that AI researchers dismiss extremely powerful AI as mere fantasy.

The huge majority thinks that in the coming decades there is an even chance that we will see AI technology that will have a transformative impact on our world. While some have long timelines, many think it is possible that we have very little time before these technologies arrive. Across the three surveys, more than half think there is a 50% chance that human-level AI would be developed before some point in the 2060s, a time well within the lifetime of today's young people.

#### The forecast of the Metaculus community

In the big visualization on AI timelines below, I have included the forecast of the Metaculus forecaster community.

The forecasters on the online platform [Metaculus.com](https://www.metaculus.com/) are not experts in AI but people who dedicate their energy to making good forecasts. Research on forecasting has documented that groups of people can assign surprisingly accurate probabilities to future events when given the right incentives and good feedback.{ref}See the previously cited literature on forecasting by Barbara Mellers, Phil Tetlock, and others.{/ref} To receive this feedback, the online community at Metaculus [tracks](https://www.metaculus.com/questions/track-record/) how well its forecasts perform.

What does this group of forecasters expect for the future of AI?

At the time of writing, in November 2022, the forecasters believe that there is a 50/50 chance for an 'Artificial General Intelligence' to be 'devised, tested, and publicly announced' by the year 2040, less than 20 years from now.

On [their page about this specific question](https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/), you can find the precise definition of the AI system in question, how the timeline of their forecasts has changed, and the arguments of individual forecasters for how they arrived at their predictions.{ref}There are two other relevant questions on Metaculus. The [first one](https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/) asks for the date when weakly general AI will be publicly known. The [second one](https://www.metaculus.com/questions/384/humanmachine-intelligence-parity-by-2040/) asks for the probability of 'human/machine intelligence parity' by 2040.{/ref}

The timelines of the Metaculus community have become much shorter recently. Expected timelines shortened by about a decade in the spring of 2022, when several impressive AI breakthroughs happened faster than many had anticipated.{ref}Metaculus's community prediction fell from the year 2058 in March 2022 to the year 2040 in July 2022.{/ref}
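Readers who want to check the current community forecast themselves can query Metaculus's public API. A minimal sketch: the `/api2/questions/<id>/` endpoint served JSON at the time of writing, but the exact response schema is an assumption here, so the code only lists the fields it receives rather than relying on particular field names:

```python
import json
from urllib.request import Request, urlopen

# Question 5121 is "Date of Artificial General Intelligence" (linked above).
req = Request(
    "https://www.metaculus.com/api2/questions/5121/",
    headers={"User-Agent": "ai-timelines-example"},  # some servers reject requests without one
)
with urlopen(req) as response:
    question = json.load(response)

print(question.get("title"))
print(sorted(question.keys()))  # inspect the available fields, e.g. the community prediction
```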
#### The forecast by Ajeya Cotra

The last forecast shown stems from the research of Ajeya Cotra, who works for the nonprofit Open Philanthropy.{ref}Her research was announced in various places, including the AI Alignment Forum: Ajeya Cotra (2020) – [Draft report on AI timelines](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines). As far as I know, the report itself always remained a 'draft report' and was published [here on Google Docs](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP).

In 2022 Ajeya Cotra published a [Two-year update on my personal AI timelines](https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines).{/ref} In 2020 she published a detailed and influential study asking when the world will see transformative AI. Her timeline is not based on surveys, but on the study of long-run trends in the computation used to train AI systems. I present and discuss these trends in training computation in [this companion article](https://ourworldindata.org/brief-history-of-ai).

Cotra estimated that there is a 50% chance that a transformative AI system will become possible and affordable by the year 2050. This is the central estimate in her "median scenario." Cotra emphasizes that there are substantial uncertainties around this median scenario, and she also explored two other, more extreme scenarios. The timelines for these two scenarios – her "most aggressive plausible" scenario and her "most conservative plausible" scenario – are also shown in the visualization. The span from 2040 to 2090 in Cotra's "plausible" forecasts highlights how large she believes the uncertainty to be.

The visualization also shows that Cotra updated her forecast two years after its initial publication. In 2022 she published an update in which she shortened her median timeline by a full ten years.{ref}Ajeya Cotra's [Two-year update on my personal AI timelines](https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines).{/ref}

It is important to note that the definitions of the AI systems in question differ substantially across these studies. For example, the system that Cotra speaks about would have a much more transformative impact on the world than the system that the Metaculus forecasters focus on. More details can be found in the appendix and within the respective studies.

![AI timelines: forecasts from AI experts, the Metaculus community, and Ajeya Cotra](https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-big.png)

#### What can we learn from the forecasts?

The visualization shows the forecasts of 1,128 people – 812 individual AI experts, the aggregated estimates of 315 forecasters from the Metaculus platform, and the findings of the detailed study by Ajeya Cotra.

There are two big takeaways from these forecasts on AI timelines:

1. There is no consensus, and the uncertainty is high. There is huge disagreement between experts about when human-level AI will be developed. Some believe that it is decades away, while others think it is probable that such systems will be developed within the next few years or months.

   There is not just disagreement *between* experts; individual experts also emphasize the large uncertainty around their own individual estimates. As always when uncertainty is high, it is important to stress that it cuts both ways. It might be a very long time until we see human-level AI, but it also means that we might have little time to prepare.
2. At the same time, there is large agreement in the overall picture. The timelines of many experts are shorter than a century, and many are substantially shorter than that. The majority of those who study this question believe that there is a 50% chance that transformative AI systems will be developed within the next 50 years. In this case, it would plausibly be the biggest transformation in the lifetime of our children, or even in our own lifetime.

The public discourse and the decision-making at major institutions have not caught up with these prospects. In discussions on the future of our world – from the future of our climate, to the future of our economies, to the future of our political institutions – the prospect of transformative AI is rarely central to the conversation. Often it is not mentioned at all, not even in a footnote.

We seem to be in a situation where most people hardly think about the future of artificial intelligence, while the few who dedicate their attention to it find it plausible that one of the biggest transformations in humanity's history will happen within our lifetimes.

---

**Acknowledgements:** I would like to thank my colleagues Natasha Ahuja, Daniel Bachler, Bastian Herre, Edouard Mathieu, Esteban Ortiz-Ospina, and Hannah Ritchie for their helpful comments on drafts of this essay.

I would also like to thank my colleague Charlie Giattino, who calculated the timelines for individual experts based on the data from the three survey studies and supported the work on this essay. Charlie is also one of the authors of the cited study by Zhang et al. on the timelines of AI experts.

---

### More information about the studies and forecasts discussed in this essay

The three cited AI expert surveys are:

- Katja Grace, Zach Stein-Perlman, and Benjamin Weinstein-Raun (2022) – "[2022 Expert Survey on Progress in AI](https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/)." AI Impacts, 3 August 2022.
- Baobao Zhang, Noemi Dreksler, Markus Anderljung, Lauren Kahn, Charlie Giattino, Allan Dafoe, and Michael Horowitz (2022) – [Forecasting AI Progress: Evidence from a Survey of Machine Learning Researchers](https://doi.org/10.48550/arXiv.2206.04132). Published on arXiv, June 8, 2022.
- Ross Gruetzemacher, David Paradice, and Kang Bok Lee (2019) – [Forecasting Transformative AI: An Expert Survey](https://arxiv.org/abs/1901.08579). Published on arXiv.

The surveys were conducted during the following periods:

- Grace et al. was completed between 12 June and 3 August 2022.
- Zhang et al. was completed mainly between 16 September and 13 October 2019, but due to an error, some experts completed the survey between 10 and 14 March 2020.
- Gruetzemacher et al. was completed in the summer of 2018.

The surveys differ in how the question was asked and how the AI system in question was defined. In the following sections we discuss this in detail for all cited studies.

##### The study by Grace et al. published in 2022

Survey respondents were given the following text regarding the definition of high-level machine intelligence:

"The following questions ask about 'high-level machine intelligence' (HLMI). Say we have 'high-level machine intelligence' when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g., being accepted as a jury member. Think feasibility, not adoption. For the purposes of this question, assume that human scientific activity continues without major negative disruption."

Each respondent was randomly assigned to give their forecasts under one of two framings: "fixed-probability" and "fixed-years."

Those in the fixed-probability framing were asked, "How many years until you expect: A 10% probability of HLMI existing? A 50% probability of HLMI existing? A 90% probability of HLMI existing?" They responded by giving a number of years from the day they took the survey.

Those in the fixed-years framing were asked, "How likely is it that HLMI exists: In 10 years? In 20 years? In 40 years?" They responded by giving a probability of that happening.

Several studies have shown that the framing affects respondents' timelines, with the fixed-years framing leading to longer timelines (i.e., that HLMI is further in the future). For example, in the previous edition of this survey (which asked identical questions), respondents who got the fixed-years framing gave a 50% chance of HLMI by 2068; those who got fixed-probability gave the year 2054.{ref}Grace et al. (2018) – Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts. Journal of Artificial Intelligence Research. We read both of these numbers off a chart in this publication; the years are not directly reported.{/ref} The framing results from the 2022 edition of the survey have not yet been published.
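The two framings elicit the same underlying belief in different coordinates: fixed-probability yields years at given probabilities, while fixed-years yields probabilities at given years. A minimal sketch (with invented example answers, not survey data) of normalizing both formats to comparable (calendar year, cumulative probability) pairs:

```python
SURVEY_YEAR = 2022

# Invented example answers, for illustration only.
# Fixed-probability framing: "years until" each probability threshold.
fixed_probability_answers = {0.10: 10, 0.50: 30, 0.90: 70}  # probability -> years from survey

# Fixed-years framing: probability that HLMI exists at each horizon.
fixed_years_answers = {10: 0.08, 20: 0.25, 40: 0.55}        # years from survey -> probability

# Normalize both to the same representation: (calendar year, cumulative probability).
cdf_a = sorted((SURVEY_YEAR + yrs, p) for p, yrs in fixed_probability_answers.items())
cdf_b = sorted((SURVEY_YEAR + yrs, p) for yrs, p in fixed_years_answers.items())

print("fixed-probability framing:", cdf_a)
print("fixed-years framing:      ", cdf_b)
```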
In addition to this framing effect, there is a larger effect driven by how the concept of HLMI is defined. We can see this in the results from the previous edition of the survey (the corresponding result from the 2022 survey hasn't yet been published). For respondents who were given the HLMI definition above, the average forecast for a 50% chance of HLMI was 2061. A small subset of respondents was instead given another, logically similar question that asked about the full automation of labor; their average forecast for a 50% probability was 2138, a full 77 years later than the first group.

The full automation of labor group was asked: "Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g., being accepted as a jury member. Think feasibility, not adoption. Say we have reached 'full automation of labor' when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers." This question was asked under both the fixed-probability and fixed-years framings.

##### The study by Zhang et al. published in 2022

Survey respondents were given the following definition of human-level machine intelligence: "Human-level machine intelligence (HLMI) is reached when machines are collectively able to perform almost all tasks (>90% of all tasks) that are economically relevant better than the median human paid to do that task in 2019. You should ignore tasks that are legally or culturally restricted to humans, such as serving on a jury."

"Economically relevant" tasks were defined as those included in the [Occupational Information Network (O*NET) database](https://www.onetcenter.org/database.html#individual-files). O*NET is a widely used dataset of tasks carried out across a wide range of occupations.

As in Grace et al. 2022, each survey respondent was randomly assigned to give their forecasts under one of two framings: "fixed-probability" and "fixed-years." As was found before, the fixed-years framing resulted in longer timelines on average: the year 2070 for a 50% chance of HLMI, compared to 2050 under the fixed-probability framing.

##### The study by Gruetzemacher et al. published in 2019

Survey respondents were asked the following: "These questions will ask your opinion of future AI progress with regard to human tasks. We define human tasks as all unique tasks that humans are currently paid to do. We consider human tasks as different from jobs in that an algorithm may be able to replace humans at some portion of tasks a job requires while not being able to replace humans for all of the job requirements. For example, an AI system(s) may not replace a lawyer entirely but may be able to accomplish 50% of the tasks a lawyer typically performs.
In how many years do you expect AI systems to collectively be able to accomplish 99% of human tasks at or above the level of a typical human? Think feasibility."

We show the results using this definition of AI in the chart, as we judged it to be the most comparable to the definitions in the other studies included there.

In addition to this definition, respondents were asked about AI systems that are able to collectively accomplish 50% and 90% of human tasks, as well as "broadly capable AI systems" that are able to accomplish 90% and 99% of human tasks.

All respondents in this survey received a fixed-probability framing.

##### The study by Ajeya Cotra published in 2020

Cotra's overall aim was to estimate when we might expect "transformative artificial intelligence" (TAI), defined as "'software'... that has at least as profound an impact on the world's trajectory as the Industrial Revolution did."

Cotra focused on "a relatively concrete and easy-to-picture way that TAI could manifest: as a single computer program which performs a large enough diversity of intellectual labor at a high enough level of performance that it alone can drive a transition similar to the Industrial Revolution."

One intuitive example of such a program is the 'virtual professional': "a model that can do roughly everything economically productive that an intelligent and educated human could do remotely from a computer connected to the internet at a hundred-fold speedup, for costs similar to or lower than the costs of employing such a human."

When might we expect something like a virtual professional to exist?

To answer this, Cotra first estimated the amount of computation that would be required to train such a system using the machine learning architectures and algorithms available to researchers in 2020. She then estimated when that amount of computation would be available at a low enough cost, based on extrapolating past trends.

The estimate of training computation relies on an estimate of the amount of computation performed by the human brain each second, combined with different hypotheses for how much training would be required to reach a high enough level of capability.

For example, the "lifetime anchor" hypothesis estimates the total computation performed by the human brain up to age ~32.

Each aspect of these estimates comes with a very high degree of uncertainty. Cotra writes: "The question of whether there is a sensible notion of 'brain computation' that can be measured in FLOP/s—and if so, what range of numerical estimates for brain FLOP/s would be reasonable—is conceptually fraught and empirically murky."
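To make the structure of this reasoning concrete, here is a minimal back-of-the-envelope sketch. All inputs are illustrative placeholders in the spirit of the report, not Cotra's exact figures: a brain estimate of ~1e15 FLOP/s, a price-performance trend that halves compute costs every ~2.5 years, and a $1 billion training budget are all assumptions for this sketch.

```python
import math

# --- Illustrative placeholder inputs (not Cotra's exact figures) ---
BRAIN_FLOP_PER_S = 1e15      # assumed computation of the human brain per second
SECONDS_PER_YEAR = 3.15e7
FLOP_PER_DOLLAR_2020 = 1e17  # assumed 2020 price-performance of compute
HALVING_TIME_YEARS = 2.5     # assumed cost-halving time for computation
BUDGET_DOLLARS = 1e9         # assumed affordable training budget

def crossover_year(training_flop: float) -> float:
    """Year when training_flop fits the budget, extrapolating the cost trend."""
    cost_2020 = training_flop / FLOP_PER_DOLLAR_2020
    halvings_needed = max(0.0, math.log2(cost_2020 / BUDGET_DOLLARS))
    return 2020 + halvings_needed * HALVING_TIME_YEARS

# "Lifetime anchor": total brain computation up to age ~32, roughly 1e24 FLOP.
# Under these placeholder numbers it is already affordable in 2020, which is one
# reason the report also considers far more demanding anchors, like the one below.
lifetime_anchor = BRAIN_FLOP_PER_S * 32 * SECONDS_PER_YEAR
demanding_anchor = 1e30  # hypothetical, to show how the crossover year shifts

for name, flop in [("lifetime anchor", lifetime_anchor), ("demanding anchor", demanding_anchor)]:
    print(f"{name}: {flop:.0e} FLOP -> affordable around {crossover_year(flop):.0f}")
```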
For anyone who is interested in the question of future AI, Cotra's study is very much worth reading in detail. She lays out good and transparent reasons for her estimates and communicates her reasoning in great detail.

Her research was announced in various places, including the AI Alignment Forum: Ajeya Cotra (2020) – [Draft report on AI timelines](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines). As far as I know, the report itself always remained a 'draft report' and was published [here on Google Docs](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP) (it is not uncommon in the field of AI research that articles get published in non-standard ways). In 2022 Ajeya Cotra published a [Two-year update on my personal AI timelines](https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines).

##### Other studies

A very different kind of forecast that is also relevant here is the work of David Roodman. In his article [Modeling the Human Trajectory](https://www.openphilanthropy.org/research/modeling-the-human-trajectory/), he studies the history of global economic output to think about the future. He asks whether it is plausible to see economic growth that could be considered 'transformative' – an annual growth rate of the world economy higher than 30% – within this century. One of his conclusions is that "if the patterns of long-term history continue, some sort of economic explosion will take place again, the most plausible channel being AI."

And another very different kind of forecast is Tom Davidson's [Report on Semi-informative Priors](https://www.openphilanthropy.org/research/report-on-semi-informative-priors/), published in 2021.
O*NET is a widely used dataset of tasks carried out across a wide range of occupations.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "As in Grace et al 2022, each survey respondent was randomly assigned to give their forecasts under one of two different framings: \u201cfixed-probability\u201d and \u201cfixed-years.\u201d As was found before, the fixed-years framing resulted in longer timelines on average: the year 2070 for a 50% chance of HLMI, compared to 2050 under the fixed-probability framing.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "text": [ { "text": "The study by Gruetzemacher et al. published in 2019", "spanType": "span-simple-text" } ], "type": "heading", "level": 5, "parseErrors": [] }, { "type": "text", "value": [ { "text": "Survey respondents were asked the following: \u201cThese questions will ask your opinion of future AI progress with regard to human tasks. We define human tasks as all unique tasks that humans are currently paid to do. We consider human tasks as different from jobs in that an algorithm may be able to replace humans at some portion of tasks a job requires while not being able to replace humans for all of the job requirements. For example, an AI system(s) may not replace a lawyer entirely but may be able to accomplish 50% of the tasks a lawyer typically performs. In how many years do you expect AI systems to collectively be able to accomplish 99% of human tasks at or above the level of a typical human? Think feasibility.\u201d", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "We show the results using this definition of AI in the chart, as we judged this definition to be most comparable to the other studies included in the chart.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "In addition to this definition, respondents were asked about AI systems that are able to collectively accomplish 50% and 90% of human tasks, as well as \u201cbroadly capable AI systems\u201d that are able to accomplish 90% and 99% of human tasks.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "All respondents in this survey received a fixed-probability framing.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "text": [ { "text": "The study by Ajeya Cotra published in 2020", "spanType": "span-simple-text" } ], "type": "heading", "level": 5, "parseErrors": [] }, { "type": "text", "value": [ { "text": "Cotra\u2019s overall aim was to estimate when we might expect \u201ctransformative artificial intelligence\u201d (TAI), defined as \u201c \u2018software\u2019... 
that has at least as profound an impact on the world\u2019s trajectory as the Industrial Revolution did.\u201d", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Cotra focused on \u201ca relatively concrete and easy-to-picture way that TAI could manifest: as a single computer program which performs a large enough diversity of intellectual labor at a high enough level of performance that it alone can drive a transition similar to the Industrial Revolution.\u201d", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "One intuitive example of such a program is the \u2018virtual professional\u2019, \u201ca model that can do roughly everything economically productive that an intelligent and educated human could do remotely from a computer connected to the internet at a hundred-fold speedup, for costs similar to or lower than the costs of employing such a human.\u201d", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "When might we expect something like a virtual professional to exist?", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "To answer this, Cotra first estimated the amount of computation that would be required to train such a system using the machine learning architectures and algorithms available to researchers in 2020. She then estimated when that amount of computation would be available at a low enough cost based on extrapolating past trends.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The estimate of training computation relies on an estimate of the amount of computation performed by the human brain each second, combined with different hypotheses for how much training would be required to reach a high enough level of capability.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "For example, the \u201clifetime anchor\u201d hypothesis estimates the total computation performed by the human brain up to age ~32.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Each aspect of these estimates comes with a very high degree of uncertainty. Cotra writes: \u201cThe question of whether there is a sensible notion of \u2018brain computation\u2019 that can be measured in FLOP/s\u2014and if so, what range of numerical estimates for brain FLOP/s would be reasonable\u2014is conceptually fraught and empirically murky.\u201d", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "For anyone who is interested in the question of future AI, the study of Cotra is very much worth reading in detail. She lays out good and transparent reasons for her estimates and communicates her reasoning in great detail.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Her research was announced in various places, including the AI Alignment Forum: Ajeya Cotra (2020) \u2013\u00a0", "spanType": "span-simple-text" }, { "url": "https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines", "children": [ { "text": "Draft report on AI timelines", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ". 
As far as I know the report itself always remained a \u2018draft report\u2019 and was published ", "spanType": "span-simple-text" }, { "url": "https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP", "children": [ { "text": "here on Google Docs", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " (it is not uncommon in the field of AI research that articles get published in non-standard ways). In 2022 Ajeya Cotra published a ", "spanType": "span-simple-text" }, { "url": "https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines", "children": [ { "text": "Two-year update on my personal AI timelines", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "text": [ { "text": "Other studies", "spanType": "span-simple-text" } ], "type": "heading", "level": 5, "parseErrors": [] }, { "type": "text", "value": [ { "text": "A very different kind of forecast that is also relevant here is the work of David Roodman. In his article ", "spanType": "span-simple-text" }, { "url": "https://www.openphilanthropy.org/research/modeling-the-human-trajectory/", "children": [ { "text": "Modeling the Human Trajectory", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " he studies the history of global economic output to think about the future. He asks whether it is plausible to see economic growth that could be considered \u2018transformative\u2019 \u2013 an annual growth rate of the world economy higher than 30% \u2013\u00a0within this century. One of his conclusions is that \"if the patterns of long-term history continue, some sort of economic explosion will take place again, the most plausible channel being AI.\u201d", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "And another very different kind of forecast is Tom Davidson\u2019s ", "spanType": "span-simple-text" }, { "url": "https://www.openphilanthropy.org/research/report-on-semi-informative-priors/", "children": [ { "text": "Report on Semi-informative Priors", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " published in 2021.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] } ], "parseErrors": [] } ], "type": "article", "title": "AI timelines: What do experts in artificial intelligence expect for the future?", "authors": [ "Max Roser" ], "excerpt": "Many AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.", "dateline": "February 7, 2023", "subtitle": "Many AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.", "sidebar-toc": false, "featured-image": "featured-image-When-do-experts-expect-Artificial-General-Intelligence.png" }, "createdAt": "2022-12-02T17:35:18.000Z", "published": false, "updatedAt": "2023-10-11T08:44:20.000Z", "revisionId": null, "publishedAt": "2023-02-07T11:00:00.000Z", "relatedCharts": [], "publicationContext": "listed" } |
{ "errors": [ { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag image" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag image" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag image" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag list" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag list" }, { "name": "unhandled html tag found", "details": "Encountered the unhandled tag hr" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag separator" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag spacer" }, { "name": "unhandled html tag found", "details": "Encountered the unhandled tag hr" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag separator" } ], "numBlocks": 47, "numErrors": 10, "wpTagCounts": { "html": 2, "list": 4, "image": 3, "column": 6, "spacer": 1, "columns": 3, "heading": 10, "paragraph": 68, "separator": 2, "owid/additional-information": 1 }, "htmlTagCounts": { "p": 69, "h3": 1, "h4": 4, "h5": 5, "hr": 2, "ol": 2, "ul": 2, "div": 11, "figure": 3 } } |
2023-02-07 11:00:00 | 2024-02-16 14:22:54 | 1OznrkRcYj3wrfD8L_JgMlwPs8OeUjBwPN5BMN_v5i9o | [ "Max Roser" ] |
Many AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner. | 2022-12-02 17:35:18 | 2023-10-11 08:44:20 | https://ourworldindata.org/wp-content/uploads/2022/12/featured-image-When-do-experts-expect-Artificial-General-Intelligence.png | {} |
Our World in Data presents the data and research to make progress against the world’s largest problems. This article draws on data and research discussed in our entry on **[Artificial Intelligence](https://ourworldindata.org/artificial-intelligence)**. Artificial intelligence (AI) that surpasses our own intelligence sounds like the stuff of science-fiction books or films. What do experts in the field of AI research think about such scenarios? Do they dismiss these ideas as fantasy, or are they taking such prospects seriously? A human-level AI would be a machine, or a network of machines, capable of carrying out the same range of tasks that we humans are capable of. It would be a machine that is “able to learn to do anything that a human can do”, as Norvig and Russell put it in their textbook on AI.{ref}Peter Norvig and Stuart Russell (2021) – Artificial Intelligence: A Modern Approach. Fourth edition. Published by Pearson.{/ref} It would be able to choose actions that allow the machine to achieve its goals and then carry out those actions. It would be able to do the work of a translator, a doctor, an illustrator, a teacher, a therapist, a driver, or the work of an investor. In recent years, several research teams contacted AI experts and asked them about their expectations for the future of machine intelligence. Such expert surveys are one of the pieces of information that we can rely on to form an idea of what the future of AI might look like. The chart shows the answers of 352 experts. This is from the most recent study by Katja Grace and her colleagues, conducted in the summer of 2022.{ref}A total of 4,271 AI experts were contacted; 738 responded (a 17% response rate), of which 352 provided complete answers to the human-level AI question. It’s possible that the respondents were not representative of all the AI experts contacted – that is, that there was “sample bias.” There is not enough data to rule out all potential sources of sample bias. After all, we don’t know what the people who didn’t respond to the survey, or others who weren’t even contacted, believe about AI. However, there is evidence from similar surveys to suggest that at least some potential sources of bias are minimal. In similar surveys (e.g., [Zhang et al. 2022](https://arxiv.org/pdf/2206.04132.pdf); [Grace et al. 2018](https://jair.org/index.php/jair/article/view/11222)), the researchers compared the group of respondents with a randomly selected, similarly sized group of non-respondents to see if they differed on measurable demographic characteristics, such as where they were educated, their gender, how many citations they had, years in the field, etc. In these similar surveys, the researchers found some differences between the respondents and non-respondents, but they were small. So while other, unmeasured sources of sample bias couldn’t be ruled out, large bias due to the demographic characteristics that were measured could be ruled out.{/ref} Experts were asked when they believe there is a 50% chance that human-level AI exists.{ref}Much of the literature on AI timelines focuses on the 50% probability threshold. I think it would be valuable if this literature also focused on higher thresholds, say a probability of 80% for the development of a particular technology. In future updates of this article, we will aim to broaden the focus and include such higher thresholds.{/ref} Human-level AI was defined as unaided machines being able to accomplish every task better and more cheaply than human workers.
More information about the study can be found in the fold-out box at the end of this text.{ref}A discussion of the two most widely used concepts for thinking about the future of powerful AI systems – human-level AI and transformative AI – can be found in this [companion article](https://ourworldindata.org/ai-impact).{/ref} Each vertical line in this chart represents the answer of one expert. The fact that there are such large differences in answers makes it clear that experts do not agree on how long it will take until such a system might be developed. A few believe that this level of technology will never be developed. Some think that it’s possible, but it will take a long time. And many believe that it will be developed within the next few decades. As highlighted in the annotations, half of the experts gave a date before 2061, and 90% gave a date within the next 100 years. <Image filename="When-do-experts-expect-Artificial-General-Intelligence-grace.png" alt=""/> Other surveys of AI experts come to similar conclusions. In the following visualization, I have added the timelines from two earlier surveys conducted in 2018 and 2019. It is helpful to look at different surveys, as they differ in how they asked the question and how they defined human-level AI. You can find more details about these studies at the end of this text. In all three surveys, we see large disagreement between experts, and individual experts also express large uncertainty about their own forecasts.{ref}The visualization shows when individual experts gave a 50% chance of human-level machine intelligence. The surveys also include data on when these experts gave much lower chances (e.g., ~10%) as well as much higher ones (~90%), and the spread between the respective dates is often considerable, expressing the range of each expert’s individual uncertainty. For example, the average across individual experts in the Zhang et al. study gave a 10% chance of human-level machine intelligence by 2035, a 50% chance by 2060, and a 90% chance by 2105.{/ref} <Image filename="When-do-experts-expect-Artificial-General-Intelligence-surveys.png" alt=""/> ## What should we make of the timelines of AI experts? Expert surveys are one piece of information to consider when we think about the future of AI, but we should not overstate the results of these surveys. Experts in a particular technology are not necessarily experts in making predictions about the future of that technology. Experts in many fields do not have a good track record in making forecasts about their own field, as researchers including Barbara Mellers, Phil Tetlock, and others have shown.{ref}Mellers, B., Tetlock, P., & Arkes, H. R. (2019). Forecasting tournaments, epistemic humility and attitude depolarization. Cognition, 188, 19-26. Tetlock, P. (2005) – Expert political judgment: How good is it? How can we know? Princeton, NJ: Princeton University Press. Philip E. Tetlock and Dan Gardner (2015) – Superforecasting: The Art and Science of Prediction.{/ref} The history of flight includes a striking example of such failure. Wilbur Wright is quoted as saying, “I confess that in 1901, I said to my brother Orville that man would not fly for 50 years.” Two years later, ‘man’ was not only flying, but it was these very men who achieved the feat.{ref}Another example is Ernest Rutherford, father of nuclear physics, calling the possibility of harnessing nuclear energy “moonshine.” The research paper by John Jenkin discusses why. John G.
Jenkin (2011) – Atomic Energy is “Moonshine”: What did Rutherford Really Mean? Published in Physics in Perspective. DOI 10.1007/s00016-010-0038-1{/ref} Additionally, these studies often find large ‘framing effects’: two logically identical questions get answered in very different ways, depending on how exactly the questions are worded.{ref}This is discussed in some more detail for the study by Grace et al. in the Appendix.{/ref} What I do take away from these surveys, however, is that the majority of AI experts take the prospect of very powerful AI technology seriously. It is not the case that AI researchers dismiss extremely powerful AI as mere fantasy. The huge majority thinks that in the coming decades there is an even chance that we will see AI technology that will have a transformative impact on our world. While some have long timelines, many think it is possible that we have very little time before these technologies arrive. Across the three surveys, more than half think that there is a 50% chance that a human-level AI would be developed before some point in the 2060s, a time well within the lifetime of today’s young people. ## The forecast of the Metaculus community In the big visualization on AI timelines below, I have included the forecast by the Metaculus forecaster community. The forecasters on the online platform [Metaculus.com](https://www.metaculus.com/) are not experts in AI but people who dedicate their energy to making good forecasts. Research on forecasting has documented that groups of people can assign surprisingly accurate probabilities to future events when given the right incentives and good feedback.{ref}See the previously cited literature on forecasting by Barbara Mellers, Phil Tetlock, and others.{/ref} To receive this feedback, the online community at Metaculus [tracks](https://www.metaculus.com/questions/track-record/) how well its forecasts perform. What does this group of forecasters expect for the future of AI? At the time of writing, in November 2022, the forecasters believe that there is a 50/50 chance for an ‘Artificial General Intelligence’ to be ‘devised, tested, and publicly announced’ by the year 2040, less than 20 years from now. On [their](https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/) page about this specific question, you can find the precise definition of the AI system in question, how the timeline of their forecasts has changed, and the arguments of individual forecasters for how they arrived at their predictions.{ref}There are two other relevant questions on Metaculus. The [first one](https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/) asks for the date when weakly General AI will be publicly known. And [the second one](https://www.metaculus.com/questions/384/humanmachine-intelligence-parity-by-2040/) asks for the probability of ‘human/machine intelligence parity’ by 2040.{/ref} The timelines of the Metaculus community have become much shorter recently.
The expected timelines have shortened by about a decade in the spring of 2022, when several impressive AI breakthroughs happened faster than many had anticipated.{ref}Metaculus’s community prediction fell from the year 2058 in March 2022 to the year 2040 in July 2022.{/ref} ## The forecast by Ajeya Cotra The last forecast shown stems from the research of Ajeya Cotra, who works for the nonprofit Open Philanthropy.{ref}Her research was announced in various places, including the AI Alignment Forum: Ajeya Cotra (2020) – [Draft report on AI timelines](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines). As far as I know, the report itself always remained a ‘draft report’ and was published [here on Google Docs](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP). In 2022 Ajeya Cotra published a [Two-year update on my personal AI timelines](https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines).{/ref} In 2020 she published a detailed and influential study asking when the world will see transformative AI. Her timeline is not based on surveys, but on the study of long-term trends in the computation used to train AI systems. I present and discuss the long-run trends in training computation in [this companion article](https://ourworldindata.org/brief-history-of-ai). Cotra estimated that there is a 50% chance that a transformative AI system will become possible and affordable by the year 2050. This is her central estimate in her “median scenario.” Cotra emphasizes that there are substantial uncertainties around this median scenario, and she also explored two other, more extreme scenarios. The timelines for these two scenarios – her “most aggressive plausible” scenario and her “most conservative plausible” scenario – are also shown in the visualization. The span from 2040 to 2090 in Cotra’s “plausible” forecasts highlights that she believes that the uncertainty is large. The visualization also shows that Cotra updated her forecast two years after its initial publication. In 2022 Cotra published an update in which she shortened her median timeline by a full ten years.{ref}Ajeya Cotra’s [Two-year update on my personal AI timelines](https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines).{/ref} It is important to note that the definitions of the AI systems in question differ greatly across these studies. For example, the system that Cotra speaks about would have a much more transformative impact on the world than the system that the Metaculus forecasters focus on. More details can be found in the appendix and within the respective studies. <Image filename="When-do-experts-expect-Artificial-General-Intelligence-big.png" alt=""/> ## What can we learn from the forecasts? The visualization shows the forecasts of 1128 people – 812 individual AI experts, the aggregated estimates of 315 forecasters from the Metaculus platform, and the findings of the detailed study by Ajeya Cotra. There are two big takeaways from these forecasts on AI timelines: 1. There is no consensus, and the uncertainty is high. There is huge disagreement between experts about when human-level AI will be developed. Some believe that it is decades away, while others think it is probable that such systems will be developed within the next few years or months.
There is not just disagreement _between_ experts; individual experts also emphasize the large uncertainty around their own individual estimates. As always when the uncertainty is high, it is important to stress that it cuts both ways. It might be a very long time until we see human-level AI, but it also means that we might have little time to prepare. 2. At the same time, there is large agreement in the overall picture. The timelines of many experts are shorter than a century, and many have timelines that are substantially shorter than that. The majority of those who study this question believe that there is a 50% chance that transformative AI systems will be developed within the next 50 years. In this case it would plausibly be the biggest transformation in the lifetime of our children, or even in our own lifetime. The public discourse and the decision-making at major institutions have not caught up with these prospects. In discussions on the future of our world – from the future of our climate, to the future of our economies, to the future of our political institutions – the prospect of transformative AI is rarely central to the conversation. Often it is not mentioned at all, not even in a footnote. We seem to be in a situation where most people hardly think about the future of artificial intelligence, while the few who dedicate their attention to it find it plausible that one of the biggest transformations in humanity’s history is likely to happen within our lifetimes. **Acknowledgements:** I would like to thank my colleagues Natasha Ahuja, Daniel Bachler, Bastian Herre, Edouard Mathieu, Esteban Ortiz-Ospina and Hannah Ritchie for their helpful comments on drafts of this essay. And I would like to thank my colleague Charlie Giattino, who calculated the timelines for individual experts based on the data from the three survey studies and supported the work on this essay. Charlie is also one of the authors of the cited study by Zhang et al. on timelines of AI experts. ## Additional information The three cited AI expert surveys are: * Katja Grace, Zach Stein-Perlman, and Benjamin Weinstein-Raun (2022) – “[2022 Expert Survey on Progress in AI](https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/).” AI Impacts, 3 Aug. 2022. * Baobao Zhang, Noemi Dreksler, Markus Anderljung, Lauren Kahn, Charlie Giattino, Allan Dafoe, and Michael Horowitz (2022) – [Forecasting AI Progress: Evidence from a Survey of Machine Learning Researchers](https://doi.org/10.48550/arXiv.2206.04132). Published on arXiv June 8, 2022. * Ross Gruetzemacher, David Paradice, and Kang Bok Lee (2019) – [Forecasting Transformative AI: An Expert Survey](https://arxiv.org/abs/1901.08579), published on arXiv. The surveys were conducted during the following times: * Grace et al. was completed between 12 June and 3 August 2022. * Zhang et al. was completed mainly between 16 September and 13 October 2019, but due to an error some experts completed the survey between 10 and 14 March 2020. * Gruetzemacher et al. was completed in the “summer of 2018.” The surveys differ in how the question was asked and how the AI system in question was defined. In the following sections, we discuss this in detail for all cited studies. ##### The study by Grace et al. published in 2022 Survey respondents were given the following text regarding the definition of high-level machine intelligence: “The following questions ask about ‘high-level machine intelligence’ (HLMI).
Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g., being accepted as a jury member. Think feasibility, not adoption. For the purposes of this question, assume that human scientific activity continues without major negative disruption.” Each respondent was randomly assigned to give their forecasts under one of two different framings: “fixed-probability” and “fixed-years.” Those in the fixed-probability framing were asked, “How many years until you expect: A 10% probability of HLMI existing? A 50% probability of HLMI existing? A 90% probability of HLMI existing?” They responded by giving a number of years from the day they took the survey. Those in the fixed-years framing were asked, “How likely is it that HLMI exists: In 10 years? In 20 years? In 40 years?” They responded by giving a probability of that happening (the sketch at the end of this section shows how a 50% year can be read off such answers). Several studies have shown that the framing affects respondents’ timelines, with the fixed-years framing leading to longer timelines (i.e., that HLMI is further in the future). For example, in the previous edition of this survey (which asked identical questions), respondents who got the fixed-years framing gave a 50% chance of HLMI by 2068; those who got fixed-probability gave the year 2054.{ref}Grace et al. (2018) – Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts. Journal of Artificial Intelligence Research. We read both of these numbers off the chart in this publication; the years are not directly reported.{/ref} The framing results from the 2022 edition of the survey have not yet been published. In addition to this framing effect, there is a larger effect driven by how the concept of HLMI is defined. We can see this in the results from the previous edition of this survey (the results from the 2022 survey haven’t yet been published). For respondents who were given the HLMI definition above, the average forecast for a 50% chance of HLMI was 2061. A small subset of respondents was instead given another, logically similar question that asked about the full automation of labor; their average forecast for a 50% probability was 2138, a full 77 years later than the first group’s. The full automation of labor group was asked: “Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g., being accepted as a jury member. Think feasibility, not adoption. Say we have reached ‘full automation of labor’ when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers.” This question was asked under both the fixed-probability and fixed-years framings.
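Because the fixed-years framing yields probabilities at fixed horizons rather than a year, each such response has to be converted into an implied “50% year” before it can be compared with the fixed-probability answers. The minimal Python sketch below shows one way to do this with linear interpolation; it is my illustration, using a hypothetical respondent, not the conversion procedure the survey authors actually used.

```python
# Sketch: derive an implied "50% year" from fixed-years survey answers.
# Illustration only -- not the code used by Grace et al. or Zhang et al.

def year_at_probability(responses, target=0.5, survey_year=2022):
    """Linearly interpolate the year at which the stated probability of HLMI
    reaches `target`. `responses` maps years-from-now to probabilities and is
    assumed to be monotonically increasing, e.g. {10: 0.2, 20: 0.45, 40: 0.7}."""
    points = sorted(responses.items())
    for (y0, p0), (y1, p1) in zip(points, points[1:]):
        if p0 <= target <= p1 and p1 > p0:
            frac = (target - p0) / (p1 - p0)  # position between the two answers
            return survey_year + y0 + frac * (y1 - y0)
    return None  # target probability not reached within the asked horizons

# Hypothetical respondent: 20% in 10 years, 45% in 20 years, 70% in 40 years.
print(year_at_probability({10: 0.20, 20: 0.45, 40: 0.70}))  # -> 2046.0
```

The published studies describe their own conversion and aggregation procedures, which are more careful than this linear sketch; the point here is only to show what turning three stated probabilities into a single “50% year” involves.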
##### The study by Zhang et al. published in 2022 Survey respondents were given the following definition of human-level machine intelligence: “Human-level machine intelligence (HLMI) is reached when machines are collectively able to perform almost all tasks (>90% of all tasks) that are economically relevant better than the median human paid to do that task in 2019. You should ignore tasks that are legally or culturally restricted to humans, such as serving on a jury.” “Economically relevant” tasks were defined as those included in the [Occupational Information Network (O*NET) database](https://www.onetcenter.org/database.html#individual-files). O*NET is a widely used dataset of tasks carried out across a wide range of occupations. As in Grace et al. 2022, each survey respondent was randomly assigned to give their forecasts under one of two different framings: “fixed-probability” and “fixed-years.” As was found before, the fixed-years framing resulted in longer timelines on average: the year 2070 for a 50% chance of HLMI, compared to 2050 under the fixed-probability framing. ##### The study by Gruetzemacher et al. published in 2019 Survey respondents were asked the following: “These questions will ask your opinion of future AI progress with regard to human tasks. We define human tasks as all unique tasks that humans are currently paid to do. We consider human tasks as different from jobs in that an algorithm may be able to replace humans at some portion of tasks a job requires while not being able to replace humans for all of the job requirements. For example, an AI system(s) may not replace a lawyer entirely but may be able to accomplish 50% of the tasks a lawyer typically performs. In how many years do you expect AI systems to collectively be able to accomplish 99% of human tasks at or above the level of a typical human? Think feasibility.” We show the results using this definition of AI in the chart, as we judged this definition to be most comparable to the other studies included in the chart. In addition to this definition, respondents were asked about AI systems that are able to collectively accomplish 50% and 90% of human tasks, as well as “broadly capable AI systems” that are able to accomplish 90% and 99% of human tasks. All respondents in this survey received a fixed-probability framing. ##### The study by Ajeya Cotra published in 2020 Cotra’s overall aim was to estimate when we might expect “transformative artificial intelligence” (TAI), defined as “ ‘software’... that has at least as profound an impact on the world’s trajectory as the Industrial Revolution did.” Cotra focused on “a relatively concrete and easy-to-picture way that TAI could manifest: as a single computer program which performs a large enough diversity of intellectual labor at a high enough level of performance that it alone can drive a transition similar to the Industrial Revolution.” One intuitive example of such a program is the ‘virtual professional’, “a model that can do roughly everything economically productive that an intelligent and educated human could do remotely from a computer connected to the internet at a hundred-fold speedup, for costs similar to or lower than the costs of employing such a human.” When might we expect something like a virtual professional to exist? To answer this, Cotra first estimated the amount of computation that would be required to train such a system using the machine learning architectures and algorithms available to researchers in 2020. She then estimated when that amount of computation would be available at a low enough cost, based on extrapolating past trends. The estimate of training computation relies on an estimate of the amount of computation performed by the human brain each second, combined with different hypotheses for how much training would be required to reach a high enough level of capability. For example, the “lifetime anchor” hypothesis estimates the total computation performed by the human brain up to age ~32; a rough version of this calculation is sketched below.
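As a back-of-the-envelope illustration of that calculation: multiply an assumed rate of brain computation by the number of seconds lived up to age 32. Both numbers in the sketch below are placeholder assumptions chosen for illustration, not Cotra’s published estimates, which cover wide ranges.

```python
# Rough sketch of the "lifetime anchor" arithmetic: total computation
# performed by a human brain up to age ~32. Illustrative assumptions only.

SECONDS_PER_YEAR = 365.25 * 24 * 3600    # ~3.16e7 seconds per year
BRAIN_FLOP_PER_SECOND = 1e15             # assumed brain computation rate

lifetime_flop = BRAIN_FLOP_PER_SECOND * 32 * SECONDS_PER_YEAR
print(f"Lifetime-anchor training compute: ~{lifetime_flop:.0e} FLOP")
# -> ~1e+24 FLOP under these assumptions
```

Cotra considers several such anchors, each with its own probability distribution, and then asks when computation on the resulting scale becomes affordable by extrapolating past trends in hardware prices and spending.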
Each aspect of these estimates comes with a very high degree of uncertainty. Cotra writes: “The question of whether there is a sensible notion of ‘brain computation’ that can be measured in FLOP/s—and if so, what range of numerical estimates for brain FLOP/s would be reasonable—is conceptually fraught and empirically murky.” For anyone interested in the future of AI, Cotra’s study is very much worth reading in detail. She lays out clear and transparent reasons for her estimates and communicates her reasoning in great detail. Her research was announced in various places, including the AI Alignment Forum: Ajeya Cotra (2020) – [Draft report on AI timelines](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines). As far as I know, the report itself always remained a ‘draft report’ and was published [here on Google Docs](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP) (it is not uncommon in the field of AI research that articles get published in non-standard ways). In 2022 Ajeya Cotra published a [Two-year update on my personal AI timelines](https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines). ##### Other studies A very different kind of forecast that is also relevant here is the work of David Roodman. In his article [Modeling the Human Trajectory](https://www.openphilanthropy.org/research/modeling-the-human-trajectory/) he studies the history of global economic output to think about the future. He asks whether it is plausible to see economic growth that could be considered ‘transformative’ – an annual growth rate of the world economy higher than 30% – within this century (the short calculation below gives a sense of what such a growth rate would mean). One of his conclusions is that “if the patterns of long-term history continue, some sort of economic explosion will take place again, the most plausible channel being AI.” And another very different kind of forecast is Tom Davidson’s [Report on Semi-informative Priors](https://www.openphilanthropy.org/research/report-on-semi-informative-priors/), published in 2021.
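For scale, a constant 30% annual growth rate implies that the world economy doubles roughly every two and a half years. The tiny calculation below is mine, added for illustration; it is not from Roodman’s article.

```python
import math

# Doubling time of an economy growing at a constant annual rate.
def doubling_time(rate):
    return math.log(2) / math.log(1 + rate)

print(f"{doubling_time(0.30):.1f} years")  # ~2.6 years at 30% annual growth
print(f"{doubling_time(0.03):.1f} years")  # ~23.4 years at 3%, for comparison
```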
}, "_links": { "self": [ { "href": "https://owid.cloud/wp-json/wp/v2/posts/54836" } ], "about": [ { "href": "https://owid.cloud/wp-json/wp/v2/types/post" } ], "author": [ { "href": "https://owid.cloud/wp-json/wp/v2/users/2", "embeddable": true } ], "curies": [ { "href": "https://api.w.org/{rel}", "name": "wp", "templated": true } ], "replies": [ { "href": "https://owid.cloud/wp-json/wp/v2/comments?post=54836", "embeddable": true } ], "wp:term": [ { "href": "https://owid.cloud/wp-json/wp/v2/categories?post=54836", "taxonomy": "category", "embeddable": true }, { "href": "https://owid.cloud/wp-json/wp/v2/tags?post=54836", "taxonomy": "post_tag", "embeddable": true } ], "collection": [ { "href": "https://owid.cloud/wp-json/wp/v2/posts" } ], "wp:attachment": [ { "href": "https://owid.cloud/wp-json/wp/v2/media?parent=54836" } ], "version-history": [ { "href": "https://owid.cloud/wp-json/wp/v2/posts/54836/revisions", "count": 17 } ], "wp:featuredmedia": [ { "href": "https://owid.cloud/wp-json/wp/v2/media/54859", "embeddable": true } ], "predecessor-version": [ { "id": 58295, "href": "https://owid.cloud/wp-json/wp/v2/posts/54836/revisions/58295" } ] }, "author": 2, "format": "standard", "status": "publish", "sticky": false, "content": { "rendered": "\n<div class=\"blog-info\">\n<p>Our World in Data presents the data and research to make progress against the world\u2019s largest problems.<br>This article draws on data and research discussed in our entry on <strong><a href=\"https://ourworldindata.org/artificial-intelligence\" target=\"_blank\" rel=\"noopener\">Artificial Intelligence</a></strong>.</p>\n</div>\n\n\n\n<div class=\"wp-block-columns\">\n<div class=\"wp-block-column\">\n<p>Artificial intelligence (AI) that surpasses our own intelligence sounds like the stuff from science-fiction books or films. What do experts in the field of AI research think about such scenarios? Do they dismiss these ideas as fantasy, or are they taking such prospects seriously?</p>\n\n\n\n<p>A human-level AI would be a machine, or a network of machines, capable of carrying out the same range of tasks that we humans are capable of. It would be a machine that is \u201cable to learn to do anything that a human can do\u201d, as Norvig and Russell put it in their textbook on AI.{ref}Peter Norvig and Stuart Russell (2021) \u2013 Artificial Intelligence: A Modern Approach. Fourth edition. Published by Pearson.{/ref}</p>\n\n\n\n<p>It would be able to choose actions that allow the machine to achieve its goals and then carry out those actions. It would be able to do the work of a translator, a doctor, an illustrator, a teacher, a therapist, a driver, or the work of an investor. </p>\n\n\n\n<p>In recent years, several research teams contacted AI experts and asked them about their expectations for the future of machine intelligence. Such expert surveys are one of the pieces of information that we can rely on to form an idea of what the future of AI might look like.</p>\n\n\n\n<p>The chart shows the answers of 352 experts. This is from the most recent study by Katja Grace and her colleagues, conducted in the summer of 2022.{ref}A total of 4,271 AI experts were contacted; 738 responded (a 17% rate), of which 352 provided complete answers to the human-level AI question.<br><br>It\u2019s possible that the respondents were not representative of all the AI experts contacted \u2013 that is, that there was \u201csample bias.\u201d There is not enough data to rule out all potential sources of sample bias. 
After all, we don\u2019t know what the people who didn\u2019t respond to the survey, or others who weren\u2019t even contacted, believe about AI. However, there is evidence from similar surveys to suggest that at least some potential sources of bias are minimal.</p>\n\n\n\n<p>In similar surveys (e.g., <a rel=\"noreferrer noopener\" href=\"https://arxiv.org/pdf/2206.04132.pdf\" target=\"_blank\">Zhang et al. 2022</a>; <a rel=\"noreferrer noopener\" href=\"https://jair.org/index.php/jair/article/view/11222\" target=\"_blank\">Grace et al. 2018</a>), the researchers compared the group of respondents with a randomly selected, similarly sized group of non-respondents to see if they differed on measurable demographic characteristics, such as where they were educated, their gender, how many citations they had, years in the field, etc.</p>\n\n\n\n<p>In these similar surveys, the researchers found some differences between the respondents and non-respondents, but they were small. So while other, unmeasured sources of sample bias couldn\u2019t be ruled out, large bias due to the demographic characteristics that were measured could be ruled out.{/ref}</p>\n\n\n\n<p>Experts were asked when they believe there is a 50% chance that human-level AI exists.{ref}Much of the literature on AI timelines focuses on the 50% probability threshold. I think it would be valuable if this literature would additionally also focus on higher thresholds, say a probability of 80% for the development of a particular technology. In future updates of this article we will aim to broaden the focus and include such higher thresholds.{/ref} Human-level AI was defined as unaided machines being able to accomplish every task better and more cheaply than human workers. More information about the study can be found in the fold-out box at the end of this text.{ref}A discussion of the two most widely used concepts for thinking about the future of powerful AI systems \u2013 human-level AI and transformative AI \u2013 can be found in this <a href=\"https://ourworldindata.org/ai-impact\">companion article</a>.{/ref}</p>\n\n\n\n<p>Each vertical line in this chart represents the answer of one expert. The fact that there are such large differences in answers makes it clear that experts do not agree on how long it will take until such a system might be developed. A few believe that this level of technology will never be developed. Some think that it\u2019s possible, but it will take a long time. 
And many believe that it will be developed within the next few decades.</p>\n\n\n\n<p>As highlighted in the annotations, half of the experts gave a date before 2061, and 90% gave a date within the next 100 years.</p>\n</div>\n\n\n\n<div class=\"wp-block-column\"></div>\n</div>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"6102\" height=\"1726\" src=\"https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-grace.png\" alt=\"\" class=\"wp-image-54847\" srcset=\"https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-grace.png 6102w, https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-grace-400x113.png 400w, https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-grace-800x226.png 800w, https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-grace-150x42.png 150w, https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-grace-768x217.png 768w, https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-grace-1536x434.png 1536w, https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-grace-2048x579.png 2048w\" sizes=\"(max-width: 6102px) 100vw, 6102px\" /></figure>\n\n\n\n<div class=\"wp-block-columns\">\n<div class=\"wp-block-column\">\n<p>Other surveys of AI experts come to similar conclusions. In the following visualization, I have added the timelines from two earlier surveys conducted in 2018 and 2019. It is helpful to look at different surveys, as they differ in how they asked the question and how they defined human-level AI. You can find more details about these studies at the end of this text.</p>\n\n\n\n<p>In all three surveys, we see a large disagreement between experts and they also express large uncertainties about their own individual forecasts.{ref}The visualization shows when individual experts gave a 50% chance of human-level machine intelligence. The surveys also include data on when these experts gave much lower chances (e.g., ~10%) as well as much higher ones (~90%), and the spread between the respective dates is often considerable, expressing the AI experts range of their individual uncertainty. 
For example, the average across individual experts in the Zhang et al study gave a 10% chance of human-level machine intelligence by 2035, a 50% chance by 2060, and a 90% chance by 2105.{/ref} </p>\n</div>\n\n\n\n<div class=\"wp-block-column\"></div>\n</div>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"9571\" height=\"3275\" src=\"https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-surveys.png\" alt=\"\" class=\"wp-image-54848\" srcset=\"https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-surveys.png 9571w, https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-surveys-400x137.png 400w, https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-surveys-800x274.png 800w, https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-surveys-150x51.png 150w, https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-surveys-768x263.png 768w, https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-surveys-1536x526.png 1536w, https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-surveys-2048x701.png 2048w\" sizes=\"(max-width: 9571px) 100vw, 9571px\" /></figure>\n\n\n\n<h4>What should we make of the timelines of AI experts?</h4>\n\n\n\n<p>Expert surveys are one piece of information to consider when we think about the future of AI, but we should not overstate the results of these surveys. Experts in a particular technology are not necessarily experts in making predictions about the future of that technology.</p>\n\n\n\n<p>Experts in many fields do not have a good track record in making forecasts about their own field, as researchers including Barbara Mellers, Phil Tetlock, and others have shown.{ref}Mellers, B., Tetlock, P., & Arkes, H. R. (2019). Forecasting tournaments, epistemic humility and attitude depolarization. Cognition, 188, 19-26.</p>\n\n\n\n<p>Tetlock, P. (2005) \u2013 Expert political judgment: How good is it? How can we know? Princeton, NJ: Princeton University Press</p>\n\n\n\n<p>Philip E. Tetlock and Dan Gardner (2015) \u2013 Superforecasting: The Art and Science of Prediction.{/ref} The history of flight includes a striking example of such failure. Wilbur Wright is quoted as saying, “I confess that in 1901, I said to my brother Orville that man would not fly for 50 years.” Two years later, \u2018man\u2019 was not only flying, but it was these very men who achieved the feat.{ref}Another example is Ernest Rutherford, father of nuclear physics, calling the possibility of harnessing nuclear energy “moonshine.” The research paper by John Jenkin discusses why. John G. Jenkin (2011) \u2013 Atomic Energy is \u2018\u2018Moonshine\u2019\u2019: What did Rutherford Really Mean?. Published in Physics in Perspective. DOI 10.1007/s00016-010-0038-1{/ref} </p>\n\n\n\n<p>Additionally these studies often find large \u2018framing effects\u2019, two logically identical questions get answered in very different ways depending on how exactly the questions are worded.{ref}This is discussed in some more detail for the study by Grace et al. in the Appendix.{/ref}</p>\n\n\n\n<p>What I do take away from these surveys however, is that the majority of AI experts take the prospect of very powerful AI technology seriously. 
It is not the case that AI researchers dismiss extremely powerful AI as mere fantasy. </p>\n\n\n\n<p>The huge majority thinks that in the coming decades there is an even chance that we will see AI technology which will have a transformative impact on our world. While some have long timelines, many think it is possible that we have very little time before these technologies arrive. Across the three surveys more than half think that there is a 50% chance that a human-level AI would be developed before some point in the 2060s, a time well within the lifetime of today\u2019s young people.</p>\n\n\n\n<h4>The forecast of the Metaculus community</h4>\n\n\n\n<p>In the big visualization on AI timelines below, I have included the forecast by the Metaculus forecaster community.</p>\n\n\n\n<p>The forecasters on the online platform <a href=\"https://www.metaculus.com/\">Metaculus.com</a> are not experts in AI but people who dedicate their energy to making good forecasts. Research on forecasting has documented that groups of people can assign surprisingly accurate probabilities to future events when given the right incentives and good feedback.{ref}See the previously cited literature on forecasting by Barbara Mellers, Phil Tetlock, and others.{/ref} To receive this feedback, the online community at Metaculus <a href=\"https://www.metaculus.com/questions/track-record/\">tracks</a> how well they perform in their forecasts. </p>\n\n\n\n<p>What does this group of forecasters expect for the future of AI?</p>\n\n\n\n<p>At the time of writing, in November 2022, the forecasters believe that there is a 50/50-chance for an \u2018Artificial General Intelligence\u2019 to be \u2018devised, tested, and publicly announced\u2019 by the year 2040, less than 20 years from now.</p>\n\n\n\n<p>On <a href=\"https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/\">their</a> page about this specific question, you can find the precise definition of the AI system in question, how the timeline of their forecasts has changed, and the arguments of individual forecasters for how they arrived at their predictions.{ref}There are two other relevant questions on Metaculus. The <a href=\"https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/\">first one</a> asks for the date when weakly General AI will be publicly known. And <a href=\"https://www.metaculus.com/questions/384/humanmachine-intelligence-parity-by-2040/\">the second one</a> is asking for the probability of \u2018human/machine intelligence parity\u2019 by 2040.{/ref}</p>\n\n\n\n<p>The timelines of the Metaculus community have become much shorter recently. The expected timelines have shortened by about a decade in the spring of 2022, when several impressive AI breakthroughs happened faster than many had anticipated.{ref}Metaculus\u2019s community prediction fell from the year 2058 in March 2022 to the year 2040 in July 2022.{/ref}</p>\n\n\n\n<h4>The forecast by Ajeya Cotra</h4>\n\n\n\n<p>The last shown forecast stems from the research by Ajeya Cotra, who works for the nonprofit Open Philanthropy.{ref}Her research was announced in various places, including the AI Alignment Forum: Ajeya Cotra (2020) \u2013 <a href=\"https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines\">Draft report on AI timelines</a>. 
As far as I know, the report itself always remained a ‘draft report’ and was published <a href="https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP">here on Google Docs</a>.</p>

<p>In 2022 Ajeya Cotra published a <a href="https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines">Two-year update on my personal AI timelines</a>.{/ref} In 2020 she published a detailed and influential study asking when the world might see transformative AI. Her timeline is not based on surveys, but on the study of long-term trends in the computation used to train AI systems. I present and discuss these long-run trends in training computation in <a href="https://ourworldindata.org/brief-history-of-ai">this companion article</a>.</p>

<p>Cotra estimated that there is a 50% chance that a transformative AI system will become possible and affordable by the year 2050. This is the central estimate in her “median scenario.” Cotra emphasizes that there are substantial uncertainties around this median scenario, and she also explored two other, more extreme, scenarios. The timelines for these two scenarios – her “most aggressive plausible” scenario and her “most conservative plausible” scenario – are also shown in the visualization. The span from 2040 to 2090 in Cotra’s “plausible” forecasts highlights that she believes the uncertainty is large.</p>

<p>The visualization also shows that Cotra updated her forecast two years after its initial publication. In 2022 she published an update in which she shortened her median timeline by a full ten years.{ref}Ajeya Cotra’s <a href="https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines">Two-year update on my personal AI timelines</a>.{/ref}</p>

<p>It is important to note that the definitions of the AI systems in question differ substantially across these studies. For example, the system that Cotra speaks about would have a much more transformative impact on the world than the system that the Metaculus forecasters focus on.
More details can be found in the appendix and in the respective studies.</p>

<figure class="wp-block-image size-full wp-block-full-content-width"><img src="https://owid.cloud/app/uploads/2022/12/When-do-experts-expect-Artificial-General-Intelligence-big.png" alt="" class="wp-image-54846"/></figure>

<h4>What can we learn from the forecasts?</h4>

<p>The visualization shows the forecasts of 1,128 people – 812 individual AI experts, the aggregated estimates of 315 forecasters from the Metaculus platform, and the findings of the detailed study by Ajeya Cotra.</p>

<p>There are two big takeaways from these forecasts on AI timelines:</p>

<ol><li>There is no consensus, and the uncertainty is high. There is huge disagreement between experts about when human-level AI will be developed. Some believe that it is decades away, while others think it is probable that such systems will be developed within the next few years or months.<br><br>There is not just disagreement <em>between</em> experts; individual experts also emphasize the large uncertainty around their own estimates. As always when uncertainty is high, it is important to stress that it cuts both ways. It might be a very long time until we see human-level AI, but we might also have very little time to prepare.</li></ol>

<ol start="2"><li>At the same time, there is large agreement on the overall picture. The timelines of many experts are shorter than a century, and many have timelines that are substantially shorter than that. The majority of those who study this question believe that there is a 50% chance that transformative AI systems will be developed within the next 50 years. If so, it would plausibly be the biggest transformation in the lifetime of our children, or even in our own lifetime.</li></ol>

<p>The public discourse and the decision-making at major institutions have not caught up with these prospects. In discussions about the future of our world – from the future of our climate, to the future of our economies, to the future of our political institutions – the prospect of transformative AI is rarely central to the conversation.
Often it is not mentioned at all, not even in a footnote.</p>

<p>We seem to be in a situation where most people hardly think about the future of artificial intelligence, while the few who dedicate their attention to it consider it plausible that one of the biggest transformations in humanity’s history will happen within our lifetimes.</p>

<hr class="wp-block-separator"/>

<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>

<p><strong>Acknowledgements:</strong> I would like to thank my colleagues Natasha Ahuja, Daniel Bachler, Bastian Herre, Edouard Mathieu, Esteban Ortiz-Ospina and Hannah Ritchie for their helpful comments on drafts of this essay.</p>

<p>And I would like to thank my colleague Charlie Giattino, who calculated the timelines for individual experts based on the data from the three survey studies and supported the work on this essay. Charlie is also one of the authors of the cited study by Zhang et al. on the timelines of AI experts.</p>

<hr class="wp-block-separator"/>

<block type="additional-information" default-open="false">
<content>

<h3>More information about the studies and forecasts discussed in this essay</h3>

<div class="wp-block-columns is-style-sticky-right">
<div class="wp-block-column">
<p>The three cited surveys of AI experts are:</p>

<ul><li>Katja Grace, Zach Stein-Perlman, and Benjamin Weinstein-Raun (2022) – “<a href="https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/">2022 Expert Survey on Progress in AI</a>.” AI Impacts, 3 Aug. 2022.</li><li>Baobao Zhang, Noemi Dreksler, Markus Anderljung, Lauren Kahn, Charlie Giattino, Allan Dafoe, and Michael Horowitz (2022) – <a href="https://doi.org/10.48550/arXiv.2206.04132">Forecasting AI Progress: Evidence from a Survey of Machine Learning Researchers</a>. Published on arXiv June 8, 2022.</li><li>Ross Gruetzemacher, David Paradice, and Kang Bok Lee (2019) – <a href="https://arxiv.org/abs/1901.08579">Forecasting Transformative AI: An Expert Survey</a>. Published on arXiv.</li></ul>

<p>The surveys were conducted at the following times:</p>

<ul><li>Grace et al. was completed between 12 June and 3 August 2022.</li><li>Zhang et al. was completed mainly between 16 September and 13 October 2019, but due to an error some experts completed the survey between 10 and 14 March 2020.</li><li>Gruetzemacher et al. was completed in the “summer of 2018.”</li></ul>

<p>The surveys differ in how the question was asked and how the AI system in question was defined. In the following sections we discuss this in detail for each cited study.</p>

<h5>The study by Grace et al. published in 2022</h5>

<p>Survey respondents were given the following text regarding the definition of high-level machine intelligence:</p>

<p>“The following questions ask about ‘high-level machine intelligence’ (HLMI). Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g., being accepted as a jury member. Think feasibility, not adoption. For the purposes of this question, assume that human scientific activity continues without major negative disruption.”</p>

<p>Each respondent was randomly assigned to give their forecasts under one of two different framings: “fixed-probability” and “fixed-years.”</p>

<p>Those in the fixed-probability framing were asked, “How many years until you expect: A 10% probability of HLMI existing? A 50% probability of HLMI existing? A 90% probability of HLMI existing?” They responded by giving a number of years from the day they took the survey.</p>

<p>Those in the fixed-years framing were asked, “How likely is it that HLMI exists: In 10 years? In 20 years? In 40 years?” They responded by giving a probability of that happening.</p>
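<p>Either set of answers pins down three points on a respondent’s cumulative probability distribution, from which a “year of 50% probability” can be derived. The following sketch shows the basic idea for a fixed-years answer, using simple linear interpolation between the elicited points; the probabilities are hypothetical, and the studies’ own aggregation methods are more sophisticated.</p>

<pre><code>import numpy as np

# One hypothetical respondent under the fixed-years framing: probabilities
# that HLMI exists in 10, 20, and 40 years (these numbers are made up).
horizons_years = np.array([10.0, 20.0, 40.0])
probabilities = np.array([0.10, 0.35, 0.60])

# Treat the answers as points on the respondent's cumulative distribution
# function and interpolate linearly to find where it crosses 50%.
p50_horizon = np.interp(0.5, probabilities, horizons_years)

survey_year = 2022
print(f"Implied year of 50% probability: ~{survey_year + p50_horizon:.0f}")
# 0.5 falls between 0.35 (20 years) and 0.60 (40 years), so
# p50_horizon = 20 + (0.50 - 0.35) / (0.60 - 0.35) * (40 - 20) = 32 years -> ~2054
</code></pre>

<p>Under the fixed-probability framing no interpolation is needed for the 50% estimate, since respondents state the number of years to a 50% probability directly.</p>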
<p>Several studies have shown that the framing affects respondents’ timelines, with the fixed-years framing leading to longer timelines (i.e., placing HLMI further in the future). For example, in the previous edition of this survey (which asked identical questions), respondents who got the fixed-years framing gave a 50% chance of HLMI by 2068; those who got fixed-probability gave the year 2054.{ref}Grace et al. (2018) – Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts. Journal of Artificial Intelligence Research. We read both of these numbers off the chart in this publication; these years are not directly reported.{/ref} The framing results from the 2022 edition of the survey have not yet been published.</p>

<p>In addition to this framing effect, there is a larger effect driven by how the concept of HLMI is defined. We can see this in the results from the previous edition of the survey (the results from the 2022 survey haven’t yet been published). For respondents who were given the HLMI definition above, the average forecast for a 50% chance of HLMI was 2061. A small subset of respondents was instead given another, logically similar question that asked about the full automation of labor; their average forecast for a 50% probability was 2138, a full 77 years later than the first group.</p>

<p>The full automation of labor group was asked: “Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g., being accepted as a jury member. Think feasibility, not adoption. Say we have reached ‘full automation of labor’ when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers.” This question was asked under both the fixed-probability and fixed-years framings.</p>

<h5>The study by Zhang et al. published in 2022</h5>

<p>Survey respondents were given the following definition of human-level machine intelligence: “Human-level machine intelligence (HLMI) is reached when machines are collectively able to perform almost all tasks (>90% of all tasks) that are economically relevant better than the median human paid to do that task in 2019. You should ignore tasks that are legally or culturally restricted to humans, such as serving on a jury.”</p>

<p>“Economically relevant” tasks were defined as those included in the <a href="https://www.onetcenter.org/database.html#individual-files">Occupational Information Network (O*NET) database</a>.
O*NET is a widely used database of the tasks carried out across a wide range of occupations.</p>

<p>As in Grace et al. 2022, each survey respondent was randomly assigned to give their forecasts under one of two different framings: “fixed-probability” and “fixed-years.” As was found before, the fixed-years framing resulted in longer timelines on average: the year 2070 for a 50% chance of HLMI, compared to 2050 under the fixed-probability framing.</p>

<h5>The study by Gruetzemacher et al. published in 2019</h5>

<p>Survey respondents were asked the following: “These questions will ask your opinion of future AI progress with regard to human tasks. We define human tasks as all unique tasks that humans are currently paid to do. We consider human tasks as different from jobs in that an algorithm may be able to replace humans at some portion of tasks a job requires while not being able to replace humans for all of the job requirements. For example, an AI system(s) may not replace a lawyer entirely but may be able to accomplish 50% of the tasks a lawyer typically performs. In how many years do you expect AI systems to collectively be able to accomplish 99% of human tasks at or above the level of a typical human? Think feasibility.”</p>

<p>We show the results for this definition in the chart, as we judged it to be the most comparable to the definitions used in the other studies.</p>

<p>In addition to this definition, respondents were asked about AI systems that are able to collectively accomplish 50% and 90% of human tasks, as well as “broadly capable AI systems” that are able to accomplish 90% and 99% of human tasks.</p>

<p>All respondents in this survey received the fixed-probability framing.</p>

<h5>The study by Ajeya Cotra published in 2020</h5>

<p>Cotra’s overall aim was to estimate when we might expect “transformative artificial intelligence” (TAI), defined as “‘software’… that has at least as profound an impact on the world’s trajectory as the Industrial Revolution did.”</p>

<p>Cotra focused on “a relatively concrete and easy-to-picture way that TAI could manifest: as a single computer program which performs a large enough diversity of intellectual labor at a high enough level of performance that it alone can drive a transition similar to the Industrial Revolution.”</p>

<p>One intuitive example of such a program is the ‘virtual professional’, “a model that can do roughly everything economically productive that an intelligent and educated human could do remotely from a computer connected to the internet at a hundred-fold speedup, for costs similar to or lower than the costs of employing such a human.”</p>

<p>When might we expect something like a virtual professional to exist?</p>

<p>To answer this, Cotra first estimated the amount of computation that would be required to train such a system using the machine learning architectures and algorithms available to researchers in 2020. She then estimated when that amount of computation would become available at a low enough cost, based on extrapolating past trends.</p>

<p>The estimate of the required training computation relies on an estimate of the amount of computation performed by the human brain each second, combined with different hypotheses for how much training would be required to reach a high enough level of capability.</p>

<p>For example, the “lifetime anchor” hypothesis estimates the total computation performed by the human brain up to age ~32.</p>
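<p>To make the structure of this reasoning concrete, here is a minimal back-of-the-envelope sketch. All of the numbers in it are assumptions chosen for illustration – a round brain-computation figure, a hypothetical compute budget, and a hypothetical price-performance trend – not the estimates from Cotra’s report, which considers wide ranges around each such parameter.</p>

<pre><code>import math

# Step 1: a "lifetime anchor"-style estimate of training computation.
# Both inputs are illustrative assumptions, not Cotra's numbers.
brain_flop_per_second = 1e15                  # assumed rough brain-computation figure
seconds_to_age_32 = 32 * 365.25 * 24 * 3600   # about 1e9 seconds

lifetime_flop = brain_flop_per_second * seconds_to_age_32
print(f"Lifetime-anchor training compute: ~{lifetime_flop:.0e} FLOP")  # ~1e24

# Step 2: when does a given training run become affordable?
# Extrapolate an assumed trend in FLOP per dollar.
flop_per_dollar_now = 1e17    # assumed price-performance in the base year
budget_dollars = 1e9          # assumed spending on a single training run
doubling_time_years = 2.5     # assumed doubling time of FLOP per dollar
base_year = 2020

affordable_now = flop_per_dollar_now * budget_dollars  # 1e26 FLOP in the base year
# Under the lifetime anchor above (~1e24 FLOP), training would already be
# affordable in this toy parameterization; a more demanding hypothesis:
target_flop = 1e30
years_until_affordable = doubling_time_years * math.log2(target_flop / affordable_now)
print(f"Affordable around ~{base_year + years_until_affordable:.0f}")  # ~2053 here
</code></pre>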
<p>Each aspect of these estimates comes with a very high degree of uncertainty. Cotra writes: “The question of whether there is a sensible notion of ‘brain computation’ that can be measured in FLOP/s—and if so, what range of numerical estimates for brain FLOP/s would be reasonable—is conceptually fraught and empirically murky.”</p>

<p>For anyone interested in the future of AI, Cotra’s study is well worth reading in detail. She lays out good and transparent reasons for her estimates and communicates her reasoning in great detail.</p>

<p>Her research was announced in various places, including the AI Alignment Forum: Ajeya Cotra (2020) – <a href="https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines">Draft report on AI timelines</a>. As far as I know, the report itself always remained a ‘draft report’ and was published <a href="https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP">here on Google Docs</a> (it is not uncommon in the field of AI research that articles get published in non-standard ways). In 2022 Ajeya Cotra published a <a href="https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines">Two-year update on my personal AI timelines</a>.</p>

<h5>Other studies</h5>

<p>A very different kind of forecast that is also relevant here is the work of David Roodman. In his article <a href="https://www.openphilanthropy.org/research/modeling-the-human-trajectory/">Modeling the Human Trajectory</a>, he studies the history of global economic output to think about the future. He asks whether it is plausible to see economic growth that could be considered ‘transformative’ – an annual growth rate of the world economy higher than 30% – within this century.
One of his conclusions is that “if the patterns of long-term history continue, some sort of economic explosion will take place again, the most plausible channel being AI.”</p>

<p>And another very different kind of forecast is Tom Davidson’s <a href="https://www.openphilanthropy.org/research/report-on-semi-informative-priors/">Report on Semi-informative Priors</a>, published in 2021.</p>
</div>

<div class="wp-block-column"></div>
</div>

</content>
</block>