posts: 54765
Data license: CC-BY
id | title | slug | type | status | content | archieml | archieml_update_statistics | published_at | updated_at | gdocSuccessorId | authors | excerpt | created_at_in_wordpress | updated_at_in_wordpress | featured_image | formattingOptions | markdown | wpApiSnapshot |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
54765 | The brief history of artificial intelligence: The world has changed fast – what might be next? | brief-history-of-ai | post | publish | <!-- wp:html --> <div class="blog-info"> <p>Our World in Data presents the data and research to make progress against the world’s largest problems.<br>This article draws on data and research discussed in our entry on <strong><a href="https://ourworldindata.org/artificial-intelligence" target="_blank" rel="noopener">Artificial Intelligence</a></strong>.</p> </div> <!-- /wp:html --> <!-- wp:paragraph --> <p>To see what the future might look like, it is often helpful to study our history. This is what I will do in this article. I retrace the brief history of computers and artificial intelligence to see what we can expect for the future.</p> <!-- /wp:paragraph --> <!-- wp:heading {"level":4} --> <h4>How did we get here?</h4> <!-- /wp:heading --> <!-- wp:columns --> <div class="wp-block-columns"><!-- wp:column --> <div class="wp-block-column"><!-- wp:paragraph --> <p>How rapidly the world has changed becomes clear from how even quite recent computer technology feels ancient to us today. Mobile phones in the ’90s were big bricks with tiny green displays. Two decades before that, the main storage for computers was punch cards. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>In a short period, computers evolved so quickly and became such an integral part of our daily lives that it is easy to forget how recent this technology is. 
The first digital computers were only invented about eight decades ago, as the timeline shows.</p> <!-- /wp:paragraph --></div> <!-- /wp:column --> <!-- wp:column --> <div class="wp-block-column"></div> <!-- /wp:column --></div> <!-- /wp:columns --> <!-- wp:image {"id":54772,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image size-full"><img src="https://owid.cloud/app/uploads/2022/12/Computer-timeline.png" alt="" class="wp-image-54772"/></figure> <!-- /wp:image --> <!-- wp:columns --> <div class="wp-block-columns"><!-- wp:column --> <div class="wp-block-column"><!-- wp:paragraph --> <p>Since the early days of this history, some computer scientists have strived to make machines as intelligent as humans. The next timeline shows some of the notable artificial intelligence (AI) systems and describes what they were capable of. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The first system I mention is Theseus. It was built by Claude Shannon in 1950 and was a remote-controlled mouse that was able to find its way out of a labyrinth and could remember its course.{ref}On Theseus, see Daniel Klein (2019) – <a href="https://web.archive.org/web/20220125004420/https://www.technologyreview.com/2018/12/19/138508/mighty-mouse/">Mighty mouse</a>, published in MIT Technology Review. 
And <a href="https://www.youtube.com/watch?v=_9_AEVQ_p74">this video on YouTube</a> of a presentation by its inventor Claude Shannon.{/ref} In seven decades the abilities of artificial intelligence have come a long way.</p> <!-- /wp:paragraph --></div> <!-- /wp:column --> <!-- wp:column --> <div class="wp-block-column"></div> <!-- /wp:column --></div> <!-- /wp:columns --> <!-- wp:image {"id":54774,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image size-full"><img src="https://owid.cloud/app/uploads/2022/12/ai-timeline.png" alt="" class="wp-image-54774"/></figure> <!-- /wp:image --> <!-- wp:heading {"level":4} --> <h4>The language and image recognition capabilities of AI systems have developed very rapidly</h4> <!-- /wp:heading --> <!-- wp:columns --> <div class="wp-block-columns"><!-- wp:column {"width":"50%"} --> <div class="wp-block-column" style="flex-basis:50%"><!-- wp:paragraph --> <p>The chart shows how we got here by zooming into the last two decades of AI development. The plotted data stems from a number of tests in which human and AI performance were evaluated in five different domains, from handwriting recognition to language understanding. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Within each of the five domains, the initial performance of the AI system is set to -100, and human performance in these tests is used as a baseline that is set to zero. This means that when a model’s performance crosses the zero line, the AI system scored more points in the relevant test than the humans who did the same test.{ref}The chart shows that the speed with which these AI technologies developed increased over time. Systems for which development was started early – handwriting and speech recognition – took more than a decade to approach human-level performance, while more recent AI developments led to systems that overtook humans in the span of only a few years. However, one should not overstate this point. 
To some extent this is dependent on when the researchers started to compare machine and human performance. One could have started evaluating the system for language understanding much earlier and its development would appear much slower in this presentation of the data.{/ref}</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Just 10 years ago, no machine could reliably provide language or image recognition at a human level. But, as the chart shows, AI systems have become steadily more capable and are now beating humans in <em>tests</em> in all these domains.{ref}It is important to remember that while these are remarkable achievements — and show very rapid gains — these are the results from specific benchmarking tests. Outside of tests, AI models can fail in surprising ways and do not reliably achieve performance that is comparable with human capabilities.{/ref} </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Outside of these standardized tests the performance of these AIs is mixed. In some real-world cases these systems are still performing much worse than humans. On the other hand, some implementations of such AI systems are already so cheap that they are available on the phone in your pocket: image recognition categorizes your photos and speech recognition transcribes what you dictate.</p> <!-- /wp:paragraph --></div> <!-- /wp:column --> <!-- wp:column {"width":"50%"} --> <div class="wp-block-column" style="flex-basis:50%"><!-- wp:paragraph --> <p><strong>Language and image recognition capabilities of AI systems have improved rapidly</strong>{ref}Data from Kiela et al. (2021) – Dynabench: Rethinking Benchmarking in NLP. 
arXiv:2104.14337v1; <a href="https://doi.org/10.48550/arXiv.2104.14337">https://doi.org/10.48550/arXiv.2104.14337</a> {/ref}</p> <!-- /wp:paragraph --> <!-- wp:image {"id":54776,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image size-full"><img src="https://owid.cloud/app/uploads/2022/12/AI-performance_Dynabench-paper.png" alt="" class="wp-image-54776"/></figure> <!-- /wp:image --></div> <!-- /wp:column --></div> <!-- /wp:columns --> <!-- wp:heading {"level":4} --> <h4>From image recognition to image generation</h4> <!-- /wp:heading --> <!-- wp:columns --> <div class="wp-block-columns"><!-- wp:column --> <div class="wp-block-column"><!-- wp:paragraph --> <p>The previous chart showed the rapid advances in the perceptive abilities of artificial intelligence. AI systems have also become much more capable of generating images. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>This series of nine images shows the development over the last nine years. None of the people in these images exist; all of them were generated by an AI system.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The series begins with an image from 2014 in the top left, a primitive image of a pixelated face in black and white. As the first image in the second row shows, just three years later AI systems were already able to generate images that were hard to differentiate from a photograph.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>In recent years, the capability of AI systems has become much more impressive still. While the early systems focused on generating images of faces, these newer models broadened their capabilities to text-to-image generation based on almost any prompt. The image in the bottom right shows that even the most challenging prompts – such as <em>“A Pomeranian is sitting on the King’s throne wearing a crown. 
Two tiger soldiers are standing next to the throne”</em> – are turned into photorealistic images within seconds.{ref}Because these systems have become so powerful, the latest AI systems often don’t allow the user to generate images of human faces to prevent abuse.{/ref}</p> <!-- /wp:paragraph --></div> <!-- /wp:column --> <!-- wp:column --> <div class="wp-block-column"><!-- wp:paragraph --> <p><strong>Timeline of images generated by artificial intelligence</strong>{ref}The relevant publications are the following:</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>2014: Goodfellow et al:<a href="https://arxiv.org/abs/1406.2661"> Generative Adversarial Networks</a></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>2015: Radford, Metz, and Chintala:<a href="https://arxiv.org/abs/1511.06434"> Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks</a></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>2016: Liu and Tuzel:<a href="https://arxiv.org/abs/1606.07536"> Coupled Generative Adversarial Networks</a></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>2017: Karras et al:<a href="https://arxiv.org/abs/1710.10196"> Progressive Growing of GANs for Improved Quality, Stability, and Variation</a></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>2018: Karras, Laine, and Aila:<a href="https://arxiv.org/abs/1812.04948"> A Style-Based Generator Architecture for Generative Adversarial Networks</a> (StyleGAN from NVIDIA)</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>2019: Karras et al:<a href="https://arxiv.org/abs/1912.04958"> Analyzing and Improving the Image Quality of StyleGAN</a></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>AI-generated faces generated by this technology can be found on <a href="https://thispersondoesnotexist.com/">thispersondoesnotexist.com</a>.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>2020: Ho, Jain, and Abbeel:<a href="https://arxiv.org/abs/2006.11239"> Denoising Diffusion 
Probabilistic Models</a></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>2021: Ramesh et al:<a href="https://arxiv.org/abs/2102.12092"> Zero-Shot Text-to-Image Generation</a> (first DALL-E from OpenAI;<a href="https://openai.com/blog/dall-e/"> blog post</a>). See also Ramesh et al (2022) –<a href="https://cdn.openai.com/papers/dall-e-2.pdf"> Hierarchical Text-Conditional Image Generation with CLIP Latents</a> (DALL-E 2 from OpenAI;<a href="https://openai.com/dall-e-2/"> blog post</a>).</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>2022: Saharia et al: <a href="https://arxiv.org/abs/2205.11487">Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding</a> (Google’s Imagen;<a href="https://imagen.research.google/"> blog post</a>){/ref}</p> <!-- /wp:paragraph --> <!-- wp:image {"id":54778,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image size-full"><img src="https://owid.cloud/app/uploads/2022/12/Timeline-of-AI-generated-faces.png" alt="" class="wp-image-54778"/></figure> <!-- /wp:image --></div> <!-- /wp:column --></div> <!-- /wp:columns --> <!-- wp:heading {"level":4} --> <h4>Language recognition and production is developing fast</h4> <!-- /wp:heading --> <!-- wp:columns --> <div class="wp-block-columns"><!-- wp:column {"width":"50%"} --> <div class="wp-block-column" style="flex-basis:50%"><!-- wp:paragraph --> <p>Just as striking as the advances of image-generating AIs is the rapid development of systems that parse and respond to human language. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Shown in the image are examples from an AI system developed by Google called PaLM. In these six examples, the system was asked to explain six different jokes. 
I find the explanation in the bottom right particularly remarkable: the AI explains an anti-joke that is specifically meant to confuse the listener.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>AIs that produce language have entered our world in many ways over the last few years. Emails get auto-completed, massive amounts of online texts get translated, videos get automatically transcribed, school children use language models to do their homework, reports get auto-generated, and media outlets <a href="https://en.wikipedia.org/wiki/Automated_journalism">publish</a> AI-generated journalism.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>AI systems are not yet able to produce long, coherent texts. In the future, we will see whether the recent developments will slow down – or even end – or whether we will one day read a bestselling novel written by an AI.</p> <!-- /wp:paragraph --></div> <!-- /wp:column --> <!-- wp:column {"width":"50%"} --> <div class="wp-block-column" style="flex-basis:50%"><!-- wp:paragraph --> <p><strong>Output of the AI system PaLM after being asked to interpret six different jokes</strong>{ref}From Chowdhery et al (2022) –<a href="https://arxiv.org/pdf/2204.02311v2.pdf"> PaLM: Scaling Language Modeling with Pathways</a>. 
Published on arXiv on 7 Apr 2022.{/ref}</p> <!-- /wp:paragraph --> <!-- wp:image --> <figure class="wp-block-image"><img src="https://lh3.googleusercontent.com/jJ58V3vrTBw9cg2lzM-w2xiU7ExufRO7WNJUCp7a3ZGf6c79LjqrusGmFaF8nMUEtn-gH3K7J1CHHG3jMW4WotJnNjGYhCVwt3Ou5g66geZDw81yiwez1OxPB80E6TmoRNHa9dNcch9TLDj5ruQBHDvkPpl5Hpl6TtnPVXvdh0mp4jFmmiBu0Wmxsp1mKA" alt=""/></figure> <!-- /wp:image --></div> <!-- /wp:column --></div> <!-- /wp:columns --> <!-- wp:heading {"level":3} --> <h3>Where we are now: AI is here</h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>These rapid advances in AI capabilities have made it possible to use machines in a wide range of new domains:</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>When you book a flight, it is often an artificial intelligence, and no longer a human, that <a href="https://www.bloomberg.com/news/articles/2022-10-20/artificial-intelligence-helps-airlines-find-the-right-prices-for-flight-tickets">decides</a> what you pay. When you get to the airport, it is an AI system that <a href="https://www.sourcesecurity.com/news/co-2166-ga.132.html">monitors</a> what you do at the airport. And once you are on the plane, an AI system assists the pilot in <a href="https://www.airbus.com/en/innovation/industry-4-0/artificial-intelligence">flying</a> you to your destination. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>AI systems also increasingly determine whether you <a href="https://www.brookings.edu/research/reducing-bias-in-ai-based-financial-services/">get a loan</a>, are <a href="https://theconversation.com/ai-algorithms-intended-to-root-out-welfare-fraud-often-end-up-punishing-the-poor-instead-131625">eligible</a> for welfare, or get <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G">hired</a> for a particular job. 
Increasingly they help determine who gets <a href="https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/">released from jail</a>.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Several governments are purchasing <a href="https://en.wikipedia.org/wiki/Lethal_autonomous_weapon">autonomous weapons systems</a> for warfare, and some are using AI systems for <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html">surveillance and oppression</a>. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>AI systems <a href="https://en.wikipedia.org/wiki/GitHub_Copilot">help</a> to program the software you use and <a href="https://en.wikipedia.org/wiki/Machine_translation">translate</a> the texts you read. <a href="https://en.wikipedia.org/wiki/Virtual_assistant">Virtual assistants</a>, operated by speech recognition, have entered many households over the last decade. Now <a href="https://en.wikipedia.org/wiki/Self-driving_car">self-driving cars</a> are becoming a reality. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>In the last few years, AI systems <a href="https://www.nature.com/articles/s42254-022-00518-3">helped</a> <a href="https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/">to</a> <a href="https://www.nature.com/articles/d41586-022-00997-5">make</a> <a href="https://www.nature.com/articles/d41586-022-03209-2">progress</a> on some of the hardest problems in science.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Large AIs called <a href="https://en.wikipedia.org/wiki/Recommender_system">recommender systems</a> determine what you see on social media, which products are shown to you in online shops, and what gets recommended to you on YouTube. 
Increasingly they are not just recommending the media we consume, but based on their capacity to generate images and texts, they are also <a href="https://www.nature.com/articles/d41586-021-00530-0">creating</a> the media we consume. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Artificial intelligence is no longer a technology of the future; AI is here, and much of what is reality now would have looked like sci-fi just recently. It is a technology that already impacts all of us, and the list above includes just a few of its <a href="https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence">many applications</a>. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The wide range of listed applications makes clear that this is a very general technology that can be used by people for some extremely good goals – and some extraordinarily bad ones, too. For such ‘dual use technologies’, it is important that all of us develop an understanding of what is happening and how we want the technology to be used.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Just two decades ago the world was very different. What might AI technology be capable of in the future?</p> <!-- /wp:paragraph --> <!-- wp:heading {"level":3} --> <h3>What is next? </h3> <!-- /wp:heading --> <!-- wp:columns --> <div class="wp-block-columns"><!-- wp:column --> <div class="wp-block-column"><!-- wp:paragraph --> <p>The AI systems that we just considered are the result of decades of steady advances in AI technology. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The big chart below brings this history over the last eight decades into perspective. It is based on the dataset produced by Jaime Sevilla and colleagues.{ref}See the footnote on the title of the chart for the references and additional information.{/ref}</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Each small circle in this chart represents one AI system. 
The circle’s position on the horizontal axis indicates when the AI system was built, and its position on the vertical axis shows the amount of computation that was used to train the particular AI system.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Training computation is measured in <em>floating point operations</em>, or FLOP for short. One FLOP is equivalent to one addition, subtraction, multiplication, or division of two decimal numbers. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>All AI systems that rely on machine learning need to be trained, and in these systems training computation is one of the three fundamental factors that are driving the capabilities of the system. The other two factors are the algorithms and <a href="https://ourworldindata.org/grapher/artificial-intelligence-number-training-datapoints">the input data used</a> for the training. The visualization shows that as training computation has increased, AI systems have become more and more powerful.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The timeline goes back to the 1940s, the very beginning of electronic computers. The first AI system shown is ‘Theseus’, Claude Shannon’s robotic mouse from 1950 that I mentioned at the beginning. Towards the other end of the timeline you find AI systems like DALL-E and PaLM, whose abilities to produce photorealistic images and interpret and generate language we have just seen. They are among the AI systems that used the largest amount of training computation to date.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The training computation is plotted on a logarithmic scale, so that each grid-line marks a 100-fold increase over the previous one. This long-run perspective shows a continuous increase. 
For the first six decades, training computation increased in line with <a href="https://ourworldindata.org/technological-change#moore-s-law-the-exponential-increase-of-the-number-of-transistors-on-integrated-circuits">Moore’s Law</a>, doubling roughly every 20 months. Since about 2010, this exponential growth has sped up further, to a doubling time of just about 6 months. That is an astonishingly fast rate of growth.{ref}At some point in the future, training computation is expected to slow down to the exponential growth rate of Moore’s Law. Tamay Besiroglu, Lennart Heim, and Jaime Sevilla of the Epoch team estimate in <a href="https://epochai.org/blog/projecting-compute-trends">their report</a> that the highest probability for this reversion occurring is in the early 2030s.{/ref}</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>These fast doubling times have compounded into large increases. PaLM’s training computation was 2.5 billion petaFLOP, more than 5 million times larger than that of AlexNet, the AI with the largest training computation just 10 years earlier.{ref}The training computation of PaLM, developed in 2022, was 2,500,000,000 petaFLOP. The training computation of AlexNet, the AI with the largest training computation up to 2012, was 470 petaFLOP. 2,500,000,000 petaFLOP / 470 petaFLOP = 5,319,148.9. At the same time, the amount of training computation required to achieve a given performance has been falling exponentially.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The costs have also increased quickly. The cost to train PaLM is estimated to be in the range of $9–$23 million, according to Lennart Heim, a researcher in the Epoch team. See: Lennart Heim (2022) – <a href="https://blog.heim.xyz/palm-training-cost/">Estimating PaLM's training cost</a>.{/ref} </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Scale-up was already exponential and has sped up substantially over the past decade. 
What can we learn from this historical development for the future of AI?</p> <!-- /wp:paragraph --></div> <!-- /wp:column --> <!-- wp:column --> <div class="wp-block-column"><!-- wp:paragraph --> <p><strong>The rise of artificial intelligence over the last eight decades: As training computation has increased, AI systems have become more powerful</strong>{ref}The data is taken from Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, Pablo Villalobos (2022) – <a href="https://arxiv.org/pdf/2202.05924.pdf">Compute Trends Across Three Eras of Machine Learning</a>. Published on arXiv on March 9, 2022. See also <a href="https://www.alignmentforum.org/posts/XKtybmbjhC6mXDm5z/compute-trends-across-three-eras-of-machine-learning">their post on the Alignment Forum</a>. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The authors regularly update and extend their dataset, a very helpful service to the AI research community. At Our World in Data my colleague Charlie Giattino regularly updates <a href="https://ourworldindata.org/grapher/ai-training-computation">the interactive version of this chart</a> with the latest data made available by Sevilla and coauthors.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>See also these two related charts:</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p><a href="https://ourworldindata.org/grapher/artificial-intelligence-parameter-count">Number of parameters in notable artificial intelligence systems</a></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p><a href="https://ourworldindata.org/grapher/artificial-intelligence-number-training-datapoints">Number of datapoints used to train notable artificial intelligence systems</a>{/ref}</p> <!-- /wp:paragraph --> <!-- wp:image {"id":54982,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image size-full"><img src="https://owid.cloud/app/uploads/2022/12/ai-training-computation-3.png" alt="" class="wp-image-54982"/></figure> <!-- /wp:image --> 
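The growth figures quoted in this section can be sanity-checked with a few lines of arithmetic. This is a sketch in Python; the petaFLOP values come from the footnotes above, and the doubling times are the approximate ones given in the text:

```python
# Sanity-check of the training-compute growth figures (approximate values).
alexnet_pflop = 470              # AlexNet training compute, 2012 (petaFLOP)
palm_pflop = 2_500_000_000       # PaLM training compute, 2022 (petaFLOP)

# Observed growth over the decade: roughly 5.3 million-fold.
ratio = palm_pflop / alexnet_pflop
print(f"PaLM vs AlexNet: {ratio:,.0f}x")

# A 6-month doubling time over 10 years implies 20 doublings, about 1 million-fold.
print(f"6-month doubling for a decade: {2 ** (10 * 12 / 6):,.0f}x")

# The earlier Moore's-Law pace (doubling every ~20 months) would give only 64x.
print(f"20-month doubling for a decade: {2 ** (10 * 12 / 20):,.0f}x")
```

The observed ratio and the 6-month-doubling extrapolation agree to within an order of magnitude, which is what one should expect given that these doubling times are rough averages rather than exact rates.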
<!-- wp:paragraph --> <p></p> <!-- /wp:paragraph --></div> <!-- /wp:column --></div> <!-- /wp:columns --> <!-- wp:heading {"level":4} --> <h4>Studying the long-run trends to predict the future of AI</h4> <!-- /wp:heading --> <!-- wp:paragraph --> <p>AI researchers study these long-term trends to see what is possible in the future.{ref}Scaling up the size of neural networks – in terms of the number of parameters and the amount of training data and computation – has led to surprising increases in the capabilities of AI systems. This realization motivated the “scaling hypothesis.” See Gwern Branwen (2020) – <a href="https://www.gwern.net/Scaling-hypothesis">The Scaling Hypothesis</a>.{/ref}</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Perhaps the most widely discussed study of this kind was published by AI researcher Ajeya Cotra. She studied the increase in training computation to ask at what point in time the computation to train an AI system could match that of the human brain. The idea is that at this point the AI system would match the capabilities of a human brain. In her latest update, Cotra estimated a 50% probability that such “transformative AI” will be developed by the year 2040, less than two decades from now.{ref}Her research was announced in various places, including in the AI Alignment Forum here: Ajeya Cotra (2020) – <a href="https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines">Draft report on AI timelines</a>. As far as I know the report itself always remained a ‘draft report’ and was published <a href="https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP">here on Google Docs</a>. 
</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The cited estimate stems from Cotra’s <a href="https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines">Two-year update on my personal AI timelines</a>, in which she shortened her median timeline by 10 years.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Cotra emphasizes that there are substantial uncertainties around her estimates and therefore communicates her findings in a range of scenarios. She published her big study in 2020, and her median estimate at the time was that there would be a 50% probability that the computation required to train such a model would become affordable around the year 2050. In her “most conservative plausible” scenario this point in time is pushed back to around the year 2090, and in her “most aggressive plausible” scenario it is reached in 2040.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The same is true for most other forecasters: all emphasize the large uncertainty associated with any of <a href="https://ourworldindata.org/ai-timelines">their forecasts</a>.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>It is worth emphasizing that estimates of the computation performed by the human brain are highly uncertain. See Joseph Carlsmith's <a href="https://www.openphilanthropy.org/research/new-report-on-how-much-computational-power-it-takes-to-match-the-human-brain/">New Report on How Much Computational Power It Takes to Match the Human Brain</a> from 2020.{/ref} </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>In <a href="https://ourworldindata.org/ai-impact">a related article</a>, I discuss what transformative AI would mean for the world. In short, the idea is that such an AI system would be powerful enough to bring the world into a ‘qualitatively different future’. It could lead to a change at the scale of the two earlier major transformations in human history, the agricultural and industrial revolutions. 
It would certainly represent the most important global change in our lifetimes.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Cotra’s work is particularly relevant in this context as she based her forecast on the kind of historical long-run trend of training computation that we just studied. But it is worth noting that other forecasters who rely on different considerations arrive at broadly similar conclusions. As I show in <a href="https://ourworldindata.org/ai-timelines">my article on AI timelines</a>, many AI experts believe that there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.</p> <!-- /wp:paragraph --> <!-- wp:heading {"level":4} --> <h4>Building a public resource to enable the necessary public conversation</h4> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Computers and artificial intelligence have changed our world immensely, but we are still at the early stages of this history. Because this technology feels so familiar, it is easy to forget that all of these technologies that we interact with are very recent innovations, and that the most profound changes are yet to come.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Artificial intelligence has already changed what we see, what we know, and what we do. And this is despite the fact that this technology has had only a brief history. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>There are no signs that these trends are hitting any limits anytime soon. On the contrary, particularly over the course of the last decade, the fundamental trends have accelerated: investments in AI technology have <a href="https://ourworldindata.org/ai-investments">rapidly increased</a>, and the doubling time of training computation has shortened to just six months.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>All major technological innovations lead to a range of positive and negative consequences. 
This is already true of artificial intelligence. As this technology becomes more and more powerful, we should expect its impact to become greater still. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Because of the importance of AI, we should all be able to form an opinion on where this technology is heading and to understand how this development is changing our world. For this purpose, we are building a repository of AI-related metrics, which you can find on <a href="http://ourworldindata.org/artificial-intelligence">OurWorldinData.org/artificial-intelligence</a>. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>We are still in the early stages of this history and much of what will become possible is yet to come. A technological development as powerful as this should be at the center of our attention. Little might be as important for how the future of our world – and the future of our lives – will play out.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator"/> <!-- /wp:separator --> <!-- wp:spacer --> <div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div> <!-- /wp:spacer --> <!-- wp:paragraph --> <p><strong>Acknowledgements:</strong> I would like to thank my colleagues Natasha Ahuja, Daniel Bachler, Julia Broden, Charlie Giattino, Bastian Herre, Edouard Mathieu, and Ike Saunders for their helpful comments to drafts of this essay and their contributions in preparing the visualizations.</p> <!-- /wp:paragraph --> |
This means that the point at which the model\u2019s performance crosses the zero line is the point at which the AI system scored more points in the relevant test than the humans who did the same test.{ref}The chart shows that the speed with which these AI technologies developed increased over time. Systems for which development was started early \u2013 handwriting and\u00a0speech recognition \u2013 took more than a decade to approach human-level performance, while more recent AI developments led to systems that overtook humans in the span of only a few years. However, one should not overstate this point. To some extent this depends on when the researchers started to compare machine and human performance. One could have started evaluating the system for language understanding much earlier and its development would appear much slower in this presentation of the data.{/ref}", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Just 10 years ago, no machine could reliably provide language or image recognition at a human level. But, as the chart shows, AI systems have become steadily more capable and are now beating humans in ", "spanType": "span-simple-text" }, { "children": [ { "text": "tests", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " in all these domains.{ref}It is important to remember that while these are remarkable achievements \u2014 and show very rapid gains \u2014 these are the results from specific benchmarking tests. Outside of tests, AI models can fail in surprising ways and do not reliably achieve performance that is comparable with human capabilities.{/ref}\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Outside of these standardized tests the performance of these AIs is mixed. In some real-world cases these systems are still performing much worse than humans. 
On the other hand, some implementations of such AI systems are already so cheap that they are available on the phone in your pocket: image recognition categorizes your photos and speech recognition transcribes what you dictate.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "type": "sticky-right", "right": [ { "type": "text", "value": [ { "children": [ { "text": "Language and image recognition capabilities of AI systems have improved rapidly", "spanType": "span-simple-text" } ], "spanType": "span-bold" }, { "text": "{ref}Data from Kiela et al. (2021) \u2013 Dynabench: Rethinking Benchmarking in NLP. arXiv:2104.14337v1; ", "spanType": "span-simple-text" }, { "url": "https://doi.org/10.48550/arXiv.2104.14337", "children": [ { "text": "https://doi.org/10.48550/arXiv.2104.14337", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " {/ref}", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "alt": "", "size": "wide", "type": "image", "filename": "AI-performance_Dynabench-paper.png", "parseErrors": [] } ], "parseErrors": [] }, { "text": [ { "text": "From image recognition to image generation", "spanType": "span-simple-text" } ], "type": "heading", "level": 3, "parseErrors": [] }, { "left": [ { "type": "text", "value": [ { "text": "The previous chart showed the rapid advances in the perceptive abilities of artificial intelligence. AI systems have also become much more capable of generating images.\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "This series of nine images shows the development over the last nine years. None of the people in these images exist; all of them were generated by an AI system.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The series begins with an image from 2014 in the top left, a primitive image of a pixelated face in black and white. 
As the first image in the second row shows, just three years later AI systems were already able to generate images that were hard to differentiate from a photograph.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "In recent years, the capability of AI systems has become much more impressive still. While the early systems focused on generating images of faces, these newer models broadened their capabilities to text-to-image generation based on almost any prompt. The image in the bottom right shows that even the most challenging prompts \u2013 such as ", "spanType": "span-simple-text" }, { "children": [ { "text": "\u201cA Pomeranian is sitting on the King\u2019s throne wearing a crown. Two tiger soldiers are standing next to the throne\u201d", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " \u2013 are turned into photorealistic images within seconds.{ref}Because these systems have become so powerful, the latest AI systems often don\u2019t allow the user to generate images of human faces to prevent abuse.{/ref}", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "type": "sticky-right", "right": [ { "type": "text", "value": [ { "children": [ { "text": "Timeline of images generated by artificial intelligence", "spanType": "span-simple-text" } ], "spanType": "span-bold" }, { "text": "{ref}The relevant publications are the following:", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "2014: Goodfellow et al:", "spanType": "span-simple-text" }, { "url": "https://arxiv.org/abs/1406.2661", "children": [ { "text": " Generative Adversarial Networks", "spanType": "span-simple-text" } ], "spanType": "span-link" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "2015: Radford, Metz, and Chintala:", "spanType": "span-simple-text" }, { "url": "https://arxiv.org/abs/1511.06434", "children": [ { "text": " Unsupervised Representation 
Learning with Deep Convolutional Generative Adversarial Networks", "spanType": "span-simple-text" } ], "spanType": "span-link" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "2016: Liu and Tuzel:", "spanType": "span-simple-text" }, { "url": "https://arxiv.org/abs/1606.07536", "children": [ { "text": " Coupled Generative Adversarial Networks", "spanType": "span-simple-text" } ], "spanType": "span-link" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "2017: Karras et al:", "spanType": "span-simple-text" }, { "url": "https://arxiv.org/abs/1710.10196", "children": [ { "text": " Progressive Growing of GANs for Improved Quality, Stability, and Variation", "spanType": "span-simple-text" } ], "spanType": "span-link" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "2018: Karras, Laine, and Aila:", "spanType": "span-simple-text" }, { "url": "https://arxiv.org/abs/1812.04948", "children": [ { "text": " A Style-Based Generator Architecture for Generative Adversarial Networks", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " (StyleGAN from NVIDIA)", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "2019: Karras et al:", "spanType": "span-simple-text" }, { "url": "https://arxiv.org/abs/1912.04958", "children": [ { "text": " Analyzing and Improving the Image Quality of StyleGAN", "spanType": "span-simple-text" } ], "spanType": "span-link" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "AI-generated faces generated by this technology can be found on ", "spanType": "span-simple-text" }, { "url": "https://thispersondoesnotexist.com/", "children": [ { "text": "thispersondoesnotexist.com", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "2020: Ho, Jain, and Abbeel:", "spanType": "span-simple-text" }, { "url": 
"https://arxiv.org/abs/2006.11239", "children": [ { "text": " Denoising Diffusion Probabilistic Models", "spanType": "span-simple-text" } ], "spanType": "span-link" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "2021: Ramesh et al:", "spanType": "span-simple-text" }, { "url": "https://arxiv.org/abs/2102.12092", "children": [ { "text": " Zero-Shot Text-to-Image Generation", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " (first DALL-E from OpenAI;", "spanType": "span-simple-text" }, { "url": "https://openai.com/blog/dall-e/", "children": [ { "text": " blog post", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": "). See also Ramesh et al (2022) \u2013", "spanType": "span-simple-text" }, { "url": "https://cdn.openai.com/papers/dall-e-2.pdf", "children": [ { "text": " Hierarchical Text-Conditional Image Generation with CLIP Latents", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " (DALL-E 2 from OpenAI;", "spanType": "span-simple-text" }, { "url": "https://openai.com/dall-e-2/", "children": [ { "text": " blog post", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ").", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "2022: Saharia et al: ", "spanType": "span-simple-text" }, { "url": "https://arxiv.org/abs/2205.11487", "children": [ { "text": "Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " (Google\u2019s Imagen;", "spanType": "span-simple-text" }, { "url": "https://imagen.research.google/", "children": [ { "text": " blog post", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": "){/ref}", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "alt": "", "size": "wide", "type": "image", "filename": "Timeline-of-AI-generated-faces.png", "parseErrors": [] } ], 
"parseErrors": [] }, { "text": [ { "text": "Language recognition and production is developing fast", "spanType": "span-simple-text" } ], "type": "heading", "level": 3, "parseErrors": [] }, { "left": [ { "type": "text", "value": [ { "text": "Just as striking as the advances of image-generating AIs is the rapid development of systems that parse and respond to human language.\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Shown in the image are examples from an AI system developed by Google called PaLM. In these six examples, the system was asked to explain six different jokes. I find the explanation in the bottom right particularly remarkable: the AI explains an anti-joke that is specifically meant to confuse the listener.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "AIs that produce language have entered our world in many ways over the last few years. Emails get auto-completed, massive amounts of online texts get translated, videos get automatically transcribed, school children use language models to do their homework, reports get auto-generated, and media outlets ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/Automated_journalism", "children": [ { "text": "publish", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " AI-generated journalism.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "AI systems are not yet able to produce long, coherent texts. 
In the future, we will see whether the recent developments will slow down \u2013 or even end \u2013 or whether we will one day read a bestselling novel written by an AI.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "type": "sticky-right", "right": [ { "type": "text", "value": [ { "children": [ { "text": "Output of the AI system PaLM after being asked to interpret six different jokes", "spanType": "span-simple-text" } ], "spanType": "span-bold" }, { "text": "{ref}From Chowdhery et al (2022) \u2013", "spanType": "span-simple-text" }, { "url": "https://arxiv.org/pdf/2204.02311v2.pdf", "children": [ { "text": "\u00a0PaLM: Scaling Language Modeling with Pathways", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ". Published on arXiv on 7 Apr 2022.{/ref}", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "alt": "", "size": "wide", "type": "image", "filename": "jJ58V3vrTBw9cg2lzM-w2xiU7ExufRO7WNJUCp7a3ZGf6c79LjqrusGmFaF8nMUEtn-gH3K7J1CHHG3jMW4WotJnNjGYhCVwt3Ou5g66geZDw81yiwez1OxPB80E6TmoRNHa9dNcch9TLDj5ruQBHDvkPpl5Hpl6TtnPVXvdh0mp4jFmmiBu0Wmxsp1mKA", "parseErrors": [] } ], "parseErrors": [] }, { "text": [ { "text": "Where we are now: AI is here", "spanType": "span-simple-text" } ], "type": "heading", "level": 2, "parseErrors": [] }, { "type": "text", "value": [ { "text": "These rapid advances in AI capabilities have made it possible to use machines in a wide range of new domains:", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "When you book a flight, it is often an artificial intelligence, and no longer a human, that ", "spanType": "span-simple-text" }, { "url": "https://www.bloomberg.com/news/articles/2022-10-20/artificial-intelligence-helps-airlines-find-the-right-prices-for-flight-tickets", "children": [ { "text": "decides", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " what you pay. 
When you get to the airport, it is an AI system that ", "spanType": "span-simple-text" }, { "url": "https://www.sourcesecurity.com/news/co-2166-ga.132.html", "children": [ { "text": "monitors", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " what you do at the airport. And once you are on the plane, an AI system assists the pilot in ", "spanType": "span-simple-text" }, { "url": "https://www.airbus.com/en/innovation/industry-4-0/artificial-intelligence", "children": [ { "text": "flying", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " you to your destination.\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "AI systems also increasingly determine whether you ", "spanType": "span-simple-text" }, { "url": "https://www.brookings.edu/research/reducing-bias-in-ai-based-financial-services/", "children": [ { "text": "get a loan", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", are ", "spanType": "span-simple-text" }, { "url": "https://theconversation.com/ai-algorithms-intended-to-root-out-welfare-fraud-often-end-up-punishing-the-poor-instead-131625", "children": [ { "text": "eligible", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " for welfare, or get ", "spanType": "span-simple-text" }, { "url": "https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G", "children": [ { "text": "hired", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " for a particular job. 
Increasingly they help determine who gets ", "spanType": "span-simple-text" }, { "url": "https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/", "children": [ { "text": "released from jail", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Several governments are purchasing ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/Lethal_autonomous_weapon", "children": [ { "text": "autonomous weapons systems", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " for warfare, and some are using AI systems for ", "spanType": "span-simple-text" }, { "url": "https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html", "children": [ { "text": "surveillance and oppression", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "AI systems ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/GitHub_Copilot", "children": [ { "text": "help", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " to program the software you use and ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/Machine_translation", "children": [ { "text": "translate", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " the texts you read. ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/Virtual_assistant", "children": [ { "text": "Virtual assistants", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", operated by speech recognition, have entered many households over the last decade. 
Now ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/Self-driving_car", "children": [ { "text": "self-driving cars", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " are becoming a reality.\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "In the last few years, AI systems ", "spanType": "span-simple-text" }, { "url": "https://www.nature.com/articles/s42254-022-00518-3", "children": [ { "text": "helped", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "url": "https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/", "children": [ { "text": "to", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "url": "https://www.nature.com/articles/d41586-022-00997-5", "children": [ { "text": "make", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "url": "https://www.nature.com/articles/d41586-022-03209-2", "children": [ { "text": "progress", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " on some of the hardest problems in science.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Large AIs called ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/Recommender_system", "children": [ { "text": "recommender systems", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " determine what you see on social media, which products are shown to you in online shops, and what gets recommended to you on YouTube. 
Increasingly they are not just recommending the media we consume, but based on their capacity to generate images and texts, they are also ", "spanType": "span-simple-text" }, { "url": "https://www.nature.com/articles/d41586-021-00530-0", "children": [ { "text": "creating", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " the media we consume.\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Artificial intelligence is no longer a technology of the future; AI is here, and much of what is reality now would have looked like sci-fi just recently. It is a technology that already impacts all of us, and the list above includes just a few of its ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence", "children": [ { "text": "many applications", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The wide range of listed applications makes clear that this is a very general technology that can be used by people for some extremely good goals \u2013 and some extraordinarily bad ones, too. For such \u2018dual use technologies\u2019, it is important that all of us develop an understanding of what is happening and how we want the technology to be used.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Just two decades ago the world was very different. 
What might AI technology be capable of in the future?", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "text": [ { "text": "What is next?\u00a0", "spanType": "span-simple-text" } ], "type": "heading", "level": 2, "parseErrors": [] }, { "left": [ { "type": "text", "value": [ { "text": "The AI systems that we just considered are the result of decades of steady advances in AI technology.\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The big chart below brings this history over the last eight decades into perspective. It is based on the dataset produced by Jaime Sevilla and colleagues.{ref}See the footnote on the title of the chart for the references and additional information.{/ref}", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Each small circle in this chart represents one AI system. The circle\u2019s position on the horizontal axis indicates when the AI system was built, and its position on the vertical axis shows the amount of computation that was used to train the particular AI system.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Training computation is measured in ", "spanType": "span-simple-text" }, { "children": [ { "text": "floating point operations", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": ", or FLOP for short. One FLOP is equivalent to one addition, subtraction, multiplication, or division of two decimal numbers.\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "All AI systems that rely on machine learning need to be trained, and in these systems training computation is one of the three fundamental factors\u00a0that are driving the capabilities of the system. 
The other two factors are the algorithms and ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/grapher/artificial-intelligence-number-training-datapoints", "children": [ { "text": "the input data used", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " for the training. The visualization shows that as training computation has increased, AI systems have become more and more powerful.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The timeline goes back to the 1940s, the very beginning of electronic computers. The first shown AI system is \u2018Theseus\u2019, Claude Shannon\u2019s robotic mouse from 1950 that I mentioned at the beginning. Towards the other end of the timeline you find AI systems like DALL-E and PaLM, whose abilities to produce photorealistic images and interpret and generate language we have just seen. They are among the AI systems that used the largest amount of training computation to date.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The training computation is plotted on a logarithmic scale, so that from each grid-line to the next it shows a 100-fold increase. This long-run perspective shows a continuous increase. For the first six decades, training computation increased in line with ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/technological-change#moore-s-law-the-exponential-increase-of-the-number-of-transistors-on-integrated-circuits", "children": [ { "text": "Moore\u2019s Law", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", doubling roughly every 20 months. Since about 2010 this exponential growth has sped up further, to a doubling time of just about 6 months. That is an astonishingly fast rate of growth.{ref}At some point in the future, training computation is expected to slow down to the exponential growth rate of Moore's Law. 
Tamay Besiroglu, Lennart Heim and Jaime Sevilla of the Epoch team estimate in ", "spanType": "span-simple-text" }, { "url": "https://epochai.org/blog/projecting-compute-trends", "children": [ { "text": "their report", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " that the highest probability for this reversion occurring is in the early 2030s.{/ref}", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The fast doubling times have added up to large increases. PaLM\u2019s training computation was 2.5 billion petaFLOP, more than 5 million times larger than that of AlexNet, the AI with the largest training computation just 10 years earlier.{ref}The training computation of PaLM, developed in 2022, was 2,500,000,000 petaFLOP. The training computation of AlexNet, the AI with the largest training computation up to 2012, was 470 petaFLOP. 2,500,000,000 petaFLOP / 470 petaFLOP = 5,319,148.9. At the same time, the amount of training computation required to achieve a given performance has been falling exponentially.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The costs have also increased quickly. The cost to train PaLM is estimated to be in the range of $9\u2013$23 million according to Lennart Heim, a researcher in the Epoch team. See: Lennart Heim (2022) \u2013 ", "spanType": "span-simple-text" }, { "url": "https://blog.heim.xyz/palm-training-cost/", "children": [ { "text": "Estimating PaLM's training cost", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".{/ref}\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Scale-up was already exponential and has sped up substantially over the past decade. 
What can we learn from this historical development for the future of AI?", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "type": "sticky-right", "right": [ { "type": "text", "value": [ { "children": [ { "text": "The rise of artificial intelligence over the last 8 decades: As training computation has increased, AI systems have become more powerful", "spanType": "span-simple-text" } ], "spanType": "span-bold" }, { "text": "{ref}The data is taken from Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, Pablo Villalobos (2022) \u2013 ", "spanType": "span-simple-text" }, { "url": "https://arxiv.org/pdf/2202.05924.pdf", "children": [ { "text": "Compute Trends Across Three Eras of Machine Learning", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ". Published on arXiv on March 9, 2022. See also ", "spanType": "span-simple-text" }, { "url": "https://www.alignmentforum.org/posts/XKtybmbjhC6mXDm5z/compute-trends-across-three-eras-of-machine-learning", "children": [ { "text": "their post on the Alignment Forum", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The authors regularly update and extend their dataset, a very helpful service to the AI research community. 
At Our World in Data my colleague Charlie Giattino regularly updates ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/grapher/ai-training-computation", "children": [ { "text": "the interactive version of this chart", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " with the latest data made available by Sevilla and coauthors.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "See also these two related charts:", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "url": "https://ourworldindata.org/grapher/artificial-intelligence-parameter-count", "children": [ { "text": "Number of parameters in notable artificial intelligence systems", "spanType": "span-simple-text" } ], "spanType": "span-link" } ], "parseErrors": [] }, { "type": "text", "value": [ { "url": "https://ourworldindata.org/grapher/artificial-intelligence-number-training-datapoints", "children": [ { "text": "Number of datapoints used to train notable artificial intelligence systems", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": "{/ref}", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "alt": "", "size": "wide", "type": "image", "filename": "ai-training-computation-3.png", "parseErrors": [] } ], "parseErrors": [] }, { "text": [ { "text": "Studying the long-run trends to predict the future of AI", "spanType": "span-simple-text" } ], "type": "heading", "level": 3, "parseErrors": [] }, { "type": "text", "value": [ { "text": "AI researchers study these long-term trends to see what is possible in the future.{ref}Scaling up the size of neural networks \u2013 in terms of the number of parameters and the amount of training data and computation \u2013 has led to surprising increases in the capabilities of AI systems. 
This realization motivated the \u201cscaling hypothesis.\u201d See Gwern Branwen (2020) \u2013 ", "spanType": "span-simple-text" }, { "url": "https://www.gwern.net/Scaling-hypothesis", "children": [ { "text": "The Scaling Hypothesis", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": "\u2060.{/ref}", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Perhaps the most widely discussed study of this kind was published by AI researcher Ajeya Cotra. She studied the increase in training computation to ask at what point in time the computation to train an AI system could match that of the human brain. The idea is that at this point the AI system would match the capabilities of a human brain. In her latest update, Cotra estimated a 50% probability that such \u201ctransformative AI\u201d will be developed by the year 2040, less than two decades from now.{ref}Her research was announced in various places, including in the AI Alignment Forum here: Ajeya Cotra (2020) \u2013\u00a0", "spanType": "span-simple-text" }, { "url": "https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines", "children": [ { "text": "Draft report on AI timelines", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ". 
As far as I know the report itself always remained a \u2018draft report\u2019 and was published ", "spanType": "span-simple-text" }, { "url": "https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP", "children": [ { "text": "here on Google Docs", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The cited estimate stems from Cotra\u2019s ", "spanType": "span-simple-text" }, { "url": "https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines", "children": [ { "text": "Two-year update on my personal AI timelines", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", in which she shortened her median timeline by 10 years.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Cotra emphasizes that there are substantial uncertainties around her estimates and therefore communicates her findings in a range of scenarios. She published her big study in 2020 and her median estimate at the time was that around the year 2050 there will be a 50%-probability that the computation required to train such a model may become affordable. 
In her \u201cmost conservative plausible\u201d-scenario this point in time is pushed back to around the year 2090 and in her \u201cmost aggressive plausible\u201d-scenario this point in time is reached in 2040.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The same is true for most other forecasters: all emphasize the large uncertainty associated with any of ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/ai-timelines", "children": [ { "text": "their forecasts", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "It is worth emphasizing that the computation of the human brain is highly uncertain. See Joseph Carlsmith's ", "spanType": "span-simple-text" }, { "url": "https://www.openphilanthropy.org/research/new-report-on-how-much-computational-power-it-takes-to-match-the-human-brain/", "children": [ { "text": "New Report on How Much Computational Power It Takes to Match the Human Brain", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " from 2020.{/ref}\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "In ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/ai-impact", "children": [ { "text": "a related article", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", I discuss what transformative AI would mean for the world. In short, the idea is that such an AI system would be powerful enough to bring the world into a \u2018qualitatively different future\u2019. It could lead to a change at the scale of the two earlier major transformations in human history, the agricultural and industrial revolutions. 
It would certainly represent the most important global change in our lifetimes.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Cotra\u2019s work is particularly relevant in this context as she based her forecast on the kind of historical long-run trend of training computation that we just studied. But it is worth noting that other forecasters who rely on different considerations arrive at broadly similar conclusions. As I show in ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/ai-timelines", "children": [ { "text": "my article on AI timelines", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", many AI experts believe that there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "text": [ { "text": "Building a public resource to enable the necessary public conversation", "spanType": "span-simple-text" } ], "type": "heading", "level": 3, "parseErrors": [] }, { "type": "text", "value": [ { "text": "Computers and artificial intelligence have changed our world immensely, but we are still at the early stages of this history. Because this technology feels so familiar, it is easy to forget that all of these technologies that we interact with are very recent innovations, and that most profound changes are yet to come.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Artificial intelligence has already changed what we see, what we know, and what we do. And this is despite the fact that this technology has had only a brief history.\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "There are no signs that these trends are hitting any limits anytime soon. 
To the contrary, particularly over the course of the last decade, the fundamental trends have accelerated: investments in AI technology have ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/ai-investments", "children": [ { "text": "rapidly increased", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", and the doubling time of training computation has shortened to just six months.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "All major technological innovations lead to a range of positive and negative consequences. This is already true of artificial intelligence. As this technology becomes more and more powerful, we should expect its impact to become greater still.\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Because of the importance of AI, we should all be able to form an opinion on where this technology is heading and to understand how this development is changing our world. For this purpose, we are building a repository of AI-related metrics, which you can find on ", "spanType": "span-simple-text" }, { "url": "http://ourworldindata.org/artificial-intelligence", "children": [ { "text": "OurWorldinData.org/artificial-intelligence", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".\u00a0", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "We are still in the early stages of this history and much of what will become possible is yet to come. A technological development as powerful as this should be at the center of our attention. 
Little might be as important for how the future of our world \u2013 and the future of our lives \u2013 will play out.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "children": [ { "text": "Acknowledgements:", "spanType": "span-simple-text" } ], "spanType": "span-bold" }, { "text": " I would like to thank my colleagues Natasha Ahuja, Daniel Bachler, Julia Broden, Charlie Giattino, Bastian Herre, Edouard Mathieu, and Ike Saunders for their helpful comments to drafts of this essay and their contributions in preparing the visualizations.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "type": "article", "title": "The brief history of artificial intelligence: The world has changed fast \u2013 what might be next?", "authors": [ "Max Roser" ], "excerpt": "Despite their brief history, computers and AI have fundamentally changed what we see, what we know, and what we do. Little is as important for the future of the world, and our own lives, as how this history continues.", "dateline": "December 6, 2022", "subtitle": "Despite their brief history, computers and AI have fundamentally changed what we see, what we know, and what we do. Little is as important for the future of the world, and our own lives, as how this history continues.", "sidebar-toc": false, "featured-image": "featured-image-ai-training-computation.png" }, "createdAt": "2022-12-02T12:33:16.000Z", "published": false, "updatedAt": "2023-07-31T13:05:20.000Z", "revisionId": null, "publishedAt": "2022-12-06T01:00:00.000Z", "relatedCharts": [], "publicationContext": "listed" } |
{ "errors": [ { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag image" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag image" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag image" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag image" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag image" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag image" }, { "name": "unhandled html tag found", "details": "Encountered the unhandled tag hr" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag separator" }, { "name": "unexpected wp component tag", "details": "Found unhandled wp:comment tag spacer" } ], "numBlocks": 45, "numErrors": 9, "wpTagCounts": { "html": 1, "image": 6, "column": 12, "spacer": 1, "columns": 6, "heading": 8, "paragraph": 71, "separator": 1 }, "htmlTagCounts": { "p": 72, "h3": 2, "h4": 6, "hr": 1, "div": 20, "figure": 6 } } |
2022-12-06 01:00:00 | 2024-02-16 14:22:54 | 1WGJaY95A4hVjybBPzHOA80iGygKUKcgx0kSs-G_b7aA | [ "Max Roser" ] |
Despite their brief history, computers and AI have fundamentally changed what we see, what we know, and what we do. Little is as important for the future of the world, and our own lives, as how this history continues. | 2022-12-02 12:33:16 | 2023-07-31 13:05:20 | https://ourworldindata.org/wp-content/uploads/2022/12/featured-image-ai-training-computation.png | {} |
Our World in Data presents the data and research to make progress against the world’s largest problems. This article draws on data and research discussed in our entry on **[Artificial Intelligence](https://ourworldindata.org/artificial-intelligence)**. To see what the future might look like, it is often helpful to study our history. This is what I will do in this article. I retrace the brief history of computers and artificial intelligence to see what we can expect for the future. ### How did we get here? How rapidly the world has changed becomes clear from how even quite recent computer technology feels ancient to us today. Mobile phones in the ‘90s were big bricks with tiny green displays. Two decades before that, the main storage for computers was punch cards. In a short period, computers evolved so quickly and became such an integral part of our daily lives that it is easy to forget how recent this technology is. The first digital computers were only invented about eight decades ago, as the timeline shows. <Image filename="Computer-timeline.png" alt=""/> Since the early days of this history, some computer scientists have strived to make machines as intelligent as humans. The next timeline shows some of the notable artificial intelligence (AI) systems and describes what they were capable of. The first system I mention is Theseus, built by Claude Shannon in 1950: a remote-controlled mouse that was able to find its way out of a labyrinth and could remember its course.{ref}On Theseus, see Daniel Klein (2019) – [Mighty mouse](https://web.archive.org/web/20220125004420/https://www.technologyreview.com/2018/12/19/138508/mighty-mouse/), published in MIT Technology Review. See also [this video on YouTube](https://www.youtube.com/watch?v=_9_AEVQ_p74) of a presentation by its inventor, Claude Shannon.{/ref} In seven decades the abilities of artificial intelligence have come a long way. 
<Image filename="ai-timeline.png" alt=""/> ### The language and image recognition capabilities of AI systems have developed very rapidly The chart shows how we got here by zooming into the last two decades of AI development. The plotted data stems from a number of tests in which human and AI performance were evaluated in five different domains, from handwriting recognition to language understanding. Within each of the five domains, the initial performance of the AI system is set to -100, and human performance in these tests is used as a baseline that is set to zero. This means that once the model’s performance crosses the zero line, the AI system scored more points in the relevant test than the humans who took the same test.{ref}The chart shows that the speed with which these AI technologies developed increased over time. Systems for which development was started early – handwriting and speech recognition – took more than a decade to approach human-level performance, while more recent AI developments led to systems that overtook humans in the span of only a few years. However, one should not overstate this point. To some extent this is dependent on when the researchers started to compare machine and human performance. One could have started evaluating the system for language understanding much earlier, and its development would appear much slower in this presentation of the data.{/ref} Just 10 years ago, no machine could reliably provide language or image recognition at a human level. But, as the chart shows, AI systems have become steadily more capable and are now beating humans in _tests_ in all these domains.{ref}It is important to remember that while these are remarkable achievements — and show very rapid gains — these are the results from specific benchmarking tests. 
Outside of tests, AI models can fail in surprising ways and do not reliably achieve performance that is comparable with human capabilities.{/ref} Outside of these standardized tests the performance of these AIs is mixed. In some real-world cases these systems are still performing much worse than humans. On the other hand, some implementations of such AI systems are already so cheap that they are available on the phone in your pocket: image recognition categorizes your photos and speech recognition transcribes what you dictate. **Language and image recognition capabilities of AI systems have improved rapidly**{ref}Data from Kiela et al. (2021) – Dynabench: Rethinking Benchmarking in NLP. arXiv:2104.14337v1; [https://doi.org/10.48550/arXiv.2104.14337](https://doi.org/10.48550/arXiv.2104.14337) {/ref} <Image filename="AI-performance_Dynabench-paper.png" alt=""/> ### From image recognition to image generation The previous chart showed the rapid advances in the perceptive abilities of artificial intelligence. AI systems have also become much more capable of generating images. This series of nine images shows the development over the last nine years. None of the people in these images exist; all of them were generated by an AI system. The series begins with an image from 2014 in the top left, a primitive image of a pixelated face in black and white. As the first image in the second row shows, just three years later AI systems were already able to generate images that were hard to differentiate from a photograph. In recent years, the capability of AI systems has become much more impressive still. While the early systems focused on generating images of faces, these newer models broadened their capabilities to text-to-image generation based on almost any prompt. The image in the bottom right shows that even the most challenging prompts – such as _“A Pomeranian is sitting on the King’s throne wearing a crown. 
Two tiger soldiers are standing next to the throne”_ – are turned into photorealistic images within seconds.{ref}Because these systems have become so powerful, the latest AI systems often don’t allow the user to generate images of human faces to prevent abuse.{/ref} **Timeline of images generated by artificial intelligence**{ref}The relevant publications are the following: 2014: Goodfellow et al:[ Generative Adversarial Networks](https://arxiv.org/abs/1406.2661) 2015: Radford, Metz, and Chintala:[ Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks](https://arxiv.org/abs/1511.06434) 2016: Liu and Tuzel:[ Coupled Generative Adversarial Networks](https://arxiv.org/abs/1606.07536) 2017: Karras et al:[ Progressive Growing of GANs for Improved Quality, Stability, and Variation](https://arxiv.org/abs/1710.10196) 2018: Karras, Laine, and Aila:[ A Style-Based Generator Architecture for Generative Adversarial Networks](https://arxiv.org/abs/1812.04948) (StyleGAN from NVIDIA) 2019: Karras et al:[ Analyzing and Improving the Image Quality of StyleGAN](https://arxiv.org/abs/1912.04958) AI-generated faces generated by this technology can be found on [thispersondoesnotexist.com](https://thispersondoesnotexist.com/). 2020: Ho, Jain, and Abbeel:[ Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) 2021: Ramesh et al:[ Zero-Shot Text-to-Image Generation](https://arxiv.org/abs/2102.12092) (first DALL-E from OpenAI;[ blog post](https://openai.com/blog/dall-e/)). See also Ramesh et al (2022) –[ Hierarchical Text-Conditional Image Generation with CLIP Latents](https://cdn.openai.com/papers/dall-e-2.pdf) (DALL-E 2 from OpenAI;[ blog post](https://openai.com/dall-e-2/)). 
2022: Saharia et al: [Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding](https://arxiv.org/abs/2205.11487) (Google’s Imagen;[ blog post](https://imagen.research.google/)){/ref} <Image filename="Timeline-of-AI-generated-faces.png" alt=""/> ### Language recognition and production is developing fast Just as striking as the advances of image-generating AIs is the rapid development of systems that parse and respond to human language. Shown in the image are examples from an AI system developed by Google called PaLM. In these six examples, the system was asked to explain six different jokes. I find the explanation in the bottom right particularly remarkable: the AI explains an anti-joke that is specifically meant to confuse the listener. AIs that produce language have entered our world in many ways over the last few years. Emails get auto-completed, massive amounts of online texts get translated, videos get automatically transcribed, school children use language models to do their homework, reports get auto-generated, and media outlets [publish](https://en.wikipedia.org/wiki/Automated_journalism) AI-generated journalism. AI systems are not yet able to produce long, coherent texts. In the future, we will see whether the recent developments will slow down – or even end – or whether we will one day read a bestselling novel written by an AI. **Output of the AI system PaLM after being asked to interpret six different jokes**{ref}From Chowdhery et al (2022) –[ PaLM: Scaling Language Modeling with Pathways](https://arxiv.org/pdf/2204.02311v2.pdf). 
Published on arXiv on 7 Apr 2022.{/ref} <Image filename="jJ58V3vrTBw9cg2lzM-w2xiU7ExufRO7WNJUCp7a3ZGf6c79LjqrusGmFaF8nMUEtn-gH3K7J1CHHG3jMW4WotJnNjGYhCVwt3Ou5g66geZDw81yiwez1OxPB80E6TmoRNHa9dNcch9TLDj5ruQBHDvkPpl5Hpl6TtnPVXvdh0mp4jFmmiBu0Wmxsp1mKA" alt=""/> ## Where we are now: AI is here These rapid advances in AI capabilities have made it possible to use machines in a wide range of new domains: When you book a flight, it is often an artificial intelligence, and no longer a human, that [decides](https://www.bloomberg.com/news/articles/2022-10-20/artificial-intelligence-helps-airlines-find-the-right-prices-for-flight-tickets) what you pay. When you get to the airport, it is an AI system that [monitors](https://www.sourcesecurity.com/news/co-2166-ga.132.html) what you do at the airport. And once you are on the plane, an AI system assists the pilot in [flying](https://www.airbus.com/en/innovation/industry-4-0/artificial-intelligence) you to your destination. AI systems also increasingly determine whether you [get a loan](https://www.brookings.edu/research/reducing-bias-in-ai-based-financial-services/), are [eligible](https://theconversation.com/ai-algorithms-intended-to-root-out-welfare-fraud-often-end-up-punishing-the-poor-instead-131625) for welfare, or get [hired](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G) for a particular job. Increasingly they help determine who gets [released from jail](https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/). Several governments are purchasing [autonomous weapons systems](https://en.wikipedia.org/wiki/Lethal_autonomous_weapon) for warfare, and some are using AI systems for [surveillance and oppression](https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html). 
AI systems [help](https://en.wikipedia.org/wiki/GitHub_Copilot) to program the software you use and [translate](https://en.wikipedia.org/wiki/Machine_translation) the texts you read. [Virtual assistants](https://en.wikipedia.org/wiki/Virtual_assistant), operated by speech recognition, have entered many households over the last decade. Now [self-driving cars](https://en.wikipedia.org/wiki/Self-driving_car) are becoming a reality. In the last few years, AI systems [helped](https://www.nature.com/articles/s42254-022-00518-3) [to](https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/) [make](https://www.nature.com/articles/d41586-022-00997-5) [progress](https://www.nature.com/articles/d41586-022-03209-2) on some of the hardest problems in science. Large AIs called [recommender systems](https://en.wikipedia.org/wiki/Recommender_system) determine what you see on social media, which products are shown to you in online shops, and what gets recommended to you on YouTube. Increasingly they are not just recommending the media we consume, but based on their capacity to generate images and texts, they are also [creating](https://www.nature.com/articles/d41586-021-00530-0) the media we consume. Artificial intelligence is no longer a technology of the future; AI is here, and much of what is reality now would have looked like sci-fi just recently. It is a technology that already impacts all of us, and the list above includes just a few of its [many applications](https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence). The wide range of listed applications makes clear that this is a very general technology that can be used by people for some extremely good goals – and some extraordinarily bad ones, too. For such ‘dual-use technologies’, it is important that all of us develop an understanding of what is happening and how we want the technology to be used. Just two decades ago the world was very different. 
What might AI technology be capable of in the future? ## What is next? The AI systems that we just considered are the result of decades of steady advances in AI technology. The big chart below brings this history over the last eight decades into perspective. It is based on the dataset produced by Jaime Sevilla and colleagues.{ref}See the footnote on the title of the chart for the references and additional information.{/ref} Each small circle in this chart represents one AI system. The circle’s position on the horizontal axis indicates when the AI system was built, and its position on the vertical axis shows the amount of computation that was used to train the particular AI system. Training computation is measured in _floating point operations_, or FLOP for short. One FLOP is equivalent to one addition, subtraction, multiplication, or division of two decimal numbers. All AI systems that rely on machine learning need to be trained, and in these systems training computation is one of the three fundamental factors that are driving the capabilities of the system. The other two factors are the algorithms and [the input data used](https://ourworldindata.org/grapher/artificial-intelligence-number-training-datapoints) for the training. The visualization shows that as training computation has increased, AI systems have become more and more powerful. The timeline goes back to the 1940s, the very beginning of electronic computers. The first shown AI system is ‘Theseus’, Claude Shannon’s robotic mouse from 1950 that I mentioned at the beginning. Towards the other end of the timeline you find AI systems like DALL-E and PaLM, whose abilities to produce photorealistic images and interpret and generate language we have just seen. They are among the AI systems that used the largest amount of training computation to date. The training computation is plotted on a logarithmic scale, so that from each grid-line to the next it shows a 100-fold increase. 
This long-run perspective shows a continuous increase. For the first six decades, training computation increased in line with [Moore’s Law](https://ourworldindata.org/technological-change#moore-s-law-the-exponential-increase-of-the-number-of-transistors-on-integrated-circuits), doubling roughly every 20 months. Since about 2010 this exponential growth has sped up further, to a doubling time of just about 6 months. That is an astonishingly fast rate of growth.{ref}At some point in the future, training computation is expected to slow down to the exponential growth rate of Moore's Law. Tamay Besiroglu, Lennart Heim, and Jaime Sevilla of the Epoch team estimate in [their report](https://epochai.org/blog/projecting-compute-trends) that the highest probability for this reversion occurring is in the early 2030s.{/ref} These fast doubling times have added up to enormous increases. PaLM’s training computation was 2.5 billion petaFLOP, more than 5 million times larger than that of AlexNet, the AI with the largest training computation just 10 years earlier.{ref}The training computation of PaLM, developed in 2022, was 2,500,000,000 petaFLOP. The training computation of AlexNet, the AI with the largest training computation up to 2012, was 470 petaFLOP. 2,500,000,000 petaFLOP / 470 petaFLOP = 5,319,148.9. At the same time, the amount of training computation required to achieve a given performance has been falling exponentially. The costs have also increased quickly. The cost to train PaLM is estimated to be in the range of $9–$23 million according to Lennart Heim, a researcher in the Epoch team. See: Lennart Heim (2022) – [Estimating PaLM's training cost](https://blog.heim.xyz/palm-training-cost/).{/ref} Scale-up was already exponential and has sped up substantially over the past decade. What can we learn from this historical development for the future of AI? 
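These growth figures are easy to sanity-check. The short sketch below is illustrative only: it simply reuses the compute figures quoted in the text to verify the PaLM-to-AlexNet ratio and to convert the two doubling times into annual growth factors.

```python
import math

# Training compute quoted in the text, in petaFLOP
# (1 petaFLOP = 10^15 floating point operations)
alexnet_2012 = 470        # AlexNet, largest training run up to 2012
palm_2022 = 2.5e9         # PaLM, 2022

# PaLM used more than 5 million times the training compute of AlexNet
ratio = palm_2022 / alexnet_2012
print(f"{ratio:,.0f}x")                # 5,319,149x

# On the chart's log scale each gridline marks a 100-fold increase,
# so the two systems sit about log base-100 of the ratio gridlines apart
print(round(math.log(ratio, 100), 1))  # 3.4

# A doubling time of d months is an annual growth factor of 2^(12/d)
print(2 ** (12 / 20))  # ~1.52x per year (Moore's-Law-era pace)
print(2 ** (12 / 6))   # 4.0x per year (pace since about 2010)
```

Compounding is what makes the shortened doubling time so consequential: at the Moore's Law pace compute grows roughly 50% per year, while a six-month doubling time quadruples it every year.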
**The rise of artificial intelligence over the last 8 decades: As training computation has increased, AI systems have become more powerful**{ref}The data is taken from Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, Pablo Villalobos (2022) – [Compute Trends Across Three eras of Machine Learning](https://arxiv.org/pdf/2202.05924.pdf). Published in arXiv on March 9, 2022. See also [their post on the Alignment Forum](https://www.alignmentforum.org/posts/XKtybmbjhC6mXDm5z/compute-trends-across-three-eras-of-machine-learning). The authors regularly update and extend their dataset, a very helpful service to the AI research community. At Our World in Data my colleague Charlie Giattino regularly updates [the interactive version of this chart](https://ourworldindata.org/grapher/ai-training-computation) with the latest data made available by Sevilla and coauthors. See also these two related charts: [Number of parameters in notable artificial intelligence systems](https://ourworldindata.org/grapher/artificial-intelligence-parameter-count) [Number of datapoints used to train notable artificial intelligence systems](https://ourworldindata.org/grapher/artificial-intelligence-number-training-datapoints){/ref} <Image filename="ai-training-computation-3.png" alt=""/> ### Studying the long-run trends to predict the future of AI AI researchers study these long-term trends to see what is possible in the future.{ref}Scaling up the size of neural networks – in terms of the number of parameters and the amount of training data and computation – has led to surprising increases in the capabilities of AI systems. This realization motivated the “scaling hypothesis.” See Gwern Branwen (2020) – [The Scaling Hypothesis](https://www.gwern.net/Scaling-hypothesis).{/ref} Perhaps the most widely discussed study of this kind was published by AI researcher Ajeya Cotra. 
She studied the increase in training computation to ask at what point in time the computation to train an AI system could match that of the human brain. The idea is that at this point the AI system would match the capabilities of a human brain. In her latest update, Cotra estimated a 50% probability that such “transformative AI” will be developed by the year 2040, less than two decades from now.{ref}Her research was announced in various places, including on the AI Alignment Forum: Ajeya Cotra (2020) – [Draft report on AI timelines](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines). As far as I know, the report itself always remained a ‘draft report’ and was published [here on Google Docs](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP). The cited estimate stems from Cotra’s [Two-year update on my personal AI timelines](https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines), in which she shortened her median timeline by 10 years. Cotra emphasizes that there are substantial uncertainties around her estimates and therefore communicates her findings in a range of scenarios. She published her big study in 2020, and her median estimate at the time was that around the year 2050 there would be a 50% probability that the computation required to train such a model would become affordable. In her “most conservative plausible” scenario this point in time is pushed back to around the year 2090, and in her “most aggressive plausible” scenario it is reached in 2040. The same is true for most other forecasters: all emphasize the large uncertainty associated with any of [their forecasts](https://ourworldindata.org/ai-timelines). It is worth emphasizing that the computation of the human brain is highly uncertain. 
See Joseph Carlsmith's [New Report on How Much Computational Power It Takes to Match the Human Brain](https://www.openphilanthropy.org/research/new-report-on-how-much-computational-power-it-takes-to-match-the-human-brain/) from 2020.{/ref} In [a related article](https://ourworldindata.org/ai-impact), I discuss what transformative AI would mean for the world. In short, the idea is that such an AI system would be powerful enough to bring the world into a ‘qualitatively different future’. It could lead to a change at the scale of the two earlier major transformations in human history, the agricultural and industrial revolutions. It would certainly represent the most important global change in our lifetimes. Cotra’s work is particularly relevant in this context as she based her forecast on the kind of historical long-run trend of training computation that we just studied. But it is worth noting that other forecasters who rely on different considerations arrive at broadly similar conclusions. As I show in [my article on AI timelines](https://ourworldindata.org/ai-timelines), many AI experts believe that there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner. ### Building a public resource to enable the necessary public conversation Computers and artificial intelligence have changed our world immensely, but we are still at the early stages of this history. Because this technology feels so familiar, it is easy to forget that all of these technologies that we interact with are very recent innovations, and that the most profound changes are yet to come. Artificial intelligence has already changed what we see, what we know, and what we do. And this is despite the fact that this technology has had only a brief history. There are no signs that these trends are hitting any limits anytime soon. 
On the contrary, particularly over the course of the last decade, the fundamental trends have accelerated: investments in AI technology have [rapidly increased](https://ourworldindata.org/ai-investments), and the doubling time of training computation has shortened to just six months. All major technological innovations lead to a range of positive and negative consequences. This is already true of artificial intelligence. As this technology becomes more and more powerful, we should expect its impact to become greater still. Because of the importance of AI, we should all be able to form an opinion on where this technology is heading and to understand how this development is changing our world. For this purpose, we are building a repository of AI-related metrics, which you can find on [OurWorldinData.org/artificial-intelligence](http://ourworldindata.org/artificial-intelligence). We are still in the early stages of this history and much of what will become possible is yet to come. A technological development as powerful as this should be at the center of our attention. Little might be as important for how the future of our world – and the future of our lives – will play out. **Acknowledgements:** I would like to thank my colleagues Natasha Ahuja, Daniel Bachler, Julia Broden, Charlie Giattino, Bastian Herre, Edouard Mathieu, and Ike Saunders for their helpful comments on drafts of this essay and their contributions in preparing the visualizations. | { "id": 54765, "date": "2022-12-06T01:00:00", "guid": { "rendered": "https://owid.cloud/?p=54765" }, "link": "https://owid.cloud/brief-history-of-ai", "meta": { "owid_publication_context_meta_field": { "latest": true, "homepage": true, "immediate_newsletter": true } }, "slug": "brief-history-of-ai", "tags": [], "type": "post", "title": { "rendered": "The brief history of artificial intelligence: The world has changed fast \u2013 what might be next?" 
}, "_links": { "self": [ { "href": "https://owid.cloud/wp-json/wp/v2/posts/54765" } ], "about": [ { "href": "https://owid.cloud/wp-json/wp/v2/types/post" } ], "author": [ { "href": "https://owid.cloud/wp-json/wp/v2/users/2", "embeddable": true } ], "curies": [ { "href": "https://api.w.org/{rel}", "name": "wp", "templated": true } ], "replies": [ { "href": "https://owid.cloud/wp-json/wp/v2/comments?post=54765", "embeddable": true } ], "wp:term": [ { "href": "https://owid.cloud/wp-json/wp/v2/categories?post=54765", "taxonomy": "category", "embeddable": true }, { "href": "https://owid.cloud/wp-json/wp/v2/tags?post=54765", "taxonomy": "post_tag", "embeddable": true } ], "collection": [ { "href": "https://owid.cloud/wp-json/wp/v2/posts" } ], "wp:attachment": [ { "href": "https://owid.cloud/wp-json/wp/v2/media?parent=54765" } ], "version-history": [ { "href": "https://owid.cloud/wp-json/wp/v2/posts/54765/revisions", "count": 29 } ], "wp:featuredmedia": [ { "href": "https://owid.cloud/wp-json/wp/v2/media/54806", "embeddable": true } ], "predecessor-version": [ { "id": 57947, "href": "https://owid.cloud/wp-json/wp/v2/posts/54765/revisions/57947" } ] }, "author": 2, "format": "standard", "status": "publish", "sticky": false, "content": { "rendered": "\n<div class=\"blog-info\">\n<p>Our World in Data presents the data and research to make progress against the world\u2019s largest problems.<br>This article draws on data and research discussed in our entry on <strong><a href=\"https://ourworldindata.org/artificial-intelligence\" target=\"_blank\" rel=\"noopener\">Artificial Intelligence</a></strong>.</p>\n</div>\n\n\n\n<p>To see what the future might look like it is often helpful to study our history. This is what I will do in this article. 
I retrace the brief history of computers and artificial intelligence to see what we can expect for the future.</p>\n\n\n\n<h4>How did we get here?</h4>\n\n\n\n<div class=\"wp-block-columns\">\n<div class=\"wp-block-column\">\n<p>How rapidly the world has changed becomes clear by how even quite recent computer technology feels ancient to us today. Mobile phones in the \u201890s were big bricks with tiny green displays. Two decades before that the main storage for computers was punch cards. </p>\n\n\n\n<p>In a short period computers evolved so quickly and became such an integral part of our daily lives that it is easy to forget how recent this technology is. The first digital computers were only invented about eight decades ago, as the timeline shows.</p>\n</div>\n\n\n\n<div class=\"wp-block-column\"></div>\n</div>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"2332\" height=\"328\" src=\"https://owid.cloud/app/uploads/2022/12/Computer-timeline.png\" alt=\"\" class=\"wp-image-54772\" srcset=\"https://owid.cloud/app/uploads/2022/12/Computer-timeline.png 2332w, https://owid.cloud/app/uploads/2022/12/Computer-timeline-400x56.png 400w, https://owid.cloud/app/uploads/2022/12/Computer-timeline-800x113.png 800w, https://owid.cloud/app/uploads/2022/12/Computer-timeline-150x21.png 150w, https://owid.cloud/app/uploads/2022/12/Computer-timeline-768x108.png 768w, https://owid.cloud/app/uploads/2022/12/Computer-timeline-1536x216.png 1536w, https://owid.cloud/app/uploads/2022/12/Computer-timeline-2048x288.png 2048w\" sizes=\"(max-width: 2332px) 100vw, 2332px\" /></figure>\n\n\n\n<div class=\"wp-block-columns\">\n<div class=\"wp-block-column\">\n<p>Since the early days of this history, some computer scientists have strived to make machines as intelligent as humans. The next timeline shows some of the notable artificial intelligence (AI) systems and describes what they were capable of. </p>\n\n\n\n<p>The first system I mention is the Theseus. 
It was built by Claude Shannon in 1950 and was a remote-controlled mouse that was able to find its way out of a labyrinth and could remember its course.{ref}On the Theseus see Daniel Klein (2019) \u2013 <a href=\"https://web.archive.org/web/20220125004420/https://www.technologyreview.com/2018/12/19/138508/mighty-mouse/\">Mighty mouse</a>, Published in MIT Technology Review. And <a href=\"https://www.youtube.com/watch?v=_9_AEVQ_p74\">this video on YouTube</a> of a presentation by its inventor Claude Shannon.{/ref} In seven decades the abilities of artificial intelligence have come a long way.</p>\n</div>\n\n\n\n<div class=\"wp-block-column\"></div>\n</div>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"2332\" height=\"590\" src=\"https://owid.cloud/app/uploads/2022/12/ai-timeline.png\" alt=\"\" class=\"wp-image-54774\" srcset=\"https://owid.cloud/app/uploads/2022/12/ai-timeline.png 2332w, https://owid.cloud/app/uploads/2022/12/ai-timeline-400x101.png 400w, https://owid.cloud/app/uploads/2022/12/ai-timeline-800x202.png 800w, https://owid.cloud/app/uploads/2022/12/ai-timeline-150x38.png 150w, https://owid.cloud/app/uploads/2022/12/ai-timeline-768x194.png 768w, https://owid.cloud/app/uploads/2022/12/ai-timeline-1536x389.png 1536w, https://owid.cloud/app/uploads/2022/12/ai-timeline-2048x518.png 2048w\" sizes=\"(max-width: 2332px) 100vw, 2332px\" /></figure>\n\n\n\n<h4>The language and image recognition capabilities of AI systems have developed very rapidly</h4>\n\n\n\n<div class=\"wp-block-columns\">\n<div class=\"wp-block-column\" style=\"flex-basis:50%\">\n<p>The chart shows how we got here by zooming into the last two decades of AI development. 
The plotted data stems from a number of tests in which human and AI performance were evaluated in five different domains, from handwriting recognition to language understanding.\u00a0</p>\n\n\n\n<p>Within each of the five domains the initial performance of the AI system is set to -100, and human performance in these tests is used as a baseline that is set to zero. This means that when a model\u2019s performance crosses the zero line, the AI system scored more points in the relevant test than the humans who took the same test.{ref}The chart shows that the speed with which these AI technologies developed increased over time. Systems for which development was started early \u2013 handwriting and\u00a0speech recognition \u2013 took more than a decade to approach human-level performance, while more recent AI developments led to systems that overtook humans in the span of only a few years. However, one should not overstate this point. To some extent this is dependent on when the researchers started to compare machine and human performance. One could have started evaluating the system for language understanding much earlier and its development would appear much slower in this presentation of the data.{/ref}</p>\n\n\n\n<p>Just 10 years ago, no machine could reliably provide language or image recognition at a human level. But, as the chart shows, AI systems have become steadily more capable and are now beating humans in <em>tests</em> in all these domains.{ref}It is important to remember that while these are remarkable achievements \u2014 and show very rapid gains \u2014 these are the results from specific benchmarking tests. Outside of tests, AI models can fail in surprising ways and do not reliably achieve performance that is comparable with human capabilities.{/ref}\u00a0</p>\n\n\n\n<p>Outside of these standardized tests the performance of these AIs is mixed. In some real-world cases these systems are still performing much worse than humans. 
On the other hand, some implementations of such AI systems are already so cheap that they are available on the phone in your pocket: image recognition categorizes your photos and speech recognition transcribes what you dictate.</p>\n</div>\n\n\n\n<div class=\"wp-block-column\" style=\"flex-basis:50%\">\n<p><strong>Language and image recognition capabilities of AI systems have improved rapidly</strong>{ref}Data from Kiela et al. (2021) \u2013 Dynabench: Rethinking Benchmarking in NLP. arXiv:2104.14337v1; <a href=\"https://doi.org/10.48550/arXiv.2104.14337\">https://doi.org/10.48550/arXiv.2104.14337</a> {/ref}</p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"2326\" height=\"1046\" src=\"https://owid.cloud/app/uploads/2022/12/AI-performance_Dynabench-paper.png\" alt=\"\" class=\"wp-image-54776\" srcset=\"https://owid.cloud/app/uploads/2022/12/AI-performance_Dynabench-paper.png 2326w, https://owid.cloud/app/uploads/2022/12/AI-performance_Dynabench-paper-400x180.png 400w, https://owid.cloud/app/uploads/2022/12/AI-performance_Dynabench-paper-800x360.png 800w, https://owid.cloud/app/uploads/2022/12/AI-performance_Dynabench-paper-150x67.png 150w, https://owid.cloud/app/uploads/2022/12/AI-performance_Dynabench-paper-768x345.png 768w, https://owid.cloud/app/uploads/2022/12/AI-performance_Dynabench-paper-1536x691.png 1536w, https://owid.cloud/app/uploads/2022/12/AI-performance_Dynabench-paper-2048x921.png 2048w\" sizes=\"(max-width: 2326px) 100vw, 2326px\" /></figure>\n</div>\n</div>\n\n\n\n<h4>From image recognition to image generation</h4>\n\n\n\n<div class=\"wp-block-columns\">\n<div class=\"wp-block-column\">\n<p>The previous chart showed the rapid advances in the perceptive abilities of artificial intelligence. AI systems have also become much more capable of generating images. </p>\n\n\n\n<p>This series of nine images shows the development over the last nine years. 
None of the people in these images exist; all of them were generated by an AI system.</p>\n\n\n\n<p>The series begins with an image from 2014 in the top left, a primitive image of a pixelated face in black and white. As the first image in the second row shows, just three years later AI systems were already able to generate images that were hard to differentiate from a photograph.</p>\n\n\n\n<p>In recent years, the capability of AI systems has become much more impressive still. While the early systems focused on generating images of faces, these newer models broadened their capabilities to text-to-image generation based on almost any prompt. The image in the bottom right shows that even the most challenging prompts \u2013 such as <em>\u201cA Pomeranian is sitting on the King\u2019s throne wearing a crown. Two tiger soldiers are standing next to the throne\u201d</em> \u2013 are turned into photorealistic images within seconds.{ref}Because these systems have become so powerful, the latest AI systems often don\u2019t allow the user to generate images of human faces to prevent abuse.{/ref}</p>\n</div>\n\n\n\n<div class=\"wp-block-column\">\n<p><strong>Timeline of images generated by artificial intelligence</strong>{ref}The relevant publications are the following:</p>\n\n\n\n<p>2014: Goodfellow et al:<a href=\"https://arxiv.org/abs/1406.2661\"> Generative Adversarial Networks</a></p>\n\n\n\n<p>2015: Radford, Metz, and Chintala:<a href=\"https://arxiv.org/abs/1511.06434\"> Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks</a></p>\n\n\n\n<p>2016: Liu and Tuzel:<a href=\"https://arxiv.org/abs/1606.07536\"> Coupled Generative Adversarial Networks</a></p>\n\n\n\n<p>2017: Karras et al:<a href=\"https://arxiv.org/abs/1710.10196\"> Progressive Growing of GANs for Improved Quality, Stability, and Variation</a></p>\n\n\n\n<p>2018: Karras, Laine, and Aila:<a href=\"https://arxiv.org/abs/1812.04948\"> A Style-Based Generator Architecture 
for Generative Adversarial Networks</a> (StyleGAN from NVIDIA)</p>\n\n\n\n<p>2019: Karras et al:<a href=\"https://arxiv.org/abs/1912.04958\"> Analyzing and Improving the Image Quality of StyleGAN</a></p>\n\n\n\n<p>Faces generated by this technology can be found on <a href=\"https://thispersondoesnotexist.com/\">thispersondoesnotexist.com</a>.</p>\n\n\n\n<p>2020: Ho, Jain, and Abbeel:<a href=\"https://arxiv.org/abs/2006.11239\"> Denoising Diffusion Probabilistic Models</a></p>\n\n\n\n<p>2021: Ramesh et al:<a href=\"https://arxiv.org/abs/2102.12092\"> Zero-Shot Text-to-Image Generation</a> (first DALL-E from OpenAI;<a href=\"https://openai.com/blog/dall-e/\"> blog post</a>). See also Ramesh et al (2022) \u2013<a href=\"https://cdn.openai.com/papers/dall-e-2.pdf\"> Hierarchical Text-Conditional Image Generation with CLIP Latents</a> (DALL-E 2 from OpenAI;<a href=\"https://openai.com/dall-e-2/\"> blog post</a>).</p>\n\n\n\n<p>2022: Saharia et al: <a href=\"https://arxiv.org/abs/2205.11487\">Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding</a> (Google\u2019s Imagen;<a href=\"https://imagen.research.google/\"> blog post</a>){/ref}</p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"1978\" height=\"2241\" src=\"https://owid.cloud/app/uploads/2022/12/Timeline-of-AI-generated-faces.png\" alt=\"\" class=\"wp-image-54778\" srcset=\"https://owid.cloud/app/uploads/2022/12/Timeline-of-AI-generated-faces.png 1978w, https://owid.cloud/app/uploads/2022/12/Timeline-of-AI-generated-faces-353x400.png 353w, https://owid.cloud/app/uploads/2022/12/Timeline-of-AI-generated-faces-485x550.png 485w, https://owid.cloud/app/uploads/2022/12/Timeline-of-AI-generated-faces-132x150.png 132w, https://owid.cloud/app/uploads/2022/12/Timeline-of-AI-generated-faces-768x870.png 768w, https://owid.cloud/app/uploads/2022/12/Timeline-of-AI-generated-faces-1356x1536.png 1356w, 
https://owid.cloud/app/uploads/2022/12/Timeline-of-AI-generated-faces-1808x2048.png 1808w\" sizes=\"(max-width: 1978px) 100vw, 1978px\" /></figure>\n</div>\n</div>\n\n\n\n<h4>Language recognition and production are developing fast</h4>\n\n\n\n<div class=\"wp-block-columns\">\n<div class=\"wp-block-column\" style=\"flex-basis:50%\">\n<p>Just as striking as the advances of image-generating AIs is the rapid development of systems that parse and respond to human language. </p>\n\n\n\n<p>Shown in the image are examples from an AI system developed by Google called PaLM. In these six examples, the system was asked to explain six different jokes. I find the explanation in the bottom right particularly remarkable: the AI explains an anti-joke that is specifically meant to confuse the listener.</p>\n\n\n\n<p>AIs that produce language have entered our world in many ways over the last few years. Emails get auto-completed, massive amounts of online texts get translated, videos get automatically transcribed, schoolchildren use language models to do their homework, reports get auto-generated, and media outlets <a href=\"https://en.wikipedia.org/wiki/Automated_journalism\">publish</a> AI-generated journalism.</p>\n\n\n\n<p>AI systems are not yet able to produce long, coherent texts. In the future, we will see whether the recent developments will slow down \u2013 or even end \u2013 or whether we will one day read a bestselling novel written by an AI.</p>\n</div>\n\n\n\n<div class=\"wp-block-column\" style=\"flex-basis:50%\">\n<p><strong>Output of the AI system PaLM after being asked to interpret six different jokes</strong>{ref}From Chowdhery et al (2022) \u2013<a href=\"https://arxiv.org/pdf/2204.02311v2.pdf\"> PaLM: Scaling Language Modeling with Pathways</a>. 
Published on arXiv on 7 Apr 2022.{/ref}</p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https://lh3.googleusercontent.com/jJ58V3vrTBw9cg2lzM-w2xiU7ExufRO7WNJUCp7a3ZGf6c79LjqrusGmFaF8nMUEtn-gH3K7J1CHHG3jMW4WotJnNjGYhCVwt3Ou5g66geZDw81yiwez1OxPB80E6TmoRNHa9dNcch9TLDj5ruQBHDvkPpl5Hpl6TtnPVXvdh0mp4jFmmiBu0Wmxsp1mKA\" alt=\"\"/></figure>\n</div>\n</div>\n\n\n\n<h3>Where we are now: AI is here</h3>\n\n\n\n<p>These rapid advances in AI capabilities have made it possible to use machines in a wide range of new domains:</p>\n\n\n\n<p>When you book a flight, it is often an artificial intelligence, and no longer a human, that <a href=\"https://www.bloomberg.com/news/articles/2022-10-20/artificial-intelligence-helps-airlines-find-the-right-prices-for-flight-tickets\">decides</a> what you pay. When you get to the airport, it is an AI system that <a href=\"https://www.sourcesecurity.com/news/co-2166-ga.132.html\">monitors</a> what you do at the airport. And once you are on the plane, an AI system assists the pilot in <a href=\"https://www.airbus.com/en/innovation/industry-4-0/artificial-intelligence\">flying</a> you to your destination. </p>\n\n\n\n<p>AI systems also increasingly determine whether you <a href=\"https://www.brookings.edu/research/reducing-bias-in-ai-based-financial-services/\">get a loan</a>, are <a href=\"https://theconversation.com/ai-algorithms-intended-to-root-out-welfare-fraud-often-end-up-punishing-the-poor-instead-131625\">eligible</a> for welfare, or get <a href=\"https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G\">hired</a> for a particular job. 
Increasingly they help determine who gets <a href=\"https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/\">released from jail</a>.</p>\n\n\n\n<p>Several governments are purchasing <a href=\"https://en.wikipedia.org/wiki/Lethal_autonomous_weapon\">autonomous weapons systems</a> for warfare, and some are using AI systems for <a href=\"https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html\">surveillance and oppression</a>. </p>\n\n\n\n<p>AI systems <a href=\"https://en.wikipedia.org/wiki/GitHub_Copilot\">help</a> to program the software you use and <a href=\"https://en.wikipedia.org/wiki/Machine_translation\">translate</a> the texts you read. <a href=\"https://en.wikipedia.org/wiki/Virtual_assistant\">Virtual assistants</a>, operated by speech recognition, have entered many households over the last decade. Now <a href=\"https://en.wikipedia.org/wiki/Self-driving_car\">self-driving cars</a> are becoming a reality. </p>\n\n\n\n<p>In the last few years, AI systems <a href=\"https://www.nature.com/articles/s42254-022-00518-3\">helped</a> <a href=\"https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/\">to</a> <a href=\"https://www.nature.com/articles/d41586-022-00997-5\">make</a> <a href=\"https://www.nature.com/articles/d41586-022-03209-2\">progress</a> on some of the hardest problems in science.</p>\n\n\n\n<p>Large AIs called <a href=\"https://en.wikipedia.org/wiki/Recommender_system\">recommender systems</a> determine what you see on social media, which products are shown to you in online shops, and what gets recommended to you on YouTube. Increasingly they are not just recommending the media we consume, but based on their capacity to generate images and texts, they are also <a href=\"https://www.nature.com/articles/d41586-021-00530-0\">creating</a> the media we consume. 
</p>\n\n\n\n<p>Artificial intelligence is no longer a technology of the future; AI is here, and much of what is reality now would have looked like sci-fi just recently. It is a technology that already impacts all of us, and the list above includes just a few of its <a href=\"https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence\">many applications</a>. </p>\n\n\n\n<p>The wide range of listed applications makes clear that this is a very general technology that can be used by people for some extremely good goals \u2013 and some extraordinarily bad ones, too. For such \u2018dual use technologies\u2019, it is important that all of us develop an understanding of what is happening and how we want the technology to be used.</p>\n\n\n\n<p>Just two decades ago the world was very different. What might AI technology be capable of in the future?</p>\n\n\n\n<h3>What is next? </h3>\n\n\n\n<div class=\"wp-block-columns\">\n<div class=\"wp-block-column\">\n<p>The AI systems that we just considered are the result of decades of steady advances in AI technology. </p>\n\n\n\n<p>The big chart below brings this history over the last eight decades into perspective. It is based on the dataset produced by Jaime Sevilla and colleagues.{ref}See the footnote on the title of the chart for the references and additional information.{/ref}</p>\n\n\n\n<p>Each small circle in this chart represents one AI system. The circle\u2019s position on the horizontal axis indicates when the AI system was built, and its position on the vertical axis shows the amount of computation that was used to train the particular AI system.</p>\n\n\n\n<p>Training computation is measured in <em>floating point operations</em>, or FLOP for short. One FLOP is equivalent to one addition, subtraction, multiplication, or division of two decimal numbers. 
</p>\n\n\n\n<p>All AI systems that rely on machine learning need to be trained, and in these systems training computation is one of the three fundamental factors that are driving the capabilities of the system. The other two factors are the algorithms and <a href=\"https://ourworldindata.org/grapher/artificial-intelligence-number-training-datapoints\">the input data used</a> for the training. The visualization shows that as training computation has increased, AI systems have become more and more powerful.</p>\n\n\n\n<p>The timeline goes back to the 1940s, the very beginning of electronic computers. The first shown AI system is \u2018Theseus\u2019, Claude Shannon\u2019s robotic mouse from 1950 that I mentioned at the beginning. Towards the other end of the timeline you find AI systems like DALL-E and PaLM, whose abilities to produce photorealistic images and interpret and generate language we have just seen. They are among the AI systems that used the largest amount of training computation to date.</p>\n\n\n\n<p>The training computation is plotted on a logarithmic scale, so that from each grid-line to the next it shows a 100-fold increase. This long-run perspective shows a continuous increase. For the first six decades, training computation increased in line with <a href=\"https://ourworldindata.org/technological-change#moore-s-law-the-exponential-increase-of-the-number-of-transistors-on-integrated-circuits\">Moore\u2019s Law</a>, doubling roughly every 20 months. Since about 2010 this exponential growth has sped up further, to a doubling time of just about 6 months. That is an astonishingly fast rate of growth.{ref}At some point in the future, training computation is expected to slow down to the exponential growth rate of Moore’s Law. 
Tamay Besiroglu, Lennart Heim and Jaime Sevilla of the Epoch team estimate in <a href=\"https://epochai.org/blog/projecting-compute-trends\">their report</a> that the highest probability for this reversion occurring is in the early 2030s.{/ref}</p>\n\n\n\n<p>These fast doubling times have compounded into large increases. PaLM\u2019s training computation was 2.5 billion petaFLOP, more than 5 million times larger than that of AlexNet, the AI with the largest training computation just 10 years earlier.{ref}The training computation of PaLM, developed in 2022, was 2,500,000,000 petaFLOP. The training computation of AlexNet, the AI with the largest training computation up to 2012, was 470 petaFLOP. 2,500,000,000 petaFLOP / 470 petaFLOP = 5,319,148.9. At the same time the amount of training computation required to achieve a given performance has been falling exponentially.</p>\n\n\n\n<p>The costs have also increased quickly. The cost to train PaLM is estimated to be in the range of $9\u2013$23 million according to Lennart Heim, a researcher in the Epoch team. See: Lennart Heim (2022) \u2013 <a href=\"https://blog.heim.xyz/palm-training-cost/\">Estimating PaLM’s training cost</a>.{/ref} </p>\n\n\n\n<p>Scale-up was already exponential and has sped up substantially over the past decade. What can we learn from this historical development for the future of AI?</p>\n</div>\n\n\n\n<div class=\"wp-block-column\">\n<p><strong>The rise of artificial intelligence over the last 8 decades: As training computation has increased, AI systems have become more powerful</strong>{ref}The data is taken from Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, Pablo Villalobos (2022) \u2013 <a href=\"https://arxiv.org/pdf/2202.05924.pdf\">Compute Trends Across Three Eras of Machine Learning</a>. Published in arXiv on March 9, 2022. 
See also <a href=\"https://www.alignmentforum.org/posts/XKtybmbjhC6mXDm5z/compute-trends-across-three-eras-of-machine-learning\">their post on the Alignment Forum</a>. </p>\n\n\n\n<p>The authors regularly update and extend their dataset, a very helpful service to the AI research community. At Our World in Data my colleague Charlie Giattino regularly updates <a href=\"https://ourworldindata.org/grapher/ai-training-computation\">the interactive version of this chart</a> with the latest data made available by Sevilla and coauthors.</p>\n\n\n\n<p>See also these two related charts:</p>\n\n\n\n<p><a href=\"https://ourworldindata.org/grapher/artificial-intelligence-parameter-count\">Number of parameters in notable artificial intelligence systems</a></p>\n\n\n\n<p><a href=\"https://ourworldindata.org/grapher/artificial-intelligence-number-training-datapoints\">Number of datapoints used to train notable artificial intelligence systems</a>{/ref}</p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"2079\" height=\"2560\" src=\"https://owid.cloud/app/uploads/2022/12/ai-training-computation-3.png\" alt=\"\" class=\"wp-image-54982\" srcset=\"https://owid.cloud/app/uploads/2022/12/ai-training-computation-3.png 2079w, https://owid.cloud/app/uploads/2022/12/ai-training-computation-3-325x400.png 325w, https://owid.cloud/app/uploads/2022/12/ai-training-computation-3-447x550.png 447w, https://owid.cloud/app/uploads/2022/12/ai-training-computation-3-122x150.png 122w, https://owid.cloud/app/uploads/2022/12/ai-training-computation-3-768x946.png 768w, https://owid.cloud/app/uploads/2022/12/ai-training-computation-3-1247x1536.png 1247w, https://owid.cloud/app/uploads/2022/12/ai-training-computation-3-1663x2048.png 1663w\" sizes=\"(max-width: 2079px) 100vw, 2079px\" /></figure>\n\n\n\n<p></p>\n</div>\n</div>\n\n\n\n<h4>Studying the long-run trends to predict the future of AI</h4>\n\n\n\n<p>AI researchers study these long-term trends to see what is possible in 
the future.{ref}Scaling up the size of neural networks \u2013 in terms of the number of parameters and the amount of training data and computation \u2013 has led to surprising increases in the capabilities of AI systems. This realization motivated the \u201cscaling hypothesis.\u201d See Gwern Branwen (2020) \u2013 <a href=\"https://www.gwern.net/Scaling-hypothesis\">The Scaling Hypothesis</a>\u2060.{/ref}</p>\n\n\n\n<p>Perhaps the most widely discussed study of this kind was published by AI researcher Ajeya Cotra. She studied the increase in training computation to ask at what point in time the computation to train an AI system could match that of the human brain. The idea is that at this point the AI system would match the capabilities of a human brain. In her latest update, Cotra estimated a 50% probability that such \u201ctransformative AI\u201d will be developed by the year 2040, less than two decades from now.{ref}Her research was announced in various places, including in the AI Alignment Forum here: Ajeya Cotra (2020) \u2013 <a href=\"https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines\">Draft report on AI timelines</a>. As far as I know the report itself always remained a \u2018draft report\u2019 and was published <a href=\"https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP\">here on Google Docs</a>. </p>\n\n\n\n<p>The cited estimate stems from Cotra\u2019s <a href=\"https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines\">Two-year update on my personal AI timelines</a>, in which she shortened her median timeline by 10 years.</p>\n\n\n\n<p>Cotra emphasizes that there are substantial uncertainties around her estimates and therefore communicates her findings in a range of scenarios. 
She published her big study in 2020, and her median estimate at the time was a 50% probability that the computation required to train such a model would become affordable around the year 2050. In her \u201cmost conservative plausible\u201d scenario this point is pushed back to around the year 2090, and in her \u201cmost aggressive plausible\u201d scenario it is reached in 2040.</p>\n\n\n\n<p>The same is true for most other forecasters: all emphasize the large uncertainty associated with any of <a href=\"https://ourworldindata.org/ai-timelines\">their forecasts</a>.</p>\n\n\n\n<p>It is worth emphasizing that the computational power of the human brain is itself highly uncertain. See Joseph Carlsmith’s <a href=\"https://www.openphilanthropy.org/research/new-report-on-how-much-computational-power-it-takes-to-match-the-human-brain/\">New Report on How Much Computational Power It Takes to Match the Human Brain</a> from 2020.{/ref} </p>\n\n\n\n<p>In <a href=\"https://ourworldindata.org/ai-impact\">a related article</a>, I discuss what transformative AI would mean for the world. In short, the idea is that such an AI system would be powerful enough to bring the world into a \u2018qualitatively different future\u2019. It could lead to a change at the scale of the two earlier major transformations in human history, the agricultural and industrial revolutions. It would certainly represent the most important global change in our lifetimes.</p>\n\n\n\n<p>Cotra\u2019s work is particularly relevant in this context as she based her forecast on the kind of historical long-run trend of training computation that we just studied. But it is worth noting that other forecasters who rely on different considerations arrive at broadly similar conclusions. 
As I show in <a href=\"https://ourworldindata.org/ai-timelines\">my article on AI timelines</a>, many AI experts believe that there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.</p>\n\n\n\n<h4>Building a public resource to enable the necessary public conversation</h4>\n\n\n\n<p>Computers and artificial intelligence have changed our world immensely, but we are still at the early stages of this history. Because this technology feels so familiar, it is easy to forget that all of these technologies that we interact with are very recent innovations, and that the most profound changes are yet to come.</p>\n\n\n\n<p>Artificial intelligence has already changed what we see, what we know, and what we do. And this is despite the fact that this technology has had only a brief history. </p>\n\n\n\n<p>There are no signs that these trends are hitting any limits anytime soon. On the contrary, particularly over the course of the last decade, the fundamental trends have accelerated: investments in AI technology have <a href=\"https://ourworldindata.org/ai-investments\">rapidly increased</a>, and the doubling time of training computation has shortened to just six months.</p>\n\n\n\n<p>All major technological innovations lead to a range of positive and negative consequences. This is already true of artificial intelligence. As this technology becomes more and more powerful, we should expect its impact to become greater still. </p>\n\n\n\n<p>Because of the importance of AI, we should all be able to form an opinion on where this technology is heading and to understand how this development is changing our world. For this purpose, we are building a repository of AI-related metrics, which you can find on <a href=\"http://ourworldindata.org/artificial-intelligence\">OurWorldinData.org/artificial-intelligence</a>. 
</p>\n\n\n\n<p>We are still in the early stages of this history and much of what will become possible is yet to come. A technological development as powerful as this should be at the center of our attention. Little might be as important for how the future of our world \u2013 and the future of our lives \u2013 will play out.</p>\n\n\n\n<hr class=\"wp-block-separator\"/>\n\n\n\n<div style=\"height:100px\" aria-hidden=\"true\" class=\"wp-block-spacer\"></div>\n\n\n\n<p><strong>Acknowledgements:</strong> I would like to thank my colleagues Natasha Ahuja, Daniel Bachler, Julia Broden, Charlie Giattino, Bastian Herre, Edouard Mathieu, and Ike Saunders for their helpful comments on drafts of this essay and their contributions in preparing the visualizations.</p>\n", "protected": false }, "excerpt": { "rendered": "Despite their brief history, computers and AI have fundamentally changed what we see, what we know, and what we do. Little is as important for the future of the world, and our own lives, as how this history continues.", "protected": false }, "date_gmt": "2022-12-06T01:00:00", "modified": "2023-07-31T14:05:20", "template": "", "categories": [ 1 ], "ping_status": "closed", "authors_name": [ "Max Roser" ], "modified_gmt": "2023-07-31T13:05:20", "comment_status": "closed", "featured_media": 54806, "featured_media_paths": { "thumbnail": "/app/uploads/2022/12/featured-image-ai-training-computation-150x86.png", "medium_large": "/app/uploads/2022/12/featured-image-ai-training-computation-768x440.png" } } |