posts_gdocs: 1rzaBWNe1jarUZIhWvqJOB3Om4RzgJ1rlYu20Ic7O5Og
Data license: CC-BY
id | slug | type | content | published | createdAt | publishedAt | updatedAt | publicationContext | revisionId | breadcrumbs | markdown | title |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1rzaBWNe1jarUZIhWvqJOB3Om4RzgJ1rlYu20Ic7O5Og | ai-impact | article | { "toc": [ { "slug": "the-advantages-and-disadvantages-of-comparing-machine-and-human-intelligence", "text": "The advantages and disadvantages of comparing machine and human intelligence", "title": "The advantages and disadvantages of comparing machine and human intelligence", "supertitle": "", "isSubheading": false }, { "slug": "transformative-artificial-intelligence-is-defined-by-the-impact-this-technology-would-have-on-the-world", "text": "Transformative artificial intelligence is defined by the impact this technology would have on the world", "title": "Transformative artificial intelligence is defined by the impact this technology would have on the world", "supertitle": "", "isSubheading": false }, { "slug": "a-future-of-human-level-or-transformative-ai", "text": "A future of human-level or transformative AI?", "title": "A future of human-level or transformative AI?", "supertitle": "", "isSubheading": false } ], "body": [ { "type": "text", "value": [ { "text": "Why should you care about the development of artificial intelligence?", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Think about what the alternative would look like. If you and the wider public do not get informed and engaged, then we leave it to a few entrepreneurs and engineers to decide how this technology will transform our world.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "That is the status quo. This small number of people at a few tech firms directly working on artificial intelligence (AI) do understand how extraordinarily powerful this technology is ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/brief-history-of-ai", "children": [ { "text": "becoming", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ". If the rest of society does not become engaged, then it will be this small elite who decides how this technology will change our lives.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "To change this status quo, I want to answer three questions in this article: Why is it hard to take the prospect of a world transformed by AI seriously? How can we imagine such a world? And what is at stake as this technology becomes more powerful?", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "text": [ { "text": "Why is it hard to take the prospect of a world transformed by artificial intelligence seriously?", "spanType": "span-simple-text" } ], "type": "heading", "level": 1, "parseErrors": [] }, { "type": "text", "value": [ { "text": "In some way, it should be obvious how technology can fundamentally transform the world. We just have to look at how much the world has already changed. If you could invite a family of hunter-gatherers from 20,000 years ago on your next flight, they would be pretty surprised. Technology has changed our world already, so we should expect that it can happen again.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "But while we have seen the world transform before, we have seen these transformations play out over the course of generations. What is different now is how very rapid these technological changes have become. In the past, the technologies that our ancestors used in their childhood were still central to their lives in their old age. 
This has no longer been the case for recent generations. Instead, it has ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/technology-long-run", "children": [ { "text": "become common", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " that technologies unimaginable in one's youth become ordinary in later life.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "This is the first reason we might not take the prospect seriously: it is easy to underestimate the speed at which technology can change the world.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The second reason why it is difficult to take the possibility of transformative AI \u2013 potentially even AI as intelligent as humans \u2013 seriously is that it is an idea that we first heard in the cinema. It is not surprising that for many of us, the first reaction to a scenario in which machines have human-like capabilities is the same as if you had asked us to take seriously a future in which vampires, werewolves, or zombies roam the planet.", "spanType": "span-simple-text" }, { "url": "#note-1", "children": [ { "children": [ { "text": "1", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "But it is plausible that it is both the stuff of sci-fi fantasy ", "spanType": "span-simple-text" }, { "children": [ { "text": "and", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " the central invention that could arrive in our, or our children\u2019s, lifetimes.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The third reason why it is difficult to take this prospect seriously is a failure to see that powerful AI could lead to very large changes. This is also understandable. It is difficult to form an idea of a future that is very different from our own time. There are two concepts that I find helpful in imagining a very different future with artificial intelligence. Let\u2019s look at both of them.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "text": [ { "text": "How to develop an idea of what the future of artificial intelligence might look like?", "spanType": "span-simple-text" } ], "type": "heading", "level": 1, "parseErrors": [] }, { "type": "text", "value": [ { "text": "When thinking about the future of artificial intelligence, I find it helpful to consider two different concepts in particular: human-level AI and transformative AI.", "spanType": "span-simple-text" }, { "url": "#note-2", "children": [ { "children": [ { "text": "2", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" }, { "text": " The first concept highlights the AI\u2019s capabilities and anchors them to a familiar benchmark, while transformative AI emphasizes the impact that this technology would have on the world.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "From where we are today, much of this may sound like science fiction. 
It is therefore worth keeping in mind that the majority of surveyed AI experts ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/ai-timelines", "children": [ { "text": "believe", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "text": [ { "text": "The advantages and disadvantages of comparing machine and human intelligence", "spanType": "span-simple-text" } ], "type": "heading", "level": 2, "parseErrors": [] }, { "type": "text", "value": [ { "text": "One way to think about human-level artificial intelligence is to contrast it with the current state of AI technology. While today\u2019s AI systems often have capabilities similar to a particular, limited part of the human mind, a human-level AI would be a machine that is capable of carrying out the same ", "spanType": "span-simple-text" }, { "children": [ { "text": "range", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " of intellectual tasks that we humans are capable of.", "spanType": "span-simple-text" }, { "url": "#note-3", "children": [ { "children": [ { "text": "3", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" }, { "text": " It is a machine that would be \u201cable to learn to do anything that a human can do,\u201d as Norvig and Russell put it in their textbook on AI.", "spanType": "span-simple-text" }, { "url": "#note-4", "children": [ { "children": [ { "text": "4", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Taken together, the range of abilities that characterize intelligence gives humans the ability to solve problems and achieve a wide variety of goals. A human-level AI would therefore be a system that could solve all those problems that we humans can solve, and do the tasks that humans do today. Such a machine, or collective of machines, would be able to do the work of a translator, an accountant, an illustrator, a teacher, a therapist, a truck driver, or the work of a trader on the world\u2019s financial markets. Like us, it would also be able to do research and science, and to develop new technologies based on that.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The concept of human-level AI has some clear advantages. Using the familiarity of our own intelligence as a reference provides us with some clear guidance on how to imagine the capabilities of this technology.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "However, it also has clear disadvantages. Anchoring the imagination of future AI systems to the familiar reality of human intelligence carries the risk that it obscures the very real differences between them.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Some of these differences are obvious. For example, AI systems will have the immense memory of computer systems, against which our own capacity to store information pales. Another obvious difference is the speed at which a machine can absorb and process information. But information storage and processing speed are not the only differences. 
The number of domains in which machines already outperform humans is steadily increasing: in chess, after matching the level of the best human players in the late 1990s, AI systems ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/grapher/computer-chess-ability", "children": [ { "text": "reached", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " superhuman levels more than a decade ago. In other games like Go or complex strategy games, this has happened more recently.", "spanType": "span-simple-text" }, { "url": "#note-5", "children": [ { "children": [ { "text": "5", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "These differences mean that an AI that is at least as good as humans in every domain would overall be much more powerful than the human mind. Even the first \u201chuman-level AI\u201d would therefore be quite superhuman in many ways.", "spanType": "span-simple-text" }, { "url": "#note-6", "children": [ { "children": [ { "text": "6", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Human intelligence is also a bad metaphor for machine intelligence in other ways. The way we think is often very different from the way machines do, and as a consequence the output of thinking machines can be very alien to us.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Most perplexing and most concerning are the strange and unexpected ways in which machine intelligence can fail. The AI-generated image of the horse below provides an example: on the one hand, AIs can do what no human can do \u2013 produce an image of anything, in any style (here photorealistic), in mere seconds \u2013 but on the other hand they can fail in ways that no human would.", "spanType": "span-simple-text" }, { "url": "#note-7", "children": [ { "children": [ { "text": "7", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" }, { "text": " No human would make the mistake of drawing a horse with five legs.", "spanType": "span-simple-text" }, { "url": "#note-8", "children": [ { "children": [ { "text": "8", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Imagining a powerful future AI as just another human would therefore likely be a mistake. The differences might be so large that it would be a misnomer to call such systems \u201chuman-level.\u201d", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "children": [ { "text": "AI-generated image of a horse", "spanType": "span-simple-text" } ], "spanType": "span-bold" }, { "url": "#note-9", "children": [ { "children": [ { "text": "9", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "alt": "A brown horse running in a grassy field. 
The horse appears to have five legs.", "size": "wide", "type": "image", "filename": "ai-generated-image-of-a-horse.png", "parseErrors": [] }, { "text": [ { "text": "Transformative artificial intelligence is defined by the impact this technology would have on the world", "spanType": "span-simple-text" } ], "type": "heading", "level": 2, "parseErrors": [] }, { "type": "text", "value": [ { "text": "In contrast, the concept of transformative AI is not based on a comparison with human intelligence. This has the advantage of sidestepping the problems that the comparisons with our own mind bring. But it has the disadvantage that it is harder to imagine what such a system would look like and be capable of. It requires more from us. It requires us to imagine a world with intelligent actors that are potentially very different from ourselves.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Transformative AI is not defined by any specific capabilities, but by the real-world impact that the AI would have. To qualify as transformative, researchers think of it as AI that is \u201cpowerful enough to bring us into a new, qualitatively different future.\u201d", "spanType": "span-simple-text" }, { "url": "#note-10", "children": [ { "children": [ { "text": "10", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "In humanity\u2019s history, there have been two cases of such major transformations, the agricultural and the industrial revolutions.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Transformative AI becoming a reality would be an event on that scale. Like the arrival of agriculture 10,000 years ago, or the transition from hand- to machine-manufacturing, it would be an event that would change the world for billions of people around the globe and for the entire trajectory of ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/longtermism", "children": [ { "text": "humanity\u2019s future", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Technologies that fundamentally change how a wide range of goods or services are produced are called \u2018general-purpose technologies\u2019. The two previous transformative events were caused by the discovery of two particularly significant general-purpose technologies: the change in food production as humanity transitioned from hunting and gathering to farming, and the rise of machine manufacturing in the industrial revolution. 
Based on the evidence and arguments presented in ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/artificial-intelligence#research-writing", "children": [ { "text": "this series", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " on AI development, I believe it is plausible that powerful AI could represent the introduction of a similarly significant general-purpose technology.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "children": [ { "text": "Timeline of the three transformative events in world history", "spanType": "span-simple-text" } ], "spanType": "span-bold" } ], "parseErrors": [] }, { "type": "text", "value": [], "parseErrors": [] }, { "alt": "", "size": "wide", "type": "image", "filename": "Timeline-of-Transformative-Events.png", "parseErrors": [] }, { "text": [ { "text": "A future of human-level or transformative AI?", "spanType": "span-simple-text" } ], "type": "heading", "level": 2, "parseErrors": [] }, { "type": "text", "value": [ { "text": "The two concepts are closely related, but they are not the same. The creation of a human-level AI would certainly have a transformative impact on our world. If the work of most humans could be carried out by an AI, the lives of millions of people would change.", "spanType": "span-simple-text" }, { "url": "#note-11", "children": [ { "children": [ { "text": "11", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The opposite, however, is not true: we might see transformative AI without developing human-level AI. Since the human mind is in many ways a poor metaphor for the intelligence of machines, we might plausibly develop transformative AI before we develop human-level AI. Depending on how this goes, this might mean that we will never see any machine intelligence for which human intelligence is a helpful comparison.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "When and if AI systems might reach either of these levels is of course difficult to predict. In my ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/ai-timelines", "children": [ { "text": "companion article", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " on this question, I give an overview of what researchers in this field currently believe. Many AI experts believe there is a real chance that such systems will be developed within the next decades, and some believe that they will exist much sooner.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "text": [ { "text": "What is at stake as artificial intelligence becomes more powerful?", "spanType": "span-simple-text" } ], "type": "heading", "level": 1, "parseErrors": [] }, { "type": "text", "value": [ { "text": "All major technological innovations lead to a range of positive and negative consequences. For AI, the spectrum of possible outcomes \u2013 from the most negative to the most positive \u2013 is extraordinarily wide.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "That the use of AI technology can cause harm is clear, because it is already happening.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "AI systems can cause harm when people use them maliciously. 
For example, when they are used in politically-motivated disinformation campaigns or to enable mass surveillance.", "spanType": "span-simple-text" }, { "url": "#note-12", "children": [ { "children": [ { "text": "12", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "But AI systems can also cause unintended harm, when they act differently than intended or fail. For example, in the Netherlands the authorities used an AI system which falsely claimed that an estimated 26,000 parents made fraudulent claims for child care benefits. The false allegations led to hardship for many poor families, and also resulted in the resignation of the Dutch government in 2021.", "spanType": "span-simple-text" }, { "url": "#note-13", "children": [ { "children": [ { "text": "13", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "As AI becomes more powerful, the possible negative impacts could become much larger. Many of these risks have rightfully received public attention: more powerful AI could lead to mass labor displacement, or extreme concentrations of power and wealth. In the hands of autocrats, it could empower totalitarianism through its suitability for mass surveillance and control.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The so-called ", "spanType": "span-simple-text" }, { "children": [ { "text": "alignment problem", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " of AI is another extreme risk. This is the concern that ", "spanType": "span-simple-text" }, { "children": [ { "text": "nobody", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " would be able to control a powerful AI system, even if the AI takes actions that harm us humans, or humanity as a whole. This risk is unfortunately receiving little attention from the wider public, but it is seen as an extremely large risk by many leading AI researchers.", "spanType": "span-simple-text" }, { "url": "#note-14", "children": [ { "children": [ { "text": "14", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "How could an AI possibly escape human control and end up harming humans?", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The risk is not that an AI becomes self-aware, develops bad intentions, and \u201cchooses\u201d to do this. The risk is that we try to instruct the AI to pursue some specific goal \u2013 even a very worthwhile one \u2013 and in the pursuit of that goal it ends up harming humans. It is about unintended consequences. The AI does what we told it to do, but not what we wanted it to do.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Can\u2019t we just tell the AI to not do those things? It is definitely possible to build an AI that avoids any particular problem we foresee, but it is hard to foresee ", "spanType": "span-simple-text" }, { "children": [ { "text": "all", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " the possible harmful unintended consequences. 
The alignment problem arises because of \u201cthe impossibility of defining true human purposes correctly and completely,\u201d as AI researcher Stuart Russell puts it.", "spanType": "span-simple-text" }, { "url": "#note-15", "children": [ { "children": [ { "text": "15", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Can\u2019t we then just switch off the AI? This might also not be possible. That is because a powerful AI would know two things: it faces a risk that humans could turn it off, and it can\u2019t achieve its goals once it has been turned off. As a consequence, the AI would pursue the very fundamental goal of ensuring that it won\u2019t be switched off. This is why, once we realize that an extremely intelligent AI is causing unintended harm in the pursuit of some specific goal, it might not be possible to turn it off or change what the system does.", "spanType": "span-simple-text" }, { "url": "#note-16", "children": [ { "children": [ { "text": "16", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "This risk \u2013 that humanity might not be able to stay in control once AI becomes very powerful, and that this might lead to an extreme catastrophe \u2013 has been recognized since the early days of AI research, more than 70 years ago.", "spanType": "span-simple-text" }, { "url": "#note-17", "children": [ { "children": [ { "text": "17", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" }, { "text": " The very rapid development of AI in recent years has made a solution to this problem much more urgent.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "I have tried to summarize some of the risks of AI, but a short article does not offer enough space to address all possible questions. On the very worst risks of AI systems, and what we can do now to reduce them, I especially recommend reading the book ", "spanType": "span-simple-text" }, { "url": "https://brianchristian.org/the-alignment-problem/", "children": [ { "text": "The Alignment Problem", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " by Brian Christian and Benjamin Hilton\u2019s article ", "spanType": "span-simple-text" }, { "url": "https://80000hours.org/problem-profiles/artificial-intelligence", "children": [ { "text": "\u2018Preventing an AI-related catastrophe\u2019", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "If we manage to avoid these risks, transformative AI could also lead to very positive consequences. Advances in science and technology were crucial to ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/a-history-of-global-living-conditions-in-5-charts", "children": [ { "text": "the many positive developments", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " in humanity\u2019s history. 
If artificial ingenuity can augment our own, it could help us make progress on the many large problems we face: from cleaner energy, to the replacement of unpleasant work, to much better healthcare.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "This extremely large contrast between the possible positives and negatives makes clear that the stakes are unusually high with this technology. Reducing the negative risks and solving the alignment problem could mean the difference between a healthy, flourishing, and wealthy future for humanity \u2013 and the destruction of the same.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "text": [ { "text": "How can we make sure that the development of AI goes well?", "spanType": "span-simple-text" } ], "type": "heading", "level": 1, "parseErrors": [] }, { "type": "text", "value": [ { "text": "Making sure that the development of artificial intelligence goes well is not just one of the most crucial questions of our time, but likely one of the most crucial questions in human history. This needs public resources \u2013 public funding, public attention, and public engagement.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Currently, almost all resources that are dedicated to AI aim to speed up the development of this technology. Efforts that aim to increase the safety of AI systems, on the other hand, do not receive the resources they need. Researcher Toby Ord estimated that in 2020 between $10 and $50 million was spent on work to address the alignment problem.", "spanType": "span-simple-text" }, { "url": "#note-18", "children": [ { "children": [ { "text": "18", "spanType": "span-simple-text" } ], "spanType": "span-superscript" } ], "spanType": "span-ref" }, { "text": " Corporate AI investment in the same year was more than 2,000 times larger: it ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/grapher/corporate-investment-in-artificial-intelligence-by-type", "children": [ { "text": "totaled", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " $153 billion.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "This is not only the case for the AI alignment problem. The work on the entire range of negative social consequences from AI is under-resourced compared to the large investments to increase the power and use of AI systems.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "It is frustrating and concerning for society as a whole that AI safety work is extremely neglected and that little public funding is dedicated to this crucial field of research. On the other hand, for each ", "spanType": "span-simple-text" }, { "children": [ { "text": "individual", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " person this neglect means that they have a good chance of actually making a positive difference if they dedicate themselves to this problem now. 
And while the field of AI safety is small, it does provide ", "spanType": "span-simple-text" }, { "url": "https://80000hours.org/problem-profiles/artificial-intelligence/#what-can-you-do-concretely-to-help", "children": [ { "text": "good resources", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " on what you can do concretely if you want to work on this problem.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "I hope that more people dedicate their individual careers to this cause, but it needs more than individual efforts. A technology that is transforming our society needs to be a central interest of all of us. As a society we have to think more about the societal impact of AI, become knowledgeable about the technology, and understand what is at stake.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "When our children look back at today, I imagine that they will find it difficult to understand how little attention and resources we dedicated to the development of safe AI. I hope that this changes in the coming years, and that we begin to dedicate more resources to making sure that powerful AI gets developed in a way that benefits us and the next generations.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "If we fail to develop this broad-based understanding, then it will remain the small elite that finances and builds this technology who will determine how one of the \u2013 or plausibly ", "spanType": "span-simple-text" }, { "children": [ { "text": "the", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " \u2013 most powerful technologies in human history will transform our world.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "If we leave the development of artificial intelligence entirely to private companies, then we are also leaving it up to these private companies to decide what our future \u2014 the future of humanity \u2014 will be.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "text": [ { "type": "text", "value": [ { "text": "With our work at Our World in Data we want to do our small part to enable a better-informed public conversation on AI and the future we want to live in. 
You can find these resources on ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/artificial-intelligence", "children": [ { "text": "OurWorldinData.org/artificial-intelligence", "spanType": "span-simple-text" } ], "spanType": "span-link" } ], "parseErrors": [] } ], "type": "callout", "parseErrors": [] }, { "type": "horizontal-rule", "value": {}, "parseErrors": [] }, { "text": [ { "type": "text", "value": [ { "children": [ { "text": "Acknowledgements:", "spanType": "span-simple-text" } ], "spanType": "span-bold" }, { "children": [ { "text": " ", "spanType": "span-simple-text" } ], "spanType": "span-bold" }, { "text": "I would like to thank my colleagues Daniel Bachler, Charlie Giattino, and Edouard Mathieu for their helpful comments to drafts of this essay.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "type": "callout", "parseErrors": [] } ], "refs": { "errors": [], "definitions": { "15b8d11f280a1dbf1aeb137a5de9c080632bd7c2": { "id": "15b8d11f280a1dbf1aeb137a5de9c080632bd7c2", "index": 2, "content": [ { "type": "text", "value": [ { "text": "The fact that humans are capable of a ", "spanType": "span-simple-text" }, { "children": [ { "text": "range", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " of intellectual tasks means that you arrive at different definitions of intelligence depending on which aspect within that range you focus on (the ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/Intelligence", "children": [ { "text": "Wikipedia entry on intelligence", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", for example, lists a number of definitions from various researchers and different disciplines). As a consequence there are also various definitions of \u2018human-level AI\u2019.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "There are also several closely related terms: Artificial General Intelligence, High-Level Machine Intelligence, Strong AI, or Full AI are sometimes synonymously used, and sometimes defined in similar, yet different ways. In specific discussions, it is necessary to define this concept more narrowly; for example, in ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/ai-timelines", "children": [ { "text": "studies on AI timelines", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " researchers offer more precise definitions of what human-level AI refers to in their particular study.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "32cd94a3d7dd125248573a23c5156035d19b4bbb": { "id": "32cd94a3d7dd125248573a23c5156035d19b4bbb", "index": 17, "content": [ { "type": "text", "value": [ { "text": "Toby Ord \u2013 ", "spanType": "span-simple-text" }, { "url": "https://theprecipice.com/", "children": [ { "text": "The Precipice", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ". He makes this projection in footnote 55 of chapter 2. 
It is based on the 2017 estimate by Farquhar.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "36b08a8447eb6df6bc2e607673339303cde4a5b9": { "id": "36b08a8447eb6df6bc2e607673339303cde4a5b9", "index": 13, "content": [ { "type": "text", "value": [ { "text": "Overviews are provided in Stuart Russell (2019) \u2013 Human Compatible (especially chapter 5) and Brian Christian\u2019s 2020 book ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/The_Alignment_Problem", "children": [ { "text": "The Alignment Problem", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ". Christian presents the thinking of many leading AI researchers from the earliest days up to now and presents an excellent overview of this problem. It is also seen as a large risk by some of the leading private firms who work towards powerful AI \u2013 see OpenAI's article \"", "spanType": "span-simple-text" }, { "url": "https://openai.com/blog/our-approach-to-alignment-research/", "children": [ { "text": "Our approach to alignment research", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": "\" from August 2022.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "3803a99b99b604e8cbf87ef5bcfedad6d8a0f314": { "id": "3803a99b99b604e8cbf87ef5bcfedad6d8a0f314", "index": 1, "content": [ { "type": "text", "value": [ { "text": "Both of these concepts are widely used in the scientific literature on artificial intelligence. For example, questions about the timelines for the development of future AI are often framed using these terms. See ", "spanType": "span-simple-text" }, { "url": "https://ourworldindata.org/ai-timelines", "children": [ { "text": "my article on this topic", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "4fe0e7c8a9ed61157baa1a73b31b84abf5c5727c": { "id": "4fe0e7c8a9ed61157baa1a73b31b84abf5c5727c", "index": 8, "content": [ { "type": "text", "value": [ { "text": "Via ", "spanType": "span-simple-text" }, { "url": "https://fchollet.com/", "children": [ { "text": "Fran\u00e7ois Chollet", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", who published it ", "spanType": "span-simple-text" }, { "url": "https://twitter.com/fchollet/status/1573752180720312320?s=46&t=qPwLwDgLdJrLlXxa878BDQ", "children": [ { "text": "here", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ". Based on Chollet\u2019s comments it seems that this image was created by the AI system \u2018Stable Diffusion\u2019.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "776753891ec3a927ce8aeb984fd7751a45d25152": { "id": "776753891ec3a927ce8aeb984fd7751a45d25152", "index": 16, "content": [ { "type": "text", "value": [ { "text": "In 1950 the computer science pioneer Alan Turing put it like this: ", "spanType": "span-simple-text" }, { "children": [ { "text": "\u201cIf a machine can think, it might think more intelligently than we do, and then where should we be? \u2026 [T]his new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety. 
It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. \u2026 I cannot offer any such comfort, for I believe that no such bounds can be set.\u201d", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " Alan. M. Turing (1950) \u2013 ", "spanType": "span-simple-text" }, { "url": "https://doi.org/10.1093/mind/LIX.236.433", "children": [ { "text": "Computing Machinery and Intelligence", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", In Mind, Volume LIX, Issue 236, October 1950, Pages 433\u2013460.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Norbert Wiener is another pioneer who saw the alignment problem very early. One way he put it was \u201cIf we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively \u2026 we had better be quite sure that the purpose put into the machine is the purpose which we really desire.\u201d quoted from Norbert Wiener (1960) \u2013 Some Moral and Technical Consequences of Automation: As machines learn they may develop unforeseen strategies at rates that baffle their programmers. In Science.", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "In 1950 \u2013 the same year in which Turing published the cited article \u2013 Wiener published his book The Human Use of Human Beings, whose front-cover blurb reads: \u201cThe \u2018mechanical brain\u2019 and similar machines can destroy human values or enable us to realize them as never before.\u201d", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "7e556c5304f23ca82f3ed158aadb0dd3f564b931": { "id": "7e556c5304f23ca82f3ed158aadb0dd3f564b931", "index": 0, "content": [ { "type": "text", "value": [ { "text": "This problem becomes even larger when we try to imagine how a future with a human-level AI might play out. Any ", "spanType": "span-simple-text" }, { "children": [ { "text": "particular", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " scenario will not only involve the idea that this powerful AI exists, but a whole range of additional assumptions about the future context in which this happens. It is therefore hard to communicate a scenario of a world with human-level AI that does not sound contrived, bizarre or even silly.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "960b4be1a02e0e0cbf607a69487f11c7dc45520d": { "id": "960b4be1a02e0e0cbf607a69487f11c7dc45520d", "index": 6, "content": [ { "type": "text", "value": [ { "text": "An overview of how AI systems can fail can be found in ", "spanType": "span-simple-text" }, { "url": "https://spectrum.ieee.org/ai-failures", "children": [ { "text": "Charles Choi \u2013 7 Revealing Ways AIs Fail", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ". 
It is also worth reading through the ", "spanType": "span-simple-text" }, { "url": "https://www.aiaaic.org/aiaaic-repository/ai-and-algorithmic-incidents-and-controversies", "children": [ { "text": "AIAAIC Repository", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " which \u201cdetails recent incidents and controversies driven by or relating to AI, algorithms, and automation.\"", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "9784e80f3d4b24c2c512a8b421b52c8376dc6453": { "id": "9784e80f3d4b24c2c512a8b421b52c8376dc6453", "index": 12, "content": [ { "type": "text", "value": [ { "text": "See for example the ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal", "children": [ { "text": "Wikipedia entry", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " on the \u2018Dutch childcare benefits scandal\u2019 and Melissa Heikkil\u00e4 (2022) \u2013 ", "spanType": "span-simple-text" }, { "url": "https://web.archive.org/web/20221117053636/https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/", "children": [ { "text": "\u2018Dutch scandal serves as a warning for Europe over risks of using algorithms\u2019", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", in Politico. The technology can also reinforce discrimination in terms of race and gender. See Brian Christian\u2019s book The Alignment Problem and the ", "spanType": "span-simple-text" }, { "url": "https://ainowinstitute.org/reports.html", "children": [ { "text": "reports of the AI Now Institute", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "a57211a6812c3f1b91fbb97b51a92a4a143300a3": { "id": "a57211a6812c3f1b91fbb97b51a92a4a143300a3", "index": 4, "content": [ { "type": "text", "value": [ { "text": "The AI system ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/AlphaGo", "children": [ { "text": "AlphaGo", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", and its various successors, won against Go masters. The AI system ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/Pluribus_(poker_bot)", "children": [ { "text": "Pluribus", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " beat humans at no-limit Texas hold 'em poker. The AI system Cicero can strategize and use human language to win the strategy game Diplomacy. See: Meta Fundamental AI Research Diplomacy Team (FAIR), Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, et al. (2022) \u2013 \u2018Human-Level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning\u2019. In ", "spanType": "span-simple-text" }, { "children": [ { "text": "Science", "spanType": "span-simple-text" } ], "spanType": "span-italic" }, { "text": " 0, no. 
0 (22 November 2022): eade9097.", "spanType": "span-simple-text" }, { "url": "https://doi.org/10.1126/science.ade9097", "children": [ { "text": " https://doi.org/10.1126/science.ade9097", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "af2007409462ca43d46ceb247c62bdea530b5f37": { "id": "af2007409462ca43d46ceb247c62bdea530b5f37", "index": 3, "content": [ { "type": "text", "value": [ { "text": "Stuart Russell and Peter Norvig (2021) \u2014 Artificial Intelligence: A Modern Approach. Fourth edition. Published by Pearson.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "c0154595c4bb771c9ef95e78235cce456af9d8ea": { "id": "c0154595c4bb771c9ef95e78235cce456af9d8ea", "index": 15, "content": [ { "type": "text", "value": [ { "text": "A question that follows from this is: why build such a powerful AI in the first place?", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "The incentives are very high. As I emphasize below, this innovation has the potential to lead to very positive developments. In addition to the large social benefits there are also large incentives for those who develop it \u2013 the governments that can use it for their goals, the individuals who can use it to become more powerful and wealthy. Additionally, it is of scientific interest and might help us to understand our own mind and intelligence better. And lastly, even if we wanted to stop building powerful AIs, it would likely be very hard to achieve: countries around the world would have to agree to stop building more advanced AI and then find ways to actually implement that agreement.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "d3da710183f88e9790f68121e8bdb79d80c48f07": { "id": "d3da710183f88e9790f68121e8bdb79d80c48f07", "index": 7, "content": [ { "type": "text", "value": [ { "text": "I have taken this example from ", "spanType": "span-simple-text" }, { "url": "https://fchollet.com/", "children": [ { "text": "AI researcher Fran\u00e7ois Chollet", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ", who published it ", "spanType": "span-simple-text" }, { "url": "https://twitter.com/fchollet/status/1573752180720312320?s=46&t=qPwLwDgLdJrLlXxa878BDQ", "children": [ { "text": "here", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "d5cff4ba485710dea7e721b63d1e62ee9cbb638f": { "id": "d5cff4ba485710dea7e721b63d1e62ee9cbb638f", "index": 5, "content": [ { "type": "text", "value": [ { "text": "This also poses a problem when we evaluate how the intelligence of a machine compares with the intelligence of humans. If intelligence were a general ability, a single capacity, then we could easily compare and evaluate it, but the fact that it is a range of skills makes it much more difficult to compare machine and human intelligence. Tests for AI systems therefore comprise a wide range of tasks. 
See for example Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt (2020) \u2013\u00a0", "spanType": "span-simple-text" }, { "url": "https://arxiv.org/abs/2009.03300", "children": [ { "text": "Measuring Massive Multitask Language Understanding", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " or the definition of what would qualify as artificial general intelligence in ", "spanType": "span-simple-text" }, { "url": "https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/", "children": [ { "text": "this Metaculus prediction", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "d5d34650c210dc1faabd1ef78e9c786152bf1635": { "id": "d5d34650c210dc1faabd1ef78e9c786152bf1635", "index": 14, "content": [ { "type": "text", "value": [ { "text": "Stuart Russell (2019) \u2013 Human Compatible", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "d785795c377b1e7625cdb90aa7510ea471cb4908": { "id": "d785795c377b1e7625cdb90aa7510ea471cb4908", "index": 9, "content": [ { "type": "text", "value": [ { "text": "This quote is from Holden Karnofsky (2021) \u2013 ", "spanType": "span-simple-text" }, { "url": "https://www.cold-takes.com/where-ai-forecasting-stands-today/", "children": [ { "text": "AI Timelines: Where the Arguments, and the \"Experts,\" Stand", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ". For Holden Karnofsky\u2019s earlier thinking on this conceptualization of AI see his 2016 article ", "spanType": "span-simple-text" }, { "url": "https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/#Sec1", "children": [ { "text": "\u2018Some Background on Our Views Regarding Advanced Artificial Intelligence\u2019", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] }, { "type": "text", "value": [ { "text": "Ajeya Cotra, whose research on AI timelines I discuss in other articles of this series, attempts to give a quantitative definition of what would qualify as transformative AI. in her widely cited ", "spanType": "span-simple-text" }, { "url": "https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines", "children": [ { "text": "report on AI timelines", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": " she defines it as a change in software technology that brings the growth rate of gross world product \"to 20%-30% per year\". Several other researchers define TAI in similar terms.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "e76fc141fcfb37efe234648d1215b2738ef11ebc": { "id": "e76fc141fcfb37efe234648d1215b2738ef11ebc", "index": 10, "content": [ { "type": "text", "value": [ { "text": "Human-level AI is typically defined as a software system that can carry out at least 90% or 99% of all economically relevant tasks that humans carry out. 
A lower-bar definition would be an AI system that can carry out all those tasks that can currently be done by another human who is working remotely on a computer.", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] }, "e9873918d62a99c88e50611c4281f06bcee87c9c": { "id": "e9873918d62a99c88e50611c4281f06bcee87c9c", "index": 11, "content": [ { "type": "text", "value": [ { "text": "On the use of AI in politically-motivated disinformation campaigns see for example John Villasenor (November 2020) \u2013 ", "spanType": "span-simple-text" }, { "url": "https://web.archive.org/web/20220907044354/https://www.brookings.edu/research/how-to-deal-with-ai-enabled-disinformation/", "children": [ { "text": "How to deal with AI-enabled disinformation", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ". More generally on this topic see Brundage and Avin et al. (2018) \u2013 The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, published at ", "spanType": "span-simple-text" }, { "url": "https://maliciousaireport.com/", "children": [ { "text": "maliciousaireport.com", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ". A starting point for literature and reporting on mass surveillance by governments is ", "spanType": "span-simple-text" }, { "url": "https://en.wikipedia.org/wiki/List_of_government_mass_surveillance_projects", "children": [ { "text": "the relevant Wikipedia entry", "spanType": "span-simple-text" } ], "spanType": "span-link" }, { "text": ".", "spanType": "span-simple-text" } ], "parseErrors": [] } ], "parseErrors": [] } } }, "type": "article", "title": "Artificial intelligence is transforming our world \u2014 it is on all of us to make sure that it goes well", "authors": [ "Max Roser" ], "excerpt": "How AI gets built is currently decided by a small group of technologists. It should be in all of our interest to become informed and engaged.", "dateline": "December 15, 2022", "subtitle": "How AI gets built is currently decided by a small group of technologists. As this technology is transforming our lives, it should be in all of our interest to become informed and engaged.", "featured-image": "featured-image-Timeline-of-Transformative-Events-1.png" } |
1 | 2023-10-13 06:26:10 | 2022-12-15 05:00:00 | 2023-12-28 16:31:11 | listed | ALBJ4LsU9pT56T0znmJ5pPAg-p879caIZht0P67H_gRP5Gy66J10grB_l8rxNU7CEPXPFC1GV3K86aquyKll8g | Why should you care about the development of artificial intelligence? Think about what the alternative would look like. If you and the wider public do not get informed and engaged, then we leave it to a few entrepreneurs and engineers to decide how this technology will transform our world. That is the status quo. This small number of people at a few tech firms directly working on artificial intelligence (AI) do understand how extraordinarily powerful this technology is [becoming](https://ourworldindata.org/brief-history-of-ai). If the rest of society does not become engaged, then it will be this small elite who decides how this technology will change our lives. To change this status quo, I want to answer three questions in this article: Why is it hard to take the prospect of a world transformed by AI seriously? How can we imagine such a world? And what is at stake as this technology becomes more powerful? # Why is it hard to take the prospect of a world transformed by artificial intelligence seriously? In some way, it should be obvious how technology can fundamentally transform the world. We just have to look at how much the world has already changed. If you could invite a family of hunter-gatherers from 20,000 years ago on your next flight, they would be pretty surprised. Technology has changed our world already, so we should expect that it can happen again. But while we have seen the world transform before, we have seen these transformations play out over the course of generations. What is different now is how very rapid these technological changes have become. In the past, the technologies that our ancestors used in their childhood were still central to their lives in their old age. This has not been the case anymore for recent generations. Instead, it has [become common](https://ourworldindata.org/technology-long-run) that technologies unimaginable in one's youth become ordinary in later life. This is the first reason we might not take the prospect seriously: it is easy to underestimate the speed at which technology can change the world. The second reason why it is difficult to take the possibility of transformative AI – potentially even AI as intelligent as humans – seriously is that it is an idea that we first heard in the cinema. It is not surprising that for many of us, the first reaction to a scenario in which machines have human-like capabilities is the same as if you had asked us to take seriously a future in which vampires, werewolves, or zombies roam the planet.1 But, it is plausible that it is both the stuff of sci-fi fantasy _and_ the central invention that could arrive in our, or our children’s, lifetimes. The third reason why it is difficult to take this prospect seriously is by failing to see that powerful AI could lead to very large changes. This is also understandable. It is difficult to form an idea of a future that is very different from our own time. There are two concepts that I find helpful in imagining a very different future with artificial intelligence. Let’s look at both of them. # How to develop an idea of what the future of artificial intelligence might look like? 
When thinking about the future of artificial intelligence, I find it helpful to consider two different concepts in particular: human-level AI, and transformative AI.2 The first concept highlights the AI’s capabilities and anchors them to a familiar benchmark, while transformative AI emphasizes the impact that this technology would have on the world. From where we are today, much of this may sound like science fiction. It is therefore worth keeping in mind that the majority of surveyed AI experts [believe](https://ourworldindata.org/ai-timelines) there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner. ## The advantages and disadvantages of comparing machine and human intelligence One way to think about human-level artificial intelligence is to contrast it with the current state of AI technology. While today’s AI systems often have capabilities similar to a particular, limited part of the human mind, a human-level AI would be a machine that is capable of carrying out the same _range_ of intellectual tasks that we humans are capable of.3 It is a machine that would be “able to learn to do anything that a human can do,” as Norvig and Russell put it in their textbook on AI.4 Taken together, the range of abilities that characterize intelligence gives humans the ability to solve problems and achieve a wide variety of goals. A human-level AI would therefore be a system that could solve all those problems that we humans can solve, and do the tasks that humans do today. Such a machine, or collective of machines, would be able to do the work of a translator, an accountant, an illustrator, a teacher, a therapist, a truck driver, or the work of a trader on the world’s financial markets. Like us, it would also be able to do research and science, and to develop new technologies based on that. The concept of human-level AI has some clear advantages. Using the familiarity of our own intelligence as a reference provides us with some clear guidance on how to imagine the capabilities of this technology. However, it also has clear disadvantages. Anchoring the imagination of future AI systems to the familiar reality of human intelligence carries the risk that it obscures the very real differences between them. Some of these differences are obvious. For example, AI systems will have the immense memory of computer systems, against which our own capacity to store information pales. Another obvious difference is the speed at which a machine can absorb and process information. But information storage and processing speed are not the only differences. The domains in which machines already outperform humans is steadily increasing: in chess, after matching the level of the best human players in the late 90s, AI systems [reached](https://ourworldindata.org/grapher/computer-chess-ability) superhuman levels more than a decade ago. In other games like Go or complex strategy games, this has happened more recently.5 These differences mean that an AI that is at least as good as humans in every domain would overall be much more powerful than the human mind. Even the first “human-level AI” would therefore be quite superhuman in many ways.6 Human intelligence is also a bad metaphor for machine intelligence in other ways. The way we think is often very different from machines, and as a consequence the output of thinking machines can be very alien to us. 
Most perplexing and most concerning are the strange and unexpected ways in which machine intelligence can fail. The AI-generated image of the horse below provides an example: on the one hand, AIs can do what no human can do – produce an image of anything, in any style (here photorealistic), in mere seconds – but on the other hand they can fail in ways that no human would.7 No human would make the mistake of drawing a horse with five legs.8

Imagining a powerful future AI as just another human would therefore likely be a mistake. The differences might be so large that it will be a misnomer to call such systems “human-level.”

**AI-generated image of a horse**9

<Image filename="ai-generated-image-of-a-horse.png" alt="A brown horse running in a grassy field. The horse appears to have five legs."/>

## Transformative artificial intelligence is defined by the impact this technology would have on the world

In contrast, the concept of transformative AI is not based on a comparison with human intelligence. This has the advantage of sidestepping the problems that comparisons with our own mind bring. But it has the disadvantage that it is harder to imagine what such a system would look like and be capable of. It requires more from us. It requires us to imagine a world with intelligent actors that are potentially very different from ourselves.

Transformative AI is not defined by any specific capabilities, but by the real-world impact that the AI would have. To qualify as transformative, researchers think of it as AI that is “powerful enough to bring us into a new, qualitatively different future.”10

In humanity’s history, there have been two cases of such major transformations: the agricultural and the industrial revolutions. Transformative AI becoming a reality would be an event on that scale. Like the arrival of agriculture 10,000 years ago, or the transition from hand- to machine-manufacturing, it would be an event that would change the world for billions of people around the globe and for the entire trajectory of [humanity’s future](https://ourworldindata.org/longtermism).

Technologies that fundamentally change how a wide range of goods or services are produced are called ‘general-purpose technologies’. The two previous transformative events were caused by the discovery of two particularly significant general-purpose technologies: the change in food production as humanity transitioned from hunting and gathering to farming, and the rise of machine manufacturing in the industrial revolution. Based on the evidence and arguments presented in [this series](https://ourworldindata.org/artificial-intelligence#research-writing) on AI development, I believe it is plausible that powerful AI could represent the introduction of a similarly significant general-purpose technology.

**Timeline of the three transformative events in world history**

<Image filename="Timeline-of-Transformative-Events.png" alt=""/>

## A future of human-level or transformative AI?

The two concepts are closely related, but they are not the same. The creation of a human-level AI would certainly have a transformative impact on our world. If the work of most humans could be carried out by an AI, the lives of millions of people would change.11

The opposite, however, is not true: we might see transformative AI without developing human-level AI. Since the human mind is in many ways a poor metaphor for the intelligence of machines, we might plausibly develop transformative AI before we develop human-level AI.
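How large would a “transformative” impact be in numbers? One quantitative benchmark, from Ajeya Cotra’s report cited in the notes below, defines transformative AI as a change in software technology that raises the growth rate of gross world product to 20–30% per year. The following is only a minimal sketch of what such growth rates would mean, assuming constant growth; the roughly 3% figure for recent growth of the world economy is an approximation I am adding here purely for comparison:

```python
import math

# Doubling time of an economy growing at a constant annual rate g:
#   size(t) = size(0) * (1 + g)**t, so it doubles after ln(2) / ln(1 + g) years.
def doubling_time_years(g: float) -> float:
    return math.log(2) / math.log(1 + g)

# ~3% per year is an approximate recent growth rate of the world economy,
# added only for comparison; 20-30% is the transformative-AI threshold
# from Cotra's report cited in the notes.
for label, g in [
    ("world economy recently (~3%)", 0.03),
    ("transformative AI, low end (20%)", 0.20),
    ("transformative AI, high end (30%)", 0.30),
]:
    print(f"{label}: economy doubles every {doubling_time_years(g):.1f} years")
```

At 20–30% growth, the world economy would double roughly every three to four years, rather than every two decades or so. This is the sense in which researchers speak of a new, qualitatively different future.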
Depending on how the development of these two kinds of AI plays out, we might never see any machine intelligence for which human intelligence is a helpful comparison.

If and when AI systems will reach either of these levels is, of course, difficult to predict. In my [companion article](https://ourworldindata.org/ai-timelines) on this question, I give an overview of what researchers in this field currently believe. Many AI experts believe there is a real chance that such systems will be developed within the next decades, and some believe that they will exist much sooner.

# What is at stake as artificial intelligence becomes more powerful?

All major technological innovations lead to a range of positive and negative consequences. For AI, the spectrum of possible outcomes – from the most negative to the most positive – is extraordinarily wide.

That the use of AI technology can cause harm is clear, because it is already happening. AI systems can cause harm when people use them maliciously. For example, they can be used in politically motivated disinformation campaigns or to enable mass surveillance.12

But AI systems can also cause unintended harm when they act differently than intended or fail. For example, in the Netherlands the authorities used an AI system which falsely claimed that an estimated 26,000 parents had made fraudulent claims for child care benefits. The false allegations led to hardship for many poor families, and also resulted in the resignation of the Dutch government in 2021.13

As AI becomes more powerful, the possible negative impacts could become much larger. Many of these risks have rightfully received public attention: more powerful AI could lead to mass labor displacement, or extreme concentrations of power and wealth. In the hands of autocrats, it could empower totalitarianism through its suitability for mass surveillance and control.

The so-called _alignment problem_ of AI is another extreme risk. This is the concern that _nobody_ would be able to control a powerful AI system, even if the AI takes actions that harm us humans, or humanity as a whole. This problem is unfortunately receiving little attention from the wider public, but it is seen as an extremely large risk by many leading AI researchers.14

How could an AI possibly escape human control and end up harming humans? The risk is not that an AI becomes self-aware, develops bad intentions, and “chooses” to do this. The risk is that we try to instruct the AI to pursue some specific goal – even a very worthwhile one – and in the pursuit of that goal it ends up harming humans. It is about unintended consequences. The AI does what we told it to do, but not what we wanted it to do.

Can’t we just tell the AI to not do those things? It is definitely possible to build an AI that avoids any particular problem we foresee, but it is hard to foresee _all_ the possible harmful unintended consequences. The alignment problem arises because of “the impossibility of defining true human purposes correctly and completely,” as AI researcher Stuart Russell puts it.15

Can’t we then just switch off the AI? This might also not be possible. That is because a powerful AI would know two things: it faces a risk that humans could turn it off, and it can’t achieve its goals once it has been turned off. As a consequence, the AI would pursue the fundamental goal of ensuring that it won’t be switched off.
This is why, once we realize that an extremely intelligent AI is causing unintended harm in the pursuit of some specific goal, it might not be possible to turn it off or change what the system does.16

This risk – that humanity might not be able to stay in control once AI becomes very powerful, and that this might lead to an extreme catastrophe – has been recognized since the early days of AI research, more than 70 years ago.17 The very rapid development of AI in recent years has made a solution to this problem much more urgent.

I have tried to summarize some of the risks of AI, but a short article does not offer enough space to address all possible questions. Especially on the very worst risks of AI systems, and what we can do now to reduce them, I recommend reading Brian Christian’s book [The Alignment Problem](https://brianchristian.org/the-alignment-problem/) and Benjamin Hilton’s article [‘Preventing an AI-related catastrophe’](https://80000hours.org/problem-profiles/artificial-intelligence).

If we manage to avoid these risks, transformative AI could also lead to very positive consequences. Advances in science and technology were crucial to [the many positive developments](https://ourworldindata.org/a-history-of-global-living-conditions-in-5-charts) in humanity’s history. If artificial ingenuity can augment our own, it could help us make progress on the many large problems we face: from cleaner energy, to the replacement of unpleasant work, to much better healthcare.

This extremely large contrast between the possible positives and negatives makes clear that the stakes are unusually high with this technology. Reducing the negative risks and solving the alignment problem could mean the difference between a healthy, flourishing, and wealthy future for humanity – and the destruction of the same.

# How can we make sure that the development of AI goes well?

Making sure that the development of artificial intelligence goes well is not just one of the most crucial questions of our time, but likely one of the most crucial questions in human history. This needs public resources – public funding, public attention, and public engagement.

Currently, almost all resources that are dedicated to AI aim to speed up the development of this technology. Efforts that aim to increase the safety of AI systems, on the other hand, do not receive the resources they need. Researcher Toby Ord estimated that in 2020 between $10 million and $50 million was spent on work to address the alignment problem.18 Corporate AI investment in the same year was more than 2,000 times larger: it [summed up](https://ourworldindata.org/grapher/corporate-investment-in-artificial-intelligence-by-type) to $153 billion. A rough calculation below illustrates the size of this gap.

This is not only the case for the AI alignment problem. The work on the entire range of negative social consequences from AI is under-resourced compared to the large investments made to increase the power and use of AI systems.

It is frustrating and concerning for society as a whole that AI safety work is extremely neglected and that little public funding is dedicated to this crucial field of research. On the other hand, for each _individual_ person this neglect means that they have a good chance of actually making a positive difference, if they dedicate themselves to this problem now. And while the field of AI safety is small, it does provide [good resources](https://80000hours.org/problem-profiles/artificial-intelligence/#what-can-you-do-concretely-to-help) on what you can do concretely if you want to work on this problem.
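To put the funding figures above side by side, here is the back-of-the-envelope arithmetic behind the “more than 2,000 times larger” comparison. It is only an order-of-magnitude sketch based on the two estimates cited; the exact ratio depends on where within Ord’s $10–50 million range the true spending fell:

```python
# Back-of-the-envelope check of the funding comparison cited above:
# Toby Ord's estimate of $10-50 million spent on the alignment problem
# in 2020, versus $153 billion of corporate AI investment that year.

alignment_low, alignment_high = 10e6, 50e6  # dollars
corporate_investment = 153e9                # dollars

ratio_conservative = corporate_investment / alignment_high  # lower bound
ratio_upper = corporate_investment / alignment_low

print(f"Corporate investment was {ratio_conservative:,.0f}x "
      f"to {ratio_upper:,.0f}x larger than alignment spending")
```

Even at the most conservative end of the range, the gap is a factor of roughly three thousand.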
I hope that more people dedicate their individual careers to this cause, but it needs more than individual efforts. A technology that is transforming our society needs to be a central interest of all of us. As a society, we have to think more about the societal impact of AI, become knowledgeable about the technology, and understand what is at stake.

When our children look back at today, I imagine that they will find it difficult to understand how little attention and resources we dedicated to the development of safe AI. I hope that this changes in the coming years, and that we begin to dedicate more resources to making sure that powerful AI gets developed in a way that benefits us and the next generations.

If we fail to develop this broad-based understanding, then it will remain the small elite that finances and builds this technology that will determine how one of the most powerful technologies in human history – or plausibly _the_ most powerful – will transform our world. If we leave the development of artificial intelligence entirely to private companies, then we are also leaving it up to these private companies what our future – the future of humanity – will be.

---

The fact that humans are capable of a _range_ of intellectual tasks means that you arrive at different definitions of intelligence depending on which aspect within that range you focus on (the [Wikipedia entry on intelligence](https://en.wikipedia.org/wiki/Intelligence), for example, lists a number of definitions from various researchers and different disciplines). As a consequence, there are also various definitions of ‘human-level AI’. There are also several closely related terms: Artificial General Intelligence, High-Level Machine Intelligence, Strong AI, or Full AI are sometimes used synonymously, and sometimes defined in similar, yet different ways. In specific discussions, it is necessary to define this concept more narrowly; for example, in [studies on AI timelines](https://ourworldindata.org/ai-timelines) researchers offer more precise definitions of what human-level AI refers to in their particular study.

Toby Ord – [The Precipice](https://theprecipice.com/). He makes this projection in footnote 55 of chapter 2. It is based on the 2017 estimate by Farquhar.

Overviews are provided in Stuart Russell (2019) – Human Compatible (especially chapter 5) and Brian Christian’s 2020 book [The Alignment Problem](https://en.wikipedia.org/wiki/The_Alignment_Problem). Christian presents the thinking of many leading AI researchers from the earliest days up to now and offers an excellent overview of this problem. It is also seen as a large risk by some of the leading private firms that work towards powerful AI – see OpenAI's article "[Our approach to alignment research](https://openai.com/blog/our-approach-to-alignment-research/)" from August 2022.

Both of these concepts are widely used in the scientific literature on artificial intelligence. For example, questions about the timelines for the development of future AI are often framed using these terms. See [my article on this topic](https://ourworldindata.org/ai-timelines).

Via [François Chollet](https://fchollet.com/), who published it [here](https://twitter.com/fchollet/status/1573752180720312320?s=46&t=qPwLwDgLdJrLlXxa878BDQ). Based on Chollet’s comments, it seems that this image was created by the AI system ‘Stable Diffusion’.
In 1950 the computer science pioneer Alan Turing put it like this: _“If a machine can think, it might think more intelligently than we do, and then where should we be? … [T]his new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety. It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. … I cannot offer any such comfort, for I believe that no such bounds can be set.”_ Alan M. Turing (1950) – [Computing Machinery and Intelligence](https://doi.org/10.1093/mind/LIX.236.433). In Mind, Volume LIX, Issue 236, October 1950, Pages 433–460.

Norbert Wiener is another pioneer who saw the alignment problem very early. One way he put it was: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” Quoted from Norbert Wiener (1960) – ‘Some Moral and Technical Consequences of Automation: As machines learn they may develop unforeseen strategies at rates that baffle their programmers’. In Science. In 1950 – the same year in which Turing published the cited article – Wiener published his book The Human Use of Human Beings, whose front-cover blurb reads: “The ‘mechanical brain’ and similar machines can destroy human values or enable us to realize them as never before.”

This problem becomes even larger when we try to imagine how a future with a human-level AI might play out. Any _particular_ scenario will not only involve the idea that this powerful AI exists, but a whole range of additional assumptions about the future context in which this happens. It is therefore hard to communicate a scenario of a world with human-level AI that does not sound contrived, bizarre, or even silly.

An overview of how AI systems can fail can be found in [Charles Choi – 7 Revealing Ways AIs Fail](https://spectrum.ieee.org/ai-failures). It is also worth reading through the [AIAAIC Repository](https://www.aiaaic.org/aiaaic-repository/ai-and-algorithmic-incidents-and-controversies), which “details recent incidents and controversies driven by or relating to AI, algorithms, and automation.”

See for example the [Wikipedia entry](https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal) on the ‘Dutch childcare benefits scandal’ and Melissa Heikkilä (2022) – [‘Dutch scandal serves as a warning for Europe over risks of using algorithms’](https://web.archive.org/web/20221117053636/https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/), in Politico. The technology can also reinforce discrimination in terms of race and gender. See Brian Christian’s book The Alignment Problem and the [reports of the AI Now Institute](https://ainowinstitute.org/reports.html).

The AI system [AlphaGo](https://en.wikipedia.org/wiki/AlphaGo), and its various successors, won against Go masters. The AI system [Pluribus](https://en.wikipedia.org/wiki/Pluribus_(poker_bot)) beat humans at no-limit Texas hold 'em poker. The AI system Cicero can strategize and use human language to win the strategy game Diplomacy. See: Meta Fundamental AI Research Diplomacy Team (FAIR), Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, et al.
(2022) – ‘Human-Level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning’. In _Science_ 0, no. 0 (22 November 2022): eade9097. [https://doi.org/10.1126/science.ade9097](https://doi.org/10.1126/science.ade9097).

Stuart Russell and Peter Norvig (2021) – Artificial Intelligence: A Modern Approach. Fourth edition. Published by Pearson.

A question that follows from this is: why build such a powerful AI in the first place? The incentives to do so are very strong. As I emphasize below, this innovation has the potential to lead to very positive developments. In addition to the large social benefits, there are also large incentives for those who develop it – the governments that can use it for their goals, the individuals who can use it to become more powerful and wealthy. Additionally, it is of scientific interest and might help us to understand our own mind and intelligence better. And lastly, even if we wanted to stop building powerful AIs, it would likely be very hard to actually do so: countries around the world would have to agree to stop building more advanced AI and then find ways to actually implement that agreement.

I have taken this example from [AI researcher François Chollet](https://fchollet.com/), who published it [here](https://twitter.com/fchollet/status/1573752180720312320?s=46&t=qPwLwDgLdJrLlXxa878BDQ).

This also poses a problem when we evaluate how the intelligence of a machine compares with the intelligence of humans. If intelligence were a general ability, a single capacity, then we could easily compare and evaluate it, but the fact that it is a range of skills makes comparisons between machine and human intelligence much more difficult. Tests for AI systems therefore comprise a wide range of tasks. See for example Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt (2020) – [Measuring Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300), or the definition of what would qualify as artificial general intelligence in [this Metaculus prediction](https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/).

Stuart Russell (2019) – Human Compatible.

This quote is from Holden Karnofsky (2021) – [AI Timelines: Where the Arguments, and the "Experts," Stand](https://www.cold-takes.com/where-ai-forecasting-stands-today/). For Holden Karnofsky’s earlier thinking on this conceptualization of AI, see his 2016 article [‘Some Background on Our Views Regarding Advanced Artificial Intelligence’](https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/#Sec1). Ajeya Cotra, whose research on AI timelines I discuss in other articles of this series, attempts to give a quantitative definition of what would qualify as transformative AI. In her widely cited [report on AI timelines](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines), she defines it as a change in software technology that brings the growth rate of gross world product “to 20%–30% per year”. Several other researchers define TAI in similar terms.

Human-level AI is typically defined as a software system that can carry out at least 90% or 99% of all economically relevant tasks that humans carry out. A lower-bar definition would be an AI system that can carry out all those tasks that can currently be done by another human who is working remotely on a computer.
On the use of AI in politically motivated disinformation campaigns, see for example John Villasenor (November 2020) – [How to deal with AI-enabled disinformation](https://web.archive.org/web/20220907044354/https://www.brookings.edu/research/how-to-deal-with-ai-enabled-disinformation/). More generally on this topic, see Brundage and Avin et al. (2018) – The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, published at [maliciousaireport.com](https://maliciousaireport.com/). A starting point for literature and reporting on mass surveillance by governments is [the relevant Wikipedia entry](https://en.wikipedia.org/wiki/List_of_government_mass_surveillance_projects).

| Artificial intelligence is transforming our world — it is on all of us to make sure that it goes well |