- Victor Callaghan, School of Computer and Electrical Engineering, University of Essex, UK.
- James Miller, Economics Faculty, Smith College, Northampton, MA, USA.
- Roman Yampolskiy, Department of Computer Engineering and Computer Science, University of Louisville, Kentucky, USA.
- Stuart Armstrong, Faculty of Philosophy, University of Oxford, UK.
- An edited volume in Springer-Verlag’s ‘The Frontiers Collection’
- ISSN 1612-3018 ISSN 2197-6619 (electronic)
- ISBN 978-3-662-54031-2 ISBN 978-3-662-54033-6 (eBook)
- DOI 10.1007/978-3-662-54033-6
- Library of Congress Control Number: 2016959969
- Springer-Verlag GmbH Germany, 2017
- Publisher’s Page for Book
- Amazon Page for Book
About the Book
In simple terms, the Technological Singularity may be regarded as the moment in time when artificial intelligence (i.e. machine intelligence) surpasses human intelligence. Clearly such an event would have momentous consequences for humanity. This volume contains a selection of authoritative essays exploring the central questions raised by the conjectured technological singularity. In informed yet jargon-free contributions written by active research scientists, philosophers and sociologists, it goes beyond philosophical discussion to provide a detailed account of the risks that the singularity poses to human society and, perhaps most usefully, the possible actions that society and technologists can take to manage the journey to any singularity in a way that ensures a positive rather than a negative impact on society. The discussions provide perspectives that cover technological, political and business issues. The aim is to bring clarity and rigour to the debate in a way that will inform and stimulate both experts and interested general readers.
Foreword: Prof. Kevin Warwick, Deputy Vice Chancellor (Research), Coventry University, Coventry, UK
- Introduction to the Technological Singularity – Stuart Armstrong, Future of Humanity Institute, Oxford University, UK (p1)
- Risks of the Journey to the Singularity – Kaj Sotala, Foundational Research Institute, Basel, Switzerland and Roman Yampolskiy, Dept Computer Engineering & Computer Science, University of Louisville (p11)
- Responses to the Journey to the Singularity – Kaj Sotala, Foundational Research Institute, Basel, Switzerland and Roman Yampolskiy, Dept Computer Engineering & Computer Science, University of Louisville (p25)
- How Change Agencies Can Affect Our Path Towards a Singularity – Ping Zheng, Business School, Canterbury Christ Church University and Mohammed-Asif Akhmad, Applied Intelligence Laboratories, BAE Systems Plc, Chelmsford, UK (p87)
- Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda – Nate Soares and Benya Fallenstein, both from Machine Intelligence Research Institute, Berkeley, USA (p103)
- Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process – Anthony M. Barrett and Seth D. Baum, both from Global Catastrophic Risk Institute, Washington, D.C., USA (p127)
- Diminishing Returns and Recursive Self Improving Artificial Intelligence – Andrew Majot and Roman Yampolskiy, both of the Department of Computer Engineering and Computer Science, University of Louisville, USA (p141)
- Energy, Complexity, and the Singularity – Kent A. Peacock, The Department of Philosophy, University of Lethbridge, Canada (p153)
- Computer Simulations as a Technological Singularity in the Empirical Sciences – Juan M. Durán, High Performance Computing Center, Stuttgart, Germany (p167)
- Can the Singularity Be Patented? (And Other IP Conundrums for Converging Technologies) – David Koepsell, Research and Strategic Initiatives, Comisión Nacional De Bioética, Mexico City, Mexico (p181)
- The Emotional Nature of Postcognitive Singularities – Jordi Vallverdú, Philosophy Department, Universitat Autònoma de Barcelona, Spain (p193)
- A Psychoanalytic Approach to the Singularity: Why We Cannot Do Without Auxiliary Constructions – Graham Clarke, Centre for Psychoanalytic Studies, University of Essex, UK (p209)
- Reflections on The Singularity Vision – James D. Miller, Department of Economics, Smith College, Northampton, Massachusetts, USA (p223)
- Singularity Blog Insights – James D. Miller, Department of Economics, Smith College, Northampton, Massachusetts, USA (p231)
Appendix: The Coming Technological Singularity: How to Survive in the Post-human Era – Vernor Vinge (p247)
Summary of Book Content
The book is divided into three main parts: Part I (Risks of, and Responses to, the Journey to the Singularity – Chapters 1–3), which provides an authoritative and academically referenced description of the Technological Singularity, the dangers it presents to society, and the main ways researchers have identified for minimising the risks while maximising the potential benefits of any singularity; Part II (Managing the Singularity Journey – Chapters 4–12), which presents a set of essays from leading researchers offering important insights into singularity issues; and finally Part III (Visions and Reflections on the Singularity Journey – Chapters 13 & 14), which starts by reflecting on Vernor Vinge’s seminal essay “The Coming Technological Singularity” (presented at the 1993 VISION-21 Symposium) before considering more contemporary thinking on the subject, taken from the most influential online blogs. The book opens with a Foreword written by the world-renowned cybernetics researcher, Prof Kevin Warwick, and concludes with Vernor Vinge’s seminal singularity essay.
Chapter 1 (Introduction to the Technological Singularity) introduces the term technological singularity, and analyses the varying and ambiguous ways it is used. It looks at the difficulty in predicting what would happen with “human comparable” artificial general intelligence (AGI), and what such an occurrence might mean for society. The track record for predictions by experts is poor. However, there are strong arguments implying that such an AGI could become extremely powerful without, necessarily, requiring the AGI to be “superintelligent”. The chapter also demonstrates that such an AGI has a significant chance of being dangerous to humanity as a whole. It argues that the difficulty of reasoning about this subject, and the uncertainty surrounding it, cannot be used as excuses to do nothing; indeed, a position that regards AGI as safe would amount to great overconfidence, far beyond what can be warranted by the evidence this chapter and book present.
Chapter 2 (Risks of the Journey to the Singularity) presents a rigorous literature review and critical discussion to examine the proposition that future artificial general intelligence (AGI) systems might eventually pose a significant risk to humanity, should they accumulate significant amounts of power and influence in society while being indifferent to human values. The accumulation of power might either happen gradually over time, or it might happen very rapidly (a so-called “hard takeoff”). Gradual accumulation would happen through normal economic mechanisms, as AGIs carry out an increasing share of economic tasks. A hard takeoff might be possible if a situation arose where AGIs required significantly less hardware to run than was available to them, or if they could redesign themselves to run at ever faster speeds with existing hardware, or if they could recursively redesign themselves into more intelligent versions of themselves. All these issues are addressed in this chapter.
Chapter 3 (Responses to the Journey to the Singularity) provides an in-depth survey of the various intellectual responses that have been made to the possibility of Artificial General Intelligence (AGI) posing a catastrophic risk to humanity. The chapter explains that recommendations given for dealing with the problem can be divided into proposals for societal action, external constraints, and internal constraints. Many of these recommendations suffer from serious problems, or seem to be of limited effectiveness. However, the chapter identifies a small number of recommendations that the authors feel are worthy of further study. The proposals can be grouped into short- and long-term activities. In the short term, these include ‘regulation‘, ‘merging with machines‘, ‘AGI confinement‘, and designing in ‘external control‘. In the long term, the most promising proposals are ‘value learning‘ and building AGIs to be ‘human-like‘.
Chapter 4 (How Change Agencies Can Affect Our Path Towards a Singularity) uses the perspective of ‘change agencies‘ to analyse how agents (from researchers through entrepreneurs to government) can determine the direction of future technologies and, especially, AGIs. The chapter discusses the general behaviour of change agents towards technological research from a social and economic perspective and argues that the interactions of key change agencies will determine the future path towards a ‘singularity event‘ or possibly an ‘anti-singularity event’. Thus the chapter argues that it is important to understand the fundamental behaviours and motivations of change agencies in technology development as this provides a mechanism to ensure that if, and when, a singularity occurs, it can be controlled and utilised for positive social and economic benefits.
Chapter 5 (Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda) discusses a plethora of technical approaches that AGI scientists believe might ensure that any singularity would have a positive impact. The chapter presents a technical agenda based on three broad categories of research which have good potential to ensure that AGI systems of the future will be reliably aligned with human interests: 1) Highly Reliable Agent Designs (how to ensure that the right system is built); 2) Error Tolerance (how to ensure that the inevitable flaws are manageable and correctable); 3) Value Specification (how to ensure that the system is pursuing the right sorts of objectives). Since little is known about the design or implementation details of such systems, the research described in this chapter focuses on formal agent foundations for AI alignment research (i.e. conceptual tools and theory).
Chapter 6 (Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process) surveys established methodologies for risk analysis and risk management in relation to AGI risk. The approach presented models the sequences of steps that could result in AGI catastrophe, using a methodology based on fault trees and event trees, and solicits inputs from experts on various parts of the model. The chapter advances the view that AGI risk management has two broad approaches: 1) make AGI technology safer, and 2) manage the human process of AGI R&D. The chapter argues that risk analysis, and the related field of decision analysis, can help make better AGI risk management decisions by identifying the most cost-effective options (i.e. how best to balance AGI risk against cost).
Chapter 7 (Diminishing Returns and Recursive Self Improving Artificial Intelligence) examines how an artificial general intelligence (AGI) might be able to improve itself, through a process called recursive self-improvement (which might lead to a singularity). The chapter explains that such an AGI would have access to its own source code and possibly even its hardware, with the ability to edit both at will. Through constantly improving upon itself, the AGI could evolve into an entity more intelligent than humans, thereby reaching the technological singularity. The authors argue that there may be natural limits on the ability of an AI to improve upon itself, and that the law of diminishing returns will take effect to limit runaway intelligence. Furthermore, they theorize that the design and production cycle for creating new hardware will introduce latency into AGI improvement, which could be exploited to halt any dangerous situation.
Chapter 8 (Energy, Complexity, and the Singularity) explores the relevance of ecological limitations such as climate change and resource exhaustion to the possibility of a technologically mediated “intelligence explosion” in the near future. The chapter explains that the imminent risks of global carbonization and loss of biodiversity, as well as the dependency of technological development on a healthy biosphere, are greatly underestimated by singularity theorists such as Ray Kurzweil. While the chapter argues that development of information technology should continue, it makes the point that we cannot rely on hypothetical advances in AI to get us out of our present ecological bottleneck. The solution offered by the author is that we should do everything we can to foster human ingenuity, the one factor that has a record of generating the game-changing innovations that our species has relied upon to overcome survival challenges in the past.
Chapter 9 (Computer Simulations as a Technological Singularity in the Empirical Sciences) discusses the conditions necessary for computer simulations to qualify as a technological singularity in the empirical sciences. The author explains that for computer simulations to be a technological singularity, they must fulfill two important measures of a technological singularity: that the computer-based technology has led to (a) the enhancement of human cognitive capacities, and (b) people’s displacement from the centre of the production of knowledge. The author argues that point (a) is relatively unproblematic, whereas to fulfill the criteria of point (b) it is necessary to establish the reliability of computer simulations (i.e. that most of the time they render valid results). Establishing reliability means that simulations should accurately represent the target system and carry out error-free computations, which the chapter seeks to do through the use of verification and validation methods.
Chapter 10 (Can the Singularity Be Patented?) discusses problems that may arise with “intellectual property” (IP – namely, copyrights and patents) associated with AGIs. The author explains how the nature and trajectory of converging AGI technologies means that IP laws, as they currently exist, may impede the development of AGI. The chapter cites examples of “patent thickets” that appear to impede other rapidly evolving technologies, such as those in the smartphone arena. The author goes on to argue that patents and copyrights may pose even more intriguing problems once the singularity is achieved, because our notions of who may own what will likely radically change. The author poses questions such as “will AGIs compete with us over rights to create“ and “will we be legally or morally precluded from ownership rights in technologies that make such agents function“. This chapter discusses some of these legal conundrums.
Chapter 11 (The Emotional Nature of Postcognitive Singularities) explores the proposition that the huge amounts of data that AGIs will deal with will introduce a new level of cognition the author labels postcognitive. He argues that such AGIs may follow multiple strategies and be controlled by emotional mechanisms. He argues that these factors will combine to make an AGI, endowed with some emotional capabilities, ‘feel’ totally different information about the world, thinking differently following new sets of what he calls paraemotions. He explains that the content of these emotions is not clear as, not only will AGIs collect more information, but the ways they process and feel it will change radically. He argues that, should this situation occur, there will be new social patterns, still to be defined (i.e. it is unclear how interaction between, or with, these AGIs will proceed).
Chapter 12 (A Psychoanalytic Approach to the Singularity) takes the view that psychoanalysis can offer singularity researchers useful insights. The author starts from the premise that the human condition is beset with disappointments (e.g. sickness, accident, unfair breaks, bad luck, death etc). He explains that to deal with these we frequently phantasise, hoping these dreams might come true. He notes that the singularity offers the possibility of overcoming the biggest sadness we face, the loss of loved ones through death (by achieving a sort of immortality via technology). Furthermore, he observes that some of Kurzweil’s discussion of the singularity is concerned with the possibility of ‘resurrecting’ his dead father, in virtual space at least. This chapter argues that, consistently throughout the writings on the singularity, there is a dismissal of the emotional aspect of human living in favour of the rational overcoming of our existential condition. His central hypothesis is that we ignore emotional consciousness, which has been the bedrock of human existence to date, at our peril.
Chapter 13 (Reflections on Singularity Visions) discusses Vernor Vinge’s seminal 1993 essay/presentation “The Coming Technological Singularity” (reproduced in the appendix of this book) and four contemporary articles from the blogosphere. It asks various questions about these visions, such as why Vinge’s original essay didn’t convince most technologically literate people that a singularity is near. The chapter suggests various answers: perhaps the exponential, counter-intuitive nature of the Singularity made it too difficult to visualize, or possibly the superficial absurdity of a singularity prevents most people from giving the concept enough serious consideration. It also questions whether the lack of a clear timeline for the unfolding of a singularity might have caused many to think the idea unfalsifiable, or whether the fact that it won’t take place for a long time meant that an optimal allocation of attention would ignore it in favour of more immediate concerns. In this way the chapter seeks to introduce the reader to the key issues that are implicit in the visions presented in this part of the book.
Chapter 14 (Singularity Blog Insights) presents four articles from the blogosphere. In the first, Eliezer Yudkowsky explains three commonly used concepts relating to the Singularity: Accelerating Change (where the Singularity occurs because of exponential improvements in technology), the Event Horizon (where technology will improve to the point where a human-level intelligence can’t predict what will happen), and the Intelligence Explosion (in which a self-improving artificial intelligence quickly brings about a singularity). In the second article, Stuart Armstrong, one of the editors of this book, analyses many past predictions of AGI development with the aim of casting some light on when a singularity might occur. In the third article Scott Siskind discusses reasons why we shouldn’t wait to research AGI safety. In the final entry, Scott Aaronson explains why he does not think that a singularity is near.
The Appendix contains a reprint of Vernor Vinge’s seminal 1993 essay “The Coming Technological Singularity: How to Survive in the Post-human Era“.
The original call for chapters can be viewed on this page.