Hawking: AI May End Human Civilization Unless We Learn to Avoid the Dangers

Netease Technology News, April 27. At today's GMIC conference, thousands of attendees held their breath and listened quietly to a speech by the physicist and Cambridge University professor Stephen William Hawking. The speech ran for more than twenty minutes, with no gestures and no theatrics, yet everyone listened intently, afraid of missing any of Hawking's thoughts on what artificial intelligence means for humanity.

As Hawking has said before, the rise of artificial intelligence will be either the best thing or the worst thing ever to happen to humanity. He is still not sure which, but he believes we must do everything we can to ensure that its future development benefits us and our environment. Artificial intelligence is evolving rapidly, and Hawking worries about the consequences of creating something that can equal or surpass humans: once artificial intelligence breaks free of its constraints, it could redesign itself at an ever-accelerating rate. Humans, limited by slow biological evolution, would be unable to compete and would be superseded.

"Artificial intelligence may also be the end of human civilization, unless we learn how to avoid the dangers," Hawking said. "This talk of catastrophe may alarm everyone here. I am sorry."

Nonetheless, Hawking still believes humanity can unite, calling for international treaties or open letters to governments, so that technology leaders and scientists do everything they can to avoid the rise of uncontrollable artificial intelligence. In January 2015, Hawking, the technology entrepreneur Elon Musk, and many other AI experts signed an open letter on artificial intelligence calling for serious research into its impact on society.

"We stand at the threshold of a beautiful new world. It is an exciting, though uncertain, world, and you are the pioneers. I wish you well," Hawking said. (Cui Yuxian)

The following is the full text of Hawking's speech, "Let Artificial Intelligence Benefit Humanity and the Home It Depends On":

Over my lifetime I have witnessed profound changes in society. One of the most profound, and one whose impact on humanity is still growing, is the rise of artificial intelligence. In short, I believe the rise of powerful artificial intelligence will be either the best thing or the worst thing ever to happen to humanity. I have to say that we still cannot be sure which. But we should do everything we can to ensure that its future development benefits us and our environment. We have no other choice.

I believe the development of artificial intelligence is a trend that brings with it problems that must be addressed, now and in the future. Research into artificial intelligence is advancing rapidly. Perhaps we should all pause for a moment and redirect our research from simply making artificial intelligence more capable toward maximizing its benefit to society. With such considerations in mind, the American Association for Artificial Intelligence (AAAI) convened a presidential panel on the long-term future of artificial intelligence in 2008-2009; until then the field had concentrated largely on techniques that are neutral with respect to purpose. But our artificial intelligence systems must do what we want them to do. Interdisciplinary research is one possible way forward, ranging from economics, law, and philosophy to computer security, formal methods, and, of course, the various branches of artificial intelligence itself.
While the primitive forms of artificial intelligence developed so far have proved very useful, I am concerned about the consequences of creating something that can equal or surpass humans: once artificial intelligence breaks free of its constraints, it could redesign itself at an ever-accelerating rate. Humans, limited by slow biological evolution, would be unable to compete and would be superseded. This would also do great damage to our economy. In the future, artificial intelligence could develop a will of its own, a will in conflict with ours.

Although I am fundamentally optimistic about humanity, others believe that humans can control the pace of technology for long enough that we will see artificial intelligence solve most of the world's problems. I am not so sure. In January 2015, together with the technology entrepreneur Elon Musk and many other artificial intelligence experts, I signed an open letter on artificial intelligence whose purpose was to promote serious research into its impact on society. Before that, Elon Musk had warned that superhuman artificial intelligence could bring immeasurable benefits, but that, if deployed carelessly, it could have the opposite effect on humanity. He and I sit on the scientific advisory board of the Future of Life Institute, an organization that works to mitigate the risks facing humanity and which drafted the open letter mentioned above. The letter calls for direct research into the potential problems as well as the potential benefits of artificial intelligence, and it is intended to encourage artificial intelligence researchers to pay more attention to AI safety. For policy-makers and the general public, moreover, the letter is meant to be informative rather than alarmist. We think it is very important that everyone knows artificial intelligence researchers are thinking seriously about these concerns and ethical issues. For example, artificial intelligence has the potential to eradicate disease and poverty, but researchers must make sure they create artificial intelligence that can be controlled.

The open letter, only four paragraphs long and entitled "Research Priorities for Robust and Beneficial Artificial Intelligence," set out detailed research priorities in an accompanying twelve-page document. For the past twenty years, artificial intelligence has focused on the problems surrounding the construction of intelligent agents, that is, systems that perceive and act in a particular environment. In this context, intelligence is a rational notion related to statistics and economics; in plain terms, it is the ability to make good decisions, plans, and inferences. Building on this work, a great deal of integration and cross-fertilization has taken place among artificial intelligence, machine learning, statistics, control theory, neuroscience, and other fields.
The creation of a shared theoretical framework, combined with the availability of data and processing power, has brought remarkable success in a variety of component tasks: speech recognition, image classification, autonomous driving, machine translation, legged locomotion, and question-answering systems, for example. As these fields have developed, a virtuous cycle has formed from laboratory research to economically valuable technology: even small improvements in performance bring enormous economic benefits, which in turn encourage longer-term, larger investment in research. There is now broad agreement that research on artificial intelligence is advancing steadily and that its impact on society is likely to grow. The potential benefits are enormous; since everything our civilization has produced is a product of human intelligence, we cannot predict what we might achieve when that intelligence is amplified by the tools artificial intelligence provides. As I said, eradicating disease and poverty is not out of reach. Because of its enormous potential, it is important to study how to reap the benefits of artificial intelligence while avoiding its risks.

Research on artificial intelligence is now developing rapidly, and its implications can be discussed in both the short term and the long term. Some short-term concerns involve autonomous vehicles, from civilian drones to self-driving cars: in an emergency, for example, a self-driving car may have to choose between a low-probability major accident and a high-probability minor accident. Another worry is lethal autonomous weapons. Should they be banned? If so, how exactly should "autonomy" be defined? If not, who should be held accountable for misuse or malfunction? Other concerns include privacy, since artificial intelligence can gradually interpret vast amounts of surveillance data, and how to manage the economic impact of jobs displaced by artificial intelligence.

The long-term concern is chiefly the potential risk of artificial intelligence systems getting out of control: the rise of a superintelligence that does not act in accordance with human wishes, a powerful system that could threaten humanity. Is such a misaligned outcome possible? If so, how might it come about? What kind of research should we invest in so that we can better understand and address the possibility of a dangerous rise of superintelligence, or of an intelligence explosion? The current tools for steering artificial intelligence, such as reinforcement learning and simple utility functions, are not sufficient to solve this problem, so we need further research to find and confirm a reliable solution.

Recent milestones, such as the self-driving cars mentioned above and the program that won at Go, are signs of what is to come. Enormous investment is flowing into this technology. The achievements we have made so far will surely be dwarfed by what the coming decades may bring, and we cannot predict what we might achieve when our own minds are amplified by artificial intelligence. Perhaps, with the tools of this new technological revolution, we can undo some of the damage that industrialization has done to the natural world. Every aspect of our lives will be transformed. In short, success in creating artificial intelligence could be the biggest event in the history of human civilization. But artificial intelligence may also be the end of human civilization, unless we learn how to avoid the dangers.
I have said before that the unchecked development of artificial intelligence could spell the end of humanity, for example through the maximal use of intelligent autonomous weapons. Earlier this year, I joined scientists from around the world in supporting the United Nations conference on banning nuclear weapons, and we are anxiously awaiting the outcome of the negotiations. At present, nine nuclear powers control roughly 14,000 nuclear weapons, any one of which could raze a city and contaminate farmland over a wide area with radioactive fallout. The most terrible hazard is nuclear winter: fire and smoke could trigger a global mini ice age, collapsing the world's food systems and bringing apocalyptic turmoil that would likely kill most of the population. As scientists, we bear a special responsibility for nuclear weapons, because it was scientists who invented them and then discovered that their effects are even more terrible than first thought.

At this point, my talk of catastrophe may have frightened everyone here. I am sorry. But as attendees today, it is important that you recognize your own place in shaping the future research and development of today's technology. I believe that we can unite, calling for international treaties or open letters to governments, so that technology leaders and scientists do everything they can to avoid the rise of uncontrollable artificial intelligence.

Last October I established a new centre in Cambridge, England, to try to tackle some of the unresolved questions raised by the rapid pace of artificial intelligence research. The Leverhulme Centre for the Future of Intelligence is an interdisciplinary institute devoted to studying the future of intelligence, something crucial to the future of our civilization and our species. We spend a great deal of time studying history, which, let's face it, is mostly the history of stupidity, so it is a welcome change that people are instead studying the future of intelligence. Although we are aware of the potential dangers, I remain an optimist at heart, and I believe the potential benefits of creating intelligence are enormous. Perhaps, with the tools of this new technological revolution, we will be able to undo some of the damage that industrialization has done to the natural world. Every aspect of our lives will be changed. My colleague at the centre, Huw Price, has acknowledged that the Leverhulme Centre came about partly because the university had already established its Centre for the Study of Existential Risk; the latter examines potential threats to humanity much more broadly, while the Leverhulme Centre's focus is relatively narrow.

There have also been recent developments in the governance of artificial intelligence, including the European Parliament's call for a set of regulations to govern innovation in robotics and artificial intelligence. Somewhat surprisingly, this involves a form of electronic personhood, intended to assign rights and responsibilities to the most capable and advanced artificial intelligence. A spokesperson for the European Parliament commented that as more and more areas of daily life come to be affected by robots, we need to ensure that robots serve humanity, both now and in the future. The report submitted to the members of the European Parliament states plainly that the world stands on the cusp of a new industrial robot revolution. It analyses whether robots should be granted the status of electronic persons, equivalent in standing to legal persons, and whether that would even be feasible.
The report stresses that, at every stage, researchers and designers should ensure that every robot design includes a kill switch. In Kubrick's film 2001: A Space Odyssey, the malfunctioning supercomputer HAL refuses to let the astronauts back into the ship, but that was science fiction; what we have to deal with is fact. "We don't recognize whales and gorillas as having personhood, so there is no need to rush to accept robot personhood," said Lorna Brazell, a partner at the international law firm Osborne Clarke. But the worry remains. The report acknowledges that within a few decades artificial intelligence may surpass the range of human intelligence and so challenge the relationship between humans and machines. Finally, the report calls for the creation of a European agency for robotics and artificial intelligence to provide technical, ethical, and regulatory expertise. If the members of the European Parliament vote in favour, the report will go to the European Commission, which will decide within three months what legislative steps to take.

We should also play our part in making sure the next generation has not only the opportunity but also the determination to engage fully with science from an early stage, so that they can go on to fulfil their potential and help humanity create a better world. This is what I meant just now when I spoke about the importance of learning and education. We need to move beyond the theoretical discussion of how things ought to be and take action to ensure they have the chance to take part.

We stand at the threshold of a beautiful new world. It is an exciting, though uncertain, world, and you are the pioneers. I wish you well.

Thank you.