Zhang Shixiu, vice director of the China Planning and Research Institute of Building Materials, sent me a text message: "Sohu News: Oxford scientists have concluded from their research that humanity could become extinct as early as the next century, and that high technology will be the culprit. This is completely consistent with your conclusion! Congratulations! Your views have been validated by international theoretical achievements. I hope you continue with your research and keep advocating your views."
I immediately searched the Internet and found the following report:
"A study team at the Future of Humanity Institute of Oxford University said recently that humanity could become extinct as early as the next century, and that high technology would be the culprit. The team consists of a great number of mathematicians, philosophers and scientists. Bostrom, director of the Future of Humanity Institute, said, 'There is a great race on between humanity's technological powers and our wisdom to use those powers well'; to rein in its technological powers, humankind needs sufficient wisdom. But 'I'm worried that the former will pull too far ahead'.
The study team further pointed out that human extinction was not something far off. If human beings failed to face up to this issue, the doomsday of humanity could arrive as early as the next century."
In retrospect, I have been engaged in the study of human issues for 34 years. "The continuous development of science and technology will exterminate humanity within two or three hundred years at the most, or within this century at the earliest." This is one of my most important conclusions. In July 2007, the sale of my book Saving Humanity was halted for unexplained reasons just two days after its publication. After that, I published a series of books and articles and sought opportunities to give speeches and presentations on a variety of occasions. I wrote two letters to the President of the PRC and two to the President of the United States, as well as letters to other world leaders, including the Russian President, the French President, the UK Prime Minister and the UN Secretary-General, to elaborate my views. I asked them to come forward and shoulder the sacred mission of saving humanity, because only the leaders of the great powers have the ability to do so.
Not a single one of my letters received a reply. The overwhelming majority of comments dismissed my appeal as sensationalist nonsense; some even ridiculed me. Quite unexpectedly, the findings of an authoritative research institute turned out to be fully consistent with mine.
What a bosom friend! I had very mixed feelings, and I thought I had to do something.
First of all, I wrote a letter to Bostrom immediately; I hoped to arrange a face-to-face meeting with him, and ideally for us to jointly set up a research institute. At the same time, I called Wang Yong, editor-in-chief of the magazine Global Business, as I wanted to write an article on this topic; he said he could spare a feature column for me. Wang was one of the few friends who supported my views.
Then I called another friend, the president of a well-known TV station, hoping to air my views through his channel. But he was somewhat hesitant; he suggested I put it this way: "Science and technology must be developed rationally and with good intentions. For instance, nuclear weapons in the wrong hands could destroy humankind; some sinister people research toxic food additives, which is harmful. Therefore, the development of science and technology with bad intentions will destroy humanity."
I frankly expressed my different view: "Science and technology will not merely harm humanity; it will exterminate humanity. Extermination means no one survives. Moreover, it is not only malicious research that threatens us; the development of science and technology, whether well-intentioned or not, will exterminate humanity before long. If we do not take immediate measures, humanity will go extinct within the next two or three hundred years, and extinction within this century is not impossible."
My friend considered this sensationalist and feared the audience would find it difficult to accept. But I insisted that the conclusion was scientific, and that it was the truth.
The reasons why the continued development of science and technology will exterminate humanity are as follows:
Firstly, science and technology has the ability to exterminate humanity.
Science and technology is a double-edged sword: it can destroy humanity even as it brings us benefits, and the greater its power to benefit humankind, the greater its power to destroy. As science and technology develops to ever higher levels, it will eventually acquire the ability to exterminate humanity; that is, means of extinction will inevitably emerge. Compared to future intelligent robots, nanobots and other high technologies, today's nuclear weapons are hardly worth mentioning.
Take intelligent robots as an example: their ability to exterminate human beings is beyond question.
Scientists invented machines such as planes, trains and ships to replace manual labor, enhancing humankind's physical abilities tens of thousands of times. Scientists invented computers to calculate in place of human brains, and today's computers can exceed one quadrillion operations per second, multiplying humankind's computing power a quadrillion-fold. Humans also invented killing methods to replace their fists and teeth, creating nuclear weapons; the explosive power of a nuclear weapon can reach ten million tons of TNT equivalent, enhancing humankind's capacity for slaughter ten million times.
Now consider: intelligent robots are machines able to think in complex ways, as human beings do. Once intelligent robots gain this capacity for complex thought, they will evolve rapidly, and their ability to think will outstrip humanity's by thousands, millions or billions of times. Hugo de Garis, known as the father of intelligent robots, has a vivid metaphor: to an intelligent robot, humankind's reaction speed is slower than stone weathering; compared to intelligent robots, human intelligence is almost zero.
In nature, it has always been the rule that a species of higher intelligence scorns, or even devours, a species of lower intelligence; the latter tends to be treated as food or as a plaything. From the moment intelligent robots surpass human beings, humanity is finished.
Some scientists also worry that nanobots will exterminate humanity.
A nanobot is a molecular-sized robot designed specifically to transport atoms in order to achieve human goals. For instance, nanobots could be sent into human blood vessels to remove cholesterol deposited there; they could track cancer cells in the body and kill them while they are still few in number; they could instantly turn grass clippings from a lawn into bread, or recycled scrap steel into rows of brand-new luxury sedans, and so on. In short, nanotechnology is expected to have a wonderful future.
But there is a problem: manufacturing a nanobot is very expensive compared to the value it can create. Nanobots are simply too tiny; although their work can be quite significant, their efficiency is very low. Even if a nanobot works hard without ceasing, its daily output must be counted atom by atom. Even if it does an enormous amount of work, e.g. transporting hundreds of millions of atoms, the total effect amounts to no more than a needle tip.
To address this, scientists have devised a solution: give two orders at once when programming a nanobot. The first order is, of course, the work the nanobot must complete; the second commands the nanobot to reproduce itself in large numbers, so that many nanobots can complete the work together. Since nanobots can transport atoms, and a nanobot is made up of only a few atoms, such reproduction will be very easy. In this way, if one nanobot can reproduce ten, then ten can reproduce one hundred, and one hundred can reproduce one thousand ... Thousands of trillions of nanobots can thus be produced in a very short time. Once the first nanobot is created, the job is essentially done: the millions upon millions of nanobots it spawns will complete the ordered work together with it.
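The replication arithmetic above is simple exponential growth, which is why the numbers escalate so quickly. A minimal sketch (the ten-fold factor per generation and the quadrillion-scale target are the text's own illustrative figures):

```python
# Illustrative sketch of the exponential self-replication described above.
# Each generation multiplies the population ten-fold (the text's example factor).
def generations_to_reach(target, factor=10):
    """Count generations until one self-replicating bot yields at least `target` copies."""
    count, generations = 1, 0
    while count < target:
        count *= factor
        generations += 1
    return generations

# Reaching "thousands of trillions" (10**15) takes only 15 ten-fold generations.
print(generations_to_reach(10**15))  # → 15
```

The point the sketch makes concrete: the slowness of any single nanobot is irrelevant, because the generation count, not the per-bot output, dominates.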
However, there is a troubling problem: what do we do if the nanobots keep reproducing without end? Our bodies, and the Earth itself, are made up of atoms. If nanobots use the atoms in our bodies as raw material, our bodies will be eaten up in short order. If they never stop reproducing, the entire Earth will soon be eaten up in the same way. And if these nanobots are carried to other planets by cosmic dust, those planets will be devoured as well. This is an extremely horrifying prospect.
Some scientists, however, are confident that such a disaster can be controlled. They believe they can design a program that makes nanobots destroy themselves after several generations of reproduction, or design nanobots that self-replicate only under certain conditions. For example, nanobots built for rubbish transformation would reproduce only in a garbage environment and only with rubbish as their material, never in other environments or with other materials.
The idea is good, but too idealistic, and some more cautious scientists have begun to question it. They raise the following questions: what if a nanobot's program fails and it does not terminate its reproduction? What if a scientist simply forgets to include the reproduction-control routine when programming? What if a conscienceless or psychologically abnormal scientist deliberately omits the control program when designing nanobots, in order to endanger humanity and the Earth? If any of these possibilities occurs even once, humanity will certainly be exterminated and the Earth destroyed. Anyone can grasp the principle: one locust does not matter, but hundreds of millions of locusts can destroy everything.
In fact, humanity does not have far to go before fully mastering these technologies, whether intelligent robots or nanobots. And it is certain that things far more powerful than intelligent robots and nanobots will follow. As long as science and technology keeps developing and human beings have not yet become extinct, such things will one day be created.
It has been proposed that we organize top scientists to vet every high technology. Setting aside the feasibility of this proposal, even if it could be carried out, a thorough check would still be impossible, because unpredictability is an inherent characteristic of science and technology. Einstein and Newton made many mistakes in their research, and not every scientist is an Einstein or a Newton.
Freon was considered benign, yet it destroys the ozone layer; DDT was deemed safe, yet it devastates biodiversity. As for intelligent robots, people develop them to benefit humankind, but once their intelligence exceeds ours, the situation will inevitably slip out of control. Some say that since humans can create such a thing, we can certainly control it. That is a foolish assumption. Even conventional machines sometimes run out of control, let alone intelligent robots with IQs far higher than humanity's! Nanobots may run out of control for the same reason.
I remember that when the 2011 earthquake in Japan caused the nuclear leak at the Fukushima power plant, quite a few friends who had previously opposed my views contacted me; they had begun to accept my point that it is impossible to prevent science and technology from causing disasters.
Then, how far away is human extinction by science and technology?
We know that science and technology was still at a very low level in the mid-18th century, before the Industrial Revolution. After only two hundred years, it has become so advanced that a single nuclear bomb can destroy a city of millions, and biological toxins engineered with transgenic technology may be even more terrible than nuclear weapons. Development continues from this already high level, and we can imagine what the world will be like in fifty or one hundred years. Moreover, science and technology develops along a path of fission-like acceleration: today's progress is much faster than the past's, and the future's will be much faster than today's.
In fact, people's concerns about the safety of many research projects today are no longer about how much harm they might cause; people worry instead that the research might lead to human extinction. Scientists once feared that atomic and hydrogen bomb tests would ignite the atmosphere and cause human extinction; they later feared the same of the Hadron Collider experiments in Europe. Of course, those concerns proved unnecessary. Today, scientists worry in turn that runaway reproduction of nanobots, or intelligent robots slipping out of control, will lead to human extinction. Yet these studies continue, and no one can stop them. Think about it rationally: research that you consider potentially extinction-causing, and resolutely oppose, I consider safe and push forward fearlessly; other research that I oppose, you consider safe, and I cannot stop you either. Thus science and technology moves forward, forward and forward, madly and out of control. But unpredictability is an inherent characteristic of science and technology, and the more advanced a technology, the harder it is to predict accurately. Simple reasoning tells us that luck cannot keep us safe forever, and that one day the pessimists' predictions will come true. One who often walks by the riverside will eventually get his shoes wet; one who always treks at night will sooner or later meet a ghost. As irrational behavior pushes science and technology to ever higher levels, the moment we bump into a "ghost", the story of humanity will come to an end.
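The claim that "luck cannot keep us safe forever" can be put in elementary probabilistic terms: even a tiny per-experiment chance of catastrophe compounds across many experiments. A minimal sketch, where the 0.1% per-experiment risk figure is purely hypothetical and chosen only for illustration:

```python
# Hypothetical illustration: a small per-experiment catastrophe risk compounds.
# p = 0.001 is an invented 0.1% chance of catastrophe per risky experiment.
p = 0.001

for n in (100, 1_000, 5_000):
    survival = (1 - p) ** n  # probability that no catastrophe occurs in n trials
    print(f"after {n} experiments, survival probability = {survival:.3f}")
```

Whatever the true per-experiment risk, as long as it is nonzero and experiments keep accumulating, the survival probability drifts toward zero; that is the arithmetic behind the riverside proverb.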
In fact, science and technology has developed so rapidly that people have grown ever more blasé about new achievements. In the early 19th century, when photography was invented, one had to sit still in the sun for hours to have a photo taken; even so, curious people were willing to try. When the X-ray was discovered, people were astonished; everyone, nobles and aristocrats included, wanted a look at the internal structure of their own bodies. When the electric lamp was still in the experimental stage, reporters were dumbfounded. Today, by contrast, hardly any invention or new scientific finding arouses amazement or sensation. Numbness toward development inevitably leads to numbness toward crisis. When people accept every scientific and technological achievement without a thought, they will undoubtedly become numb to its negative effects. But disasters always arise out of numbness: before monstrous waves billow, the sea surface often appears tranquil while undercurrents surge below. When the whole society has become insensitive, an extinction-level disaster may be just around the corner.
Unfortunately, humans remain in the dark, severely underestimating the destructive power of science and technology. Even the most distinguished politicians and scientists have not paid sufficient attention to this issue; substantial precautions remain far from any agenda. Human elites are content with the indulgent enjoyment that science brings, and the measures taken against its hazards tend to be fragmented and shallow. As for the extinction-level disaster that science will certainly bring, not a single one of the world's most powerful leaders is substantially aware of it.
In fact, there is not much time left for humanity!
Even with the above reasoning, perhaps I still could not persuade my friend the TV station president. But now that an authoritative institution like the Future of Humanity Institute of Oxford University has drawn exactly the same conclusion as mine, I can see a silver lining. I believe more people will pay attention to this question of humanity's survival or extinction; as long as the world comes to broadly share these views, we can find a way out. Perhaps Director Bostrom will not accept my invitation, but I will still regard him and his institute as my bosom friends, because we share a common concern for the fate of humanity.