Imagine the year 2066. A genetically engineered virus, designed to spread infertility, threatens the very foundation of human survival: the ability to give birth. Nearly everyone is infected, the virus released by a sentient authority that has driven the few unaffected individuals into the corners of a burning city. In the middle of this seeming apocalypse, a child is born: a gaping flaw in the authority's AI roadmap, and now it hunts the newborn down. There is no bloodshed, only a microscopic virus that stops humans from reproducing. How long would we survive then?
Such, according to Prof. Stephen Hawking, is the kind of outcome the advance of new technologies could produce. There may be “new ways things can go wrong” in the survival saga of mankind, Hawking said while speaking at the BBC’s annual Reith Lectures on January 7. Commenting on progress in automation and artificial intelligence, he warned that our own advances will expose mankind to a growing number of threats, both natural and man-made. Such calamities and catastrophes may include nuclear warfare, runaway global warming and genetically engineered viruses. (Remember Children of Men?)
Hawking alarmingly stated that a “disaster” on Planet Earth is almost certain to occur within the next 1,000 to 10,000 years. That may not necessarily spell the end of humanity, because by then we may have found ourselves a new home. “However,” the Professor joked, “we will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period.”
Sounding this note of caution in the middle of a lecture on the nature of black holes, Hawking suggested that we recognise the dangers our technologies may pose, and take steps to control them. After all, progress for progress's sake must be kept in check, to ensure that the worst of it does not creep into our genes.