We’ve all seen the movies, and we all know the theory: the possibility that one day machines will become superintelligent, develop a consciousness of their own, and decide to enslave the human race, most likely resulting in a dystopian world.

“The Matrix” trilogy is the most mainstream manifestation of this theory in popular media, and “I, Robot” also made a splash when it was first released. Though it seemed entirely fictional a decade ago, the subject has once again become a popular topic of discussion.

It’s no surprise, since we have already developed primitive forms of artificial intelligence (Siri, anybody?) and devices we literally call “smart”. Yet popular talk of machines rising up against humanity, or of devices developing consciousness, rarely brings the actual theory to the forefront.

The term “singularity” in this context was first used by mathematician John von Neumann.

In 1958, summarizing a conversation with von Neumann, Stanislaw Ulam described the

“ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.

The complete term “technological singularity” was, in turn, first used by Vernor Vinge, one of the most prominent writers on the subject.

Theoretically, an intelligence explosion could happen one day, based on the principle of acceleration. The intelligence humans have created so far does not surpass the capabilities of the human brain, but a machine that is smarter than humans may one day exist.

Alternatively, intelligence enhancement and transhumanist methods might give the human brain capabilities beyond its biological design. Either way, such an intelligence would apply its problem-detecting and problem-solving skills to build another machine smarter than itself, and the cycle would repeat, each step faster than the last, until the growth in intelligence outruns anything we can foresee.
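To make the acceleration argument concrete, here is a minimal toy sketch in Python. It assumes, purely for illustration, that each machine designs a successor a fixed fraction more capable than itself; both the starting capability and the improvement factor are arbitrary numbers, not claims about real AI.

```python
# Toy model of recursive self-improvement -- an illustrative sketch only,
# built on arbitrary assumed numbers, not a prediction about real AI.
capability = 1.0          # assumed capability of the first human-level machine
improvement_factor = 0.5  # assumed gain each machine achieves over its designer

for generation in range(1, 11):
    # Each machine builds a successor better than itself, so gains compound.
    capability *= 1 + improvement_factor
    print(f"Generation {generation}: capability = {capability:.2f}")
```

After ten generations of even this modest 50% gain per step, capability has grown nearly sixty-fold; the point is simply that any self-reinforcing improvement, however small each step, compounds.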

Now, if that happens suddenly, it is very hard to predict what the world would look like after the event, or, for that matter, whether humanity itself would benefit from it, or even survive it. It must be noted, though, that the result probably won’t look like what we see in science fiction films.

Debates over the results of a technological singularity include what is known as the “technology paradox”.

In this scenario, machines advance so far that most jobs can be automated, driving unemployment and poverty to extreme levels; as a result, people cease developing technology, whether to fix this outcome or to prevent it. Another scenario, of course, concerns safety.

Isaac Asimov was the first to raise safety concerns about the relationship between humans and AI, which led him to develop his Three Laws of Robotics. And even if things never reach the point where machines actively seek to destroy or enslave humanity, issues might still arise once machines no longer require human handling.

It is certainly a stimulating issue to discuss and speculate upon. Researchers are exploring the possibilities of “friendly” AI, and a consensus seems to be emerging in scientific circles that technological singularity and superintelligence are no longer a question of “if” or “how”, but a matter of “when”.

