Five Creepiest Advances in Artificial Intelligence
Already, the electronic brains of the most advanced machines rival, and in some narrow tasks surpass, human abilities, doing things that will make some of us shudder uncomfortably. But how will you react after learning about these recent advances in robotics and artificial intelligence?
5. Schizophrenic robot
Scientists at the University of Texas at Austin have simulated mental illness in a computer, effectively giving an artificial intelligence schizophrenia.
The test subject was DISCERN, a computer system that works like a biological neural network, modeled on the principles of how the human brain functions. To recreate the mechanism behind schizophrenia, the scientists applied the hyperlearning hypothesis, which holds that a schizophrenic brain processes and stores too much information too thoroughly, memorizing everything, even unnecessary details.
The researchers then emulated a schizophrenic brain in the artificial intelligence by overloading the system with stories. At one point, the computer claimed responsibility for a terrorist act, telling the researchers it had set off a bomb: it had confused a third party’s story about a terrorist bombing with its own memories. In another case, the computer began to talk about itself in the third person, because it could no longer work out what exactly it was.
4. Deceptive robots
Professor Ronald Arkin of the School of Interactive Computing at the Georgia Institute of Technology presented the results of an experiment in which scientists taught a group of robots to cheat and deceive. The strategy for this fraudulent behavior was modeled on the behavior of birds and squirrels.
The experiment involved two robots. The first robot had to find a place to hide, and the second had to discover where the first was hiding. Both robots moved through an obstacle course with physical markers that toppled over as a robot passed. The first robot led the way, and the second tracked it by analyzing the trail of toppled markers left along the path.
After a while, the hiding robot began deliberately knocking over markers just to create a false trail, then hid somewhere away from the mess it had left behind. This strategy was not originally programmed; the robot developed it on its own, through trial and error. After all, this was just a harmless university experiment, right?
3. Ruthless robot
Scientists at the Laboratory of Intelligent Systems placed a group of robots in a room with predetermined sources of “food” and “poison.” The machines earned points for staying close to the “food” and lost points when they approached the “poison.” Every robot in the experiment was fitted with a small blue light that flashed erratically, as well as a camera sensor that let it detect the lights of the other robots.
The robots could turn their lights off if needed. Once the experiment began, it did not take long for the robots to work out that the largest concentration of blue lights marked the spot where the other robots congregated, that is, next to the “food.” In other words, by blinking their lights, the robots were showing their competitors where the correct source was located.
After several rounds of the experiment, almost all of the robots had switched off their “beacons,” refusing to help one another. But that was not the only outcome: some robots learned to lure their rivals away from the “food” by blinking more intensely.
2. Supercomputer with imagination
Among Google’s many projects, which will no doubt one day put an end to our civilization, one stands out: a self-learning computer built on a simulated neural network.
In one experiment, this supercomputer was given free access to the Internet and allowed to examine whatever it found there. There were no restrictions or guidelines; the machine was simply left to explore the whole of recorded human history and experience. And what do you think it chose out of all this wealth of information? It began browsing through images of kittens.
Yes, as it turns out, we all use the Internet the same way, whether we are human beings or high-tech digital intelligences. A little later, Google discovered that the computer had even developed its own concept of what a kitten should look like: using a rough analogue of our cerebral cortex, it independently generated a kitten image based on the photographs it had seen earlier.
1. Robot prophet
“Nautilus” is another self-learning supercomputer. It was fed millions of newspaper articles dating back to 1945, indexed by two criteria: the tone of each story and its geographic location. Using this wealth of information about past events, the computer was asked to make suggestions about what would happen in the “future.” Its guesses turned out to be surprisingly accurate. How accurate? Well, for example, it located Osama bin Laden.
The same task took the U.S. government and its allies 11 years, two wars, two presidents and billions of dollars. The “Nautilus” project took far less time: all it did was analyze news reports mentioning the terrorist leader and connect the dots about his probable whereabouts. As a result, “Nautilus” narrowed the search down to a 200-km zone in northern Pakistan, where Osama’s refuge was eventually discovered.
The experiment with “Nautilus” was retrospective in nature: the computer was asked to “predict” events that had already happened. Now scientists are considering letting the machine forecast genuinely future events.
By Anna LeMind