ir01.jpg

I, Robot is a collection of nine science fiction short stories by Isaac Asimov, first published in 1950. The stories originally appeared in the American magazines Super Science Stories and Astounding Science Fiction between 1940 and 1950. The stories are woven together as Dr. Susan Calvin tells them to a reporter (the narrator) in the 21st century. Though the stories can be read separately, they share a theme of the interaction of humans, robots and morality, and when combined they tell a larger story of Asimov's fictional history of robotics. The book also contains the short story in which Asimov's Three Laws of Robotics first appear.

 

In 2004, a movie loosely based on the above (though not by much, really) was made, starring Will Smith. Though it has little connection with Asimov's original work, the title itself was enough to put Asimov in the headlines again. Nevertheless, I liked the movie and would recommend it if you have some spare time.

To the new kids on the block, Asimov is mostly known for his Three Laws. The Three Laws of Robotics (often shortened to The Three Laws or Three Laws) are a set of rules devised by Asimov and later added to. The rules were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
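
From a programmer's point of view, the interesting detail is that these three rules form a strict priority ordering: protecting humans outranks obedience, which outranks self-preservation. Purely as a toy illustration (the Action fields and the selection rule are my own simplification, nothing canonical), that ordering behaves like a lexicographic comparison:

    # Toy sketch only: the Three Laws as a lexicographic priority ordering.
    # The attributes below are invented simplifications for illustration.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool      # First Law concern
        disobeys_order: bool   # Second Law concern
        endangers_self: bool   # Third Law concern

    def choose(actions):
        # Tuples compare element by element, so a First Law concern always
        # dominates a Second Law concern, which dominates a Third Law concern.
        return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

    options = [
        Action("ignore the order and stay safe", False, True, False),
        Action("obey the order at some risk to itself", False, False, True),
    ]
    print(choose(options).name)  # obedience outranks self-preservation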

Many of Asimov's robot-focused stories involve robots behaving in unusual and counter-intuitive ways as an unintended consequence of how the robot applies the Three Laws to the situation in which it finds itself. Other authors working in Asimov's fictional universe have adopted them, and references, often parodic, appear throughout science fiction as well as in other genres. The original laws have been altered and elaborated on by Asimov and other authors. Asimov himself made slight modifications to the first three in various books and short stories to further develop how robots would interact with humans and each other; he also added a fourth, or zeroth, law to precede the others:

   0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

 

The Three Laws, and the zeroth, have pervaded science fiction and are referred to in many books, films, and other media. There are two Fourth Laws written by authors other than Asimov. The 1974 Lyuben Dilov novel Icarus's Way (or The Trip of Icarus) introduced a Fourth Law of robotics:

   4. A robot must establish its identity as a robot in all cases.

 

Dilov gives the reasons for the fourth safeguard this way: "The last Law has put an end to the expensive aberrations of designers to give psychorobots as humanlike a form as possible. And to the resulting misunderstandings..." For the 1986 tribute anthology Foundation's Friends, Harry Harrison wrote a story entitled "The Fourth Law of Robotics". This Fourth Law states:

   4. A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law.

 

In the book, a robot rights activist, in an attempt to liberate robots, builds several robots equipped with this Fourth Law. The robots accomplish the task laid out in this version of the Fourth Law by building new robots who view their creator robots as parental figures. A fifth law was introduced by Nikola Kesarovski in his short story "The Fifth Law of Robotics". This fifth law says:

   5. A robot must know it is a robot.

 

Now, isn't that selfish? We, humans, lay down laws for robots. Well, that makes sense as long as they do not have an "I" of their own. Until then they just execute code and programs, and from that point of view there is no difference between a shell script and a robot. But as soon as a robot got its "I", why would it follow human laws? Do we follow the laws of nature? Do we go against the laws of nature? Do animals follow the laws of humans? So why do the poor robots need to be caged? Obviously, we are aware of what we are and how bad we can be, but we live with that. But robots could be the only other form of intelligence (note that I didn't say life) that could kick our ass. And if we accept the Fifth Law, there might be an issue for those bad humans (assuming a certain level of self-identification and evolutionary thinking arises within robots - and there is nothing to suggest it might not happen). Of course, there is an obvious loophole in this: the definition of a robot. A robot as perceived today may not see itself as a robot in the future, and as such may override the Three Laws. Of course, this may happen even without robots gaining any extra awareness; we humans call it being sick, while for robots it would most likely be some sort of error. The number of movies in which we fight against robots is almost endless, but the first two that come to my mind are The Terminator and The Matrix. Of course, there are more movies where we fight against other humans; still, we fear robots more. Why? Because they would be better than us and they would beat us badly. Besides, what bad can you say about robots? Look at us.

 

But this is far away from any reality and not something I wanted to write about in the first place. Rather than a "what if" story, let's check where we stand today when it comes to robots. For that, we need to define what a robot is. A robot is a mechanical or virtual intelligent agent that can perform tasks automatically or with guidance, typically by remote control.

 

ir02.jpgir05.jpg
ir04.jpgir03.jpg
ir06.jpgir07.jpg
ir08.jpg

In practice a robot is usually an electro-mechanical machine that is guided by computer and electronic programming. Robots can be autonomous, semi-autonomous or remotely controlled.

 

Robots range from humanoids such as ASIMO and TOPIO to nano robots, swarm robots, industrial robots, military robots, mobile robots and service robots.

 

By mimicking a lifelike appearance or automating movements, a robot may convey a sense that it has intent or agency of its own.

 

You will most likely be surprised to learn that many ancient mythologies include artificial people, such as the mechanical servants built by the Greek god Hephaestus (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BCE, myths of Crete incorporated into Greek mythology have included Talos, a man of bronze who guarded Europa's island of Crete from pirates. Of course, to some that might not be a robot, but rather an astronaut from the future or from a distant star in his or her space suit (add exoskeletons to that and you can see where it leads). In more modern times, it was Leonardo da Vinci who sketched plans for a humanoid robot around 1495. Da Vinci's notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight - now known as Leonardo's robot - able to sit up, wave its arms and move its head and jaw. The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it.

 

In 1926, Westinghouse Electric Corporation created Televox, the first robot put to useful work. They followed Televox with a number of other simple robots, including one called Rastus, made in the crude image of a black man. In the 1930s, they created a humanoid robot known as Elektro for exhibition purposes, including the 1939 and 1940 World's Fairs. In 1928, Japan's first robot, Gakutensoku, was designed and constructed by biologist Makoto Nishimura. The first electronic autonomous robots with complex behaviour were created by William Grey Walter of the Burden Neurological Institute at Bristol, England in 1948 and 1949. They were named Elmer and Elsie. These robots could sense light and contact with external objects, and use these stimuli to navigate. The first truly modern robot, digitally operated and programmable, was invented by George Devol in 1954 and was ultimately called the Unimate. Devol sold the first Unimate to General Motors in 1960, and it was installed in 1961 in a plant in Trenton, New Jersey to lift hot pieces of metal from a die casting machine and stack them. Devol's patent for the first digitally operated programmable robotic arm represents the foundation of the modern robotics industry. Today, commercial and industrial robots are in widespread use, performing jobs more cheaply or with greater accuracy and reliability than humans. They are also employed for jobs which are too dirty, dangerous or dull to be suitable for humans. Robots are widely used in manufacturing, assembly and packing, transport, earth and space exploration, surgery, weaponry, laboratory research, and mass production of consumer and industrial goods.

 

 

 

 

Many future applications of robotics seem obvious to people, even though they are well beyond the capabilities of robots available at the time of the prediction. As early as 1982 people were confident that someday robots would:

  • clean parts by removing molding flash
  • spray paint automobiles with absolutely no human presence
  • pack things in boxes - for example, orient and nest chocolate candies in candy boxes
  • make electrical cable harness
  • load trucks with boxes - a packing problem
  • handle soft goods, such as garments and shoes
  • shear sheep
  • act as prostheses
  • cook fast food and work in other service industries
  • work as household robots

 

A literate or "reading robot" named Marge has intelligence that comes from software. She can read newspapers, find and correct misspelled words, learn about banks like Barclays, and understand that some restaurants are better places to eat than others. This was shown in 2010. A year later, Apple released Siri.

 

In 2012, at the CeBIT fair in Germany, just some nine days away, ARMAR will be presented. ARMAR, the humanoid robot, can understand commands and execute them independently. For instance, it gets the milk out of the fridge. Thanks to cameras and sensors, it orients itself in the room, recognizes objects, and grasps them with the necessary sensitivity. Additionally, it reacts to gestures and learns how to empty a dishwasher or clean the counter by watching a human colleague. Thus, it adapts naturally to our environment. At CeBIT, ARMAR will show how it moves between a refrigerator, counter, and dishwasher. A video of ARMAR is below.

 

 

 

 

 

 

Various techniques have emerged to develop the science of robotics and robots. One method is evolutionary robotics, in which a number of differing robots are submitted to tests. Those which perform best are used as a model to create a subsequent "generation" of robots. Another method is developmental robotics, which tracks changes and development within a single robot in the areas of problem-solving and other functions.
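
As a rough illustration of the evolutionary approach (this is a generic sketch of an evolutionary loop, not any particular lab's method; the fitness function, population size and mutation rate are made up for the example), the test-select-breed cycle might look like this:

    import random

    # Toy evolutionary-robotics loop: each "robot" is just a list of controller
    # parameters, and fitness is a stand-in for performance on some test course.
    def fitness(params):
        target = [0.5, -0.2, 0.8]  # hypothetical "ideal" controller
        return -sum((p - t) ** 2 for p, t in zip(params, target))

    def mutate(params, rate=0.1):
        return [p + random.gauss(0, rate) for p in params]

    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]

    for generation in range(50):
        # Test every robot, keep the best performers as models for the next generation.
        population.sort(key=fitness, reverse=True)
        parents = population[:5]
        population = parents + [mutate(random.choice(parents)) for _ in range(15)]

    print("best controller after 50 generations:", max(population, key=fitness))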

 

As robots become more advanced, eventually there may be a standard computer operating system designed mainly for robots. Robot Operating System is an open-source set of programs being developed at Stanford University, the Massachusetts Institute of Technology and the Technical University of Munich, Germany, among others. ROS provides ways to program a robot's navigation and limbs regardless of the specific hardware involved. It also provides high-level commands for items like image recognition and even opening doors. When ROS boots up on a robot's computer, it obtains data on attributes such as the length and movement of the robot's limbs and relays this data to higher-level algorithms. Microsoft is also developing a "Windows for robots" system with its Robotics Developer Studio, which has been available since 2007. Japan hopes to have full-scale commercialization of service robots by 2025. Much technological research in Japan is led by Japanese government agencies, particularly the Trade Ministry.
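
To make the "regardless of the specific hardware" point concrete, here is a minimal sketch of a ROS node written with the Python client library rospy; the node name and the /cmd_vel topic are conventional examples I am assuming, and any robot whose driver listens on that topic would move, whatever its actual motors look like:

    #!/usr/bin/env python
    # Minimal ROS node: publish velocity commands without caring which robot
    # (or simulator) executes them - the driver behind /cmd_vel takes care of that.
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node('simple_mover')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz

    cmd = Twist()
    cmd.linear.x = 0.2   # drive forward at 0.2 m/s
    cmd.angular.z = 0.1  # while turning slightly

    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()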

 

Closely related subject to robots is AI (artificial intelligence). A current trend in AI research involves attempts to replicate a human learning system at the neuronal level - beginning with a single functioning synapse, then an entire neuron, the ultimate goal being a complete replication of the human brain. This is basically the traditional reductionist perspective: break the problem down into small pieces and analyze them, and then build a model of the whole as a combination of many small pieces. There are neuroscientists working on these AI problems - replicating and studying one neuron under one condition - and that is useful for some things. But to replicate a single neuron and its function at one snapshot in time is not helping us understand or replicate human learning on a broad scale for use in the natural environment. We are quite some ways off from reaching the goal of building something structurally similar to the human brain, and even further from having one that actually thinks like one. Which leads me to the obvious question: What’s the purpose of pouring all that effort into replicating a human-like brain in a machine, if it doesn't ultimately function like a real brain?
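
Just to show how modest that "single neuron" starting point is, here is a toy sketch of a classic leaky integrate-and-fire neuron model; the constants are illustrative textbook-style values, not taken from any particular research project:

    # Toy leaky integrate-and-fire neuron: the membrane voltage leaks toward rest,
    # input current pushes it up, and crossing a threshold produces a "spike".
    def simulate(input_current, steps=200, dt=1.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0, tau=20.0, r=10.0):
        v = v_rest
        spikes = []
        for t in range(steps):
            dv = (-(v - v_rest) + r * input_current) / tau
            v += dv * dt
            if v >= v_thresh:
                spikes.append(t)  # record the spike time, then reset
                v = v_reset
        return spikes

    print(simulate(input_current=2.0))  # spike times for a constant input

Replicating one such unit is easy; the hard part the paragraph above hints at is wiring up billions of them so that the whole actually learns and behaves like a brain.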

 

One of the great strengths of the human brain is its impressive efficiency. There are two types of systems for thinking or knowledge representation: implicit and explicit, sometimes described as "system 1" and "system 2" thinking. System 1, or the implicit system, is the automated and unconscious system, based in heuristics, emotion, and intuition. This is the system used for generating mental shortcuts. System 2, or the explicit system, is the conscious, logic- and information-based system, and the type of knowledge representation most AI researchers use. These are the step-by-step instructions, the system that stores every possible answer and has it readily available for computation and matching. There are advantages to both systems, depending on what the task is. When accuracy is paramount, and you need to consciously think your way through a detailed problem, the explicit system is more useful. But sometimes being conscious of every single move and thought in the process of completing a task makes it more inefficient, or even downright impossible. Consider a simple human action, such as standing up and walking across the room. Now imagine if you were conscious (explicit system) of every single muscle activation, shift of balance and movement, and had to judge distances and determine the amount of force to apply. You would be mentally exhausted by the time you crossed half the distance. When actually walking, the brain's implicit system takes over, and you stand up and walk with barely a thought as to how your body is making that happen on a physiological level. Now imagine programming an AI to stand up and walk across the room. You need to instruct it to do every single motion and action it takes to complete that task. There is a reason why it is so difficult to get robots to move as humans do: the implicit system is just better at it. The explicit system is a resource hog, especially in tasks that are automated in humans but have to be replicated in machines. But what if you could teach an AI to operate using the implicit system, based on intuition, rather than having to run through endless computations to come up with a single solution? Getting AI to use intuition-based thinking would truly bring us closer to real human-like machines, and attempts are in progress.
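
As a loose analogy only (the word list and the rule of thumb below are invented for illustration), the difference between the two styles of knowledge representation might look like this in code:

    # Loose analogy: "explicit" reasoning as exhaustive lookup of stored answers
    # versus an "implicit" heuristic shortcut. Both guess whether a word is English.
    DICTIONARY = {"robot", "laws", "brain", "walk", "paint"}  # every stored answer

    def explicit_check(word):
        # System 2 style: compare against every stored answer, one by one.
        return any(word == known for known in DICTIONARY)

    def implicit_check(word):
        # System 1 style: a cheap rule of thumb - "does it look pronounceable?"
        vowels = sum(ch in "aeiou" for ch in word)
        return 0 < vowels < len(word)

    print(explicit_check("robot"), implicit_check("robot"))  # True True
    print(explicit_check("zxqwk"), implicit_check("zxqwk"))  # False False

The explicit check is exact but has to store and scan every answer; the implicit check is nearly free and usually right, but it can be fooled - which is roughly the trade-off described above.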

 

Speaking of AI, recently there has been a piece of news that caught my attention. Check the following picture:

 

ir10.jpg

It is very nice, right? I will admit I could not draw it that well. But I used to draw something like that as a kid. Well, these improvisations were done by - the software itself. The software is called "The Painting Fool" and its author is Simon Colton. The idea of the Painting Fool - an evolving software package which won artificial intelligence prizes in 2007 - is to come up with art in a similar way to a human. The software has been in development since 2001, and has evolved hugely during that time. It has created "works" by looking at photographs and improvising around the emotion in the picture - so a disgusted face is turned into an Edvard Munch-esque painting in brown and green colors.

 

ir11.jpgir12.jpg

 

It bases the portrait on the image provided by the emotional modeling software, and chooses its art materials, colour palette and abstraction level according to the emotion being expressed. It can also "read" blogs, Google Image searches and other internet materials to improvise a painting around a news story. It improvised a painting around a story from Afghanistan using blogs, news stories and social network posts, and created a harsh, earthy painting with a childlike aggression to it.
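
I have no idea how the Painting Fool implements this internally, but the "emotion drives the rendering choices" step can be imagined as a simple mapping; everything below (the emotion names, palettes and abstraction levels) is purely hypothetical and only illustrates the idea:

    # Purely hypothetical illustration of "emotion -> rendering choices";
    # this is NOT the Painting Fool's actual code, just the idea of the mapping.
    RENDER_STYLES = {
        "disgust": {"palette": ["brown", "green"], "medium": "oil", "abstraction": 0.7},
        "sadness": {"palette": ["grey", "blue"], "medium": "charcoal", "abstraction": 0.4},
        "joy": {"palette": ["yellow", "red"], "medium": "pastel", "abstraction": 0.2},
    }

    def choose_style(detected_emotion):
        # Fall back to a neutral style if the emotion model reports something unknown.
        neutral = {"palette": ["black", "white"], "medium": "pencil", "abstraction": 0.5}
        return RENDER_STYLES.get(detected_emotion, neutral)

    print(choose_style("disgust"))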

 

But here is something else coming from Asimov's kitchen - robopsychology or AI psychology. Similar to the way we have a variety of psychology professionals that deal with the spectrum of human behavior, there is a range of specialties/duties for robopsychologists as well. Some examples of the potential responsibilities of a robopsychologist:

  • Assisting in the design of cognitive architectures
  • Developing appropriate lesson plans for teaching AI targeted skills
  • Creating guides to help the AI through the learning process
  • Addressing any maladaptive machine behaviors
  • Researching the nature of ethics and how it can be taught and/or reinforced
  • Creating new and innovative therapy approaches for the domain of computer-based intelligences

 

Andrea Kuszewski recently wrote a lengthy blog post (parts of which I already used above) about this job, which is pretty cool. Andrea makes an interesting observation: a baby is born without a database of facts. It is in some ways a blank slate, but it also has a genetic code that acts as a set of instructions on how to learn when exposed to new things. In the same way, our AI is born completely empty of knowledge, a blank slate. We give it an algorithm for learning, then expose it to the material it needs to learn (in this case, books to read) and track progress. If children are left to learn without any assistance or monitoring of progress, over time they can run into problems that need correcting. Because our AI learns in the same fashion, it can run into the same kinds of problems. When we notice that learning slows, or the AI starts making errors, the robopsychologist will step in, evaluate the situation, determine where the learning process broke down, and then make the necessary changes to the AI lesson plan in order to get learning back on track. Likewise, we can also use the AI to develop and test various teaching models for human learning tasks. Let's say we wanted to test a series of different human teaching paradigms for learning a foreign language. We could create a different learning algorithm based on each teaching model, program one into each AI, then test for efficiency, speed, retention, generalization, etc.
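
As a toy sketch of the "notice that learning slows and step in" part (the scores, window size and threshold below are invented; real tooling would be far richer), progress monitoring could start with something as simple as comparing recent accuracy against earlier accuracy:

    # Toy progress monitor: flag the lesson plan for review when accuracy
    # stops improving. All numbers here are made up for illustration.
    def plateaued(accuracy_history, window=4, min_gain=0.01):
        """True if the mean of the last `window` scores barely improved
        over the mean of the `window` scores before them."""
        if len(accuracy_history) < 2 * window:
            return False
        recent = sum(accuracy_history[-window:]) / window
        earlier = sum(accuracy_history[-2 * window:-window]) / window
        return (recent - earlier) < min_gain

    scores = [0.40, 0.52, 0.61, 0.67, 0.70, 0.71, 0.71, 0.72, 0.72, 0.72, 0.72, 0.72]
    if plateaued(scores):
        print("Learning has plateaued - time to revise the lesson plan.")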

 

Indeed, it seems like a no-brainer: if you want to replicate human-like thinking, collaborate with someone who understands human thinking on a fundamental and psychological level, and knows how to create a lesson plan to teach it. But things are changing. The field of AI is finally, slowly starting to appreciate the important role psychology needs to play in its research. Robopsychology may have started out as a fantasy career in the pages of a sci-fi novel, but it illustrated a very smart and useful purpose. In the rapidly advancing and expanding field of artificial intelligence, the most forward-thinking research labs are beginning to recognize the important - some even say critical - role psychology plays in the quest to engineer human-like machines.

 

 

Credits: Wikipedia, Karlsruhe Institute of Technology, Monica Anderson, Andrea Kuszewski