John McCarthy – The Robot and the Baby

Professor John McCarthy died on 24th October 2011 at the age of 84. He was an early pioneer of computer technology and computer time-sharing, and the inventor of LISP, one of the very first programming languages and, radically, one based on symbolic computing. He was also a pioneer of Artificial Intelligence, and indeed originator of the term “AI”, which was adopted following the title he gave to a conference he convened at Dartmouth College in New Hampshire in the summer of 1956. John received the Turing Award and many other accolades and honours, including the United States National Medal of Science.

John McCarthy was known to many of us in the Artificial Intelligence community as the “Father of AI”; I came to know of him while still very much a baby in the subject myself. In my student days in the early 1970s he appeared on the BBC TV programme “Controversy”, debating the value of research on general purpose robots with Sir James Lighthill, alongside my PhD supervisor Donald Michie, himself an AI pioneer and wartime codebreaker who had worked with Alan Turing at Bletchley Park. John wrote and communicated widely on his interests in robot decision making.

Typical of John’s desire to communicate about his field was a short sci-fi story he wrote in 2001, “The Robot and the Baby”, which has many interesting themes and, to me, epitomises his breadth of interests, politics and fascinating opinions. A capable companion robot – “Robot Model number GenRob337L3, serial number 337942781” (R781 for short) – was one of many deployed to assist people, deliberately made unappealing and emasculated by the constraints society had placed on robot use.

The story begins:

“Mistress, your baby is doing poorly. He needs your attention.”
“Stop bothering me, you …” … “Love the … baby, yourself.”

John amusingly includes a long line of reasoning by R781, in the parenthesised notation of LISP, weighing the probability of the baby coming to harm against the cost of violating its key constraints:

(= (Command 337) (Love Travis))
(True (Not (Executable (Command 337))) (Reason (Impossible-for robot (Action Love))))
(Will-cause (Not (Believes Travis) (Loved Travis)) (Die Travis))
(= (Value (Die Travis)) -0.883)
(Will-cause (Believes Travis (Loves R781 Travis)) (Not (Die Travis)))

With this reasoning R781 decides that the value of simulating love for Travis, and thereby saving his life, exceeds by 0.002 the value of obeying the directive not to simulate a person. There follows a progressively escalating series of events in which the whole world watches the authorities handle the situation, commenting on it in real time on social media – anticipating Twitter by some years.
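The decision R781 is described as making can be sketched in Python rather than the story’s LISP notation. The −0.883 utility of Travis dying and the 0.002 margin are quoted above; the −0.881 cost of violating the “do not simulate a person” directive is an assumption chosen purely so the arithmetic reproduces that margin, not a figure from the story.

```python
# A minimal sketch of R781's utility comparison. VALUE_DIE_TRAVIS comes
# from the story; VALUE_SIMULATE_PERSON is an illustrative assumption.

VALUE_DIE_TRAVIS = -0.883        # from the story
VALUE_SIMULATE_PERSON = -0.881   # assumed: cost of breaking the directive

def choose_action():
    """Compare the utility of each option and return the better one."""
    obey = VALUE_DIE_TRAVIS           # obey the directive: Travis dies
    simulate = VALUE_SIMULATE_PERSON  # simulate love: Travis lives
    margin = simulate - obey
    return ("simulate", margin) if simulate > obey else ("obey", margin)

action, margin = choose_action()
print(action, round(margin, 3))  # → simulate 0.002
```

With these numbers the robot chooses to simulate love, winning by the 0.002 margin the story quotes.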

Read the story if you want to explore an informed opinion on the ethics, issues and dilemmas of human–robot interaction, which one day we may face. It has many thought-provoking elements; I personally feel for the emasculated robot that is left in the Smithsonian.

This entry was posted in IDEL11.