Oren Etzioni at the Allen Institute for Artificial Intelligence has requested feedback on the following New York Times op-ed:
I provided these notes to Oren on 8th September 2017:
I think it's good to have a wide-ranging discussion on these matters and to involve the general public. Scare stories could lead to the many potential benefits of AI being lost if there is a negative public reaction to the threats such systems can pose, much as happened with GM crops in Europe.
I am not sure that talking of a “human operator” is quite the right model. I see future AI systems and robots as “agents”, and I think the “agency” model is a useful one to include when talking about future AI and robotic/autonomous systems. The notion that responsibility lies with the “deployment” or “authorisation” of the agent could help frame some of the discussion. The idea that such agents are subject to the same laws, regulations, and treaties as any other human agent is a good one, and one you cover. Of course that varies by region, and in some lawless or less constrained “off-shore” (future “off-world”) locations such constraints could be weakened to the detriment of others. So introducing a chain of responsibility back to the organisations, companies, or individuals who “deploy” or “authorise” the agent may be useful.
Remember, as I am sure you are very aware, that Isaac Asimov's stories were a warning that the Three Laws he defined could not anticipate all contexts.
My own main concern is the concentration of technology and robotic systems in the hands of a few oligarchs and global companies as systems and devices replace workers. The lack of a social and cooperative approach to this worldwide, and a “winner takes all” competition between countries and companies, could lead to social unrest and very serious issues. So I am glad to see folks like Bill Gates and others raising the issue of taxing systems and robots in just the same way that workers are taxed, to pay for the social infrastructure of regions, countries, and the world.