I-X – Intelligent Technology

This blog post is intended to provide a quick overview of our work on I-X – “Intelligent Technology”, its underlying <I-N-C-A> ontology, and especially its application to intelligent planning systems and intelligent collaborative spaces using I-Plan and I-Rooms.

Austin Tate and the Edinburgh Planning Group

Firstly, a brief introduction. I am Professor of Knowledge-Based Systems at the University of Edinburgh and Director of the University’s Artificial Intelligence Applications Institute (AIAI). More information is available via http://www.aiai.ed.ac.uk/~bat/.

AI planning has been a topic of active research at Edinburgh since the 1960s and I have been exploring this area since the early 1970s. The Planning and Activity Management Group within the Artificial Intelligence Applications Institute (AIAI) in the School of Informatics at the University of Edinburgh is exploring representations and reasoning mechanisms for inter-agent activity support. The agents may be people or computer systems working in a coordinated fashion. The group explores and develops generic approaches by engaging in specific applied studies. Applications include crisis action planning, command and control, space systems, manufacturing, logistics, construction, procedural assistance, help desks, emergency response, etc.

Our long term aim is the creation and use of task-centric virtual organisations involving people, government and non-governmental organisations, automated systems, grid and web services working alongside intelligent robotic, vehicle, building and environmental systems to respond to very dynamic events on scales from local to global.

More on our planning technology, research and applications projects can be found at http://www.aiai.ed.ac.uk/project/plan/

I-X and I-Plan

I-X – http://www.aiai.ed.ac.uk/project/ix/ or http://i-x.info – is a systems integration architecture. Its design is based on the earlier O-Plan agent architecture and incorporates a hierarchical viewpoint into its systems design. I-X provides an issue-handling style of architecture, with reasoning and functional capabilities provided as plug-ins. Plug-ins also allow for sophisticated constraint management and a wide range of communications and visualisation capabilities. I-X agents may be combined in various ways, and may interwork with other processing capabilities or architectures, especially where hybrid cognitive systems are joined to algorithmic and data-driven sub-cognitive modules so that they can all work in an “intelligible” and human-level explainable manner. I-X supports applications orientated towards “synthesis” tasks such as design, configuration and especially planning. It is especially designed to support mixed-initiative work between people, robots and computer systems working in a cooperative fashion.

An introductory paper to the approach is available here…

Tate, A. (2000) Intelligible AI Planning, in Research and Development in Intelligent Systems XVII, Proceedings of ES2000, The Twentieth British Computer Society Special Group on Expert Systems International Conference on Knowledge Based Systems and Applied Artificial Intelligence, pp. 3-16, Cambridge, UK, December 2000, Springer.
[http://www.aiai.ed.ac.uk/project/ix/documents/2000/2000-sges-tate-intelligible-planning.pdf]

In a nutshell, all aspects of agent capabilities, activities, tasks, objectives, plans, etc. are represented in some way as a specialisation of a set of “issues”, a set of “nodes” (think of activities in a planning context, or parts of a designed object), a set of “constraints” of various kinds and a set of “annotations”. We write this as <I-N-C-A>. I-X, our systems architecture, essentially uses its computational capabilities to handle issues, apply nodes, manage constraints and interpret annotations to inform, explain or support the use of the construct.
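
To make the construct concrete, here is a minimal sketch of an <I-N-C-A> object in Python (rather than in the I-X implementation itself); all class and field names are illustrative assumptions for this post, not the actual I-X data model or API.

```python
# Illustrative sketch only: a minimal Python rendering of the <I-N-C-A>
# construct (issues, nodes, constraints, annotations). Names are invented.

from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class Issue:
    description: str        # an outstanding question or flaw still to be addressed
    status: str = "open"    # e.g. "open" or "handled"

@dataclass
class Node:
    name: str               # in a planning context, an activity to include
    subnodes: List["Node"] = field(default_factory=list)

@dataclass
class Constraint:
    kind: str               # e.g. "temporal", "co-designation", "spatial"
    expression: Any         # the constraint itself, in whatever form its manager expects

@dataclass
class Annotation:
    key: str                # e.g. "rationale"
    value: Any              # e.g. a gIBIS-style record of why a choice was made

@dataclass
class INCA:
    """A synthesised artefact (plan, design, configuration) as <I-N-C-A>."""
    issues: List[Issue] = field(default_factory=list)
    nodes: List[Node] = field(default_factory=list)
    constraints: List[Constraint] = field(default_factory=list)
    annotations: List[Annotation] = field(default_factory=list)

# A tiny example: a plan with one open issue and one included activity.
plan = INCA(
    issues=[Issue("how should the supplies reach the incident site?")],
    nodes=[Node("transport supplies")],
    constraints=[Constraint("temporal", ("load supplies", "before", "transport supplies"))],
    annotations=[Annotation("rationale", "road route chosen pending weather check")],
)
```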

<I-N-C-A> Ontology – Issues, Nodes, Constraints and Annotations

Here is a quick, introductory-style paper on the idea of treating all aspects of task specification, planning, environment modelling and lower-level activity as “constraints on permissible behaviour”, and on our <I-N-C-A> ontology for plans, activity, agent capabilities and the like (though it is actually more general and applies also to designed artefacts, scheduled things and configuration tasks).

Tate, A. (2003) <I-N-C-A>: A Shared Model for Mixed-initiative Synthesis Tasks, Proceedings of the Workshop on Mixed-Initiative Intelligent Systems (MIIS) at the International Joint Conference on Artificial Intelligence (IJCAI-03), pp. 125-130, Acapulco, Mexico, August 2003.
[http://www.aiai.ed.ac.uk/project/ix/documents/2003/2003-ijcai-miis-tate-inca.pdf]

My work on hierarchical planning over the years led to a very simple abstract ontology for objectives, tasking, activity specification and capability modelling, intended to be as flexible (and additive) as required for any application. The concepts within the ontology have formed a core of standards such as the NIST Process Specification Language (later an ISO process specification standard). We call this <I-N-C-A>. It is an ontology suited to any “synthesised” thing, allowing for design and configuration as well as planning applications. A synthesised artefact is represented as a set of constraints on behaviour, where the types of constraint are “issues” to be addressed, “nodes” (which in a planning context can be thought of as “include activity” constraints), “constraints” themselves (in a planning context usually temporal, object co-designation/non-co-designation and sometimes spatial constraints), and “annotations” (which we use to capture the underlying gIBIS-style rationale of how issues/tasks are turned into selected activities under the constraints).

An I-X system can “handle issues”, “apply nodes”, “manage constraints” and “interpret annotations”. The idea is that the components in an <I-N-C-A>-inspired system share and communicate constraints up and down, and that lower levels can communicate via partially shared constraints that are understood between the levels (often involving time, and object co-designation (=) and non-co-designation (/=)), so that yes, no and “maybe if” information can be passed between the levels to help home in on a mutually acceptable artefact (design or plan) in a mixed-initiative fashion. This works well where humans, organisations, robots and environmental systems are all cooperating.
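
As a rough illustration of that processing cycle, here is a minimal sketch reusing the Issue and Constraint classes from the earlier sketch. The handler and manager interfaces are invented for the example and are not taken from I-X; the point is just the yes / no / “maybe if” exchange around proposed constraints.

```python
# Illustrative sketch only: an <I-N-C-A>-style loop in which plug-in handlers
# address open issues and a constraint manager answers yes / no / "maybe if"
# when new constraints are proposed. All names are invented for this example.

class ConstraintManager:
    """Keeps a consistent set of constraints of the kinds it understands."""

    def __init__(self):
        self.accepted = []

    def propose(self, constraint):
        # Returns ("yes", []) if the constraint is consistent as it stands,
        # ("maybe", extras) if it is acceptable provided the extra constraints
        # also hold, or ("no", []) if it cannot be accepted at all.
        if self.consistent(self.accepted + [constraint]):
            self.accepted.append(constraint)
            return "yes", []
        extras = self.repair_suggestions(constraint)
        return ("maybe", extras) if extras else ("no", [])

    def consistent(self, constraints):
        return True          # placeholder consistency check

    def repair_suggestions(self, constraint):
        return []            # placeholder "maybe if ..." suggestions


def handle_issues(artefact, handlers, manager):
    """Pick open issues and let a capable plug-in handler address each one.

    A handler may add nodes, propose constraints and record annotations;
    proposals the manager rejects are raised as new issues for another
    agent (human or machine) to resolve in a mixed-initiative fashion.
    """
    while any(i.status == "open" for i in artefact.issues):
        issue = next(i for i in artefact.issues if i.status == "open")
        handler = next((h for h in handlers if h.can_handle(issue)), None)
        if handler is None:
            break                              # leave remaining issues for another agent
        for constraint in handler.handle(issue, artefact):
            answer, extras = manager.propose(constraint)
            if answer == "yes":
                artefact.constraints.append(constraint)
            elif answer == "maybe":
                artefact.constraints.extend([constraint] + extras)
            else:
                artefact.issues.append(
                    Issue("cannot satisfy a proposed " + constraint.kind + " constraint"))
        issue.status = "handled"
    return artefact
```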

Applications and Use Cases

I-X, I-Plan and <I-N-C-A> have been applied in a number of areas… many reflecting our applications and research funding interests in collaborative systems, operations centres, emergency response, etc. Links to projects are at http://www.aiai.ed.ac.uk/project/plan/.

We have also explored intelligent instrumented spaces in which people and knowledge-based systems can cooperate in areas such as mixed-reality distributed team operations centres using the concept of an I-Room – a Virtual Space for Intelligent Interaction.

<I-N-C-A> has even been applied, without any software involvement, as a business modelling approach for collecting information and making business cases for the potential opening of large-scale plants across the world for a major food manufacturer.

More Publications

The above papers (as PDF) and others that go into more technical details on the I-X/I-Plan system and how it uses <I-N-C-A> can be found in the following documents index…

A couple of publications that may give a good overview are as follows:

  • Tate, A. (2000) Intelligible AI Planning, in Research and Development in Intelligent Systems XVII, Proceedings of ES2000, The Twentieth British Computer Society Special Group on Expert Systems International Conference on Knowledge Based Systems and Applied Artificial Intelligence, pp. 3-16, Cambridge, UK, December 2000, Springer. [PDF]
  • Tate, A. (2003) <I-N-C-A>: A Shared Model for Mixed-initiative Synthesis Tasks, Proceedings of the Workshop on Mixed-Initiative Intelligent Systems (MIIS) at the International Joint Conference on Artificial Intelligence (IJCAI-03), pp. 125-130, Acapulco, Mexico, August 2003. [PDF]
  • Tate, A. (2014) Using Planning to Adapt to Dynamic Environments, in Suri, N. and Cabri, G. (eds.) (2014) Adaptive, Dynamic, and Resilient Systems, Chapter 13, pp. 243-257, CRC Press, Taylor & Francis. [PDF]

Our earlier O-Plan planner’s use of <I-N-OVA> (a forerunner of the more abstract upper-level <I-N-C-A> ontology) is described in these publications from 1984 to 2003…

  • http://www.aiai.ed.ac.uk/project/oplan/documents/
  • For an overview of O-Plan and its applications see… Tate, A. and Dalton, J. (2003) O-Plan: a Common Lisp Planning Web Service, invited paper, in Proceedings of the International Lisp Conference 2003, New York, NY, USA, October 12-15, 2003. [PDF]


How this might be used alongside other Systems Architectures

<I-N-C-A> is intended to act as a simple, easily understood, upper ontology or outer layer for representing all aspects of tasks, activity, objectives and agent capabilities. Alongside an ontology and models for the objects in the domain, which <I-N-C-A> deliberately does not prescribe, it can offer a framework which can easily be deepened to meet the actual requirements while providing a stable overall basis which allows for system reasoning and human understanding.
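
As a rough sketch of that layering (again in Python, with all names invented for this example), a particular domain might deepen the generic constraint layer with its own constraint types, while keeping its object model, which <I-N-C-A> deliberately does not prescribe, in a separate domain ontology:

```python
# Illustrative sketch only: deepening the generic <I-N-C-A> constraint layer
# with domain-specific constraint types, while domain objects (here, vehicles)
# live in a separate model that <I-N-C-A> does not prescribe.

from dataclasses import dataclass

@dataclass
class Constraint:                 # generic outer-layer constraint, as in the earlier sketch
    kind: str
    expression: object

@dataclass
class Vehicle:                    # part of a separate domain object model
    name: str
    capacity_tonnes: float

class BeforeConstraint(Constraint):
    """Domain deepening: temporal ordering between two activities (nodes)."""
    def __init__(self, earlier: str, later: str):
        super().__init__(kind="temporal", expression=(earlier, "before", later))

class CapacityConstraint(Constraint):
    """Domain deepening: a vehicle's load must stay within its capacity."""
    def __init__(self, vehicle: Vehicle, load_tonnes: float):
        super().__init__(kind="resource",
                         expression=(vehicle.name, load_tonnes, vehicle.capacity_tonnes))

# Example use in a small logistics domain.
truck = Vehicle(name="truck-1", capacity_tonnes=10.0)
plan_constraints = [
    BeforeConstraint("load truck-1", "drive truck-1"),
    CapacityConstraint(truck, load_tonnes=8.5),
]
```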

The work on <I-N-C-A> involves the investigation of the use of shared models for task directed communication between human and computer agents who are jointly exploring a range of alternative options for a product design or for joint activity.

Six concepts are used as the basis for exploring task-orientated multi-agent and mixed-initiative work involving users and systems. Together these provide for a shared model of what each agent can do and is authorised to do, and what those agents can act upon. The concepts are as follows (a small illustrative sketch of the option and authority concepts appears after the list):

  1. Shared Object/Product Model — a structured representation of the object being modelled or produced using a common constraint model of the object or product.
  2. Shared Planning and Activity Model — a rich plan representation for activities, objectives and plans using a common constraint model of activity.
  3. Shared Task Model — Mixed initiative model of “mutually constraining the space of behaviour” (or objects/products/plans).
  4. Shared Space of Options — explicit option management.
  5. Shared Model of Agent Processing Capabilities — handlers for issues, appliers for nodes, constraint (model) managers and (possibly) annotation interpreters.
  6. Shared Understanding of Authority — management of the authority to do work (to handle issues and apply/execute nodes/activities) and which may take into account options and levels of abstraction of the model of the object or product.
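
Here is a minimal sketch of explicit option management (concept 4) combined with a simple authority check (concept 6); the classes, fields and example option names are all invented for this illustration rather than drawn from I-X.

```python
# Illustrative sketch only: a shared space of options plus a check on the
# authority an agent has to work on them. All names are invented.

from dataclasses import dataclass, field
from typing import Set

@dataclass
class Option:
    name: str                  # e.g. "option-A: road move", "option-B: air move"
    artefact: object           # the <I-N-C-A> construct for this alternative

@dataclass
class Authority:
    agent: str
    may_handle_issues: bool = False
    may_execute_nodes: bool = False
    options: Set[str] = field(default_factory=set)   # options this agent may work on

def authorised(authority: Authority, action: str, option: Option) -> bool:
    """Check whether an agent may perform an action on a given option."""
    if option.name not in authority.options:
        return False
    if action == "handle-issue":
        return authority.may_handle_issues
    if action == "execute-node":
        return authority.may_execute_nodes
    return False

# Example: a planner agent allowed to refine option-A but not to execute it.
planner = Authority(agent="planner-1", may_handle_issues=True, options={"option-A"})
option_a = Option(name="option-A", artefact=None)
assert authorised(planner, "handle-issue", option_a)
assert not authorised(planner, "execute-node", option_a)
```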

Engagement with various standards-setting groups has been a part of the work: PIF, NIST PSL, OMWG CPR and DARPA SPAR all converge on a single core model of activity which can be related to the more abstract <I-N-C-A>.

IBM Project Intu – “Self” – Cognitive Architectures

Sample Project Intu Conversational Agent

Resources for Discussion

http://www.aiai.ed.ac.uk/~ai/resources/2019-01-16-Briefing/


1 Response to I-X – Intelligent Technology

  1. bat says:

    This blog post was initially created to share information with Grady Booch at IBM following a Twitter exchange about systems which employ both cognitive and data driven neural network approaches to intelligent systems. Grady shared links to information about “Project Intu (Self)”.

    https://github.com/watson-intu/self
    https://www.youtube.com/watch?v=ADnCUgrEt0U
