Automated Cognitive Planning and Plan Recognition

Given an initial condition, planning finds a sequence of actions that takes an agent to their goal. In multi-agent systems, however, it is often important not only that the necessary world states change, but also that the other agents in the system come to hold certain epistemic attitudes. For example, consider a coordination task in which agents a1 and a2 must both push the same-color button to progress. In order for agent a1 to push the red button, a1 is required to believe that agent a2 agrees to push the red button.
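Written schematically (this is only a sketch of the shape of such a precondition, not DCEC's exact syntax), the requirement on a1's action is a belief operator wrapping an agreement:

$$\mathbf{B}\big(a_1,\ \mathit{agrees}(a_2,\ \mathit{push}(a_2, \mathit{red}))\big)$$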

Epistemic attitudes are only a subset of the cognitive attitudes needed to plan in many real-world settings. Agents must also be able to reason deontically, in order to handle ethical situations. We must also take into account that our perception, which takes sensory input and converts it into declarative information, may be compromised. To capture all of this, we use a cognitive calculus called DCEC, which lets us represent the initial condition, the preconditions and effects of actions, and our goals using cognitive operators. These operators include, but are not limited to, belief, perception, communication, and obligation.
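As a rough sketch of what such a representation can look like in code (the class names and structure below are our own illustration, not DCEC's syntax or Spectra's input language), each operator simply wraps an agent and a formula, so world-state atoms and cognitive attitudes can be mixed freely in preconditions and goals:

```python
from dataclasses import dataclass
from typing import Union

# Illustrative wrappers for a few cognitive operators; names are ours, not DCEC's.
Formula = Union["Atom", "Believes", "Perceives", "Communicates", "Ought"]

@dataclass(frozen=True)
class Atom:
    text: str                      # e.g. "holding(a1, red_button)"

@dataclass(frozen=True)
class Believes:
    agent: str
    content: "Formula"

@dataclass(frozen=True)
class Perceives:
    agent: str
    content: "Formula"

@dataclass(frozen=True)
class Communicates:
    sender: str
    receiver: str
    content: "Formula"

@dataclass(frozen=True)
class Ought:
    agent: str
    content: "Formula"             # what the agent is obligated to bring about

# A goal can now mix world states and cognitive attitudes, e.g.:
goal = Believes("a1", Communicates("a2", "a1", Atom("agrees(a2, push(red_button))")))
```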

We built a planner called Spectra, which uses ShadowProver to reason over DCEC. For each situation, Spectra gathers the set of actions applicable to the agent and checks whether the agent has achieved their goal. Spectra differs from model-based approaches in that it is proof-theoretic: for each action within a plan, we produce a proof that the agent is indeed able to perform that action. We don't rely on the closed-world or domain-closure assumptions common to finite-model approaches, which lets us handle the incomplete and uncertain initial conditions an agent often finds themselves in.
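To make the proof-theoretic flavor concrete, here is a heavily simplified sketch of such a search loop, assuming a placeholder prove(assumptions, conjecture) function standing in for a ShadowProver call; none of the names below are Spectra's actual interface:

```python
def apply_effects(state, action):
    # Placeholder state-update rule: drop deleted facts, add new ones.
    return (state - set(action.deletes)) | set(action.adds)

def plan(background, state, goal, actions, prove, depth=0, max_depth=10):
    """Depth-first, proof-theoretic plan search (illustrative sketch only).

    `prove(assumptions, conjecture)` stands in for a ShadowProver call and is
    assumed to return a proof object (truthy) on success and None otherwise.
    """
    # Goal check: the goal must be provable from the current state,
    # not merely true in some enumerated finite model.
    if prove(background | state, goal):
        return []
    if depth >= max_depth:
        return None
    for action in actions:
        # An action is applicable only if its precondition is provable; the
        # resulting proof is the justification for taking the action.
        if prove(background | state, action.precondition) is None:
            continue
        rest = plan(background, apply_effects(state, action), goal,
                    actions, prove, depth + 1, max_depth)
        if rest is not None:
            return [action] + rest
    return None
```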

Our planner searches over lifted action schemas, allowing us to avoid instantiating actions in hard-to-ground domains. When possible, Spectra finds a set of plans, where each plan is a sequence of grounded actions that takes the agent from their initial condition to their goal. The plans are guaranteed to reach the goal no matter which specific state consistent with the initial condition the agent actually finds themselves in.
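For illustration, here is roughly what the lifted/grounded distinction amounts to, using a made-up schema syntax with "?"-prefixed variables (not Spectra's actual input format); a lifted planner avoids or defers the exhaustive enumeration that ground() performs below:

```python
from itertools import product

# Illustrative lifted schema: variables (prefixed "?") stay unbound until the
# planner needs a concrete, grounded action.
push_schema = {
    "name": "push",
    "params": ["?agent", "?button"],
    "precondition": "Believes(?agent, agrees(other(?agent), push(?button)))",
    "effect": "pushed(?button)",
}

def substitute(template, binding):
    # Naive textual substitution of variables, sufficient for this sketch.
    for var, value in binding.items():
        template = template.replace(var, value)
    return template

def ground(schema, domain):
    """Enumerate grounded instances of a schema over candidate objects.

    `domain` maps each parameter to candidate objects, e.g.
    {"?agent": ["a1", "a2"], "?button": ["red_button", "blue_button"]}.
    Shown only to make the schema/instance distinction concrete.
    """
    params = schema["params"]
    for values in product(*(domain[p] for p in params)):
        binding = dict(zip(params, values))
        yield {
            "name": schema["name"],
            "precondition": substitute(schema["precondition"], binding),
            "effect": substitute(schema["effect"], binding),
        }
```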

On top of Spectra, we're developing an intent recognition framework. Given a set of goals and a sequence of observations, an observer agent wants to infer the subset of goals the actor agent is most likely pursuing. This task is inherently epistemic and is part of an ongoing project in the RAIR lab. For example, what role can the inductive cognitive calculus IDCEC play in adjudicating between possible goals? Are the agents cooperative in the recognition process, as in an assisted-living setting? Each inferred goal comes with an argument not only that the agent can achieve that goal, but that it is *rational* for the agent to pursue it with respect to the other goals.
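As a crude illustration of one piece of the adjudication step (and nothing more; the framework itself is argument-based and epistemic), a candidate goal could be filtered by whether the observed actions form a prefix of some plan the actor could be following for that goal, assuming a hypothetical plan_for(goal) helper:

```python
def recognize(goals, observations, plan_for):
    """Filter candidate goals by compatibility with the observed action prefix.

    Illustrative only: `plan_for(goal)` is assumed to return a plan for the
    goal (a list of actions) or None; it is not part of Spectra's real API.
    """
    candidates = []
    for goal in goals:
        plan = plan_for(goal)
        if plan is None:
            continue  # the actor cannot achieve this goal at all
        # Keep the goal if the observed actions are a prefix of a plan for it.
        if plan[: len(observations)] == list(observations):
            candidates.append(goal)
    return candidates
```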