
AAAI 2015 Symposium on Deceptive and Counter-Deceptive Machines

  • Call for Papers
  • Important Dates
  • Tentative Schedule
  • Registration Information
  • Organizing Committee

This symposium examines the potential roles and means for deceptive and counter-deceptive machines, and the ethical and social implications thereof.

From the Turing Test to HAL 9000 to Blade Runner to today’s Ex Machina, both rigorous and popular analysis of deception and counter-deception has been part of AI, and part of the larger world fascinated by AI. Moreover, deceptive and counter-deceptive machines are a foreseeable byproduct of our technologized society wherein intelligent systems are rapidly becoming more active and interactive in human physical, economic, and social spheres.

Currently, “socialized” AI systems are being advanced in areas such as affective and persuasive computing, social and cognitive robotics, human-robot interaction, multi-agent and decision-support systems, and e-commerce. The general belief is that socialization enables or significantly improves system efficiency and efficacy. But then, what is the role of deception, even altruistic deception, and of counter-deception in these systems? For example: Does robo-therapy or affective computing engender false beliefs that AI artifacts are fully sentient and, specifically, genuinely empathetic? Should AI produce machines that deceive for the greater good (e.g., espionage), or should that role be the exclusive province of humans?

The symposium will focus on the emerging science and engineering of machine deception and counter-deception. It will explore questions such as: How and when can machines deceive us and each other? Can we effectively use machines to counter deception perpetrated by machines, and by humans? Can there be both a science and engineering of machine deception and counter-deception? If so, what would it look like? What ethical or policy principles might guide the science of machine deception and counter-deception?

Papers on, and demonstrations of, deceptive and counter-deceptive machines are invited, as are papers on topics including, but not limited to, the following:

  • Is purely honest AI engineering possible? Can purely non-deceptive AI be engineered, today and tomorrow? Can a machine truly employ affective cues and emotional language without engaging in manipulative pretense? How? Can a machine explain its own behavior to a lay human in a way that is both intelligible and honest? How?
  • What is the role of machine/robot ethics? How does the blossoming field of machine/robot ethics relate to the realities of deceptive and counter-deceptive machines? Given that machine ethicists seem to be commendably bent on preventing immoral machines/robots, how is their work consistent with, say, attempts to engineer machines that can deceive in the interests of the common good?
  • What are the computational formalisms at the heart of deceptive and counter-deceptive machines? Are rich declarative formalizations of lying, telling half-truths, trust, etc. crucial? What is the relationship between such work, which is now sizable and growing, and the popular and effective modern statistical techniques not tied to explicit logico-mathematical formalizations? (A minimal sketch of one such formalization follows this list.)
  • Are there limits to the apparently explosive spread of deceptive and counter-deceptive machines on the internet and in cyberspace? While embodied robots are one potential sub-class of deceptive/counter-deceptive machines, softbots for, say, economic and cyber-warfare are obviously another, and have potentially enormous impact. Beyond mere policy, are there any technological limits to a deceptive/counter-deceptive machine arms race?
  • Would counter-deceptive machines violate human values of privacy and autonomy? Should AI agents be used to ascertain when human communication between apparently malicious agents is deceptive? If so, what contexts if any should be excluded? Given potentially vast asymmetry of access to information, would such monitoring strip humans of privacy in public communication?
  • What is the potential for deceptive and counter-deceptive machines in e-commerce and counter-fraud? It seems patently obvious that AI technology, suitably deployed, could have prevented the Madoff Ponzi scheme, which caused not only loss of money but also outright loss of life. Should counter-deceptive machines be more widely used in counter-fraud? (This question relates to the earlier one regarding formal approaches, since most counter-fraud technology is statistical. Could a cleverer version of Madoff have avoided statistical aberrations? A toy illustration follows this list.)
  • Do we want AI spies? As we have mentioned, espionage by its nature requires deception and counter-deception; hence, it is presumably ideal territory for deceptive and counter-deceptive machines. But do we want that? If we do, how would the AI engineering work?
  • What about these machines specifically in medicine? Espionage puts a premium on deception and counter-deception, but even fairly standard medical activity involves at least mild deception and counter-deception. The US DoD is sponsoring AI R&D devoted to ensuring that immoral machines are not part of conflict. But how is this thrust consistent with the realities of AI-based medical care? Even robo-medics would presumably need to deceive in order to stay out of sight of adversaries, and would presumably need, at a minimum, to issue half-truths in order to calm and reassure wounded soldiers. Is this “tension” between “ethically correct” robots and deceptive/counter-deceptive machines being taken into account?
  • What is the role of cognitive science and the study of human deception and counter-deception? AI researchers and engineers would presumably benefit from an understanding of how deception and counter-deception work in the less mechanical, bias-filled realm of humans. How are deceptive and counter-deceptive machines leveraging what is known about the psychology of deception and deception detection?
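As a concrete reference point for the question above about declarative formalizations (a minimal, textbook-style sketch; the predicate and operator names here are illustrative and not drawn from any symposium contribution), one common doxastic account holds that agent a lies to agent b about proposition φ just in case a believes ¬φ, a nonetheless asserts φ to b, and a intends that b come to believe φ:

  Lie(a, b, φ)  ≝  B_a ¬φ  ∧  Assert(a, b, φ)  ∧  I_a B_b φ

Here B and I are belief and intention modalities. Half-truths, bluffs, and deception without outright assertion require weakening or replacing these conjuncts, which is precisely where such formalizations become subtle.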
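Similarly, to make the phrase “statistical aberrations” in the counter-fraud question concrete, the toy Python sketch below flags a reported return series that is implausibly smooth (almost no losing months, very low volatility), one of the red flags analysts later cited in the Madoff case. The function name, thresholds, and data are hypothetical illustrations, not a description of any deployed counter-fraud system:

  # Toy statistical red-flag check for a monthly return series.
  # Hypothetical thresholds and data; real counter-fraud systems use far richer models.
  import statistics

  def smoothness_red_flag(monthly_returns, max_down_frac=0.05, min_stdev=0.005):
      """Flag a series that looks 'too good': almost no losing months
      and implausibly low volatility."""
      down_frac = sum(1 for r in monthly_returns if r < 0) / len(monthly_returns)
      return down_frac < max_down_frac and statistics.stdev(monthly_returns) < min_stdev

  # A Madoff-style reported series: a steady ~1% every month, essentially no losses.
  suspicious = [0.010 + 0.0005 * ((i * 7) % 3) for i in range(120)]
  # An ordinary, volatile market-like series.
  market = [0.03, -0.02, 0.01, -0.015, 0.04, -0.01] * 20

  print(smoothness_red_flag(suspicious))  # True: implausibly smooth
  print(smoothness_red_flag(market))      # False

Of course, a cleverer fraudster could inject artificial volatility to evade exactly this test, which is the arms-race point the question raises.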

Call for Papers

Potential symposium participants are invited to submit either a full-length technical paper or a short position or demonstration paper. Full-length papers must be no longer than eight (8) pages, including references and figures. Short submissions can be up to four (4) pages in length and describe speculative work, work in progress, or a system demonstration.

Papers should be submitted here, and instructions for formatting and template files can be found here.

Important Dates

  • August 23, 2015: Full paper submissions due to organizers
  • August 28, 2015: Notifications of acceptance sent by organizers
  • September 4, 2015: Accepted camera-ready copy due to AAAI
  • November 12–14, 2015: Symposium at the Westin Arlington Gateway, Arlington, Virginia

Tentative Schedule

Day 1: Thursday, November 12

9:00 am – 9:10 am: Opening Remarks – Micah Clark, Office of Naval Research

9:10 am – 10:30 am: Invited Contributions (30 min presentation & 10 min Q&A each)

  • K. Forbus, “Analogical Abduction and Prediction: Their Impact on Deception”
  • S. Tran, E. Pontelli, & M. Balduccini, “Reasoning about Truthfulness of Agents Using Answer Set Programming”

10:30 am – 11:00 am: Coffee Break

11:00 am – 12:30 pm: Invited Contributions (30 min presentation & 10 min Q&A each)

  • S. Fahlman, “Position Paper: Knowledge-Based Mechanisms for Deception”
  • C. Sakama, “A Formal Account of Deception”

12:30 pm – 2:00 pm: Lunch

2:00 pm – 3:30 pm: Invited Contributions (30 min presentation & 10 min Q&A each)

  • S. Bringsjord & A. Bringsjord, “Can Accomplices to Fraud Will Themselves to Innocence, and Thereby Dodge Counter-Fraud Machines?”
  • J. Licato, “Formalizing Deceptive Reasoning in Breaking Bad: Default Reasoning in a Doxastic Logic”

3:30 pm – 4:00 pm: Coffee Break

4:00 pm – 4:45 pm: Invited Contribution (30 min presentation & 10 min Q&A)

  • J. Johnson, “Toward an Intelligent Agent for Fraud Detection – The CFE Agent”

4:45 pm – 5:30 pm: Group Discussion (Selmer Bringsjord, Moderator): “Future Directions, Applications, and Key Research Questions for an Emerging Science of Deceptive & Counter-Deceptive Machines”

5:30 pm: Adjourn Day 1

6:00 pm – 7:00 pm: Reception

Day 2: Friday, November 13

9:00 am – 9:30 am: Opening Remarks and Summary of Day 1 Group Discussion – Micah Clark, Office of Naval Research & Selmer Bringsjord, Rensselaer Polytechnic Institute

9:30 am – 10:30 am: Invited Speaker – Sergei Nirenburg, Rensselaer Polytechnic Institute, “Heuristics and Mindreading for Detecting and Managing Deception”

10:30 am – 11:00 am: Coffee Break

11:00 am – 12:30 pm: Invited Contributions (30 min presentation & 10 min Q&A each)

  • A. Samsonovich, “Mind ID: A Psychologically Inspired Approach to Secure Authentication Based on Memory for Faces”
  • P. Bello & W. Bridewell, “Impression Management, Mindshaping and the Social Function of Fibbing”

12:30 pm – 2:00 pm: Lunch

2:00 pm – 3:30 pm: Invited Contributions (30 min presentation & 10 min Q&A each)

  • A. Wagner, “The Most Intelligent Robots are those that Exaggerate: Examining Robot Exaggeration”
  • M. Abramson, “Toward Adversarial Online Learning and the Science of Deceptive Machines”

3:30 pm – 4:00 pm: Coffee Break

4:00 pm – 5:30 pm: Panel Discussion / Debate: “Ethical, Legal, and Philosophical Implications of Deceptive & Counter-Deceptive Machines”; Panelists:

  • Paul Scharre, Center for a New American Security
  • William Casebeer, Lockheed Martin
  • Matthias Scheutz, Tufts University
  • Paul Bello, Naval Research Laboratory
  • Selmer Bringsjord, Rensselaer Polytechnic Institute

5:30 pm: Adjourn Day 2

6:00 pm – 7:00 pm: Plenary Session

Registration Information

(Please check back here soon for more information on how to register for the AAAI 2015 Fall Symposium)

Organizing Committee

  • Chair: Dr. Micah H. Clark, Office of Naval Research, micah.clark@navy.mil
  • Prof. Selmer Bringsjord, Rensselaer Polytechnic Institute, selmer@rpi.edu
  • Dr. Paul Bello, Naval Research Laboratory, paul.bello@nrl.navy.mil

Contact

Rikhiya Ghosh, Rensselaer Artificial Intelligence and Reasoning Laboratory