The Distributed Artificial Intelligence Research (DAIR) Lab

The Distributed Artificial Intelligence Research Laboratory is part of the Department of Software and Information Systems at the University of North Carolina at Charlotte. The lab, directed by Professor Anita Raja, is concerned with the design and development of reasoning techniques for resource-bounded single and multi-agent systems. Lab members conduct research in distributed computing, cascading risks, clinical informatics, monitoring and control of computation, social computational systems, market intelligence and analytics, machine learning, resource-bounded reasoning, and reasoning under uncertainty.

Complex Networks

  • Coalition Formation in Complex Networks

The emergence of a single coalition among self-interested agents operating on large scale-free networks is a challenging task. We study the role of topological information as well as distributed consensus algorithms in achieving this goal. [More Info]
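As a rough illustration of the consensus side of this problem (a toy sketch, not the lab's algorithm — the graph construction and the min-label rule are assumptions), the code below grows a small scale-free network by preferential attachment and lets each agent repeatedly adopt the smallest coalition label among its neighbours, a flooding-style consensus that merges all agents into one coalition on a connected network:

```python
import random

def scale_free_graph(n, m=2, seed=0):
    """Grow a small scale-free network by preferential attachment
    (a simplified Barabasi-Albert construction)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    targets = list(range(m))
    repeated = []  # nodes repeated once per edge endpoint -> degree bias
    for v in range(m, n):
        for t in targets:
            adj[v].add(t)
            adj[t].add(v)
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = list({rng.choice(repeated) for _ in range(m)})
    return adj

def form_coalition(adj, rounds=50):
    """Each agent starts in its own coalition and repeatedly adopts the
    smallest label it can see -- a flooding-style distributed consensus."""
    label = {v: v for v in adj}
    for _ in range(rounds):
        for v in adj:
            label[v] = min([label[v]] + [label[u] for u in adj[v]])
    return label

adj = scale_free_graph(60)
labels = form_coalition(adj)
print(len(set(labels.values())))  # -> 1: a single grand coalition
```

Because the constructed network is connected and the min-label rule is monotone, every agent converges to the same coalition within a number of rounds bounded by the network diameter; the research questions above concern far harder settings, e.g. self-interested agents with only local topological information.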

  • Leveraging Structural Characteristics of Interdependent Networks to Model Non-linear Cascading Risk (NPS - Phase 2)

The goal of this research is to proactively study the role of network characteristics as they relate to cascading risks in highly dependent Major Defense Acquisition Program (MDAP) clusters. It will also contribute toward automating the data extraction and analysis of Defense Acquisition Execution Summary (DAES) documents while identifying the associated data acquisition challenges.

  • Situational Awareness in Social Media

The goal of this project is to detect and gain real-time situational awareness of critical information in a social media space (SMS). It will explore how to know that something “pertinent” is being said in an SMS as well as how to discover and handle the unexpected. [More Info]

Clinical Informatics and Medical Decision Support

  • Data Science for prediction and prevention of preterm births

Our approach to preterm birth prevention brings to bear stochastic methods to derive accurate, multidimensional prevention models from large collections of observational data. These methods will support prevention relative to different stages of pregnancy. [More Info]

Decentralized Learning and Decision-theoretic Control

  • Value-aware Agents for Ethical Decision making in E-government 

We are investigating a reusable means of bringing the viewpoints of power-limited elements of the population and usually overlooked dimensions of value into the public policy decision space and e-government. [More Info]

Past Projects

Learning only where Learning is Required (NSF)

The fundamental question addressed in this work is how to determine and obtain the minimal overlapping context among decentralized decision makers required to make their decisions more consistent. Our approach is a two-phased learning process where agents first learn their policies offline within the context of a simplified environment where it is not necessary to know detailed context information about neighbors. These local policies are then applied in more complex "real" environments where agents are expected to encounter a much higher rate of inconsistencies (conflicts) with neighborhood actions. When conflicts are observed, agents switch to "special" states that augment local policy states with additional non-local state information and learn alternative actions to take in these specific situations. This results in action choices that are less likely to lead to conflicts. We evaluate our approach by addressing meta-level decisions in a complex multiagent weather tracking domain. [More Info]
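The two-phase idea can be sketched with a toy example (the bandit-style environment, reward numbers, and state names below are illustrative assumptions, not the project's actual domain): an agent first learns a local policy in isolation, then, once deployment conflicts with a neighbour appear, learns in a "special" state augmented with the neighbour's observed action:

```python
import random
from collections import defaultdict

rng = random.Random(0)
ACTIONS = [0, 1]

def q_update(Q, s, a, r, alpha=0.2):
    Q[(s, a)] += alpha * (r - Q[(s, a)])  # one-step value update

def greedy(Q, s, eps=0.1):
    if rng.random() < eps:
        return rng.choice(ACTIONS)        # occasional exploration
    return max(ACTIONS, key=lambda a: Q[(s, a)])

Q = defaultdict(float)

# Phase 1: learn offline in a simplified environment that ignores the
# neighbour entirely; action 0 looks slightly better in isolation.
for _ in range(200):
    a = greedy(Q, "local")
    q_update(Q, "local", a, 1.0 if a == 0 else 0.8)

# Phase 2: deploy in the "real" environment where a neighbour always
# takes action 0, so choosing 0 as well is a conflict (reward 0).  After
# a conflict the agent switches to a "special" state augmented with the
# neighbour's observed action and learns what to do there instead.
state = "local"
for _ in range(2000):
    a = greedy(Q, state)
    neighbour = 0
    conflict = (a == neighbour)
    q_update(Q, state, a, 0.0 if conflict else 1.0)
    state = ("special", neighbour) if conflict else "local"

print(max(ACTIONS, key=lambda a: Q[(("special", 0), a)]))  # -> 1
```

The augmented state lets the agent keep its cheap local policy for the common case while learning a conflict-avoiding response only where conflicts actually occur.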

Modeling Nonlinear Cascading Consequences in Interdependent Networks (NPS - Phase 1 Completed)

This research seeks to study the cascading consequences of interdependencies in highly dependent networks. The project builds on extensive data that we have collected over the years in the form of Selected Acquisition Report (SAR) documents for Major Defense Acquisition Programs (MDAPs) and Program Element (PE) documents. The goal is to 1) examine the interdependent regions from multiple perspectives (data flow/funding) using nonlinear methods that allow for “what-if” analyses; 2) determine a probabilistic function that uses the past performance of programs to predict future performance; and 3) identify the challenges in acquiring the data from the government and program managers. [More info]

Multiagent Meta-level Control for Tracking Tornadoes (NSF)

NetRads [Zink05a, Zink05b] is a network of adaptive radars controlled by a collection of Meteorological Command and Control (MCC) agents that decide where to scan based on emerging weather conditions. Meta-level control in this application balances the resources spent on local combinatorial optimization against the number of negotiation cycles. This is important because in certain situations it is better to do a good job in local optimization and allocate fewer cycles to negotiation, while in other situations more cycles for negotiation would be better. For example, if there are many boundary tasks, then having more negotiation cycles to coordinate the scanning tasks may be preferable. This work involves gathering data to develop a methodology for determining where this balance lies and developing techniques to automate the meta-level control decision-making process. [More Info]

Thinking about Thinking in COORDINATORs (Honeywell/DARPA-IPTO)

Coordinators are intelligent agents that provide coordination support to military field units. Each coordinator agent is composed of multiple modules such as the Task Analysis module (TaskMod), Coordination Module (CoordMod), Organization Module (OrgMod) and so on. MetaCognition is the ability of agents to reason about their deliberations, i.e. think about thinking. In the context of Coordinators, the MetaCognition module (MetaMod) reasons about resource allocations to the other modules in time-constrained situations. [More Info]

Visual Analytics (PNL/DHS)

Knowledge gathering and investigative tasks in open environments are very complex because the problem-solving context is constantly evolving, and the data may be incomplete, unreliable and/or conflicting. These tasks are time-critical and typically involve identifying and tracking multiple hypotheses, gathering evidence to validate the correct hypotheses, and eliminating the incorrect ones. Visual analytics is the science of applying reasoning and analysis techniques to large, complex real-world data for problem solving using visualizations. In RESIN, we designed and developed a mixed-initiative reasoning agent that will assist investigative analysts in foraging tasks and performing predictive analysis using blackboard-based reasoning, visualization and an intelligent user interface. [More Info]

Introspection in Analytical Agents

Report generation is an integral part of many analytical tasks such as intelligence analysis. Thus, an important issue in developing cognitive assistants for analytical tasks is how the cognitive assistant may help a human analyst in generating reports. In this work, we first describe the task of report generation in intelligence analysis. We then describe a scheme for enabling a cognitive assistant to generate self-explanations by introspecting over its knowledge, reasoning, and conclusions. Finally, we analyze the introspective scheme for generating self-explanations from the perspective of report generation. [More Info]

WLAN Management using MAS  (NSF)

This research investigates cooperative resource management in WLAN (wireless local area networks) /WPAN (wireless personal area networks) interference environments. The objective of this research is to manage shared system resources fairly among multiple WLANs to optimize the overall performance. Results from the project are expected to have a significant impact on next generation WLAN network management based on employing algorithms of agent interaction and coordination to facilitate resource management, predictive models for parameter estimation, and dynamic load balancing algorithms. [More Info]
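One ingredient mentioned above, dynamic load balancing, can be sketched greedily (the access-point names, capacities, and reachability sets below are invented for illustration; real WLAN/WPAN management would also weigh interference and channel state):

```python
def balance(clients_reach, capacity):
    """Greedily associate each client with the least-loaded access point
    it can reach, where load is measured relative to AP capacity."""
    load = {ap: 0 for ap in capacity}
    assignment = {}
    for client in sorted(clients_reach):  # deterministic client order
        reachable = clients_reach[client]
        ap = min(reachable, key=lambda a: load[a] / capacity[a])
        assignment[client] = ap
        load[ap] += 1
    return assignment, load

# Hypothetical topology: three clients hear both APs, one hears only ap2.
clients = {"c1": ["ap1", "ap2"], "c2": ["ap1", "ap2"],
           "c3": ["ap1", "ap2"], "c4": ["ap2"]}
assignment, load = balance(clients, {"ap1": 2, "ap2": 2})
print(load)  # -> {'ap1': 2, 'ap2': 2}
```

A greedy one-shot assignment like this is only a baseline; the cooperative, multi-agent version of the problem must rebalance continually as clients move and interference conditions change.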

Predictive Protocol Management in Sensor Networks

Wireless Sensor Networks (WSNs) are a subset of wireless networking applications focused on enabling sensor and actuator connectivity without the use of wires. Energy consumption among the wireless devices participating in these networks is a major constraint on deployment for the broad range of applications enabled by WSNs. This work introduces a novel methodology based on predictive protocol management with contingency planning (PPM and CP). This approach allows efficient update of the WSN operational mode in order to optimize energy utilization based on the time-varying characteristics of the radio-frequency (RF) environment in which the network operates. [More Info]

Mathematical Analysis of Uncertainty Propagation in Agent Control

Mathematical models of complex processes provide precise definitions of the processes and facilitate the prediction of process behavior for varying contexts. In this work, we study a numerical method for modeling the propagation of uncertainty in a multi-agent system (MAS) and a qualitative justification for this model. This model will help determine the effect of various types of uncertainty on different parts of the multi-agent system; facilitate the development of distributed policies for containing the uncertainty propagation to local nodes; and estimate the resource usage for such policies. [More Info]
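As a minimal numerical illustration of uncertainty propagation (a generic textbook sketch, not this project's model; the task durations are invented): for a sequence of independent tasks, the mean and variance of the total duration are simply the sums of the per-task means and variances, which a Monte Carlo estimate confirms:

```python
import random
import statistics

rng = random.Random(0)

# Two sequential tasks with independent, roughly Gaussian durations.
tasks = [(10.0, 2.0), (5.0, 1.0)]  # (mean, standard deviation)

# Analytic propagation: means add, and for independent tasks so do variances.
mean = sum(m for m, _ in tasks)       # 15.0
var = sum(s * s for _, s in tasks)    # 5.0
print(mean, var)

# Monte Carlo check of the analytic result.
samples = [sum(rng.gauss(m, s) for m, s in tasks) for _ in range(50_000)]
print(statistics.mean(samples), statistics.variance(samples))
```

Propagating means and variances analytically through a task structure like this is what makes it possible to estimate, cheaply and ahead of time, how uncertainty at one node of a multi-agent system affects the rest.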

Safety in Multi-Agent Systems

Conservative design is the ability of an individual agent to ensure predictability of its overall performance even if some of its actions and interactions may be inherently less predictable or even completely unpredictable. We study the importance of conservative design in cooperative multi-agent systems and briefly characterize the challenges that need to be addressed to achieve this goal. [More Info]

A Reinforcement Learning Approach for Meta-level Control

Sophisticated agents operating in open environments must make decisions that efficiently trade off the use of their limited resources between dynamic deliberative actions and domain actions. This is the meta-level control problem for agents operating in resource-bounded multi-agent environments. Control activities involve decisions on when to invoke scheduling and coordination of domain activities, and how much effort to put into them. The focus of this work is how to make effective meta-level control decisions in a task allocation domain. [More info]
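A toy version of this trade-off (the utility shape, costs, and effort levels are all assumptions for illustration): treat the number of deliberation cycles as an arm of a bandit whose payoff is solution quality, which rises with diminishing returns, minus the opportunity cost of the cycles, and learn the best effort level with epsilon-greedy estimates:

```python
import random

rng = random.Random(0)

def utility(k, noise=0.02):
    """Payoff of spending k control cycles on scheduling/coordination:
    quality rises with diminishing returns, each cycle has a fixed cost."""
    quality = 1.0 - 0.5 ** k
    cost = 0.15 * k
    return quality - cost + rng.gauss(0.0, noise)

# Epsilon-greedy bandit over effort levels 0..5: a minimal meta-level
# controller that learns how much deliberation is worth its cost.
levels = range(6)
est = {k: 0.0 for k in levels}
n = {k: 0 for k in levels}
for _ in range(3000):
    k = rng.choice(list(levels)) if rng.random() < 0.1 else max(levels, key=est.get)
    u = utility(k)
    n[k] += 1
    est[k] += (u - est[k]) / n[k]  # running-average estimate

best = max(levels, key=est.get)
print(best)
```

With these invented numbers the learned optimum is an intermediate effort level: zero deliberation wastes opportunities for quality, while maximal deliberation costs more than the quality it adds.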

Intelligent Home Environment

Intelligent environments are an interesting development and research application problem for multi-agent systems. The functional and spatial distribution of tasks naturally lends itself to a multi-agent model and the existence of shared resources creates interactions over which the agents must coordinate. In the UMASS Intelligent Home project, we have designed and implemented a set of distributed autonomous home control agents and deployed them in a simulated home environment. Our focus is primarily on resource coordination, though this project has multiple goals and areas of exploration ranging from the intellectual evaluation of the application as a general MAS testbed to the practical evaluation of our agent building and simulation tools. [More Info]

Generic Coordination Strategies for Agents

The GPGP/TÆMS domain-independent coordination framework for small agent groups was first described in 1992 and then more fully detailed in an ICMAS’95 paper. In this paper, we discuss the evolution of this framework, which has been motivated by its use in a number of applications, including: information gathering and management, intelligent home automation, distributed situation assessment, coordination of concurrent engineering activities, hospital scheduling, travel planning, repair service coordination, and supply chain management. First, we review the basic architecture of GPGP and then present extensions to the TÆMS domain-independent representation of agent activities. We next describe extensions to GPGP that permit the representation of situation-specific coordination strategies and social laws as well as making possible the use of GPGP in large agent organizations. Additionally, we discuss a more encompassing view of commitments that takes into account uncertainty in commitments. We then present new coordination mechanisms for use in resource sharing and contracting, and more complex coordination mechanisms that use a cooperative search among agents to find appropriate commitments. We conclude with a summary of the major ideas underpinning GPGP, an analysis of the applicability of the GPGP framework including performance issues, and a discussion of future research directions. [More Info]

Robustness in Agent Control

Open environments are characterized by their uncertainty and non-determinism. Agents need to adapt their task processing to available resources, deadlines, the goal criteria specified by the clients, as well as their current problem-solving context in order to survive in these environments. If there were no resource constraints, then an optimal Markov Decision Process based policy would obviously be the best way for complex problem solving agents to make scheduling decisions. However, in many agent systems these scheduling decisions have to be made online or in soft real-time, making the off-line policy computationally infeasible in open environments. The hybrid planner/scheduler used to control TÆMS agents is the Design-to-Criteria (DTC) agent scheduler. Design-to-Criteria scheduling is the soft real-time process of custom building a plan/schedule to meet an agent’s current objectives, which are expressed as dynamic goal criteria (including real-time deadlines), using task models that describe alternate ways to achieve tasks and subtasks. Recent advances in Design-to-Criteria control include the addition of uncertainty to the TÆMS computational task models analyzed by the scheduler and the incorporation of uncertainty in the scheduling process. As we show, the use of uncertainty in TÆMS and Design-to-Criteria enables agents to make better control decisions in uncertain environments. Design-to-Criteria uses a heuristic approach for online scheduling of medium granularity tasks. It approximates the analysis used to generate an optimal policy by heuristically reasoning about the implications of uncertainty in task execution. [More Info]

Information Gathering

The World Wide Web has become an invaluable information resource but the explosion of available information has made web search a time consuming and complex process. The large number of information sources and their different levels of accessibility, reliability and associated costs present a complex information gathering coordination problem. This paper describes the rationale, architecture, and implementation of a next generation information gathering system – a system that integrates several areas of Artificial Intelligence research under a single umbrella. Our solution to the information explosion is an information gathering agent, BIG, that plans to gather information to support a decision process, reasons about the resource trade-offs of different possible gathering approaches, extracts information from both unstructured and structured documents, and uses the extracted information to refine its search and processing activities. [More Info]