# Measuring Inconsistency in Argument Graphs

###### Abstract

There have been a number of developments in measuring inconsistency in logic-based representations of knowledge. In contrast, the development of inconsistency measures for computational models of argument has been limited. To address this shortcoming, this paper provides a general framework for measuring inconsistency in abstract argumentation, together with some proposals for specific measures, and a consideration of measuring inconsistency in logic-based instantiations of argument graphs, including a review of some existing proposals and a consideration of how existing logic-based measures of inconsistency can be applied.

## 1 Introduction

Argumentation is an important cognitive ability for handling conflicting and incomplete information such as beliefs, assumptions, opinions, and goals. When we are faced with a situation where we find that our information is incomplete or inconsistent, we often resort to the use of arguments for and against a given position in order to make sense of the situation. Furthermore, when we interact with other people we often exchange arguments to reach a final agreement and/or to defend and promote an individual position.

In recent years, there has been substantial interest in the development of computational models of argument for capturing aspects of this cognitive ability (for reviews see [BCD07, BH08, RS09]). This has led to the development of a number of directions including: (1) abstract argument models where arguments are atomic, and the emphasis is on the relationships between arguments; (2) logic-based (or structured) argument models where the emphasis is on the logical structure of the premises and claim of the arguments, and the logical definition of relations between arguments; and (3) dialogical argument models where the emphasis is on the protocols (i.e. allowed and obligatory moves that can be taken at each step of the dialogue) and strategies (i.e. mechanisms used by each participant to make the best choice of move at each step of the dialogue).

At the core of computational models of argument is the ability to represent and reason with inconsistency. So it is perhaps surprising that relatively little consideration has been given to measuring inconsistency in these models, particularly given the number of developments in measuring inconsistency in logic-based knowledgebases (see for example [Kni01, HK04, DRMO10, HK10, GH11b, MLJ12, JMR14, Bes14, Thi16]). A couple of exceptions are the consideration of the degree of undercut between an argument and counterargument [BH05, BH08], and measuring inconsistency through argumentation [Rad15]. Note that the approach of weighted argumentation frameworks [DHM11] is not a measure of inconsistency, as it assumes extra information (weights) labelling each arc, together with an inconsistency budget that allows arcs whose weights sum to no more than the budget to be ignored.
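To make the contrast concrete, the inconsistency-budget idea from weighted frameworks can be sketched as follows. This is a minimal illustration, not the formal definition from [DHM11]; the function name and the dictionary encoding of weighted arcs are assumptions made for this sketch.

```python
from itertools import combinations

def ignorable_arc_sets(weighted_arcs, budget):
    """Return all subsets of arcs whose total weight is within the budget.

    weighted_arcs: dict mapping an arc (a pair of arguments) to its weight.
    budget: the inconsistency budget; any returned subset may be ignored.
    """
    arcs = list(weighted_arcs)
    result = []
    for r in range(len(arcs) + 1):
        for subset in combinations(arcs, r):
            if sum(weighted_arcs[arc] for arc in subset) <= budget:
                result.append(set(subset))
    return result
```

The point of the sketch is that the budget selects *which arcs may be dropped* given extra weight information; it does not itself quantify how inconsistent the graph is.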

There are a number of reasons why it is useful to investigate the measurement of inconsistency in argumentation: (1) to better characterize the nature of inconsistency in argumentation; (2) to analyse the inconsistency arising in specific argumentation situations; and (3) to direct the resolution of inconsistency as arising in argumentation. We will consider contributions to these three areas during the course of this chapter.

Given the central role of argument graphs (where each node is an argument and each arc denotes one argument attacking another) in modelling argumentation, we will consider the inconsistency of an argument graph. This is useful if we want to assess the overall conflict that is manifested by an argument graph, and we want to focus on actions that may allow us to decrease the graph inconsistency.

Consider for example some security analysts who are analyzing some conflicting reports concerning a foreign country that may be descending into civil war. These analysts may enter into a process as follows: (1) they collect relevant information concerning the political and security situation in the country; (2) they construct arguments from this information that draw tentative hypotheses about the situation in the country; (3) they compose these arguments into an argument graph; (4) they measure the inconsistency of the argument graph; (5) they use the measure of inconsistency to identify information requirements (i.e. queries to ascertain whether a particular argument should be accepted or rejected) that would result in commitments being made for some of these arguments; (6) they seek the answers to these queries; (7) they use these commitments to reduce the overall inconsistency of the graph; and (8) they terminate this process when sufficient commitments have been made so as to reduce the inconsistency to a sufficiently low level.
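The measure-and-commit loop in steps (4)-(8) can be sketched in code. This is a toy illustration only: the measure below simply counts mutual attacks, standing in for the measures developed later in the paper, and the commitment step (which in practice would involve querying an analyst) is reduced to withdrawing one attack per iteration. All names are hypothetical.

```python
def inconsistency(graph):
    """Toy stand-in measure: the number of mutually attacking pairs."""
    nodes, arcs = graph
    return sum(1 for (a, b) in arcs if (b, a) in arcs) / 2

def resolve(graph, threshold=0):
    """Iteratively commit to arguments until inconsistency is low enough."""
    nodes, arcs = set(graph[0]), set(graph[1])
    while inconsistency((nodes, arcs)) > threshold:
        # Steps (5)-(7): pick one conflicting pair, commit to argument a
        # by withdrawing b's attack on it.
        a, b = next((a, b) for (a, b) in arcs if (b, a) in arcs)
        arcs.discard((b, a))
    return nodes, arcs
```

A real instantiation would replace the toy measure with a principled one and replace the arbitrary commitment choice with the answers obtained from the analysts' queries.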

This kind of process may be of relevance to security analysts as a way to augment recent proposals for argument-based security analysis technology such as that by Toniolo et al [TNE15]. Furthermore, this kind of process may be replicated in roles such as business intelligence analysis, policy planning, political planning, and science research.

We proceed as follows: (Section 2) We review the basic definitions of abstract argumentation, considering both extension-based and label-based approaches; (Section 3) We investigate a general framework for measuring inconsistency in abstract argumentation, together with some proposals for specific measures; (Section 4) We review deductive argumentation for instantiating abstract argument graphs, we review an existing proposal for measuring inconsistency in deductive argumentation called degree of undercut, and we investigate a new approach that harnesses existing logic-based measures; (Section 5) We consider how we can use measures of inconsistency to direct the resolution of inconsistency in argumentation; (Section 6) We conclude with a discussion of the proposals in the paper and of future work.

## 2 Review of abstract argumentation

Our framework builds on more general developments in the area of computational models of argument. These models aim to reflect how human argumentation uses conflicting information to construct and analyze arguments. There are a number of frameworks for computational models of argumentation. They incorporate a formal representation of individual arguments and techniques for comparing conflicting arguments (for reviews see [BCD07, BH08, RS09]). By basing our framework on these general models, we can harness theory and adapt implemented argumentation software as the basis of our solution.

### 2.1 Extension-based semantics

We start with a brief review of abstract argumentation as proposed by Dung [Dun95]. In this approach, each argument is treated as an atom, and so no internal structure of the argument needs to be identified.

###### Definition 1.

An argument graph is a pair $G = (\mathcal{A}, \mathcal{R})$ where $\mathcal{A}$ is a set and $\mathcal{R}$ is a binary relation over $\mathcal{A}$ (in symbols, $\mathcal{R} \subseteq \mathcal{A} \times \mathcal{A}$). Let $\mathrm{Nodes}(G)$ be the set of nodes in $G$ and let $\mathrm{Arcs}(G)$ be the set of arcs in $G$.

So an argument graph is a directed graph. Each element $A \in \mathcal{A}$ is called an argument and $(A, B) \in \mathcal{R}$ means that $A$ attacks $B$ (accordingly, $A$ is said to be an attacker of $B$). So $A$ is a counterargument for $B$ when $(A, B) \in \mathcal{R}$ holds.
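Under the usual Dung-style reading, an argument graph is just a pair of a set of arguments and an attack relation, which makes a direct encoding straightforward. The following is a minimal sketch; the pair representation and the `attackers` helper are choices made for this illustration.

```python
def attackers(graph, b):
    """Return the set of arguments that attack argument b."""
    nodes, arcs = graph
    return {a for (a, x) in arcs if x == b}

# A two-argument graph where A and B attack each other.
G = ({"A", "B"}, {("A", "B"), ("B", "A")})
```

Here `attackers(G, "B")` yields `{"A"}`, reflecting that A is a counterargument for B.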

###### Example 1.

Consider arguments $A_1$ = "Patient has hypertension so prescribe diuretics", $A_2$ = "Patient has hypertension so prescribe beta-blockers", and $A_3$ = "Patient has emphysema which is a contraindication for beta-blockers". Here, we assume that $A_1$ and $A_2$ attack each other because we should only give one treatment and so giving one precludes the other, and we assume that $A_3$ attacks $A_2$ because it provides a counterargument to $A_2$. Hence, we get the following abstract argument graph.
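This example graph can be written down directly as a set of arguments and a set of attack arcs. In the sketch below, A1 abbreviates the diuretics argument, A2 the beta-blockers argument, and A3 the emphysema contraindication; the pair encoding and helper function are illustrative choices, not part of the formal definition.

```python
# The three clinical arguments and the attacks between them:
# A1 and A2 attack each other; A3 attacks A2.
nodes = {"A1", "A2", "A3"}
arcs = {("A1", "A2"), ("A2", "A1"), ("A3", "A2")}

def is_counterargument(arcs, a, b):
    """A is a counterargument for B when (A, B) is in the attack relation."""
    return (a, b) in arcs
```

Note the asymmetry: the contraindication argument attacks the beta-blockers argument, but nothing in the graph attacks the contraindication argument itself.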