1. Introduction and background

The background for the theoretical work in this paper is an ongoing study of decision making under conditions of internal command contention and situational uncertainty, applied to the domain of military command and control (C2). The theory provides a foundation for the understanding of C2 agility. In this respect, it goes further than other military command decision studies, which treat C2 as a process (Wang and Wang, 2010) or model the commander as a player in a two-sided game (Medhurst et al, 2009). At the same time, the theory has been developed alongside a series of studies and has been supported, hence partially validated, by command decision-making experiments using UK Battle Group (BG) commanders (Dodd et al, 2003). The experiments presented BG commanders with situations of uncertainty and command contention, such that their courses of action, when seen from a tactical viewpoint, were potentially at odds with the broader campaign objectives. The experiments showed there were several ways in which the commanders dealt with the internal decision conflict:

  • ignore the higher-level command objective completely;

  • explicitly place little or no weight on the higher-level command objective;

  • explicitly place all or very great weight on the higher-level command objective at the expense of risking severe tactical losses;

  • focus attention only on the attributes of the situation that give weight to the course of action that feels most comfortable;

  • create a novel course of action that they hope might satisfy both objectives and might also ‘hedge’ against the uncertainty.

In Dodd et al (2006), we detail results from the BG command decision experiments studying how experienced personnel respond to conflicting objectives in two different scenarios. The first was a combat mission where there was a high risk of casualties. The second was a peacekeeping mission with a risk of attack posed to a civilian convoy, where the commander had to balance the efficacy of defence against attack with that of a negotiated passage. Participants formally documented their decision processes and their rationales for placing more or less weight and attention on objectives and situational attributes.

On the basis of these findings, the research challenge was then to develop the existing theory on discontinuity in decision making in order to further our understanding of how and when (and perhaps why) to adapt the weighting placed on a given level of command objective. In other words, how might we understand how to apply a C2 regulatory function that acts as an arbiter agent, whose role is to balance the weightings and moderate command decision making according to the situation as a whole?

The particular concept around which the theory is set is drawn from UK defence doctrine, which introduced the concept of a C2 rheostat (MOD Joint Doctrine and Concepts Centre, 2003). As such, the C2 rheostat can be set to impose a top-down form of C2 at one extreme and a totally distributed form of C2 at the other extreme. Mission Command, generally adopted and used by the British military, lies at a mid-position and assumes that command intents are cascaded (usually downwards from strategic to operational to tactical) in a nested set of mission statements. For example, ‘Search and clear area ALPHA and secure roads Y and Z in order to allow safe passage of civilians and humanitarian supplies in order to restore stability in the region’. Such orders are usually limited to stating only the intents of command levels that are two (and at most three) levels apart. It is for this reason that the theory developed here begins with an abstracted two-level problem, simplified to having two C2 agents, one whose role is to meet campaign objectives and the other whose role is to meet tactical objectives.

It follows then that the theory assumes there is a C2 regulatory ‘arbiter’ whose purpose is to determine the level to which decisions can be devolved (eg, the level at which decisions can be made without explicit reference back up the command levels for authority to choose and carry out a tactical course of action). An extreme form of such devolved decision making has been simulated previously for the US Department of Defense (Alston et al, 2006) and was called an Edge Organization (Alberts and Hayes, 2003), so called because all forms of regulatory function, and all decision rights, were devolved right down to the fighting elements at the edge. The key function of a C2 regulatory arbiter agent, therefore, is to determine the nature of the conditions (across the situation as a whole) under which decisions are being faced and then to moderate the devolution of decision making appropriately.

2. Introduction to the military problem

The premise for this paper is that military C2 decisions can be devolved to varying levels of decision maker, as appropriate for the prevailing operating conditions. For example, in the United Kingdom through Mission Command (Moffat, 2002), it has proved effective to communicate mission orders in broad terms only, and to devolve real-time tactical decision making to an experienced commander who is best placed and well able to appreciate and respond to what is happening on the ground.

This paper addresses the concept of a C2 regulatory agent whose purpose is to understand the implications of devolving decision making given the specific characteristics of the operational context and the conditions under which the decisions are being made. The C2 regulatory agent and the fielded commanders are therefore players in a collaborative game. The responsibility for regulating C2 decision making and devolving decisions usually resides at a high level of command, and usually for good reason: there is a real need for human judgement based on experience. Such a C2 regulatory function is traditionally placed at a high, and often remote, level of command. As such, only some aspects of the geometry of any particular commander's belief and utility functions are known to it. The work presented in this paper will make explicit what such a C2 regulatory function needs if it is to determine when to devolve decision making (ie, assuming that such discretionary trust can be granted to those who are in closest touch with ongoing events) and when to communicate orders more prescriptively (ie, adopting ‘top-down’ or centralized C2).

In this paper, building on our observations of the behaviour of experienced UK BG commanders in simulated decision scenarios (Dodd et al, 2006), we develop a more formal framework within which the degree of decision ‘autonomy’ can be related, via commanders’ capabilities, to the specific demands of the operational context. We focus on those scenarios that are most difficult to manage: that is, those where there is goal contention (ie, current tactical objectives conflict with broader campaign objectives) and situational uncertainty. This should help to form a basis for development of agents that can perform the C2 arbiter role and it will also provide a formal understanding of what is required to achieve C2 agility.

C2 decision regulation should generally aim to preserve coherence through contiguity; that is, encourage commanders of different battle groups within geographic or operational proximity to choose actions that are tactically and operationally coherent. For example, it should try to avoid one commander retreating while another is carrying out a hasty attack, with potentially chaotic and counter-productive consequences. C2 decision regulation should also strive to minimize command contradiction; that is, to avoid a commander having to face a complete turnaround in a previously made decision. Maintaining these two principles aims to avoid command decision stressors that can lead to, for example, hypervigilance (Janis and Mann, 1977) or decision suppression (Dodd, 1997), and that can also jeopardize a commander's ability to act rationally and coherently thereafter. (Note that hypervigilance, which is a state of over-sensitivity to incoming sensory signals, and decision suppression are common in situations of high uncertainty and disorder.)

Furthermore, while small adjustments in intensity of engagement are often possible and can often be made at limited cost, abrupt changes, where a commander faced with contradiction tries to adjust dramatically midstream, can be very costly in a wide range of scenarios.

UK military commanders, generally speaking, are expected to act rationally and accountably, within the context of their training and experience. Here, we interpret this expectation in a Bayesian way: commanders should choose a course of action that maximizes their expected utility (or at least try to minimize their likelihood of loss). Explicitly, we assume that commanders choose a decisive action d∈D from the potentially infinite set of decision options D available, so as to maximize the expectation of their utility function U. However, it would not be reasonable for a higher command to expect its personnel to try to evaluate and take into account the potential acts of all other contiguous commanders. Therefore, each commander will be treated as if they were an agent within a C2 regulatory framework.

The simplest way to capture the conflict scenario described above is to assume that each commander's utility function U(d, x ∣ λ_1) has two value-independent attributes x=(x_1, x_2) (French and Rios Insua, 2000), with parameter vector λ_1 capturing the overall shapes of the commander's functions representing their beliefs and preferences over outcomes. The first attribute measures the ongoing outcome-state of the current (tactical) mission. The second measures the extent to which the integrity of the overall campaign is preserved. The two sets of outcome measures may or may not have common elements; variables such as number of casualties may appear in both sets, although possibly at differing levels of granularity. Under this assumption, for all decisions d∈D and x_i∈χ_i, where χ_i is the sample space of attribute i (i=1, 2), the commander's utility function has the form
U(d, x ∣ λ_1) = k_1(λ_1)U_1(d, x_1 ∣ λ_1) + k_2(λ_1)U_2(d, x_2 ∣ λ_1),   (1)
where each marginal utility U_i(d, x_i ∣ λ_1) is a function of its arguments only and the criterion weights k_i(λ_1) satisfy k_i(λ_1)⩾0, i=1, 2, and k_1(λ_1)+k_2(λ_1)=1 (Keeney and Raiffa, 1976; von Winterfeldt and Edwards, 1986). The rational commander then chooses a decision option d*(λ)∈D—called a Bayes decision—to maximize the expected utility
Ū(d ∣ λ) = k_1(λ_1)Ū_1(d ∣ λ) + k_2(λ_1)Ū_2(d ∣ λ),   (2)
where λ=(λ_1, λ_2)∈Λ—its possible set of values—and
Ū_i(d ∣ λ) = ∫_{χ_i} U_i(d, x_i ∣ λ_1) p_i(x_i ∣ λ_2) dx_i,   i=1, 2.
The known vector λ_2 will be a function of the hyperparameters defining the commander's subjective posterior distribution—here defined by the density p_i(x_i ∣ λ_2) of attribute x_i, i=1, 2.
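
To make this set-up concrete, the following sketch evaluates a two-attribute expected utility of the form of Equation (1) by Monte Carlo and picks the maximizing decision over a discretized decision set. All functional forms and parameter values here are hypothetical illustrations, not quantities estimated from the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def utility_mission(d, x1):
    # Hypothetical marginal utility U_1: improves with intensity d and mission outcome x1.
    return 1.0 - np.exp(-(0.5 * d + x1))

def utility_campaign(d, x2):
    # Hypothetical marginal utility U_2: degrades with intensity d, improves with campaign outcome x2.
    return np.exp(-0.8 * d) * x2

def expected_utility(d, k1=0.6, k2=0.4, n=5000):
    # Monte Carlo expectation over hypothetical posteriors p_i(x_i | lambda_2).
    x1 = rng.beta(2.0, 3.0, n)   # posterior belief about the mission outcome
    x2 = rng.beta(4.0, 2.0, n)   # posterior belief about the campaign outcome
    return k1 * utility_mission(d, x1).mean() + k2 * utility_campaign(d, x2).mean()

candidates = np.linspace(0.0, 3.0, 61)          # discretized decision options D
bayes_d = candidates[int(np.argmax([expected_utility(d) for d in candidates]))]
print(f"Bayes decision d* ~ {bayes_d:.2f}")
```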

We now investigate the extent to which a C2 regulatory agent can ensure that the commander's marginal utilities and criteria weights appropriately address the C2 regulatory principles of retaining contiguity and avoiding—as far as is possible—commander contradiction (and so maintaining overall coherence and balance).

The commander has a free choice of how to set (and adapt) the parameters λ_1. However, the observed and appraised commander will have a utility function which reflects their understanding of the situation and their mission and campaign objectives. Qualitatively, a commander's courses of action can be classified into three broad categories. The first attempts to achieve simultaneously—at least partially—both the tactical objective and the broader campaign objectives; henceforth, we will call this type of decision a compromise. On the other hand, in a scenario where no course of action is likely to attain satisfactory resolution of either the mission or the campaign objectives simultaneously, a compromise will be perceived as futile. Rational choice will then need to focus on finding a combative action most likely to achieve the tactical mission objective while ignoring the broader campaign objectives, or alternatively choosing a circumspect action—focusing on avoiding jeopardizing the campaign while potentially aborting the tactical mission. The transition from the rational act being a compromise between objectives to a stark choice between combat and circumspection can be explained by examining the geometry of a commander's expected utility function. This geometry is remarkably robust to the choice of parametric models that might be used to represent uncertainty and any belief in outcomes or intended consequences. The type of course of action chosen is determined by:

  1. qualitative features of the descriptors of the operational conditions (eg, turbulence; Emery and Trist, 1965);

  2. the uncertainty of the situation (eg, poor information, unfamiliar tactics);

  3. the relative importance the commander places on the two objectives.

This robustness allows us to develop a useful general theory for decision making under conditions of internal command conflict and enables us to suggest remedial ways for a C2 regulatory agent to establish command conditions that will allow and encourage appropriate commander responses, taking commander capability into account. In Section 3, we formulate the problem and demonstrate some general properties of rational decision making in this context. In Section 4, we analyse how the geometry of the corresponding expected utility functions changes qualitatively under different combat scenarios and different types of commander. In Section 5, we discuss how, with some mild differentiability conditions, our taxonomy relates to the classification of catastrophes (Poston and Stewart, 1978; Zeeman, 1977) and give a number of illustrative examples. We end the paper by relating theory to observed behaviour and give some general recommendations for C2 regulation in the light of these geometrical insights.

3. Rational decisions for competing objectives

3.1 A probabilistic formulation

The commander's decision space D will consist of an open set of possible courses of action but will typically be constrained by many situational factors; for example, the available resources and the rules of engagement of the mission. However, for a wide class of scenarios we will be able to express any course of action as d=(d, d_1, d_2)∈D=D × D_1 × D_2, where D is a subset of the real line. In this paper, the component d will be a proxy measure for the intensity of the engagement associated with the chosen action. We assume that increasing the intensity of engagement does not reduce the commander's probability of successfully completing the tactical mission but is likely to have a potentially negative effect on the campaign (particularly now that the military is involved mostly in stabilization operations). Thus, it is not unusual for a mission to be successfully addressed by engaging tactically with a large and sharp response. However, the intensity of the engagement increases the potential for casualties, both to the commander's own unit and to the local civilian population. It is also likely to be increasingly politically deleterious and thus increasingly to the detriment of the campaign objectives.

For a chosen level of engagement intensity d, a commander will choose, to the best of their ability, between other courses of action d_1(d) associated with satisfying the tactical mission objectives given d and between other courses of action d_2(d) associated with preserving the integrity of the campaign. Usually, d_1 encodes specific tactics involved in achieving the current tactical mission. On the other hand, the decision d_2 encodes the judgements involved in securing the best use of human resources, preservation of life and retention of political integrity. Both d_1(d) and d_2(d) will usually be decided by the commander in the field and in response to the developing situation, although informed by protocol, rules of engagement and training. For the rest of this paper, we assume that it is possible to define the engagement intensity d in such a way that these two subsequent choices do not impinge on one another. Formally, this means that a commander's expected marginal utility is a function only of (d, d_i, λ), (d, d_i)∈D × D_i, i=1, 2. Here, λ is an index that represents personal, institutional or conditional aspects, such as personal daring, preferences and politics.

Now let d_1*(d) and d_2*(d) denote, respectively, choices with the ‘best’ likelihood of attaining the tactical mission objectives and the campaign objectives for a given intensity d. The assumption above makes it possible to characterize behaviour in terms of a one-dimensional decision space (see below). Figure 1 shows this dimension going from totally benign to super aggressive and also gives an illustration of a typical value plot.

Figure 1: Illustrative shape of V values as a function of engagement intensity (d).

Assuming that neither criterion weight is zero, in the Appendix we show that by taking a linear transformation of the expression in Equation (1), a commander's Bayes decision d* will maximize the function:
V(d ∣ λ) = exp{ρ(λ)} P_1(d ∣ λ) − P_2(d ∣ λ)   (3)
Here, temporarily suppressing the index λ, for i=1, 2

where the daring ρ(λ) satisfies
ρ(λ) = ρ_1(λ) + ρ_2(λ)   (4)
where

and where, for i=1, 2, u_i[0] and u_i[1] denote the worst and best possible outcomes—as foreseeable in the eyes of the commander—for each of the objectives. For technical reasons, it will be convenient to reparametrize λ so that there is a one-to-one correspondence between λ and (ρ(λ), λ′). Heuristically, λ′ simply spans the parameters in Λ other than ρ. From the constructions above, it is clear that P_1(d ∣ λ), P_2(d ∣ λ) can be chosen so that they are functions of λ only through λ′ and thus henceforth will be indexed as P_1(d ∣ λ′), P_2(d ∣ λ′).

Note here that P_1(d ∣ λ′) and P_2(d ∣ λ′) are, respectively, an increasing and a decreasing linear transformation of the commander's expected marginal utility (i=1, 2) on making what is considered to be the best possible decision consistent with choosing an intensity d of engagement. From the definition of d, note that the functions P_i(d ∣ λ′) are each distribution functions in d: that is, non-decreasing in d∈D, with
P_i(d ∣ λ′) → 0 as d → −∞ and P_i(d ∣ λ′) → 1 as d → ∞,
parametrized by λ′∈Λ′, and i=1, 2. Denote the smallest closed interval containing the support of P_i(d ∣ λ′) by [a_i(λ′), b_i(λ′)], i=1, 2, where by an abuse of notation we allow any of the lower bounds to take the value −∞ and any of the upper bounds ∞. Thus, a_1 is the value below which the intensity d is deemed useless for attaining even partial success in the mission. The upper bound b_1 is the lowest intensity that allows the commander to obtain total mission success. Similarly, a_2 is the highest value of intensity that can be used without damaging campaign objectives. The bound b_2 is the lowest value at which the campaign is maximally jeopardized. For obvious reasons, we will call b_1(λ′) pure combat and a_2(λ′) pure circumspection.

The meaning of these distributions can be best understood through the following simple but important special case.

Example 1 (zero—one marginal utilities)

  • When a mission is deemed to be either fully successful or to have failed, and the campaign either totally uncompromised or compromised, then P_1(d ∣ λ′) is the commander's probability that the mission is successful using intensity d and choosing other decisions associated with the mission in the best way possible under this constraint. (See Figure 2 for an illustration.)

    Figure 2: Composition of zero-one marginal utility function with an outcome probability function.

    On the other hand, P_2(d ∣ λ′) is the probability that the campaign will be jeopardized if the commander uses an intensity d. Note that the difference V defined above in Equation (3) balances these objectives, the relative weight given to mission success being determined by the value of the daring parameter ρ, with equal focus being given when ρ=0.

In the more common scenarios where the mission can be partially successful, the interpretation of P_i(d ∣ λ′), i=1, 2, in fact relates simply to the special case above. Specifically, the partially successful probable consequence of using an intensity d in the given scenario is considered by the commander to be equivalent to attaining the best possible mission success with probability P_1(d ∣ λ′) and maximal jeopardization of the campaign with probability P_2(d ∣ λ′).
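
As a numerical illustration of this interpretation, the sketch below evaluates the objective exp(ρ)P_1(d) − P_2(d) (the explicit form taken for Equation (3) above) on a grid and reports the optimal intensity for a few values of the daring ρ. The Normal distribution functions used for P_1 and P_2 are an arbitrary, purely illustrative choice.

```python
import numpy as np
from scipy.stats import norm

def V(d, rho):
    # P1: probability of (the equivalent of) full mission success at intensity d.
    # P2: probability of (the equivalent of) maximal campaign jeopardy at intensity d.
    P1 = norm.cdf(d, loc=0.6, scale=0.1)
    P2 = norm.cdf(d, loc=0.4, scale=0.1)
    return np.exp(rho) * P1 - P2

d_grid = np.linspace(0.0, 1.0, 1001)
for rho in (-1.0, 0.0, 1.0):
    d_star = d_grid[int(np.argmax(V(d_grid, rho)))]
    print(f"daring rho = {rho:+.1f} -> optimal intensity d* = {d_star:.3f}")
```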

One point of interest is that if V(P_1, P_2, ρ, λ′) is given by Equation (3) and we set Q_1=P_2, Q_2=P_1 and ρ̃=−ρ, then V(Q_1, Q_2, ρ̃, λ′) is a strictly decreasing linear transformation of V(P_1, P_2, ρ, λ′). Therefore, in particular, these two different settings share the same stationary points, but with all local minima of V(P_1, P_2, ρ, λ′) being local maxima of V(Q_1, Q_2, ρ̃, λ′) and vice versa. Henceforth, call V(Q_1, Q_2, ρ̃, λ′) the dual of V(P_1, P_2, ρ, λ′). The close complementary relationship between the geometry of a problem and its dual will be exploited later in the paper.
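
Writing out the explicit form taken for Equation (3), V(P_1, P_2, ρ, λ′) = exp{ρ}P_1 − P_2, and substituting Q_1 = P_2, Q_2 = P_1 and ρ̃ = −ρ, the duality claim follows in one line:

```latex
\[
V(Q_1, Q_2, \tilde{\rho}, \lambda')
  = e^{-\rho} Q_1 - Q_2
  = e^{-\rho} P_2 - P_1
  = -e^{-\rho}\bigl(e^{\rho} P_1 - P_2\bigr)
  = -e^{-\rho}\, V(P_1, P_2, \rho, \lambda'),
\]
```

and since −e^{−ρ}<0 this is indeed a strictly decreasing linear transformation of V(P_1, P_2, ρ, λ′).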

3.2 Resolvability

Ideally, a C2 regulatory agent should be adaptive enough to alternate between devolving decision making to the commander in the field and taking a top-down approach, prescribing that each commander focus on carrying out actions to achieve one or other of the objectives. There are two scenarios where it is straightforward for a C2 regulatory agent to decide between full-scale devolution and a top-down C2 approach. The first occurs when b_1(λ′)⩽a_2(λ′). Typically, in such conditions there is no overwhelming drive to be aggressive or purely combative. (See Figure 3 for an illustration.)

Figure 3: Illustrative shape of V against engagement intensity under conditions of resolvable contention, showing the interval over d within which the decision conflict is potentially resolvable.

We henceforth call this scenario resolvable for λ′∈Λ′ and call the closed interval [b_1(λ′), a_2(λ′)] the resolution interval for λ′∈Λ′. It is easy to see from Equation (3) that the set of the commander's optimal decisions requires d*(λ′)∈[b_1(λ′), a_2(λ′)], where V(d*(λ′) ∣ λ′)=exp ρ(λ′). Note that in particular both pure combat and pure circumspection are always Bayes decisions (as is any level of intensity between). In this case, although the commander's evaluation of performance V(d*(λ) ∣ λ) is clearly dependent on ρ, their decision need not depend on ρ. Therefore, the choice is simply a moderate intensity of engagement d*(λ′) in the interval above, enabling the simultaneous recognition and acknowledgement of choices optimized on mission and campaign objectives from choosing d_1*(λ′) and d_2*(λ′) to maximize each of their respective marginal utilities. In fact, much military training focuses on this type of scenario, where there exists at least one course of action which is ‘OK’ (Moffat, 2002) for both objectives. Good training regimes that ensure the commander can hedge (ie, identify both (d*(λ), d_1*(λ)) and (d*(λ), d_2*(λ))) will ensure that a utility-maximizing strategy will be found and will not be influenced by the often unknowable parameter ρ. A C2 regulatory agent should be most prepared to devolve decision making to a commander on the ground when a situation is readily resolvable, as illustrated in this simple case.

A second simple case occurs when b_2(λ′)⩽a_1(λ′). Typically, in such conditions there is a high degree of contention, where what is deemed to be OK for one objective is deemed to be absolutely not OK for the other. Henceforth, this scenario is called the unresolvable scenario for λ′∈Λ′. Here, there is no possibility of redeeming anything from one objective if the commander even partially achieves something towards the other. A rational commander's Bayes decision is either pure combat d*(λ)=b_1(λ′), optimizing mission objectives, or pure circumspection d*(λ)=a_2(λ′), maximizing campaign objectives, choosing the first option if ρ⩾0. In this scenario, a C2 regulatory agent therefore needs to account for the fact that a rational commander might apparently ignore completely one or other of the objectives depending on the sign of ρ. It is argued below that ρ can be unpredictable from the viewpoint of a C2 regulatory agent. Therefore, in such cases, which of the two extreme reactions will be chosen will be difficult for a C2 regulatory agent to predict and control. C2 regulation should therefore be most inclined to be set as prescriptive in scenarios that are unresolvable and when b_2(λ′) and a_1(λ′) are far enough apart for the choice between them to cause discontiguity or contradiction.

When scenarios are such that both intervals [a_i(λ′), b_i(λ′)], i=1, 2, are short—that is, when a commander will judge that the use of an intensity d will result in either complete failure or complete success, except in a small range, for both the mission and the campaign objectives—then most scenarios will be resolvable or unresolvable and appropriate C2 settings will usually be clear. Of course, many scenarios have the property that, by using a moderate level of intensity, compromise cannot be expected to fully achieve both objectives—as in the resolvable scenarios—but nevertheless might be a viable possibility—unlike in the unresolvable scenarios. The effect of an intensity d will have intermediate potential success with respect to the mission or campaign over a fairly wide range of values of d. To understand and control the movement from the resolvable to the unresolvable scenario, we will henceforth focus on these intermediate scenarios.
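
The taxonomy developed so far depends only on the relative position of the support endpoints a_1, b_1, a_2, b_2. A minimal helper of the following kind (a hypothetical sketch, not part of the formal development) makes the classification mechanical:

```python
def classify_scenario(a1, b1, a2, b2):
    """Classify a scenario from the smallest closed supports [a1, b1] of P1 and [a2, b2] of P2."""
    if b1 <= a2:
        return "resolvable"        # every d in [b1, a2] fully achieves the mission without jeopardy
    if b2 <= a1:
        return "unresolvable"      # only pure combat or pure circumspection remain rational
    if a1 == a2 and b1 == b2:
        return "boundary conflict"
    if a2 >= a1 and b1 <= b2:
        return "primal conflict"   # intersection of the two intervals is (a2, b1)
    if a1 >= a2 and b2 <= b1:
        return "dual conflict"     # intersection of the two intervals is (a1, b2)
    return "conflict (one interval properly contained in the other)"

print(classify_scenario(a1=0.2, b1=0.8, a2=0.5, b2=0.9))   # -> "primal conflict"
```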

Call a scenario a conflict when the interval [a(λ′), b(λ′)] is non-empty, where I(λ′) is the open interval defined by
I(λ′) = (a(λ′), b(λ′)) = (a_1(λ′), b_1(λ′)) ∩ (a_2(λ′), b_2(λ′)).
Here, there is contention due to opposing viewpoints and different perspectives on the situation; conflict in the ways in which the situation might be expected to go in terms of outcomes and the differing assessments of success or loss given those outcomes, all of which are natural and tend to occur in contemporary operations typified by volatility, uncertainty, complexity and ambiguity.

The most important scenarios of this type are ones where neither of the two intervals in the intersection above is properly contained in the other. The first—the primal conflict scenario—has a(λ′)=a_2(λ′) and b(λ′)=b_1(λ′). Here, the value of intensity at which the campaign begins to become progressively jeopardized is lower than the intensity at which the mission can be ensured to be fully successful.

Therefore, here we have a case in which there is a dominant priority for and preference towards the campaign aims taking precedence, yet the dominant views on the situation are from the more narrowly focused tactical mission perspective.

The second case—the dual conflict—has a(λ′)=a_1(λ′) and b(λ′)=b_2(λ′). It is more difficult for the commander but has some hope, because the intensity required to begin to have some success in the mission is lower than the intensity at which the campaign will be maximally jeopardized. Therefore, here we have a case in which there is a dominant priority for and preference towards the tactical mission aims taking precedence, yet the dominant views on the situation are from the broader-focused campaign perspective.

Note that each of the primal scenarios with associated potential V(P_1, P_2, ρ, λ′), with bounds [a_1, b_1] and [a_2, b_2] on P_1, P_2, respectively, has a dual scenario associated with its dual V(Q_1, Q_2, ρ̃, λ′), whose bounds on Q_1, Q_2 are, respectively, [a_2, b_2] and [a_1, b_1]. It follows that the geometry of dual conflicts can be simply deduced from their corresponding primal conflicts. Say a scenario is a boundary conflict if a_1=a_2 and b_1=b_2.

Henceforth, assume that P 1 and P 2 are absolutely continuous with respective densities p 1 and p 2 and that p 1 and p 2 are strictly positive in the interior of their support and zero outside it. Then, it is straightforward to check from Equation (3) that when

It therefore follows that, whatever the value of λ′∈Λ′, we can find a Bayes decision d*(λ′)∈I^+(λ′), where
I^+(λ′) = I(λ′) ∪ {a_2(λ′), b_1(λ′)}.
Henceforth, in this paper we will assume that the commander chooses their action from within the interval I^+(λ′). Therefore, in any of the above cases of decision conflict, d*(λ′)∈[a_2, b_1]. In a dual scenario, d*(λ′) is either at the extremes of intensity worth considering, a_2 or b_1, or lies in the open interval (a_1, b_2). The former is typical of some aspects of the ISAF campaign in Afghanistan, where the focus on stabilization works against tactical missions focused solely on acting with great intensity.

The latter represents the warriors’ preference for intense fighting set against their knowledge that they are there to establish and maintain stability and security.

We next study the effect of the value of the parameter ρ, which represents the degree of daring, on a commander's decisions.

3.3 Daring and intensity of action

Fix the value of λ′ and suppress this index. (This represents the regulatory agent taking the commander's capacities for perceiving and understanding the situation as given and fixed.) Then, for each d>d′, d, d′∈I^+(λ′), with the property that P_2(d)>0, there exists a large negative ρ such that
V(d′ ∣ ρ, λ′) > V(d ∣ ρ, λ′).
Thus, in this sense as ρ → −∞ the rational, accountable commander will choose a decision increasingly close to pure circumspection a 2. Such a condition may arise if there is great political pressure being brought to bear on the campaign and the eyes of the world's media are focused upon the decision makers.

On the other hand, for all fixed λ′, for each d<d′, d, d′∈I^+(λ′), with the property that P_1(d)>0, there exists a large positive ρ such that
V(d′ ∣ ρ, λ′) > V(d ∣ ρ, λ′).
Therefore, as the daring parameter ρ becomes large and positive (ρ → ∞), the rational, accountable commander will choose a decision increasingly close to pure combat b_1. Such a condition may arise if there is great need for personal daring, when a situation demands great courage, for example, to rescue an injured comrade in the heat of combat, irrespective of danger to the decision maker, the mission or the campaign.

Next, note that any rational commander will assess that if d′<d and d′ is not preferred to d when ρ=ρ_0, then d′ is not preferred to d when ρ=ρ_1 for ρ_1⩾ρ_0. To see this, simply note that
V(d ∣ ρ_1, λ′) − V(d′ ∣ ρ_1, λ′) = {V(d ∣ ρ_0, λ′) − V(d′ ∣ ρ_0, λ′)} + (exp ρ_1 − exp ρ_0){P_1(d ∣ λ′) − P_1(d′ ∣ λ′)}.
The first term on the right-hand side is non-negative by hypothesis, while the second is non-negative since P_1 is a distribution function. Further, by an analogous argument, if d′>d* and d′ is not preferred to d* when ρ=ρ_0, then d′ is not preferred to d* when ρ=ρ_1 for ρ_1⩽ρ_0 either. In this sense, a rational commander will choose to engage with non-decreasing intensity as ρ increases, whatever the circumstances. We shall henceforth call this property ρ-monotonicity. Let
D*(ρ, λ′) = {d∈I^+(λ′): V(d ∣ ρ, λ′) ⩾ V(d′ ∣ ρ, λ′) for all d′∈I^+(λ′)}
denote the set of optimal intensities d*(ρ, λ′) for a commander whose parameters are (ρ, λ′). Note that ρ-monotonicity implies that if D*(ρ_0, λ′) contains pure circumspection, then so does D*(ρ, λ′) for ρ<ρ_0. Similarly, if D*(ρ_1, λ′) contains pure combat, then so does D*(ρ, λ′) for ρ>ρ_1. Suppose that for some fixed value λ′, and for ρ lying in the closed interval [ρ_0, ρ_1], D*(ρ, λ′) consists of the single point {d*(ρ, λ′)}. Then the monotonicity condition above and the strict positivity of p_1(d* ∣ λ′) or p_2(d* ∣ λ′) on their support tell us that this d*(ρ, λ′)∈I(λ′) is strictly increasing in ρ∈[ρ_0, ρ_1]. So the larger the ρ(λ′), the higher the priority placed on mission success. From the above, this will be reflected in the choice of intensity: the larger the value of ρ(λ′), the greater the chosen intensity.
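
The ρ-monotonicity property is easy to exhibit numerically. The sketch below reuses the explicit objective exp(ρ)P_1 − P_2 with illustrative Normal distribution functions; the tabulated optimal intensity d*(ρ) never decreases as the daring ρ increases.

```python
import numpy as np
from scipy.stats import norm

d = np.linspace(0.0, 1.0, 2001)
P1 = norm.cdf(d, loc=0.65, scale=0.08)   # illustrative mission curve
P2 = norm.cdf(d, loc=0.35, scale=0.08)   # illustrative campaign-jeopardy curve

previous = -np.inf
for rho in np.linspace(-3.0, 3.0, 13):
    d_star = d[int(np.argmax(np.exp(rho) * P1 - P2))]
    assert d_star >= previous            # rho-monotonicity of the chosen intensity
    previous = d_star
    print(f"rho = {rho:+.1f}  d*(rho) = {d_star:.3f}")
```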

Recall from Equation (4) that the daring ρ(λ)=ρ_1(λ′)+ρ_2(λ′) decomposes into two terms. The term ρ_1(λ′) is an increasing function of the relative weight placed on the mission against the campaign objectives; that is, their prioritization. Note also that it is the only term in V affected by a commander's criterion weights. This term may be potentially very unpredictable to a C2 regulatory agent, especially if no formal C2 education is practised or provided about how to balance mission and broader campaign objectives. Even with such C2 training or experience, the commander's personality and emotional history will colour their choice of this parameter.

The term ρ_2(λ′) is an increasing function of how much better the commander believes they can achieve mission over campaign objectives were they able to choose an optimal intensity for either. This, of course, depends on the scenario faced and their competence—something that a C2 regulatory agent might hope to estimate reasonably well. But, since it is based on their own evaluation of their competence, it also reflects their relative confidence in their ability to achieve mission success or be sensitive to the campaign objectives. A commander's lack of training or difficult recent emotional history may well have a big effect on this term. Note that a large positive value of this parameter encourages the commander to focus almost entirely on the mission objectives, while a large negative value would encourage them to neglect the mission objectives in favour of the overall campaign objectives.

4. The developing bifurcation

4.1 Bifurcation with continuous potentials

Here, building on methodologies developed in Moffat (2002), Moffat and Witty (2002) and Smith et al (1981), we investigate the geometrical conditions determining when bifurcation of the expected utility can occur. When V(d ∣ λ) is continuous, a commander's optimal choice will move smoothly in response to smooth changes in λ, provided that their best course of action d*(λ) is unique: see the Appendix for a formal statement of this property and a proof. Thus, the undesirable situations of there being dramatic differences between the Bayes decisions of contiguous commanders at λ=λ_0=(ρ_0, λ_0′), or of a single commander suddenly being faced with contradiction, can only occur when D*(ρ_0, λ_0′) contains at least two Bayes decisions and hence, in particular, two local maxima. On the other hand, if D*(ρ_0, λ_0′) contains two decisions d_1*(ρ_0), d_2*(ρ_0) where d_1*(ρ_0)<d_2*(ρ_0), then, holding λ_0′ fixed and increasing ρ through ρ_0, it follows from the above that we must jump from a d*(ρ)⩽d_1*(ρ_0) being optimal for ρ<ρ_0 to a d*(ρ)⩾d_2*(ρ_0) being optimal for ρ>ρ_0. This in turn implies that a C2 regulatory agent can be faced with a lack of contiguity and with commander contradiction whenever a commander's daring is near ρ_0. Therefore, there is an intimate link between when it is expedient for a C2 regulatory agent to delegate and the cardinality of D*(ρ_0, λ_0′), which in turn is related to the number of local maxima of V(d ∣ λ).

Again suppressing the index λ′, a rational commander will choose a non-extreme option d *(λ)∈I(λ′) for some value ρ(λ′) if and only if

that is,

or equivalently

It follows, in particular, that if, for all d∈I(λ′),
P_1(d ∣ λ′) ⩽ P_2(d ∣ λ′)
—that is, P_2 stochastically dominates P_1—then all commanders will have as a Bayes decision either pure combat or pure circumspection, their choice depending on their daring; that is, they act just as in an unresolvable scenario. Call such a scenario pseudo-unresolvable. Pseudo-unresolvable conflicts have the same difficult consequences for C2 regulation as the unresolvable ones and are therefore strong candidates for prescriptive arrangements. Note that in our zero-one example above a scenario is pseudo-unresolvable if, for all d∈I(λ′), the probability of mission success using intensity d is no larger than the probability of jeopardizing the campaign.

When this domination is violated at some point d_0∈I(λ′), then a C2 regulatory agent can predict that a commander with a particular level of daring will choose an interior decision, and thus compromise can be a viable option for at least some commanders. At the other extreme, when P_1 stochastically dominates P_2, then, for any commander, an interior decision d*(λ)∈I(λ′) is at least as good as pure combat or circumspection. We now study the position, nature and development of these interior decisions under smoothly changing scenarios and personnel.
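
Checking the pseudo-unresolvable condition amounts to checking the pointwise domination P_1(d) ⩽ P_2(d) over I(λ′). A hypothetical sketch, with P_1 and P_2 supplied as callables:

```python
import numpy as np

def is_pseudo_unresolvable(P1, P2, a, b, n=1000):
    """True if P1(d) <= P2(d) for every d in the open interval (a, b).

    Under the zero-one interpretation this says the probability of mission success
    never exceeds the probability of jeopardizing the campaign, pushing a rational
    commander to pure combat or pure circumspection whatever their daring rho.
    """
    d = np.linspace(a, b, n + 2)[1:-1]     # interior grid points of (a, b)
    return bool(np.all(P1(d) <= P2(d)))

# Illustrative piecewise-linear distribution functions.
P1 = lambda d: np.clip((d - 0.3) / 0.5, 0.0, 1.0)
P2 = lambda d: np.clip((d - 0.1) / 0.4, 0.0, 1.0)
print(is_pseudo_unresolvable(P1, P2, a=0.2, b=0.7))   # -> True
```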

4.2 Bifurcation when distributions are twice differentiable

Henceforth, assume that the distributions P_i are twice differentiable in the open interval (a_1(λ′), b_2(λ′)), i=1, 2, and constant nowhere in this interval. On differentiating and taking logs, any local maximum of V(d ∣ λ) will either lie on the boundary of I or satisfy
ρ + f_1(d ∣ λ′) = f_2(d ∣ λ′),   (8)
where f_i(d ∣ λ′)=log p_i(d ∣ λ′), i=1, 2, and where a necessary condition for this stationary point to be a local maximum of V is that the derivative Dv(d ∣ λ′)⩾0. Therefore, in conflicting scenarios, the commander's optimal decision d*∈I^+(λ′) will either lie on the boundary of I(λ′)—as in the unresolvable scenario—or satisfy the equation above.

Let ξ_1(λ′) (ξ_1′(λ′)) and ξ_2(λ′) (ξ_2′(λ′)), respectively, denote the mode of p_1(d ∣ λ′) occurring at the largest (smallest) value of d (and hence the largest (smallest) maximum of f_1(d ∣ λ′)) in (a_1(λ′), b_1(λ′)), and the mode of p_2(d ∣ λ′) occurring at the smallest (largest) value of d (and hence the smallest (largest) maximum of f_2(d ∣ λ′)) in the open interval (a_2(λ′), b_2(λ′)). Note that when P_1 and P_2 are both unimodal, ξ_i(λ′)=ξ_i′(λ′), i=1, 2. In this case, because ξ_1(λ′) is the point of highest incremental gain in the mission, we call this point the mission point, and we call the intensity ξ_2(λ′), where the threat to campaign objectives worsens fastest, the campaign point.

When ξ_1(λ′)⩽ξ_2(λ′), for any d∈[ξ_1(λ′), ξ_2(λ′)], v(d ∣ λ′) is strictly decreasing. It follows that there is at most one solution d* to Equation (8) for any value of ρ and Dv(d ∣ λ′)⩾0, so this stationary value d*∈(a(λ′), b(λ′)) is a local maximum of V. Call a (primal) scenario pseudo-resolvable if
ξ_1(λ′) ⩽ ξ_2(λ′),   (9)
where a Bayes decision can only occur in the closed interval [a_2(λ′), b_1(λ′)]. Clearly, in this case, for each value of λ∈Λ there is a unique maximum in this interval, moving as a continuous function of λ.

It follows that a C2 regulatory agent should find pseudo-resolvable conflicts almost as desirable as resolvable ones, and these are therefore prime candidates for devolved decision making. In particular, no rational commander will face the stark combat-versus-circumspection dichotomy. Furthermore, although their choice of act will depend on ρ, two commanders with similar utility weightings, as reflected through their value of ρ, will act similarly. Therefore, in particular, it is rational for them to compromise, and if contiguous commanders are matched by their training and emotional history, then they will make similar and hence broadly consistent choices. In the particular case when the distributions P_1 and P_2 are unimodal, pseudo-resolvable scenarios occur in primal conflicts where the effectiveness for the mission of increasing intensity past a_2(λ′) is waning up to b_1(λ′), whereas the effect on campaign compromise is accelerating. It therefore makes logical sense for a commander to compromise between these two objectives.

On the other hand, when ξ_2′(λ′)⩽ξ_1′(λ′), for any d∈[ξ_2′(λ′), ξ_1′(λ′)], v(d ∣ λ′) is strictly increasing. It follows that there is at most one solution to Equation (8) for any value of ρ and Dv(d ∣ λ′)⩾0, so this stationary value is a local minimum of V. It is easily checked that a (dual) scenario where

is pseudo-unresolvable and a Bayes decision can only be pure combat or pure circumspection.

4.3 Convexity and compromise

The next simplest case to consider is when D²v(d ∣ λ′) has the same sign for all d∈(a(λ′), b(λ′)). This will occur, for example, when one of f_2(d ∣ λ′), f_1(d ∣ λ′) is convex and the other concave in (a(λ′), b(λ′)). In this case, clearly, Equation (8) has either no solution, two coincident solutions or two separated solutions in (a(λ′), b(λ′)). We have considered cases above when v(d ∣ λ′) is increasing or decreasing in d, when one or no stationary point exists in the interval of interest. Below we focus on the case when there are two different solutions.

By our differentiability conditions, the two stationary points in (a(λ′), b(λ′)) are a local maximum and a local minimum. Furthermore, it is easy to check that in a primal conflict, when D²v(d ∣ λ′)>0 for d∈(a(λ′), b(λ′)) and p_1(a_1 ∣ λ′)=0, the only maxima of V are either the smaller of these two intensities or b_1(λ′). On the other hand, when D²v(d ∣ λ′)>0 and p_2(b_2 ∣ λ′)=0, the only maxima of V are either a_2(λ′) or the larger of these two interior intensities. In these two cases, we have a choice between a compromise and either an all-out attack (in the first scenario) or a total focus on the campaign (in the second). In the dual case, we simply reverse the roles of maxima and minima in the above. Any choice between the two options is largely determined by ρ. Therefore, in all these cases, C2 regulation avoids some possibilities of contradiction in the commander, but risks a lack of contiguity.

It is often straightforward to find the solutions to Equation (8) when the two densities p_1(d ∣ λ′), p_2(d ∣ λ′) have a known algebraic form. We illustrate below a boundary scenario where v(d ∣ λ′) satisfies the convexity conditions outlined above.

Example 2 (Zero-one utility/beta beliefs)

  • Consider the setting described in the example above where, for i=1, 2, P_i(d ∣ λ′) has a beta B(α_i, β_i) density p_i(d ∣ α_i, β_i) on the interval d∈[0, 1]=I (so a=0 and b=1) given by

    The function V(d ∣ λ) is then differentiable in d for d∈(0, 1), and thus by Equation (8) the commander's decision will be: (1) d=0, to keep intensity to the minimum and so minimally compromise the campaign; (2) d=1, to engage with full intensity in order to attain the mission with highest probability; or (3) a compromise decision d, which satisfies

    where α=α_2−β_1, β=β_2−α_1 and

    where

    Note in particular that in the two types of symmetric scenarios, when α_1=α_2 and β_1=β_2 or when α_1=β_2 and β_1=α_2, the term ρ_3(λ)=0, so that the parameter ρ′ is exactly the daring ρ. Equation (10) implies that there are either 0, 1 or 2 interior critical points and 0 or 1 local maximum, which is a potential compromise solution, as well as the two extreme intensities. We consider four cases in turn:

    α>0, β<0:

    In this case, τ(d ∣ α, β) is strictly increasing on (0, 1) and the unique solution of Equation (10) corresponds to a maximum of V. This compromise option is always better than fully committing to the mission or campaign objectives at the exclusion of the other.

    α<0, β>0:

    In this case, τ(d ∣ α, β) is strictly decreasing on (0, 1) and any interior stationary point corresponds to a minimum of V. In this situation, the rational commander will choose either d=1—pure combat—or d=0—pure circumspection. The actual choice will depend on the value of ρ′: the larger ρ′, the more inclined the commander is to choose combat.

    α>0, β>0:

    This occurs when, for example, the maximum negative effect on the campaign of a chosen level of intensity is approached much more quickly than the effect of intensity on the success of the mission. Here, it can be seen that the equation τ(d ∣ α, β)=ρ′ has two solutions in (0, 1): the smaller a maximum and the larger a minimum of V. With large negative values of ρ′, the rational commander chooses a low but non-zero value of intensity, obtaining almost optimal results associated with campaign objectives but allowing a small chance of success in the mission, which is more uncertain. As ρ′ increases, for example because the mission objectives are given a higher priority, this intensity smoothly increases. However, at some point before the intensity maximizing τ is reached, the commander switches from the partial compromise to pure combat.

    α<0, β<0:

    This happens when, for example, the maximum negative effect on the campaign of a chosen level of intensity is approached much more slowly than the effect of intensity on the success of the mission. Here again the equation τ(d ∣ α, β)=ρ′ has two solutions in (0, 1), but this time the smaller is a minimum and the larger a maximum of V. With large negative values of ρ′, the rational commander chooses pure circumspection, but as ρ′ increases there comes a point where the Bayes decision suddenly switches to a moderately high intensity; this intensity then smoothly increases to pure combat as ρ′ → ∞.
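
A numerical rendering makes these cases concrete. The sketch below evaluates the objective exp(ρ)P_1(d) − P_2(d), with P_1 and P_2 taken as beta distribution functions, and reports the optimal intensity for a low, a neutral and a high value of daring; the shape parameters are chosen purely to illustrate the compromise case (α>0, β<0) and the all-or-nothing case (α<0, β>0).

```python
import numpy as np
from scipy.stats import beta

def optimal_intensity(rho, a1, b1, a2, b2, n=2001):
    # P1, P2 taken as Beta distribution functions on [0, 1]; the objective
    # exp(rho)*P1 - P2 is the explicit form used here for Equation (3).
    d = np.linspace(0.0, 1.0, n)
    V = np.exp(rho) * beta.cdf(d, a1, b1) - beta.cdf(d, a2, b2)
    return d[int(np.argmax(V))]

settings = {
    "compromise case":     dict(a1=2.0, b1=6.0, a2=6.0, b2=2.0),
    "all-or-nothing case": dict(a1=6.0, b1=2.0, a2=2.0, b2=6.0),
}
for label, pars in settings.items():
    d_stars = [optimal_intensity(rho, **pars) for rho in (-2.0, 0.0, 2.0)]
    print(label, ["%.2f" % x for x in d_stars])
# The first setting always yields an interior (compromise) intensity that grows with rho;
# the second snaps between pure circumspection (d=0) and pure combat (d=1).
```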

All scenarios where v(d ∣ λ′) is either strictly convex or concave exhibit a geometry analogous to the one discussed above: only the exact algebraic form of the equations governing the stationary points changes. Although surprisingly common in simple examples, this convexity condition is not a generic one. It cannot model all scenarios adequately, and competing decisions can often develop in subtler ways. In these cases, it is necessary to use somewhat more sophisticated mathematics to understand and classify the ensuing phenomena.

4.4 Conflict and differential conditions

For the purposes of this section, we make the qualitative assumption that, for all values of λ′∈Λ′, p_1(·∣λ′) and p_2(·∣λ′) are both unimodal, with the unique mission point mode denoted by ξ_1(λ′) and the unique campaign point mode by ξ_2(λ′). Further, assume that p_1(·∣λ′) and p_2(·∣λ′) are continuously differentiable on the open interval (a(λ′), b(λ′)). It will then follow that

We have seen in the discussion of Equation (9) that when the mission point is smaller than the campaign point in a primal scenario, the Bayes decisions of all rational commanders are compromises, this decision is a continuous function of the hyperparameters, and this is the only scenario that is not bifurcated. We now study the complementary situation; thus, suppose that for a λ′∈Λ′ the mode ξ_2(λ′)<ξ_1(λ′): that is, the mission point is larger than the campaign point. Then, when d∈(a(λ), b(λ))∩(ξ_2(λ′), ξ_1(λ′)),

this being true independently of the value of ρ. The stationary points d 0(λ) of V satisfy Equation (8) so define a value ρ 0 such that

This implies that any choice of ρ_0 making d_0(ρ_0, λ′) a stationary point makes d_0(λ) a local minimum of V(d∣(ρ_0, λ′)), and furthermore this minimum is unique. It follows by Equation (9) that in a primal scenario V(d ∣ λ) must have one local maximum ς_2(λ)<ξ_2(λ′) and another local maximum ς_1(λ)>ξ_1(λ′). The scenario is therefore bifurcated and will present possible problems for C2 regulation.

Since Dv(d ∣ λ)<0 for any value of ρ and any d∈(a(λ), b(λ))∩(ξ_2(λ′), ξ_1(λ′)), in particular no Bayes decision can lie in this interval, a phenomenon described by Zeeman (1977) as inaccessibility. Now fix λ′ and run ρ from −∞ to ∞. From the monotonicity property, d_0*(ρ) is discontinuous in ρ at some value ρ*(λ′): ρ_1(λ′)<ρ*(λ′)<ρ_2(λ′). The set of optimal decisions thus bifurcates into two disjoint sets: decisions either lie in the interval [a(λ′), ς_2(λ′)] and are of ‘low intensity’, more consistent with campaign objectives, or lie in [ς_1(λ′), b(λ′)] and are of ‘high intensity’, more consistent with mission objectives.

Thus, when ξ_2(λ′)<ξ_1(λ′), C2 regulation cannot avoid a potential lack of contiguity, even in primal scenarios. Furthermore, the smaller the campaign point ξ_2(λ′) relative to the mission point ξ_1(λ′), the larger the inaccessibility regions will tend to be and so the worse the potential lack of contiguity. Thus, the relative position of the mission and campaign points has a critical role in the geometrical description of the resolvability of conflict for the rational commander.
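
The jump described here can be reproduced directly. In the sketch below (illustrative Normal densities, objective again taken as exp(ρ)P_1 − P_2), the campaign point sits well below the mission point; scanning the daring parameter shows the optimal intensity snapping between a low and a high value, with the intermediate intensities never chosen.

```python
import numpy as np
from scipy.stats import norm

d = np.linspace(0.0, 1.0, 4001)
P1 = norm.cdf(d, loc=0.75, scale=0.07)   # mission point  xi_1 = 0.75
P2 = norm.cdf(d, loc=0.25, scale=0.07)   # campaign point xi_2 = 0.25 (< xi_1)

rhos = np.linspace(-1.0, 1.0, 41)
d_star = np.array([d[int(np.argmax(np.exp(r) * P1 - P2))] for r in rhos])
print("intensities ever chosen:", sorted(set(np.round(d_star, 2))))
print("largest jump as rho increases:", float(np.diff(d_star).max()))
```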

5. Links to catastrophes

5.1 Catastrophes and rational choice

The bifurcation phenomenon we have described in this paper is actually a more general example of some well-studied singularities, especially the cusp (and dual cusp) catastrophe, which are classified for infinitely differentiable functions (see, eg, Poston and Stewart, 1978; Zeeman, 1977; Harrison and Smith, 1980). Thus, for the purposes of this section, assume now that within the interval d∈(a(λ), b(λ)) the function V(d ∣ λ) is infinitely differentiable in d, and consider the points (d_0, λ_0)∈I × Λ of (d, λ) that are stationary points in this interval: that is, that satisfy Equation (8). On this manifold, the points for which the next two derivatives of this function are zero—that is, the parameter values λ′∈Λ′ of the two densities and a stationary value of d satisfying

are called fold points. If in addition we have that at that stationary value

also holds, then such a λ∈Λ is called a cusp point. These points are of special interest because near such values λ_0∈Λ the geometry of V(d ∣ λ) changes. In the zero-one example, these points will be largely determined by the actual situation faced by the commander.

An important theorem, called the Classification Theorem, demonstrates that for most functions V, and for dimensions of the non-local and scale parameters in Λ less than 7, the way this geometry changes can be classified into a small number of shapes called catastrophes (Zeeman, 1977), each linked to the geometry of a low-order polynomial. In our case, the cusp points and fold points are especially illuminating because we will see below that, in many scenarios, the commander's expected utility will exhibit a geometry associated with one of two of these catastrophes: the cusp catastrophe in the case of primal scenarios and the dual cusp catastrophe in the dual scenario.

Suppose that Λ can be projected down on to a two-dimensional subspace C⊂R², C⊆Λ, called the control space. Suppose this contains a single cusp. The cusp is a continuous curve in C with a single point c(λ_0′), called the cusp point, where the curve is not differentiable and turns back on itself to form a curly v shape. Points on this continuous line are called fold points. Their coordinates can be obtained by solving the first two equations above in λ and then projecting these on to C.

It is convenient to parametrize the space C using coordinates (n, s), which are oriented around this cusp. The splitting factor s takes the value 0 at the cusp point along the (local) line of symmetry of the cusp, orientated so that positive values lie within the v. We will see below that, typically in this application, in symmetric scenarios the splitting factor is an increasing function of the distance ξ_2(λ′)−ξ_1(λ′) between the campaign and mission points of the commander's expected utility. It is, however, not a function of ρ, and thus in particular is not a function of the weighted utilities. In this sense, it is more a feature of the scenario faced by the commander than of the commander per se, and thus, in particular, is a more robust feature for a C2 regulatory agent to estimate. The normal factor n also takes the value 0 at the cusp point and is orthogonal to s. In our examples, it is always a function of the parameter ρ, as well as of other features that might make the problem non-symmetric, and can in principle take any value depending on the commander's criterion weights.
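
For reference, the canonical cusp family (Zeeman, 1977; Poston and Stewart, 1978) makes the two control coordinates explicit. Up to sign, orientation and a smooth change of the behaviour variable x, its parameters a and b play the roles of the splitting and normal factors s and n described above:

```latex
\[
F(x; a, b) = \tfrac{1}{4}x^{4} + \tfrac{1}{2}a\,x^{2} + b\,x ,
\qquad
\frac{\partial F}{\partial x} = x^{3} + a\,x + b .
\]
```

Stationary points satisfy x³ + ax + b = 0; fold points additionally satisfy 3x² + a = 0, and eliminating x gives the bifurcation set 4a³ + 27b² = 0. Inside the ‘v’ (4a³ + 27b² < 0) the potential has three stationary points, outside only one; the dual cusp is obtained by replacing F with −F, which exchanges maxima and minima exactly as in the primal and dual scenarios above.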

It has now been shown that, under a variety of regularity conditions, discrete mixtures of two unimodal distributions typically exhibit at most one cusp point (see, eg, Smith, 1979, 1983). When V(d ∣ λ) exhibits a single cusp point, its geometry is simple to define. For values of λ∈Λ such that (n(λ), s(λ)) lies outside the v of the cusp, there is exactly one stationary point d* of V(d ∣ λ) with d in the open interval (a, b); under the assumptions above, d* must be a local (and therefore global) maximum of V(d ∣ λ), and thus the commander's best rational choice. In this scenario, because d*∈(a, b), this course of action can be labelled as a compromise between the two objectives. The extent to which the compromise will favour one of the two objectives will depend on the commander's current values of λ∈Λ, which in turn depend on their prioritization and beliefs. In this region, d*(λ) will be continuous in λ and thus evolve continuously as the commander's circumstances evolve.

On the other hand, for values of λ∈Λ such that (n(λ), s(λ)) lies within the v of the cusp, under the assumptions above there will (exceptionally) be two turning points and a maximum, or (typically) two maxima, d*(1) and d*(2), and a minimum. In the latter, usual, scenario, the commander's optimal choice will depend on the relative height of these local maxima. If the maximum d*(1) closer to a is such that V(d*(1)∣λ)>V(d*(2)∣λ), where d*(2) is the maximum closer to b, then the rational commander chooses the low-intensity option. If V(d*(1)∣λ)<V(d*(2)∣λ), then the rational choice is the higher-intensity option. Note that this is analogous to the circumstances we have described above. In this case, C2 regulation can experience lack of contiguity and regret, at least for central values of ρ.

The dual scenario, which is less favourable, has an identical geometry but with maxima and minima permuted. Since rational behaviour is governed by maxima, the behavioural consequences on the commander of the geometry are quite different. Outside the v of the cusp, optimal decisions are thrown on to the boundary and the scenario becomes pseudo-unresolvable. On the other hand, parameters inside the v of the cusp allow there to be an interior maximum of the expected utility, as well as the two extreme options. Usually, as we move further into the v of the cusp, the relative efficacy of the interior decision improves relative to the extremes until the Bayes decision becomes a compromise decision.

Rather than dwell on these generalizations, we now move on to demonstrate the geometries explicitly for some well-known families of distribution.

5.2 Some illustrative examples

Example 3 (Zero—one β catastrophe)

  • From the catastrophe point of view, this is particularly simple. The fold points are obtained as solutions of Dτ=0 which lie in the interior (0, 1) of the space of possible Bayes decisions. The solution d*=α(α+β)^{−1} lies in (0, 1) if and only if α and β are of the same sign: the last two of the four special cases we analysed. Explicitly, they are given by αβ>0 and

    where

    It is easy to check that there are no cusp points satisfying the above. Here, the control space can be expressed in one dimension, and this one-dimensional space summarizes the geometry of the commander's utility function, as described earlier. Once a C2 regulatory agent identifies whether the scenario is primal or dual, whether αβ<0 or αβ>0, the value of ρ_f(α, β) and whether or not ρ′<ρ_f(α, β) (if ρ_f(α, β) exists), the range of possibilities is explained. In this sense, the existence and position of fold points is intrinsic to understanding the geometry. Finally, note that this geometry is qualitatively stable in the sense that other utilities satisfying the same strict convexity/concavity condition illustrated in this example can never exhibit cusps and will exhibit an exactly analogous geometry of the projection of their singularities, but be governed by different equations on different hyperparameters.
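
The fold-point solution d* = α(α+β)^{−1} quoted in this example can be checked in one line, assuming that the function appearing in Equation (10) has the log-linear form τ(d ∣ α, β) = α log d + β log(1−d) (an assumption consistent with the solution quoted here):

```latex
\[
D\tau(d \mid \alpha, \beta) = \frac{\alpha}{d} - \frac{\beta}{1-d} = 0
\;\Longleftrightarrow\;
\alpha(1-d) = \beta d
\;\Longleftrightarrow\;
d^{*} = \frac{\alpha}{\alpha+\beta},
\]
```

which lies strictly inside (0, 1) exactly when α and β have the same sign, as stated above.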

Because this is a boundary scenario, the above example is not general enough to capture all important geometries that C2 may encounter. Typically, these cases include cusps. Consider the following example.

Example 4 (gamma distributions)

  • Suppose the distributions P_1 and P_2 are (translated) gamma distributions having log densities on (a=0, ∞) given by

    where c_i=α_i log β_i−log Γ(α_i), m_i=β_i^{−1}(α_i−1) and α_i, β_i>1, so that each density has its mode strictly within the interior of its support. The equation for the stationary points of the commander's expected utility is then

    where ρ′=(β_1+β_2)^{−1}(ρ+c_1−c_2). Letting β=β_1(β_1+β_2)^{−1}, this simplifies to

    The modes of the two densities in δ give the mission point and the campaign point. By differentiating with respect to δ, substituting and reorganizing, it follows that the fold points for δ must satisfy the quadratic equation

    This scenario can therefore be identified with the canonical cusp catastrophe (Zeeman, 1977), whose fold points are also given by a quadratic. In particular, its cusp points satisfy

    The fold points exist when

    Note that when α_1=α_2=α and β_1=β_2=β, so that β=1/2 and ξ_1=−ξ_2, this simplifies to there being fold points only when ξ_2⩽ξ_1 and a cusp point at (δ, ξ_1, ξ_2)=(0, 0, 0). This is consistent with the results concerning inaccessibility discussed after Equation (12), and the two competing decisions get further apart as ξ_1 increases, with inaccessible decisions lying between the two fold points.

Example 5 (dual gamma)

  • In the dual scenario to the one described above, the cusp point defines the emergence of a compromise solution whilst pure circumspection a=0 and pure combat are always competing local maxima of the expected utility. However, as the modes ξ 1 of Q 2 and ξ 2 of Q 1 become increasingly separated, the compromise region grows and becomes the Bayes decision of most commanders.

Although being able to identify this phenomenon with a canonical cusp/dual cusp catastrophe, as above, is unusual, for many pairs of candidate distributions the most complicated singularity we encounter is usually a cusp catastrophe. Thus, consider the following example.

Example 6 (Weibull distributions)

  • Let $X$ have an exponential distribution with distribution function $1-\exp(-\tfrac{1}{2}x)$ and suppose that the distribution functions $P_1$ and $P_2$ are the distributions of $X_1=2(\sigma^{-1}\{b-X\})^c$ and $X_2=2(\sigma^{-1}\{X+a\})^c$, so that for $d\in(a(\lambda), b(\lambda))$, $a(\lambda)<b(\lambda)$, the respective densities on this interval are given by

    Here $\sigma>0$ and for simplicity we will assume $0<c\leqslant 2$. Note that when $c>1$, the densities are unimodal with mission point $\xi_1(\lambda)=b-\sigma\{2(1-c^{-1})\}^{1/c}$ and campaign point $\xi_2(\lambda)=a+\sigma\{2(1-c^{-1})\}^{1/c}$, and the stationary points satisfy

    where $\delta=d-\tfrac{1}{2}(a+b)$. Differentiating and rearranging this expression, when $c\neq 1$ the decisions on the fold points must also satisfy

    where and

    Note that when $1<c<2$, $\psi$ is strictly decreasing in $\delta^2$. The cusp points also need to satisfy

    Note in particular that there is always a cusp point, and that the splitting factor of such a cusp is largest when the difference between the campaign point and the mission point is large and when the uncertainty $\sigma$ is small. When $0<c<1$, $g<0$, so no fold points exist: as $\rho$ increases through $0$, the best course of action jumps from pure circumspection $a$ to the value $b$ of pure combat. When $c=1$, the stationary point is given by the unique value, a function of the parameters, satisfying $\delta_c=d_c-\tfrac{1}{2}(a+b)=\rho/\sigma$, and this is again always a minimum except when $\rho=0$, when all intensities in $[a, b]$ are equally good. Finally, when $1<c\leqslant 2$, because $\psi$ is decreasing in $\delta^2$ and $g>0$, there is a single pair of stationary points $(-\delta^*, \delta^*)$ (coinciding when $\delta^*=0$) lying on fold points if and only if

    It can easily be checked that there is a single cusp point. In the special case when $c=2$, the fold points are given by

    There are therefore no fold points if $\tfrac{1}{4}(b-a)^2\leqslant\sigma^2$, while if $\tfrac{1}{4}(b-a)^2>\sigma^2$ the fold points are given by

    Differentiating and solving gives that the cusp point satisfies

    The distance between the campaign point and the mission point is therefore again central here. See Smith (1979) for further analyses of the geometry of this special case and its generic analogues. Note that this case is used to explain and categorize the results of the two battle group exercises we discuss in Dodd et al (2006).

As in the gamma example above, the assumption of equal uncertainty parameters for the two distributions is not critical, in the sense that the underlying geometry can still be described in terms of a continuum of cusp points; details of their exact coordinates for the case $c=2$ can be found in Smith (1983). It turns out that the richest geometry is obtained in the equal-variance case: when the uncertainty associated with one of the objectives is much higher than that of the other, the high-uncertainty objective tends to get ignored in favour of the other and the problem tends to degenerate.
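
The qualitative behaviour running through these examples can be reproduced numerically from the canonical normal form given after the gamma example. The sketch below is purely illustrative: it uses the textbook potential $V(x)=\tfrac{1}{4}x^{4}+\tfrac{1}{2}ux^{2}+vx$ with generic controls $(u, v)$ rather than the gamma or Weibull parametrisations above, and it resolves competing optima by taking the globally best stationary point. It shows the number of stationary points and the preferred decision jumping as the asymmetry parameter $v$ is swept with $u<0$, that is, inside the cusp region.

import numpy as np

# Canonical cusp potential V(x) = x^4/4 + u*x^2/2 + v*x (Zeeman, 1977).
# Stationary points solve x^3 + u*x + v = 0; fold points additionally
# satisfy 3*x^2 + u = 0, and the cusp point sits at (x, u, v) = (0, 0, 0).
# Illustrative only: the examples in the text reduce to this normal form,
# but with their own control parameters in place of (u, v).

def stationary_points(u, v):
    """Real roots of x^3 + u*x + v = 0 (the stationary points of V)."""
    roots = np.roots([1.0, 0.0, u, v])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

def preferred_decision(u, v):
    """Stationary point with the smallest value of V (global optimum)."""
    x = stationary_points(u, v)
    V = x**4 / 4 + u * x**2 / 2 + v * x
    return x[np.argmin(V)]

u = -1.0  # u < 0: inside the cusp region, so competing optima can coexist
for v in np.linspace(-0.6, 0.6, 13):
    x = stationary_points(u, v)
    print(f"v = {v:+.2f}: {len(x)} stationary point(s), "
          f"preferred decision x = {preferred_decision(u, v):+.3f}")

Following the nearest local optimum instead of the global one while sweeping $v$ up and then down would display the hysteresis associated with the pair of fold points.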

We end by elaborating the first example to analyse the geometry of non-boundary scenarios of this type. We note that, as we move away from the boundary, cusp catastrophes like those appearing in the last two examples are exhibited in this example as well.

Example 7 (general beta case)

  • For $i=1, 2$, let $P_i(d\lambda')$ be the density of $2X_i-1+(-1)^i c$, where $X_i$ has the beta $B(\alpha_i, \beta_i)$ density given in the earlier example and $|c|\leqslant 1$. Then $I(\lambda')=[|c|-1, 1-|c|]$ and the scenario is primal when $c>0$, dual when $c<0$, whilst when $c=0$ we have a linear transformation of the earlier boundary case. Writing $\gamma_i=\alpha_i-1$, $\varepsilon_i=\beta_i-1$, $i=1, 2$, Equation (8) becomes

    where

    Differentiating and reorganizing, we find that the fold points in $I(\lambda')$ must satisfy the cubic

    where

    This situation is therefore slightly more complicated than the boundary case we discussed earlier, because there is the possibility that two local and potentially competing maxima appear in the interior of $I(\lambda')$. However, when a commander is comparably certain of the effect of the chosen intensity on the mission and campaign objectives, so that $\gamma_1+\varepsilon_1=\gamma_2+\varepsilon_2$, the fold-point equation becomes quadratic and we recover the geometry of the single canonical cusp/dual cusp catastrophe. After a little algebra, the cusp points are related to the modes through the equation

    When $c=0$ (our earlier case) this equation degenerates into requiring $P_1=P_2$, but otherwise such cusp points exist and are feasible whenever $\xi_2>\xi_1$. This demonstrates how our original example can be generalized straightforwardly away from convexity to a situation where compromise appears as an expression of the cusp catastrophe.
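
    For reference, and assuming $\alpha_i, \beta_i>1$ so that each beta component has an interior mode (the analogue of the modal assumption made in the gamma example), the modes of the shifted densities above, which supply the mission and campaign points, follow directly from the standard mode of a $B(\alpha_i, \beta_i)$ distribution:

    \[
    \operatorname{mode}(X_i)=\frac{\alpha_i-1}{\alpha_i+\beta_i-2}=\frac{\gamma_i}{\gamma_i+\varepsilon_i},
    \qquad
    \operatorname{mode}\bigl(2X_i-1+(-1)^{i}c\bigr)=\frac{2\gamma_i}{\gamma_i+\varepsilon_i}-1+(-1)^{i}c .
    \]

    The modes of $P_1$ and $P_2$, and hence the ordering $\xi_2>\xi_1$ required above for feasible cusp points, are therefore controlled jointly by the shape parameters and by $c$.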

6. Discussion

There are several conclusions concerning C2 regulation that can be drawn from this analysis about how to organize, train and communicate intent and freedoms for decision making to commanders; indeed, a number of these conclusions are already being accepted as good practice under the principles of command agility. Here, we will assume that commanders face a scenario where both $P_1$ and $P_2$ are twice differentiable and unimodal.

  1.

    Whenever appropriate and possible, mission statements and campaign objectives should be stated in such a way that they are resolvable so that well-trained rational commanders can acknowledge and safely achieve compromise.

  2.

    When a situation cannot be presented or acknowledged as resolvable then, within agile planning to devolve decision making, commanders should be presented with a pseudo-resolvable scenario. The first of two conditions required for this is that the scenario is primal. This means that the commander can perfectly address the campaign objectives while still having some possibility of completing the tactical mission to some degree of success, and that there is a level of intensity appropriate for attaining the tactical mission objectives which can also be expected not to totally jeopardize the campaign. It will often be possible to make a scenario primal simply by the way the two objectives are communicated to the commander, although it may involve some innovative option-making. The second requirement is to control the positions of the mission and campaign points (the modes) so that the intensity with the greatest incremental improvement on mission success occurs at a value ensuring maximal campaign integrity, and the intensity with the greatest incremental improvement on campaign success occurs at a value ensuring maximal mission success. A rational commander will then choose to compromise between the two objectives. The actual compromise point will depend on each commander's individual training, personality and emotional history, but the careful matching of contiguous commanders should help to ensure coherence.

  3.

    When neither of the two scenarios described above is achievable, then in most cases, provided the mission point is lower than the campaign point, the devolved commander can still be expected to compromise and not to be faced with contradiction. In this case, a C2 regulatory agent must be prepared to expect lower levels of contiguity, but coherence can still be managed by carefully considering the commanders’ capacity to deal with stresses. In particular, to encourage compromise, mission statements must allow for there to be an option that scores at least half as well as the best option for the mission objectives and at least half as well as the best option for the campaign objectives (a purely illustrative numerical check of this screening condition is sketched after this list). Note that if it is made clear that partial success in both objectives is rated highly, then the likelihood of compromise is increased.

  4.

    Problems of lack of contiguity and contradiction can be expected to occur if the mission point is much higher than the campaign point. If a C2 regulatory agent still plans to devolve in these cases, then it must endeavour to keep the distance between the mission and campaign points as small as possible, since this will limit the extent of the discontiguity and contradiction (see the analysis of the last section).

  5.

    The most undesirable scenarios are those that are unresolvable or pseudo-unresolvable. In these cases, the focus falls on ρ and therefore, unless the intensity associated with pure combat is close to that for pure circumspection, the training, deployment and personality of individual commanders will become crucial. The C2 settings are then most stable if a top-down style is adopted.
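
As a purely hypothetical numerical illustration of the screening condition in point 3 above (the option names and scores below are invented for illustration and are not drawn from the experiments), a candidate option set could be checked as follows:

# Hypothetical illustration of the 'at least half as well' screen in point 3.
# Option names and scores are invented for illustration only.
options = {
    "assault at full intensity":   {"mission": 1.0, "campaign": 0.2},
    "negotiate, minimal presence": {"mission": 0.1, "campaign": 1.0},
    "screened, limited advance":   {"mission": 0.6, "campaign": 0.6},
}

best_mission = max(o["mission"] for o in options.values())
best_campaign = max(o["campaign"] for o in options.values())

# An option encourages compromise if it scores at least half as well as the
# best available option against both the mission and the campaign objective.
for name, score in options.items():
    qualifies = (score["mission"] >= 0.5 * best_mission
                 and score["campaign"] >= 0.5 * best_campaign)
    print(f"{name:30s} compromise candidate: {qualifies}")

Here only the third option passes on both counts, which is exactly the kind of option point 3 asks mission statements to leave open.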

All these points rely on the assumption of commander rationality. In Dodd et al (2003, 2006), we detail results from the two experiments, described in the introduction, studying how experienced personnel respond to conflicting scenarios: a combat mission where there was a high risk of casualties, and a potential threat to a civilian convoy where the commander had to balance the efficacy of defence from attack against a negotiated passage. Participants were encouraged to document their decision processes. The commanders often reasoned differently, but interestingly all chose courses of action consistent with the rationality described above. Perhaps one of the most interesting findings was that confidence in succeeding in the objectives (mainly reflected in the choice of ρ) had a strong influence on course-of-action selection. Conclusions from these experiments, aided by the implementation of the ideas above, have informed the procurement of command information systems (Saunders and Miles, 2004). Of course, in real time a commander can only evaluate a few possible courses of action (Moffat, 2002; Moffat and Witty, 2002; Perry and Moffat, 2004), but we argue in Dodd et al (2006) that this does not invalidate the approach above; it merely approximates it. Thus, from both the theoretical and the practical perspective, this rational model, in which C2 regulation assumes its commanders choose what is rationally consistent with their individual nature, experience and competencies, is a good starting point for understanding C2 regulatory mechanisms and the need for formal education in C2 organizational issues and for command training and selection.

7. Further application outside the military domain

This work has an experimental foundation in military command decision studies, but it is not limited to situations of military hierarchy and mission command. Indeed, the findings can be applied to any situation where there is uncertainty and where there is potential for contention in management objectives. Such conditions are common within many organizations today as they struggle to balance risks against a need to expand into new, uncertain markets. The two key principles underlying the theory, maintenance of contiguity and avoidance of contradiction, are as relevant to management as they are to military C2. Appropriate placing of decision authorities and responsibilities within organizations, according to the prevailing circumstances as a whole, could determine the difference between commercial success and failure.