
Report of the Inquiry into Australian Intelligence Agencies

Chapter 6 - Contestability of Assessments



Means of Achieving Contestability
Overlap Between ONA and DIO
The National Assessments Process
Contestability Within Agencies
Competing Sources
How Do We Compare?
US Intelligence Agencies
Other Proposals
Conclusion

Much of this report focuses on how the Australian intelligence community can best serve the national interest. A vital element is how to get the best information to the best analyst so that they can write the best assessment for government. But an analyst working alone, no matter how well prepared, will not produce a first-class assessment. For Australia to have the highest quality assessment, analysts need to be challenged, confronted by different perspectives, and alerted to flaws in their arguments. This chapter examines how the intelligence assessment community, with limited resources, can best challenge and test its assessment product.

The level of contestability in advice to government has, overall, been increasing in recent years. This reflects two broad trends: a changing relationship between ministers and the public service, in which governments look for a broader range of inputs to decision-making; and an enhanced capacity of the external environment to provide alternative sources of advice.

The surge in the availability of advice is not a feature of government alone. The increased flow of information, made possible by new technology, has had a dramatic impact on how day-to-day decisions are made in all fields - about which flight is cheapest, what jobs are available, and what financial services are most advantageous. In a similar way, government ministers have been able to access more competitive options, and are less dependent on the public service for advice.

Against this backdrop, it is not surprising that the Inquiry found widespread support for contestability in the area of intelligence assessments amongst ministers and those who have served as ministers in previous governments as well as in the policy, operational and intelligence communities. The rationale for contestability is strong. It supports effective decision-making by ministers by providing differing viewpoints; it constitutes a check and balance mechanism against faulty assessment; and it ensures that the full breadth of government (and non-government) expertise is brought to bear.

What is at issue is how much contestability is needed inside government systems, how it should be achieved, and at what price. A key issue in settling these questions is how much overlap between agencies is needed to achieve contestability.

Means of Achieving Contestability

Contestability of intelligence assessment can be achieved in a number of ways:

  • Contestability between agencies, achieved by having overlapping responsibilities between agencies.
  • Contestability within agencies, achieved by effective challenge mechanisms inside agencies, and through access by intelligence customers to the source material behind assessments.
  • Contestability by joint assessments, which force analysts from one agency to confront the views of others. In the Australian system, the key instrument for joint assessments is the National Assessment, which formalises consultation across the intelligence and policy communities and provides a mechanism for dissenting views.
  • And, beyond the government framework, a level of contestability is increasingly achieved through the impact of external commentators (journalists, think tanks, academics), whose views are taken seriously by government decision-makers.

Overlap Between ONA and DIO

While the overlap in functions between ONA and DIO undoubtedly provides a level of contestability, it is interesting that this was not intended in the architecture drawn up by Justice Hope in the 1970s. He recommended the establishment of a "centrally located assessments function... placed in a location in the centre of government". He saw those parts of DIO's predecessor, JIO, which were clearly 'national' being transferred to ONA, and all other parts examined to see which should not be transferred. Thus the contestability which is currently offered by areas of overlap between DIO and ONA is an incidental rather than planned part of our intelligence architecture.

The current level of overlap is relatively high. While a precise figure would be difficult to determine, by some estimates as many as 75 per cent of the topics covered by ONA are also covered to some extent by DIO, although the focus and level of detail are often different. But to be effective, overlap and contestability between agencies need to be properly structured - designed to maximise testing and to minimise waste.

Examining the contestability systems in Australian agencies against these criteria, the Inquiry found the system wanting in some important ways, partly as a result of contestability between agencies in Australia growing in a haphazard way, rather than being designed for specific outcomes.

First, there is very little coordination between the agencies on their work programmes, nor any clear management of which issues warrant contestability. Both have a weekly meeting - ONA to consider formally a forward work plan and DIO to brief, inter alia, current defence requirements. Collection agencies attend both, but neither assessment agency attends the other's meeting. As a result, there is a lack of coordination on what issues are significant enough to warrant coverage by both agencies.

Instituting a mechanism to enable discussion and, where necessary, agreement on duplication or reconciliation of ONA and DIO forward work programmes would be a useful step towards a more deliberate system of contestability (and would help to reduce unproductive overlap).

Second, there is no system for identifying clearly points of agreement and difference between ONA and DIO assessments. Where ONA and DIO write on the same topic, there is often some consultation but no systematic approach to weighing (and using or discarding) alternative views. Customers receive the material separately, and need to work out for themselves where the points of difference and agreement are. A clear articulation of the points of difference on substantive issues, and why they have come about - a potentially useful tool for decision-makers - is, save in exceptional circumstances, absent from the system.

RECOMMENDATION:

ONA and DIO should consult on and, where appropriate, reconcile their forward work programmes. As a minimum each should attend the other's existing weekly requirements meetings.

Notwithstanding the focus of recommendations in this and other chapters on reducing duplication between ONA and DIO, there will remain some areas of overlap. The agencies must ensure that substantive differences in assessment in this overlapping ground are clearly set out.

The National Assessments Process

The National Assessments process offers another, more formal, mechanism for contestability. The process by which National Assessments are developed, in which senior policy and intelligence officers debate and develop their key judgments on significant issues, is a robust exercise of challenge. The requirement to record dissenting views, where they exist, is outlined in section 8 of the Office of National Assessments Act 1977. This is an extremely important feature of the process from a contestability perspective: indeed Justice Hope had recommended that intelligence assessments should, in case of disagreement, expose the nature of the disagreement and provide for incorporation of minority dissents, and further that options of interpretation should be indicated. It is of note that dissenting views have been recorded only once to ONA's knowledge - and that some two decades ago - in the 220 National Assessments produced since 1978.

National Assessments

National Assessments are a special form of assessment provided for in the ONA Act. The Act stipulates that ONA must, as circumstances require, make assessments on international matters that are of national importance. National Assessments are intended to be an agreed product of relevant departments and agencies, although there is provision for dissent to be recorded when agreement cannot be reached.

Drafting is undertaken in consultation with an interdepartmental working group chaired by an ONA branch head. The Act, however, requires the Director-General of ONA to consult a National Assessments Board in relation to each National Assessment. The Board approves the terms of reference and the final text of each National Assessment - the terms of reference usually by an exchange of letters and the final text by a meeting of the Board.

The composition of the National Assessments Board is also laid down in the legislation. The board is chaired by the Director-General of ONA and includes representatives from the Department of Foreign Affairs and Trade, the Department of Defence, a member of the Defence Force, and an Australian Public Service officer (not from Defence or Foreign Affairs and Trade) with economic expertise. The Defence Intelligence Organisation normally provides the Defence representation. Other departments are invited to attend as observers.

The Act also provides for an Economic Assessments Board. Since 1998, however, the Economic Assessments Board has been subsumed into the National Assessments Board, which has approved economic assessments. A Treasury representative has been invited to attend most, if not all, meetings of the National Assessments Board since that time. Where Treasury does not attend, the position of economic expert is filled by the Deputy Director-General of ONA responsible for economic issues.

RECOMMENDATION:

The Office of National Assessments Act 1977 should be amended to remove the references to two assessments boards - the National Assessments Board and the Economic Assessments Board - to reflect the reality that there is only one National Assessments Board which covers strategic, political and economic issues, but with provision for different composition according to subject matter.

The production of National Assessments through a National Assessments Board is enshrined in ONA's legislation. During its early years, ONA produced more than 20 National Assessments annually, and they were the primary focus for the Director-General. This number has declined significantly over the years, with an annual average of less than four since 1984. In two years, none at all was produced.


NATIONAL ASSESSMENTS 1978-2004

Year   Number   Year   Number
1978     13     1979     39
1980     33     1981     25
1982     19     1983     13
1984      6     1985     11
1986      6     1987      2
1988      5     1989      5
1990      4     1991      4
1992      3     1993      3
1994      1     1995      0
1996      4     1997      0
1998      2     1999      8
2000      4     2001      2
2002      4     2003      4
2004      0
Total                   220
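
The figures quoted above - a total of 220 National Assessments and an annual average of fewer than four since 1984 - can be checked directly against this table. A minimal, purely illustrative sketch in Python (not part of the Inquiry's material) tallying the year-by-year counts:

    # Quick arithmetic check of the National Assessments figures quoted in the text,
    # using the year-by-year counts from the table above.
    counts = {
        1978: 13, 1979: 39, 1980: 33, 1981: 25, 1982: 19, 1983: 13,
        1984: 6, 1985: 11, 1986: 6, 1987: 2, 1988: 5, 1989: 5,
        1990: 4, 1991: 4, 1992: 3, 1993: 3, 1994: 1, 1995: 0,
        1996: 4, 1997: 0, 1998: 2, 1999: 8, 2000: 4, 2001: 2,
        2002: 4, 2003: 4, 2004: 0,
    }

    total = sum(counts.values())                      # 220, matching the table's total row
    since_1984 = [n for year, n in counts.items() if year >= 1984]
    average = sum(since_1984) / len(since_1984)       # 78 assessments over 21 years, about 3.7

    print(f"Total 1978-2004: {total}")
    print(f"Annual average since 1984: {average:.1f}")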

In part the decline represents the increasing utility of intelligence to government, and the consequent high demand for current intelligence. It also reflects the pace of government business: with new technology and a heightened tempo of activity, the demand for immediate advice has grown. The focus on National Assessments has also varied according to the preferences of successive Directors-General and, in recent years, has been influenced by some debate about the National Assessments Board process.

But the reasons, which are many and subject to differing perceptions, are less important than the fact of the decline in National Assessments. ONA needs the capacity both to serve current intelligence needs and to produce comprehensive assessments on issues of strategic significance.

RECOMMENDATION:

ONA should produce a greater number of National Assessments on issues of strategic importance to Australia and reflect significant dissenting views. A National Assessment should be prepared prior to any significant deployment by the Australian Defence Force, and in support of major strategic reviews.

At a less formal level, ONA and DIO joint reporting provides some of the benefits of National Assessments without the overheads. Such reporting should continue where appropriate, but not be used as a substitute for National Assessments on areas of significance.

The inclusion of officers from policy departments such as the Department of Foreign Affairs and Trade and the Department of the Treasury in the National Assessments Board is specified in ONA's legislation. The contribution of such officers is critical - they bring both unique expertise and policy context to the National Assessments process, which the Inquiry finds both warranted and appropriate given the importance of the topics under consideration. The Director-General of ONA has a vital role in ensuring that policy considerations inform intelligence judgments without influencing their integrity: the Inquiry received no indications of inappropriate influence from policy departments in any previous National (or Economic) Assessments Board.

Contestability Within Agencies

In light of the recommendation in Chapter 7 on a revised DIO mandate, internal contestability mechanisms become critically important to meeting government needs.

Within ONA, contestability is built into the process of developing an assessment, primarily through circulation of drafts, both within and outside the organisation, and through the clearance process, where both the key judgments themselves and the sources which underpin them are challenged. All ONA product is cleared for release by the Director-General. The level of challenge clearly varies with the significance and complexity of the issue under consideration, but it is a robust process underpinned by a strong culture of intellectual rigour. As a larger organisation producing greater quantities of product, DIO's processes are understandably somewhat different, with less external review of drafts, and clearance of product for release typically undertaken at a more junior level. As at ONA, there is a strong culture of contest and challenge.

Key to the success of internal contestability is the capacity of the analyst to engage widely to ensure the fullest possible range of views, and to interrogate the systems which provide the information. It is necessary not only to test what one has, but to identify what one does not have, and to seek to fill those gaps where possible. While the agencies recognise the vital nature of this and reinforce it consistently with analysts, including through induction training, it does not always happen. Some aspects of the analytic workforce work against it: in personality terms, analysts are more likely to tend towards introspection than extroversion, and broad, dynamic engagement typically does not come naturally. Security issues, where external sources or other government officials are not cleared for sensitive material, can compound the problem. The Inquiry's study of Iraq found that, while issues of high significance were discussed across agencies, the process of interrogating sources and engaging with external interlocutors was not a sufficiently dynamic one.

RECOMMENDATION:

ONA and DIO should institutionalise measures to ensure rigorous and interactive challenging of sources, and effective dialogue between collectors and assessors. They should also institutionalise measures to ensure effective challenge to judgments, including formal peer review mechanisms within, between and outside the agencies, and between technical and geographic experts.

An alternative approach to these 'mainstream' mechanisms for internal contestability involves establishing cells within agencies specifically designed to provide alternative - or deliberately contrary - views, either against authoritative assessments, or on self-generated topics. Often called 'red cells', these units provide contestability either by taking a fresh look at issues or playing the devil's advocate by looking for gaps or mistakes in mainstream assessments. Red cells use 'brainstorming' techniques and rigorous peer review.

This system of 'red teaming' has certain attractions. It gives contestability an established place in the system; and it ensures that specific resources are devoted to testing assessments and generating fresh ideas. In an intelligence system with bountiful resources, red cells would warrant a place.

In the Australian intelligence system, however, with limited funding, the establishment of red cells seems extravagant. Scarce high-quality analytical staff should be devoted to core assessment tasks, not to developing artificial critiques. Sufficient internal contestability should be achievable through the critical analysis of assessments by in-line staff, notably those in management positions. Further, there is a danger that artificial criticism might drive out good, straightforward assessment, and lead to as many blind alleys as fruitful paths.

Competing Sources

The focus so far has been on the Australian assessment agencies' product. But ministers and other intelligence customers get a range of related information which serves to challenge the judgments presented by the assessors. Some receive an amount of unassessed intelligence directly from the collection agencies, and can make independent judgments on its meaning. Diplomatic reporting from our overseas posts is a significant source of information - the volume of DFAT reporting on international, defence and security issues outweighs the volume of assessed intelligence by a factor of approximately twelve. And of course there is a range of policy advice from departments, which uses various sources, including intelligence, to come to judgments and make recommendations.

Foreign assessments, either in toto or as references within ONA and DIO reports, are also available to government, and provide an alternative view. Chapter 7 discusses the importance of getting greater access to, and making greater use of, assessments sourced from countries other than the US and the UK. This strategy would further extend the range and nature of material available to government.

An increasingly significant source of contestability comes from outside the government. The past four years have seen the most welcome arrival of two independent think tanks focused on international policy and on intelligence and security: the Australian Strategic Policy Institute (ASPI) and the Lowy Institute for International Policy. Unlike some previous think tanks with a similar focus, which had difficulty maintaining critical mass, both have secure funding bases and strong leadership and staff. Without access to intelligence, such organisations are not on an equal footing with ONA and DIO in providing contestability; nonetheless, ASPI has already produced a number of useful strategic reports, and the Lowy Institute promises to do likewise in its field of international policy. An illustration of ASPI's utility in the strategic policy arena is the significant role its report on the Solomon Islands played in supporting government decision-making in response to the worsening crisis there in 2003.

Complementing these institutes are a number of well informed and well respected commentators and journalists. Many have significant foreign experience, and bring to their commentaries a deep cultural understanding of the countries on which they write. Many bring a fresh perspective to the debate - and often the capacity to view world events through the eyes of another nation or culture. And many have a deep understanding of what drives government and what its challenges are. They represent a potent and not to be underestimated capability.

How Do We Compare?

The US and UK systems offer points of comparison, but are ultimately of limited value to Australia's specific circumstances. While we have much in common with US and UK partners, Australia's intelligence needs are specific, and foreign models could not usefully be transferred without significant modification.

Size matters. The vast US system has the ability to support multiple assessment agencies - indeed, all 15 members of the US Intelligence Community are able to generate assessments, and many subagencies within those organisations produce assessments in their own right. Most American commentators regard this range of agencies - and their ability to deliver 'competitive analysis' - as one of the strong features of their system. As a point of comparison with Australian National Assessments, US National Intelligence Estimates register some form of dissent in approximately 10 per cent of cases.


US Intelligence Agencies

CENTRAL INTELLIGENCE AGENCY (CIA): provides foreign intelligence on national security topics to national policy and decision-makers.

DEFENSE INTELLIGENCE AGENCY (DIA): provides military intelligence to war-fighters, policy makers and force planners.

NATIONAL GEOSPATIAL-INTELLIGENCE AGENCY (NGA): provides geospatial intelligence in support of national security.

NATIONAL RECONNAISSANCE OFFICE (NRO): coordinates collection and analysis of information from aeroplane and satellite reconnaissance by the military services and the CIA.

NATIONAL SECURITY AGENCY (NSA): collects and processes foreign signals intelligence information for national decision-makers and war-fighters, and protects critical US information security systems from compromise.

DEPARTMENT OF HOMELAND SECURITY (DHS): prevents terrorist attacks within the United States, reduces America's vulnerability to terrorism and minimises the damage from attacks that do occur.

DEPARTMENT OF ENERGY: performs analyses of foreign nuclear weapons, nuclear non-proliferation and energy security-related intelligence issues in support of US national security policies, programmes and objectives.

FEDERAL BUREAU OF INVESTIGATION (FBI): investigates acts of terrorism; deals with counterespionage and data about international criminal cases.

DEPARTMENT OF STATE: deals with information affecting US foreign policy. This includes the Bureau of Intelligence and Research (INR) which provides intelligence assessment to the Secretary of State and senior officials in the department.

DEPARTMENT OF TREASURY: collects and processes information that may affect US fiscal and monetary policy.

ARMY, NAVY, AIR FORCE, AND MARINE CORPS INTELLIGENCE ORGANISATIONS: each collects and processes intelligence relevant to their particular Service needs.

COAST GUARD INTELLIGENCE: deals with information related to US maritime borders and Homeland Security.


No doubt there are advantages in this level of contestability. But for Australia, with fewer resources, a level of overlap even remotely like that in the US system would be profligate.

The proliferation of US agencies also reflects the wider US government system, designed to satisfy the bifocal interests of the executive and the congress. The Westminster system, with its greater focus of power, does not need, or encourage, such a spread of assessment agencies.

Unlike the US, the UK system embeds contestability inside the core analysis unit. At the centre of the UK intelligence system is the Joint Intelligence Committee (JIC). Meeting weekly, the JIC settles the texts of all important UK intelligence assessments. Under a chairman based in the Cabinet Office, the meeting brings together the heads of all of the intelligence agencies with key policy figures - from the Cabinet Office, the Foreign and Commonwealth Office, the Treasury and the Ministry of Defence. Servicing the JIC is a small group of talented analysts, drawn from the same group of intelligence and policy agencies. With a range of players at the analysis table, differences in view are highlighted and resolved inside the assessment process.

The JIC system has three advantages: first, it provides senior readers with one indisputably peak source of assessments on key topics; second, because all of the key agencies sit together to settle assessments, it is supported by a committee of experts from each agency, who bring a very high level of specialist knowledge on each topic; and third, because of the committee structure of the JIC system, lines of analysis are fully contested inside the system.

From an Australian perspective, the Inquiry found these benefits to be outweighed by the overheads of supporting such a committee structure, and the time delay inherent in a system where assessments are cleared by a weekly committee meeting.

Other Proposals

The Inquiry received one recommendation for an additional agency to be established - an analytic element within the Department of Foreign Affairs and Trade along the lines of the US State Department's Bureau of Intelligence and Research, commonly known as INR. While the rationale for this was to exploit more fully the information gathering capacities of DFAT officers overseas, and to provide more strategic assessments, such an organisation or element would also provide further contestability. The Inquiry does not find the arguments for creating an additional organisation compelling, primarily on the basis of affordability and further strain on the limited pool of qualified staff. The Australian system is not of a size to support duplication of intelligence functions, and runs the risk of losing critical mass in such capability if it does so. ONA has full access to DFAT reporting - indeed it is the second largest source for ONA after open source material although, as reflected in Chapter 2, analysts' access to diplomatic reporting is contracting with the reduction in Australian diplomatic resources. A rebalancing within ONA towards the strategic would produce the effect intended in the proposal for the creation of an INR-like body without the need for an additional organisation.

Conclusion

There are many positive features to the delivery of contestability in the Australian system. The culture of challenge and contest within the agencies is strong, and some simple-to-achieve process changes would give it greater assurance. The external environment is providing an increasingly positive and effective source of alternative views.

But against a clear and consistent message from ministers and a broad range of senior customers that contestability is valued, the Australian intelligence community needs to be more deliberate in its efforts to provide it. Most importantly, it needs to determine the most effective means of achieving contestability, in terms of both outcome and cost.

The Inquiry finds that contestability, highly valued by most intelligence customers, is not optimised under the current arrangements. Overlap between ONA and DIO is not managed to support contestability, and National Assessments are not being used to best effect.

The changes recommended to DIO's mandate in Chapter 7 will not - nor are they designed to - eliminate overlap between the two agencies. If the recommendations of the Inquiry are adopted, the decrease in overlap in terms of volume will be balanced by more deliberate management of what remains. This would be supplemented by strengthened internal contestability mechanisms, subject to external scrutiny. More National Assessments, with their inherent processes of debate and contest, would be produced, and the recording of dissenting views would be encouraged. As an adjunct, we can expect that the external inputs, such as those currently provided by think tanks, will continue to grow in number and capacity with a continuing strong focus on security issues nationally.

 
