Roundtable 10-15 on Intelligence Success and Failure: The Human Factor


H-Diplo | ISSF Roundtable, Volume X, No. 15 (2018)

issforum.org

Editors: Thomas Maddux and Diane Labrosse
Web/Production Editor: George Fujii

Uri Bar-Joseph and Rose McDermott. Intelligence Success and Failure: The Human Factor. New York: Oxford University Press, 2017. ISBN: 9780199341733.

Published on 4 June 2018

Shortlink: tiny.cc/ISSF-Roundtable-10-15
Permalink: https://issforum.org/roundtables/10-15-human-factor

PDF URL: https://issforum.org/ISSF/PDF/ISSF-Roundtable-10-15.pdf


© Copyright 2018 The Authors

Introduction by Robert Jervis, Columbia University

For decades, political scientists have been taught that it is dangerous if not forbidden to search on the dependent variable. By looking only at cases in which the effect of interest occurs, we cannot infer causation because the factors that we think are powerful may be only necessary conditions, which means that they can be present in cases in which the effect does not appear. They then cannot readily be labelled as causes in the sense of distinguishing between instances in which the effect is present and those in which it is absent. Yet almost all studies of intelligence failures have looked only at the failures themselves, without making comparisons with successes. This was true even of scholars who not only knew about this problem, but had stressed it in their teaching. I can say that with some confidence because I am in that category. While I urged others to look at successes as well as failures, I did not do so.[1] Erik Dahl, one of our reviewers, recently used this methodology,[2] but our reviewers agree that until this volume, no one had sought such careful and controlled comparisons that are developed here.

Our reviewers find much to praise in this book. Dahl calls it “a valuable contribution,” Genevieve Lester concludes that it “is original, deeply researched, and fascinating,” Jeffrey Friedman judges that it “makes a major contribution to the study of intelligence and military decision-making,” Joshua Rovner calls it “a unique and valuable book, breaking new ground on a familiar topic,” and Keren Yarhi-Milo calls it “a truly remarkable book that offers great theoretical and empirical insights.” They agree that in addition to focusing on successes as well as failures, the book is original in shifting the focus from intelligence services to policy-makers, who are the people that in the end must determine whether and how to act on intelligence. In a real sense, intelligence can succeed only if decision-makers draw the correct inferences, and follow, if not the best, then at least good courses of action. As Uri Bar-Joseph and Rose McDermott stress, not all decision-makers are alike, and we need to probe their personalities and skills in order to fully understand intelligence failures and successes. Particularly important are the leader’s need for cognitive closure and his (all the leaders studied here are male) narcissism.

All the reviewers note the policy relevance of the study, and join Bar-Joseph and McDermott in being critical of the American propensity to believe that organizational reforms will bring an end to intelligence failures.

The reviewers generally applaud the pairing of successes and failures within the same war, which permits the authors to focus on whether leaders learn from the initial surprise attack how to better utilize and evaluate intelligence. But as Dahl notes, while the importance of learning is plausible, the case studies do not always allow the authors to pin down what is learned or how learning occurs (or in some cases, does not occur). Furthermore, Friedman notes that in the important case of the American failures to anticipate both the North Korean attack on the South and the subsequent Chinese intervention, while General Douglas MacArthur was the main leader at fault in the latter case, he was less involved in the former. Lester points out that for both MacArthur and Soviet dictator Joseph Stalin, the leader’s authoritarianism affected their staffs, setting off a cycle of reinforcing and misplaced confidence.

Rovner points to the value of examining intra-war cases that often receive less attention, but would have liked the authors to have done more to integrate their arguments about leaders with their discussion of the prevailing organizational culture within intelligence services.

Yarhi-Milo wonders whether the selection of cases involving wars that began with a successful surprise attack might limit the generalizability of some of the arguments, and also calls attention to additional dimensions of leaders’ personalities that can play important roles.

Bar-Joseph and McDermott provide separate replies. In addition to engaging with the reviewers, Bar-Joseph discusses how recent events support the book’s emphasis on the importance of decision-makers’ personalities and on the value of human sources in intelligence.

Participants:

Uri Bar-Joseph (Ph.D., Political Science, Stanford University) is a Professor Emeritus of the School of Political Science, Haifa University, Israel. He focuses on strategic and intelligence studies, with special emphasis on the Arab-Israeli conflict and Israeli security policy. He is the author of six books in addition to numerous refereed journal articles and book chapters. His The Angel: The Egyptian Spy Who Saved Israel (HarperCollins, 2016) won the 2017 National Jewish Book Award and the Association of Former Intelligence Officers (AFIO) “Best Intelligence Book of 2017” award. A Netflix production based on the book is due to be released in fall 2018.

Rose McDermott is the David and Mariana Fisher University Professor of International Relations at Brown University and a Fellow in the American Academy of Arts and Sciences. She received her Ph.D. (Political Science) and M.A. (Experimental Social Psychology) from Stanford University and has taught at Cornell, UCSB, and Harvard. She has held fellowships at the Radcliffe Institute for Advanced Study, the Olin Institute for Strategic Studies and the Women and Public Policy Program, all at Harvard University. She has been a fellow at the Stanford Center for Advanced Studies in the Behavioral Sciences twice. She is the author of four books, a co-editor of two additional volumes, and author of over two hundred academic articles across a wide variety of disciplines encompassing topics such as experimentation, emotion and decision making, and the biological and genetic bases of political behavior.

Robert Jervis (Ph.D., University of California, Berkeley, 1968) is the Adlai E. Stevenson Professor of International Politics and has been a member of the Columbia political science department since 1980. In 2000-2001, he served as President of the American Political Science Association. Professor Jervis is co-editor of the “Cornell Studies in Security Affairs,” a series published by Cornell University Press, and a member of numerous editorial review boards for scholarly journals. His publications include Perception and Misperception in International Politics; The Meaning of the Nuclear Revolution; System Effects: Complexity in Political and Social Life; American Foreign Policy in a New Era; and Why Intelligence Fails: Lessons from the Fall of the Shah and Iraqi WMD, as well as several edited volumes and numerous articles in scholarly journals. His latest book is How Statesmen Think: The Psychology of International Politics.

Erik J. Dahl is an associate professor of national security affairs at the Naval Postgraduate School in Monterey, California, where he teaches in the National Security Affairs Department and the Center for Homeland Defense and Security. His research and teaching focus on intelligence, terrorism, and international and homeland security, and he is the author of Intelligence and Surprise Attack: Failure and Success from Pearl Harbor to 9/11 and Beyond (Georgetown University Press, 2013). He retired from the U.S. Navy in 2002 after serving 21 years as an intelligence officer, and received his Ph.D. from the Fletcher School of Tufts University.

Jeffrey A. Friedman is Assistant Professor of Government at Dartmouth College. He studies the ways in which national security officials assess uncertainty, and how Americans prioritize responses to different forms of risk. His research has been published by Intelligence and National Security, International Organization, International Security, International Studies Quarterly, the Journal of Conflict Resolution, Security Studies, and the U.S. Army War College. He received his Ph.D. from the Harvard Kennedy School in 2013.

Genevieve Lester is the De Serio Chair of Strategic Intelligence and Associate Professor at the U.S. Army War College. Her areas of interest are international relations and security with an emphasis on accountability and domestic security institutions. She also studies covert action and the relationship between intelligence and Special Operations. She holds a PhD and MA in Political Science from the University of California, Berkeley, an MA in International Economics/International Law and Organizations from the Johns Hopkins University, School of Advanced International Studies, and a BA in history from Carleton College. She recently published her first book, When Should State Secrets Stay Secret? Accountability, Democratic Governance, and Intelligence, with Cambridge University Press. She is currently working on two more book projects, one on restoring the credibility of security services after failure, and another on the interaction of intelligence services and Special Forces.

Joshua Rovner is Associate Professor in the School of International Service at American University. He is the author of Fixing the Facts: National Security and the Politics of Intelligence (Cornell University Press, 2011), and the co-editor of Chaos in the Liberal Order: The Trump Presidency and International Politics in the 21st Century (Columbia University Press, forthcoming in 2018).

Keren Yarhi-Milo is an Assistant Professor of Politics and International Affairs at Princeton University. She is the author of Knowing the Adversary: Leaders, Intelligence, and Assessment of Intentions in International Relations (Princeton University Press, 2014). Her new book, Who Fights for Reputation? The Psychology of Leaders in International Conflict, is forthcoming from Princeton University Press (2018).

 

Review by Erik Dahl, Naval Postgraduate School

In this new book, Uri Bar-Joseph and Rose McDermott make a valuable contribution to the literature on intelligence, particularly through their examination of the role that leaders play in using intelligence wisely or poorly. They provide a very useful alternative perspective to that found in most studies of intelligence success and failure—including my own. Most works in intelligence studies focus on the role played by intelligence agencies and analysts, and seek to determine whether failures stem from either insufficient collection, or improper analysis of the intelligence that has been collected. In other words, the question usually asked about intelligence failures is: was it a failure to collect the dots, or a failure to connect the dots through analysis?

Bar-Joseph and McDermott instead turn the focus on the consumers of intelligence and in particular on the leaders who receive the products of intelligence agencies. They argue that it is here, at the level of leaders, rather than of intelligence analysts, that the causes of intelligence success and failure can be found, especially in cases where leaders are able to learn from intelligence failure and allow failure to be followed by success (27).

The book’s most original contribution is in the area of theory, as the authors focus their analysis on the level of the individual, which is typically ignored or treated as secondary in most political science literature. They note that “failure often constitutes our most valuable teacher” (41), and they seek to understand why some leaders learn from intelligence failures, while others do not. They argue that the answer often lies in the psychological makeup of political and military leaders, with some characteristics, such as openness to new ideas, tending to be conducive to learning, while others, such as narcissism, tend to hinder effective learning. While such findings may seem obvious, they argue—appropriately, in my view—that not enough work has been done to examine the role of such psychological factors in the use of intelligence.

The major part of the book consists of their empirical evidence for the argument that the human factor—learning or the failure to learn on the part of leaders—plays a more important role in intelligence failure and success than has been previously recognized. The authors offer this evidence in the form of a set of three pairs of case studies, or ‘dyads’. In each comparison, the first case is one of intelligence failure, and the core of the analysis is a determination of whether or not leaders learned enough from that event to prevent a later failure. In two of the cases the second event is an intelligence success, which they argue demonstrates that leaders learned from the initial failure, while in the third dyad the second event is also a failure, indicating a lack of learning.

The first dyad compares the well-known Soviet failure to anticipate the German invasion of June 1941 (Operation Barbarossa) with what they describe as the successful Soviet use of intelligence surrounding the Battle for Moscow four months later. Their section on Barbarossa is a useful review of this case, but the discussion of the Battle for Moscow is less familiar and more interesting. The authors argue that Soviet leader Joseph Stalin’s key decisions leading up to this climactic battle in October 1941 represented a much more effective use of intelligence than in the run-up to the German invasion only four months earlier. In particular, his decisions to transfer large numbers of troops from the Far Eastern Front to the European front, and to stay in Moscow himself, hinged largely on his acceptance of intelligence that indicated Japan did not intend to attack in Siberia, thus freeing Soviet troops to be transferred west. Their evidence is strong that this case should be considered an intelligence success, adding the Battle for Moscow to the relatively small list in the intelligence studies literature of major recognized successes. But I found the discussion of Stalin’s learning process to be less satisfying, as the evidence for his having learned from the failure of Barbarossa seems largely to consist of the fact that in the first case he rejected multiple warnings, while in the second he listened to intelligence.

Their second dyad compares two famous American intelligence failures: the failures to anticipate the North Korean invasion of South Korea in June 1950 and then the Chinese intervention later that year. Although these are very familiar cases, the authors provide useful analysis, making extensive use of secondary as well as some primary source material; students studying these cases would do well to start with this book. The authors’ discussion of the role and background of General Douglas MacArthur’s intelligence chief, Major General Charles Willoughby, is especially useful. But here again, as with the Soviet cases, the discussion of learning is not completely satisfactory. The authors argue that the repeated failure is at least partly the result of an ineffective learning process (159). But this treats the failure to learn as the explanation for what went wrong, when it seems just as reasonable to treat failure to learn as the problem—which then requires a separate explanation.

The third and final dyad compares Israel’s intelligence failure at the beginning of the 1973 October (Yom Kippur) War with its effective use of intelligence later in that short war. As with the Soviet case comparison, this dyad pairs one well-known example of intelligence failure with a case of intelligence success that has been much less discussed. The story of the Yom Kippur failure, and of the role of the Israeli Director of Military Intelligence, Major General Eli Zeira, has been told before, most notably by Bar-Joseph in several articles and his book The Watchman Fell Asleep.[3] Nonetheless, the new book provides a useful and extensive discussion of this case, relying largely on Hebrew sources. The coverage of the Israeli “special means of collection,” which Zeira famously refused to activate, is especially useful in that it brings together what appears to be the latest open-source thinking about exactly what was involved with this collection source, “whose exact nature is still a state secret” (192).

Newer to me, and perhaps to others who are not specialists, is the discussion of the Israeli intelligence success on October 12, only six days after the war had begun. Mossad officials were able to cut through normal channels and use innovative methods to get a critical warning to their boss, the Mossad chief, who was in the middle of a meeting of Prime Minister Golda Meir’s kitchen cabinet. That warning—that Egyptian forces were preparing for a new offensive—“changed the course of the war” (223), as it gave Israel time to prepare to meet the offensive and regain the initiative.

Again with this case comparison, as with the other two, I found intelligence learning to be unconvincing as an explanation for the difference between the first case and the second. I also had a minor quibble with a comment the authors make in this section: They argue that “in Israel (just as in any other democracy), civilian leaders cannot be more hawkish during times of war than their generals” (224-225). This is contrary to the commonly held view that military professionals are often in fact more cautious than civilians, who may be unfamiliar with the true costs and horrors of war. Bar-Joseph and McDermott may have a point, however, in that they are discussing times of war, and it may be that the conventional “military conservatism” thesis only applies during times of peace.[4] But U.S. history provides at least one prominent example of a civilian leader who was more hawkish during wartime than some of his generals—Abraham Lincoln.

These case studies add up to a convincing argument that in these examples, at least, the responsibility for failure—and on occasion success—can be placed at the feet of military and political leaders. Cognitive biases, personality attributes such as the need for cognitive closure, and narcissism have all clearly played major roles in these failures. But this analysis raises the question: how widely do these lessons apply? Are leaders such as Stalin, MacArthur, Willoughby, and Zeira the exception, or are these pathologies and personality failings more common than most political scientists and intelligence scholars have believed? I am not sure that the human factor can help explain more recent intelligence failures such as the 9/11 attacks, the failure to understand Iraq’s weapons of mass destruction program, or the inability to anticipate developments such as the Arab Spring or the rise of the Islamic State of Iraq and Syria (ISIS).

Bar-Joseph and McDermott conclude by discussing several policy implications that stem from their research. First, they argue that the tendency of the American intelligence community to reorganize after major failures is counterproductive, since failures often stem from problems that could be better addressed through more effective personnel screening and training. Second, they encourage greater use of human intelligence, and argue that American intelligence agencies (and intelligence scholars as well) have often tended to put too much faith in technical means of collection. Third, they note the general American weakness in understanding foreign cultures, and recommend that the intelligence studies community devote more attention to issues (including intelligence failures) beyond the Anglo-Saxon world. These are all sound suggestions, but it is their final recommendation that appears to be both the most significant, and the most difficult to implement: they encourage citizens “to vote for leaders who appear more balanced and open to change” (244).

While there is nothing in the book to suggest that the authors had Donald Trump in mind when they wrote it, this reviewer found it difficult not to wonder whether it could apply to the current occupant of the Oval Office. For example, consider narcissism, which Bar-Joseph and McDermott find plays a significant role in intelligence failure. Their description of the narcissist personality may remind some readers of Trump: “Narcissists can be quite charming, especially at first blush, but once someone pricks their narcissist bubble of superiority by suggesting they are anything less than perfect, anger often erupts toward the offending party” (47-48). Many of Trump’s critics argue he is a prime example of such a personality.[5]

Even without trying to make broad comparisons between Trump and the leaders examined in this book, we can see that a number of the situations that contributed to these intelligence failures can be found in Washington today. For example, the authors note that one of the few men whom Stalin trusted was Germany’s leader Adolf Hitler. They point to several comments Stalin made that showed his respect for the German leader such as “Hitler, what a great man!” (92), and argue that this sense of respect led Stalin to believe Hitler’s claims of peaceful intentions toward the Soviet Union. Is it too much of a leap to wonder whether Donald Trump’s admiration for Russia’s leader Vladimir Putin could lead him to misjudge Russian intentions?

Similarly, Bar-Joseph and McDermott describe the groupthink and other troublesome dynamics that surrounded the Bataan Gang, the group of MacArthur advisors whose members told him what he wanted to hear—that is, when they were not fighting each other (165). A critic might find a similar dynamic at work in Trump’s inner circle.

There are, to be sure, many differences between the leaders examined in this book and Donald Trump. For example, Stalin largely distrusted his generals (94), while by all accounts Trump admires military men. But given President Trump’s very public tensions with, and disdain for, the intelligence community that works for him, it seems reasonable to wonder whether someday Bar-Joseph and McDermott will find it necessary to write a new edition to this book, with Trump as a leading character.

 

Review by Jeffrey A. Friedman, Dartmouth College

Intelligence Success and Failure: The Human Factor makes a major contribution to the study of intelligence and military decision-making. Uri Bar-Joseph and Rose McDermott employ a clever research design based on ‘dyads’ – paired case studies involving intelligence successes as well as intelligence failures – to explore why leaders vary in their ability to anticipate strategic surprise. This research design offers clear advantages over studying intelligence failures alone, and it reveals an important substantive lesson: that intelligence success is often born of failure, and specifically of the way that leaders adapt their decision-making styles in response to catastrophe.

The book’s discussion of the ‘human factor’ in intelligence departs from conventional approaches to this subject. While there is no shortage of excellent scholarship examining the psychological drivers of intelligence failure, much of this work focuses on factors like confirmation bias, the availability heuristic, and other cognitive limitations that seem hard-wired into the human brain. Understanding these handicaps, along with the inherent challenges of detecting enemy deception and managing complex organizations, suggests that intelligence failures are systemic, inevitable, and thus in many ways forgivable. As Roberta Wohlstetter put it in her seminal study of Pearl Harbor, the officials who fail to anticipate strategic surprise are often “honest, dedicated, and intelligent men.”[6]

Such individuals play little role in Intelligence Success and Failure, which attributes strategic surprise to the psychopathologies of specific people. Bar-Joseph and McDermott argue that the Soviet failure to anticipate Operation Barbarossa resulted from Soviet leader Joseph Stalin’s bizarre identification with Germany’s Adolf Hitler, his need for cognitive closure, and his paranoid belief that warnings of invasion reflected a British conspiracy. The book blames Israel’s surprise in the Yom Kippur War on the mistakes of Eli Zeira, the Director of Military Intelligence who convinced Prime Minister Golda Meir to disregard evidence of impending conflict by cultivating faith in a “special means of collection” (192-193) within Egypt, even though he avoided activating that very tool. The authors argue that the United States’ inability to anticipate Chinese intervention in Korea stemmed from General Douglas MacArthur’s “groundless optimism” (171), born from a narcissistic personality and bolstered by a chief of intelligence who suppressed unwelcome information. Throughout these case studies, the honest, dedicated, and intelligent analysts did not fail to convince their leaders of strategic surprise on account of their innate cognitive limitations. Instead, their efforts were blocked by high-ranking individuals with special personality flaws.

The authors’ willingness to personalize blame gives Intelligence Success and Failure a more critical tone than many other prominent works in this field. The flip side to this criticism, however, is that Bar-Joseph and McDermott portray strategic surprise as being less inevitable than the conventional wisdom suggests. Their empirical work demonstrates that once leaders become aware of their personality flaws and organizational vulnerabilities, they can take concrete steps to mitigate these weaknesses. Such measures include avoiding overly insulated decision-making structures and selecting personnel based on psychological profiles, a topic which Bar-Joseph and McDermott explore in their 2008 Foreign Policy Analysis article, “Change the Analysts and Not the System.”[7] In advancing these arguments, Intelligence Success and Failure arms scholars with a richer set of ideas about the psychological dimensions of strategic surprise, showing how a personality-based approach can complement the focus on heuristics and biases that has dominated scholarship in this area to date.

Each of the book’s paired case studies begins with an adverse surprise. Stalin rejected clear indications that Germany was preparing for invasion in 1941; the start of the Korean War shocked most U.S. officials in 1950; and while Israeli leaders assumed they would have several days’ warning of Egyptian attack in 1973, they learned of the invasion with just ten hours to spare. The latter case is especially well-presented, based on Bar-Joseph’s extensive prior research on Israeli intelligence performance.[8]

The heart of the book’s empirical analysis revolves around the ways in which leaders reacted to these events. Bar-Joseph and McDermott explain how Stalin restrained his tendency to prematurely ‘seize’ upon assessments of uncertainty, which allowed him to revise his expectation that Japan would attack Eastern Russia. This realization came just in time for Stalin to send Soviet forces westward to fight the Battle of Moscow, which became a major turning point in the war. This case does not provide the cleanest comparison to Operation Barbarossa: understanding that an attack will not happen is not the same as determining that one will, for example, and it is presumably easier for leaders to prioritize concrete threats to the capital over speculative threats elsewhere. Yet Bar-Joseph and McDermott make a convincing case that Stalin’s learning process required adapting some of the most pathological elements of his decision-making style. In doing so, the authors provide credible evidence that leaders can mitigate their personality flaws when pressured to do so.

The Israelis also recovered from their failure to anticipate war by scoring a major intelligence coup. Their success came in realizing that the Egyptian Army would not hunker down as expected to defend initial gains, but that it would conduct further offensives at the Gidi and Mitla Passes. Following this realization, Israeli leaders discarded plans to pursue a ceasefire, repelled the Egyptian assault, and launched a successful counterattack. Again, there are some significant differences between the context of the failure and the context of the success. It is presumably easier to disregard indications of attack during peace than during war, for example, and intelligence capabilities during wartime typically enjoy greater resources and decision-maker bandwidth. But here, too, Bar-Joseph and McDermott offer a convincing case that intelligence success depended on leaders taking concrete steps to mitigate prior weaknesses, particularly by drawing input from outside Prime Minister Meir’s ‘kitchen cabinet,’ by listening to the views of intelligence officials who disagreed with Zeira, and by entertaining arguments based on fragmentary information rather than clinging to preconceptions in the absence of conclusive evidence to the contrary.

Douglas MacArthur provides the book’s most cautionary tale. After landing U.S. forces at Inchon, MacArthur moved north, disregarding “undisputable evidence” (153) that China was preparing to intervene. MacArthur’s overconfidence was exacerbated by his chief of intelligence, Charles Willoughby, who curated information to support MacArthur’s preconceptions. For example, when the U.S. army captured Chinese soldiers, Willoughby insisted that they were actually North Koreans; when it became obvious that Chinese units had entered the country, he claimed that they were just volunteers. MacArthur and Willoughby deliberately stifled information that contradicted their views in an effort to monopolize control of intelligence flowing back to Washington.

This case carries a lot of weight for the book’s overall research design. MacArthur’s failure to anticipate Chinese intervention offers the key source of cross-sectional variation in the book, suggesting that leaders who do not adapt their decision-making styles in the wake of failure are prone to repeat their earlier mistakes. Officials in Washington deserve substantial scrutiny here as well, both for failing to prevent MacArthur from monopolizing intelligence and for allowing MacArthur to take excessive risks on account of their fears that restraining his autonomy would create political blowback. A better-functioning bureaucracy might thus have been able to compensate for MacArthur’s and Willoughby’s malfeasance. The initial intelligence failure in this case also differs from the other dyads in the book. Unlike Operation Barbarossa and the Yom Kippur War, where leaders disregarded concrete evidence of impending conflict, most U.S. analysts were genuinely surprised by the North Korean invasion (123, 142). Since there is little reason to believe that MacArthur was personally responsible for the original failure, the episode did not necessarily offer clear guidance for learning and adaptation, nor was the second mistake a repetition of the first.

Yet none of these points undermines the strength of the case study in showing how personality flaws can shape the way national security officials make decisions under uncertainty. Even if the Truman administration had taken a more active role in restraining MacArthur, the result would have been to counteract the personality-driven influence of one high-ranking leader – this is exactly the kind of pathology that the book seeks to demonstrate, in part to encourage future administrations to pursue more aggressive countermeasures. While reasonable people can disagree about how much to generalize from this case, Bar-Joseph and McDermott take care to note the limitations of their research design (e.g., 3-4), and theirs is unusually sophisticated. The book’s ultimate goal is not to prove that personality-based factors are the sole or even the primary determinants of intelligence performance writ large, but rather to introduce scholars to a new, viable way of understanding the psychological origins of strategic surprise. In this respect, Intelligence Success and Failure offers a convincing and valuable resource for further scholarship.

Though the book’s case studies are all at least four decades old, they bear clear implications for the modern age. One such lesson is that anticipating strategic surprise is not just a matter of gathering and analyzing intelligence. Often, intelligence success also requires convincing leaders to overturn their preconceptions, a task far easier said than done given the unusual psychological profiles of the women and men who rise to high office. Such intelligence-policy tensions have been on vivid display during the early stages of the Trump presidency, but there is a more systematic reason why the book’s argument should continue to gain relevance. As collection capabilities become increasingly advanced, the challenge of anticipating strategic surprise will depend ever more on judging other leaders’ intentions. Since these judgments are inherently subjective, they grant broad latitude to decision-makers’ prior assumptions.

It will be interesting to see how the book’s arguments about personality-based factors hold up to further case studies. The authors carefully define their scope of analysis as explaining the origins of intelligence success in the wake of intelligence failures (27). By looking only at cases where intelligence initially failed, the book may present an overly negative picture of leader psychology. The next step in this research program would thus be to develop a broader theory of the conditions under which personality-based factors exert greater influence on intelligence performance than other major variables in the existing literature. This book cannot speak to that broader question given that its dyadic case study design holds structural factors constant, and so this is one area where the book points the way for further research.

I would also be interested to see research that extends the book’s discussion of the interaction between leader psychology and organizational structure. Bar-Joseph and McDermott’s case studies vividly demonstrate the drawbacks of relying on insular, personalized coteries of advisers. But there may also be situations where interpersonal trust and small-group cohesion increase the chances that leaders heed advice that runs contrary to their preconceptions. The fear of leaks alone can stifle controversial discussions among larger groups that lack social bonds. In these and other respects, scholars might seek to merge the lessons from Intelligence Success and Failure with existing scholarship on how foreign policy advisers shape small-group decision-making. Elizabeth Saunders, for example, argues that inexperienced leaders are more likely to marginalize divergent viewpoints, which contrasts with what Bar-Joseph and McDermott find in their case studies of Stalin and MacArthur.[9] Does this indicate a potential interaction effect between experience and personality type?

Last, some practical notes. The book’s cover features a blurb from former CIA Deputy Director Michael Morell, who writes that Intelligence Success and Failure should be “required reading for students of intelligence,” which sounds about right. In particular, the book’s case studies of Chinese intervention in Korea and the start of the Yom Kippur War would be excellent to assign for coursework on intelligence failures at either the graduate or undergraduate levels. The book’s chapter on the personality-based dimensions of learning could be assigned for discussions of the psychology of intelligence, particularly in complementing more traditional research on cognitive heuristics and biases. The book’s first chapter, which reviews existing scholarship on surprise attacks, would provide a useful foundation for graduate-level discussions of empirical approaches to intelligence studies, both for how the authors critique existing scholarship and for how they explicate their own research design. Readers should also note that Bar-Joseph and McDermott’s recent Intelligence and National Security article on Pearl Harbor and Midway adds further credibility to the idea that individual leaders play major roles in explaining the origins of intelligence success and failure.[10]

 

Review by Genevieve Lester, U.S. Army War College

According to the authors, “strategic surprise” is “the sudden realization that one has been operating on the basis of an erroneous threat perception” (9). The victim does not know whether, when, where, or how an attack will occur. Intelligence Success and Failure looks at an old question from a new perspective and provides a distinctive answer. Although the question of why a state can be surprised by attack is core to both the theory and practice of intelligence and strategic studies, it has never been answered definitively, despite an abundant literature.

In her original work on Pearl Harbor, Roberta Wohlstetter argues that the failure stemmed from the inability to differentiate valuable signals from indiscriminate noise.[11] Later works, most notably the 9/11 Commission Report, argued along similar lines that the attacks of September 11, 2001 occurred because of the U.S. Intelligence Community’s “failure to connect the dots.” In other words, according to the authors, it was a failure to build a theory even though the information was in hand; the information was simply indecipherable to analysts at the time.[12] Others insist that the problems of intelligence and warning are largely bureaucratic, or the result of apathy or indecision, or of the difficulty of coping with massive quantities of incoming information given the limits of human analysis (4). The response to these perceived problems has generally been blunt: a reorganization or restructuring of the agency or community of agencies that made the errors.

The authors look past these explanations and remedies. In order to develop a more sophisticated theory of surprise attack, they address three gaps – or ‘lacunae’ – in the literature. First, because most literature on surprise focuses on intelligence failure as an explanation, the theory building in this book requires a more comprehensive picture; the authors accordingly broaden the aperture to include success. Second, the study uses the individual as the level of analysis, building the psychology of the decision-maker into the process (3). This granular approach allows for a closer analysis of how an individual learns and adapts after failure, focusing on the obstacles to information processing – in this case, intelligence information – by decision-makers responsible for the “warning-response” phase (4). According to the argument, “the learning curve is our intervening variable in explaining success and failure” (2-3). The originality of this argument lies in its focus on individual leaders’ thinking and psychological makeup, explaining through this lens the strategic outcomes of intelligence-based decision-making.

The book is organized into two parts: the first establishes the analytical framework and examines the literature on surprise attack, while the second tests the developed hypotheses through four matched ‘dyads,’ or sets of comparative cases. The empirical detail demonstrates how and when decision-maker learning occurred, and investigates how decision-makers update prior belief systems with new information in order to develop new theories (48). Accordingly, the dyads are matched to place pressure on specific, illustrative inflection points. As the authors point out, each set’s first case begins with a mistaken assessment of the threat and thus ends in failure, while the second case ends in either failure or success. Each set of cases is examined through three sets of explanations: relative stakes, organizational culture, and personality (29).

The first dyad matches failure and success in the Soviet Union: the ‘Soviet surprise’ of June 1941 with the Battle for Moscow (October-December 1941), which demonstrated Soviet leader Joseph Stalin’s effective use of intelligence. The case explores the decision-making dynamics of the June 1941 surprise, noting that Stalin’s rigid decision-making style drove the first failure. According to the argument, the staff’s fear of Stalin made creativity and alternative suggestions extremely difficult to voice. The purges had removed senior officers in the army and security services, forcing inexperienced officers to take their place. This inexperience, as well as the fear of being purged oneself, led to an overall unwillingness to challenge decision-making (76).

Stalin wanted to delay the war until his forces were further modernized. He was also in denial about the timeframe of the German attack and did not adjust to a range of warnings about its imminence. Intelligence tailored to please is not new, but motivated obliviousness to a stream of warnings is contrary to usual practice, and it runs throughout the cases in this study. While the first example shows failure, the second contrasts by examining the factors that led to success in the Battle of Moscow, by which time Stalin’s trust in his intelligence providers had improved and the stream of information was successfully integrated into planning. The intelligence was more realistic and Stalin was more willing to accept advice from advisers, meaning that “the fear that dominated discussions before the war was less felt” (112).

This last comment represents an interesting thread that runs throughout the text: the effect of the personality of the senior decision-maker on his subordinate officers. The command environment and culture created by a pathological senior leader affect those in his immediate circle, as they aim to ensure their personal survival through pandering and deference. Obviously, this dynamic can create a forcing mechanism that destabilizes accurate intelligence reporting and appropriate decision-making, particularly if the leader has strong views about intelligence or requires intelligence to support an existing belief or plan. A question then arises about how the shock of attack is absorbed by the insulated group and whether learning is catalyzed by a breakdown of its cohesiveness. The cases provide a snapshot of the challenges of the learning environment. The decision-makers are assessed in terms of their reaction to emergent threats through analysis of how they update their prior assumptions. Secondarily, the other individuals around the senior leaders are assessed for enabling behavior, which can lead to poor intelligence consumption and decision-making.

The second dyad explores two American failures: the June 1950 surprise of the North Korean attack on South Korea, and the lack of warning of two Chinese interventions in the fall of that year. Poor intelligence collection in the first case resulted in failure, while a failure to integrate and learn from a new and apparently overwhelming supply of intelligence resulted in the failure of the second. In the first case, intelligence was weak institutionally throughout the area as Asia was not a priority. Intelligence warnings drawn from the Central Intelligence Agency (CIA) were weak or non-existent and the CIA’s own analysts did not check their assumptions about the North Korean build-up that occurred directly before the attack. For example, rather than conveying warnings, the CIA argued that the build-up was a type of emergent security dilemma with a strengthening South Korea (133).

Misjudging the specifics of Communism in this context, and assuming that the Soviets were to blame, the U.S. used military force in a situation that had been instigated by the North Koreans. The assumption was that the Soviet Union had total control over its adherent states and thus that any action by these states was driven by the Soviets, leading to potential overreaction and, in the case of Korea, complete overreaction (139). Overall, failure was caused by a lack of intelligence focus on the Far East at that time and by the decision-making pathologies of the U.S. commander, General Douglas MacArthur, and the staff officers surrounding him. Those officers insulated the imperious and paranoid MacArthur by falsifying intelligence, providing faulty estimates, and foreclosing options, which led to failure. The second case, the Chinese intervention in the Korean War, ended in failure as well because leadership failed to gauge the Chinese threat accurately and because learning had not taken place from the first set of inaccurate intelligence estimates (145).

The third dyad compares two different snapshots of the 1973 October War: one that demonstrates the failure of warning, and a second, six days later, that demonstrates change and learning and was a success. In the first case, the Israelis broadly assumed that Egypt would not attack without sufficient airpower. The attack against Israel succeeded because Israeli leadership disregarded warning of impending attack. Further, a single individual, charged with military intelligence, believed so strongly that war with Egypt would not occur that he did not use all of his collection resources, leading to failure. Incredibly secure in his own assessment of the situation, Director of Israeli Military Intelligence Eli Zeira lied to senior political leaders about the failure to use this additional collection tool. He refused to adjust his conception that war would not occur, thus did not integrate new warning information, and did not succeed. By the second case, Israeli forces had adapted to the exigencies of the new situation, improvised, and succeeded.

The book assesses the behavior and learning of extreme personalities: German dictator Adolf Hitler, Stalin, and, to a lesser degree, MacArthur and Zeira. Because of this extremity, I am wary of the generalizability of its explanation. Characteristics such as narcissism and paranoia may push individuals to seek public, powerful positions, but not all leaders possess them. Does this narrative explain learning in multiple cases, or only in those where the decision-maker is psychologically marred?

The power of the ‘in group’ and the pathologies of staff-officer subordinates are perhaps more generalizable than those of the deciding leadership. In groups surrounding the senior decision-maker, the primary goal tends to be pleasing the leader. If this does not occur, subordinates are marginalized or replaced. Favoritism and attrition run throughout all bureaucracies, thus creating a survival mode among subordinates. What emerges is an insulating environment in which norms require thinking in line with the small group culture. Deviations in thought tend not to be accepted and thus assumptions are not questioned or challenged because attachment to the ‘in group’ and fear of being excluded are very strong.

The argument about Stalin and learning discusses the importance of fear in decision-making. Stalin had purged his senior officers, providing a vivid example of what he was capable of doing but also forcing heavy responsibility for intelligence matters onto the shoulders of junior officers. Inexperienced and afraid of suffering the fate of their senior counterparts, these officers did not effectively challenge Stalin’s assumptions and preconceptions – particularly his unwillingness to heed warning – in the first case. By the second case – the Battle for Moscow – the fear seems to have abated somewhat. MacArthur, to a lesser degree, likewise had a circle of officers he depended upon for their loyalty and who depended on him for his power and authority.

The authors address a serious problem inherent in the literature on intelligence: its U.S.-centric focus. Although it grew up in the shadow of older services, U.S. intelligence has clung to the exceptionalism of the American story, leaving others out. This tendency has severely limited its aperture, not only in practice but also in its theoretical study. The authors add non-U.S. cases aimed at delving into the mindsets of other decision-makers, adding nuance to some of the cognitive issues inherent in intelligence: groupthink, mirror imaging, and cognitive bias, which they discuss at some length.

This monograph is original, deeply researched, and fascinating. Unlike other explanations of intelligence failure and surprise attack, which focus generally on institutional, bureaucratic, and organizational rationales, the book seeks explanation through granular analysis. It is also especially interesting given the tendency of the United States to reorganize and ‘reform’ after failure, as the authors discuss in their concluding section (240). The only thing large-scale change reliably affects is the morale of officers tasked with changing, once again, in reaction to something over which they in many cases had very little control. Further, the authors focus on a crucial factor in the process of intelligence: the value of human intelligence (HUMINT). The more the U.S. has developed its intelligence services over time, the more those services have come to rely primarily on technical means, such as signals intelligence (SIGINT). Personality, pathology, leadership, and learning – the themes of this excellent book – all bring the individual back into the picture and bring individual agency to the problem of defending a nation from surprise attack.

 

Review by Joshua Rovner, American University

Uri Bar-Joseph and Rose McDermott have written a unique and valuable book, breaking new ground on a familiar topic. There is a venerable and vast literature about why intelligence services fail to provide warning of surprise attacks at the outset of war. Indeed, Pearl Harbor postmortems were arguably the launch point for the modern study of intelligence. But as Bar-Joseph and McDermott point out, the problem of warning does not end after the shooting starts, because intelligence services still have to predict how and when the enemy will attack next. And the consumers of intelligence have to decide whether or not to trust them. The book offers a novel theory of whether intelligence succeeds, and whether it gets a hearing.

The book makes two other important contributions. First, it includes two detailed case studies from outside the United States. Intelligence research has for decades concentrated on U.S. experience, but a recent burst of scholarship examines intelligence elsewhere. Bar-Joseph and McDermott’s book joins this overdue movement.[13] Second, it examines the sources of success as well as failure. There are many theories of intelligence failure, but we know much less about why intelligence works, because scholars have not explored success stories in as much depth. Nor, with rare exceptions, have they developed theories to explain them.[14] This book does both.

The intra-war performance of intelligence, as well as the quality of intelligence producer-consumer relations, has enormous strategic consequences. Learning from initial failures can help desperate countries recover. Soviet leader Joseph Stalin failed to heed warnings before Nazi Germany attacked in 1941, but he paid much closer attention to intelligence in the aftermath, and was able to prepare for the Battle of Moscow. Israeli intelligence failed to give sufficient warning before the Yom Kippur War in 1973, but provided excellent analysis later that helped Israel turn the tide.

Failure to learn from failure, on the other hand, can lead to catastrophe. The United States was surprised by North Korea’s invasion of South Korea in June 1950, and surprised again when China intervened later that year. Overextended U.S. forces were routed that November and forced to retreat back to the south. Both sides settled into a bloody stalemate.

Bar-Joseph and McDermott explain why some states learn from surprise and others do not. Their basic claim has to do with individual psychology. A toxic brew of narcissism, paranoia, and a powerful need for cognitive closure causes certain leaders to ignore discomfiting information. Some compound the problem by surrounding themselves with sycophants, creating an echo chamber that makes them especially vulnerable to shock. Unable and unwilling to keep an open mind, they cling stubbornly to their own beliefs and expectations, even after information emerges that should cause them to reconsider.

The book’s detailed case studies are powerful, though some are more convincing than others. Stalin’s enormous ego prevented him from registering the signs of a German attack, for example, and his monstrous rule deterred some of his advisors from challenging his views more forcefully. General Douglas MacArthur surrounded himself with acolytes, rewarding their loyalty and brooking no dissent. Warnings about Chinese intervention failed to move him at all.

Although these leaders were unique in some ways, they shared key psychological attributes. These shared traits are the grist for the book’s broader theory. Bar-Joseph and McDermott did not set out to write vivid historical narratives for their own sake, but to show that common psychological traits exist among leaders in different political systems and facing different security challenges. The fact that these cases lead to predictable outcomes forces readers to take the theory seriously.

The theory, however, is incomplete. While their focus is on psychology, Bar-Joseph and McDermott also include other factors like organizational culture and time constraints. It is not clear how all these fit together. The book would benefit from a more direct treatment of what combination of psychological, organizational, and structural variables make success or failure more likely. Such a treatment might also help determine which factors are most important. Is a leader’s narcissism enough to inhibit learning, for example, even when she is supported by an intelligence community with a healthy organizational culture? Is a paranoid leader able to learn as long as she is not a narcissist?

It is also unclear what kind of evidence would falsify the theory. Some events seem to contradict their expectations, forcing the authors to introduce new explanations. Stalin’s paranoia, for example, tells us why he discounted warnings of German preparations: he was sure the British were trying to deceive him. Strangely, however, he was not paranoid about Nazi leader Adolf Hitler, despite the Führer’s record of aggression and conquest. The authors address this puzzle with a separate psychological explanation. Having been abused as a child, Stalin came to identify with the aggressor in order to reduce his anxiety (91). Later, after Hitler’s betrayal, Stalin became receptive to intelligence and deliberate in judgment. This unusual open-mindedness helped him make the right decisions in advance of the defense of Moscow. But why was he not hamstrung by vanity in this case, as he had been in the past? The answer requires yet another factor, the “fear of invalidity.” Having been the victim of premature cognitive closure before the war, Stalin was reluctant to make bad decisions in haste. As a result, he was uncharacteristically objective, at least for a time (117).

These nuanced explanations may be correct, but they require stretching the theory beyond its predictions. Indeed, had Stalin erred again before the Battle of Moscow, the authors could make a strong case that his deep and abiding narcissism was the main cause of failure. They can only explain the actual course of events by introducing new factors. On its own, their theory probably does a better job explaining the counterfactual.

None of this invalidates the theory, of course, but it suggests its limits. Clearer statements of what the theory predicts about each case would help reveal them.

Finally, the cases hint at alternative theories of success and failure. One has to do with a kind of victory fever suffered by all three key individuals before their worst failures. Stalin believed that the Soviet-German nonaggression pact of 1939 was a major diplomatic success, and it fed his opinion of his own strategic judgment. MacArthur was at the height of his powers, having recently led the successful landing at Inchon. And Israeli intelligence chief Eli Zeira, the critical figure in the 1973 case, had previously issued comforting analyses to the government about the unlikelihood of war, encouraging his belief that he was a bulwark against alarmism. Victory fever fueled their narcissism, to be sure, so in a sense this argument complements the psychological explanation in the book. But it does suggest that outside events may be necessary to trigger the extraordinary myopia that leads to surprise.

 

Review by Keren Yarhi-Milo, Princeton University

Intelligence Success and Failure is a rare book in the field of intelligence studies. And it is certainly a book that will become required reading for all courses on intelligence and foreign policy. It is innovative in that it departs in several important ways from the conventional ways in which we study intelligence.

First, the vast majority of studies on intelligence analysis have focused on intelligence failures. While the analytical task of uncovering the causes of intelligence failures is undoubtedly important, it is also highly problematic. Without conducting a similar exercise to unpack the factors that lead to intelligence success, we are essentially selecting on the dependent variable. Intelligence Success and Failure, in contrast, addresses this issue by putting forth a research design in which the outcome of interest varies. Its empirical strategy is a structured analysis of three cases. The first is a study of Soviet leader Joseph Stalin’s effective use of intelligence during the Battle of Moscow, following the ‘Soviet surprise’ of June 1941. The second examines the conditions that led to two subsequent intelligence failures during the Korean War—the June 1950 failure to anticipate the North Korean invasion of South Korea, followed by the failure to warn of a massive Chinese intervention in the fall of 1950. Finally, the third case examines effective learning after intelligence failure in the Yom Kippur War, following Israel’s failure to anticipate the surprise attack of October 1973. The variation present in these cases is not just good social science; understanding when intelligence organizations learn and update within and between crises carries significant policy implications that run far deeper than those we have seen from studies of intelligence failures alone.

A second way Intelligence Success and Failure elevates the scholarship on intelligence analysis is by focusing on cross-national cases. While most scholarship on intelligence failures has traditionally explored historical cases from a single country over time, this book engages in an impressive analysis of intelligence learning in the Soviet Union, Israel, and the United States. The cross-national comparison of six events that vary across regime, time, and geography is meaningful because it allows readers to appreciate the generalizability of the theory, and to move beyond the often oversimplified and possibly overstated distinction between intelligence organizations in democracies and non-democracies.

The third, and perhaps most important, contribution of Intelligence Success and Failure has to do with the innovative theory at its center. Conventional interpretations of intelligence successes and failures have typically focused on situational or organizational/structural conditions and variables. Others have focused on a host of cognitive and unmotivated biases whose importance varies across cases. Bar-Joseph and McDermott, however, argue that conventional explanations still fail to explain historically important cases of estimates of strategic threats. Their individual-level theory posits that the personality features of key individuals shape the learning process after intelligence failures, and thus also shape the quality of subsequent decisions on whether and how to adapt.

They focus on two individual-level psychological factors. The first is the individual’s degree of need for cognitive closure (which captures the degree to which individuals engage in confirmation bias and thus ignore contradictory information); the second is a narcissistic personality that increases sensitivity to criticism and decreases trust in others. They argue that when key individuals are high in the need for cognitive closure and/or are narcissistic, the process of intelligence learning is more likely to be biased, resulting in a failure to adapt adequately after a failure. The strength of this theoretical approach is that it opens up an entirely understudied and useful lens through which we can conceptualize the process of organizational learning and apply it to the study of strategic estimates. This theory not only brings ‘agency’ back into this structure-heavy line of intelligence scholarship, but does so carefully, methodically, and reasonably.

There are many more reasons why every serious scholar of intelligence or crisis decision-making should read Intelligence Success and Failure. The empirical analysis is impressive and well-written, and it offers new evidence and insights on familiar historical cases. In the remainder of this review, I would like to highlight four areas that I believe deserve more attention and reflection.

The first issue concerns the generalizability of the theory. As stated earlier, the cases vary across several important dimensions. But at its heart, Intelligence Success and Failure applies the theory to a particular universe of cases of (effective or ineffective) learning that take place after, or in the shadow of, a strategic failure. All the cases are thus bounded by the fact that leaders and analysts are operating within a war that begins with a traumatic strategic surprise. The book seeks to capture the learning process that takes place subsequent to such failures, and argues that key players’ capacity to learn is crucial to understanding the outcome of that process. This is an important analytical move that allows the authors to show variation on the dependent variable across cases, despite the similarity in initial conditions. But in terms of generalizability, it also raises an important question about whether the theory can explain cases that do not involve an initial strategic failure. Put differently, what role are these psychological tendencies of key individuals likely to play in the absence of a major strategic failure that is understood by everyone as such? Under what set of conditions—outside of crises that begin with a strategic failure—should we continue to expect the individual to play an important causal role in explaining institutional adaptation and learning?

The second issue concerns the choice of variables. The authors focus on a particular type of cognitive bias, as well as a narcissistic personality trait, of key individuals whose estimates led to a subsequent success or failure. While the analysis the book offers is compelling, it is unclear whether there are cultural, organizational, or group dynamics that also critically affect the salience of the biases and personality traits of these individuals. If so, what are those facilitating conditions? In what way do they interact with the individual-level variables the authors highlight? These facilitating conditions are important to recognize not only because they would increase the predictive power of the theory, but because in the current empirical analysis it is unclear whether they are indeed ‘facilitating’ conditions or ‘confounders.’ Some variables of this kind are clearly mentioned in the analysis. For example, in the discussion of the Yom Kippur War, the authors acknowledge that the strategic culture of the Israel Defense Forces (IDF), which glorifies the ability of its officers to rely on personal judgment, played an important role in the learning process. Still, these factors are mentioned on an ad hoc basis, and their theoretical importance could be further clarified.

Relatedly, the theory highlights two particular psychological variables—the need for cognitive closure and narcissistic drives—that provide the majority of the explanation for intelligence failures in the cases examined. Indeed, one of the strengths of this book is that rather than offering a laundry list of potential biases, it zooms in on two specific psychological variables and carefully traces their effect on the dependent variable. It also provides a clear and compelling explanation of how these psychological traits are likely to affect the learning process. But because the literature on psychological biases is so inextricably linked to the study of intelligence, especially in light of the work by prominent scholars such as Richard Heuer, Robert Jervis, and Richard Betts, it would have been useful if the authors had summarized what the empirical evidence in their cases shows about the importance of other types of psychological biases relative to the ones they highlight.[15] I am thinking specifically of the role of motivated biases—such as false optimism, defensive avoidance, wishful thinking, and mirror imaging—which seem to creep into the analysis of nearly every case in this book. One could imagine how these motivated biases could have affected the learning process of these individuals, and thus could serve as an alternative explanation for why they interpreted the evidence in the manner they did. To be fair to the authors, a systematic analysis of the role of prominent motivated biases would have been too much to squeeze into one book. But it is useful, nonetheless, to know what the authors think about the causal role of these alternative psychological variables, especially given their salience in the field.

Finally, Intelligence Success and Failure taps into a larger body of work in international relations that seeks to show the powerful role of individuals in shaping foreign policy. As such, the theory and empirical evidence serve as a powerful reminder to scholars that even in the realm of intelligence analysis—where analysts are rarely if ever the center of attention, as they are believed to be significantly constrained by the culture or organization of the larger intelligence community—individuals can play a critical role in steering an entire apparatus of professionals toward subsequent success or failure. This book deserves a great deal of credit for convincingly showing how those individuals matter.

A closer look at the analysis, however, shows that these powerful individuals varied significantly across the cases in their positions and roles. In one case, it is a head of state, Joseph Stalin; in another, it is Charles Willoughby, General Douglas MacArthur’s G-2, or chief of intelligence, during the Korean War; and in the third, it is the head of the IDF’s intelligence branch, Eli Zeira. On the one hand, the fact that these individuals are claimed to have played a significant causal role despite occupying different positions is reassuring: it is not ‘where they sit’ that made them prone to a particular bias or personality trait, or that gave them power over the outcome. At the same time, given this variation and the absence of meaningful guidance in the book on this point, it is harder to identify a priori who these ‘central actors’ or ‘key individuals’ are, and thus why their personal proclivities are likely to play a decisive role in the learning process.

The points I raise are all possible extensions to an otherwise truly remarkable book that offers great theoretical and empirical insights. Intelligence Success and Failure significantly advances our understanding of crucial historical events whose lessons continue to shape how we study and practice intelligence assessment.

 

Author’s Response by Uri Bar-Joseph, University of Haifa (Emeritus)

I would like to begin by expressing my gratitude to Thomas Maddux and to the editors of H-Diplo/ISSF for making this roundtable happen. I am also deeply grateful to the group of highly respected scholars who reviewed our book. Rose McDermott will respond to their incisive commentary on behalf of us both.

The idea for this book was born when Rose McDermott, following talks with Robert Jervis, came up with the idea of studying intelligence successes rather than the more banal topic of intelligence failures. My experience in the field of intelligence failures convinced me that this was an excellent idea. By that stage I had also reached the conclusion that the general pathologies which are usually used to explain warning failures—be they judgmental errors, bureaucratic politics, or the difficulties inherent in the task of forecasting an opponent’s behavior—are, in some cases, insufficient. More attention, I have come to feel, should be devoted to the psychological traits of key persons in the warning-response process, and to how these traits obstruct their ability to view reality objectively. The result of our back and forth on these issues is Intelligence Success and Failure, which focuses on the individual psychology of key intelligence officers and military and civilian policymakers to explain successes and failures in the warning-response process.

The events that have taken place since the book went into production have confirmed the importance of studying strategic surprise in the age of the global war on terrorism, in which religious fanaticism and sophisticated intelligence and military technologies seem to dominate the battlefield. The Russian hacking of the 2016 presidential election—an event that Thomas Friedman has compared to Pearl Harbor—showed that even when large-scale conventional surprise attacks disappear from the field of human conflict, strategic surprise attacks do not. As in the past, national failure or success in effectively meeting this challenge remains critical for the state’s wellbeing, even when tanks and generals are replaced by computers and hackers. Hence the justification for further study of the sources of these important events.

The book’s policy-oriented conclusions (see 241-242) regarding the limits of intelligence collection by technical means and the continuing value of Humint have likewise been confirmed by recent events. The revelation in March 2018 that it was Israel that had destroyed the nuclear reactor near Deir ez-Zor in Syria in September 2007 also exposed the fact that both Israeli and American intelligence services had failed to identify the construction of the site by North Korea. A former head of the Mossad called it a failure on the magnitude of the Israeli fiasco of 1973. For the Americans, meanwhile, it was another reminder, alongside the Iraqi Weapons of Mass Destruction failure, of the problems involved in overreliance on intelligence gathering by technical means. A well-placed human source in Pyongyang or Damascus (or in Russian hacking circles, for that matter) would probably have done more good, in these cases, than the billion-dollar systems that are currently entrusted with addressing these threats.

Finally, the book’s assumption that individuals matter, that psychological factors are often critical in warning-response failures, has likewise been vindicated. This emphasis (to be distinguished from the cognitive-bias theory first introduced to IR theory by Robert Jervis)[16] is one of the book’s novel contributions to the study of strategic surprises. President Donald Trump’s election and his presidential performance so far have served to highlight its relevance. Trump’s malignant narcissism, his proneness to conspiratorial thinking (reminiscent of that of Soviet leader Joseph Stalin and American General Douglas MacArthur), and his high need for cognitive closure seem more relevant to the fate of global politics than traditional factors such as the American-Russian-Chinese balance of military power or GNP. In this sense, as Erik Dahl points out at the end of his review, certain elements of our case studies might be relevant to today’s politics as well.

Since Rose McDermott’s response addresses the main comments made by the reviewers, I would like to use this opportunity to emphasize a few other issues that did not receive, I think, sufficient attention in the reviews.

The first involves the disproportionate impact of the case of Pearl Harbor on the development of the theory of strategic surprise. Roberta Wohlstetter’s seminal study was, indeed, a breakthrough when it came out in 1962.[17] Her main conclusions—that the catastrophe was the product neither of insufficient information nor of intentional acts by the persons involved in the warning-response process, but of an unfavorable signal-to-noise ratio—were foundational for the field for many years. But her theoretical thesis and conclusions have since been challenged by studies showing that in other major cases of strategic surprise, the signal-to-noise ratio was a far less relevant factor than Wohlstetter claimed. After all, if two persons (say, Joseph Stalin and General Georgy Zhukov in 1941; or General Charles Willoughby and Colonel James Polk in 1950; or General Eli Zeira and Colonel Yoel Ben-Porat in 1973) who faced similar signal-to-noise ratios reached very different conclusions, the variance cannot be explained by Wohlstetter’s ratio but only by the way each of the people in question processed the available signals and noise.

This conclusion leads to an additional issue: the disproportionate impact of cases from the American repertoire of warning failures on the shaping of theory in the field. We do not yet have a strong theory of the causes of the American failure to meet the challenge of the Russian hacking of the 2016 presidential election. Nevertheless, the available information suggests that the failure (much as happened at Pearl Harbor and on 9/11) was due to a lack of imagination and an inability to ‘connect the dots.’ If this assumption proves valid, it will strengthen the impression that American warning-response failures result in large measure from factors unique to American culture, and do not reflect a universal pattern. If this is so, then establishing a strong theory of intelligence failure calls for a comparative approach, drawing on a broader array of case studies, such as those offered by Soviet, Indian, Egyptian, Israeli, Iraqi, British, and Iranian histories. Here, the contribution of non-American scholars fluent in the languages, cultures, and histories of their native nations is vital.

The reviewers have reacted positively to our study’s novel emphasis on warning-response successes (as opposed to the traditional focus on failures). Less attention was paid, however, to the major distinction that we draw in the theoretical introduction between surprise attacks that start wars and those that take place after armed conflict is underway. In all the war-opening cases the victim was unprepared for the attack when it was launched, while in many wartime cases the victim was ready for the offensive when it came; we therefore believe that more studies are needed to explain the variance in this dependent variable. Again, since so many cases in these two sub-categories of events are not American or British, scholars from other countries can take the lead in exploring cases from their own nations’ histories.

Lastly, as Keren Yarhi-Milo and a few of the other reviewers have noted, our argument regarding the role of cognitive bias has benefitted greatly from the work of cognitive psychologists such as Amos Tversky and Daniel Kahneman[18] (we were anticipated in this by Robert Jervis, Richard Betts, and Richard Heuer).[19] The work of Arie Kruglanski and his colleagues,[20] who link psychological traits—primarily the need for cognitive closure—to the tendency to err in judgmental tasks, likewise helped us to provide more specific explanations for the failures we discuss, by linking the personality of key persons in the warning-response process to their tendency to fall into the trap of cognitive biases. We discussed this issue in an earlier study that focused on ways of diminishing intelligence-analysis mistakes.[21] Following our work on Intelligence Success and Failure, we believe that in many cases this type of explanation may be highly useful for explaining additional failures in the warning-response process.

 

Author’s Response by Rose McDermott, Brown University

I too would like to begin by thanking Tom Maddux for organizing this roundtable discussion and the distinguished reviewers for their careful, thoughtful, and generous comments.

Uri Bar-Joseph is too kind in attributing the idea of exploring successes to me; indeed, as he indicated, the idea really came from Robert Jervis, who, during a panel at the American Political Science Association (APSA) conference, pointed out the challenges associated with throwing the baby out with the bathwater every time we make major changes in intelligence infrastructure following a failure.

The reviewers collectively point to much of what we hoped was innovative in this work regarding the identification of personality factors in intelligence success and failure but also note some meaningful limitations, as well as directions that future work might profitably develop.

Erik Dahl notes that much of what we discuss in terms of narcissism is deeply reminiscent of President Donald Trump. Indeed, we wrote the book before Trump was even a viable presidential candidate, much less elected president. And yet the pathology clearly resonates beyond the historical cases we examine. Dahl wonders whether the pathologies we note are common. Using the classification of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), lifetime prevalence of narcissistic personality disorder runs about 6.2%, with much higher rates for men (7.7%) than women (4.8%).[22] In addition, the disorder often co-occurs with other psychiatric conditions, primarily in men. While this already represents a high level of occurrence in the general population, Post argues that narcissism is more prevalent still in political circles, precisely because political careers disproportionately attract those who seek the spotlight and desire to be the center of attention.[23] Narcissistic individuals naturally gravitate toward the political arena. Dahl’s insight regarding public accountability thus becomes central to our understanding of how to mitigate the impact of such individuals on important decisions: the public needs to take responsibility for voting against people who demonstrate such tendencies. After all, by definition, narcissists do not hide their predilections.

Genevieve Lester begins by noting the centrality of the Pearl Harbor and 9/11 cases not only to American intelligence analysis, but also to the broader field of intelligence scholarship. We tried, both in the book and in our article on Pearl Harbor and Midway in Intelligence and National Security,[24] to distinguish between cases of seemingly ‘normal’ surprises caused by insufficient information and those that resulted from human failings, such as those we identify in Barbarossa with Soviet leader Joseph Stalin, with General Douglas MacArthur regarding Chinese intervention in the Korean conflict, or with Israeli Director of Military Intelligence Eli Zeira in the Yom Kippur War. As Lester notes, it is not that these other factors do not matter, but rather that the human factor we focus on in this book has been relatively neglected in previous work on intelligence failure.

Keren Yarhi-Milo astutely notes that focusing on the human factor actually allows for more hope and possibility for positive intervention than pointing to larger structural or organizational factors, which may be less susceptible to rapid replacement or change. Indeed, one of the ways that such change can take place is if key decision makers are attentive to the problems introduced by narcissistic or close-minded officials, and if the public remains attentive to such tendencies in elected officials. Like Jeffrey Friedman, she correctly asks whether and how our model might apply to cases where failure did not occur first. In this way, both she and Friedman identify an important limitation in the design of the study, and one that would be amenable to future work investigating the scope conditions under which learning can occur without initial failure. In another context, Joshua Rovner supplies what is, in my opinion, likely the most productive avenue for illuminating these circumstances when he writes that victory fever “does suggest that outside events may be necessary to trigger the extraordinary myopia that leads to surprise.”

It may be that certain events, short of failure, can nonetheless operate as situational triggers which activate a particular kind of learning, mitigating the prospects for failure while optimizing the chances for success. This might take several different forms. Lower-level damage might be caught by those who are particularly hyper-vigilant or tasked with surveillance for early warning. For example, early evidence of invasive hacking of financial or corporate information and other cyber exploitation might have served as a useful warning, short of massive election fraud or infrastructure damage, that greater attention should have been paid to the defense of civilian targets. In many cases, however, I suspect that emotional responses to particular events or individuals may serve as the first, and often most accurate, indicator of malfeasance or malfunction of some kind. Some people may call it intuition, or others may say that a person or environment does not ‘feel right,’ but these indicators really reflect deep pattern recognition of a constellation of events, experiences, behaviors, and individuals that have been meaningfully associated with things going wrong in the past. Such associations may happen below conscious awareness, but they are largely effective and accurate ways of responding to a complicated world. Antonio Damasio’s work detailing his somatic marker hypothesis presents the best and most comprehensive evidence in support of this argument.[25]

Friedman discusses how the personality factors we focus on provide a useful complement to the earlier important work emphasizing heuristics. And indeed, that was one of our major goals in writing the book. Much of the previous literature in political science has presented a clear contrast between unmotivated biases, such as those most prominently represented by the work on judgmental heuristics, and so-called motivated ones, including work on wishful thinking and other personality factors. While this may have been a useful analytic strategy, it does not constitute an accurate reflection of how the human mind operates. These processes are inextricably intertwined in the human brain, and do not operate in distinct or independent ways. By pairing notions from the more traditionally ‘cognitive’ side, such as cognitive closure, with analysis emerging from a more clinical, motivated tradition, such as narcissism, we hoped to demonstrate how motivated and unmotivated factors work in constant iterative interaction to produce biases in both judgment and decision-making. And I would certainly agree with Friedman that there is likely an important, and discernible, interaction between personality type and experience. In particular, as we know from genetic work, one of the most important influences of genetic tendencies on behavior has to do with the environments that individuals self-select into: shy people do not go out of their way to attend parties, and risk takers seek out dangerous adventures. This affects not only the experiences that people are likely to have, which influence the person in turn, but also the kinds of people they like to interact with. This is important because of the point that Friedman raises about leaders judging other people’s intentions. Individuals’ notions of the characteristics they like in others (i.e., strength, kindness, etc.) are influenced not only by their own temperament but also by the exposure they have had to others. Bullies may not find belligerent behavior unusual because they bring it with them everywhere they go, but more mild-mannered sorts may find it more unusual and offensive, and judge its implications in a different light. So, for example, Stalin was not as put off by German dictator Adolf Hitler’s belligerence as he was by what he assumed to be British Prime Minister Winston Churchill’s duplicitousness. While he was wrong in his assessment of both, it is likely that this judgmental error was rooted, at least in part, in his experience of himself.

Each of the reviews pointed to important limitations in our work that, fortunately, are largely amenable to future research that can further explore the scope conditions of our model. Yarhi-Milo in particular is quite correct in noting the challenges associated with weighting the importance of the factors we identify relative to those raised by previous authors and analyses. Of course, there is likely no one right answer, and many of these factors work in combination both with other internal experiential and intellectual factors and with external forces, including those supplied by other people. But for me, the big surprise of this study, and of our findings, was how endemic, powerful, and pervasive narcissism was across time and space, and how deeply its presence compromised any kind of rational or reasonable decision-making. Remaining vigilant and aware of its profoundly negative consequences should be a prime concern for the public and for unaffected decision makers. This insight does not portend well for our current situation.

 

Notes

[1] Robert Jervis, Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War (Ithaca: Cornell University Press, 2010), 19-20.

[2] Erik Dahl, Intelligence and Surprise Attack: Failure and Success from Pearl Harbor to 9/11 and Beyond (Washington, D.C.: Georgetown University Press, 2013).

[3] Uri Bar-Joseph, The Watchman Fell Asleep: The Surprise of Yom Kippur and Its Sources (Albany: State University of New York Press, 2005).

[4] Sechser notes that several scholars have made the argument that once war starts, military officers tend to favor escalation more than do civilians. Todd S. Sechser, “Are Soldiers Less War-Prone than Statesmen?” Journal of Conflict Resolution 48:5 (October 2004), 747, note 1.

[5] For example, see Paul Krugman, “Trump’s Deadly Narcissism,” The New York Times, 29 September 2017, https://www.nytimes.com/2017/09/29/opinion/trumps-deadly-narcissism.html?_r=0. For a review of several new books that examine Trump’s personality and psychology, see Carlos Lozada, “Is Trump mentally ill? Or is America? Psychiatrists weigh in,” The Washington Post, 22 September 2017, https://www.washingtonpost.com/news/book-party/wp/2017/09/22/is-trump-mentally-ill-or-is-america-psychiatrists-weigh-in/.

[6] Roberta Wohlstetter, Pearl Harbor: Warning and Decision (Stanford: Stanford University Press, 1962), 397.

[7] Uri Bar-Joseph and Rose McDermott, “Change the Analysts and Not the System: A Different Approach to Intelligence Reform,” Foreign Policy Analysis 4 (2008): 127-145.

[8] See, for example, Uri Bar-Joseph, The Watchman Fell Asleep: The Surprise of Yom Kippur and Its Sources (Albany: State University of New York Press, 2005); Bar-Joseph, “The ‘Special Means of Collection’: The Missing Link in the Surprise of the Yom Kippur War,” Middle East Journal 67 (2013): 521-546; and Bar-Joseph, “A Question of Loyalty: Ashraf Marwan and Israel’s Intelligence Fiasco in the Yom Kippur War,” Intelligence and National Security 30 (2015): 667-685.

[9] Elizabeth Saunders, “No Substitute for Experience: Presidents, Advisers, and Information in Group Decision Making,” International Organization 71:S1 (2017): S219-S247.

[10] Rose McDermott and Uri Bar-Joseph, “Pearl Harbor and Midway: The Decisive Influence of Two Men on the Outcomes,” Intelligence and National Security 31 (2016): 949-962.

[11] Roberta Wohlstetter, Pearl Harbor: Warning and Decision (Stanford: Stanford University Press, 1962).

[12] See Thomas H. Kean and Lee Hamilton, The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks Upon the United States (Washington, D.C.: National Commission on Terrorist Attacks upon the United States, 2004) for further detail on the “failure to connect the dots” as well as the adjacent “failure of imagination” used to critique the performance of the national security agencies.

[13] An excellent example is Philip H. J. Davies and Kristian C. Gustafson, eds., Intelligence Elsewhere: Spies and Espionage Outside the Anglosphere (Washington, D.C.: Georgetown University Press, 2013).

[14] See especially Thomas Juneau, ed., Strategic Analysis in Support of International Policy Making: Case Studies in Achieving Analytical Relevance (Lanham: Rowman & Littlefield, 2017); and Erik J. Dahl, Intelligence and Surprise Attack: Failure and Success from Pearl Harbor to 9/11 and Beyond (Washington, D.C.: Georgetown University Press, 2013).

[15] Richards J. Heuer, Jr., Psychology of Intelligence Analysis (Washington, D.C.: Center for the Study of Intelligence, Central Intelligence Agency, 1999); Robert Jervis, Why Intelligence Fails (Ithaca: Cornell University Press, 2010); Richard Betts, Enemies of Intelligence (New York: Columbia University Press, 2007).

[16] Robert Jervis, Perception and Misperception in International Politics (Princeton: Princeton University Press, 1976).

[17] Roberta Wohlstetter, Pearl Harbor: Warning and Decision (Stanford: Stanford University Press, 1962).

[18] Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science 185:4157 (September 1974): 1124-1131.

[19] Robert Jervis, Perception and Misperception in International Politics (Princeton: Princeton University Press, 1976); Richards J. Heuer, Jr., Psychology of Intelligence Analysis (Washington, D.C.: Center for the Study of Intelligence, Central Intelligence Agency, 1999); Richard Betts, Enemies of Intelligence (New York: Columbia University Press, 2007).

[20] Arie W. Kruglanski and Donna M. Webster, “Motivated Closing of the Mind: ‘Seizing’ and ‘Freezing,’” Psychological Review 103:2 (1996): 263-283.

[21] Uri Bar-Joseph and Rose McDermott, “Change the Analyst and Not the System: A Different Approach to Intelligence Reform,” Foreign Policy Analysis 4:2 (April 2008): 127-145.

[22] F. S. Stinson, D. A. Dawson, R. B. Goldstein, S. P. Chou, B. Huang, S. M. Smith, and B. F. Grant, “Prevalence, Correlates, Disability, and Comorbidity of DSM-IV Narcissistic Personality Disorder: Results from the Wave 2 National Epidemiologic Survey on Alcohol and Related Conditions,” The Journal of Clinical Psychiatry 69:7 (2008), 1033.

[23] Jerrold M. Post, Narcissism and Politics: Dreams of Glory (Cambridge: Cambridge University Press, 2014).

[24] Rose McDermott and Uri Bar-Joseph, “Pearl Harbor and Midway: The Decisive Influence of Two Men on the Outcomes,” Intelligence and National Security 31:7 (2016): 949-962.

[25] Antonio Damasio, Descartes’ Error: Emotion, Reason, and the Human Brain (New York: Random House, 1994).