
Investigating 3.11: Disaster and the Politics of Expert Inquiry

Scott Knowles
Drexel University
sgk23@drexel.edu

DRAFT: PLEASE DO NOT CITE WITHOUT AUTHOR’S PERMISSION

PLEASE NOTE: This abridged version of the paper does not include the full Fukushima and World Trade Center disaster investigation case studies—those may be found in a longer version of the paper available upon request.

Abstract

Drawing on the emerging “disaster STS” synthesis, this paper reviews the key foundational concepts of disaster STS as they apply to the study of post-disaster technical investigations.  The paper suggests areas for further research in the form of seven trends critical to analyzing more recent disaster investigations.  These seven trends reflect emerging currents in disaster STS scholarship, and/or opportunities for historical revision: 1) a crisis in assessing regulatory effectiveness amidst the trend towards deregulation, 2) the “discovery” of vulnerable populations, 3) the struggle over defining the appropriate and authoritative investigative body, 4) the widespread use of the Internet and social media as tools of citizen dissent, 5) the rise of “sustainability” as an organizing principle for technological change, 6) the struggle over risk modeling as a method applicable to risk regulation, 7) the struggle over defining the “dominant” disaster in multi-causal disaster episodes.  The paper elaborates on these themes with comparative case studies of the Fukushima investigations and the World Trade Center investigations.

In observing patterns of risk-taking we see the values of a society.  Decision-making processes—some open and democratic, some sealed behind the walls of technical and corporate privilege—lead to the toleration of some risks and not others, for some citizens and not others.  And, from these deliberations flow the artifacts of risk governance: land use laws, building codes, protective infrastructures, inspection protocols for technological systems, pollutant control regimes, toxic exposure guidelines, emergency management handbooks.  These artifacts and the politics they represent are generally so esoteric, so secretively managed, that the power relations inherent in risk-taking recede from view.  In the human development of risky ecologies (cities in flood plains, nuclear power plants on fault lines) we chart the evolution of modern risk as a seemingly natural, inevitable, expertly managed function of industrialized life.

Disasters bring the formlessness of risk calculations into shape, in the faces of victims, or the wreckage from a hurricane.  Disasters also bring risks and their managers back into public view and under scrutiny, at least for a time.  The rapid and confusing flow of events in the midst of a disaster makes it a difficult moment for the review of a particular risk and its history.  As such, formal disaster investigations have long stood as the venues through which chronology and causality are established, and blame allocated, after a disaster.  Investigation is a normal outgrowth of the very techno-scientific mode of thinking that brings high-risk technological systems like nuclear power plants into existence in the first place.  Without an investigation the system that fails cannot be redesigned and restarted.  The earthquake-resistant building codes, the levees, the back-up generators—none can be restored to normalcy, to profitability, without the formal study and closure that investigation provides.

Drawing on the emerging “disaster STS” synthesis, this paper reviews the key foundational concepts of disaster STS as they apply to the study of post-disaster technical investigations.[i]  The paper suggests several areas for further research in the form of seven trends critical to analyzing more recent disaster investigations.  These seven trends reflect emerging currents in disaster STS scholarship, and/or opportunities for historical revision: 1) a crisis in assessing regulatory effectiveness amidst the trend towards deregulation, 2) the “discovery” of vulnerable populations, 3) the struggle over defining the appropriate and authoritative investigative body, 4) the widespread use of the Internet and social media as tools of citizen dissent, 5) the rise of “sustainability” as an organizing principle for technological change, 6) the struggle over risk modeling as a method applicable to risk regulation, 7) the struggle over defining the “dominant” disaster in multi-causal disaster episodes.  The paper elaborates on these themes with comparative case studies of the Fukushima investigations and the World Trade Center investigations.

Investigations

What characteristics do disaster investigations share in common?  Starting with the idea that they are summoned to find the facts of a disaster, we must also be willing to accept that disaster investigators and their facts are embedded in society, and embedded in multiple overlapping societies at once.  It makes sense to start by doing some lining up of interests, asking: who wants the investigation and what do they want from it?  An investigation may at one and the same time satisfy a professional organization but not a regulatory agency—a legislature but not an executive—one interested group of citizens, but not another.

An interdisciplinary, disaster-focused STS scholarship has emerged from a generation (and more in some instances) of empirical studies in anthropology, geography, history, policy studies, psychology, and sociology.  STS researchers will recognize that the study of disaster investigations fits squarely within the “risk society” theoretical tradition established by Ulrich Beck.  To Beck, we have entered a time when industrialized nations are spending as much time and effort managing risks and disasters as they used to spend managing the creation of cities and factories.  For Beck the relationship is causal, risks today “are risks of modernization.  They are a wholesale product of industrialization, and are systematically intensified as it becomes global.”[ii]

Charles Perrow’s “normal accidents” organizational theory also serves as a touchstone.  Countering the commonplace notion that risk modeling and expert management are the keys to avoiding disasters, Perrow describes a society marked by “normal accidents,” the kind of technological failures that result in massive system disruptions like blackouts or nuclear power plant failures.  A technological system is not just machinery—it involves the operators (and their level of competency) and regulatory agencies (and their fastidiousness).  “Small failures go on continuously,” Perrow notes, “none of them [are] devastating in themselves in isolation, [but they] come together in unexpected ways and defeat the safety devices—the definition of a ‘normal accident’ . . . If the accident brings down a significant part of the system, and the system has catastrophic potential, we will have a catastrophe.”[iii]  Such catastrophes are not historically very exceptional, though governments and industrial corporations tend to treat them with surprise, segregating disasters from the more upbeat “progress” narratives of technological modernity.

The study of disaster investigations also returns us to some of the foundational epistemological issues that STS researchers seek to reveal in the “fact-based” realm of techno-science.  Facts are good, but facts (when did the reactor melt down, what was in the safety manual?) will never overwhelm the more irrational, contingent, and highly contextual demands of a society to have what its most powerful actors deem appropriate, or what (sometimes) a politically-savvy interest group demands.  As Sheila Jasanoff argues: “What we know about the world is intimately linked to our sense of what we can do about it, as well as to the felt legitimacy of specific actors, instruments and courses of action.”[iv]  With this caution, we might reflect that disaster investigations are themselves conditioned by the very forces that created the risk and enabled the disaster in the first place.

Writing in the aftermath of Hurricane Katrina, Stephen Hilgartner beautifully summarized the contributions of STS research to the analysis of disaster investigations, identifying a set of recurrent themes that I have condensed, added to a bit, and reformulated into three main propositions.[v]

1) Disasters are not “natural,” and they are not aberrant.  In industrialized societies disasters are “normal,” the by-products of the forces of modernization, particularly urbanization, industrialization, the breakdown of a division between “natural” and “human” ecologies, and the creation of complex technological systems.[vi]

2) Political legitimacy in the modern state relies in no small part on the successful management of high-risk technological systems.  The state cannot function effectively without a risk and disaster-minded bureaucracy.  Likewise, after a disaster occurs, legitimacy relies on relieving victims, fostering recovery, and restoring public faith in the ability of government and industry to anticipate and prevent disaster recurrences.[vii]

3) Disaster investigations aspire to soothe public fears and restore faith in experts.  Yet they may also reveal negligence that opens the door to sustained critiques of corporate, regulatory, and/or governmental leadership.  With so much on the line, disaster investigations may result in multiple parties shifting blame onto one another, with associated efforts to limit the power of investigators, thwart their work, and distort the evidence necessary to draw conclusions.[viii]

These three broad findings represent the foundation upon which any critical analysis of modern disaster investigations should be grounded.  However, new disasters reveal developing historical trends, and they force the revision of established theories in light of new evidence.

New Directions

Recent disasters—and by extension recent disaster investigations—have opened areas for further research that add depth and complexity to the three themes discussed above.  The following seven trends are observable in more recent disaster investigations, particularly the World Trade Center collapse investigations and the Fukushima investigations.

1) A crisis of assessing regulatory effectiveness amidst the trend towards deregulation.

One of the major global economic trends of the past two decades has been the aggressive deregulation of public utilities like transportation and energy.  Deregulation is one of a package of neoliberal tools that has opened the way towards global investments in industries that were previously owned and/or tightly regulated by government.  Because competition rewards cost-cutting, deregulated energy companies have frequently succeeded in driving utility prices down, and they have opened up markets where none existed previously.  Deregulation has also been lauded as a means to trim government waste and prevent corruption.  Analysts disagree sharply, though, about the overall impacts of deregulation on safety.  Recent disaster investigations have thus become forums for arguments for and against deregulation, particularly with regard to the safety question.

With less and less faith left in the traditional model of state regulation, policymakers are left scrambling to suggest reforms that can neutralize public safety concerns while continuing larger trends towards global investment, privatization, and shrinking the regulatory state.  Japan is one of the few industrialized nations yet to undertake a deregulation of its nuclear power sector.  Criticism of the relationship between the energy industry and Japanese regulatory bodies became in some ways the central narrative of the most extensive official investigation into the Fukushima disaster, that of the Fukushima Nuclear Accident Independent Investigation Commission (NAIIC).  The NAIIC report closely details the failures of Japan’s Nuclear and Industrial Safety Agency (NISA) and Nuclear Safety Commission (NSC) to police the Tokyo Electric Power Company (TEPCO), the nation’s largest electricity provider.  It is presently unclear whether the NAIIC report will prove influential in moving Japan towards energy deregulation.  NAIIC’s aggressive critiques did, however, spark administrative reform and the creation of a new agency, the Nuclear Regulation Authority, with greater independence from industry than its predecessor agencies.

The regulation/deregulation debate in the United States made its way into the World Trade Center collapse investigations, albeit from a different perspective.  In the U.S. there is no centralized regulation of building codes and building safety—such regulation occurs at the state and local level, guided by a “consensus” system involving private and non-profit groups representing fire safety, construction, and engineering.  In the midst of the World Trade Center investigation this absence of centralized regulation was a matter of intense discussion and critique, with one outcome being government funding for additional building safety investigation and research at the National Institute of Standards and Technology (NIST, a federal agency).  No further moves towards governmental regulation of high-rise construction resulted.[ix]

2) The “discovery” of vulnerable populations.

Disasters do not affect populations equally; disaster losses (human and material) reflect the underlying social stratification of a society.  Almost invariably marginalized groups live in more risk-prone geographies, and by definition they have fewer resources with which to confront loss: less money and credit, lack of professional networks or access to political power.  Disaster researchers have built these findings into a “vulnerability paradigm” with tremendous value towards understanding how the individual experience of a disaster can be radically different across a society, even one as rich as the United States or Japan.

The Fukushima investigations were particularly concerned with evacuees and their hardships, especially those who were evacuated from hospitals—the mental health implications of evacuation were also taken into account.  The specter of radiation exposure (especially for children) and of a possible Tokyo evacuation also put a fine point on the potential human catastrophe of evacuating the world’s largest city.  Disproportionate risks borne by nuclear workers were central to the investigations, and to critiques of the investigations coming from both within and outside of Japan.

Likewise, the World Trade Center collapse investigations looked closely at the situation of emergency personnel (fire, police) as they responded to the disaster.  Additional investigations chronicled the failures of public health and occupational safety authorities to ensure proper egress from the Twin Towers, and to safeguard the health of relief workers and lower Manhattan residents.  The concern over vulnerable populations articulated in recent disaster investigations points to the possibility of disaster policies that address pre-disaster mitigation specifically in low socioeconomic status communities.  This would mark a strong departure from the more technocratic outcomes of previous disaster investigations.[x]

3) The struggle over defining the appropriate and authoritative investigative body.

Going back to the nineteenth century in the United States, one finds confusion over the appropriate body of experts to deliver opinion and assign blame in the aftermath of a disaster.  Such squabbles are particularly pointed in periods of broader government reform, such as the Progressive Era or the 1960s in the United States (both periods saw uncertainty in the disaster investigation process).  Clarity on this issue is more readily available when court cases are involved, given the more clearly drawn lines of authority in criminal proceedings.  Many large-scale disasters are not litigated in courtrooms, however.

Most of the disasters that merit large-scale investigation run up against government oversight responsibilities, and as such government-backed investigations are generally deemed authoritative in terms of assessing blame and launching policy reforms.  But in countries with multi-layered governments and with dispersed regulatory functions, the authoritative body within the government is frequently difficult to ascertain, and multiple investigations are common.  Outside of government, relevant professional societies and scientific bodies often conduct investigations in order to grow their knowledge base and to protect their technical and ethical reputations.  The press also frequently finds itself in the role of conducting in-depth investigations, under the aegis of impartiality and willingness to criticize government and the private sector equally.  Internal investigations by insurance companies and by private companies party to litigation certainly take place, but such records are rarely available to the public.  Overall, the struggle to define the dominant investigator is itself a key feature of the disaster investigation enterprise.  In Japan at least four major investigations have been mounted; in the United States three investigations followed September 11—scores of smaller investigations have arisen from each disaster.[xi]

4) The widespread use of the Internet and social media as tools of citizen dissent.

The availability of information made possible by the Internet and social media marks a break with tradition in disaster investigations.  First, it opens the way to further contestation of the authority of experts, and it has tended to move expert investigative bodies towards the public release of more and more documentary evidence that previously might have been withheld.  Both the Fukushima and World Trade Center investigations released unprecedented quantities of documents, including staff reports, technical reports, images, and interviews—both also saw their hearings posted as online video.

Citizen victim-support and advocacy groups have also been able to form not only nationally (as they traditionally have), but also internationally, and far more rapidly than before the availability of new media.  These networks were critical to the formation of political influence among September 11 family groups—and were in part responsible for pushing forward disaster investigations when government leaders were slow to do so.  An additional trend: new media like blogs, wikis, and Twitter have made it possible for disaster victims and emergency personnel to post information in real time, as the disaster is unfolding.  In this sense they create communities of “expertise” that parallel official expert bodies, and they generate evidence that disaster investigators may use in their more formal inquiries.  Given the unsettled legal status of such evidence, and of the ownership and control of digital communications, this is clearly a fascinating area of study, and one still very much in flux.[xii]

5) The rise of “sustainability” as an organizing principle for technological change.

A major trend evident in both the Fukushima and World Trade Center investigations (and the Hurricane Katrina investigations, too) is a critique of previous generations of technological artifacts as unsustainable.  The question of whether supertall skyscrapers or nuclear power plants are necessary or desirable technologies, given recent disasters, has been commonly voiced in the midst of investigations.  The general post-disaster trend, however, has been towards deepening commitments to a given regime of socio-technical power.  For example, engineering experts investigating disasters have recently tended to criticize the lack of sustainability (both in terms of environmental impact and cost) in the infrastructures they inherited from previous generations.  Their recommendations have primarily entailed new commitments to better, more sustainable designs and engineered fixes to get past the failures of older designs.  Regulators have tended to look towards enhanced regulatory protocols.

However, in both the Fukushima and World Trade Center cases, calls for the abolition of risky technologies (particularly those that have proven repeatedly dangerous) have been voiced.  The most remarkable outcome of the Fukushima investigations was a vibrant debate within Japan over whether or not to abolish or phase out nuclear power altogether.  Other countries took the Fukushima debates into their own political cultures, and in some instances (Germany) have committed to nuclear abolition in the name of sustainability and safety.  The Fukushima case is complicated by the fact that many advocates of “green energy” point to nuclear power as a key solution to carbon emissions and climate change.

6) The struggle over risk modeling as a method applicable to risk regulation.

Do the models used by risk analysts correspond to reality?  This question sits at the heart of the Fukushima and World Trade Center investigations.  In complex technological systems it is impossible to model risks through direct experience.  As such, operators of high-risk systems are mandated by regulators and insurers to perform and document regular risk assessments.  These assessments are complex pictures of a reality where uncertainties must be turned into testing protocols and measurable quantities.  Investigators found much to dislike in the risk modeling done by TEPCO in Japan, and by the structural engineers who designed and built the Twin Towers.

A gap between toxic realities and estimated exposure to risk is condoned by regulatory practice in agencies that lack the resources or power to demand more patient, creative, and realistic research.  Scott Frickel analyzes this problem in his studies of the post-Katrina environmental politics of New Orleans.[xiii]  He follows the trail as fears of a “toxic gumbo” swamping New Orleans were raised in the immediate post-disaster period.  Environmental Protection Agency (EPA) and LDEQ (Louisiana Department of Environmental Quality) officials dutifully tested and declared the “gumbo” to be not as harmful as people had worried, after all.  Frickel takes issue with that conclusion, and in doing so introduces a provocative new concept, “organized ignorance.”  Frickel explains:

The tests the EPA and LDEQ have conducted are based on the compartmentalization of ecosystems into discrete media (e.g., air, soil, and water).  These testing regimes, in turn, correspond to media-specific disciplines (e.g., aquatic toxicology), regulatory bureaucracies (e.g., LDEQ’s Water Quality Assessment Division), and federal regulatory frameworks (e.g., Clean Water Act), each of which develops understandings of environmental contamination in ways that stand at some odds to ecological reality.  In short, we have organized knowledge in ways that ensure we will not really know what is happening in the ecosystems we study.  This is [a] . . . form of organized ignorance.[xiv]

The “organized ignorance” concept helps us understand why after September 11 the EPA ruled air and water quality in lower Manhattan to be good enough to reopen Wall Street and let residents return to their homes almost immediately after the World Trade Center collapse.  EPA Administrator Christine Todd Whitman eventually was forced to resign when it became clear that EPA officials had created ad hoc standards where none existed for the toxins released by the destroyed towers.  Political pressure to get lower Manhattan back to “normal” had overwhelmed the chance to carefully design test protocols and evaluate public health concerns.  One can readily spot the value of the “organized ignorance” concept to disaster historians as they work back through historical cases of environmental disasters and the frequently unsuccessful attempts of expert institutions to monitor and mitigate them.

7) The struggle over defining the “dominant” disaster in multi-causal disaster episodes.

The Fukushima Daiichi accident was initiated by flooding caused by a tsunami, which itself was caused by an earthquake.  The Twin Towers collapsed due to the failure of egress and fire control systems—the fires were caused by plane crashes, brought about by a terrorist attack.  Like the difficulty, described above, of defining the dominant investigator, recent disasters have been multi-causal, seemingly branching into endlessly complex configurations of cause and effect.  This confusion has challenged the authority of experts.

A comprehensive investigation seems almost impossible in light of the breadth of information and expertise required to make sense of such disasters.  The result in both the Fukushima and World Trade Center cases has been that investigators chose one cause as dominant—terrorism, or reactor failure—and wrote the other elements of the disaster out of the analysis.  The reduction of complexity achieved by settling on one dominant disaster has serious implications, considering the interlinked nature of environments and technological risks.  Rival investigations spring up when experts choose to focus on a different causal element—such as fire over terrorism in the case of the World Trade Center.

PLEASE NOTE: This abridged version of the paper does not include the full Fukushima and World Trade Center disaster investigation case studies—those may be found in a longer version of the paper available upon request.


[i] For an overview of this emerging synthesis, see: Kim Fortun and Scott Frickel, “Making a Case for Disaster Science and Technology Studies,” Fukushima Forum, “3.11 Virtual Conference,” 11-12 March, 2012: https://fukushimaforum.wordpress.com/online-forum-2/online-forum/making-a-case-for-disaster-science-and-technology-studies/.

[ii] Ulrich Beck, Risk Society: Towards a New Modernity (1986; English trans., London: Sage, 1992), 21 (italics in original).

[iii] Charles Perrow, Normal Accidents: Living with High-Risk Technologies (New York: Basic Books, 1984), 356-357.

[iv] Sheila Jasanoff, ed., States of Knowledge: The Co-Production of Science and the Social Order (London: Routledge, 2004), 14.

[v] Stephen Hilgartner, “Overflow and Containment in the Aftermath of Disaster,” Social Studies of Science 37:1 (February 2007): 153-158; see also: Scott Gabriel Knowles, “Lessons in the Rubble: The World Trade Center and the History of Disaster Investigations in the United States,” History and Technology 19:1 (2003): 9-28.

[vi] The literature tracing the modern “production” of risk and disaster sprawls across multiple disciplines, particularly environmental and urban studies, and STS.  In addition to Beck’s Risk Society, see the brief and useful discussion in Greg Bankoff, “No Such Thing as Natural Disasters,” Harvard International Review (online), 23 August 2010.  http://hir.harvard.edu/no-such-thing-as-natural-disasters.  See also: Ian Burton, Robert W. Kates, and Gilbert F. White, The Environment as Hazard (New York: Oxford University Press, 1978); Mike Davis, Ecology of Fear: Los Angeles and the Imagination of Disaster (New York: Metropolitan Books, 1998); Scott Gabriel Knowles, The Disaster Experts: Mastering Risk in Modern America (Philadelphia: University of Pennsylvania Press, 2011); Theodore M. Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton, N.J.: Princeton University Press, 1995); Sara B. Pritchard, “An Envirotechnical Disaster: Nature, Politics, and Technology at Fukushima,” Environmental History 17 (April 2012): 219-243; Joel Tarr, The Search for the Ultimate Sink: Urban Pollution in Historical Perspective (1996); and Langdon Winner, The Whale and the Reactor: A Search for Limits in an Age of High Technology (Chicago: University of Chicago Press, 1986).

[vii] The rise of a modern “disaster state” has been increasingly well-documented (particularly in the United States).  See: Thomas A. Birkland, After Disaster: Agenda Setting, Public Policy, and Focusing Events (Washington, D.C.: Georgetown University Press, 1997); Birkland, Lessons of Disaster: Policy Change After Catastrophic Events (Washington, D.C.: Georgetown University Press, 2006); Michele Landis Dauber, The Sympathetic State: Disaster Relief and the Origins of the American Welfare State (Chicago: University of Chicago Press, 2013); Andrew Lakoff, ed., Disaster and the Politics of Intervention (New York: Columbia University Press, 2010); Charles Perrow, The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters (Princeton, NJ: Princeton University Press, 2007); Rutherford H. Platt, Disasters and Democracy: The Politics of Extreme Natural Events (Washington, D.C.: Island Press, 1999); Patrick S. Roberts, Disasters and the American State: How Politicians, Bureaucrats, and the Public Prepare for the Unexpected (Cambridge: Cambridge University Press, forthcoming 2013); Richard Sylves, “President Bush and Hurricane Katrina: A Presidential Leadership Study,” The ANNALS of the American Academy of Political and Social Science 604:1 (2006): 26-56; Miwao Matsumoto, “Social Dynamics and Structures around Nuclear Technology: Pre- and Post-Fukushima Stories,” 4S Annual Meeting, 2012.

[viii] For discussion of disaster investigations and their “performative” elements, see: Stephen Hilgartner, Science on Stage: Expert Advice as Public Drama (Stanford: Stanford University Press, 2000); for examinations of conflict and competence among disaster experts, see: Lee Clarke, Mission Improbable: Using Fantasy Documents to Tame Disaster (Chicago: University of Chicago Press, 1999); Sheila Jasanoff, ed., Learning from Disaster: Risk Management after Bhopal (Philadelphia: University of Pennsylvania Press, 1994); Ann Larabee, Decade of Disaster (Champaign, IL: University of Illinois Press, 2000); Diane Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA (Chicago: University of Chicago Press, 1997); J. Samuel Walker, Three Mile Island: A Nuclear Crisis in Historical Perspective (Berkeley: University of California Press, 2004).

[ix] Susan Silbey, “Taming Prometheus: Talk About Safety and Culture,” Annual Review of Sociology 35 (2009): 341–69; Ruthanne Huising and Susan S. Silbey, “Governing the Gap: Forging Safe Science through Relational Regulation,” Regulation and Governance 5 (2011): 14-42; Norimitsu Onishi and Martin Fackler, “Utility Reform Eluding Japan After Nuclear Plant Disaster,” New York Times, 17 November 2011: http://www.nytimes.com/2011/11/18/world/asia/after-fukushima-fighting-the-power-of-tepco.html.

[x] On disasters and vulnerability see: Gregory Bankoff, “Rendering the World Unsafe: ‘Vulnerability’ as Western Discourse,” Disasters 25:1 (2001): 27; Greg Bankoff, Georg Frerks, and Dorothea Hilhorst, eds., Mapping Vulnerability: Disasters, Development, and People (London: Earthscan, 2004); Kenneth Hewitt, ed., Interpretations of Calamity from the Viewpoint of Human Ecology (Boston: Allen and Unwin, 1983); Gabrielle Hecht, “Nuclear Nomads: A Look at the Subcontracted Heroes,” The Bulletin of the Atomic Scientists, 9 January 2012; Bob Bolin, “Race, Class, Ethnicity and Disaster Vulnerability,” in Havidan Rodriguez, Enrico L. Quarantelli, and Russell R. Dynes, eds., Handbook of Disaster Research (New York: Springer, 2006); Elaine Enarson, Alice Fothergill, and Lori Peek, “Gender and Disaster: Foundations and Directions,” Handbook of Disaster Research; Ryuma Shineha, “Vulnerability and Inequality: A Case Study of the 3.11 Disaster,” 4S Annual Meeting, 2012; and Paul Jobin, “Dying for TEPCO? Fukushima’s Nuclear Contract Workers,” The Asia-Pacific Journal 9, issue 18, no. 3 (May 2, 2011).

[xi] See: Knowles, “Lessons in the Rubble.”

[xii] See: Yasuhito Abe, “Analyzing the Fukushima Crisis Via Uses of Social Media: A Short Essay,” Fukushima Forum, “3.11 Virtual Conference,” 11-12 March, 2012.  https://fukushimaforum.wordpress.com/online-forum-2/online-forum/fukushima-social-media/; Lisa Onaga, “Teach 3.11: Participatory Educational Project Puts the Kanto-Tohoku Disaster into Historical Context,” East Asian Science, Technology and Society: An International Journal 5:3 (2011): 417-422.

[xiii] For overviews of social science research into Hurricane Katrina, see “Understanding Katrina: Perspectives from the Social Sciences,” Social Science Research Council (online), http://understandingkatrina.ssrc.org/; and Social Science Research Council Katrina Research Hub (online), http://katrinaresearchhub.ssrc.org/rdb/katrina-hub. See also Eugenie L. Birch and Susan M. Wachter, eds., Rebuilding Urban Places After Disaster: Lessons from Hurricane Katrina (Philadelphia: University of Pennsylvania Press, 2006); Ronald J. Daniels, Donald F. Kettl, and Howard Kunreuther, eds., On Risk and Disaster: Lessons from Hurricane Katrina (Philadelphia: University of Pennsylvania Press, 2006); and Technology in Society 29:2 (April 2007): 143-260.

[xiv] Scott Frickel and M. Bess Vincent, “Hurricane Katrina, Contamination, and the Unintended Organization of Ignorance,” Technology in Society 29 (2007): 185.  See also: Barbara Allen, Uneasy Alchemy; Gerald E. Markowitz and David Rosner, Deceit and Denial: The Deadly Politics of Industrial Pollution (2003); and David Rosner and Gerald Markowitz, Are We Ready? Public Health Since 9/11 (Berkeley: University of California Press, 2006).  Also, Robyn R. M. Gershon, Lori A. Magda, Halley E. M. Riley, and Martin F. Sherman, “The World Trade Center Evacuation Study: Factors Associated with Initiation and Length of Time for Evacuation,” Fire and Materials (2011); and Juan Gonzalez, Fallout: The Environmental Consequences of the World Trade Center Collapse (New York: New Press, 2002).

Scott Gabriel Knowles is Associate Professor of History at Drexel University.  He specializes in the history of risk and disaster, with emphasis on the history of technology, urban history, and public policy.  He is the author of The Disaster Experts: Mastering Risk in Modern America (University of Pennsylvania Press, 2011).

15 Comments
  1. Scott Frickel

    Two quick comments on Scott’s thoughtful essay: First, I want to push back a little bit on the emphasis given to post-disaster investigation as political performance. While this framing is certainly useful and appropriate, the previous essay (by Professor Kohta) raises the question of the relationship between political legitimacy and accurate knowledge. I think this is fundamental. How is the quality of knowledge produced by these investigations and the political utility of the resulting reports correlated? Can we find evidence of structural patterns across cases? Second, since Scott references my work on ‘organized ignorance’ in the Katrina case, I will beat this drum a bit harder: especially in the aftermath of major calamity, data can be abundant but also highly ephemeral – it disappears quickly if not collected. But what data to collect? Who decides? Are these decisions intentional or unintentional? Thought-through or spur-of-the-moment? The absence of data figures importantly in post-disaster contexts/responses and I believe that DSTS would do well to pay close attention to the emerging literature on ignorance/absence in science.

  2. Jen Schneider

    This essay provides a useful framework for thinking about trends in DSTS–being new to this literature, I thank you for it! I’ve used both Perrow and Beck when teaching about nuclear power (largely to nuclear engineers) and have had interesting experiences using both: many of these engineers can barely stand to read Perrow because of the many technical inaccuracies they see there, and disagree with Beck’s premise that risk in modernity is characterized by a boundlessness, or a boundary-less-ness–they believe in risk analysis regimes and their ability to quantify damages, both past and future. I think they would also deeply struggle with the first recurrent theme you identify (disasters are not “natural”); in fact, a common refrain I heard from nuclear experts following the accident was that we should direct our attention away from the reactors and toward the “real” victims of the “natural” disaster, for which nobody could be blamed (blame being central). These are just musings, and these essays are intended for STS scholars of course, not necessarily engineers or scientists. Yet the gap between our theory and the absolute dominance of the technocratic model is striking in practice.

  3. Daniel Aldrich

    I enjoyed this essay, but thought there was room for debate on several of its points. Specifically, I did not agree with the claim “Japan is one of the few industrialized nations yet to undertake a deregulation of its nuclear power sector.” Japan’s nuclear power sector, many would argue, has been among the least regulated in the industrialized world – with the same agency promoting nuclear power over the post war period responsible for managing, regulating, and fining it for infractions. This has only changed since the public outcry following 3/11…

    • Scott Knowles

      Thanks for your comments Daniel–your expertise on these issues will make my paper stronger and I look forward to the conversation in Berkeley!

      It might be that I should clarify this (though I was REALLY out of space). “Deregulation” is a tricky term and worth discussing as we go forward. It can be used to refer to a process of rolling back (or never carrying out) statutory requirements for state-enforced safety and environmental quality monitoring. It can refer to a process where the state loosens the control of state-sanctioned monopolies in a given sector. This has been true in Japan since the 1990s, but mostly (as I read it) deregulation hasn’t worked. The same regional monopolies control the nuclear/electricity markets the way they have since the 1950s when the regional monopoly system was created. Deregulation can also refer to the more aggressive policy of privatization–where states sell off their major utilities sectors. This type of “deregulation” isn’t go/no-go, but is usually on a spectrum, where states turn over rate-making authority, or allow competition–or even allow full privatization. I think you mean (correct me please) that the tight fit between Japan’s nuclear regulatory bureaucracy and TEPCO has led to lapses in the rigor of safety regulation. Free market enthusiasts seem to be arguing that further privatization is just the remedy required for what ails Japan post 3.11, both in terms of ensuring safety and lowering costs.
      http://spectrum.ieee.org/energy/the-smarter-grid/upstart-energy-provider-ennet-takes-on-japans-utilities/0

      I hope we will discuss deregulation and the privatization of risk in all its forms at the workshop.

      Thoughts?

  4. Laura Beltz Imaoka

    Thank you for this overview on key concepts and new directions in disaster STS. I am already finding it useful in framing my future research. I think importantly you point out the practical benefits of social media, such as the possibility of “real-time” response for disaster victims and personnel, as well as providing an outlet for citizen voices in disaster investigation. You also mention the “big data” potential of such communications for formal investigations, despite the authoritative status of such evidence and the legal issues complicating its use value. I do think positioning the Internet and social media as true channels for citizen dissent does require some caution, however. Propping up such communities of “expertise” as being parallel to official expert bodies could disregard how these tools also fit within larger economic and political structures, thus necessitating a more critical approach.

  5. Bill Kinsella

    Scott, this is an elegantly written essay that maps out a huge territory — I do want to see the more extended version. Your three big points (adapted and expanded from Steve Hilgartner) and your seven more specific points provide an excellent framework for further analysis. There’s another book here, or a bunch of articles, for sure.

    A number of the comments I made for Kohta’s essay seem relevant to yours as well – rather than repeating them I’ll just suggest that you consider them in connection with both essays.

    The title of your essay does seem to capture one of your core arguments: expert inquiries are political activities. In Aristotelian terms, they’re exercises in “forensic” rhetoric — looking back at past events to determine what happened. But they’re also acts of “deliberative” rhetoric — looking to the future in terms of “lessons learned” (a very frequent trope in these inquiries) and their implications for future choices. And it might be said that they’re “epideictic” or “celebratory” rhetorical acts as well: they celebrate the ability of the system to learn and move on.

    So fundamentally, I’d say these expert inquiries serve to preserve the prevailing system rather than to open up spaces for more radical change. As a corollary, they tend to preserve prevailing power relations and what Beck calls “social risk positions.”

    For me, the epistemological element is especially interesting. Yes, this is politics in action, but one more specific part of those politics is a politics of knowledge. I’ve argued elsewhere that there are “limits of representation” associated with any epistemic process, such as risk analysis, or any form of linguistic or mathematical representation of a phenomenon. The problem is not that such limits exist–that’s inevitable–it’s more about how those limits are constituted in any specific case, who does that work, and who benefits and who is harmed by that work. Your framework is a strong foundation for looking further into such questions.

  6. Kath Weston

    Scott, I was struck by the way you marked the situated irony in which these investigations are integral to the kind of socio-technical approach that brings high-risk systems into being in the first place. One of the paper’s real strengths lies in the way you distinguish between different actors and interests, which takes a much more nuanced approach to understanding expertise and never treats “the state” or “professionals” as monoliths. You could even take that a step farther, exploring for example how deregulation produces uneven landscapes, especially in terms of privatization, subjecting private and public nuclear plant operators to different cost and incentive structures, etc. That’s important because it pushes the discussion past the usual way of framing questions in terms of rules and oversight (do we need more rules, better oversight of privatized operators, etc.).

  7. Scott, I find this very useful. The historian in me raises several questions.

    First, at a number of points I felt that what you were describing applies to older societies as well, but you were treating them as “modern.” I began to wonder if the (undefined) use of the term “technoscientific” did not create a narrowness of chronological focus that might not be warranted. Example 1: By the 18th century, dike-building in Japan had developed into two distinct, clearly-recognized traditions and experts from these traditions were the go-to guys for anyone contemplating a major flood control or irrigation project — despite the lack of the standard array of engineering tools that were developing in Europe at the time. Their knowledge looked quite different, compared to the image of what comes to (my) mind when a phrase like “technoscientific” appears. The dikes these folks built were also high-risk (especially given Japan’s hydrological characteristics), although we tend not to emphasize that in our own thinking. If we understand issues like this as long-standing, we can at least raise the possibility that a) approaches used to deal with them in other contexts may be suggestive for the issues people think of as being at the forefront of contemporary concerns and b) some long-standing issues such as inundation take on new dimensions with the development of technologies such as nuclear power — remember, Fukushima Dai-ichi was flooded, not broken apart by an earthquake; provision for earthquake risk (new focus) worked as an old risk (tsunami) was “underrated.” Example 2: The link between state legitimacy and disasters is long-standing, not just modern. The classic case is pre-modern China, where increased floods (among other things) were seen as a sign that the regime had lost the “mandate of Heaven.”

    Second, in discussing the crisis of assessing regulatory effectiveness, there is a very long tradition in both China and Japan of government efforts to control certain activities by assigning corporate responsibility for effective management/regulation to non-governmental groups. These could be licensed mercantile groups (ton’ya or kabu nakama in Japan, salt monopolists and others in China) or semi-autonomous village/ward organizations. In a global and long-term historical context, central governments did not have the revenues and staff to have independent audit capacity of the sort we presume in post-war Europe and the U.S. (I think development of this independent auditing capacity in the case of China and Japan remains far weaker than in the U.S.)

    Third, I find the idea of “discovery” of vulnerable populations interesting. To play a bit of the devil’s advocate, can I suggest that the “hiding” of these populations is, in certain ways, the product of the increased focus through the 20th century on urban middle class life? Vulnerability of rural populations to, e.g., flooding, has been widely recognized (as the example of Chinese state legitimation above suggests), but rural risk has receded as farm families drop below 5% of national population.

    Fourth, on the role of internet/social media, I think it helps to consider this issue as part of a long-term development going back, arguably, to the 19th century. Daniel Boorstin, as I recall, made this case for the development of local newspapers in the 19th-century US — before the post-war consolidation of news services that increasingly limited their “oppositional” stance in the US. Simple broadsheets in Japan in the 1850s served a similar function. The introduction of mimeograph technologies, then photocopying and facsimile transmission, had similar impacts, making possible the internal transmission of the works of Solzhenitsyn and other Russian dissidents. Per Edgerton, it is not just the new to which we must pay attention.

    There’s a lot to consider here, but I’ve said enough for the moment.

  8. Scott Knowles

    Thanks everyone–very useful comments here–I’m taking notes and will look forward to discussing next week!

  9. Charlotte

    Scott, thank you for making such an important contribution. I was very interested by your focus on investigation, which allows an insightful perspective on the emerging DSTS. I feel that the seven points you’re developing raise many questions, both in themselves and in their interconnections. I can find some of Callon’s hybrid communities and “socio-technical assemblages” in your propositions. I am also very much interested in the emotions and values associated with risk; the use of blame especially would, I guess, need a special analysis to discuss the concept first proposed by Mary Douglas and Aaron Wildavsky, and the role their school of thought has had over the years.
    If you haven’t come across it already – which I doubt! – you might be interested in Soraya Boudia and Nathalie Jas, “Risk and ‘Risk Society’ in Historical Perspective,” History and Technology 23:4 (2007): 317-331, which offers an interesting critique of Beck’s work. I look forward to discussing that with you!

  10. Aya Okada

    I really appreciated this essay for covering and synthesizing such a broad field, particularly the seven trends identified among disaster STS scholarship. As Charlotte points out, I think each of them is worth discussing not only in itself, but also in relation to the others. I’d be interested in hearing what you think is “missing” from these current trends – is there any theme or topic that we as those studying disaster STS should be exploring more in order to advance towards knowledge that lets us deal better with disasters?

  11. Nicolas Sternsdorff

    Scott — this is a great synthesis and analytical framework to think about disasters.

    I was taken by your sixth point, and the compartmentalization of knowledge in the wake of a disaster and organized ignorance. Comfort and Okada’s paper in session three proposes to form a “knowledge commons” to counteract this effect, and the papers on Safecast in Japan all point to citizen initiatives to make data more freely available.

    Some of the dynamics you point out in that section came up in my fieldwork — I heard people complain about the sometimes incompatible goals of different ministries in containing the fallout of Fukushima (i.e., protect farmers, or make public health the first priority). I wonder if organized ignorance can sometimes arise in the politically-charged process of identifying who the victims/affected are, and how one goes about catering to them.

  12. I agree with the various posts above; this will be a wonderful contribution to how we frame our conversations in the coming days.

    Insofar as this is also about the “making of risk,” I’m curious how your analysis here can (and cannot as easily) be merged with the geographic-spatial perspective offered by Cabasse in her paper. Clearly all these efforts relate, as you point out, to how we distribute risk across different populations. In response to Nico’s comment from above, many of the grad students in our (STS) program at RPI are beginning to take an interest in agnotology (the study of ignorance), which I guess is receiving broader attention within science studies (Schiebinger, Proctor, Frickel). I think you are right on the mark, Nico, in suggesting that each bureaucracy, by design, pays attention to certain factors to the exclusion of others, so that they are in fact instances of organized ignorance, organized as such to achieve efficacy (as they define it) within their realm of responsibility. What Shineha documents, in broad statistical/aggregate terms, may be the voices that become excluded in a society that assigns considerable responsibilities to its bureaucratic institutions.

  13. Scott, I think the nod to Stephen Hilgartner is more than appropriate, given the shared topic of inquiry, the overarching scope of your programmatic (yet specific and empirically grounded) piece, and the concise eloquence with which you articulate your thesis.

    A theme of Hilgartner’s work, evident in the essay you cite as well as his book _Science on Stage_, is the tendency/need of official knowledge-producing bodies to project the reassuring meta-narrative that “everything is under control,” and one way he says they do this is by hiding the messy process of knowledge production and by carefully choreographing what (and when and how) they choose to reveal. This becomes particularly challenging in the context of a relatively high profile disaster investigation being carried out more or less in the public eye, the product of which will be much more widely read than, say, some wonky policy white paper.

    Double-plus like for Scott F’s “organized ignorance” as a productive concept, appropriate for helping to understand disaster investigations, as well as many other topics discussed in this workshop. It also calls to mind related STS concepts such as Michelle Murphy’s “regimes of perceptibility,” which she summarizes simply as “the way a discipline or epistemological tradition perceives and *does not perceive* the world” (emphasis added).

    Now, there is an issue that I would like to discuss in the broader context of the workshop with everyone, but perhaps it might be appropriate to raise it in response to your paper. I am not yet sure how to articulate this, so I suspect I will begin rambling, but I will try to express some of the things that have been occupying my mind recently. Namely, I want to ask about the role of moral and ethical responsibility with respect to disasters (both post-disaster “blame” and pre-disaster obligation), and how recognition of this responsibility plays out in action. This is both an empirical and an analytical concern. Empirically, I want to know whether and how a sense of moral responsibility animates the decisions and actions of actors, including disaster experts and policymakers. Analytically, this requires an openness to the possibility of viewing such actors — yes, even powerful ones — as individual moral agents, not merely as self-interested creatures of politics, nor as institutional dupes. Speaking for myself personally, it requires considerable, intentional effort for me to think in this way about those in certain positions of power.

    My intention is not to suggest that your own analysis denies the moral agency of officials and experts. However, I would like to hear more of your explicit thoughts on how the moral agency of investigators, as well as perhaps their overseers, affects the investigations, their conclusions, and how those conclusions are presented.

    Perhaps it might be helpful for me to explain a little more about where I’m coming from with this. I have been trying to figure out why Japanese policymakers at national, regional and local levels decided upon certain risk reduction measures sometime in 2011, and have continued to push these measures regardless of local objections (of which they must surely be aware). When I ask people — local residents as well as some experts from Kobe and elsewhere — why they think this is happening, of course they cite, for example, the long history of cozy relations between the Japanese government and the construction industry, but some also speculate that officials genuinely feel morally responsible for protecting lives and property, and skipping any obvious measures for doing that would simply be irresponsible. It has been suggested to me that, were another major tsunami to strike several years from now, and deaths and damage occur which might have been prevented by something as “straightforward” as a taller seawall, any official who might have nixed plans for its construction would feel intense moral guilt, and moreover would likely suffer severe political repercussions. Thus, even if local groups protest persuasively that seawalls are unnecessary and burdensome, no official would be willing to risk going against the conventional wisdom that seawalls protect lives and property. Obviously, a key question here becomes how “conventional wisdom” gets crystallized. (As a general answer, I think we can say “through history and habit,” but each particular case calls for specific study.)

    My question also comes from the general observation that disasters — preparing for them, responding to them, recovering from them, and researching them and those affected by them — are laden with moral and ethical concerns. Personally, I have been constantly, painfully aware of my position as a researcher seeking data from people who have endured so much, in a landscape of destruction sacralized by the sudden, violent loss of so many lives. I am still struggling with how to let this awareness express itself in my dissertation. However, in my opinion, much of the best, most compelling academic work, is animated by a moral heart and oriented by a moral compass, while simultaneously maintaining appropriate scholarly distance and rigor. (Your book The Disaster Experts and Kim Fortun’s work on Bhopal certainly come to mind, for example.) I think this has to be part of our group’s discussion, and it has to be explicitly part of the larger DSTS project.

  14. Jen Schneider

    Hi everyone,

    Scott asked that those of us who had prepared more respondent comments than we could share in our allotted two minutes post them here. I already commented above, so please forgive me if these seem redundant or excessive.

    I should also say that I feel singularly unqualified to respond to the paper because I am brand new to this subfield of Disaster Studies in STS, and because my own work in communication, energy, and engineering studies is at the margins of STS anyway, which is itself at the margins. This is a paper reflecting on where the subfield has been, if not where it is heading, and so my own inadequacies as a respondent will be obvious.

    I have a strong bias with regard to STS, which comes from the fact that I work at the Colorado School of Mines, an institution that educates engineers almost exclusively, and mostly at the undergraduate level. The majority of our graduates go on to work in the extractive energy industries. I teach courses in media studies and in energy studies, including a course called Nuclear Power and Public Policy, a core course in our Nuclear Science and Engineering program, a graduate program that grants master’s and PhD’s primarily to students who will become plant operators, researchers at national labs, or nuclear institution managers. And yes, that means that my life as a scholar in the humanities and social sciences is really, really strange.

    This experience has led me to read all work in STS as if with two sets of eyes. On the one hand, STS has provided me with a critical lens for understanding what is happening in engineering and in energy crises for the sake of my own work. It has been very formative in that regard. But I also assign foundational work on STS dealing with the deficit model and the construction of expertise and knowledge in the hopes that my students will become more reflective about their own work as technology practitioners. So on the other hand, I am always reading work in STS as if I was an engineering student about to go work for Halliburton, Shell, or Areva. Which requires imagination and leads to plenty of sometimes fruitful frustration, both for me and for my students.

    All of this is a long and self-involved way of saying that to Scott’s typology of seven commitments of STS in Disaster Studies, I might add an 8th that I would like to selfishly see added for the reasons stated above, and that I think emerged in a few of the other papers posted online, and in many of the comments. And that is this:

    The reflexive turn, obvious in many online papers and comments, and in Scott Frickel’s comments this morning, in which disaster studies seems to be calling STS scholars to question their moral and professional responsibility to be relevant to the actual management of or response to disasters. This concern about relevance and applicability is not necessarily specific to STS, and yet in the case of disaster studies in particular the siren song of praxis seems urgent.

    In particular, from my perspective, a fruitful area of exploration related to reflexivity has to do with how we imagine expertise. This stems from how we write—is our writing accessible and meaningful to practitioners, for example? And how do we imagine or describe expertise? How do we assert, defend, or deny our own expertise as scholars, which often seems largely theoretically based, or based on methodologies that may seem soft, foreign, or unnecessary to those in science, engineering, or regulation?

    And in particular, how do we theorize affective and cognitive responses from “experts” to our kinds of critique during disasters? For example, following Fukushima, nuclear science and engineering colleagues who had been relatively tolerant of social and political critique beforehand were understandably shaken and dealing with hard-to-measure-and-identify reactions of shame, culpability, and defensiveness. Otherwise ordinary conversations became incredibly fraught, weighted with the long history of critique and response that nuclear scientists and engineers have faced, and with my own inabilities to understand the deeply technical questions with which those experts were occupied at that time. How to enter the conversation? How to position myself? Understanding how I could interact in constructive ways, and yet maintain a voice that questioned problematic performances of expertise, proved much more difficult than I had imagined. Theory seemed to fail me in those moments.

    I’m not suggesting we abandon our critical perspectives simply in order to make them accessible—Gary Downey and Juan Lucena have shown that there is educational value in discomfort, particularly for engineers, and I think our oppositional stance as scholars of science and technology is a necessary corrective to key neoliberal strategies such as scientism, technocratic regimes, and the dominance of quantitative assessments. In fact, I think there are good arguments to be made for holding these oppositional spaces sacred given the corporate or other institutional commitments of much science and engineering today, and particularly in the energy industry.

    Therefore self-reflection about our own rhetorical strategies and positionality is key to the question of relevance, which we began to discuss this morning after Scott Frickel’s talk.

    In sum, I think the move to think critically about our own role in disaster studies is an important one, and may deserve a spot in Scott Knowles’s typology. But I would also suggest that there are risks in being so involved—there are significant political valences involved in conducting embedded studies of expertise, and these also require self-reflexivity. In order to speak meaningfully to and with science and technology experts, and to gain trust and access, we have to develop personal relationships with them, attempt to understand their approaches and worldviews, and therefore risk having our own worldviews and even political commitments challenged by hegemonic scientistic approaches. Ethnography feels particularly messy in the case of disaster studies.

    To put it another way, there is a tension between our need to speak critically about disaster and to speak meaningfully to those involved in managing or planning for disasters. As a researcher and teacher who is deeply embedded in science and engineering education and who is equally invested in occupying a critical stance, this is the philosophical or existential question that concerns me most and which I think disasters in particular make very visible.

    I would conclude my comments by asking Scott to respond to Scott Frickel’s argument this morning, which suggests that inter-disaster comparison is both challenging and necessary. In this paper (Knowles’s) it is not totally clear how the seven characteristics you identify function across or through the two disasters examined, so a clearer mapping of the seven commitments on the two disasters would be useful. And finally, the question Atsushi posed to us at the beginning of this workshop: does your typology help us to answer the questions of where we go next? What is missing? Are there some areas that deserve our attention more than others?
