Introduction

Howard Kushner's challenge to historians of drug use to ‘take biology seriously’ resonates with critical problems related to the role of science in a democracy (Kushner, 2006, 2010). Does scientific knowledge possess an epistemological primacy that suggests it should guide all areas of inquiry, including policy development? Conversely, when scientific findings clash with deeply held community values, is the electorate justified in restraining or ignoring science? If scientific knowledge can benefit humankind, does a just and democratic society have an obligation to ensure equitable access to these benefits? How should citizens or policymakers – or historians – respond to recent scientific findings given that these may be revised or overturned on the basis of subsequent research?

More specifically, Kushner has urged historians of drug use to use the findings of neuroscience as a historical tool (while he also exhorts neuroscientists to engage with the work of social scientists). Funded largely by the National Institute on Drug Abuse (NIDA), neuroscientists have developed a compelling model for how addiction occurs. This model holds that addictive drugs interact with neurotransmitter systems in the brain, especially in the regions that mediate our sense of reward; over time, continuous exposure to these drugs alters brain structure and function such that the drug user experiences craving to continue or resume use; and these alterations make addiction a difficult condition to treat successfully. Apart from contributing to our understanding of a vexing human problem, neuroscientists have produced genuinely exciting scientific findings about the workings of the human brain. Images of the brain in action, developed through functional magnetic resonance imaging and rendered vivid by the application of pseudocolor to distinguish levels of activation across regions of the brain, add to the media appeal of this work. Finally, the model offers social utility to the degree that it leads to better treatments of addiction. NIDA has pledged its faith in this model with its adoption of ‘chronic relapsing brain disease’ as its definition of addiction.

Yet, to treat scientific findings as a neutral lens through which to view historical evidence avoids important questions. A burgeoning literature in the history, sociology and anthropology of science has shown that the laboratory, far from being an isolated and impermeable space, is intimately linked with the social world around it. Influence flows in both directions. Patronage, public expectation and more help shape research directions and methods. As findings emerge from the laboratory, they are selectively adopted and adapted; their application depends on alterations to physical and social structures and processes that enable scientific findings to work in the world (see, for example, Latour, 1983; Tomes, 1998). Moreover, as the federal government has become a major funder of scientific research, these relations include complex ties to government and policy. Such ties have existed in addiction research since the 1920s, when the Rockefeller-funded Committee on Drug Addictions worked closely with federal officials charged with enforcing the Harrison Narcotic Act. These links became structural when the committee's work was transferred to National Research Council oversight in the early 1930s (Acker, 2002). The Addiction Research Center at the US Public Health Service Narcotic Hospital in Lexington, Kentucky, was home to a cadre of researchers whose work from the 1930s to the 1970s was federally managed. Both Addiction Research Center personnel and findings were foundational for much of the expanded government-funded addiction research that followed the creation of NIDA in 1973. Thus, as an organized enterprise, addiction research in the United States has coexisted and been consistent with the nation's commitment to drug prohibition.

Moreover, neuroscience does not single-handedly unravel the mystery of addiction. Scientists lack a solid consensus on the explanatory reach of the brain reward model. Some ascribe great power to the drugs themselves while others, such as behavioral pharmacologists, invoke the power of cues and reinforcers, that is, of the context in which drugs are consumed – or not (DeGrandpre, 2006; Campbell, 2007). These differences notwithstanding, neuroscience frames drug use as individual behavior. One implication of this focus is that neuroscience does not take on questions of the incidence and prevalence of drug use or drug problems; it therefore cannot help us measure the social dimensions of problematic drug use.

As others have noted, science that wrests the individual from the social context supports a view of human beings as neoliberal subjects, as individual volitional units who, in modern society, must maintain self-control in the face of ubiquitous inducements to seek pleasure through consumption (Campbell, 2010; Gabriel, 2009; Keane, 2010; Vrecko and Hamill, 2010). Such a model of human behavior casts the addict as an isolated agent who perversely acts against his own self-interest by repeatedly consuming drugs despite the glaringly bad consequences – to himself – of continued use. (This feature of addictive behavior drew behavioral economists into the fray: actions so manifestly at odds with any rational sense of well-being prompted them to challenge rational actor theory with the observation that humans frequently behave irrationally (Loewenstein, 1999).)

Finally, the focus on neurotransmission, and the crucial role of exogenous compounds in elucidating brain function (Acker, 1997), favor the continuing search for new drug therapies for addiction. The push for pharmacotherapies contrasts with an earlier history in which addiction treatment addressed the whole person, especially in institutional care, from inebriety hospitals in the late nineteenth century (White, 1998; Tracy, 2005) to the US Public Health Service Narcotic Hospitals from the 1930s to the 1970s (Acker, 2002; Campbell, 2007) to the Synanon-style therapeutic communities. (These are variously portrayed as humane institutions or tools of social control.) Given the disparities in access to health care that characterize the American medical system, it is surely those who are socially at greatest risk for disorganizing drug use who will be most likely to be treated in institutional settings where pharmacotherapies are dispensed with minimal adjuncts in the form of counseling, educational or vocational services, or family support. A system that disproportionately triages drug users of color into the criminal justice system instead of the health-care system adds another layer of inequity (Currie, 1993).

In calling addiction a chronic relapsing brain disease, NIDA Director Alan Leshner was not simply channeling scientific discourse. However securely they are grounded in physiological malfunction, diseases – like all human categories – are socially constructed; disease definitions perform cultural work. The ‘chronic relapsing brain disease’ formulation neatly bridges two social worlds: that of brain researchers and that of drug treatment professionals. The chronicity and relapse aspects resonate with the Twelve-Step definition of addiction, while the brain invokes neuroscience and, by implication, hopes for newer, better pharmacotherapies for addiction. To the extent that calling addiction a disease casts it as a pathology of individuals, this process shifts attention away from the powerful social influences on drug use. It resonates with an American cultural emphasis on the role of individual action in securing well-being (Martensen, 2004).

To further assert that addiction hijacks the brain's reward system is to offer a metaphor, not an explanation. Metaphors require analysis. When the language of hijacking the brain's reward system appears without attribution to any particular person in NIDA's official characterizations, in the New York Times’ coverage of drugs and neuroscience, and in the work of drug scholars, then something is being elided or flattened. ‘Hijack’ is a volatile term, resonant with the greatest fears we have about the world today. To say that drugs ‘hijack the brain's reward system’ does at least two things. One, it equates drug use as a social problem with the most serious threats we face, echoing longstanding fears that certain forms of drug use threaten to undo American society (Jenkins, 1999; Speaker, 2004). Two, it contains a subtle bias in favor of pharmacotherapy as distinct from other forms of drug treatment: if cocaine can hijack the brain, perhaps only another drug, at least as powerful, can release the hostage.

The hijack metaphor echoes the absolutist nature of prohibition. It also resonates with enforcement rhetoric. The Drug Enforcement Administration (DEA) characterizes Colombian drug cartels as terrorist organizations because of their links to groups like the Revolutionary Armed Forces of Colombia. Shortly after the terrorist attacks of September 11, 2001, the DEA added an exhibit on narcoterrorism, entitled ‘Target America: Traffickers, Terrorists, and You’, to the DEA Museum at its headquarters in Arlington, Virginia. Scientists, bureaucrats and policymakers have all deployed images comparing drug use to terrorism at a time when terrorism is the dominant national security concern of the United States.

Similarly, the public and the media sustain images of drug users as uniquely depraved, willing to violate any and every moral value in their quest for soul-numbing drugs. These images reflect a powerful symbolism that mainstream American population groups have attached to drug use outside of their social horizons. Most of the psychoactive drugs that the US government has declared unfit for non-medical use first aroused concerns because of their associations with marginal racial groups. Images of opium-smoking Chinese in the late nineteenth century, cocaine-maddened African Americans in the early twentieth century and marijuana-puffing Mexicans in the 1930s fueled movements that culminated in passage of the Harrison Narcotic Act in 1914 and the Marijuana Tax Act in 1937. Moral entrepreneurs, such as Progressive Era vice reformers (Acker, 2002), aspiring politicians and bureaucrats, such as Hamilton Wright (Musto, 1991) and Harry Anslinger (McWilliams, 1990), cast drug use as a threat to the national fabric; these attitudes, hardened into law, became the basis of American drug policy – which, as noted, has been tightly linked to addiction research. Given the intertwined nature of addiction research, public policy, media coverage and public attitudes, taking science seriously means engaging it critically.

Understanding the Social Burden of Addiction

‘Chronic relapsing brain disease’ is the latest in a 200-plus-year tradition of conceptualizing addiction as a disease. Examining that tradition – as Keane and Hamill (2010), Courtwright (2010) and Gabriel (2009) note – reminds us that we have long had insightful understandings of addiction. Benjamin Rush's Moral Thermometer (1790) lays out as neatly as any late-twentieth-century biopsychosocial model the escalating consequences entailed by progressive levels of drug use. The body of knowledge accumulated through the work of inebriety physicians from the late nineteenth century was interrupted when alcohol and drug prohibition seemingly obviated the need to study addiction, under the illusion that drug and alcohol use would cease once proscribed (White, 1998). Renewed research initiatives (on opiates in the 1930s; on alcoholism in the 1940s) were the forerunners of contemporary studies of addiction. Thus, we have long recognized addiction and ascribed to it a descriptive common-sense definition whose core is the pursuit of repeated drug use that eclipses other interests and activities and that proves difficult to stop (a definition entirely consistent with chronic relapsing brain disease). The boundaries of this definition blur in debates about how much is too much or whether non-drug behaviors constitute true addiction, but as Darryl Neill (commenting during the Addiction, Brain, and Society Conference, Atlanta, Georgia, February 2009) usefully notes, we are all addicts. Repeated stimulation of the reward system is necessary for learning and memory formation. Addictions can be functional or ruinous, and social context goes far to determine the degree of functionality, dysfunctionality or deviance of a behavior.

What are the range and limits of the chronic relapsing brain disease model's reach beyond laboratory walls? There are at least four measures of the value of laboratory-based disease definitions. One is scientific luster: does the model illuminate the natural world in elegant ways? The chronic relapsing brain disease construction certainly meets this standard. A second is diagnostic robustness: does the model reliably distinguish those with a given condition from those who do not suffer it? Are brain images necessary to diagnose chronic relapsing brain disease, or can behavioral measures work? Can the latter be defined with sufficient precision? Behavioral conditions like addiction are notoriously difficult to characterize precisely, as repeated revisions of the American Psychiatric Association's Diagnostic and Statistical Manual attest. A third is therapeutic promise: does the disease definition provide grounds for discovering new treatments? Although many treatments have been developed without knowledge of basic disease mechanisms, much biomedical research is justified on the grounds that understanding those mechanisms will enable development of improved treatments. The jury is still out on whether the chronic relapsing brain disease model will yield improved treatments for addiction. A fourth criterion asks whether the model offers a platform for bringing the incidence and prevalence of a disease under control (as, to take perhaps the strongest example, vaccination has for diseases from smallpox to polio). This criterion is most distant from the laboratory; yet prevalence of a condition is a crucial measure of its human cost.

In what follows, I offer a model for assessing the social impact of addiction: how do we understand disproportionate concentrations of severe drug problems in particular population groups? The model draws methods from history, science and social science. From history, it ascribes agency and sets people in social, economic and cultural contexts as it examines the past. From science, it draws on classical pharmacology, examining drug forms, routes of administration, doses, frequency of use, dose–response curves, chronicity of use and dependence. From social science, it profits from ethnographers’ qualitative studies of drug users and sociologists’ studies in drug-use epidemiology.
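
For readers who want the pharmacological shorthand spelled out, a dose–response relationship is conventionally summarized by a sigmoid (Hill-type) curve. The formulation below is standard textbook pharmacology rather than anything drawn from the sources cited here, and the symbols are generic:

\[ E(d) = \frac{E_{\max}\, d^{\,n}}{EC_{50}^{\,n} + d^{\,n}} \]

where \(E(d)\) is the effect produced by dose \(d\), \(EC_{50}\) is the dose yielding half the maximal effect \(E_{\max}\), and the exponent \(n\) governs how steeply the curve rises. A ‘steep’ dose–response relationship, in the sense invoked below, pairs a sharply rising curve with a route of administration (smoking, injection) that delivers the full dose to the brain almost at once, so that the user's experience jumps abruptly from little effect to an intense one.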

For the purposes of this discussion, I invoke the durable, common-sense understanding of addiction, informed by science and by the human experience of distress. I frame addiction as one among numerous risks associated with drug use; the rate of addiction in a population is one of many drug-related harms we would aim to reduce (granting that some addictions are benign). I frame drug use as a social behavior, with the understanding that patterns of drug use change over time, as do the levels of harm, such as addiction, associated with them. ‘Harm’ itself is not necessarily a straightforward term; some argue that any use of certain drugs is a harm to be eradicated. I focus, instead, on consequences of use as harms; distinct patterns of use, in turn, hold greater risk of contributing to harms such as addiction or overdose.

How do we account for the sudden upsurge of a pattern of drug use in a population accompanied by high rates of problems such as addiction and overdose? How do we distinguish such patterns from the more normal, or at least common, waxings and wanings of drug use patterns? Bruce Johnson introduced the concept of a drug era to refer to the adoption of a new form of drug use by a social group, characterized by an initial period of rising rates of first-time use. Drug eras end as the rate of new users either declines or reaches a steady state (Johnson et al, 1998). Drug era cohorts, in turn, live in historical time, responding to drug availability, to drug use norms in their immediate social group, and to the larger cultural and structural contexts of their lives. This understanding invokes Michael Windle's (2009) advice that we consider the life course of drug era cohorts and the opportunity structures within which people make choices. It also echoes Eric Schneider's use of spatial and social geography to trace rises in heroin consumption in American cities in the second half of the twentieth century in his recent book, Smack: Heroin and the American City (2008).
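
Because the distinction between new use (incidence) and accumulated use (prevalence) does analytic work throughout what follows, a minimal sketch may help. It is written in Python with wholly invented numbers, and the threshold for calling a year ‘steady’ is an arbitrary illustrative choice, not anything proposed by Johnson et al:

    # A minimal sketch, with invented figures, of how a drug era can be read off
    # incidence data: the era stays open while the count of first-time users rises,
    # and closes once that count levels off or falls.
    new_users_by_year = {
        1984: 120, 1985: 340, 1986: 910, 1987: 1500,
        1988: 1550, 1989: 1480, 1990: 1100, 1991: 800,
    }

    def era_phases(counts, flat_tolerance=0.05):
        """Label each year 'rising', 'steady' or 'declining' relative to the prior year."""
        years = sorted(counts)
        phases = {years[0]: 'rising'}  # treat the first observed year as the era's start
        for prev, curr in zip(years, years[1:]):
            change = (counts[curr] - counts[prev]) / counts[prev]
            phases[curr] = ('rising' if change > flat_tolerance
                            else 'declining' if change < -flat_tolerance
                            else 'steady')
        return phases

    for year, phase in era_phases(new_users_by_year).items():
        print(year, new_users_by_year[year], phase)
    # In this invented series the era 'ends' in 1988, the first steady year, even
    # though the cumulative number of people who have ever used the drug
    # (prevalence) keeps growing for years afterward.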

This formulation acknowledges that the adoption of new forms of drug use is a common human experience. Adoption of a new form of drug use involves social learning (Zinberg, 1984). Over time, populations encrust psychoactive drug use with meaning and develop norms that influence patterns of use. Alexander von Gernet (1995) described how various groups responded to the introduction of tobacco in different parts of the world during the sixteenth and seventeenth centuries. He noted that, while physical objects associated with smoking tobacco (such as pipes) traveled to new lands, the ideas surrounding use of the drug did not. Thus, while Native Americans incorporated tobacco smoking into shamanic spiritualism, Europeans explained tobacco's effects in light of humoral theory. Historical and anthropological inquiries have brought to light the processes whereby social groups create rituals to contain drug use and exploit drug effects for some purpose, whether sacred or profane. Similarly, social groups establish norms of moderation and define excess. They proscribe use in some situations even as they approve it in others. In a complex society like the United States in the twenty-first century, a diverse and overlapping set of social groups devise, sustain and revise drug-use norms in response to diverse and complex conditions.

Yet the process rarely goes smoothly. Especially early in a new drug era, before much experience has accrued, some users will experience dysphoric reactions or accidental overdose, or they will slide into compulsive, addictive use. These problems may wane as the drug behavior is either rejected wholesale or is brought under better control. In some populations, however, high rates of severe problems persist. In the United States, the media and politicians typically move quickly to demonize the drug and urge stiffer punishments for use or sale. Repeated cycles of this phenomenon have failed to prevent new drug crises.

Distinguishing the harms associated with drug use from drug use itself provides a better analytic focus. I propose an epidemiology of drug-related harm to distinguish true emergencies from common fluctuations in styles of drug use and to distinguish levels of severity among such epidemics. Doing so builds on the suggestion of Peter Reuter and Jonathan Caulkins (1995) that prevalence of drug-related harm is a better criterion for judging drug policy than simply tracking shifts in numbers of users or amounts of drugs consumed. It also avoids attaching the loaded term ‘epidemic’ to every rise in drug use, in the interest of developing a dispassionate and accurate vocabulary to counter the media's alarm-mongering coverage of drugs.

To explicate epidemics of drug-related harm, I borrow from Günter Risse's (1988) conception of the changing ecology of disease to explain how infectious diseases have spread among human population groups in specific times and places in the past. This exercise in historical epidemiology profits from resonances between history, on the one hand, and epidemiology and ecology, on the other. Epidemics are by definition dynamic events involving change over time; they have narrative arcs. Ecological systems are also dynamic as they move through periods of change, relative stability and renewed change.

The changing ecology of disease model incorporates pathogens (biological agents) and their vectors; the biological and the built environment; economic, social and cultural factors; and human agency in a single model to analyze the spread of specific diseases among population groups in the past. Risse's accounts of such epidemics as an outbreak of bubonic plague in Rome in the 1650s take seriously biological explanations, such as the role of the rat flea in transmitting the bacterium Yersinia pestis. The model also considers the economic activities that bring humans into contact with rats and create conditions in which the disease would spill over from its endemic animal hosts to humans. It applies ecological analysis such as consideration of the impact of the expanding belts of agricultural land that separated European cities from surrounding forests and thus isolated urban and sylvan rat species from each other. It assesses human response to epidemics, such as whether to hold prayer masses (to invoke God's aid) or to ban large gatherings (to reduce presumed contagion); scapegoating of marginal population groups characterizes the popular response to many epidemics. Similarly, analyzing the historical epidemiology of drug-related harm takes into account the structural and social contexts in which groups of people encounter and use drugs.

Applying a socioepidemiological perspective to past episodes of clustered drug use problems enables distinctions in the levels of severity of epidemics of drug-related harm. Accounting for such distinctions, in turn, suggests that the severity of epidemics of drug-related harm is a function of the social context of drug use, not the identity of the drug. Three episodes in American history – morphine use in the late nineteenth century, powder cocaine use in the 1970s and crack cocaine use in the 1980s – illustrate this point.

From the mid-nineteenth century, the hypodermic syringe offered a powerful new route of administration for morphine, and the drug was widely used among the growing middle class. By the last decades of the century, rates of opiate use among the American population were higher than before or since; use was particularly high among white middle-class, middle-aged women (Courtwright, 2001). Some became tragically addicted in the pattern illustrated by the character Mary Tyrone in Eugene O’Neill's play Long Day's Journey Into Night. This pattern had seeming legitimacy because physicians provided the drug or introduced the user to it; the drugs were framed as medicines at a time when physicians valued systemic effects as evidence of efficacy.

The high prevalence rates suggest large numbers of users who did not become addicted, who used occasionally or perhaps just once or for a single brief period. Much of the use was medical: prescribed by a physician for a specific complaint. But the range of use and users and the availability of syringe kits sold as fashion items around 1900 make clear that much use corresponded to what we think of as non-medical use today – even when we refer to such use as self-medicating. However, women swallowing or injecting morphine did not spark panic as long as they conformed to normative gender roles – which, ironically, some were better able to do in part because morphine helped assuage social anxieties (Kandall, 1996; Courtwright, 2001).

But the pattern waned in response to growing concern about such addiction, encouraged by physicians’ shifts in attitude and prescribing practices regarding opiates. From the 1890s, physicians reined in the prescription and administration of morphine. Rising concern about iatrogenic addiction coincided with shifting ideas about disease causation and, therefore, the role of drugs in medicine. Morphine came to symbolize old-fashioned, ineffective medicine that treated only symptoms (Acker, 1995; Courtwright, 2001). Disease was seen as less systemic and more local; drugs, similarly, were to attack the site of cause or kill the invading pathogen. Both the medical and non-medical uses of morphine declined in the face of concern about addiction.

This phenomenon constituted an epidemic of use and of drug-related harm. The rise of treatment institutions and the widespread advertising and sale of purported morphine addiction cures, in addition to physicians’ concern, all attest to the demographic reality and visibility of the phenomenon. Yet this epidemic was brought under control by a range of forces that can be characterized as a process of social learning involving groups with the resources to meet the challenge. Physicians’ own middle-class identity informed their assessments of ailments they ascribed to the middle class, and they responded by developing or revising diagnostic ideas and devising treatment regimens. Middle-class morphine addicts seeking relief could afford to pay for it, as exemplified by those who thronged to the Keeley Institutes to quaff their gold chloride cure and participate in their group sessions (White, 1998).

In the 1970s, many young white middle-class adults, emboldened by favorable experiences with marijuana to sample new drugs, tried powder cocaine. Cocaine use was considered chic, associated as it was with rock stars and stockbrokers. When these users began to experience significant rates of dependence, they helped drive the dramatic expansion of private and public sector chemical dependency treatment facilities in the 1970s and 1980s. As their addiction to cocaine became clear, they influenced a reconceptualization of addiction, which shifted from the depressant-based focus on tolerance and physiologically overt withdrawal symptoms to criteria that included compulsive use that was out of control and continued in spite of adverse consequences. Equipped with health insurance and supported by Employee Assistance Programs, many of these cocaine addicts became recovering addicts (Acker, 1993). Again, a middle-class group, experiencing an epidemic of drug-related harm, deployed resources to mitigate that harm and established norms marking off some forms of drug use as too risky.

These two episodes have several things in common. They involved significant problems, though rates of use in the overall population remained low. Observers (and drug users themselves) responded with concern, not by demonizing users or portraying them as threats to the social fabric. Treatment resources were adapted and expanded to meet the needs of users. Images of addicts, intended to mobilize compassion, also marked off unacceptable patterns of use. Rates of problematic use dropped as people experienced or witnessed negative consequences of use. Such a response is possible when the affected groups have the resources and resilience to address the problem effectively. In neither case did rates of addiction drop to zero, but addiction became and remained statistically rare in the populations that had initially experienced higher rates of problems.

In contrast, when a population already suffering from chronic multiple stressors adopts a new drug behavior characterized by a steep dose–response curve, the risk of out-of-control drug dependence is heightened. When population groups share exposure to chronic multiple stressors, the reasons are, at least in part, structural. Yet it is just such groups that are likely to be demonized through lurid portrayals of their depraved drug use.

Readily recognizable examples of severe, structurally mediated epidemics of drug-related harm include the gin craze in London in the 1720s and 1730s and crack use in American cities in the mid-to-late 1980s. Interestingly, both of these episodes involved the introduction of a new form of a drug, not a new drug. Alcohol was not new in eighteenth-century London, but gin offered a more powerful dosage form than the beer London's poor were accustomed to. Similarly, crack was not a new drug. Crack is a smokeable form of cocaine; the route of administration accounts for its more powerful wallop compared to snorted cocaine. Thus, it is more useful to examine (a) patterns of incidence and prevalence and (b) patterns of use, such as occasional, daily, oral, intravenous, high dose, low dose and so on, than to seek to sort drugs into safe (legal) and dangerous (illegal) categories. Benjamin Rush recognized this point; in his Moral Thermometer, risk of negative consequences of drinking rises as the dose and frequency of drinking rise. However, moderate consumption of wine or beer with meals he pronounced consistent with salubrity and productive of good cheer.

Crack in American Cities

How would this model interpret the ‘crack epidemic’ of the mid and late 1980s? All the conditions for a severe epidemic of drug-related harm were present. Some of the most disadvantaged residents of America's inner city neighborhoods adopted a new pattern of drug use that produced a steep dose–response curve. The new crack markets emerged from existing markets for powder cocaine. These markets were anchored in poor inner city neighborhoods, where illicit markets compete successfully against constrained opportunities in the legitimate workforce. From the early 1970s, because of its high price, much cocaine flowed through these neighborhoods and into the noses, and, sometimes, veins of affluent purchasers from other neighborhoods. Crack was a market innovation: the repackaging of an expensive commodity at a new price point, the US$5 or $10 rock. In smokeable form, cocaine produces a powerful but fleeting euphoria. A rock of crack was easily affordable in poor neighborhoods, and purchasers increasingly included their residents, not just those who drove in for furtive drug purchases. Crack came into neighborhoods where workplace involvement and educational attainment had decreased while social isolation had increased dramatically between 1970 and 1980 (Kasarda, 1992). Thus, crack was introduced to a population that had suffered multiple devastations. The combination of route of administration, a market that made individual dose units easily available at a low price, and multiple dimensions of disadvantage resulted in a rise in drug-related harms in a specific population.

How do we better understand both the context in which rates of compulsive crack use rose and the experiences of the human beings who became compulsive seekers of crack-induced euphoria? When Philippe Bourgois wrote In Search of Respect (1995), he offered not just ethnography but also history as he traced the fortunes of several generations of Puerto Ricans, from subsistence farmers in Puerto Rico in the nineteenth century to a cohort of young crack sellers in East Harlem in the 1980s. He argued that this history of progressive marginalization was critical to understanding the drug use and drug selling behavior of these men. Similarly, I believe understanding the history of America's poor urban neighborhoods is important to understanding the impact of crack on their residents. Here, I focus on Pittsburgh's Hill District, an exemplar of urban African American neighborhoods characterized in the 1980s by chronic poverty, low workplace involvement, low educational attainment, a dearth of legitimate businesses (Ford and Beveridge, 2004), social isolation and flourishing drug markets.

The Hill District achieved national fame as the setting of August Wilson's plays and the inspiration for the television series Hill Street Blues. Near Pittsburgh's downtown, it has, from the late nineteenth century, held concentrations of vice markets, such as brothels, that were the equivalents of today's visible drug markets. Since then, it has also been characterized by poverty, crowding, substandard housing, and disproportionate levels of disease and crime. During the Great Migration, the Hill District's population became predominantly black. Poverty and problems notwithstanding, the neighborhood was home to a wide range of businesses, from licit establishments like delicatessens through shady ones like pool rooms to illicit ones like dope dens. In the 1980s, the Hill District continued to be characterized by poverty, low educational attainment, dilapidated housing stock, and markets in illicit goods and services. But in contrast to the Hill District of the 1920s and 1930s, the Hill now had almost no legitimate businesses; its population had declined drastically in the wake of deindustrialization; and urban renewal projects had isolated it geographically. The Hill had come to typify the kind of ravaged urban neighborhood that evokes a powerful nostalgia for lost community among present and former black residents (Fullilove, 2004), while to many white outsiders it symbolized danger, whether alarming or alluring. Tracing these changes in the neighborhood captures some of the experience of its residents and sets the stage for the arrival of crack.

African Americans coming to the Lower Hill in the first half of the twentieth century settled in a neighborhood whose aging housing stock had already deteriorated (Pittler, 1930). Many buildings lacked indoor plumbing. Like earlier immigrant groups, the blacks who populated most of the Lower Hill in the 1920s and 1930s formed mutual aid societies and clubs. The Hill was home to the nationally known black newspaper the Pittsburgh Courier. African Americans reinterpreted cultural traditions from the agricultural South in light of their urban experience, from modifying traditional conjure practices to nurturing jazz innovation (Hajdu, 1996; Glasco, 2004). Most worked long hours at grueling jobs and sought relaxation in their scarce leisure hours. In the 1920s and 1930s, black lodgers lacked domestic space for socializing, and those without families had extra wages to spend. They formed a ready customer base for bars, poolrooms and brothels. For some young women, work in brothels compared favorably with low-paying, repetitive-motion factory jobs.

The longstanding concentration in the Hill District of markets in illicit sex and drugs reflects the neighborhood's location next to the downtown of a world-preeminent manufacturing center. In 1900, when the neighborhood population was still overwhelmingly white, the Bureau of Police cited the Lower Hill District's location adjacent to downtown to explain the large number of brothels there (Selavan, 1971). The neighborhood's growing population of young workers made it a natural site for brothels; so did its separateness from the neighborhoods where customers for the more elite establishments lived their outwardly respectable lives.

Alexander Pittler (1930) developed a set of maps showing the distribution of ‘brothels and assignation houses’, ‘speakeasies, dope dens, and stills’, ‘gambling dens’, ‘pool rooms’ and ‘pawnshops’ in the Hill District. The location of many of these along border streets of the Hill and the designation of some establishments as catering to ‘whites only’ reinforce the likelihood that many of these businesses served a clientele who traveled to the Hill from other neighborhoods. Others, clustered closer to the center of the Hill, provided amusement to the neighborhood's own residents. With drug and alcohol prohibition in force (the latter only until 1933) and prostitution driven to more clandestine forms in response to Progressive Era anti-vice crusades, new social spaces emerged in neighborhoods like the Hill District. Some speakeasies were a transitional form between earlier saloons and beer gardens and the newly emerging cabaret or nightclub. These offered alcohol, risqué entertainment and jazz. Black-and-tan clubs drew integrated audiences, while in some establishments, African Americans performed for all-white audiences (Reckless, 1969). Renowned bands like Duke Ellington's performed in the Hill District, one of the country's major jazz centers. Such venues existed alongside Pittler's dope dens and poolrooms. The concentration of these venues in the Hill District, along with disproportionate rates of crime and disease, added to the neighborhood's growing citywide reputation as a blighted zone. For others it was, in Pittler's words, a ‘zone of anonymity’ where Pittsburghers from all neighborhoods could escape for thrills – and then return to the comparative safety of the neighborhoods where they lived (quoted in Selavan, 1971). This pattern in effect concentrates risks associated with drug use and commercial sex in poor neighborhoods, while they remain diffuse in other parts of the city even as residents from all neighborhoods engage in the risky behaviors (Braine et al, 2008).

These trends notwithstanding, on the eve of World War II, black residents of the Hill lived not just in a poor, overcrowded neighborhood in a city characterized by racial segregation in jobs and housing; they also lived in a neighborhood served by a wide variety of legitimate businesses that provided goods and services and kept cash flowing (Bodnar et al, 1982; Smith, 1993). The decline of manufacturing after World War II and ill-conceived urban renewal projects in the 1950s and 1960s helped transform the Hill into a neighborhood where such problems remained, but legitimate businesses were scarce while drug markets continued to flourish.

The beginnings of deindustrialization in the immediate wake of World War II were visible in declining employment levels in the region, as in other rust belt cities, from the mid-1940s (Sugrue, 1993). Black workers began losing jobs as soon as the war ended, in part because they had less seniority than white workers. Job loss became catastrophic with the collapse of the regional steel industry in the 1970s and 1980s. In the latter decade, Allegheny County lost 5 per cent of its population. Unemployment skyrocketed from 4 per cent in 1970 to 14.9 per cent in 1983; it settled back to between 4 and 5 per cent in 1989. Real wages declined by 15 per cent, and those with low educational attainment were most likely to suffer this decline (Babcock et al, 1998).

The business elites planning urban renewal projects in the 1950s cited crime and disease statistics and falling property values to support the argument that the Hill District's third ward was a blighted neighborhood that would best be razed (Lubove, 1995). In 1950, the census classed two-thirds of the homes in the Hill as ‘dilapidated’ (Bodnar et al, 1982). Urban renewal projects affecting the Hill District included a maze of freeway ramps connecting downtown to the interstate highway system, completed in 1964. However, the most destructive action was the building of the Civic Arena on the land that made up the Lower Hill (Lubove, 1995). About 500 acres were bulldozed, over 400 businesses destroyed and 1551 families displaced (Bodnar et al, 1982). The razing of the Lower Hill destroyed businesses, cultural institutions, social clubs, churches and other foci of neighborhood life. As in other American cities, black neighborhoods bore most of the costs in dislocation and disorganization caused by the urban renewal projects of the 1950s and 1960s. As the regional manufacturing economy collapsed, the population of the Hill District fell by almost 75 per cent, from 38 100 in 1950 to 9830 in 1990. The neighborhood became over 90 per cent black (Lubove, 1995).

The loss of businesses and population, and the increase in empty lots and buildings, left the Hill District devastated. Throughout the second half of the twentieth century, lack of legitimate economic opportunity, loss of population and housing, and increasing social marginalization likely increased the power of underground economies as a source of income and social organization in the Hill. This was the state of the neighborhood when crack arrived in the late 1980s (somewhat later than in coastal cities like New York). The arrival of crack also occurred at a time when federal policies were both increasing levels of inequality in American society and reducing the availability of government services, including drug treatment.

The media responded quickly to the appearance of crack, trumpeting images of violent male crack dealers and sexually voracious female crack whores. However, like most media treatment of worrisome new drug trends, these images portrayed extreme cases as if they were typical (Reinarman and Levine, 1989). Their gendered nature played to mainstream American stereotypes of the underclass. Ethnographers, meanwhile, have developed a more complex view of the world of heavily involved crack users in inner city neighborhoods like the Hill District.

The young Puerto Rican men who sold crack in East Harlem (studied by Bourgois) lived in a neighborhood characterized by longstanding high rates of poverty and unemployment. Poorly educated and culturally isolated, they did not qualify for even low-skilled jobs in a service economy. The retail nature of the crack market – many sales of small units at a low price within the neighborhood itself – created remunerative opportunities for sellers and touts. In the mid-1980s, dealers increasingly recruited juveniles as drug sellers because, if arrested, they faced shorter sentences than adults. Because turf conflicts and other disputes in an illegal market cannot be resolved through contract enforcement by the courts, dealers adopt violence to maintain market control. Thus, the juvenile crack dealers often carried guns. Adolescent spats escalated easily into shootings.

In this world of dealing, with its imperative of violence, these young men enacted a masculine identity that was denied them in the world of legitimate employment. Born a generation earlier, they would likely have worked in factories, where their capacity for demanding physical labor would have satisfied their sense of masculinity. In the 1980s, they were qualified for only the most menial jobs in an economy dominated by finance, insurance and real estate. Moreover, their supervisors in such settings would typically be white women. Those who ventured into this world, where masculine swagger was dysfunctional and where they rarely kept jobs for long, found the experience emasculating. Lacking the legitimate adult roles that would enable them to act the part of father/provider, they were denied legitimate male authority. In this vacuum, they adopted a masculinity grounded in aggression and violence.

Young women who participated in the crack scene did so from a variety of motives, including a search for excitement or a more passive process of seeking identity and belonging (Bourgois and Dunlap, 1993). A persistent market for commercial sex meant young women could earn money as prostitutes, and some who did so plied their trade under the tutelage of more experienced sex workers, learning methods of reducing the risks inherent in selling sex to strangers. Compared to a longstanding pattern in which some prostitutes sold sex for money to support a heroin habit, crack-using prostitutes tended toward a more compulsive pattern in which they traded sex directly for drugs. Smoking a single crack rock produces an intense but brief euphoria followed quickly by craving to repeat the experience – and thus to do what it takes to acquire another rock. Trading sex for crack in a crack house where crack is easily available ensures a steady supply of crack, doled out by the rock in exchange for sex with men visiting the crack house (Ouellet et al, 1993; Ratner, 1993). An addiction to cocaine, taken in the powerful doses delivered by smoking, is a harder habit to manage in any stable way than heroin addiction; in comparison, a dose of injected heroin lasts 4–6 hours, an interval more conducive to exerting some degree of control in interactions with customers.

However extreme their behavior, these young crack-using women longed for conventional lives they had never had access to. Many reported to ethnographers abusive childhoods in disorganized families. Generally, they had never witnessed, let alone enjoyed, family life with two engaged and competent parents who worked in the legitimate economy (Dunlap, 1992; Bourgois and Dunlap, 1993). Like their male counterparts, they acted in desperation to control feelings of powerlessness.

If we take seriously that addiction is a disease, then concentrations of addiction among the disadvantaged and marginal – like disproportionate rates of tuberculosis among the poor and homeless well into the antibiotic age – must be seen as a reflection of the forces that keep certain population groups poor, unemployed and poorly educated (Lerner, 1998). That many diseases concentrate in the poorest, least employed, most nutritionally deprived, and most socially and culturally isolated population groups in American society suggests that it is these conditions, not the power of a drug or a bacterium, that accounts for these patterns. While the chronic relapsing brain disease model offers an explanation of the impact of crack use on individuals, it is less forthcoming on the issue of rates of addiction in specific population groups. The model would posit that a binge pattern of compulsive crack use, over time, would alter neurotransmission patterns in the user's brain. Treatment of some kind would be warranted; NIDA has for a number of years funded research whose goal is a pharmacotherapy for cocaine addiction. Following treatment, these crack users would be expected to face significant challenges in controlling craving and managing relapse risk. Given the cue-conditioned nature of drug craving, the brain disease model would predict that resumed life in the same setting that encouraged crack markets in the first place would mean that relapse risk was high. Thus, any treatment initiative unaccompanied by efforts to address the multiple environmental circumstances that had encouraged crack use in the first place would leave its patients vulnerable.

Meanwhile, direct witnesses to the crack scene learned from it. Like observers of morphine addiction in the late nineteenth century, or of powder cocaine excess in the 1970s, a new generation growing up in the inner city responded to the harms they witnessed by rejecting the risky behavior for themselves. They stigmatized ‘crack heads’, people whose crack use appeared out of control, whom they distinguished from episodic crack smokers or those who smoked crack in combination with marijuana (Furst et al, 1999). Others rejected crack in favor of milder drug choices: marijuana and beer. That they did not also reject involvement in drug markets and the violence they engender speaks to the enduring nature of the problems bedeviling these neighborhoods (Johnson et al, 2006).

Crack threw the nation into one of its periodic drug panics, fueled by sensationalist media coverage. But media frenzy alone cannot explain the grip on the public imagination of a form of drug use engaged in, at its peak, by less than 2 per cent of the American population (Belenko, 1993). Images of violent crack dealers resonated with broader fears of urban crime. Ronald Reagan famously invoked the image of a Cadillac-driving black woman on welfare to justify cuts in federal support of social services; that poor black women smoking crack might be welfare cheats seemed all too plausible to Americans seeking scapegoats for complex social problems. When these women were also having crack-exposed babies, the threat of successive generations on the dole loomed. Two policy responses landed with destructive force on African Americans: attempts to charge parturient women with delivering cocaine to their infants through the umbilical cord, and sentencing disparities for possession of crack versus powder cocaine.

When clinicians began reporting cases of cocaine-exposed newborns born to black women who had smoked crack during pregnancy, panic ratcheted up another notch. Crack babies represented an episode in which early scientific findings, though they were later qualified, drove moral panic. Images of black ‘crack babies’ excited alarmed pity while they portended future dependents on taxpayer dollars. Initial findings suggested that cocaine, when smoked as crack, was uniquely and severely damaging to the fetal brain. But the early studies had not controlled for other relevant factors, including other drug use, poor nutrition and lack of prenatal care. Continued research suggested that cocaine was less damaging to fetuses than alcohol and that babies born to women who had smoked crack during their pregnancies suffered the effects of a range of their mothers’ behaviors and health problems. As Drew Humphries (1999) has shown, media attention in the early 1980s to white women who snorted cocaine while pregnant portrayed them sympathetically as they entered treatment; their babies appeared to suffer no lasting harm. The women arrested in delivery rooms and charged with endangering their infants were almost exclusively poor and black. For these women, the prospect of having children seemed a way to achieve the more conventional life that eluded them. Some reported that they believed crack, rather than harming the fetus, would animate it with the same effects they themselves valued in the drug (Bourgois and Dunlap, 1993).

Similarly, longer sentences for possession of crack than for equivalent weights of powder cocaine contributed to the disproportionate numbers of African Americans incarcerated for drug convictions (Currie, 1993). These periods of incarceration, in turn, further reduced young black men's engagement in education or legitimate employment and deepened their criminal involvement.

Thus, it is important that we study not only drug users, but also those who respond to drug users. These include researchers who study addiction and drug effects; physicians, clinicians, former drug users and others who seek to treat addiction; policymakers who seek to control drug use; the press, who find drug use (and responses to it) compelling subject matter; a more amorphous ‘mainstream public’ that upholds broad norms of acceptance and condemnation of specific drugs and forms of use; and moral entrepreneurs (Becker, 1963). To the extent that responses to epidemics of drug-related harm neglect to address causal conditions or even exacerbate those conditions, they have contributed to marginalization and social exclusion. Continued marginalization and exclusion, in turn, create the conditions for future epidemics of drug-related harm.

Conclusion

Neuroscience research has been well funded over the last few decades. As I write this, today's New York Times carries a major story trumpeting the elucidation of an endogenous compound's role in forming memory (Carey, 2009). Typical of science reporting, the story emphasizes the scientific excitement of the findings and offers the tantalizing prospect of a cure for addiction from a drug that might selectively erase memories bound up in conditioned drug use cues. Yet these findings are based solely on rat studies and the prospect of chemical manipulation of memory raises complex ethical and clinical questions. As David Courtwright has noted (2010), neuroscience has yet to produce dramatically new pharmacotherapies for addiction.

Nonetheless, the American populace exerts unrelenting pressure on scientists to produce socially or economically useful results. Our national system for funding biomedical research through the National Institutes of Health has succeeded in balancing political interests (the public's desire for medical breakthroughs, fueled by memories of antibiotics and by technological optimism) against scientists’ desire to pursue questions of scientific interest while maintaining professional autonomy. Thus, Congress appropriates funds that are given to disease-named institutes; then scientists decide which investigator-initiated projects to fund through a process of peer review (Stokes, 1997). NIDA-funded research on the effects of drugs on the brain falls within this model. Studying drug effects is yoked to one of the most exciting areas of biological research, neuroscience. Indeed, the prestige of neuroscience was such that the 1990s were designated the Decade of the Brain. But it is the hope for addiction cures that keeps the funds flowing to support this research.

Meanwhile, harm reduction takes science seriously and takes it to the street. Harm reduction gained traction in the context of needle exchange programs. Needle exchange activists repeatedly cited scientific evidence in support of the value of needle exchange in reducing the spread of blood-borne infectious disease. Harm reductionists also challenged dominant images of drug users as inherently self-destructive; they overturned a longstanding therapeutic orthodoxy that drug users could not benefit from services unless they first entered treatment for their drug use. They acted from the premise that injection drug users care about their health and that they deserve basic services in the prevention of infectious disease. Harm reductionists thus take drug users seriously as human beings. And needle exchange programs have proven to be an important source of referrals to drug treatment and to other social services as well as a platform for overdose prevention.

Andrew Weil (1972) called the desire to alter consciousness a drive. We can accept or reject that theoretical statement, but empirically it is clear that, simply put, people use drugs. Policymakers tend to underestimate the power of social norms to constrain and moderate drug use; they tend to over-rely on criminal sanctions. But, as David Courtwright (2001) and David Musto (1991) have shown, in the first decades of the twentieth century, after the period of widespread prevalence among the middle classes, use of both opiates and cocaine waned; it became concentrated in specific marginal groups, and even this cohort did not effectively replace itself in the 1920s and 1930s. This downturn in prevalence predated the nation's enactment of drug prohibition.

The pharmaceutical industry will surely continue to spawn arrays of new psychoactive drugs. People will continue to explore new ways to get high, setting off new drug eras and, in some cases, new epidemics of drug-related harm. In the sweep of historical time, we are less than 200 years into the pharmaceutical era and still grappling with how best to manage this challenge. We need better models than panic for dealing with new patterns of drug use. May the social learning continue.