This essay appeared as “Le Bogue, petite peur de l’an 2000” in Le Monde diplomatique 545 (August 1999), p. 11. LMD syndicated some pieces, so it was also translated into several other languages — Greek, Italian, Spanish, Serbian, and maybe more. This text is the English draft as submitted.
Millennial Bug
Since popular coverage of the “year 2000” or “Y2K” problem began two years ago (the thousand-day countdown was widely noted, with little irony), the media have insistently presented the issue as technical in nature. Time and again we are told that early programmers, constrained by scarce computational resources, were forced to shorten years from four digits to two, and that this technique subsequently became a programming convention. While this explanation is true, it is far from complete; most notably, it fails to account for the problem’s larger, cultural background — why, of all possible resource-saving techniques, this one seemed more “natural” than others. By ignoring these aspects, we run the risk of missing the lessons that a broader understanding of Y2K might offer.1
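To make the convention’s failure mode concrete, here is a minimal sketch, written in present-day Python rather than the COBOL or assembler of the period, with every name invented for illustration: arithmetic on two-digit years works invisibly for decades, then produces absurd results at the century boundary.

```python
# Illustrative sketch only: how "YY" arithmetic breaks at the century.

def years_elapsed(start_yy: int, end_yy: int) -> int:
    """Elapsed years as a program storing two-digit years computes them."""
    return end_yy - start_yy

# Within the twentieth century the shortcut is invisible:
assert years_elapsed(65, 99) == 34    # 1965 to 1999

# Across the rollover it fails: 1965 to 2000 is 35 years,
# but 0 minus 65 yields a negative, nonsensical figure.
print(years_elapsed(65, 0))           # -65
```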
Not surprisingly, popular presentations of the technical basis of Y2K problems are simplistic. Most reports treat data as self-evident — “strings” of numerical and alphabetical text such as “1 January 2000”. In fact, data can be structured in many ways: Morse code dots and dashes, rotary telephone clicks, and Braille are all ways to encode numbers. Computing systems use many more data types, and it’s quite common for programs to “coerce” one data type into another — for example, by expanding the short numerical string “01012000” into the longer alphanumeric string “1 January 2000,” or by temporarily substituting an arbitrary value for a longer one (“0” for “19”, “1” for “20”, “2” for “21”, etc.). Thus, simply omitting two digits was only one among many possible ways to conserve computational resources.
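A brief sketch may clarify these coercions; the single-digit century scheme below follows the substitution just described (“0” for “19”, “1” for “20”), though the function names and details are hypothetical rather than drawn from any actual system:

```python
# Illustrative coercions between date encodings; all names are invented.

MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def expand_ddmmyyyy(s: str) -> str:
    """Coerce the short numeric string '01012000' into the longer
    alphanumeric string '1 January 2000'."""
    day, month, year = int(s[0:2]), int(s[2:4]), s[4:8]
    return f"{day} {MONTHS[month - 1]} {year}"

def compress_century(year: str) -> str:
    """Substitute one arbitrary digit for the century: '0' for '19',
    '1' for '20', '2' for '21', saving a character per date."""
    return str(int(year[:2]) - 19) + year[2:]

assert expand_ddmmyyyy("01012000") == "1 January 2000"
assert compress_century("1999") == "099"
assert compress_century("2000") == "100"  # centuries stay distinguishable
```

Note that the substitution scheme still records the century, in a single character, and so would have crossed the year 2000 unharmed; it is exactly the kind of alternative that the two-digit convention displaced.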
When a typical mid-1960s computer was limited to processing records of no more than 72 characters (several of which were needed to verify that programs were functioning correctly), programmers faced a choice: omit the repetitive “19” from year designations, or omit other information that could be crucially unique — retaining, say, only a person’s first initial rather than the first three letters of the name. In such a context, scarcity was a very concrete problem. But those conditions have long since passed; processing power has expanded at a phenomenal rate. A popular axiom of the computing industry, “Moore’s Law” — named for Intel co-founder Gordon Moore and formulated in 1965 — holds that the number of components on a chip, a rough proxy for computational power, doubles every year or two. And it’s quite normal to hear claims — not without merit — that the ubiquitous computing of the “information age” is transforming society in ways we can only begin to imagine. In light of such radical changes, one can only be skeptical of accounts of Y2K that place too much emphasis on the technical constraints of decades ago.
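The arithmetic of that scarcity is easy to reconstruct. In the hypothetical 72-character record below (every field width is invented for illustration), writing two dates as YYMMDD rather than YYYYMMDD frees exactly four characters, enough to keep three letters of a surname plus an initial instead of a bare initial:

```python
# A hypothetical fixed-width record under a 72-character limit.

FIELDS_SHORT_YEARS = {
    "name": 24,        # room for, e.g., "SMI J": three letters plus initial
    "birth_date": 6,   # YYMMDD, the "19" omitted
    "hire_date": 6,    # YYMMDD
    "account_no": 12,
    "balance": 12,
    "status": 4,
    "sequence": 8,     # control data to verify correct processing
}

# The same record with full years must surrender those four characters
# somewhere else; here, the name field shrinks to compensate.
FIELDS_FULL_YEARS = dict(FIELDS_SHORT_YEARS,
                         name=20, birth_date=8, hire_date=8)

assert sum(FIELDS_SHORT_YEARS.values()) == 72
assert sum(FIELDS_FULL_YEARS.values()) == 72
```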
Programmers, in any case, are quite familiar with data sources that behave in very different ways: temperatures are in constant flux, for example, whereas surnames rarely change. Dates are noteworthy in this regard because each component functions differently: days and months are cyclical, whereas years increment cumulatively. In fact, each new decade involves a “rollover” like Y2K, only of a lower order of magnitude (1979–1980, 1989–1990, etc.). Given that error-handling routines to detect and circumvent false or absurd results are standard programming practice — and given the mind-boggling complexity and mass of the data and procedures that computers and networks have routinely handled over the last decades — if innumerable programmers worldwide failed to anticipate and prepare for Y2K, it would be naive to view their mistakes as merely individual errors or a “technical” oversight of convention.
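Indeed, the standard repertoire already contained remedies. The sketch below shows one such technique, “windowing,” which interprets a two-digit year against a pivot instead of assuming a “19” prefix; the pivot value and the names here are hypothetical, but the approach itself was a common Y2K remediation:

```python
# Illustrative windowing with a routine sanity check; the pivot is arbitrary.

PIVOT = 50  # two-digit years below 50 read as 20xx, the rest as 19xx

def window_year(yy: int) -> int:
    """Interpret a two-digit year relative to a pivot."""
    if not 0 <= yy <= 99:
        raise ValueError(f"absurd two-digit year: {yy}")  # standard guard
    return 2000 + yy if yy < PIVOT else 1900 + yy

assert window_year(99) == 1999
assert window_year(0) == 2000    # the century rollover, handled
assert window_year(79) == 1979   # decade rollovers need no special case
```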
What is less naive are the moral anxieties expressed in these media stories: the dangers of choices determined by conditions of scarcity, and of reducing social relations and exchanges to inflexible technical systems. But to admit that these are the issues Y2K poses would involve questioning the ways in which computer systems are deployed and the interests they serve — and perhaps even questioning the wisdom of equating social progress with technical progress. The pivotal role that computers have played in recent economic acceleration discourages such questions. It’s simpler to engage in vague scapegoating, pointing fingers at past culprits rather than considering how we can consciously shape our future; and it’s simpler to assume that threats are posed by specific technical flaws rather than by the total operations of these systems.
In place of sustained analyses of this kind, though, media reports hint darkly at Y2K “nightmare scenarios” — the threat that more or less random computer systems might fail simultaneously throughout communications, automation, and record-keeping systems worldwide. In fact, these systems fail all the time, and there would be nothing especially ominous or “simultaneous” about a list of systems that were malfunctioning at, say, 7:32 PM GMT on 3 July 1999. The extraordinary significance we attribute to Y2K has less to do with unknown objective failures than with the moment itself — the turn of the millennium, which inaugurates the “twenty-first century,” for decades the trope par excellence for the Future. Massive Y2K failures threaten that the utopian promises of a technologically progressive world will collapse into dystopian chaos: that countless “dynamic” mechanisms — each of which supports a specific social context — could be surreally frozen, their operations exposed to scrutiny as people find other ways to perform these functions. Thus, to speak of Y2K as an objective, technical issue is misleading, for it suggests that these technologies form a counterworld divorced from the institutions and relations they pervade. One could persist in this illusion if these systems continued to function perfectly; but the alpha and the omega of technology is society, not an imaginary cybernetic counterworld, and their failures will be measured above all in social costs. Consequently, though the proximate cause of Y2K problems may be scarce computational resources and ubiquitous technical functions, the threat the bug poses is that these systems’ failures will bring about widespread scarcity of basic resources and ubiquitous social dysfunction.

In this respect, the forms that Y2K remediation efforts take are telling. The average person can do little to ensure that the distant computer systems he or she relies on will function smoothly, so responsibility falls to the technocratic elites. Many media reports have focused on how these high-level efforts are proceeding — often with dismal results. For example, in late 1998 the U.S. Defense Department reported that the Defense Special Weapons Agency had certified most of its computer systems without performing the required tests on them; nevertheless, the officer in charge of the agency’s Y2K programs asserted that by April 1999 the systems would be “100% in compliance” and added, “I have a good feeling about Y2K in this agency.”2 And in January 1997 — too late for major concerns to repair up to tens of thousands of interrelated systems — the staff of the Office of the Independent Counsel responsible for investigating President Clinton was ten times larger than that of the President’s Council on Year 2000 Conversion.3
Given such reports, it is no surprise that many might take measures to protect themselves. But with few exceptions, politicians and pundits have derided popular preparations as “panic” or “hysteria.” For example, in December 1998 U.S. Senator Robert Bennett warned that “even if the Y2K problem is solved, the panic…can end up hurting us as badly.”4 Among the anticipatory problems often cited are hoarding, bank runs, and resistance to tax or debt obligations; and in the event that Y2K problems do prove serious, many governments expect civil unrest. Almost invariably, these specters are presented as the actions of ill-informed, irrational masses; but the elites are hardly immune to such anxieties. The U.S. Federal Reserve, for example, is printing an additional $50 billion in currency, raising the total amount in circulation by roughly 30 percent. And in January, the Global 2000 Coordinating Group, a consortium of hundreds of financial organizations, announced it would release a report rating various countries’ Y2K preparedness; within weeks it reversed itself for fear that the report would precipitate national-scale disinvestments with effects far more devastating than the problems the report warned about.5 This about-face was only a very visible instance of a wider reluctance to treat Y2K issues openly, evident in closed-door legislative hearings, regulatory suppression of preparedness assessments, secrecy surrounding executive preparations for unrest, and edgy attempts at intermilitary cooperation.
Those vested with the responsibility of minimizing Y2K problems face a conflicting mandate. They need to mobilize people and resources to solve the technical problems in their purview, but discreetly, without causing alarm; and, in a period when financial returns are reaching unparalleled highs in many sectors, the expenses incurred in Y2K upgrades buy merely the ability to function normally. These contradictions play out in other ways, too: some hardware and software manufacturers see Y2K upgrades as a source of huge profits and find little incentive to support products already sold;6 other manufacturers quietly anticipate steep declines in revenues as buyers postpone expenditures lest disaster strike. The recent legislation passed by the U.S. Congress to limit manufacturers’ liability for Y2K-related problems only exacerbates these tendencies — encouraging profiteering on the part of vendors protected by liability limitations, on the one hand, and fiscal caution on the part of companies threatened by Y2K problems, on the other. In short, the mere anticipation of Y2K problems has itself become an “event,” exaggerating various structural tensions. If, as is widely predicted, less-developed countries fare badly owing to their reliance on older computer systems and “pirated” software, these tensions will very likely erupt on the level of global politics as well.
Clearly, then, “Y2K” is not limited to objective failures; on the contrary, uncertainty about what will happen has become a pervasive social logic, and it will surely intensify as the year passes. And just as these forces precede the “event,” they will follow it — for years, as the fallout of failures works its way through economic, political, and judicial structures. Though we cannot know what forms these consequences will take, it seems certain that the repercussions will be worse for some institutions than for others, because popular sentiments will necessarily play a large part in shaping what happens. Ultimately, these repercussions may prove to be a measure less of objective failure than of the social failures they make manifest. Put simply, those institutions seen as having used technical systems as instruments of domination and expropriation may fare badly.
It is in this regard — the ways in which objective and subjective Y2K problems are being resolved by social rather than technical means — that the millenarian irony of the year 2000 reveals itself. “Modern” society fancies itself and its future as rationally oriented, and its technical achievements as marking decisive breaks from the “irrational” past. In our haste to declare ourselves modern, postmodern, hypermodern, and beyond, though, we have sought to consign apocalypticism — the belief in a decisive, revelatory break in the social fabric — to prehistory. In doing so, we have failed to recognize how adaptable the idea of a decisive moment can be — in particular, how the moral discourses embedded in it do not rely on the divinity we supposedly left behind. Our fanatical pursuit of novelty has expressed itself in ever-telescoping periodizations, in which centuries-long eras have given way to pseudo-decades and “generations” of only a few years. Small wonder that in such a milieu we should blithely assume that years could be abbreviated to two digits without consequence. But it is precisely this omission that ensures that our passage into the twenty-first century threatens a technical “apocalypse,” marked by a resurrection of the past. Countless apocalypses have been predicted before, each expressed in terms of a society’s anxieties about its failure to live up to its ideals. Apocalypses never arrive; but by providing a point in time for considering failed promises, they tend, in departing, to leave societies changed, sometimes very radically.
Notes
1. Discussions of the problem began to appear on a regular basis in technical journals such as the Communications of the Association for Computing Machinery in the early 1990s.
2. USA Today, Nov. 1998, pp. 27–29.
3. Capers Jones, “The Global Impact of the Year 2000 Software Problem” (Burlington, MA: Software Productivity Research, 23 Jan. 1997).
4. Declan McCullagh, “Bankers: Prepared for a Panic?”, Wired News (3 Dec. 1998).
5. Barnaby J. Feder, “Group Rethinks Publicly Rating 30 Nations’ Year 2000 Readiness,” New York Times (27 Jan. 1999).
6. In “Microsoft Releases Year 2000 Product Ratings,” ZDNet (15 Apr. 1998), Mary Jo Foley reported that only one-third of Microsoft’s top software products were Y2K compliant. Indeed, a vast amount of computing hardware (for example, Intel 386- and 486-chip–based personal computers) will suffer serious or terminal problems.