HCI in the Global Knowledge-Based Economy: Designing to Support Worker Adaptation

KIM J. VICENTE
University of Toronto

Increasingly, people are being required to perform open-ended intellectual tasks that require discretionary decision making. These demands require a relatively unique approach to the design of computer-based support tools. A review of the characteristics associated with the global knowledge-based economy strongly suggests that there will be an increasing need for workers, managers, and organizations to adapt to change and novelty. This is equivalent to a call for designing computer tools that foster continuous learning. There are reasons to believe that the need to support adaptation and continuous learning will only increase. Thus, in the new millennium, HCI should be concerned with explicitly designing for worker adaptation. The cognitive work analysis framework is briefly described as a potential programmatic approach to this practical design challenge.

Categories and Subject Descriptors: H.5.2 [Information Interfaces and Presentation]: User Interfaces—User interface management systems (UIMS); H.5.3 [Information Interfaces and Presentation]: Group and Organization Interfaces

General Terms: Human Factors

Additional Key Words and Phrases: Knowledge-based economy, adaptation, cognitive work analysis

This article is largely based on portions of a recently published research monograph [Vicente 1999]. This research was sponsored in part by a research grant from the Natural Sciences and Engineering Research Council of Canada.

Author’s address: Cognitive Engineering Laboratory, Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, Ontario M5S 3G8, Canada; email: benfica@mie.utoronto.ca; http://www.mie.utoronto.ca/labs/cel/.

1. INTRODUCTION

Where is the field of HCI going in the new millennium? Any answer to this question is bound to be a personal one, so I will begin by laying my intellectual cards on the table. For the past 13 years, I have been conducting research on how to support workers in complex sociotechnical systems by designing better computer-based tools (e.g., Vicente [1990; 1996; 1999] and Vicente and Rasmussen [1990; 1992]). These application domains (e.g., process control plants, aviation cockpits, engineering design, and medicine) are somewhat different from the domains with which most HCI researchers have been concerned.
More specifically, complex sociotechnical systems tend to have many, although not all, of the characteristics listed in Table I. And because different types of problems require different types of solution methods, the work analysis techniques that are suitable for complex sociotechnical systems need to be somewhat different from those in the toolkit of most HCI researchers and designers. The theses of this article are that problems of this type will become more prevalent in the new millennium, and that they will cause HCI to be increasingly concerned with designing for worker adaptation. These points are best made by example.

Table I. Characteristics of Complex Sociotechnical Systems. See Vicente [1999] for a detailed account.
(1) large problem spaces
(2) social
(3) heterogeneous perspectives
(4) distributed
(5) dynamic
(6) potentially high hazards
(7) many coupled subsystems
(8) significant use of automation
(9) uncertain data
(10) mediated interaction via computers
(11) disturbances

2. CASE STUDY: HEDGE FUNDS IN AUGUST, 1998

In August of 1998, the world financial markets experienced a severe setback. One of the interesting stories to emerge from this event was the catastrophic losses experienced by hedge funds led by Wall Street’s “rocket scientists.” These funds are based on very complex computer-aided trading strategies and, thus, are a fascinating (if esoteric) example of HCI in complex sociotechnical systems. The following account of what happened with these hedge funds is based on the insightful article by Coy et al. [1998].

What Are Hedge Funds? Hedge funds are a high-tech form of financial investment that relies very heavily on quantitative computer models to make trading decisions. Hedge funds are unusual because they are purported to be “a clean, rational way to earn high returns with little risk” [Coy et al. 1998, p. 116]. They are based on a sophisticated arbitrage strategy that is intended to be “market-neutral,” meaning that the funds are designed to make money regardless of whether prices are falling or rising. Before August, 1998, hedge funds lived up to these claims. For example, one hedge fund tripled in value between March of 1994 and December of 1997. Moreover, the fund had never lost more than 3% of its value in any one month.

At first glance, hedge funds seem like the financial holy grail—excellent returns with minimal risk. How is this possible? The computer models are constructed using historical patterns based on very large amounts of data from many years of market behavior. These historical patterns define a referent for expected market behavior. When prices move outside of the normal relationships defined by this referent, a signal is sent to make a trade, the expectation being that prices will revert to their historical patterns. In addition, other (hedge) trades are made to protect against the anticipated risks that may accompany the initial trade. These investment strategies usually take advantage of very small price discrepancies. As a result, enormous investments are required to generate significant returns. This, in turn, has led to an increasing need for borrowed money to be used as leverage in the trades.

This brief description should make it clear that hedge funds are a very sophisticated form of investment. The highly paid scientists who have developed these models are frequently referred to as “rocket scientists” or “quants” (short for quantitative), and they include two Nobel Laureates in economics among their number.
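To make the mechanism concrete, the following minimal sketch (in Python) illustrates the kind of reversion-to-referent logic described above. It is not drawn from Coy et al. [1998] or from any actual fund’s models; the instruments, prices, and the two-standard-deviation threshold are all hypothetical. A historical relationship between two related instruments defines the referent, and a trade signal is generated only when the current spread strays too far from it.

```python
# Illustrative sketch only: a toy "reversion to historical referent" signal.
# All instruments, numbers, and thresholds are hypothetical.
from statistics import mean, stdev

def trade_signal(history_a, history_b, price_a, price_b, threshold=2.0):
    """Compare today's spread between two related instruments against
    the historical referent and suggest a market-neutral trade."""
    spread_history = [a - b for a, b in zip(history_a, history_b)]
    referent = mean(spread_history)        # expected relationship
    spread_sd = stdev(spread_history)      # normal variability
    z = (price_a - price_b - referent) / spread_sd

    if z > threshold:                      # spread unusually wide
        return "sell A, buy B (expect reversion toward the referent)"
    if z < -threshold:                     # spread unusually narrow
        return "buy A, sell B (expect reversion toward the referent)"
    return "no trade"

# Example with made-up prices: the spread has historically hovered near 2.0.
hist_a = [10.0, 10.2, 10.1, 10.3, 10.2, 10.4]
hist_b = [8.0, 8.1, 8.1, 8.2, 8.2, 8.3]
print(trade_signal(hist_a, hist_b, price_a=11.5, price_b=8.3))
```

The point of the sketch is not financial realism but the structure of the strategy: the model embodies assumptions about how the spread “normally” behaves, and, as the events described next show, when those historical assumptions break down the automated signal has no way of knowing.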
What Happened to the Hedge Funds in August, 1998? Given their success, hedge funds appeared to be impervious to economic disturbances. However, in August of 1998, the claim that high returns could always be obtained with minimal risk was shattered. For example, one hedge fund lost 44% of its value (US $2 billion) in one month! For some, the losses were worse than those experienced during the 1987 market crash. As one hedge fund newsletter put it at the time, there is a “breakdown in market structure...something highly unusual is happening” [Coy et al. 1998, p. 118]. Financial institutions using similar arbitrage strategies also lost a great deal of money during the same period. For example, Salomon Smith Barney Holdings lost US $300 million, and Merrill Lynch lost US $135 million.

Why Did It Happen? How could a seemingly iron-clad investment strategy lead to such catastrophic failure? Coy et al. [1998] identified a number of related causes:
(a) the assumptions that were embedded in the computer models were violated;
(b) there was a breakdown in the historical patterns;
(c) the computer models were “black box” models that were making automated decisions without the real-time input of seasoned traders;
(d) multiple, unanticipated events occurred all at the same time (e.g., Russia experienced a setback on its road to capitalism; the Asian financial crisis worsened);
(e) many people were making the same kind of bets, and money was being borrowed heavily to finance the bets, thereby amplifying the losses.

As one financial analyst put it, “What occurred...was the financial world’s equivalent of a ‘perfect storm’—everything went wrong at once” [Coy et al. 1998, p. 117].

Generalizing the Lessons Learned. What does any of this have to do with HCI? After all, there is no mention of menus, mice, windows, navigation, or the WWW. If we define the field of HCI narrowly as being solely concerned with people interacting with computers and usability as the only criterion, then there is no lesson to be learned from this case study. However, if we define HCI more broadly as also being concerned with people interacting through computers with a complex world teeming with novelty and change, and with usefulness as an important criterion, then there are very important lessons to be learned from this case study. In fact, for those who are familiar with the details of large-scale industrial disasters, such as the Three Mile Island nuclear power plant incident, the lessons are very familiar ones (cf. Perrow [1984], Reason [1990], and Leveson [1995]).

Complex sociotechnical systems are open systems, meaning that they are subject to disturbances that are unfamiliar to workers and that were not, and in some cases could not have been, anticipated by designers. These unanticipated events can range from the catastrophic—such as the multiple plant failures at Three Mile Island or the multiple financial disturbances during August of 1998—to the mundane. As an example of the latter, Norros [1996] conducted a field study of flexible manufacturing systems, and found that workers had to cope with an average of three disturbances per hour (i.e., events during which the system was functioning in ways that were not anticipated by designers). Because these events are unanticipated, the procedures or automation that designers provide will—by definition—not be directly applicable in these cases. Therefore, to deal with these disturbances effectively, workers must use their expertise and ingenuity to improvise a novel solution. In complex sociotechnical systems, the primary value of having people in the system is precisely to play this adaptive role.
Workers must adapt on-line in real time to disturbances that have not been, or cannot be, foreseen by system designers. Playing the role of adaptive problem solver to cope with unanticipated events is a challenge. If we expect workers to play such a role effectively, then we should provide them with the appropriate support rather than just expect them to play this role on their own, unaided. Specifically, we need to design computer tools that are tailored to help workers perform open-ended intellectual tasks that involve discretionary decision making. In the case study above, this would involve providing financial analysts with tools that would allow them to take advantage of their domain knowledge to improvise a solution to an unfamiliar and unanticipated set of events like the one that occurred during August of 1998.

Note that this conclusion generalizes well beyond the details of this particular case study. I believe that, in the future, HCI will have to be increasingly concerned with systematically designing tools that deliberately support adaptation. The rationale for this claim is described next.

3. THE GLOBAL KNOWLEDGE-BASED ECONOMY AND THE DEMAND FOR ADAPTATION

Everywhere we look in the media, we see a number of buzzwords appearing with increasing frequency, such as: innovation, knowledge worker, wealth creation, flexible manufacturing, free trade, and global village. What do these terms have in common, and what is their significance for HCI? These terms are symptomatic of a fundamental shift toward a global knowledge-based economy that will lead to an increasing demand for workers, managers, and organizations to adapt in the future.

3.1 The Global Knowledge-Based Economy

There are a number of trends that have acted together to transform qualitatively the nature of contemporary industry and economics. These changes, in turn, pose a new set of requirements for success in many workplaces. Brzustowski [1998], the President of the Natural Sciences and Engineering Research Council of Canada, has identified the following contributors:

—The market is full of products based on recent advances in science and technology
—These products are made all over the world, regardless of brand name
—High quality is expected in the market, and is frequently being achieved
—Today’s new high-tech products become tomorrow’s commodity products
—The market for medium and low-tech goods is fiercely competitive
—Commodity prices are low, and distance is not a factor even for bulk materials
—Everybody has to compete on price, so productivity improvement is key
—Changes in demand and market conditions can be rapid and unpredictable
—New knowledge appears in new goods and services at an increasing pace.

According to Brzustowski, these factors have led to a global knowledge-based economy that is qualitatively different from the economy of the past. The speed and connectedness of the global knowledge-based economy are well illustrated by Motorola’s manufacturing of electronic pagers [Morone 1993]. Seventeen minutes after an order for a pager is received from anywhere in the United States, a bar code is placed on a blank circuit board in a factory. Within 2 hours of the order, a finished product is shipped, even
Davis and Meyer [1998] provide additional evidence for the qualitative changes represented by the global knowledge-based economy. What implications do these changes have for success in the workplace and, thus, for HCI? 3.2 The Future Demand for Adaptation I believe that the trend toward a global knowledge-based economy will increase the need for adaptation. Workers, managers, and organizations will all have to become more flexible and adaptive than in the past. There are a number of arguments from a diverse set of sources that can be put forth in support of this claim. Workers. The U.S. National Academy of Science/National Research Council Committee on Human Factors issued a report a few years ago, documenting what it believed were the most fertile areas for human factors research in the next few decades [Nickerson 1995]. The report was written by a diverse group of experts, spanning the entire range of human factors, from physical to psychological to social-organizational aspects. One of the important points made in the report is that computerization will continue to change the nature of work. “Technological change means changes in job requirements. The ability to satisfy changing, and not entirely predictable, job requirements in a complex, culturally diverse, and constantly evolving environment will require a literate workforce that has good problem solving skills and learning skills” [Nickerson 1995, p. 22]. Consequently, “a critical aspect of industrial competitiveness will be the ability to adapt quickly to rapid technological developments and constantly changing market conditions” [Nickerson 1995, p. 42]. Therefore, in the future, the work force will have to be more versatile and adaptive than in the past. In his recent monograph on the interaction between engineering and society, Pool [1997] reaches essentially the same conclusion but from a somewhat different path. By reviewing the influence that society has had on technology, Pool noted that there is an increasing social need to “do more with less.” This trend will only increase with the demands imposed by the global knowledge-based economy. Because of the premium that is put on increasing efficiency and providing new functionality, engineering systems have become—and will continue to become—more complex. This increase in complexity has had an unintended result, namely a commensurate increase in unanticipated events. In other words, as systems become more complex, they become more open. Unanticipated disturbances are bound to occur. Thus, there is a greater need for worker and organizational adaptation than in the past. By inference, we can expect that the requirement to support adaptation will increase accordingly. Cannon-Bowers et al. [1997] describe an excellent example of the trend identified by Pool [1997]. Because of a tremendous reduction in operating budget, the U.S. Navy is under pressure to greatly reduce costs. As a result, the Navy has set the goal of reducing the manning level on a future generation of ships from the current level of 350 workers to an envisioned ACM Transactions on Computer-Human Interaction, Vol. 7, No. 2, June 2000. HCI in the Global Knowledge-Based Economy • 269 level of 95 workers. Clearly, this is a very ambitious target, but the magnitude of the design problem is compounded by the fact that the missions that such ships are expected to play in the future will increase in complexity. There will be more missions. They will be more varied in nature, and their nature will be highly unpredictable. 
Therefore, to meet the challenge to do “more with less” effectively, there will be a greater need to design tools that help workers perform open-ended intellectual tasks that involve discretionary decision making.

Managers. Adaptation is relevant, not just to workers, but to managers as well, as evidenced by Morone’s [1993] fascinating field studies. He studied corporate general managers who have been successful in turning their companies (Motorola, Corning, and General Electric) into global leaders in high-technology markets. Such markets are particularly affected by the symptoms accompanying the trend toward a global knowledge-based economy, exhibiting a high degree of volatility and uncertainty. Morone learned that successful managers adapted to all of this turbulent change to keep their companies competitive. As one manager put it, “You need a strategic intent—but within that context, you have to be totally opportunistic.... You can’t know what’s around the next corner, so construct an organization that is able to adapt” [Morone 1993, p. 119]. In another part of his book, Morone observed that some of the successful businesses he studied “followed the general course that had been hoped for, but the specific form they took, the specific market and technological developments to which they had to respond as they followed that general course, were not, and could not have been, anticipated” [Morone 1993, p. 190]. Thus, yet again we see the need to adapt to unanticipated events. Finally, another manager pointed out: “A lot of people think of product development as involving a lot of planning, but... the key is learning, and an organization’s ability to learn” [Morone 1993, p. 224].

The high-tech markets studied by Morone have been particularly volatile in the past, and show every sign of continuing to be so. However, as the trends identified by Brzustowski [1998] affect different sectors of the economy, we can expect to see the same strong need for managers to adapt to uncertainty and novelty in other industries as well.

Organizations. Adaptation is also relevant at an organizational level. In his national bestseller, Senge [1990] discussed the importance of organizational learning in the global knowledge-based economy. In many cases, traditional ways of governing and managing have become outdated and are breaking down because of the changes documented by Brzustowski [1998]. The static, hierarchical organizational structures that have dominated in the past are no longer as appropriate, given the current pace of change. According to Senge, one path that modern organizations can adopt in order to succeed and survive in this environment is to learn to accept, embrace, and seek change. This, in turn, would require a decentralized organizational structure whose individuals are committed to continuous learning and adaptation to novelty.

More recently, Davis and Meyer [1998] have identified a similar set of corporate requirements. Because the pace of change has accelerated, the rules that guided corporate decision making in the past are no longer as reliable. As a result, companies might have to give up on the idea of stable solutions to business problems. Instead, they may have to move from relying on prediction, planning, and foresight to building in flexibility, speed, and self-organization.
In short, “an enterprise must adapt to its environment....It needs to be every bit as adaptable as the economy in which it participates” [Davis and Meyer 1998, p. 114].

Summary. The global knowledge-based economy will continue to transform the landscape of modern work. Changing and unpredictable circumstances will be the norm. As a result, there is an increasing need for workers, managers, and organizations to become more flexible and adaptive than in the past. These trends suggest that the requirement to design for adaptation will only increase in the future.

3.3 The Relationship between Adaptation and Learning

Along with the buzzwords identified earlier, we also frequently find another set, such as: life-long learning, continuous improvement, and learning organization. How does continuous learning fit into the picture I have been drawing? So far, I have argued that the global knowledge-based economy has created a strong need for adaptation to change. In essence, this is equivalent to a call for learning to learn. By adapting to disturbances, workers are, in effect, engaging in opportunities for learning. Hirschhorn made this point in the context of process control: “Each time operators diagnose a novel situation, they become learners, reconstructing and reconfiguring their knowledge” [Hirschhorn 1984, p. 95]. The connection between adaptation and learning goes well beyond process control, however. Related conclusions have been reached in other research areas, such as psychology, control theory, and cognitive science (e.g., Narendra [1986], Gibson [1991], Johannson [1993], and Norros [1996]).

Nowhere is the robust relationship between adaptation to novelty, action variability, and learning opportunities as well thought out as in the study of human motor control in ecological psychology, thanks to the seminal work of Nicholai Bernstein. This relationship was clearly expressed in a book written approximately 50 years ago but published only much more recently [Bernstein 1996]. Bernstein was concerned with a very different problem, so his terminology is different from that used here. For instance, he uses the term “dexterity” to refer to the capability to find “a motor solution for any situation and in any condition” [Bernstein 1996, p. 21]. In the terms of this article, he is referring to the capability to adapt to unanticipated demands. According to Bernstein, the need for dexterity (i.e., adaptation) increases under the following conditions: (a) the problem to be solved becomes more complex; (b) the problem to be solved becomes more variable; and (c) the number of unique and unexpected problems that need to be solved in real time increases.

Although the content is obviously different for motor control, these generic characteristics are surprisingly similar to those that are associated with the global knowledge-based economy and complex engineering design projects. Demands on workers are becoming more complex; those problems take different forms, rarely repeating themselves; and there is an increasing need to deal with novel, unanticipated situations in a timely fashion. Just as these characteristics lead to an increase in the need for dexterity in motor control, they also lead to an increase in the need for adaptation in complex sociotechnical systems. These similarities establish the connection between motor dexterity and worker adaptation.
How does the relationship with learning fit in? The answer to this question can be found in the rationale behind Bernstein’s beautiful phrase, “repetition without repetition” [Bernstein 1996, p. 204, emphasis in original]. Allowing workers to play the role of adaptive actors means that they will repeatedly have to generate a solution to a problem, rather than following a prepackaged procedural solution. As Bernstein pointed out in the context of motor control, “during a correctly organized exercise, a student is repeating many times, not the means for solving a given motor problem, but the process of its solution, the changing and improving of the means” [Bernstein 1996, p. 205, emphasis in original]. But because workers are solving the problem anew each time, opportunities for learning are created. This process is explained by the following extended quotation:

Repetitions of a movement or action are necessary in order to solve a motor problem many times (better and better) and to find the best ways of solving it. Repetitive solutions of a problem are also necessary because, in natural conditions, external conditions never repeat themselves and the course of the movement is never ideally reproduced. Consequently, it is necessary to gain experience relevant to all various modifications of a task, primarily, to all the impressions that underlie the sensory corrections of a movement. This experience is necessary for the animal not to be confused by future modifications of the task and external conditions, no matter how small they are, and to be able to adapt rapidly [Bernstein 1996, p. 176, emphasis in original].

Bernstein’s statements can be generalized to the case of workers in the global knowledge-based economy. Here too, conditions rarely repeat themselves precisely, so workers should gain experience with solving problems under a wide variety of initial conditions. Supporting workers to be adaptive problem solvers accommodates this need for learning because it recognizes the situated nature of action [Suchman 1987]. By accommodating context-conditioned variability in action (cf. Turvey et al. [1978]), we can help workers gain valuable experience so that they are not confused if they have to perform the same task in a different way in the future because of a change in context. In other words, designing for adaptation is equivalent to designing for continuous learning.

3.4 Recap: How Much Have Things Changed?

In this section, I have argued that the characteristics of the global knowledge-based economy put a premium on adaptation and continuous learning, and that these trends will increase. The novelty of these work demands is perhaps best illustrated by contrasting them to the Tayloristic approach that was prevalent early in the 20th century [Taylor 1911]. The following quotations from a recent biography of Frederick Taylor make the point in a stark fashion:

The control of work must be taken from the men who did it and placed in the hands of a new breed of planners and thinkers. These men would think everything through beforehand. The workmen—elements of production to be studied, manipulated, and controlled—were to do as they were told [Kanigel 1997, p. 371].

The work itself might be no more physically demanding, but somehow, by day’s end, it felt as if it were. Going strictly by somebody else’s say-so, rigidly following directions, doing it by the clock, made Taylor’s brand of work distasteful.
You had to do it in the one best way prescribed for you and not in your old, idiosyncratic, if perhaps less efficient way [Kanigel 1997, pp. 209–210].

Clearly, times have changed considerably since Taylor’s days. A more flexible approach to work analysis is needed to meet the needs of the global knowledge-based economy.

4. COGNITIVE WORK ANALYSIS: A POTENTIAL PROGRAMMATIC APPROACH

So far, I have described a phenomenon (the global knowledge-based economy) and an associated objective (designing for adaptation, or equivalently, continuous learning). In this section, I will briefly describe a potential programmatic approach for achieving that objective. Cognitive work analysis (CWA [Rasmussen et al. 1994; Vicente 1999]) is a work analysis framework that can be used to create computer-based tools to support worker adaptation and continuous learning. CWA has been developed since the 1960s by researchers at Risø National Laboratory in Roskilde, Denmark (see Vicente [1998; 2000] for an introduction to, and historical review of, this intellectual lineage), and its concepts have been tailored to the properties listed in Table I.

4.1 A Constraint-Based Approach

If a work analysis framework is going to support continuous learning and adaptation to novelty and change, then it must be flexible enough to support variability in action that is sensitive to local, contextual details that cannot be, or have not been, anticipated. CWA accomplishes this by adopting a constraint-based approach. Vicente [1999] provides a detailed description of this approach, but the basic idea is illustrated generically in Figure 1. Various layers of goal-relevant constraint can be identified as being relevant to a particular design problem (see the following subsection for a more detailed description of the categories of constraint adopted by CWA). These constraints can be integrated to create a dynamic constraint boundary. The resulting constraint space, illustrated in Figure 1, defines a set of action possibilities. These possibilities represent the relevant degrees of freedom for productive action. Of course, the constraint boundaries, and thus the constraint space, will change as a function of the context (e.g., the task being performed, the strategy being adopted, the competencies of the actor).

Fig. 1. The constraint-based approach (the figure depicts a space of action possibilities containing two example trajectories). Designers identify constraints on action that can be embedded in a computer-based tool. Workers then have the flexibility to adapt within the remaining space of possibilities. Although not shown in the figure, the constraint boundaries, and thus the available degrees of freedom, are situation-dependent and therefore dynamic. Adapted from Vicente [1999].

By identifying these goal-relevant constraints, designers can generate a set of information requirements that can be used to design a computer-based tool. That tool would provide workers with feedback and decision support as to the constraints that need to be respected, but it would not identify a particular path or trajectory through the constraint space. It would be up to workers to decide which trajectory (i.e., set of actions) to take for a particular set of circumstances. This constraint-based approach thereby provides workers with continual opportunities for learning and the flexibility to adapt within the space of relevant action possibilities.
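To make this design philosophy concrete, here is a minimal sketch (in Python) of the kind of support such a tool could provide; the constraint names and numerical limits are hypothetical and are not taken from any fielded system or from Vicente [1999]. The tool evaluates a worker-proposed course of action against a set of goal-relevant constraints and reports which constraints would be violated, but it never prescribes a particular trajectory.

```python
# Minimal sketch of a constraint-based support tool. The tool checks a
# proposed set of actions against goal-relevant constraints and gives
# feedback; choosing the trajectory is left entirely to the worker.
# Constraint names and limits below are hypothetical.

def within_resource_budget(plan):
    return sum(step["cost"] for step in plan) <= 100.0

def respects_safety_limit(plan):
    return all(step["pressure"] <= 8.0 for step in plan)

def meets_deadline(plan):
    return sum(step["duration"] for step in plan) <= 60.0

CONSTRAINTS = {
    "resource budget": within_resource_budget,
    "safety limit": respects_safety_limit,
    "deadline": meets_deadline,
}

def evaluate(plan):
    """Return the names of the constraints a proposed plan violates."""
    return [name for name, ok in CONSTRAINTS.items() if not ok(plan)]

# Two different trajectories toward the same goal: both are acceptable
# as long as they stay inside the constraint boundaries.
trajectory_1 = [{"cost": 40, "pressure": 6.0, "duration": 20},
                {"cost": 30, "pressure": 7.5, "duration": 25}]
trajectory_2 = [{"cost": 90, "pressure": 5.0, "duration": 70}]

print(evaluate(trajectory_1))   # [] -> within the space of possibilities
print(evaluate(trajectory_2))   # ['deadline'] -> feedback, not a prescription
```

The division of labor is the point: the analysis supplies the boundaries, the tool makes them visible, and the worker retains the degrees of freedom to improvise within them, which is what the discussion of trajectories that follows elaborates.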
In one circumstance, workers may select one trajectory, whereas in another circumstance, they may have to select a different trajectory to achieve the same task goals. Thus, the constraint space provides the flexibility required to support situated action [Suchman 1987]. In addition, the same worker can choose different trajectories, even when the circumstances remain the same. As a result, the constraint space is also flexible enough to support the intrinsic variability frequently observed in human action. Finally, different workers can choose different trajectories to achieve the same outcome in different ways. Therefore, the constraint space is also flexible enough to support individual differences between workers.

Fig. 2. The CWA framework is an example of a constraint-based approach that is comprised of five layers: work domain, control tasks, strategies, social-organizational, and worker competencies. These relationships are logically nested with a progressive reduction of degrees of freedom. When the constraints are integrated, the result is a constraint space like the one in Figure 1. (The figure shows nested sets labeled Control Task 1, Control Task 2, Strategy A, Strategy B, Soc-Org, and Worker.) Adapted from Vicente [1999].

In summary, the constraint-based approach illustrated in Figure 1 allows workers to respond to unanticipated contingencies and to follow their subjective preferences (by choosing a different trajectory through the constraint space), while at the same time, satisfying the demands of the job (by staying within the constraint boundaries). For an example of an interface and a decision support system designed according to this philosophy, see Vicente [1996] and Guerlain et al. [1999], respectively.

4.2 Five Layers of Constraint

The CWA framework is an example of a constraint-based approach that comprises five layers of constraint. The first layer of constraint is the work domain, which is a map of the environment to be acted upon. The second layer of constraint is the set of control tasks that represents what needs to be done to the work domain. The third layer of constraint is the set of strategies that represents the various processes by which action can be effectively carried out. The fourth layer of constraint is the social-organizational structure that represents how the preceding set of demands is allocated among actors, as well as how those actors can productively organize and coordinate themselves. Finally, the fifth layer of constraint is the set of worker competencies that represents the capabilities that are required for success.

Figure 2 illustrates a hypothetical example showing how these five layers of constraint are logically nested. The size of each set in this diagram represents the productive degrees of freedom for actors, so large sets represent many relevant possibilities for action whereas small sets represent fewer relevant possibilities for action. The outer boundary represents the first phase of CWA, work domain analysis, and shows what the controlled system is capable of doing. This level is a fundamental bedrock of constraint on the actions of any actor.
No matter what control task is being pursued, what strategy has been adopted, what social-organizational structure is in place, or what the competencies of the workers are, there are certain constraints on action that are imposed by the functional structure of the system being acted upon. For example, pilots cannot use engines for functions that those engines are not capable of achieving. Thus, the work domain delimits the productive degrees of freedom that are available for action.

The second phase, control task analysis, inherits the constraints of the first phase but adds additional constraints as well. For the sake of clarity, only two hypothetical control tasks are illustrated in the example in Figure 2. Although the work domain provides a large number of total degrees of freedom, when actors are pursuing a particular control task, only a subset of those degrees of freedom are usually relevant. For example, when pilots are navigating at cruising altitude, the constraints associated with the landing gear and brakes are usually not relevant. Furthermore, there are new constraints that must be respected above and beyond those imposed by the work domain. For example, for some control tasks, it is important that certain actions be performed before others. This constraint is a property of the control task, not the work domain. As shown in Figure 2, the net result is a reduction in the relevant degrees of freedom. When workers are pursuing a particular control task, only certain actions are meaningful and require consideration. It is for this reason that the sets depicting control tasks 1 and 2 in Figure 2 are nested within the set for the entire work domain.

The third phase of CWA is depicted in the example in Figure 2 by two hypothetical strategies, A and B, for control task 2 (strategies for control task 1 are not shown for the sake of clarity). The strategy phase also inherits the constraints associated with previous phases of analysis. After all, a strategy cannot make a work domain do something that it is not capable of doing. This is why the two sets for strategies A and B in Figure 2 are subsets of the work domain set. In addition, a strategy must also work within the constraints associated with its corresponding control task; otherwise it will not reliably achieve the required task goals. It is for this reason that strategies A and B in Figure 2 are nested inside the set for control task 2.

The strategies phase also introduces new constraints of its own, however. The control task level merely identifies the degrees of freedom associated with achieving a particular goal. There are conceivably many different ways in which control task 2, for instance, can be performed. All of these processes are encompassed by the control task 2 set in Figure 2. When a particular strategy for performing the control task is identified, some degrees of freedom are usually not relevant because they are only required for other strategies. A specific strategy imposes a certain flow or process that adds constraints on top of those that are imposed by merely achieving the desired outcome. It is for this reason that the sets for strategies A and B in Figure 2 are smaller than the set for control task 2.

The fourth phase of CWA, social-organizational analysis, follows a similar pattern. It too inherits the constraints imposed by previous phases of work analysis, and it too adds a new layer of constraint.
There are multiple organizational structures that could conceivably be adopted for any one strategy. To take a very simple example, a strategy may be performed by one worker alone, by two workers in a collaborative manner, by one worker supervising another, or by a worker supervising automation. In each of these cases, the same strategy is being adopted; the same control task is being pursued; and the same work domain is being acted upon. Nevertheless, these different organizational architectures have different constraints associated with them. A worker executing the strategy alone will likely draw on a different, although probably overlapping, set of relevant actions than two workers executing the strategy in a cooperative fashion. Thus, a particular social-organizational structure represents a further narrowing of degrees of freedom. This logic explains why the social-organizational set in Figure 2 is nested within the set for strategy A (analogous constraints for strategy B are not shown for the sake of clarity).

Finally, the fifth phase of worker competencies reduces the degrees of freedom even further. There are certain things that people are simply not capable of doing. Consequently, particular ways of working are not feasible. For example, some activities require too much working memory load, too much time, too much knowledge, or too much computational effort for people to perform. These constraints are specifically associated with workers’ competencies, not with any of the other preceding phases of analysis alone. This final narrowing down of degrees of freedom is illustrated in Figure 2 by the worker competency set, which is nested within the social-organizational set for strategy A.
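The progressive narrowing described in this subsection can be pictured as successive filters applied to a set of action possibilities. The sketch below (in Python) is purely illustrative: the twenty candidate actions and the five predicates are hypothetical stand-ins for the results of real work domain, control task, strategy, social-organizational, and worker competency analyses.

```python
# Hypothetical illustration of the five nested CWA layers as successive
# filters: each layer keeps only the action possibilities that survive
# every preceding layer, so the set can only shrink (cf. Figure 2).

ACTIONS = {f"action_{i}" for i in range(1, 21)}   # made-up action possibilities

def work_domain(a):           return int(a.split("_")[1]) <= 16  # what the system can do
def control_task(a):          return int(a.split("_")[1]) <= 12  # relevant to this goal
def strategy(a):              return int(a.split("_")[1]) <= 8   # fits this way of working
def social_organizational(a): return int(a.split("_")[1]) <= 5   # fits this role allocation
def worker_competencies(a):   return int(a.split("_")[1]) <= 3   # humanly feasible

LAYERS = [("work domain", work_domain),
          ("control task", control_task),
          ("strategy", strategy),
          ("social-organizational", social_organizational),
          ("worker competencies", worker_competencies)]

remaining = ACTIONS
for name, keep in LAYERS:
    remaining = {a for a in remaining if keep(a)}
    print(f"after {name:<22} {len(remaining):2d} possibilities remain")
```

Nothing in the sketch chooses among the remaining possibilities; as in Figure 2, the layers only delimit the relevant degrees of freedom, and the final choice of action stays with the worker.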
4.3 Modeling Tools and Design Implications

The CWA framework also comprises modeling tools that can be used to identify each layer of constraint [Rasmussen et al. 1994; Vicente 1999]. For example, the abstraction hierarchy can be used to conduct a work domain analysis (layer 1); the decision ladder can be used to conduct a control task analysis (layer 2); and the skills, rules, and knowledge taxonomy can be used to conduct a worker competencies analysis (layer 5). These modeling tools are used to create models for particular design problems. As shown in Table II, each of these models is linked to a particular class of design interventions. The list is merely intended to be illustrative, not definitive or exhaustive.

Table II. Relationships between the Five Phases of CWA and Various Classes of Systems Design Interventions (from Vicente [1999])
(1) Work Domain: What information should be measured? (sensors) What information should be derived? (models) How should information be organized? (database)
(2) Control Tasks: What goals must be pursued, and what are the constraints on those goals? (procedures or automation) What information and relations are relevant for particular classes of situations? (context-sensitive interface)
(3) Strategies: What frames of reference are useful? (dialog modes) What control mechanisms are useful? (process flow)
(4) Social-Organizational: What are the responsibilities of all of the actors? (role allocation) How should actors communicate with each other? (organizational structure)
(5) Worker Competencies: What knowledge, rules, and skills do workers need to have? (selection, training, and interface form)

Beginning with the work domain, analyzing the system being controlled provides a great deal of insight into what information is required to understand its state. This analysis, in turn, has important implications for the design of sensors and models [Reising and Sanderson 1996]. The work domain analysis also reveals the functional structure of the system being controlled. These insights can then be used to design a database that keeps track of the relationships between variables, providing a coherent, integrated, and global representation of the information contained therein.

The control task analysis deals not with data structures, but with control structures. The goals that need to be satisfied for certain classes of situations, and the constraints on the achievement of those goals, are identified here. This knowledge can then be used to design either constraint-based procedures that guide workers in achieving those goals, or automation that achieves those goals autonomously or semiautonomously. In addition, this analysis will also identify what variables and relations in the work domain may be relevant for certain classes of situations (see Figure 2). Those insights can be used to design context-sensitive interface mechanisms that present workers with the right information at the right time [Woods 1991].

The strategies analysis deals not just with what needs to be done but also with how it is to be done. Each strategy is a different frame of reference for pursuing control task goals, each with its unique flow and process requirements. Thus, identifying what strategies can be used for each control task provides some insight into what type of human-computer dialog modes should be designed. Ideally, each mode should be tailored to the unique requirements of each strategy (e.g., Pejtersen [1992]). The strategies analysis also reveals the generative mechanisms (i.e., rules or algorithms) constituting each strategy, which, in turn, helps specify the process flow for each dialog mode.

The social-organizational analysis deals with two very important and challenging classes of design interventions. Given the knowledge uncovered in previous phases, analysts can decide what the responsibilities of the various actors are, including workers, designers, and automation. These role allocation decisions define the job content of the various actors. In addition, analysts should also determine how the various actors can effectively communicate with each other. That analysis will help identify the authority and coordination patterns that constitute a viable organizational structure.

The final phase shown in Table II is the analysis of worker competencies. Since the work demands have been thoroughly analyzed by this point, the knowledge, rules, and skills that workers must have to function effectively can be determined. This analysis will help develop a set of specifications for worker training and selection (if relevant). In addition, this analysis will also provide some insight into how information should be presented to workers because some competencies may not be triggered unless information is presented in particular forms [Vicente and Rasmussen 1992].

Although the list in Table II is not definitive, it shows how—right from the very start—CWA is deliberately geared toward uncovering implications for systems design.
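The linkage summarized in Table II can also be written down as a simple data structure, which is sometimes a convenient way to keep an analysis and its design implications together. The sketch below (in Python) is illustrative only and is not part of the CWA framework itself; it pairs each phase with the modeling tool named above, where this article names one, and with the classes of design interventions listed in Table II.

```python
# Illustrative only: Table II and the modeling tools named in Section 4.3,
# collected into one structure. The dataclass is a convenience for this
# sketch, not something prescribed by the CWA framework itself.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CWAPhase:
    name: str
    modeling_tool: Optional[str]        # None where Section 4.3 names no tool
    design_interventions: List[str]

PHASES = [
    CWAPhase("work domain", "abstraction hierarchy",
             ["sensors", "models", "database"]),
    CWAPhase("control tasks", "decision ladder",
             ["procedures or automation", "context-sensitive interface"]),
    CWAPhase("strategies", None,
             ["dialog modes", "process flow"]),
    CWAPhase("social-organizational", None,
             ["role allocation", "organizational structure"]),
    CWAPhase("worker competencies", "skills, rules, and knowledge taxonomy",
             ["selection", "training", "interface form"]),
]

for phase in PHASES:
    tool = phase.modeling_tool or "(no tool named here)"
    print(f"{phase.name}: {tool} -> {', '.join(phase.design_interventions)}")
```

Either representation makes the same point as the prose: each phase of analysis feeds a specific class of design decisions.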
4.4 Summary

The CWA framework is intended to be a programmatic approach to one large class of HCI problems: how to design computer-based tools that help workers perform open-ended intellectual tasks that require discretionary decision making. These tasks require continuous learning and adaptation to novelty. CWA tries to support these demands by adopting a flexible constraint-based approach, by identifying five categories of goal-relevant constraints, by providing modeling tools that can be used to analyze each layer of constraint, and by linking each layer of constraint to a particular category of systems design interventions.

5. THE FUTURE: WHAT CAN WE BE SURE OF?

Predicting the future is always an uncertain endeavor, but given the analysis presented in this article, there are a few things that appear to be indisputable. As complex sociotechnical systems become more open, change will become the norm, not the exception. Therefore, to be competitive in the global knowledge-based economy, there will be an increasing demand for workers, managers, and organizations to be flexible and adaptive. At the same time, there will be an accompanying need for learning to learn. Accordingly, computer-based tools should be deliberately and systematically designed to help workers effectively fulfill these challenging needs. The CWA framework is intended to be a programmatic approach to this practical problem. But regardless of whether or not CWA itself succeeds in achieving its objectives, HCI in the new millennium should be, and will be, increasingly concerned with systematically designing for worker adaptation.

ACKNOWLEDGMENTS

I would like to thank Jack Carroll for encouraging me to write this article, and the three reviewers for their exceptionally incisive and constructive comments.

REFERENCES

BERNSTEIN, N. A. 1996. On dexterity and its development. In Dexterity and Its Development, M. L. Latash and M. T. Turvey, Eds. Lawrence Erlbaum Assoc. Inc., Hillsdale, NJ, 1–244.
BRZUSTOWSKI, T. 1998. Engineering design and innovation, what’s the connection? Clarice Chalmers’ Design Lecture. Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada.
CANNON-BOWERS, J. A., BOST, R., HAMBURGER, T., CRISP, H., OSGA, G., AND PERRY, A. 1997. Achieving affordability through human systems integration. In Proceedings of the 3rd Annual Naval Aviation Systems Engineering Supportability Symposium (Arlington, VA).
COY, P., WOOLLEY, S., SPIRO, L. N., AND GLASGALL, W. 1998. Failed wizards of Wall Street. Bus. Week, Sept. 21, 114–119.
DAVIS, S. AND MEYER, C. 1998. Blur: The Speed of Change in the Connected Economy. Addison-Wesley, Reading, MA.
GIBSON, E. J. 1991. An Odyssey in Learning and Perception. MIT Press, Cambridge, MA.
GUERLAIN, S. A., SMITH, P. J., OBRADOVICH, J. H., RUDMANN, S., STROHM, P., SMITH, J. W., SVIRBELY, J., AND SACHS, L. 1999. Interactive critiquing as a form of decision support: An empirical evaluation. Hum. Factors 41, 1, 72–89.
HIRSCHHORN, L. 1984. Beyond Mechanization: Work and Technology in a Postindustrial Age. MIT Press, Cambridge, MA.
JOHANNSON, R. 1993. System Modeling and Identification. Prentice-Hall, Englewood Cliffs, NJ.
KANIGEL, R. 1997. The One Best Way: Frederick Winslow Taylor and the Enigma of Efficiency. Viking, New York, NY.
LEVESON, N. G. 1995. Safeware: System Safety and Computers. Addison-Wesley Longman Publ. Co., Inc., Reading, MA.
MORONE, J. G. 1993. Winning in High-Tech Markets: The Role of General Management. Harvard Business School Press, Boston, MA.
NARENDRA, K. S. 1986. Adaptive and Learning Systems: Theory and Applications. Plenum Press, New York, NY.
NICKERSON, R. S. 1995. Emerging Needs and Opportunities for Human Factors Research. National Academy Press, Washington, DC.
NORROS, L. 1996. System disturbances as springboard for development of operators’ expertise. In Cognition and Communication at Work, Y. Engeström and D. Middleton, Eds. Cambridge University Press, New York, NY, 159–176.
PEJTERSEN, A. M. 1992. The Book House: An icon based database system for fiction retrieval in public libraries. In The Marketing of Library and Information Services 2, B. Cronin, Ed. ASLIB, The Association for Information Management, London, UK, 572–591.
PERROW, C. 1984. Normal Accidents: Living with High Risk Technologies. Basic Books, Inc., New York, NY.
POOL, R. 1997. Beyond Engineering: How Society Shapes Technology. Oxford University Press, Inc., New York, NY.
RASMUSSEN, J. 1986. Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. North-Holland Publishing Co., Amsterdam, The Netherlands.
RASMUSSEN, J., PEJTERSEN, A. M., AND GOODSTEIN, L. P. 1994. Cognitive Systems Engineering. Wiley series in systems engineering. John Wiley and Sons, Inc., New York, NY.
REASON, J. 1990. Human Error. Cambridge University Press, New York, NY.
REISING, D. V. AND SANDERSON, P. M. 1996. Work domain analysis of a pasteurization plant: Building an abstraction hierarchy representation. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (Santa Monica, CA), Human Factors and Ergonomics Society, Inc., Santa Monica, CA, 293–297.
SENGE, P. M. 1990. The Fifth Discipline: The Art and Practice of the Learning Organization. Doubleday, New York, NY.
SUCHMAN, L. A. 1987. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press, New York, NY.
TAYLOR, F. W. 1911. The Principles of Scientific Management. Harper and Row Publishers, Inc., New York, NY.
TURVEY, M. T., SHAW, R. E., AND MACE, W. 1978. Issues in the theory of action: Degrees of freedom, coordinative structures and coalitions. In Attention and Performance VII, J. Requin, Ed. Lawrence Erlbaum Assoc. Inc., Hillsdale, NJ, 557–595.
VICENTE, K. J. 1990. Coherence- and correspondence-driven work domains: Implications for systems design. Behav. Inf. Tech. 9, 6, 493–502.
VICENTE, K. J. 1996. Improving dynamic decision making in complex systems through ecological interface design: A research overview. Syst. Dyn. Rev. 12, 251–279.
VICENTE, K. J. 1998. An evolutionary perspective on the growth of cognitive engineering: The Risø genotype. Ergonomics 41, 2, 156–159.
VICENTE, K. J. 1999. Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work. Lawrence Erlbaum Assoc. Inc., Hillsdale, NJ.
VICENTE, K. J. 2000. Cognitive engineering research at Risø from 1962–1979. In Human/Technology Interaction in Complex Systems, E. Salas, Ed. JAI Press, Inc., Greenwich, CT. In press.
VICENTE, K. J. AND RASMUSSEN, J. 1990. The ecology of human-machine systems II: Mediating “direct perception” in complex work domains. Ecol. Psychol. 2, 3, 207–250.
VICENTE, K. J. AND RASMUSSEN, J. 1992. Ecological interface design: Theoretical foundations. IEEE Trans. Syst. Man Cybern. SMC-22, 4, 589–606.
WOODS, D. D. 1991. The cognitive engineering of problem representations. In Human-Computer Interaction in Complex Systems, J. Alty and G. Weir, Eds. Academic Press Ltd., London, UK, 169–188.