Human factors and ergonomics

Human factors and ergonomics, also known as comfort design, functional design, and systems, is the practice of designing products, systems, or processes to take proper account of the interaction between them and the people who use them.

The field has seen contributions from numerous disciplines, such as psychology, engineering, biomechanics, industrial design, physiology, and anthropometry. In essence, it is the study of designing equipment, devices and processes that fit the human body and its cognitive abilities. The two terms "human factors" and "ergonomics" are essentially synonymous.

The International Ergonomics Association defines ergonomics or human factors as follows: Ergonomics is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.

HF&E is employed to fulfill the goals of occupational health and safety and productivity. It is relevant in the design of such things as safe furniture and easy-to-use interfaces to machines and equipment.

Proper ergonomic design is necessary to prevent repetitive strain injuries and other musculoskeletal disorders, which can develop over time and can lead to long-term disability. Human factors and ergonomics is concerned with the "fit" between the user, equipment and their environments. It takes account of the user's capabilities and limitations in seeking to ensure that tasks, functions, information and the environment suit each user.

To assess the fit between a person and the technology used, human factors specialists or ergonomists consider the job being done and the demands on the user; the equipment used; and the information used. Ergonomics draws on many disciplines in its study of humans and their environments, including anthropometry, biomechanics, mechanical engineering, industrial engineering, industrial design, information design, kinesiology, physiology, cognitive psychology, industrial and organizational psychology, and space psychology.

The term ergonomics first entered the modern lexicon when Polish scientist W. Jastrzębowski used the word in his 1857 article. The introduction of the term to the English lexicon is widely attributed to British psychologist Hywel Murrell, at the 1949 meeting at the UK's Admiralty, which led to the foundation of The Ergonomics Society. He used it to encompass the studies in which he had been engaged during and after World War II.

The expression human factors is a predominantly North American term which has been adopted to emphasise the application of the same methods to non-work-related situations. A "human factor" is a physical or cognitive property of an individual or social behavior specific to humans that may influence the functioning of technological systems. Ergonomics comprises three main fields of research: physical, cognitive and organisational ergonomics.

There are many specializations within these broad categories. Specialisations in the field of physical ergonomics may include visual ergonomics. Specialisations within the field of cognitive ergonomics may include usability, human–computer interaction, and user experience engineering.

Some specialisations may cut across these domains: environmental ergonomics is concerned with human interaction with the environment as characterized by climate, temperature, pressure, vibration, and light. The emerging field of human factors in highway safety uses human factors principles to understand the actions and capabilities of road users – car and truck drivers, pedestrians, bicyclists, etc. – and uses this knowledge to design roads and streets to reduce traffic collisions. Driver error is listed as a contributing factor in 44% of fatal collisions in the United States, so a topic of particular interest is how road users gather and process information about the road and its environment, and how to assist them to make the appropriate decision.

New terms are being generated all the time. For instance, "user trial engineer" may refer to a human factors professional who specialises in user trials. Although the names change, human factors professionals apply an understanding of human factors to the design of equipment, systems and working methods in order to improve comfort, health, safety, and productivity. According to the International Ergonomics Association, within the discipline of ergonomics there exist domains of specialization:

Physical ergonomics is concerned with human anatomy, and some of the anthropometric, physiological and biomechanical characteristics as they relate to physical activity. Physical ergonomic principles have been widely used in the design of both consumer and industrial products. Physical ergonomics is important in the medical field, particularly to those diagnosed with physiological ailments or disorders such as arthritis (both chronic and temporary) or carpal tunnel syndrome. Pressure that is insignificant or imperceptible to those unaffected by these disorders may be very painful, or render a device unusable, for those who are. Many ergonomically designed products are also used or recommended to treat or prevent such disorders, and to treat pressure-related chronic pain.

One of the most prevalent types of work-related injuries is musculoskeletal disorder. Work-related musculoskeletal disorders (WRMDs) result in persistent pain, loss of functional capacity and work disability, but their initial diagnosis is difficult because they are mainly based on complaints of pain and other symptoms. Every year, 1.8 million U.S. workers experience WRMDs and nearly 600,000 of the injuries are serious enough to cause workers to miss work. Certain jobs or work conditions cause a higher rate of worker complaints of undue strain, localized fatigue, discomfort, or pain that does not go away after overnight rest. These types of jobs are often those involving activities such as repetitive and forceful exertions; frequent, heavy, or overhead lifts; awkward work positions; or use of vibrating equipment. The Occupational Safety and Health Administration has found substantial evidence that ergonomics programs can cut workers' compensation costs, increase productivity and decrease employee turnover. Therefore, it is important to gather data to identify jobs or work conditions that are most problematic, using sources such as injury and illness logs, medical records, and job analyses.

Cognitive ergonomics is concerned with mental processes, such as perception, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system. Organizational ergonomics is concerned with the optimization of socio-technical systems, including their organizational structures, policies, and processes.

The foundations of the science of ergonomics appear to have been laid within the context of the culture of Ancient Greece. A good deal of evidence indicates that Greek civilization in the 5th century BC used ergonomic principles in the design of their tools, jobs, and workplaces. One outstanding example of this can be found in the description Hippocrates gave of how a surgeon's workplace should be designed and how the tools he uses should be arranged. The archaeological record also shows that the early Egyptian dynasties made tools and household equipment that illustrated ergonomic principles.

In the 19th century, Frederick Winslow Taylor pioneered the "scientific management" method, which proposed a way to find the optimum method of carrying out a given task. Taylor found that he could, for example, triple the amount of coal that workers were shoveling by incrementally reducing the size and weight of coal shovels until the fastest shoveling rate was reached. Frank and Lillian Gilbreth expanded Taylor's methods in the early 1900s to develop the "time and motion study". They aimed to improve efficiency by eliminating unnecessary steps and actions. By applying this approach, the Gilbreths reduced the number of motions in bricklaying from 18 to 4.5, allowing bricklayers to increase their productivity from 120 to 350 bricks per hour.

However, this approach was rejected by Russian researchers who focused on the well-being of the worker. At the First Conference on Scientific Organization of Labour, Vladimir Bekhterev and Vladimir Nikolayevich Myasishchev criticised Taylorism. Bekhterev argued that "The ultimate ideal of the labour problem is not in it, but is in such organisation of the labour process that would yield a maximum of efficiency coupled with a minimum of health hazards, absence of fatigue and a guarantee of the sound health and all-round personal development of the working people." Myasishchev rejected Frederick Taylor's proposal to turn man into a machine, regarding dull, monotonous work as a temporary necessity until a corresponding machine could be developed. He also went on to suggest a new discipline of "ergology" to study work as an integral part of the re-organisation of work. The concept was taken up by Myasishchev's mentor, Bekhterev, in his final report on the conference, merely changing the name to "ergonology".

Prior to World War I, the focus of aviation psychology was on the aviator himself, but the war shifted the focus onto the aircraft, in particular, the design of controls and displays, and the effects of altitude and environmental factors on the pilot. The war saw the emergence of aeromedical research and the need for testing and measurement methods. Studies on driver behaviour started gaining momentum during this period, as Henry Ford started providing millions of Americans with automobiles. By the end of World War I, two aeronautical labs were established, one at Brooks Air Force Base, Texas, and the other at Wright-Patterson Air Force Base outside of Dayton, Ohio. Many tests were conducted to determine which characteristics differentiated the successful pilots from the unsuccessful ones. During the early 1930s, Edwin Link developed the first flight simulator. The trend continued and more sophisticated simulators and test equipment were developed. Another significant development was in the civilian sector, where the effects of illumination on worker productivity were examined. This led to the identification of the Hawthorne Effect, which suggested that motivational factors could significantly influence human performance.

World War II marked the development of new and complex machines and weaponry, and these made new demands on operators' cognition. It was no longer possible to adopt the Tayloristic principle of matching individuals to preexisting jobs. Now the design of equipment had to take into account human limitations and take advantage of human capabilities. The decision-making, attention, situational awareness and hand-eye coordination of the machine's operator became key in the success or failure of a task. Substantial research was conducted to determine the human capabilities and limitations that had to be taken into account. A lot of this research took off where the aeromedical research between the wars had left off. An example of this is the study done by Fitts and Jones, who studied the most effective configuration of control knobs to be used in aircraft cockpits.

Much of this research carried over into other equipment with the aim of making the controls and displays easier for the operators to use. The entry of the terms "human factors" and "ergonomics" into the modern lexicon dates from this period. It was observed that fully functional aircraft flown by the best-trained pilots still crashed. In 1943 Alphonse Chapanis, a lieutenant in the U.S. Army, showed that this so-called "pilot error" could be greatly reduced when more logical and differentiable controls replaced confusing designs in airplane cockpits. After the war, the Army Air Force published 19 volumes summarizing what had been established from research during the war. In the decades since World War II, HF&E has continued to flourish and diversify. Work by Elias Porter and others within the RAND Corporation after WWII extended the conception of HF&E. In the initial 20 years after World War II, most activities were done by the "founding fathers": Alphonse Chapanis, Paul Fitts, and Small.

The beginning of the Cold War led to a major expansion of defense-supported research laboratories. Also, many labs established during WWII started expanding. Most of the research following the war was military-sponsored. Large sums of money were granted to universities to conduct research. The scope of the research also broadened from small equipment to entire workstations and systems. Concurrently, a lot of opportunities started opening up in the civilian industry. The focus shifted from research to participation through advice to engineers in the design of equipment. After 1965, the period saw a maturation of the discipline. The field has expanded with the development of the computer and computer applications.

The Space Age created new human factors issues such as weightlessness and extreme g-forces. Tolerance of the harsh environment of space and its effects on the mind and body were widely studied.

The dawn of the Information Age has resulted in the related field of human–computer interaction. Likewise, the growing demand for and competition among consumer goods and electronics has resulted in more companies and industries including human factors in their product design. Using advanced technologies in human kinetics, body-mapping, movement patterns and heat zones, companies are able to manufacture purpose-specific garments, including full body suits, jerseys, shorts, shoes, and even underwear.

Formed in 1946 in the UK, the oldest professional body for human factors specialists and ergonomists is The Chartered Institute of Ergonomics and Human Factors, formerly known as the Institute of Ergonomics and Human Factors and before that, The Ergonomics Society.

The Human Factors and Ergonomics Society was founded in 1957. The Society's mission is to promote the discovery and exchange of knowledge concerning the characteristics of human beings that are applicable to the design of systems and devices of all kinds.

The International Ergonomics Association is a federation of ergonomics and human factors societies from around the world. The mission of the IEA is to elaborate and advance ergonomics science and practice, and to improve the quality of life by expanding its scope of application and contribution to society. As of September 2008, the International Ergonomics Association has 46 federated societies and 2 affiliated societies. The Institute of Occupational Medicine was founded by the coal industry in 1969. From the outset the IOM employed an ergonomics staff to apply ergonomics principles to the design of mining machinery and environments. To this day, the IOM continues ergonomics activities, especially in the fields of musculoskeletal disorders, heat stress and the ergonomics of personal protective equipment. As in much of occupational ergonomics, the demands and requirements of an ageing UK workforce are a growing concern and interest for IOM ergonomists.

The International Society of Automotive Engineers is a professional organization for mobility engineering professionals in the aerospace, automotive, and commercial vehicle industries. The Society is a standards development organization for the engineering of powered vehicles of all kinds, including cars, trucks, boats, aircraft, and others. The Society of Automotive Engineers has established a number of standards used in the automotive industry and elsewhere. It encourages the design of vehicles in accordance with established human factors principles. It is one of the most influential organizations with respect to ergonomics work in automotive design. This society regularly holds conferences which address topics spanning all aspects of Human Factors/Ergonomics.

Human factors practitioners come from a variety of backgrounds, though predominantly they are psychologists and physiologists. Designers (industrial, interaction, and graphic), anthropologists, technical communication scholars and computer scientists also contribute. Typically, an ergonomist will have an undergraduate degree in psychology, engineering, design or health sciences, and usually a master's degree or doctoral degree in a related discipline. Though some practitioners enter the field of human factors from other disciplines, both M.S. and Ph.D. degrees in Human Factors Engineering are available from several universities worldwide.


Until recently, methods used to evaluate human factors and ergonomics ranged from simple questionnaires to more complex and expensive usability labs. Some of the more common HF&E methods are listed below:

· Ethnographic analysis: Using methods derived from ethnography, this process focuses on observing the uses of technology in a practical environment. It is a qualitative and observational method that focuses on "real-world" experience and pressures, and the usage of technology or environments in the workplace. The process is best used early in the design process.

· Focus Groups are another form of qualitative research in which one individual facilitates discussion and elicits opinions about the technology or process under investigation. This can be done on a one-to-one interview basis or in a group session. Focus groups can be used to gain a large quantity of deep qualitative data, though due to the small sample size they can be subject to a higher degree of individual bias. They can be used at any point in the design process, since much depends on the exact questions to be pursued and the structure of the group, but they can be extremely costly.

· Iterative design: Also known as prototyping, the iterative design process seeks to involve users at several stages of design, in order to correct problems as they emerge. As prototypes emerge from the design process, these are subjected to other forms of analysis as outlined in this article, and the results are then taken and incorporated into the new design. Trends amongst users are analyzed, and products redesigned. This can become a costly process, and needs to be done as soon as possible in the design process before designs become too concrete.

· Meta-analysis: A supplementary technique used to examine a wide body of already existing data or literature in order to derive trends or form hypotheses to aid design decisions. As part of a literature survey, a meta-analysis can be performed in order to discern a collective trend from individual variables.

· Subjects-in-tandem: Two subjects are asked to work concurrently on a series of tasks while vocalizing their analytical observations. The technique is also known as "co-discovery", as participants tend to feed off each other's comments to generate a richer set of observations than is often possible with the participants working separately. This is observed by the researcher, and can be used to discover usability difficulties. This process is usually recorded.


· Surveys and Questionnaires: A commonly used technique outside of human factors as well, surveys and questionnaires have an advantage in that they can be administered to a large group of people for relatively low cost, enabling the researcher to gain a large amount of data. The validity of the data obtained is, however, always in question, as the questions must be written and interpreted correctly, and are, by definition, subjective. Those who actually respond are in effect self-selecting as well, widening the gap between the sample and the population further.

· Task analysis: A process with roots in activity theory, task analysis is a way of systematically describing human interaction with a system or process to understand how to match the demands of the system or process to human capabilities. The complexity of this process is generally proportional to the complexity of the task being analyzed, and so can vary in cost and time involvement. It is a qualitative and observational process, best used early in the design process.

· Think aloud protocol: Also known as "concurrent verbal protocol", this is the process of asking a user to execute a series of tasks or use technology, while continuously verbalizing their thoughts so that a researcher can gain insights as to the user's analytical process. It can be useful for finding design flaws that do not affect task performance but may have a negative cognitive effect on the user. It is also useful for drawing on experts in order to better understand procedural knowledge of the task in question. It is less expensive than focus groups, but tends to be more specific and subjective.

· User analysis: This process is based around designing for the attributes of the intended user or operator, establishing the characteristics that define them and creating a persona for the user. Best done at the outset of the design process, a user analysis will attempt to predict the most common users, and the characteristics that they would be assumed to have in common. This can be problematic if the design concept does not match the actual user, or if the identified characteristics are too vague to make clear design decisions from. This process is, however, usually quite inexpensive, and commonly used.

· "Wizard of Oz": This is a comparatively uncommon technique but has seen some use in mobile devices. Based upon the Wizard of Oz experiment, this technique involves an operator who remotely controls the operation of a device in order to imitate the response of an actual


computer program. It has the advantage of producing a highly changea- ble set of reactions, but can be quite costly and difficult to undertake.

· Methods Analysis is the process of studying the tasks a worker completes using a step-by-step investigation. Each task is broken down into smaller steps until each motion the worker performs is described. Doing so enables you to see exactly where repetitive or straining tasks occur.

· Time studies determine the time required for a worker to complete each task. Time studies are often used to analyze cyclical jobs. They are considered "event based" studies because time measurements are triggered by the occurrence of predetermined events (a minimal sketch of this bookkeeping appears after this list).

· Work sampling is a method in which the job is sampled at random intervals to determine the proportion of total time spent on a particular task. It provides insight into how often workers are performing tasks which might cause strain on their bodies (the estimation arithmetic is sketched after this list).

· Predetermined time systems are methods for analyzing the time spent by workers on a particular task. One of the most widely used predetermined time systems is called Methods-Time Measurement (MTM). Other common work measurement systems include MODAPTS and MOST (an illustrative element-summing calculation appears after this list).

· Cognitive Walkthrough: This is a usability inspection method in which evaluators apply a user perspective to task scenarios to identify design problems. As applied to macroergonomics, evaluators are able to analyze the usability of work system designs to identify how well a work system is organized and how well the workflow is integrated.

· Kansei Method: This is a method that transforms consumers' responses to new products into design specifications. As applied to macroergonomics, this method can translate employees' responses to changes to a work system into design specifications.

· High Integration of Technology, Organization, and People: This is a manual, step-by-step procedure for applying technological change to the workplace. It allows managers to be more aware of the human and organizational aspects of their technology plans, allowing them to efficiently integrate technology in these contexts.

· Top Modeler: This model helps manufacturing companies identify the organizational changes needed when new technologies are being considered for their process.

· Computer-integrated Manufacturing, Organization, and People System Design: This model allows for evaluating computer-integrated manufacturing, organization, and people system design based on knowledge of the system.

· Anthropotechnology: This method considers analysis and design modification of systems for the efficient transfer of technology from one culture to another.

· Systems Analysis Tool: This is a method to conduct systematic trade-off evaluations of work-system intervention alternatives.

· Macroergonomic Analysis of Structure: This method analyzes the structure of work systems according to their compatibility with unique sociotechnical aspects.

· Macroergonomic Analysis and Design: This method assesses work-system processes by using a ten-step process.

· Virtual Manufacturing and Response Surface Methodology: This method uses computerized tools and statistical analysis for workstation design.

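To make the "event based" bookkeeping of a time study concrete, here is a minimal Python sketch with invented sample data rather than measurements from any real study: cycle times are recovered as the intervals between successive occurrences of a predetermined cycle-end event.

```python
# Illustrative event-based time study. The timestamps (seconds from the
# start of observation) mark occurrences of a predetermined event, here
# the completion of one work cycle; the values are made-up sample data.
event_times = [0.0, 31.2, 63.9, 94.1, 126.8, 158.0]

# Each cycle time is the interval between consecutive event occurrences.
cycle_times = [b - a for a, b in zip(event_times, event_times[1:])]

mean_cycle = sum(cycle_times) / len(cycle_times)
print("cycle times (s):", [round(t, 1) for t in cycle_times])
print(f"mean cycle time: {mean_cycle:.1f} s")
```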
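Work sampling, in turn, is at bottom proportion estimation from random-instant observations. The following sketch simulates that estimation step with hypothetical task names and simulated data; the normal-approximation confidence interval is one common way such estimates are qualified, not a prescribed part of the method.

```python
import math
import random
from collections import Counter

# Hypothetical observation log: at each randomly chosen instant the
# observer records which task the worker is performing.
TASKS = ["lifting", "typing", "walking", "idle"]
observations = [random.choice(TASKS) for _ in range(400)]  # stand-in data

def work_sampling_estimate(observations, z=1.96):
    """Estimate the proportion of total time spent on each task.

    Each random-instant observation is treated as a Bernoulli trial, so
    the share of observations showing a task estimates the share of
    total time that task occupies; z = 1.96 gives a ~95% interval.
    """
    n = len(observations)
    results = {}
    for task, k in Counter(observations).items():
        p = k / n
        half_width = z * math.sqrt(p * (1 - p) / n)  # normal approximation
        results[task] = (p, half_width)
    return results

for task, (p, hw) in sorted(work_sampling_estimate(observations).items()):
    print(f"{task:8s}: {p:6.1%} +/- {hw:.1%}")
```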
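Finally, predetermined time systems reduce to summing tabulated element times. The sketch below mimics the shape of such a calculation: the element names and TMU values are placeholders invented for the example, not actual MTM table entries; the only factual constant used is the standard MTM definition 1 TMU = 0.00001 hour = 0.036 seconds.

```python
# Illustrative predetermined-time calculation in the style of MTM.
# A real analysis would look elements up in the published MTM tables;
# the element names and TMU values here are placeholders.
SECONDS_PER_TMU = 0.036  # standard MTM conversion: 1 TMU = 0.00001 hour

# Hypothetical motion elements for "pick up part and place in fixture".
task_elements = [
    ("reach to part", 10.0),         # TMU values are placeholders
    ("grasp part", 2.0),
    ("move part to fixture", 12.0),
    ("position part", 20.0),
    ("release", 2.0),
]

total_tmu = sum(tmu for _, tmu in task_elements)
total_seconds = total_tmu * SECONDS_PER_TMU
print(f"total: {total_tmu:.1f} TMU = {total_seconds:.2f} s per cycle")

# The cycle time feeds directly into pacing arithmetic, e.g. the
# theoretical number of cycles per hour for this one task:
print(f"theoretical pace: {3600 / total_seconds:.0f} cycles per hour")
```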
Problems related to measures of usability include the fact that measures of learning and retention of how to use an interface are rarely employed and that some studies treat measures of how users interact with interfaces as synonymous with quality-in-use, despite an unclear relation.

Although field methods can be extremely useful because they are conducted in the users' natural environment, they have some major limitations to consider. The limitations include:

1. Usually take more time and resources than other methods

2. Very high effort in planning, recruiting, and executing compared with other methods

3. Much longer study periods, therefore requiring much goodwill among the participants

4. Studies are longitudinal in nature; therefore, attrition can become a problem.

Environmental ethics

Environmental ethics is the discipline in philosophy that studies the moral relationship of human beings to, and also the value and moral status of, the environment and its non-human contents. This entry covers: (1) the challenge of environmental ethics to the anthropocentrism embedded in traditional western ethical thinking; (2) the early development of the discipline in the 1960s and 1970s; (3) the connection of deep ecology, feminist environmental ethics, animism and social ecology to politics; (4) the attempt to apply traditional ethical theories, including consequentialism, deontology, and virtue ethics, to support contemporary environmental concerns; (5) the preservation of biodiversity as an ethical goal; (6) the broader concerns of some thinkers with wilderness, the built environment and the politics of poverty; (7) the ethics of sustainability and climate change; and (8) some directions for possible future developments of the discipline.

Suppose putting out natural fires, culling feral animals or destroying some individual members of overpopulated indigenous species is necessary for the protection of the integrity of a certain ecosystem. Will these actions be morally permissible or even required? Is it morally acceptable for farmers in non-industrial countries to practise slash-and-burn techniques to clear areas for agriculture? Consider a mining company which has performed open-pit mining in some previously unspoiled area. Does the company have a moral obligation to restore the landform and surface ecology? And what is the value of a humanly restored environment compared with the originally natural environment? It is often said to be morally wrong for human beings to pollute and destroy parts of the natural environment and to consume a huge proportion of the planet's natural resources. If that is wrong, is it simply because a sustainable environment is essential to human well-being? Or is such behaviour also wrong because the natural environment and/or its various contents have certain values in their own right so that these values ought to be respected and protected in any case? These are among the questions investigated by environmental ethics. Some of them are specific questions faced by individuals in particular circumstances, while others are more global questions faced by groups and communities. Yet others are more abstract questions concerning the value and moral standing of the natural environment and its non-human components.

In the literature on environmental ethics the distinction between instrumental value and intrinsic value has been of considerable importance. The former is the value of things as means to further some other ends, whereas the latter is the value of things as ends in themselves regardless of whether they are also useful as means to other ends. For instance, certain fruits have instrumental value for bats who feed on them, since feeding on the fruits is a means to survival for the bats. However, it is not widely agreed that fruits have value as ends in themselves. We can likewise think of a person who teaches others as having instrumental value for those who want to acquire knowledge. Yet, in addition to any such value, it is normally said that a person, as a person, has intrinsic value, i.e., value in his or her own right independently of his or her prospects for serving the ends of others.

For another example, a certain wild plant may have instrumental value because it provides the ingredients for some medicine or as an aesthetic object for human observers. But if the plant also has some value in itself independently of its prospects for furthering some other ends such as human health, or the pleasure from aesthetic experience, then the plant also has intrinsic value. Because the intrinsically valuable is that which is good as an end in itself, it is commonly agreed that something's possession of intrinsic value generates a prima facie direct moral duty on the part of moral agents to protect it or at least refrain from damaging it.

Many traditional western ethical perspectives, however, are anthropocentric or human-centered in that either they assign intrinsic value to human beings alone or they assign a significantly greater amount of intrinsic value to human beings than to any non-human things, such that the protection or promotion of human interests or well-being at the expense of non-human things turns out to be nearly always justified. For example, Aristotle maintains that "nature has made all things specifically for the sake of man" and that the value of non-human things in nature is merely instrumental. Generally, anthropocentric positions find it problematic to articulate what is wrong with the cruel treatment of non-human animals, except to the extent that such treatment may lead to bad consequences for human beings. Immanuel Kant, for instance, suggests that cruelty towards a dog might encourage a person to develop a character which would be desensitized to cruelty towards humans. From this standpoint, cruelty towards non-human animals would be instrumentally, rather than intrinsically, wrong. Likewise, anthropocentrism often recognizes some non-intrinsic wrongness of anthropogenic (i.e. human-caused) environmental devastation. Such destruction might damage the well-being of human beings now and in the future, since our well-being is essentially dependent on a sustainable environment.

When environmental ethics emerged as a new sub-discipline of philosophy in the early 1970s, it did so by posing a challenge to traditional anthropocentrism. In the first place, it questioned the assumed moral superiority of human beings to members of other species on earth. In the second place, it investigated the possibility of rational arguments for assigning intrinsic value to the natural environment and its non-human contents. It should be noted, however, that some theorists working in the field see no need to develop new, non-anthropocentric theories. Instead, they advocate what may be called enlightened anthropocentrism. Briefly, this is the view that all the moral duties we have towards the environment are derived from our direct duties to its human inhabitants. The practical purpose of environmental ethics, they maintain, is to provide moral grounds for social policies aimed at protecting the earth's environment and remedying environmental degradation. Enlightened anthropocentrism, they argue, is sufficient for that practical purpose, and perhaps even more effective in delivering pragmatic outcomes, in terms of policy-making, than non-anthropocentric theories, given the theoretical burden on the latter to provide sound arguments for its more radical view that the non-human environment has intrinsic value. Furthermore, some prudential anthropocentrists may hold what might be called cynical anthropocentrism, which says that we have a higher-level anthropocentric reason to be non-anthropocentric in our day-to-day thinking. Suppose that a day-to-day non-anthropocentrist tends to act more benignly towards the non-human environment on which human well-being depends. This would provide reason for encouraging non-anthropocentric thinking, even to those who find the idea of non-anthropocentric intrinsic value hard to swallow. In order for such a strategy to be effective one may need to hide one's cynical anthropocentrism from others and even from oneself. The position can be structurally compared to some indirect form of consequentialism and may attract parallel critiques.

Although nature was the focus of much nineteenth and twentieth century philosophy, contemporary environmental ethics only emerged as an academic discipline in the 1970s. The questioning and rethinking of the relationship of human beings with the natural environment over the last thirty years reflected an already widespread perception in the 1960s that the late twentieth century faced a human population explosion as well as a serious environmental crisis. Among the accessible work that drew attention to a sense of crisis was Rachel Carson's Silent Spring, which consisted of a number of essays earlier published in the New Yorker magazine detailing how pesticides such as DDT, aldrin and dieldrin concentrated through the food web. Commercial farming practices aimed at maximizing crop yields and profits, Carson speculates, are capable of impacting simultaneously on environmental and public health.


In a much cited essay on the historical roots of the environmental crisis, historian Lynn White argued that the main strands of Judeo-Christian thinking had encouraged the overexploitation of nature by maintaining the superiority of humans over all other forms of life on earth, and by depicting all of nature as created for the use of humans. White's thesis was widely discussed in theology and history, and has been subject to some sociological testing as well as being regularly discussed by philosophers. Central to the rationale for his thesis were the works of the Church Fathers and The Bible itself, supporting the anthropocentric perspective that humans are the only things that matter on Earth. Consequently, they may utilize and consume everything else to their advantage without any injustice. For example, Genesis 1:27–8 states:

"God created man in his own image, in the image of God created he him; male and female created he them. And God blessed them, and God said unto them, Be fruitful, and multiply, and replenish the earth, and subdue it: and have dominion over fish of the sea, and over fowl of the air, and over every living thing that moveth upon the earth." Likewise, Thomas Aquinas argued that non-human animals are "ordered to man's use". According to White, the Judeo-Christian idea that humans are created in the image of the transcendent supernatural God, who is radically separate from nature, also by extension radically separates humans themselves from nature. This ideology further opened the way for untrammeled exploitation of nature. Modern Western science itself, White argued, was "cast in the matrix of Christian theology" so that it too inherited the "orthodox Christian arrogance toward nature". Clearly, without technology and science, the environmental extremes to which we are now exposed would probably not be realized. The point of White's thesis, however, is that given the modern form of science and technology, Judeo-Christianity itself provides the original deep-seated drive to unlimited exploitation of nature. Nevertheless, White argued that some minority traditions within Christianity might provide an antidote to the "arrogance" of a mainstream tradition steeped in anthropocentrism.

Around the same time, the Stanford ecologists Paul and Anne Ehrlich warned in The Population Bomb that the growth of human population threatened the viability of planetary life-support systems. The sense of environmental crisis stimulated by those and other popular works was intensified by NASA's production and wide dissemination of a particularly potent image of earth from space taken at Christmas 1968 and featured in the Scientific American in September 1970. Here, plain to see, was a living, shining planet voyaging through space and shared by all of humanity, a precious vessel vulnerable to pollution and to the overuse of its limited capacities. In 1972 a team of researchers at MIT led by Dennis Meadows published the Limits to Growth study, a work that summed up in many ways the emerging concerns of the previous decade and the sense of vulnerability triggered by the view of the earth from space. In the commentary to the study, the researchers wrote:

"We affirm finally that any deliberate attempt to reach a rational and enduring state of equilibrium by planned measures, rather than by chance or catastrophe, must ultimately be founded on a basic change of values and goals at individual, national and world levels." The call for a "basic change of values" in connection to the environment reflected a need for the development of environmental ethics as a new sub-discipline of philosophy.

The new field emerged almost simultaneously in three countries – the United States, Australia, and Norway. In the first two of these countries, direction and inspiration largely came from the earlier twentieth century American literature of the environment. For instance, the Scottish emigrant John Muir and subsequently the forester Aldo Leopold had advocated an appreciation and conservation of things "natural, wild and free". Their concerns were motivated by a combination of ethical and aesthetic responses to nature as well as a rejection of crudely economic approaches to the value of natural objects (a historical survey of the confrontation between Muir's reverentialism and the human-centred conservationism of Gifford Pinchot, one of the major influences on the development of the US Forest Service, is provided in Norton 1991; also see Cohen 1984 and Nash). Leopold's A Sand County Almanac, in particular, advocated the adoption of a "land ethic": "That land is a community is the basic concept of ecology, but that land is to be loved and respected is an extension of ethics. A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise." However, Leopold himself provided no systematic ethical theory or framework to support these ethical ideas concerning the environment. His views therefore presented a challenge and opportunity for moral theorists: could some ethical theory be devised to justify the injunction to preserve the integrity, stability and beauty of the biosphere?

The land ethic sketched by Leopold, attempting to extend our moral concern to cover the natural environment and its non-human contents, was drawn on explicitly by the Australian philosopher Richard Routley. According to Routley, the anthropocentrism embedded in what he called the "dominant western view", or "the western superethic", is in effect "human chauvinism". This view, he argued, is just another form of class chauvinism, which is simply based on blind class "loyalty" or prejudice, and unjustifiably discriminates against those outside the privileged class. Echoing the plot of a popular movie some three years earlier, Routley speculates in his "last man" arguments about a hypothetical situation in which the last person, surviving a world catastrophe, acts to ensure the elimination of all other living things, and the last people set about destroying forests and ecosystems before their demise. From the human-chauvinistic perspective, the last person would do nothing morally wrong, since his or her destructive act in question would not cause any damage to the interest and well-being of humans, who would by then have disappeared. Nevertheless, Routley points out that there is a moral intuition that the imagined last acts would be morally wrong. An explanation for this judgment, he argued, is that those non-human objects in the environment, whose destruction is ensured by the last person or last people, have intrinsic value, a kind of value independent of their usefulness for humans. From his critique, Routley concluded that the main approaches in traditional western moral thinking were unable to allow the recognition that natural things have intrinsic value, and that the tradition required overhaul of a significant kind.

Leopold's idea that the "land" as a whole is an object of our moral concern also stimulated writers to argue for certain moral obligations toward ecological wholes, such as species, communities, and ecosystems, not just their individual constituents. The U.S.-based theologian and environmental philosopher Holmes Rolston III, for instance, argued that species protection was a moral duty. It would be wrong, he maintained, to eliminate a rare butterfly species simply to increase the monetary value of specimens already held by collectors. Like Routley's "last man" arguments, Rolston's example is meant to draw attention to a kind of action that seems morally dubious and yet is not clearly ruled out or condemned by traditional anthropocentric ethical views. Species, Rolston went on to argue, are intrinsically valuable and are usually more valuable than individual specimens, since the loss of a species is a loss of genetic possibilities and the deliberate destruction of a species would show disrespect for the very biological processes which make possible the emergence of individual living things. Natural processes deserve respect, according to Rolston's quasi-religious perspective, because they constitute a nature which is itself intrinsically valuable.

Meanwhile, the work of Christopher Stone had become widely discussed. Stone proposed that trees and other natural objects should have at least the same standing in law as corporations. This suggestion was inspired by a particular case in which the Sierra Club had mounted a challenge against the permit granted by the U.S. Forest Service to Walt Disney Enterprises for surveys preparatory to the development of the Mineral King Valley, which was at the time a relatively remote game refuge, but not designated as a national park or protected wilderness area. The Disney proposal was to develop a major resort complex serving 14,000 visitors daily, to be accessed by a purpose-built highway through Sequoia National Park. The Sierra Club, as a body with a general concern for wilderness conservation, challenged the development on the grounds that the valley should be kept in its original state for its own sake.

Stone reasoned that if trees, forests and mountains could be given standing in law, then they could be represented in their own right in the courts by groups such as the Sierra Club. Moreover, like any other legal person, these natural things could become beneficiaries of compensation if it could be shown that they had suffered compensatable injury through human activity. When the case went to the U.S. Supreme Court, it was determined by a narrow majority that the Sierra Club did not meet the condition for bringing a case to court, for the Club was unable and unwilling to prove the likelihood of injury to the interest of the Club or its members. In a dissenting minority judgment, however, justices Douglas, Blackmun and Brennan mentioned Stone's argument: his proposal to give legal standing to natural things, they said, would allow conservation interests, community needs and business interests to be represented, debated and settled in court.

Reacting to Stone's proposal, Joel Feinberg raised a serious problem. Only items that have interests, Feinberg argued, can be regarded as having legal standing and, likewise, moral standing. For it is interests which are capable of being represented in legal proceedings and moral debates. This same point would also seem to apply to political debates. For instance, the movement for "animal liberation", which also emerged strongly in the 1970s, can be thought of as a political movement aimed at representing the previously neglected interests of some animals. Granted that some animals have interests that can be represented in this way, would it also make sense to speak of trees, forests, rivers, barnacles, or termites as having interests of a morally relevant kind? This issue was hotly contested in the years that followed. Meanwhile, John Passmore argued, like White, that the Judeo-Christian tradition of thought about nature, despite being predominantly "despotic", contained resources for regarding humans as "stewards" or "perfectors" of God's creation. Skeptical of the prospects for any radically new ethic, Passmore cautioned that traditions of thought could not be abruptly overhauled. Any change in attitudes to our natural surroundings which stood the chance of widespread acceptance, he argued, would have to resonate and have some continuities with the very tradition which had legitimized our destructive practices. In sum, then, Leopold's land ethic, the historical analyses of White and Passmore, the pioneering work of Routley, Stone and Rolston, and the warnings of scientists had by the late 1970s focused the attention of philosophers and political theorists firmly on the environment.

The confluence of ethical, political and legal debates about the environment, the emergence of philosophies to underpin animal rights activism, and the puzzles over whether an environmental ethic would be something new rather than a modification or extension of existing ethical theories were reflected in wider social and political movements. The rise of environmental or "green" parties in Europe in the 1980s was accompanied by almost immediate schisms between groups known as "realists" versus "fundamentalists". The "realists" stood for reform environmentalism, working with business and government to soften the impact of pollution and resource depletion, especially on fragile ecosystems or endangered species. The "fundies" argued for radical change, the setting of stringent new priorities, and even the overthrow of capitalism and liberal individualism, which were taken as the major ideological causes of anthropogenic environmental devastation. It is not clear, however, that collectivist or communist countries do any better in terms of their environmental record.

Underlying these political disagreements was the distinction between "shallow" and "deep" environmental movements, a distinction introduced in the early 1970s by another major influence on contemporary environmental ethics, the Norwegian philosopher and climber Arne Næss. Since the work of Næss has been significant in environmental politics, the discussion of his position is given in a separate section below. "Deep ecology" was born in Scandinavia, the result of discussions between Næss and his colleagues Sigmund Kvaløy and Nils Faarlund (see Witoszek and Brennan 1999 for a historical survey and commentary on the development of deep ecology). All three shared a passion for the great mountains. On a visit to the Himalayas, they became impressed with aspects of "Sherpa culture", particularly when they found that their Sherpa guides regarded certain mountains as sacred and accordingly would not venture onto them. Subsequently, Næss formulated a position which extended the reverence the three Norwegians and the Sherpas felt for mountains to other natural things in general.

The "shallow ecology movement", as Næss calls it, is the "fight against pollution and resource depletion", the central objective of which is "the health and affluence of people in the developed countries." The "deep ecology movement", in contrast, endorses "biospheric egalitarianism", the view that all living things are alike in having value in their own right, independent of their usefulness to others. The deep ecologist respects this intrinsic value, taking care, for example, when walking on the mountainside not to cause unnecessary damage to the plants.

Inspired by Spinoza's metaphysics, another key feature of Næss's deep ecology is the rejection of atomistic individualism. The idea that a human being is such an individual possessing a separate essence, Næss argues, radically separates the human self from the rest of the world. To make such a separation not only leads to selfishness towards other people, but also induces human selfishness towards nature. As a counter to egoism at both the individual and species level, Næss proposes the adoption of an alternative relational "total-field image" of the world. According to this relationalism, organisms are best understood as "knots" in the biospherical net. The identity of a living thing is essentially constituted by its relations to other things in the world, especially its ecological relations to other living things. If people conceptualise themselves and the world in relational terms, the deep ecologists argue, then people will take better care of nature and the world in general.

As developed by Næss and others, the position also came to focus on the possibility of the identification of the human ego with nature. The idea is, briefly, that by identifying with nature I can enlarge the boundaries of the self beyond my skin. My larger, ecological Self deserves respect as well. To respect and to care for my Self is also to respect and to care for the natural environment, which is actually part of me and with which I should identify. "Self-realization", in other words, is the reconnection of the shriveled human individual with the wider natural environment. Næss maintains that the deep satisfaction that we receive from identification with nature and close partnership with other forms of life in nature contributes significantly to our life quality.

When Næss's view crossed the Atlantic, it was sometimes merged with ideas emerging from Leopold's land ethic. But Næss – wary of the apparent totalitarian political implications of Leopold's position that individual interests and well-being should be subordinated to the holistic good of the earth's biotic community – has always taken care to distance himself from advocating any sort of "land ethic".

Some critics have argued that Næss's deep ecology is no more than an extended social-democratic version of utilitarianism, which counts human interests in the same calculation alongside the interests of all natural things in the natural environment. However, Næss failed to explain in any detail how to make sense of the idea that oysters or barnacles, termites or bacteria could have interests of any morally relevant sort at all. Without an account of this, Næss's early "biospheric egalitarianism" – that all living things whatsoever had a similar right to live and flourish – was an indeterminate principle in practical terms. It also remains unclear in what sense rivers, mountains and forests can be regarded as possessors of any kind of interests. This is an issue on which Næss always remained elusive.

Biospheric egalitarianism was modified in the 1980s to the weaker claim that the flourishing of both human and non-human life have value in themselves. At the same time, Næss declared that his own favoured ecological philosophy – "Ecosophy T", as he called it after his Tvergastein mountain cabin – was only one of several possible foundations for an environmental ethic. Deep ecology ceased to be a specific doctrine, but instead became a "platform", of eight simple points, on which Næss hoped all deep green thinkers could agree. The platform was conceived as establishing a middle ground, between underlying philosophical orientations, whether Christian, Buddhist, Daoist, process philosophy, or whatever, and the practical principles for action in specific situations, principles generated from the underlying philosophies. Thus the deep ecological movement became explicitly pluralist.

While Næss's Ecosophy T sees human Self-realization as a solution to the environmental crises resulting from human selfishness and exploitation of nature, some of the followers of the deep ecology platform in the United States and Australia further argue that the expansion of the human self to include non-human nature is supported by the Copenhagen interpretation of quantum theory, which is said to have dissolved the boundaries between the observer and the observed. These "relationalist" developments of deep ecology are, however, criticized by some feminist theorists. The idea of nature as part of oneself, one might argue, could justify the continued exploitation of nature instead. For one is presumably more entitled to treat oneself in whatever ways one likes than to treat another independent agent in whatever ways one likes. According to some feminist critics, the deep ecological theory of the "expanded self" is in effect a disguised form of human colonialism, unable to give nature its due as a genuine "other" independent of human interest and purposes.

Meanwhile, some third-world critics accused deep ecology of being elitist in its attempts to preserve wilderness experiences for only a select group of economically and socio-politically well-off people. The Indian writer Ramachandra Guha, for instance, depicts the activities of many western-based conservation groups as a new form of cultural imperialism, aimed at securing converts to conservationism. "Green missionaries", as Guha calls them, represent a movement aimed at further dispossessing the world's poor and indigenous people. "Putting deep ecology in its place," he writes, "is to recognize that the trends it derides as 'shallow' ecology might in fact be varieties of environmentalism that are more apposite, more representative and more popular in the countries of the South." Although Næss himself repudiates suggestions that deep ecology is committed to any imperialism, Guha's criticism raises important questions about the application of deep ecological principles in different social, economic and cultural contexts. Finally, in other critiques, deep ecology is portrayed as having an inconsistent utopian vision.

Broadly speaking, a feminist issue is any that contributes in some way to understanding the oppression of women. Feminist theories attempt to analyze women's oppression, its causes and consequences, and suggest strategies and directions for women's liberation. By the mid 1970s, feminist writers had raised the issue of whether patriarchal modes of thinking encouraged not only widespread inferiorizing and colonizing of women, but also of people of colour, animals and nature. Sheila Collins, for instance, argued that male-dominated culture or patriarchy is supported by four interlocking pillars: sexism, racism, class exploitation, and ecological destruction.


Emphasizing the importance of feminism to the environmental movement and various other liberation movements, some writers, such as Ynestra King, argue that the domination of women by men is historically the original form of domination in human society, from which all other hierarchies – of rank, class, and political power – flow. For instance, human exploitation of nature may be seen as a manifestation and extension of the oppression of women, in that it is the result of associating nature with the female, which had been already inferiorized and oppressed by the male-dominating culture. But within the plurality of feminist positions, other writers, such as Val Plumwood, understand the oppression of women as only one of the many parallel forms of oppression sharing and supported by a common ideological structure, in which one party uses a number of conceptual and rhetorical devices to privilege its interests over those of the other party. Facilitated by a common structure, seemingly diverse forms of oppression can mutually reinforce each other.

Not all feminist theorists would call that common underlying oppressive structure "androcentric" or "patriarchal". But it is generally agreed that core features of the structure include "dualism", hierarchical thinking, and the "logic of domination", which are typical of, if not essential to, male-chauvinism. These patterns of thinking and conceptualizing the world, many feminist theorists argue, also nourish and sustain other forms of chauvinism, including human-chauvinism, which is responsible for much human exploitation of, and destructiveness towards, nature. The dualistic way of thinking, for instance, sees the world in polar opposite terms, such as male/female, masculinity/femininity, reason/emotion, freedom/necessity, active/passive, mind/body, pure/soiled, white/coloured, civilized/primitive, transcendent/immanent, human/animal, culture/nature. Furthermore, under dualism all the first items in these contrasting pairs are assimilated with each other, and all the second items are likewise linked with each other. For example, the male is seen to be associated with the rational, active, creative, Cartesian human mind, and civilized, orderly, transcendent culture; whereas the female is regarded as tied to the emotional, passive, determined animal body, and primitive, disorderly, immanent nature. These interlocking dualisms are not just descriptive dichotomies, according to the feminists, but involve a prescriptive privileging of one side of the opposed items over the other. Dualism confers superiority on everything on the male side, but inferiority on everything on the female side. The "logic of domination" then dictates that those on the superior side are morally entitled to dominate and utilize those on the inferior side as mere means.

The problem with dualistic and hierarchical modes of thinking, however, is not just that they are epistemically unreliable. It is not just that the dominating party often falsely sees the dominated party as lacking (or possessing) the allegedly superior qualities, or that the dominated party often internalizes false stereotypes of itself given by its oppressors, or that stereotypical thinking often overlooks salient and important differences among individuals. More important, according to feminist analyses, the very premise of prescriptive dualism – the valuing of attributes of one polarized side and the devaluing of those of the other, the idea that domination and oppression can be justified by appealing to attributes like masculinity, rationality, being civilized or developed, etc. – is itself problematic.

Feminism represents a radical challenge for environmental thinking, politics, and traditional social ethical perspectives. It promises to link environmental questions with wider social problems concerning various kinds of discrimination and exploitation, and fundamental investigations of human psychology. However, whether there are conceptual, causal or merely contingent connections among the different forms of oppression and liberation remains a contested issue. The term "ecofeminism" or "ecological feminism" was for a time generally applied to any view that combines environmental advocacy with feminist analysis. However, because of the varieties of, and disagreements among, feminist theories, the label may be too wide to be informative and has generally fallen from use.

An often overlooked source of ecological ideas is the work of the neo-Marxist Frankfurt School of critical theory founded by Max Horkheimer and Theodor Adorno. While classical Marxists regard nature as a resource to be transformed by human labour and utilized for human purposes, Horkheimer and Adorno saw Marx himself as representative of the problem of "human alienation". At the root of this alienation, they argue, is a narrow positivist conception of rationality – which sees rationality as an instrument for pursuing progress, power and technological control, and takes observation, measurement and the application of purely quantitative methods to be capable of solving all problems. Such a positivistic view of science combines determinism with optimism. Natural processes as well as human activities are seen to be predictable and manipulable. Nature is no longer mysterious, uncontrollable, or fearsome. Instead, it is reduced to an object strictly governed by natural laws, which therefore can be studied, known, and employed to our benefit. By promising limitless knowledge and power, the positivism of science and technology not only removes our fear of nature, the critical theorists argue, but also destroys our sense of awe and wonder towards it. That is to say, positivism "disenchants" nature – along with everything that can be studied by the sciences, whether natural, social or human.

The progress in knowledge and material well-being may not be a bad thing in itself, where the consumption and control of nature is a necessary part of human life. However, the critical theorists argue that the positivistic disenchantment of natural things disrupts our relationship with them, encouraging the undesirable attitude that they are nothing more than things to be probed, consumed and dominated. According to the critical theorists, the oppression of "outer nature" through science and technology is bought at a very high price: the project of domination requires the suppression of our own "inner nature" – e.g., human creativity, autonomy, and the manifold needs, vulnerabilities and longings at the centre of human life. To remedy such an alienation, the project of Horkheimer and Adorno is to replace the narrow positivistic and instrumentalist model of rationality with a more humanistic one, in which the values of the aesthetic, moral, sensuous and expressive aspects of human life play a central part. Thus, their aim is not to give up our rational faculties or powers of analysis and logic. Rather, the ambition is to arrive at a dialectical synthesis between Romanticism and Enlightenment, to return to anti-deterministic values of freedom, spontaneity and creativity.

In his later work, Adorno advocates a re-enchanting aesthetic attitude of "sensuous immediacy" towards nature. Not only do we stop seeing nature as primarily, or simply, an object of consumption, we are also able to be directly and spontaneously acquainted with nature without interventions from our rational faculties. According to Adorno, works of art, like natural things, always involve an "excess", something more than their mere materiality and exchange value. The re-enchantment of the world through aesthetic experience, he argues, is also at the same time a re-enchantment of human lives and purposes. Adorno's work remains largely unexplored in mainstream environmental philosophy, although the idea of applying critical theory to both environmental issues and the writings of various ethical and political theorists has spawned an emerging field of "ecocritique" or "eco-criticism".

Some students of Adorno's work have argued that his account of the role of "sensuous immediacy" can be understood as an attempt to defend a "legitimate anthropomorphism" that comes close to a weak form of animism. Others, more radical, have claimed to take inspiration from his notion of "non-identity", which, they argue, can be used as the basis for a deconstruction of the notion of nature and perhaps even its elimination from eco-critical writing. For example, Timothy Morton argues that "putting something called Nature on a pedestal and admiring it from afar does for the environment what patriarchy does for the figure of Woman. It is a paradoxical act of sadistic admiration", and that ecocritique, "in the name of all that we value in the idea of 'nature', thoroughly examines how nature is set up as a transcendental, unified, independent category. Ecocritique does not think that it is paradoxical to say, in the name of ecology itself: 'down with nature!'".

It remains to be seen, however, whether the radical attempt to purge the concept of nature from eco-critical work meets with success. Likewise, it is unclear whether the dialectic project on which Horkheimer and Adorno embarked is coherent, and whether Adorno, in particular, has a consistent understanding of "nature" and "rationality".

On the other hand, the new animists have been much inspired by the serious way in which some indigenous peoples placate and interact with animals, plants and inanimate things through ritual, ceremony and other practices. According to the new animists, the replacement of traditional animism by a form of disenchanting positivism directly leads to an anthropocentric perspective, which is accountable for much human destructiveness towards nature. In a disenchanted world, there is no meaningful order of things or events outside the human domain, and there is no source of sacredness or dread of the sort felt by those who regard the natural world as peopled by divinities or demons. When a forest is no longer sacred, there are no spirits to be placated and no mysterious risks associated with clear-felling it. A disenchanted nature is no longer alive. It commands no respect, reverence or love. It is nothing but a giant machine, to be mastered to serve human purposes. The new animists argue for reconceptualizing the boundary between persons and non-persons. For them, "living nature" comprises not only humans, animals and plants, but also mountains, forests, rivers, deserts, and even planets.


Whether the notion that a mountain or a tree is to be regarded as a person is taken literally or not, the attempt to engage with the surrounding world as if it consists of other persons might possibly provide the basis for a respectful attitude to nature. If disenchantment is a source of environmental problems and destruction, then the new animism can be regarded as attempting to re-enchant, and help to save, nature. More poetically, David Abram has argued that a phenomenological approach of the kind taken by Merleau-Ponty can reveal to us that we are part of the "common flesh" of the world, that we are in a sense the world thinking itself.

In her work, Freya Mathews has tried to articulate a version of animism or panpsychism that captures ways in which the world contains many kinds of consciousness and sentience. For her, there is an underlying unity of mind and matter in that the world is a "self-realizing" system containing a multiplicity of other such systems. According to Mathews, we are meshed in communication, and potential communication, with the "One" and its many lesser selves. Materialism, she argues, is self-defeating by encouraging a form of "collective solipsism" that treats the world either as unknowable or as a social construction. Mathews also takes inspiration from her interpretation of the core Daoist idea of wuwei as "letting be" and bringing about change through "effortless action". The focus in environmental management, development and commerce should be on "synergy" with what is already in place rather than on demolition, replacement and disruption. Instead of bulldozing away old suburbs and derelict factories, the synergistic panpsychist sees these artefacts as themselves part of the living cosmos, hence part of what is to be respected. Likewise, instead of trying to eliminate feral or exotic plants and animals, and restore environments to some imagined pristine state, ways should be found – wherever possible – to promote synergies between the newcomers and the older native populations in ways that maintain ecological flows and promote the further unfolding and developing of ecological processes. Panpsychism, Mathews argues, frees us from the "ideological grid of capitalism", can reduce our desire for consumer novelties, and can allow us and the world to grow old together with grace and dignity.

In summary, if disenchantment is a source of environmentally destructive or uncaring attitudes, then both the aesthetic and the animist/panpsychist re-enchantment of the world are intended to offer an antidote to such attitudes, and perhaps also inspirations for new forms of managing and designing for sustainability.

Apart from feminist-environmentalist theories and Næss's deep ecology, Murray Bookchin's "social ecology" has also claimed to be radical, subversive, or countercultural. Bookchin's version of critical theory takes the "outer" physical world as constituting what he calls "first nature", from which culture or "second nature" has evolved. Environmentalism, on his view, is a social movement, and the problems it confronts are social problems. While Bookchin is prepared, like Horkheimer and Adorno, to regard nature as an aesthetic and sensuous marvel, he regards our intervention in it as necessary. He suggests that we can choose to put ourselves at the service of natural evolution, to help maintain complexity and diversity, diminish suffering and reduce pollution. Bookchin's social ecology recommends that we use our gifts of sociability, communication and intelligence as if we were "nature rendered conscious", instead of turning them against the very source and origin from which such gifts derive. Exploitation of nature should be replaced by a richer form of life devoted to nature's preservation.

John Clark has argued that social ecology is heir to a historical, communitarian tradition of thought that includes not only the anarchist Peter Kropotkin, but also the nineteenth-century socialist geographer Elisée Reclus, the eccentric Scottish thinker Patrick Geddes and the latter's disciple, Lewis Mumford. Ramachandra Guha has described Mumford as "the pioneer American social ecologist". Mumford adopted a regionalist perspective, arguing that strong regional centres of culture are the basis of "active and securely grounded local life". Like the pessimists in critical theory, Mumford was worried about the emergence under industrialised capitalism of a "megamachine", one that would oppress and dominate human creativity and freedom, and one that – despite being a human product – operates in a way that is out of our control. While Bookchin is more of a technological optimist than Mumford, both writers have inspired a regional turn in environmental thinking. Bioregionalism gives regionalism an environmental twist. This is the view that natural features should provide the defining conditions for places of community, and that secure and satisfying local lives are led by those who know a place, have learned its lore and who adapt their lifestyle to its affordances by developing its potential within ecological limits. Such a life, the bioregionalists argue, will enable people to enjoy the fruits of self-liberation and self-development.


However, critics have asked why natural features should be significant in defining the places in which communities are to be built, and have puzzled over exactly which natural features these should be – geological, ecological, climatic, hydrological, and so on. If relatively small bioregional communities are to be home to flourishing human societies, then a question also arises over the nature of the laws and punishments that will prevail in them, and also over their integration into larger regional and global political and economic groupings. For anarchists and other critics of the predominant social order, a return to self-governing and self-sufficient regional communities is often depicted as liberating and refreshing. But for the skeptics, the worry remains that the bioregional vision is politically over-optimistic and is open to the establishment of illiberal, stifling and undemocratic communities. Further, given its emphasis on local self-sufficiency and the virtue of life in small communities, a question arises over whether bioregionalism is workable on an overcrowded planet.

Deep ecology, feminism, and social ecology have had a considerable impact on the development of political positions in regard to the environment. Feminist analyses have often been welcomed for the psychological insight they bring to several social, moral and political problems. There is, however, considerable unease about the implications of critical theory, social ecology and some varieties of deep ecology and animism. Some writers have argued, for example, that critical theory is bound to be ethically anthropocentric, with nature as no more than a "social construction" whose value ultimately depends on human determinations. Others have argued that the demands of "deep" green theorists and activists cannot be accommodated within contemporary theories of liberal politics and social justice. A further suggestion is that there is a need to reassess traditional theories such as virtue ethics, which has its origins in ancient Greek philosophy, within the context of a form of stewardship similar to that earlier endorsed by Passmore. If this last claim is correct, then the radical activist need not, after all, look for philosophical support in radical, or countercultural, theories of the sort deep ecology, feminism, bioregionalism and social ecology claim to be.

Although environmental ethicists often try to distance themselves from the anthropocentrism embedded in traditional ethical views, they also quite often draw their theoretical resources from traditional ethical systems and theories. Consider the following two basic moral questions:


(1) What kinds of thing are intrinsically valuable, good or bad? (2) What makes an action right or wrong?

Consequentialist ethical theories consider intrinsic "value"/"disvalue" or "goodness"/"badness" to be more fundamental moral notions than "rightness"/"wrongness", and maintain that whether an action is right/wrong is determined by whether its consequences are good/bad. From this perspective, answers to question (2) are informed by answers to question (1). For instance, utilitarianism, a paradigm case of consequentialism, regards pleasure (or, more broadly construed, the satisfaction of interest, desire, and/or preference) as the only intrinsic value in the world, whereas pain (or the frustration of desire, interest, and/or preference) is the only intrinsic disvalue, and maintains that right actions are those that would produce the greatest balance of pleasure over pain.
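As a toy illustration of this decision rule (a sketch added here, not part of the original text; the actions, beings and numbers are all invented for the example), the classical utilitarian calculus can be pictured as a simple aggregation of pleasures and pains across every affected sentient being:

```python
# A minimal sketch of the classical utilitarian decision rule:
# an action is ranked by the sum of pleasure (+) and pain (-) it
# produces across *all* affected sentient beings, human or not.
# The actions and payoff numbers below are purely hypothetical.

def net_utility(effects):
    """Sum pleasure/pain over every affected being; to whom a
    pleasure or pain belongs plays no role in the calculation."""
    return sum(effects.values())

actions = {
    "build road through wetland": {"humans": +40, "birds": -25, "fish": -30},
    "reroute road around wetland": {"humans": +20, "birds": 0, "fish": 0},
}

# The "right" action, on this view, is whichever maximizes the balance.
best = max(actions, key=lambda a: net_utility(actions[a]))
for name, effects in actions.items():
    print(f"{name}: net utility {net_utility(effects):+d}")
print("utilitarian choice:", best)
```

The sketch also makes the paragraph that follows vivid: the calculation is indifferent to whose pleasure or pain is at stake, which is exactly the feature Bentham and Singer exploit in extending moral consideration to non-human animals.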

As the utilitarian focus is the balance of pleasure and pain as such, the question of to whom a pleasure or pain belongs is irrelevant to the calculation and assessment of the rightness or wrongness of actions. Hence, the eighteenth-century utilitarian Jeremy Bentham, and now Peter Singer, have argued that the interests of all the sentient beings – including non-human ones – affected by an action should be taken equally into consideration in assessing the action. Furthermore, rather like Routley, Singer argues that the anthropocentric privileging of members of the species Homo sapiens is arbitrary, and that it is a kind of "speciesism" as unjustifiable as sexism and racism.

Singer regards the animal liberation movement as comparable to the liberation movements of women and people of colour. Unlike the environmental philosophers who attribute intrinsic value to the natural environment and its inhabitants, Singer and utilitarians in general attribute intrinsic value to the experience of pleasure or interest satisfaction as such, not to the beings who have the experience. Similarly, for the utilitarian, non-sentient objects in the environment such as plant species, rivers, mountains, and landscapes, all of which are the objects of moral concern for environmentalists, are of no intrinsic but at most instrumental value to the satisfaction of sentient beings. Furthermore, because right actions, for the utilitarian, are those that maximize the overall balance of interest satisfaction over frustration, practices such as whale-hunting and the killing of an elephant for ivory, which cause suffering to non-human animals, might turn out to be right after all: such practices might produce considerable amounts of interest-satisfaction for human beings, which, on the utilitarian calculation, outweigh the non-human interest-frustration involved. As a result of all the above considerations, it is unclear to what extent a utilitarian ethic can also be an environmental ethic. This point may not so readily apply to a wider consequentialist approach, which attributes intrinsic value not only to pleasure or satisfaction, but also to various objects and processes in the natural environment.

Deontological ethical theories, in contrast, maintain that whether an action is right or wrong is for the most part independent of whether its consequences are good or bad. From the deontologist perspective, there are several distinct moral rules or duties, the observance/violation of which is intrinsically right/wrong; i.e., right/wrong in itself regardless of consequences. When asked to justify an alleged moral rule, duty or its corresponding right, deontologists may appeal to the intrinsic value of those beings to whom it applies. For instance, "animal rights" advocate Tom Regan argues that those animals with intrinsic value have the moral right to respectful treatment, which then generates a general moral duty on our part not to treat them as mere means to other ends. We have, in particular, a prima facie moral duty not to harm them. Regan maintains that certain practices violate the moral right of intrinsically valuable animals to respectful treatment. Such practices, he argues, are intrinsically wrong regardless of whether or not some better consequences ever flow from them. Exactly which animals have intrinsic value and therefore the moral right to respectful treatment? Regan's answer is: those that meet the criterion of being the "subject-of-a-life". To be such a subject is a sufficient condition for having intrinsic value, and to be a subject-of-a-life involves, among other things, having sense-perceptions, beliefs, desires, motives, memory, a sense of the future, and a psychological identity over time.

Some authors have extended concern for individual well-being further, arguing for the intrinsic value of organisms achieving their own good, whether those organisms are capable of consciousness or not. Paul Taylor's version of this view, which we might call biocentrism, is a deontological example. He argues that each individual living thing in nature – whether it is an animal, a plant, or a micro-organism – is a "teleological-center-of-life" having a good or well-being of its own which can be enhanced or damaged, and that all individuals who are teleological-centers-of-life have equal intrinsic value which entitles them to moral respect. Furthermore, Taylor maintains that the intrinsic value of wild living things generates a prima facie moral duty on our part to preserve or promote their goods as ends in themselves, and that any practices which treat those beings as mere means and thus display a lack of respect for them are intrinsically wrong. A more recent and biologically detailed defence of the idea that living things have representations and goals and hence have moral worth is found in Agar 2001. Unlike Taylor's egalitarian and deontological biocentrism, Robin Attfield argues for a hierarchical view: while all beings having a good of their own have intrinsic value, some of them have intrinsic value to a greater extent.

Attfield also endorses a form of consequentialism which takes into consideration, and attempts to balance, the many and possibly conflicting goods of different living things. However, some critics have pointed out that the notion of biological good or well-being is only descriptive, not prescriptive. For instance, even if HIV has a good of its own, this does not mean that we ought to assign any positive moral weight to the realization of that good.

More recently, the distinction between these two traditional approaches has taken its own specific form of development in environmental philosophy. Instead of pitting conceptions of value against conceptions of rights, it has been suggested that there may be two different conceptions of intrinsic value in play in discussion about environmental good and evil. On the one side, there is the intrinsic value of states of affairs that are to be promoted – and this is the focus of the consequentialist thinkers. On the other side, there is the intrinsic value of entities to be respected. These two different foci for the notion of intrinsic value still provide room for fundamental argument between deontologists and consequentialists to continue, albeit in a somewhat modified form.

Note that the ethics of animal liberation or animal rights and biocentrism are both individualistic in that their various moral concerns are directed towards individuals only – not ecological wholes such as species, populations, biotic communities, and ecosystems. None of these is sentient, a subject-of-a-life, or a teleological-center-of-life, but the preservation of these collective entities is a major concern for many environmentalists. Moreover, the goals of animal liberationists, such as the reduction of animal suffering and death, may conflict with the goals of environmentalists. For example, the preservation of the integrity of an ecosystem may require the culling of feral animals or of some indigenous animal populations that threaten to destroy fragile habitats. So there are disputes about whether the ethics of animal liberation is a proper branch of environmental ethics.

Criticizing the individualistic approach in general for failing to accommodate conservation concerns for ecological wholes, J. Baird Callicott once advocated a version of land-ethical holism which takes Leopold's statement "A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise" to be the supreme deontological principle. In this theory, the earth's biotic community per se is the sole locus of intrinsic value, whereas the value of its individual members is merely instrumental and dependent on their contribution to the "integrity, stability, and beauty" of the larger community.

A straightforward implication of this version of the land ethic is that an individual member of the biotic community ought to be sacrificed whenever that is needed for the protection of the holistic good of the community. For instance, Callicott maintains that if culling a white-tailed deer is necessary for the protection of the holistic biotic good, then it is a land-ethical requirement to do so. But, to be consistent, the same point also applies to human individuals because they are also members of the biotic community. Not surprisingly, the misanthropy implied by Callicott's land-ethical holism was widely criticized and regarded as a reductio of the position (see Kheel, Ferré, and Shrader-Frechette). Tom Regan, in particular, condemned the holistic land ethic's disregard of the rights of the individual as "environmental fascism".

Under pressure from the charge of ecofascism and misanthropy, Callicott later revised his position and now maintains that the biotic community and its individual members all have intrinsic value. To further distance himself from the charge of ecofascism, Callicott introduced explicit principles which prioritize obligations to human communities over those to natural ones. He called these "second-order" principles for specifying the conditions under which the land ethic's holistic and individualistic obligations were to be ranked. As he put it, obligations generated by membership in more venerable and intimate communities take precedence over those generated in more recently emerged and impersonal communities. The second second-order principle is that stronger interests generate duties that take precedence over duties generated by weaker interests.

Lo provides an overview and critique of Callicott's changing position over two decades, while Ouderkirk and Hill 2002 gives an overview of debates between Callicott and others concerning the metaethical and metaphysical foundations for the land ethic and also its historical antecedents. As Lo pointed out, the final modified version of the land ethic needs more than two second-order principles, since a third-order principle is needed to specify Callicott's implicit view that the second second-order principle generally countermands the first one when they come into conflict. In his most recent work, Callicott follows Lo's suggestion, while cautioning against aiming for too much precision in specifying the demands of the land ethic.

The controversy surrounding Callicott's original position, however, has inspired efforts in environmental ethics to investigate possibilities of attributing intrinsic value to ecological wholes, not just their individual constituent parts. Following in Callicott's footsteps, and inspired by Næss's relational account of value, Warwick Fox has championed a theory of "responsive cohesion" which apparently gives supreme moral priority to the maintenance of ecosystems and the biophysical world. It remains to be seen if this position escapes the charges of misanthropy and totalitarianism laid against earlier holistic and relational theories of value.

Individual natural entities, Andrew Brennan argues, are not designed by anyone to fulfill any purpose and therefore lack "intrinsic function". This, he proposes, is a reason for thinking that individual natural entities should not be treated as mere instruments, and thus a reason for assigning them intrinsic value. Furthermore, he argues that the same moral point applies to the case of natural ecosystems, to the extent that they lack intrinsic function. In the light of Brennan's proposal, Eric Katz argues that all natural entities, whether individuals or wholes, have intrinsic value in virtue of their ontological independence from human purpose, activity, and interest, and maintains the deontological principle that nature as a whole is an "autonomous subject" which deserves moral respect and must not be treated as a mere means to human ends.

Carrying the project of attributing intrinsic value to nature to its ultimate form, Robert Elliot argues that naturalness itself is a property in virtue of possessing which all natural things, events, and states of affairs attain intrinsic value. Furthermore, Elliot argues that even a consequentialist, who in principle allows the possibility of trading off intrinsic value from naturalness for intrinsic value from other sources, could no longer justify such a trade-off in reality. This is because the reduction of intrinsic value due to the depletion of naturalness on earth, according to him, has reached such a level that any further reduction of it could not be compensated by any amount of intrinsic value generated in other ways, no matter how great it is.

As the notion of "natural" is understood in terms of the lack of human contrivance and is often opposed to the notion of "artifactual", one much contested issue is about the value of those parts of nature that have been interfered with by human artifice – for instance, previously degraded natural environments which have been humanly restored. Based on the premise that the properties of being naturally evolved and having a natural continuity with the remote past are "value adding", Elliot argues that even a perfectly restored environment would necessarily lack those two value-adding properties and therefore be less valuable than the originally undegraded natural environment. Katz, on the other hand, argues that a restored nature is really just an artifact designed and created for the satisfaction of human ends, and that the value of restored environments is merely instrumental.

However, some critics have pointed out that advocates of moral dualism between the natural and the artifactual run the risk of diminishing the value of human life and culture, and fail to recognize that the natural environments interfered with by humans may still have morally relevant qualities other than pure naturalness. Two other issues central to this debate are that the key concept "natural" seems ambiguous in many different ways, and that those who argue that human interference reduces the intrinsic value of nature seem to have simply assumed the crucial premise that naturalness is a source of intrinsic value. Some thinkers maintain that the natural, or the "wild" construed as that which "is not humanized" or to some degree "not under human control", is intrinsically valuable. Yet, as Bernard Williams points out, we may, paradoxically, need to use our technological powers to retain a sense of something not being in our power. The retention of wild areas may thus involve planetary and ecological management to maintain, or even "imprison", such areas, raising a question over the extent to which national parks and wilderness areas are free from our control. An important message underlying the debate, perhaps, is that even if ecological restoration is achievable, it might have been better to have left nature intact in the first place.

Given the significance of the concept of naturalness in these debates, it is perhaps surprising that there has been relatively little analysis of that concept itself in environmental thought. In his pioneering work on the ethics of the environment, Holmes Rolston has worked with a number of different conceptions of the natural. An explicit attempt to provide a conceptual analysis of a different sort is found in Siipi 2008, while an account of naturalness linking this to historical narratives of place is given in O'Neill, Holland and Light 2008, ch. 8.

As an alternative to consequentialism and deontology, both of which consider "thin" concepts such as "goodness" and "rightness" as essential to morality, virtue ethics proposes to understand morality – and assess the ethical quality of actions – in terms of "thick" concepts such as "kindness", "honesty", "sincerity" and "justice". As virtue ethics speaks quite a different language from the other two kinds of ethical theory, its theoretical focus is not so much on what kinds of things are good/bad, or what makes an action right/wrong. Indeed, the richness of the language of virtues, and the emphasis on moral character, is sometimes cited as a reason for exploring a virtues-based approach to the complex and always-changing questions of sustainability and environmental care. One question central to virtue ethics is what the moral reasons are for acting one way or another. For instance, from the perspective of virtue ethics, kindness and loyalty would be moral reasons for helping a friend in hardship. These are quite different from the deontologist's reasons (such as duty) or the consequentialist's (such as the promotion of good consequences).

From the perspective of virtue ethics, the motivation and justification of actions are both inseparable from the character traits of the acting agent. Furthermore, unlike deontology or consequentialism, the moral focus of which is other people or states of the world, one central issue for virtue ethics is how to live a flourishing human life, this being a central concern of the moral agent himself or herself. "Living virtuously" is Aristotle's recipe for flourishing. Versions of virtue ethics advocating virtues such as "benevolence", "piety", "filiality", and "courage" have also been held by thinkers in the Chinese Confucian tradition.

The connection between morality and psychology is another core subject of investigation for virtue ethics. It is sometimes suggested that human virtues, which constitute an important aspect of a flourishing human life, must be compatible with human needs and desires, and perhaps also sensitive to individual affection and temperaments. As its central focus is human flourishing as such, virtue ethics may seem unavoidably anthropocentric and unable to support a genuine moral concern for the non-human environment. But just as Aristotle has argued that a flourishing human life requires friendships, and one can have genuine friendships only if one genuinely values, loves, respects, and cares for one's friends for their own sake, not merely for the benefits that they may bring to oneself, some have argued that a flourishing human life requires the moral capacities to value, love, respect, and care for the non-human natural world as an end in itself.

Despite the variety of positions in environmental ethics developed over the last thirty years, they have focused mainly on issues concerned with wilderness and the reasons for its preservation. The importance of wilderness experience to the human psyche has been emphasized by many environmental philosophers. Næss, for instance, urges us to ensure we spend time dwelling in situations of intrinsic value, whereas Rolston seeks "re-creation" of the human soul by meditating in the wilderness. Likewise, the critical theorists believe that aesthetic appreciation of nature has the power to re-enchant human life.

As wilderness becomes increasingly rare, people's exposure to wild things in their natural state has become reduced, and according to some authors this may reduce the chance of our lives and other values being transformed as a result of interactions with nature. An argument by Bryan Norton draws attention to an analogy with music. Someone exposed for the first time to a new musical genre may undergo a transformation in musical preferences, tastes and values as a result of the experience (Norton 1987). Such a transformation can affect their other preferences and desires too, in both direct and indirect ways. In the attempt to preserve opportunities for experiences that can change or enhance people's valuations of nature, there has been a move since the early 2000s to find ways of rewilding degraded environments, and even parts of cities.

By contrast to the focus on wild places, relatively little attention has been paid to the built environment, although this is the one in which most people spend most of their time. In post-war Britain, for example, cheaply constructed new housing developments were often poor replacements for traditional communities. They have been associated with lower amounts of social interaction and increased crime compared with the earlier situation. The destruction of highly functional high-density traditional housing, indeed, might be compared with the destruction of highly diverse ecosystems and biotic communities. Likewise, the loss of the world's huge diversity of natural languages has been mourned by many, not just professionals with an interest in linguistics. Urban and linguistic environments are just two of the many "places" inhabited by humans. Some philosophical theories about natural environments and objects have potential to be extended to cover built environments and non-natural objects of several sorts.

Certainly there are many parallels between natural and artificial domains: for example, many of the conceptual problems involved in discussing the restoration of natural objects also appear in the parallel context of restoring human-made objects.

The focus on the value of wilderness and the importance of its preservation has overlooked another important problem – namely that lifestyles in which enthusiasms for nature rambles, woodland meditations or mountaineering can be indulged demand a standard of living that is far beyond the dreams of most of the world's population. Moreover, mass access to wild places would likely destroy the very values held in high esteem by the "natural aristocrats", a term used by Hugh Stretton to characterize the environmentalists "driven chiefly by love of the wilderness". Thus, a new range of moral and political problems open up, including the environmental cost of tourist access to wilderness areas, and ways in which limited access could be arranged to areas of natural beauty and diversity, while maintaining the individual freedoms central to liberal democracies.

Lovers of wilderness sometimes consider the high human populations in some developing countries as a key problem underlying the environmental crisis. Rolston, for instance, claims that humans are a kind of planetary "cancer". He maintains that while "feeding people always seems humane, when we face up to what is really going on, by just feeding people, without attention to the larger social results, we could be feeding a kind of cancer." This remark is meant to justify the view that saving nature should, in some circumstances, have a higher priority than feeding people. But such a view has been criticized for seeming to reveal a degree of misanthropy, directed at those human beings least able to protect and defend themselves.

The empirical basis of Rolston's claims has been queried by work showing that poor people are often extremely good environmental managers. Guha's worries about the elitist and "missionary" tendencies of some kinds of deep green environmentalism in certain rich western countries can be quite readily extended to theorists such as Rolston. Can such an apparently elitist sort of wilderness ethics ever be democratised? How can the psychically reviving power of the wild become available to those living in the slums of Calcutta or São Paulo? These questions so far lack convincing answers.


Furthermore, the economic conditions which support the kind of enjoyment of wilderness by Stretton's "natural aristocrats", and more generally the lifestyles of many people in the affluent countries, seem implicated in the destruction and pollution which has provoked the environmental turn in the first place. For those in the richer countries, for instance, engaging in outdoor recreations usually involves the motor car. Car dependency, however, is at the heart of many environmental problems, a key factor in urban pollution, while at the same time central to the economic and military activities of many nations and corporations, for example securing and exploiting oil reserves. In an increasingly crowded industrialised world, the answers to such problems are pressing. Any adequate study of this intertwined set of problems must involve interdisciplinary collaboration among philosophers and theorists in the social as well as the natural sciences.

Connections between environmental destruction, unequal resource consumption, poverty and the global economic order have been discussed by political scientists, development theorists, geographers and economists as well as by philosophers. Links between economics and environmental ethics are particularly well established. Work by Mark Sagoff, for instance, has played a major part in bringing the two fields together. He argues that "as citizens rather than consumers" people are concerned about values, which cannot plausibly be reduced to mere ordered preferences or quantified in monetary terms. Sagoff's distinction between people as consumers and people as citizens was intended to blunt the use of cost-benefit analysis as the final arbiter in discussions about nature's value. Of course, spouses take out insurance on each other's lives. We pay extra for travel insurance to cover the cost of cancellation, illness, or lost baggage. Such actions are economically rational. They provide us with some compensation in case of loss. No-one, however, would regard insurance payments as replacing lost limbs, a loved one or even the joys of a cancelled vacation. So it is for nature, according to Sagoff. We can put dollar values on a stand of timber, a reef, a beach, a national park. We can measure the travel costs, the money spent by visitors, the real estate values, the park fees and all the rest. But these dollar measures do not tell us the value of nature any more than my insurance premiums tell you the value of a human life. If Sagoff is right, cost-benefit analysis of the kind mentioned in section 5 above cannot be a basis for an ethic of sustainability any more than for an ethic of biodiversity. The potentially misleading appeal to economic reason used to justify the expansion of the corporate sector has also come under critical scrutiny by globalisation theorists. These critiques do not aim to eliminate economics from environmental thinking; rather, they resist any reductive, and strongly anthropocentric, tendency to believe that all social and environmental problems are fundamentally or essentially economic.

Other interdisciplinary approaches link environmental ethics with biology, policy studies, public administration, political theory, cultural history, post-colonial theory, literature, geography, and human ecology. Many assessments of issues concerned with biodiversity, ecosystem health, poverty, environmental justice and sustainability look at both human and environmental issues, eschewing in the process commitment either to a purely anthropocentric or purely ecocentric perspective. The future development of environmental ethics depends on these, and other, interdisciplinary synergies, as much as on its anchorage within philosophy.

The Convention on Biological Diversity discussed in section 5 was influenced by Our Common Future, an earlier United Nations document on sustainability produced by the World Commission on Environment and Development. The commission was chaired by Gro Harlem Brundtland, Prime Minister of Norway at the time, and the report is sometimes known as the Brundtland Report. This report noted the increasing tide of evidence that planetary systems vital to supporting life on earth were under strain. The key question it raised is whether it is equitable to sacrifice options for future well-being in favour of supporting current lifestyles, especially the comfortable, and sometimes lavish, forms of life enjoyed in the rich countries. As Bryan Norton puts it, the world faces a global challenge to see whether different human groups, with widely varying perspectives, can perhaps "accept responsibility to maintain a non-declining set of opportunities based on possible uses of the environment".

The preservation of options for the future can be readily linked to notions of equity if it is agreed that "the future ought not to face, as a result of our actions today, a seriously reduced range of options and choices, as they try to adapt to the environment that they face". Note that references to "the future" need not be limited to the future of human beings only. In keeping with the non-anthropocentric focus of much environmental philosophy, a care for sustainability and biodiversity can embrace a care for opportunities available to non-human living things.


However, when the concept "sustainable development" was first articulated in the Brundtland Report, the emphasis was clearly anthropocentric. In the face of increasing evidence that planetary systems vital to life-support were under strain, the concept of sustainable development was constructed in the report to encourage certain globally coordinated directions and types of economic and social development. The report defines "sustainable development" in the following way:

Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs. It contains within it two key concepts:

· the concept of "needs", in particular the essential needs of the world's poor, to which overriding priority should be given; and

· the idea of limitations imposed by the state of technology and social organization on the environment's ability to meet present and future needs.

Thus the goals of economic and social development must be defined in terms of sustainability in all countries – developed or developing, market-oriented or centrally planned. Interpretations will vary, but must share certain general features and must flow from a consensus on the basic concept of sustainable development and on a broad strategic framework for achieving it. The report goes on to argue that "the industrial world has already used much of the planet's ecological capital. This inequality is the planet's main 'environmental' problem; it is also its main 'development' problem". In the concept of sustainable development the report combines the resource economist's notion of "sustainable yield" with the recognition that developing countries of the world are entitled to economic growth and prosperity.

The notion of sustainable yield involves thinking of forests, rivers, oceans and other ecosystems, including the natural species living in them, as a stock of "ecological capital" from which all kinds of goods and services flow. Provided the flow of such goods and services does not reduce the capacity of the capital itself to maintain its productivity, the use of the systems in question is regarded as sustainable. Thus, the report argues that "maximum sustainable yield must be defined after taking into account system-wide effects of exploitation" of ecological capital.
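The underlying idea admits a simple formal gloss. In the standard textbook model of a renewable resource (an illustration added here, not drawn from the report itself), a stock N regenerates logistically and the sustainable harvest is whatever regeneration replaces:

\[
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right) - H,
\]

where $N$ is the stock, $r$ its intrinsic growth rate, $K$ the carrying capacity, and $H$ the harvest. A harvest is sustainable when $H \le rN(1 - N/K)$; the largest such harvest, the maximum sustainable yield, is obtained at $N = K/2$ and equals $rK/4$. Taking "system-wide effects" into account means recognizing that harvesting one stock can lower the effective $r$ or $K$ of others, so the safe yield of a whole ecosystem is generally less than the sum of yields calculated stock by stock.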

There are clear philosophical, political and economic precursors to the Brundtland concept of sustainability. For example, John Stuart Mill distinguished between the "stationary state" and the "progressive state" and argued that at the end of the progressive state lies the stationary state, since "the increase of wealth is not boundless". Mill also recognized a debt to the gloomy prognostications of Thomas Malthus, who had conjectured that population tends to increase geometrically while food resources at best increase only arithmetically, so that demand for food will inevitably outstrip the supply. Reflection on Malthus led Mill to argue for restraining human population growth:

Even in a progressive state of capital, in old countries, a conscientious or prudential restraint on population is indispensable, to prevent the increase of numbers from outstripping the increase of capital, and the condition of the classes who are at the bottom of society from being deteriorated.
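Malthus's conjecture is easy to state formally (the symbols here are an illustrative gloss, not Malthus's own notation). If population grows geometrically while food grows arithmetically,

\[
P_t = P_0 (1+g)^t, \qquad F_t = F_0 + ct, \qquad g > 0,\; c > 0,
\]

then, since any exponential eventually overtakes any linear function, food per head $F_t/P_t$ tends to zero however large the increment $c$ may be – which is why Malthus, and Mill after him, concluded that only restraint on population could prevent immiseration.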

Such warnings resonate with more recent pessimism about increasing human population and its impact on the poorest people, as well as on loss of biodiversity, fresh water scarcity, overconsumption and climate change. In their controversial work The Population Bomb, Paul and Anne Ehrlich argued that without restrictions on population growth, including the imposition of mandatory birth control, the world faced "mass starvation" in the short term. In a subsequent defence of their early work, the Ehrlichs declared that the most serious flaw in their original analysis "was that it was much too optimistic about the future", and comment that "Since The Bomb was written, increases in greenhouse gas flows into the atmosphere, a consequence of the near doubling of the human population and the near tripling of global consumption, indicate that the results will likely be catastrophic climate disruption caused by greenhouse heating". It was also in 1968 that Garrett Hardin published his much-cited article on the "tragedy of the commons", showing that common resources are always subject to degradation and extinction in the face of the rational pursuit of self-interest. For Hardin, the increasing pressure on shared resources, and increasing pollution, are inevitable results of the fact that "there is no technical solution to the population problem". The problem may be analysed from the perspective of the so-called prisoner's dilemma, as the sketch following this paragraph illustrates. Despite the pessimism of writers at the time, and the advocacy of setting limits to population growth, there was also an optimism that echoes Mill's own view that a "stationary state" would not be one of misery and decline, but rather one in which humans could aspire to more equitable distribution of available and limited resources. This is clear not only among those who recognize limits to economic growth but also among those who champion the move to a steady state economy or at least want to see more account taken of ecology in economics.
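To see why the commons has the structure of a many-player prisoner's dilemma, consider the following toy model (purely illustrative; the herders and payoff numbers are invented). Each herder captures the full benefit of adding an animal to the common pasture, while the cost of the resulting overgrazing is shared by everyone:

```python
# Toy model of Hardin's commons as an n-player prisoner's dilemma.
# All quantities are invented for illustration.

N_HERDERS = 10
GAIN_PER_ANIMAL = 1.0    # private benefit of grazing one extra animal
DAMAGE_PER_ANIMAL = 3.0  # total overgrazing cost per animal, shared by all

def payoff(my_extra, others_extra):
    """One herder's payoff: private gain minus an equal share of
    the damage done by *all* extra animals on the commons."""
    total_extra = my_extra + others_extra
    return (my_extra * GAIN_PER_ANIMAL
            - total_extra * DAMAGE_PER_ANIMAL / N_HERDERS)

# Whatever the others do, adding an animal pays for the individual:
for others in (0, 9):
    print(f"others add {others}: restrain -> {payoff(0, others):+.1f}, "
          f"add one -> {payoff(1, others):+.1f}")

# ...yet universal restraint beats universal adding:
print(f"all restrain -> {payoff(0, 0):+.1f} each; "
      f"all add one -> {payoff(1, 9):+.1f} each")
```

Adding an animal strictly dominates restraint (each extra animal yields +1.0 privately but costs its owner only 0.3), yet if all ten herders reason this way each ends up worse off than under mutual restraint – exactly Hardin's point that no purely technical fix changes the incentive structure.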

The Brundtland report puts less emphasis on limits than do Mill, Malthus and these more recent writers. It depicts sustainability as a challenge and opportunity for the world to become more socially, politically and environmentally fair. In pursuit of intergenerational justice, it suggests that there should be new human rights added to the standard list, for example, that "All human beings have the fundamental right to an environment adequate for their health and well being". The report also argues that "The enjoyment of any right requires respect for the similar rights of others, and recognition of reciprocal and even joint responsibilities. States have a responsibility towards their own citizens and other states". Since the report's publication, many writers have supported and defended the view that global and economic justice require that nations which had become wealthy through earlier industrialization and environmental exploitation should allow less developed nations similar or equivalent opportunities for development, especially in terms of access to environmental resources. As intended by the report, the idea of sustainable development has become strongly integrated into the notion of environmental conservation. The report has also set the scene for a range of subsequent international conferences, declarations, and protocols, many of them maintaining the emphasis on the prospects for the future of humanity, rather than considering sustainability in any wider sense.

Some early commentators on the notion of sustainable development have been critical of the way the notion mixes together moral ideas of justice and fairness with technical ideas in economics. The objection is that sustainability, as in part an economic and scientific notion, should not be fused with evaluative ideals. This objection has not generally been widely taken up. Mark Sagoff has observed that environmental policy "is most characterized by the opposition between instrumental values and aesthetic and moral judgments and convictions". Some non-anthropocentric environmental thinkers have found the language of economics unsatisfactory in its implications since it already appears to assume a largely instrumental view of nature. The use of notions such as "asset", "capital" and even the word "resources" in connection with natural objects and systems has been identified by some writers as instrumentalizing natural things which are in essence wild and free.

The objection is that such language promotes the tendency to think of natural things as mere resources for humans or as raw materials with which human labour could be mixed, not only to produce consumable goods, but also to generate human ownership. If natural objects and systems have intrinsic value independent of their possible use for humans, as many environmental philosophers have argued, then a policy approach to sustainability needs to consider the environment and natural things not only in instrumental but also in intrinsic terms, to do justice to the moral standing that many people believe such items possess. Despite its acknowledgment of there being "moral, ethical, cultural, aesthetic, and purely scientific reasons for conserving wild beings", the strongly anthropocentric and instrumental language used throughout the Brundtland report in articulating the notion of sustainable development can be criticised for defining the notion too narrowly, leaving little room for addressing sustainability questions directly concerning the Earth's environment and its non-human inhabitants: should, and if so, how should, human beings reorganise their ways of life and the social-political structures of their communities to allow sustainability and equity not only for all humans but also for the other species on the planet?

The preservation concern for nature and non-human species is addressed to some extent by making a distinction between weaker and stronger conceptions of sustainability. The distinction emerged from considering the question: what exactly does sustainable development seek to sustain? Is it the flow of goods and services from world markets that is to be maintained, or is it the current – or some future – level of consumption? In answering such questions, proponents of weak sustainability argue that it is acceptable to replace natural capital with human-made capital provided that the latter has equivalent functions. If, for example, plastic trees could produce oxygen, absorb carbon and support animal and insect communities, then they could replace the real thing, and a world with functionally equivalent artificial trees would seem just as good as one with real or natural trees in it.

For weak sustainability theorists, the aim of future development should be to maintain a consistently productive stock of capital on which to draw, while not insisting that some portion of that capital be natural. Strong sustainability theorists, by contrast, generally resist the substitution of human for natural capital, insisting that a critical stock of natural things and processes be preserved. By so doing, they argue, we maintain stocks of rivers, forests and biodiverse systems, hence providing maximum options – options in terms of experience, appreciation, values, and ways of life – for the future human inhabitants of the planet.


The Brundtland report can also be seen as advocating a form of strong sustainability in so far as it recommends that a "first priority is to establish the problem of disappearing species and threatened ecosystems on political agendas as a major resource issue".

Furthermore, despite its instrumental and economic language, the report in fact endorses a wider moral perspective on the status of and our relation to nature and non-human species, evidenced by its statement that "the case for the conservation of nature should not rest only with development goals. It is part of our moral obligation to other living beings and future generations". Implicit in the statement is not only a strong conception of sustainability but also a non-anthropocentric conception of the notion. Over time, strong sustainability has come to be focused not only on the needs of human and other living things but also on their rights. In a further development, the discourses on forms of sustainability have generally given way in the last decade to a more ambiguous usage, in which the term "sustainability" functions to bring people into a debate rather than setting out a clear definition of the terms of the debate itself. As globalization leads to greater integration of world economies, the world after the Brundtland report has seen greater fragmentation among viewpoints, where critics of globalization have generally used the concept of sustainability in a plurality of different ways. Some have argued that "sustainability", just like the word "nature" itself, has come to mean very different things, carrying different symbolic meaning for different groups, and reflecting very different interests. For better or for worse, such ambiguity can on occasion allow different parties in negotiations to claim a measure of agreement.

The preservation of opportunities to live well, or at least to have a minimally acceptable level of well-being, is at the heart of population ethics and many contemporary conceptions of sustainability. Many people believe such opportunities for the existing younger generations, and also for the yet-to-arrive future generations, to be under threat from continuing environmental destruction, including loss of fresh water resources, continued clearing of wild areas and a changing climate. Of these, climate change has come to prominence as an area of intense policy and political debate, to which applied philosophers and ethicists have much to contribute. An early exploration of the topic by John Broome shows how the economics of climate change cannot be divorced from considerations of intergenerational justice and ethics, and this has set the scene for subsequent discussions and analyses.


More than a decade later, when Stephen Gardiner analyses the state of affairs surrounding climate change in an article entitled "A Perfect Moral Storm", his starting point is also that ethics plays a fundamental role in all discussions of climate policy. But he argues that even if the difficult ethical and conceptual questions facing climate change could be answered, it would still be close to politically and socially impossible to formulate, let alone to enforce, policies and action plans to deal effectively with climate change. This is due to the multi-faceted nature of a problem that involves vast numbers of agents and players. At a global level, there is first of all the practical problem of motivating shared responsibilities, in part due to the dispersed nature of greenhouse gas emissions, which means the effects of increasing levels of atmospheric carbon and methane are not always felt most strongly in the regions where they originate. Add to this the fact that there is an uncoordinated and dispersed network of agents – both individual and corporate – responsible for greenhouse gas emissions, and that there are no effective institutions that can control and limit them. But this tangle of issues constitutes, Gardiner argues, only one strand in the skein of quandaries that confronts us. There is also the fact that by and large only future generations will bear the brunt of the impacts of climate change, which explains why current generations have no strong incentive to act. Finally, it is evident that our current mainstream political, economic, and ethical models are not up to the task of reaching global consensus, and in many cases not even national consensus, on how best to design and implement fair climate policies.

These considerations lead Gardiner to take a pessimistic view of the prospects for progress on climate issues. His view includes pessimism about technical solutions, such as geoengineering as the antidote to climate problems, echoing the concerns of others that further domination of and large-scale interventions in nature may turn out to be a greater evil than enduring a climate catastrophe. A key point in Gardiner's analysis is that the problem of climate change involves a tangle of issues, the complexity of which conspires to encourage buck-passing, weakness of will, distraction and procrastination, "making us extremely vulnerable to moral corruption". Because of the grave risk of serious harms to future generations, our failure to take timely mitigating actions on climate issues can be seen as a serious moral failing, especially in the light of our current knowledge and understanding of the problem. Summarizing widespread frustration over the issue, Rolston writes: "All this inability to act effectively in the political arena casts a long shadow of doubt on whether, politically or technologically, much less ethically, we humans are anywhere near being smart enough to manage the planet".

In the face of such pessimism about the prospects for securing any action to combat climate change, other writers have cautioned against giving in to defeatism and making self-fulfilling prophecies. These latter behaviours are always a temptation when we confront worrying truths and insufficient answers. Whatever the future holds, many thinkers now believe that solving the problems of climate change is an essential ingredient in any credible form of sustainable development, and that the alternative to decisive action may result in the diminution not only of nature and natural systems, but also of human dignity itself.

Neurophilosophy

The term "neurophilosophy" is often used, either implicitly or explicitly, for characterizing the investigation of philosophical theories in relation to neuroscientific hypotheses. The exact methodological principles and systematic rules for a linkage between philosophical theories and neuroscientific hypotheses, however, remain to be clarified. The present contribution focuses on these principles, as well as on the relation between ontology and epistemology and the characterization of hypotheses in neurophilosophy. Principles of transdisciplinary methodology include the 'principle of asymmetry', the 'principle of bi-directionality' and the 'principle of transdisciplinary circularity'. The 'principle of asymmetry' points to an asymmetric relationship between logical and natural conditions. The 'principle of bi-directionality' asserts the necessity of a bi-directional linkage between natural and logical conditions. The 'principle of transdisciplinary circularity' describes systematic rules for mutual comparison and cross-conditional exchange between philosophical theories and neuroscientific hypotheses. The relation between ontology and epistemology is no longer determined by ontological presuppositions, i.e. "ontological primacy". Instead, there is a correspondence between different 'epistemological capacities' and different kinds of ontology, which in turn results in "epistemic primacy" and "ontological pluralism". The present contribution concludes by rejecting some so-called 'standard arguments', including the 'argument of circularity', the 'argument of categorical fallacy', the 'argument of validity' and the 'argument of necessity'.


7.1.47. Photonics

Photonics is the physical science of light (photon) generation, detection, and manipulation through emission, transmission, modulation, signal processing, switching, amplification, and detection/sensing. Though covering all light's technical applications over the whole spectrum, most photonic applications are in the range of visible and near-infrared light. The term photonics developed as an outgrowth of the first practical semiconductor light emitters invented in the early 1960s and optical fibers developed in the 1970s. The word 'photonics' is derived from the Greek word "photos" meaning light; it appeared in the late 1960s to describe a research field whose goal was to use light to perform functions that traditionally fell within the typical domain of electronics, such as telecommunications, information processing, etc.

Photonics as a field began with the invention of the laser in 1960. Other developments followed: the laser diode in the 1970s, optical fibers for transmitting information, and the erbium-doped fiber amplifier. These inventions formed the basis for the telecommunications revolution of the late 20th century and provided the infrastructure for the Internet. Though coined earlier, the term photonics came into common use in the 1980s as fiber-optic data transmission was adopted by telecommunications network operators. At that time, the term was used widely at Bell Laboratories. Its use was confirmed when the IEEE Lasers and Electro-Optics Society established an archival journal named Photonics Technology Letters at the end of the 1980s.

During the period leading up to the dot-com crash circa 2001, photonics as a field focused largely on optical telecommunications. However, photonics covers a huge range of science and technology applications, including laser manufacturing, biological and chemical sensing, medical diagnostics and therapy, display technology, and optical computing. Further growth of photonics is likely if current silicon photonics developments are successful.

Photonics is closely related to optics. Classical optics long preceded the discovery that light is quantized, when Albert Einstein famously explained the photoelectric effect in 1905. Optics tools include the refracting lens, the reflecting mirror, and various optical components and instruments developed throughout the 15th to 19th centuries. Key tenets of classical optics, such as Huygens' principle, developed in the 17th century, and Maxwell's equations and the wave equations, developed in the 19th, do not depend on quantum properties of light. Photonics is related to quantum optics, optomechanics, electro-optics, optoelectronics and quantum electronics. However, each area has slightly different connotations by scientific and government communities and in the marketplace. Quantum optics often connotes fundamental research, whereas photonics is used to connote applied research and development.

The term photonics more specifically connotes:

· The particle properties of light,

· The potential of creating signal processing device technologies using photons,

· The practical application of optics, and

· An analogy to electronics.

The term optoelectronics connotes devices or circuits that comprise both electrical and optical functions, e.g., a thin-film semiconductor device. The term electro-optics came into earlier use and specifically encompasses nonlinear electrical-optical interactions applied, e.g., as bulk crystal modulators such as the Pockels cell, but also includes advanced imaging sensors typically used for surveillance by civilian or government organizations.

Photonics also relates to the emerging science of quantum information and quantum optics, in those cases where it employs photonic methods. Other emerging fields include optomechanics, which involves the study of the interaction between light and mechanical vibrations of mesoscopic or macroscopic objects; opto-atomics, in which devices integrate both photonic and atomic devices for applications such as precision timekeeping, navigation, and metrology; polaritonics, which differs from photonics in that the fundamental information carrier is a polariton, which is a mixture of photons and phonons, and operates in the range of frequencies from 300 gigahertz to approximately 10 terahertz.

Applications of photonics are ubiquitous. Included are all areas from everyday life to the most advanced science, e.g. light detection, telecommunications, information processing, photonic computing, lighting, metrology, spectroscopy, holography, medicine, military technology, laser material processing, visual art, biophotonics, agriculture, and robotics.

Just as applications of electronics have expanded dramatically since the first transistor was invented in 1948, the unique applications of photonics continue to emerge. Economically important applications for semiconductor photonic devices include optical data recording, fiber-optic telecommunications, laser printing (based on xerography), displays, and optical pumping of high-power lasers. The potential applications of photonics are virtually unlimited and include chemical synthesis, medical diagnostics, on-chip data communication, laser defense, and fusion energy, to name several interesting additional examples.

· Consumer equipment: barcode scanner, printer, CD/DVD/Blu-ray devices, remote control devices

· Telecommunications: optical fiber communications, optical down converter to microwave

· Medicine: correction of poor eyesight, laser surgery, surgical endoscopy, tattoo removal

· Industrial manufacturing: the use of lasers for welding, drilling, cutting, and various methods of surface modification

· Construction: laser leveling, laser rangefinding, smart structures

· Aviation: photonic gyroscopes lacking moving parts

· Military: IR sensors, command and control, navigation, search and rescue, mine laying and detection

· Entertainment: laser shows, beam effects, holographic art

· Information processing

· Metrology: time and frequency measurements, rangefinding

· Photonic computing: clock distribution and communication between computers, printed circuit boards, or within optoelectronic integrated circuits; in the future: quantum computing

Microphotonics and nanophotonics usually include photonic crystals and solid-state devices. The science of photonics includes investigation of the emission, transmission, amplification, detection, and modulation of light.

Light sources used in photonics are usually far more sophisticated than light bulbs. Photonics commonly uses semiconductor light sources like light-emitting diodes, superluminescent diodes, and lasers. Other light sources include single photon sources, fluorescent lamps, cathode ray tubes, and plasma screens. Note that while CRTs, plasma screens, and organic light-emitting diode displays generate their own light, liquid crystal displays like TFT screens require a backlight of either cold cathode fluorescent lamps or, more often today, LEDs.

Characteristic of research on semiconductor light sources is the frequent use of III-V semiconductors instead of classical semiconductors like silicon and germanium. This is due to the special properties of III-V semiconductors that allow for the implementation of light-emitting devices. Examples of material systems used are gallium arsenide and aluminium gallium arsenide or other compound semiconductors. They are also used in conjunction with silicon to produce hybrid silicon lasers.

Light can be transmitted through any transparent medium. Glass fiber or plastic optical fiber can be used to guide the light along a desired path. In optical communications, optical fibers allow for transmission distances of more than 100 km without amplification, depending on the bit rate and modulation format used for transmission. A very advanced research topic within photonics is the investigation and fabrication of special structures and "materials" with engineered optical properties. These include photonic crystals, photonic crystal fibers and metamaterials.
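To make the attenuation behind such distances concrete, here is a back-of-the-envelope link-budget sketch in Python. The 0.2 dB/km figure is a typical value for standard single-mode fiber at 1550 nm, assumed here for illustration rather than taken from the text, and the function name is invented:

```python
# Illustrative fiber link budget: how much launch power remains after
# propagation, assuming a typical attenuation of ~0.2 dB/km at 1550 nm.

def received_power_mw(launch_mw: float, length_km: float,
                      atten_db_per_km: float = 0.2) -> float:
    """Apply fiber loss (in dB) to a launch power given in milliwatts."""
    loss_db = atten_db_per_km * length_km
    return launch_mw * 10 ** (-loss_db / 10)

if __name__ == "__main__":
    p0 = 1.0  # 1 mW (0 dBm) launch power
    for km in (50, 100, 150):
        p = received_power_mw(p0, km)
        print(f"{km:>3} km: {p:.4f} mW remaining ({-0.2 * km:.0f} dB)")
```

Under these assumed values, a 100 km span costs 20 dB and leaves 1% of the launch power, which suggests why unamplified spans much beyond 100 km run up against receiver sensitivity.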

Optical amplifiers are used to amplify an optical signal. Optical amplifiers used in optical communications are erbium-doped fiber amplifiers, semiconductor optical amplifiers, Raman amplifiers and optical parametric amplifiers. A very advanced research topic on optical amplifiers is the research on quantum dot semiconductor optical amplifiers.

Photodetectors detect light. Photodetectors range from very fast photodiodes for communications applications, through medium-speed charge-coupled devices for digital cameras, to very slow solar cells that are used for energy harvesting from sunlight. There are also many other photodetectors based on thermal, chemical, quantum, photoelectric and other effects.

Modulation of a light source is used to encode information onto the light. Modulation can be achieved by the light source directly. One of the simplest examples is to use a flashlight to send Morse code. Another method is to take the light from a light source and modulate it in an external optical modulator.
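As a toy illustration of direct intensity modulation in the spirit of the flashlight example, the sketch below maps a bit string onto an on/off waveform, i.e. the on-off keying discussed next. The function name and sampling scheme are invented for illustration, not drawn from any library:

```python
# Minimal sketch of direct intensity modulation (on-off keying):
# each bit maps to "light on" (1) or "light off" (0) for one bit period.

def ook_waveform(bits: str, samples_per_bit: int = 4) -> list[int]:
    """Return a sampled intensity waveform: 1 = light on, 0 = light off."""
    return [int(b) for b in bits for _ in range(samples_per_bit)]

if __name__ == "__main__":
    print(ook_waveform("1011", samples_per_bit=2))
    # -> [1, 1, 0, 0, 1, 1, 1, 1]
```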

An additional topic covered by modulation research is the modulation format. On-off keying has been the commonly used modulation format in optical communications. In recent years, more advanced modulation formats like phase-shift keying or even orthogonal frequency-division multiplexing have been investigated to counteract effects like dispersion that degrade the quality of the transmitted signal. Photonics also includes research on photonic systems. This term is often used for optical communication systems. This area of research focuses on the implementation of photonic systems like high-speed photonic networks. It also includes research on optical regenerators, which improve optical signal quality.

Photonic integrated circuits are optically active integrated semiconductor photonic devices which consist of at least two different functional blocks (e.g., a gain region and a grating-based mirror in a laser). These devices are responsible for the commercial success of optical communications and for the ability to increase available bandwidth without significant cost increases to the end user, through the improved performance and cost reduction they provide. The most widely deployed PICs are based on the indium phosphide material system. Silicon photonics is an active area of research.

Nanotech

Nanotechnology is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest, widespread description of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter which occur below the given size threshold. It is therefore common to see the plural form "nanotechnologies" as well as "nanoscale technologies" to refer to the broad range of research and applications whose common trait is size. Because of the variety of potential applications, governments have invested billions of dollars in nanotechnology research. By 2012, the USA had invested 3.7 billion dollars through its National Nanotechnology Initiative, the European Union had invested 1.2 billion, and Japan 750 million dollars.

Nanotechnology as defined by size is naturally very broad, including fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication, molecular engineering, etc. The associated research and applications are equally diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale.

Scientists currently debate the future implications of nanotechnology. Nanotechnology may be able to create many new materials and devices with a vast range of applications, such as in nanomedicine, nanoelectronics, biomaterials, energy production, and consumer products. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials, and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.

The concepts that seeded nanotechnology were first discussed in 1959 by renowned physicist Richard Feynman in his talk There's Plenty of Room at the Bottom, in which he described the possibility of synthesis via direct manipulation of atoms. The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known.

Inspired by Feynman's concepts, K. Eric Drexler used the term "nanotechnology" in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" which would be able to build a copy of itself and of other items of arbitrary complexity with atomic control. Also in 1986, Drexler co-founded The Foresight Institute (with which he is no longer affiliated) to help increase public awareness and understanding of nanotechnology concepts and implications.

Thus, the emergence of nanotechnology as a field in the 1980s occurred through the convergence of Drexler's theoretical and public work, which developed and popularized a conceptual framework for nanotechnology, and high-visibility experimental advances that drew additional wide-scale attention to the prospects of atomic control of matter. In the 1980s, two major breakthroughs sparked the growth of nanotechnology in the modern era.

First came the invention of the scanning tunneling microscope in 1981, which provided unprecedented visualization of individual atoms and bonds and was successfully used to manipulate individual atoms in 1989. The microscope's developers, Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory, received a Nobel Prize in Physics in 1986. Binnig, Quate and Gerber also invented the analogous atomic force microscope that year.


Second, fullerenes were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won the 1996 Nobel Prize in Chemistry. Controversies emerged regarding the definitions and potential implications of nanotechnologies, exemplified by the Royal Society's report on nanotechnology. Challenges were raised regarding the feasibility of applications envisioned by advocates of molecular nanotechnology, which culminated in a public debate between Drexler and Smalley in 2001 and 2003.

Meanwhile, commercialization of products based on advancements in nanoscale technologies began emerging. These products are limited to bulk applications of nanomaterials and do not involve atomic control of matter. Some examples include the Silver Nano platform for using silver nanoparticles as an antibacterial agent, nanoparticle-based transparent sunscreens, carbon fiber strengthening using silica nanoparticles, and carbon nanotubes for stain-resistant textiles. Governments moved to promote and fund research into nanotechnology, such as in the U.S. with the National Nanotechnology Initiative, which formalized a size-based definition of nanotechnology and established funding for research on the nanoscale, and in Europe via the European Framework Programmes for Research and Technological Development.

By the mid-2000s new and serious scientific attention began to flourish. Projects emerged to produce nanotechnology roadmaps which center on atomically precise manipulation of matter and discuss existing and projected capabilities, goals, and applications. Nanotechnology is the engineering of functional systems at the molecular scale. This covers both current work and concepts that are more advanced. In its original sense, nanotechnology refers to the projected ability to construct items from the bottom up, using techniques and tools being developed today to make complete, high-performance products.

One nanometer is one billionth, or 10⁻⁹, of a meter. By comparison, typical carbon-carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12–0.15 nm, and a DNA double-helix has a diameter around 2 nm. On the other hand, the smallest cellular life-forms, the bacteria of the genus Mycoplasma, are around 200 nm in length. By convention, nanotechnology is taken as the scale range 1 to 100 nm following the definition used by the National Nanotechnology Initiative in the US. The lower limit is set by the size of atoms, since nanotechnology must build its devices from atoms and molecules. The upper limit is more or less arbitrary, but is around the size below which phenomena not observed in larger structures start to become apparent and can be made use of in the nano device. These new phenomena make nanotechnology distinct from devices which are merely miniaturised versions of an equivalent macroscopic device; such devices are on a larger scale and come under the description of microtechnology.
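For readers who want the scale arithmetic spelled out, here is a small sketch restating the quoted lengths in meters; it also checks the marble-to-Earth comparison made below, under the assumption of a roughly 1 cm marble and Earth's roughly 12,742 km diameter (assumed values, not from the text):

```python
# Arithmetic check of the length scales quoted above, in meters.

NM = 1e-9  # meters per nanometer

scales_nm = {
    "C-C bond length (upper)": 0.15,
    "DNA double-helix diameter": 2.0,
    "nanotech upper bound": 100.0,
    "Mycoplasma bacterium": 200.0,
}

for name, nm in scales_nm.items():
    print(f"{name:28s} {nm:7.2f} nm = {nm * NM:.2e} m")

# Marble-to-Earth comparison (assumed: ~1 cm marble, ~12,742 km Earth):
print(f"marble/earth ratio: {0.01 / 1.2742e7:.1e}  vs  nm/m: {NM:.0e}")
```

The final line prints about 7.9e-10 against 1e-9, so the comparison holds to within an order of magnitude.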

To put that scale in another context, the comparative size of a nanometer to a meter is the same as that of a marble to the size of the earth. Or another way of putting it: a nanometer is the amount an average man's beard grows in the time it takes him to raise the razor to his face.

Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition. In the "top-down" approach, nano-objects are constructed from larger entities without atomic-level control.

Areas of physics such as nanoelectronics, nanomechanics, nanophotonics and nanoionics have evolved during the last few decades to provide a basic scientific foundation of nanotechnology.

Several phenomena become pronounced as the size of the system decreases. These include statistical mechanical effects, as well as quantum mechanical effects, for example the "quantum size effect" where the electronic properties of solids are altered with great reductions in particle size. This effect does not come into play by going from macro to micro dimensions. However, quantum effects can become significant when the nanometer size range is reached, typically at distances of 100 nanometers or less, the so-called quantum realm. Additionally, a number of physical (mechanical, electrical, optical, etc.) properties change when compared to macroscopic systems. One example is the increase in surface area to volume ratio, altering the mechanical, thermal and catalytic properties of materials. Diffusion and reactions at the nanoscale, nanostructured materials and nanodevices with fast ion transport are generally referred to as nanoionics. Mechanical properties of nanosystems are of interest in nanomechanics research. The catalytic activity of nanomaterials also opens potential risks in their interaction with biomaterials.
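The surface-area-to-volume effect mentioned above follows from simple geometry: for a sphere the ratio equals 3/r, so it grows steeply as particles shrink. A minimal sketch (the radii are illustrative):

```python
# Surface-area-to-volume ratio of a sphere scales as 3/r, so it grows
# rapidly as particle size shrinks -- the effect described above.

import math

def surface_to_volume(radius_nm: float) -> float:
    """S/V for a sphere, in 1/nm (algebraically equal to 3/r)."""
    area = 4 * math.pi * radius_nm ** 2
    volume = (4 / 3) * math.pi * radius_nm ** 3
    return area / volume

for r in (1000.0, 100.0, 10.0, 1.0):  # radii in nm
    print(f"r = {r:7.1f} nm  ->  S/V = {surface_to_volume(r):.3f} nm^-1")
```

Shrinking the radius a thousandfold raises S/V a thousandfold, which is why surface-dominated behavior such as catalysis changes so markedly at the nanoscale.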

Materials reduced to the nanoscale can show different properties compared to what they exhibit on a macroscale, enabling unique applications. For instance, opaque substances can become transparent; stable materials can turn combustible; insoluble materials may become soluble. A material such as gold, which is chemically inert at normal scales, can serve as a potent chemical catalyst at nanoscales. Much of the fascination with nanotechnology stems from these quantum and surface phenomena that matter exhibits at the nanoscale.

Modern synthetic chemistry has reached the point where it is possible to prepare small molecules to almost any structure. These methods are used today to manufacture a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble these single molecules into supramolecular assemblies consisting of many molecules arranged in a well-defined manner.

These approaches utilize the concepts of molecular self-assembly and/or supramolecular chemistry to arrange molecules automatically into some useful conformation through a bottom-up approach. The concept of molecular recognition is especially important: molecules can be designed so that a specific configuration or arrangement is favored due to non-covalent intermolecular forces. The Watson–Crick base pairing rules are a direct result of this, as is the specificity of an enzyme being targeted to a single substrate, or the specific folding of the protein itself. Thus, two or more components can be designed to be complementary and mutually attractive so that they make a more complex and useful whole.
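As a toy model of this kind of recognition, the sketch below encodes the Watson–Crick pairing rules (A-T, G-C) and checks whether two DNA strands are complementary. It is a sketch only; real recognition also depends on geometry and non-covalent binding energetics, and the function names are invented:

```python
# Toy illustration of molecular recognition via Watson-Crick base pairing.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Watson-Crick complement of a 5'->3' strand (returned 3'->5')."""
    return "".join(PAIR[base] for base in strand)

def binds(strand_a: str, strand_b: str) -> bool:
    """True if the two strands are exactly complementary."""
    return strand_b == complement(strand_a)

if __name__ == "__main__":
    print(binds("ATGC", "TACG"))  # True: every base pairs
    print(binds("ATGC", "TACC"))  # False: mismatch at the last base
```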

Such bottom-up approaches should be capable of producing devices in parallel and be much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, there are many examples of self-assembly based on molecular recognition in biology, most notably Watson–Crick base pairing and enzyme-substrate interactions. The challenge for nanotechnology is whether these principles can be used to engineer new constructs in addition to natural ones.

Molecular nanotechnology, sometimes called molecular manufacturing, describes engineered nanosystems operating on the molecular scale. Molecular nanotechnology is especially associated with the molecular assembler, a machine that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to, and should be clearly distinguished from, the conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles.

When the term "nanotechnology" was independently coined and popularized by Eric Drexler it referred to a future manufacturing technology based on molecular machine systems. The premise was that molecular-scale biological analogies of traditional machine components demonstrated molecular machines were possible: by the countless examples found in biology, it is known that sophisticated, stochastically optimised biological machines can be produced.

It is hoped that developments in nanotechnology will make possible their construction by some other means, perhaps using biomimetic principles. However, Drexler and other researchers have proposed that advanced nanotechnology, although perhaps initially implemented by biomimetic means, ultimately could be based on mechanical engineering principles, namely, a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification. The physics and engineering performance of exemplar designs were analyzed in Drexler's book Nanosystems.

In general it is very difficult to assemble devices on the atomic scale, as one has to position atoms on other atoms of comparable size and stickiness. Another view, put forth by Carlo Montemagno, is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Richard Smalley argued that mechanosynthesis is impossible due to the difficulties in mechanically manipulating individual molecules.

This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003. Though biology clearly demonstrates that molecular machine systems are possible, non-biological molecular machines are today only in their infancy. Leaders in research on non-biological molecular machines are Dr. Alex Zettl and his colleagues at Lawrence Berkeley Laboratories and UC Berkeley. They have constructed at least three distinct molecular devices whose motion is controlled from the desktop with changing voltage: a nanotube nanomotor, a molecular actuator, and a nanoelectromechanical relaxation oscillator. See nanotube nanomotor for more examples.

An experiment indicating that positional molecular assembly is possible was performed by Ho and Lee at Cornell University in 1999. They used a scanning tunneling microscope to move an individual carbon monoxide molecule to an individual iron atom sitting on a flat silver crystal, and chemically bound the CO to the Fe by applying a voltage.


The nanomaterials field includes subfields which develop or study materials having unique properties arising from their nanoscale dimensions.

· Interface and colloid science has given rise to many materials which may be useful in nanotechnology, such as carbon nanotubes and other fullerenes, and various nanoparticles and nanorods. Nanomaterials with fast ion transport are related also to nanoionics and nanoelectronics.

· Nanoscale materials can also be used for bulk applications; most present commercial applications of nanotechnology are of this flavor.

· Progress has been made in using these materials for medical applications; see Nanomedicine.

· Nanoscale materials such as nanopillars are sometimes used in solar cells, which combats the high cost of traditional silicon solar cells.

· Development of applications incorporating semiconductor nanoparticles to be used in the next generation of products, such as display technology, lighting, solar cells and biological imaging; see quantum dots.

· Recent applications of nanomaterials include a range of biomedical applications, such as tissue engineering, drug delivery, and biosensors.

These seek to arrange smaller components into more complex assemblies.

· DNA nanotechnology utilizes the specificity of Watson–Crick base pairing to construct well-defined structures out of DNA and other nucleic acids.

· Approaches from the field of "classical" chemical synthesis also aim at designing molecules with well-defined shape.

· More generally, molecular self-assembly seeks to use concepts of supramolecular chemistry, and molecular recognition in particular, to cause single-molecule components to automatically arrange themselves into some useful conformation.

· Atomic force microscope tips can be used as a nanoscale "write head" to deposit a chemical upon a surface in a desired pattern in a process called dip pen nanolithography. This technique fits into the larger subfield of nanolithography.

These seek to create smaller devices by using larger ones to direct their assembly.


· Many technologies that descended from conventional solid-state silicon methods for fabricating microprocessors are now capable of creating features smaller than 100 nm, falling under the definition of nanotechnology. Giant magnetoresistance-based hard drives already on the market fit this description, as do atomic layer deposition techniques. Peter Grünberg and Albert Fert received the Nobel Prize in Physics in 2007 for their discovery of Giant magnetoresistance and contributions to the field of spintronics.

· Solid-state techniques can also be used to create devices known as nanoelectromechanical systems or NEMS, which are related to microelectromechanical systems or MEMS.

· Focused ion beams can directly remove material, or even deposit material when suitable precursor gases are applied at the same time. For example, this technique is used routinely to create sub-100 nm sections of material for analysis in Transmission electron microscopy.

· Atomic force microscope tips can be used as a nanoscale "write head" to deposit a resist, which is then followed by an etching process to remove material in a top-down method.

These seek to develop components of a desired functionality without regard to how they might be assembled.

· Magnetic assembly for the synthesis of anisotropic superparamagnetic materials such as recently presented magnetic nanochains.

· Molecular scale electronics seeks to develop molecules with useful electronic properties. These could then be used as single-molecule components in a nanoelectronic device. For an example see rotaxane.

· Synthetic chemical methods can also be used to create synthetic molecular motors, such as in a so-called nanocar.

· Bionics or biomimicry seeks to apply biological methods and systems found in nature to the study and design of engineering systems and modern technology. Biomineralization is one example of the systems studied.

· Bionanotechnology is the use of biomolecules for applications in nanotechnology, including use of viruses and lipid assemblies. Nanocellulose is a potential bulk-scale application.

These subfields seek to anticipate what inventions nanotechnology might yield, or attempt to propose an agenda along which inquiry might progress. These often take a big-picture view of nanotechnology, with more emphasis on its societal implications than the details of how such inventions could actually be created.

· Molecular nanotechnology is a proposed approach which involves manipulating single molecules in finely controlled, deterministic ways. This is more theoretical than the other subfields, and many of its proposed techniques are beyond current capabilities.

· Nanorobotics centers on self-sufficient machines of some functionality operating at the nanoscale. There are hopes for applying nanorobots in medicine, but this may prove difficult because of several drawbacks of such devices. Nevertheless, progress on innovative materials and methodologies has been demonstrated, with some patents granted for new nanomanufacturing devices for future commercial applications, which also progressively helps in the development towards nanorobots with the use of embedded nanobioelectronics concepts.

· Productive nanosystems are "systems of nanosystems" which will be complex nanosystems that produce atomically precise parts for other nanosystems, not necessarily using novel nanoscale-emergent properties, but well-understood fundamentals of manufacturing. Because of the discrete nature of matter and the possibility of exponential growth, this stage is seen as the basis of another industrial revolution. Mihail Roco, one of the architects of the USA's National Nanotechnology Initiative, has proposed four states of nanotechnology that seem to parallel the technical progress of the Industrial Revolution, progressing from passive nanostructures to active nanodevices to complex nanomachines and ultimately to productive nanosystems.

· Programmable matter seeks to design materials whose properties can be easily, reversibly and externally controlled through a fusion of information science and materials science.

· Due to the popularity and media exposure of the term nanotechnology, the words picotechnology and femtotechnology have been coined in analogy to it, although these are only used rarely and informally.

Nanomaterials can be classified into 0D, 1D, 2D and 3D nanomaterials. Dimensionality plays a major role in determining the characteristics of nanomaterials, including their physical, chemical and biological characteristics. With the decrease in dimensionality, an increase in surface-to-volume ratio is observed. This indicates that smaller-dimensional nanomaterials have higher surface area compared to 3D nanomaterials. Recently, two-dimensional nanomaterials have been extensively investigated for electronic, biomedical, drug delivery and biosensor applications.

There are several important modern developments. The atomic force microscope and the Scanning Tunneling Microscope are two early versions of scanning probes that launched nanotechnology. There are other types of scanning probe microscopy. Although conceptually similar to the scanning confocal microscope developed by Marvin Minsky in 1961 and the scanning acoustic microscope developed by Calvin Quate and coworkers in the 1970s, newer scanning probe microscopes have much higher resolution, since they are not limited by the wavelength of sound or light.

The tip of a scanning probe can also be used to manipulate nanostructures (a process called positional assembly). Feature-oriented scanning methodology may be a promising way to implement these nanomanipulations in automatic mode. However, this is still a slow process because of the low scanning velocity of the microscope.

Various techniques of nanolithography, such as optical lithography, X-ray lithography, dip pen nanolithography, electron beam lithography or nanoimprint lithography, were also developed. Lithography is a top-down fabrication technique where a bulk material is reduced in size to a nanoscale pattern.

Another group of nanotechnological techniques includes those used for fabrication of nanotubes and nanowires, those used in semiconductor fabrication such as deep ultraviolet lithography, electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer deposition, and molecular vapor deposition, and further includes molecular self-assembly techniques such as those employing di-block copolymers. The precursors of these techniques preceded the nanotech era and are extensions in the development of scientific advancements, rather than techniques devised with the sole purpose of creating nanotechnology or results of nanotechnology research.

The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are made. Scanning probe microscopy is an important technique both for characterization and synthesis of nanomaterials. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. By using, for example, feature-oriented scanning approaches, atoms or molecules can be moved around on a surface with scanning probe microscopy techniques. At present, it is expensive and time-consuming for mass production but very suitable for laboratory experimentation.

In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly and positional assembly. Dual polarisation interferometry is one tool suitable for characterisation of self-assembled thin films. Another variation of the bottom-up approach is molecular beam epitaxy, or MBE. Researchers at Bell Telephone Laboratories, including John R. Arthur, Alfred Y. Cho, and Art C. Gossard, developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect, for which the 1998 Nobel Prize in Physics was awarded. MBE allows scientists to lay down atomically precise layers of atoms and, in the process, build up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics. Meanwhile, new therapeutic products based on responsive nanomaterials, such as the ultradeformable, stress-sensitive Transfersome vesicles, are under development and already approved for human use in some countries.

As of August 21, 2008, the Project on Emerging Nanotechnologies estimated that over 800 manufacturer-identified nanotech products were publicly available, with new ones hitting the market at a pace of 3–4 per week. The project lists all of the products in a publicly accessible online database. Most applications are limited to the use of "first generation" passive nanomaterials, which include titanium dioxide in sunscreen, cosmetics, surface coatings, and some food products; carbon allotropes used to produce gecko tape; silver in food packaging, clothing, disinfectants and household appliances; zinc oxide in sunscreens and cosmetics, surface coatings, paints and outdoor furniture varnishes; and cerium oxide as a fuel catalyst.

Further applications allow tennis balls to last longer, golf balls to fly straighter, and even bowling balls to become more durable and have a harder surface. Trousers and socks have been infused with nanotechnology so that they will last longer and keep people cool in the summer. Bandages are being infused with silver nanoparticles to heal cuts faster. Video game consoles and personal computers may become cheaper, faster, and contain more memory thanks to nanotechnology. Nanotechnology may have the ability to make existing medical applications cheaper and easier to use in places like the general practitioner's office and at home. Cars are being manufactured with nanomaterials so they may need fewer metals and less fuel to operate in the future.

Scientists are now turning to nanotechnology in an attempt to develop diesel engines with cleaner exhaust fumes. Platinum is currently used as the diesel engine catalyst in these engines. The catalyst is what cleans the exhaust fume particles. First, a reduction catalyst is employed to take nitrogen atoms from NOx molecules in order to free oxygen. Next, the oxidation catalyst oxidizes the hydrocarbons and carbon monoxide to form carbon dioxide and water. Platinum is used in both the reduction and the oxidation catalysts. Using platinum, though, is inefficient in that it is expensive and unsustainable. The Danish company InnovationsFonden invested DKK 15 million in a search for new catalyst substitutes using nanotechnology. The goal of the project, launched in the autumn of 2014, is to maximize surface area and minimize the amount of material required. Objects tend to minimize their surface energy; two drops of water, for example, will join to form one drop and decrease surface area. If the catalyst's surface area that is exposed to the exhaust fumes is maximized, the efficiency of the catalyst is maximized. The team working on this project aims to create nanoparticles that will not merge. Every time the surface is optimized, material is saved. Thus, creating these nanoparticles will increase the effectiveness of the resulting diesel engine catalyst – in turn leading to cleaner exhaust fumes – and will decrease cost. If successful, the team hopes to reduce platinum use by 25%.
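The geometry behind "maximize surface area and minimize the amount of material" is easy to check: dividing a fixed catalyst volume into N equal spheres multiplies the exposed area by the cube root of N. The numbers below are purely illustrative, not figures from the project described above:

```python
# Arithmetic behind "maximize surface area, minimize material":
# splitting a fixed volume into N equal spheres multiplies the total
# surface area by N**(1/3).

import math

def total_area(volume: float, n_particles: int) -> float:
    """Total surface area when a volume is split into n equal spheres."""
    r = (3 * (volume / n_particles) / (4 * math.pi)) ** (1 / 3)
    return n_particles * 4 * math.pi * r ** 2

V = (4 / 3) * math.pi * 500.0 ** 3  # one 500 nm-radius sphere (illustrative)
for n in (1, 1_000, 1_000_000):
    print(f"N = {n:>9}: area x{total_area(V, n) / total_area(V, 1):.1f}")
```

Splitting the same platinum volume into a million nanoparticles thus exposes a hundred times more surface, which is the sense in which nanostructuring saves material.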

Nanotechnology also has a prominent role in the fast-developing field of tissue engineering. When designing scaffolds, researchers attempt to mimic the nanoscale features of a cell's microenvironment to direct its differentiation down a suitable lineage. For example, when creating scaffolds to support the growth of bone, researchers may mimic osteoclast resorption pits. Researchers have successfully used DNA origami-based nanobots capable of carrying out logic functions to achieve targeted drug delivery in cockroaches. It is said that the computational power of these nanobots can be scaled up to that of a Commodore 64.

An area of concern is the effect that industrial-scale manufacturing and use of nanomaterials would have on human health and the environment, as suggested by nanotoxicology research. For these reasons, some groups advocate that nanotechnology be regulated by governments.


Others counter that overregulation would stifle scientific research and the development of beneficial innovations. Public health research agencies, such as the National Institute for Occupational Safety and Health, are actively conducting research on potential health effects stemming from exposures to nanoparticles. Some nanoparticle products may have unintended consequences. Researchers have discovered that bacteriostatic silver nanoparticles used in socks to reduce foot odor are being released in the wash. These particles are then flushed into the waste water stream and may destroy bacteria which are critical components of natural ecosystems, farms, and waste treatment processes. Public deliberations on risk perception in the US and UK carried out by the Center for Nanotechnology in Society found that participants were more positive about nanotechnologies for energy applications than for health applications, with health applications raising moral and ethical dilemmas such as cost and availability.

Experts, including director of the Woodrow Wilson Center's Project on Emerging Nanotechnologies David Rejeski, have testified that successful commercialization depends on adequate oversight, risk research strategy, and public engagement. Berkeley, California is currently the only city in the United States to regulate nanotechnology; Cambridge, Massachusetts in 2008 considered enacting a similar law, but ultimately rejected it. Relevant for both research on and application of nanotechnologies, the insurability of nanotechnology is contested. Without state regulation of nanotechnology, the availability of private insurance for potential damages is seen as necessary to ensure that burdens are not socialised implicitly.

Nanofibers are used in several areas and in different products, in everything from aircraft wings to tennis rackets. Inhaling airborne nanoparticles and nanofibers may lead to a number of pulmonary diseases, e.g. fibrosis. Researchers have found that when rats breathed in nanoparticles, the particles settled in the brain and lungs, which led to significant increases in biomarkers for inflammation and stress response, and that nanoparticles induce skin aging through oxidative stress in hairless mice.

A major study published more recently in Nature Nanotechnology suggests some forms of carbon nanotubes – a poster child for the "nanotechnology revolution" – could be as harmful as asbestos if inhaled in sufficient quantities. In the absence of specific regulation forthcoming from governments, Paull and Lyons have called for an exclusion of engineered nanoparticles in food. A newspaper article reports that workers in a paint factory developed serious lung disease and nanoparticles were found in their lungs.

Calls for tighter regulation of nanotechnology have occurred alongside a growing debate related to the human health and safety risks of nanotechnology. There is significant debate about who is responsible for the regulation of nanotechnology. Although some regulatory agencies currently cover some nanotechnology products and processes – by "bolting on" nanotechnology to existing regulations – there are clear gaps in these regimes. Davies has proposed a regulatory road map describing steps to deal with these shortcomings.

Stakeholders concerned by the lack of a regulatory framework to assess and control risks associated with the release of nanoparticles and nanotubes have drawn parallels with bovine spongiform encephalopathy, thalidomide, genetically modified food, nuclear energy, reproductive technologies, biotechnology, and asbestosis. Dr. Andrew Maynard, chief science advisor to the Woodrow Wilson Center's Project on Emerging Nanotechnologies, concludes that there is insufficient funding for human health and safety research, and as a result there is currently limited understanding of the human health and safety risks associated with nanotechnology. As a result, some academics have called for stricter application of the precautionary principle, with delayed marketing approval, enhanced labelling and additional safety data development requirements in relation to certain forms of nanotechnology.

The Center for Nanotechnology in Society has found that people respond to nanotechnologies differently, depending on application – with participants in public deliberations more positive about nanotechnologies for energy than health applications – suggesting that any public calls for nano regulations may differ by technology sector.

Additive Manufacturing

Although the media like to use the term "3D printing" as a synonym for all Additive Manufacturing processes, there are actually many individual processes which vary in their method of layer manufacturing. Individual processes will differ depending on the material and machine technology used. Hence, in 2010, the American Society for Testing and Materials group "ASTM F42 – Additive Manufacturing" formulated a set of standards that classify the range of Additive Manufacturing processes into seven categories.


Vat polymerisation uses a vat of liquid photopolymer resin, out of which the model is constructed layer by layer.

Material jetting creates objects in a similar way to a two-dimensional inkjet printer. Material is jetted onto a build platform using either a continuous or drop-on-demand approach.

The binder jetting process uses two materials: a powder-based material and a binder. The binder is usually in liquid form and the build material in powder form. A print head moves horizontally along the x and y axes of the machine and deposits alternating layers of the build material and the binding material.

Fused deposition modelling is a common material extrusion process and is trademarked by the company Stratasys. Material is drawn through a nozzle, where it is heated, and is then deposited layer by layer. The nozzle can move horizontally, and a platform moves up and down vertically after each new layer is deposited.
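A minimal sketch of the motion logic just described for material extrusion: trace each layer's path in the XY plane, then step the platform by one layer height. The toolpath, function names and layer height are hypothetical; a real slicer would emit machine-specific G-code:

```python
# Sketch of FDM-style layer-by-layer motion: for each layer, the nozzle
# traces a 2D path, then the platform steps down one layer height.

def square_path(size_mm: float):
    """A trivial square perimeter toolpath in the XY plane."""
    return [(0, 0), (size_mm, 0), (size_mm, size_mm), (0, size_mm), (0, 0)]

def print_part(n_layers: int, layer_height_mm: float = 0.2):
    z = 0.0
    for layer in range(n_layers):
        for x, y in square_path(20.0):
            print(f"move nozzle to x={x} y={y} (z={z:.1f} mm)")
        z += layer_height_mm  # platform drops (or nozzle rises) one layer
        print(f"-- layer {layer + 1} done --")

if __name__ == "__main__":
    print_part(n_layers=2)
```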

The Powder Bed Fusion process includes the following commonly used printing techniques: Direct metal laser sintering, Electron beam melting, Selective heat sintering, Selective laser melting and Selective laser sintering.

Sheet lamination processes include ultrasonic additive manufacturing and laminated object manufacturing. The Ultrasonic Additive Manufacturing process uses sheets or ribbons of metal, which are bound together using ultrasonic welding.

Directed Energy Deposition covers a range of terminology: 'laser engineered net shaping, directed light fabrication, direct metal deposition, 3D laser cladding'. It is a more complex printing process commonly used to repair or add additional material to existing components.
