Ethical and Social Aspects of Technology

It was not until the twentieth century that the development of the ethics of technology as a systematic and more or less independent subdiscipline of philosophy started. This late development may seem surprising given the large impact that technology has had on society, especially since the industrial revolution.

A plausible reason for this late development of ethics of technology is the instrumental perspective on technology. This perspective implies, basically, a positive ethical assessment of technology: technology increases the possibilities and capabilities of humans, which seems in general desirable. Of course, since antiquity it has been recognized that the new capabilities may be put to bad use or lead to human hubris. Often, however, these undesirable consequences are attributed to the users of technology rather than to the technology itself or its developers. This view of technology as a mere instrument results in the so-called neutrality thesis, which holds that technology is a neutral instrument that can be put to good or bad use by its users. During the twentieth century, this neutrality thesis met with severe critique, most prominently from Heidegger and Ellul, who have been mentioned in this context in Section 2.0, but also from philosophers of the Frankfurt School.

The scope and the agenda for ethics of technology to a large extent depend on how technology is conceptualized. The second half of the twentieth century has witnessed a richer variety of conceptualizations of technology that move beyond the conceptualization of technology as a neutral tool, as a world view or as a historical necessity. This includes conceptualizations of technology as a political phenomenon, as a social activity, as a cultural phenomenon, as a professional activity, and as a cognitive activity.

Despite this diversity, the development in the second half of the twentieth century is characterized by two general trends. One is a move away from technological determinism and the assumption that technology is a given, self-contained phenomenon that develops autonomously, toward an emphasis on technological development being the result of choices (although not necessarily the intended result). The other is a move away from ethical reflection on technology as such toward ethical reflection on specific technologies and on specific phases in the development of technology. Together, both trends have resulted in an enormous increase in the number and scope of ethical questions that are asked about technology. These developments also imply that ethics of technology must be adequately empirically informed, not only about the exact consequences of specific technologies but also about the actions of engineers and the process of technological development. This has also opened the way to the involvement of other disciplines in ethical reflection on technology, such as Science and Technology Studies (STS) and Technology Assessment.

Not only is the ethics of technology characterized by a diversity of approaches, it might even be doubted whether something like a subdiscipline of ethics of technology, in the sense of a community of scholars working on a common set of problems, exists. The scholars studying ethical issues in technology have diverse backgrounds and they do not always consider themselves (primarily) ethicists of technology. To give the reader an overview of the field, three basic approaches or strands that might be distinguished in the ethics of technology will be discussed. Both cultural and political approaches build on the traditional philosophy and ethics of technology of the first half of the twentieth century. Whereas cultural approaches conceive of technology as a cultural phenomenon that influences our perception of the world, political approaches conceive of technology as a political phenomenon, i.e. as a phenomenon that is ruled by and embodies institutional power relations between people.

Cultural approaches are often phenomenological in nature, or at least position themselves in relation to phenomenology as post-phenomenology. Examples of philosophers in this tradition are Don Ihde, Albert Borgmann, Peter-Paul Verbeek and Evan Selinger. These approaches are usually influenced by developments in STS, especially the idea that technologies contain a script that influences not only people's perception of the world but also human behavior, and the idea that there is no fundamental distinction between humans and non-humans, including technological artifacts.

Political approaches to technology mostly go back to Marx, who assumed that the material structure of production in society, in which technology is obviously a major factor, determined the economic and social structure of that society. Similarly, Langdon Winner has argued that technologies can embody specific forms of power and authority. According to him, some technologies are inherently normative in the sense that they require, or are strongly compatible with, certain social and political relations. Railroads, for example, seem to require a certain authoritative management structure. In other cases, technologies may be political due to the particular way they have been designed. Some political approaches to technology are inspired by pragmatism and, to a lesser extent, discourse ethics. A number of philosophers, for example, have pleaded for a democratization of technological development and the inclusion of ordinary people in the shaping of technology.

Although political approaches obviously have ethical ramifications, many philosophers who have adopted such approaches do not engage in explicit ethical reflection on technology. An interesting recent exception, and an attempt to consolidate a number of recent developments and to articulate them into a more general account of what an ethics of technology should look like, is the collection of essays Pragmatist ethics for a technological culture. In this book, the authors plead for a revival of the pragmatist tradition in moral philosophy because it is better fit to deal with a number of moral issues in technology. Instead of focusing on how to reach and justify normative judgments about technology, a pragmatist ethics focuses on how to recognize and trace moral problems in the first place. Moreover, the process of dealing with these problems is considered more important than the outcome.


Engineering ethics

Engineering ethics is a relatively new field of education and research. It started off in the 1980s in the United States, mainly as an educational effort. Engineering ethics is concerned with 'the actions and decisions made by persons, individually or collectively, who belong to the profession of engineering'. According to this approach, engineering is a profession, in the same way as medicine is a profession.

Although there is no agreement on exactly how a profession should be defined, the following characteristics are often mentioned:

· A profession relies on specialized knowledge and skills that require a long period of study;

· The occupational group has a monopoly on the carrying out of the occupation;


· The assessment of whether the professional work is carried out in a competent way is done by, and it is accepted that this can only be done by, professional peers;

· A profession provides society with products, services or values that are useful or worthwhile, and is characterized by an ideal of serving society;

· The daily practice of professional work is regulated by ethical standards, which are derived from or relate to the society-serving ideal of the profession.

Typical ethical issues that are discussed in engineering ethics are professional obligations of engineers as exemplified in, for example, codes of ethics of engineers, the role of engineers versus managers, competence, honesty, whistle-blowing, concern for safety and conflicts of interest.

Recently, a number of authors have pleaded for broadening the traditional scope of engineering ethics. This call for a broader approach derives from two concerns. One concern is that the traditional micro-ethical approach in engineering ethics tends to take the contexts in which engineers have to work as given, while major ethical issues pertain to how this context is 'organized'. Another concern is that the traditional micro-ethical focus tends to neglect issues relating to the impact of technology on society or to decisions about technology. Broadening the scope of engineering ethics would then, among other things, imply more attention to such issues as sustainability and social justice.

The last decades have witnessed an increase in ethical inquiries into specific technologies. One of the most visible new fields is probably computer ethics, but biotechnology has spurred dedicated ethical investigations as well. More traditional fields like architecture and urban planning have also attracted specific ethical attention (Fox 2000). More recently, nanotechnology and so-called converging technologies have led to the establishment of what is called nanoethics. Apart from this, there has been a debate on the ethics of nuclear deterrence.

Obviously, the establishment of such new fields of ethical reflection is a response to social and technological developments. Still, the question can be asked whether the social demand is best met by establishing new fields of applied ethics. This issue is in fact regularly discussed as new fields emerge. Several authors have, for example, argued that there is no need for nanoethics because nanotechnology does not raise any really new ethical issues. The alleged absence of newness here is supported by the claim that the ethical issues raised by nanotechnology are a variation on, and sometimes an intensification of, existing ethical issues, but hardly really new, and by the claim that these issues can be dealt with by means of existing theories and concepts from moral philosophy. An earlier, similar discussion took place concerning the supposedly new character of ethical issues in computer engineering.

The new fields of ethical reflection are often characterized as applied ethics, that is, as applications of theories, normative standards, concepts and methods developed in moral philosophy. For each of these elements, however, application is usually not straightforward but requires a further specification or revision. This is the case because general moral standards, concepts and methods are often not specific enough to be applicable in any direct sense to specific moral problems. 'Application' therefore often leads to new insights, which might well result in the reformulation, or at least refinement, of existing normative standards, concepts and methods. In some cases, ethical issues in a specific field might require new standards, concepts or methods. Beauchamp and Childress, for example, have proposed a number of general ethical principles for biomedical ethics.

These principles are more specific than general normative standards, but still so general and abstract that they apply to different issues in biomedical ethics. In computer ethics, existing moral concepts relating to, for example, privacy and ownership have been redefined and adapted to deal with issues that are typical of the computer age. New fields of ethical application might also require new methods that, for example, discern ethical issues while taking into account relevant empirical facts about these fields, like the fact that technological research and development usually takes place in networks of people rather than being carried out by individuals.

The above suggests that different fields of ethical reflection on specific technologies might well raise their own philosophical and ethical issues. Even if this is true, it is not clear whether it justifies the development of separate subfields or even subdisciplines. It might well be argued that much can be learned from interaction and discussion between these fields, and from fruitful interaction with the two other strands discussed above (cultural and political approaches, and engineering ethics). Currently, such interaction in many cases seems absent, although there are of course exceptions.


We now turn to the description of some themes in the ethics of technology. We focus on a number of general themes that provide an illustration of general issues in the ethics of technology and the way these are treated.

One important general theme in the ethics of technology is the question whether technology is value-laden. Some authors have maintained that technology is value-neutral, in the sense that technology is just a neutral means to an end and accordingly can be put to good or bad use. This view might have some plausibility insofar as technology is considered to be just a bare physical structure. Most philosophers of technology, however, agree that technological development is a goal-oriented process and that technological artifacts by definition have certain functions, so that they can be used for certain goals but not, or only with far more difficulty or less effectively, for other goals. This conceptual connection between technological artifacts, functions and goals makes it hard to maintain that technology is value-neutral. Even if this point is granted, the value-ladenness of technology can be construed in a host of different ways. Some authors have maintained that technology can have moral agency. This claim suggests that technologies can autonomously and freely 'act' in a moral sense and can be held morally responsible for their actions.

The debate whether technologies can have moral agency started off in computer ethics but has since broadened. Typically, the authors who claim that technologies (can) have moral agency often redefine the notion of agency or its connection to human will and freedom. A disadvantage of this strategy is that it tends to blur the morally relevant distinctions between people and technological artifacts. More generally, the claim that technologies have moral agency sometimes seems to have become shorthand for the claim that technology is morally relevant. This, however, overlooks the fact that technologies can be value-laden in ways other than by having moral agency. One might, for example, claim that technology enables and constrains certain human actions and the attainment of certain human goals, and therefore is to some extent value-laden, without claiming moral agency for technological artifacts.

Responsibility has always been a central theme in the ethics of technology. The traditional philosophy and ethics of technology, however, tended to discuss responsibility in rather general terms and were rather pessimistic about the possibility for engineers to assume responsibility for the technologies they developed. Ellul, for example, has characterized engineers as the high priests of technology, who cherish technology but cannot steer it. Hans Jonas has argued that technology requires an ethics in which responsibility is the central imperative, because for the first time in history we are able to destroy the earth and humanity. In engineering ethics, the responsibility of engineers is often discussed in relation to codes of ethics that articulate specific responsibilities of engineers. Such codes of ethics stress three types of responsibilities of engineers: (1) conducting the profession with integrity and honesty and in a competent way, (2) responsibilities towards employers and clients, and (3) responsibility towards the public and society. With respect to the latter, most US codes of ethics maintain that engineers 'should hold paramount the safety, health and welfare of the public'.

As has been pointed out by several authors, it may be hard to pinpoint individual responsibility in engineering. The reason is that the conditions for the proper attribution of individual responsibility that have been discussed in the philosophical literature are often not met by individual engineers. For example, engineers may feel compelled to act in a certain way due to hierarchical or market constraints, and negative consequences may be very hard or impossible to predict beforehand. The causality condition is often difficult to meet as well, due to the long chain from the research and development of a technology to its use, and the many people involved in this chain. Davis nevertheless maintains that, despite such difficulties, individual engineers can and do take responsibility.

One issue that is at stake in this debate is the notion of responsibility itself. Davis, and also for example Ladd, argue for a notion of responsibility that focuses less on blame and stresses the forward-looking or virtuous character of assuming responsibility. Many others, however, focus on backward-looking notions of responsibility that stress accountability, blameworthiness or liability; one proposal along these lines, for example, pleads for a notion of responsibility in engineering that is more like the legal notion of strict liability, in which the knowledge condition for responsibility is seriously weakened. Doorn compares three perspectives on responsibility ascription in engineering – a merit-based, a rights-based and a consequentialist perspective – and argues that the consequentialist perspective, which applies a forward-looking notion of responsibility, is most powerful in influencing engineering practice.

The difficulty of attributing individual responsibility may lead to the Problem of Many Hands (PMH). The term was first coined by Dennis Thompson in an article about the responsibility of public officials, and it is used to describe problems with the ascription of individual responsibility in collective settings. Doorn has proposed a procedural approach, based on Rawls's reflective equilibrium model, to deal with the PMH; other ways of dealing with the PMH include the design of institutions that help to avoid it, or an emphasis on virtuous behavior in organizations.

In the last decades, increasing attention has been paid not only to ethical issues that arise during the use of a technology, but also to those that arise during the design phase. An important consideration behind this development is the thought that during the design phase technologies, and their social consequences, are still malleable, whereas during the use phase technologies are more or less given, and negative social consequences may be harder to avoid or positive effects harder to achieve.

In computer ethics, an approach known as Value-Sensitive Design (VSD) has been developed to explicitly address the ethical nature of design. VSD aims at integrating values of ethical importance into engineering design in a systematic way. The approach combines conceptual, empirical and technical investigations. There is also a range of other approaches aimed at including values in design. 'Design for X' approaches in engineering aim at including instrumental values (like maintainability, reliability and costs), but they also include design for sustainability, inclusive design and affective design. Inclusive design aims at making designs accessible to the whole population, including, for example, handicapped people and the elderly. Affective design aims at designs that evoke positive emotions in users and so contribute to human well-being.

If one tries to integrate values into design, one may run into the problem of a conflict of values. The safest car is, due to its weight, not likely to be the most sustainable one. Here safety and sustainability conflict in the design of cars. Traditional methods by which engineers deal with such conflicts and make trade-offs between different design requirements include cost-benefit analysis and multiple-criteria analysis. Such methods are, however, beset with methodological problems. Various alternatives for dealing with value conflicts in design have therefore been proposed, including the setting of thresholds (satisficing), reasoning about values, innovation and diversity.
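To make this concrete, here is a minimal sketch of two of the approaches just mentioned: a weighted-sum multiple-criteria analysis versus satisficing with thresholds. It is written in Python, and the designs, scores, weights and thresholds are all invented for illustration rather than drawn from any actual study.

    # Three hypothetical car designs, scored 0-10 on conflicting values.
    # All names and numbers are invented for illustration.
    designs = {
        "heavy_safe":  {"safety": 9, "sustainability": 3, "cost": 4},
        "light_green": {"safety": 6, "sustainability": 8, "cost": 7},
        "balanced":    {"safety": 7, "sustainability": 6, "cost": 6},
    }

    # Multiple-criteria analysis: trade values off via a weighted sum.
    weights = {"safety": 0.5, "sustainability": 0.3, "cost": 0.2}

    def weighted_score(scores):
        return sum(weights[c] * v for c, v in scores.items())

    best_tradeoff = max(designs, key=lambda d: weighted_score(designs[d]))

    # Satisficing: set a threshold for each value and accept any design that
    # meets all of them, instead of trading one value off against another.
    thresholds = {"safety": 7, "sustainability": 5, "cost": 5}

    satisficing = [d for d, scores in designs.items()
                   if all(scores[c] >= t for c, t in thresholds.items())]

    print(best_tradeoff)  # "light_green": highest weighted sum (6.8)
    print(satisficing)    # ["balanced"]: the only design meeting every threshold

Notably, the two methods select different designs from the same scores, which is one way of seeing why the choice of method for handling value conflicts is itself morally relevant.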


Technological risks

The risks of technology are one of the traditional ethical concerns in the ethics of technology. Risks raise not only ethical issues but also other philosophical issues, such as epistemological and decision-theoretic ones.

Risk is usually defined as the product of the probability of an undesirable event and the effect of that event, although there are also other definitions around. In general it seems desirable to keep technological risks as small as possible: the larger the risk, the larger either the likelihood or the impact of an undesirable event. Risk reduction is therefore an important goal in technological development, and engineering codes of ethics often attribute to engineers a responsibility for reducing risks and designing safe products. Still, risk reduction is not always feasible or desirable. It is sometimes not feasible, because there are no absolutely safe products and technologies. But even if risk reduction is feasible, it may not be acceptable from a moral point of view. Reducing risk often comes at a cost: safer products may be more difficult to use, more expensive or less sustainable. So sooner or later one is confronted with the question: what is safe enough? What makes a risk (un)acceptable?
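Read literally, this standard definition makes risk an expected value. The following minimal sketch, in Python and with invented numbers, shows what the definition does and does not capture:

    # Risk as probability times effect (expected harm). All numbers invented.
    def risk(probability, harm):
        """Standard definition: probability of an undesirable event times its effect."""
        return probability * harm

    # Two hypothetical failure modes of a product:
    frequent_minor = risk(probability=1e-2, harm=10.0)    # frequent, small harm
    rare_severe = risk(probability=1e-5, harm=10_000.0)   # rare, catastrophic harm

    # Both come out at 0.1: on this definition they are equally risky,
    # even though people may judge their acceptability very differently.
    print(frequent_minor, rare_severe)

That two very different failure modes can receive the same risk number already suggests why the acceptability of a risk cannot simply be read off from its size.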

The process of dealing with risks is often divided into three stages: risk assessment, risk evaluation and risk management. Of these, the second is most obviously ethically relevant. However, risk assessment already involves value judgments, for example about which risks should be assessed in the first place. An important, and morally relevant, issue is also the degree of evidence that is needed to establish a risk. In establishing a risk on the basis of a body of empirical data, one might make two kinds of mistakes: one can establish a risk when there is actually none (a type I error), or one can mistakenly conclude that there is no risk while there actually is one (a type II error). Science traditionally aims at avoiding type I errors. Several authors have argued that in the specific context of risk assessment it is often more important to avoid type II errors. The reason for this is that risk assessment does not just aim at establishing scientific truth but has a practical aim, namely to provide the knowledge on the basis of which decisions can be made about whether it is desirable to reduce or avoid certain technological risks in order to protect users or the public.
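The trade-off between the two error types can be made concrete with a simple one-sided binomial test; the sketch below uses only the Python standard library, and the failure counts, background rate and significance level are invented for illustration.

    from math import comb

    def binomial_p_value(k, n, p0):
        """One-sided p-value: the probability of seeing k or more failures
        in n trials if the true failure rate is only the background rate p0
        (the 'no additional risk' null hypothesis)."""
        return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

    # Invented data: 4 failures observed in 1000 uses, background rate 0.1%.
    p = binomial_p_value(k=4, n=1000, p0=0.001)

    alpha = 0.05  # conventional threshold, chosen to keep type I errors rare
    if p < alpha:
        print(f"p = {p:.4f}: declare a risk (a type I error if the true rate is 0.1%)")
    else:
        print(f"p = {p:.4f}: declare no risk (a type II error if the rate is higher)")

    # Lowering alpha makes type I errors rarer but type II errors more likely;
    # the argument above is that in risk assessment the type II error, missing
    # a real risk, is often the more important one to avoid.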

Risk evaluation is carried out in a number of ways. One possible approach is to judge the acceptability of risks by comparing them to other risks or to certain standards. One could, for example, compare technological risks with naturally occurring risks. This approach, however, runs the danger of committing a naturalistic fallacy: naturally occurring risks may (sometimes) be unavoidable, but that does not necessarily make them morally acceptable. More generally, it is often dubious to judge the acceptability of the risk of technology A by comparing it to the risk of technology B if A and B are not alternatives in a decision.

A second approach to risk evaluation is risk-cost-benefit analysis, which is based on weighing the risks against the benefits of an activity. Different decision criteria can be applied if a risk-cost-benefit analysis is carried out. A third approach is to base risk acceptance on the consent of the people who suffer the risks, after they have been informed about these risks (informed consent). A problem with this approach is that technological risks usually affect a large number of people at once. Informed consent may therefore lead to a 'society of stalemates'.
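Because different decision criteria can rank the same options differently, the choice of criterion in a risk-cost-benefit analysis matters morally. Here is a minimal sketch with invented payoffs, in which costs and harms are already netted out, comparing an expected-value criterion with a worst-case (maximin) criterion:

    # Each option maps to a list of (probability, net benefit) outcomes.
    # Net benefit = benefits minus costs and harms; all numbers are invented.
    options = {
        "adopt_technology": [(0.95, 100.0), (0.05, -500.0)],  # usually good, rare big loss
        "status_quo": [(1.00, 20.0)],                         # modest but certain benefit
    }

    def expected_value(outcomes):
        return sum(p * v for p, v in outcomes)

    def worst_case(outcomes):
        return min(v for _, v in outcomes)

    # The expected-value criterion favours adoption (70.0 vs 20.0) ...
    best_by_ev = max(options, key=lambda o: expected_value(options[o]))
    # ... while the maximin criterion favours the status quo (-500.0 vs 20.0).
    best_by_maximin = max(options, key=lambda o: worst_case(options[o]))

    print(best_by_ev, best_by_maximin)  # adopt_technology status_quo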

Several authors have proposed alternatives to the traditional approaches to risk evaluation on the basis of philosophical and ethical arguments. Shrader-Frechette has proposed a number of reforms in risk assessment and evaluation procedures on the basis of a philosophical critique of current practices. Roeser argues for a role for emotions in judging the acceptability of risks. Hansson has proposed the following alternative principle for risk evaluation: 'Exposure of a person to a risk is acceptable if and only if this exposure is part of an equitable social system of risk-taking that works to her advantage'. Hansson's proposal introduces a number of moral considerations into risk evaluation that are traditionally not addressed, or only marginally so: whether individuals profit from a risky activity, and whether the distribution of risks and benefits is fair.

Some authors have criticized the focus on risks in the ethics of technology. One strand of criticism argues that we often lack the knowledge to reliably assess the risks of a new technology before it has come into use. We often do not know the probability that something might go wrong, and sometimes we do not even know, or at least not fully, what might go wrong and what the possible negative consequences might be. To deal with this, some authors have proposed to conceive of the introduction of new technology into society as a social experiment, and have urged reflection on the conditions under which such experiments are morally acceptable. Another strand of criticism states that the focus on risks has led to a narrowing of the range of impacts of technology that are considered: only impacts related to safety and health, which can be calculated as risks, are considered, whereas 'soft' impacts, for example of a social or psychological nature, are neglected, thereby impoverishing the moral evaluation of new technologies.
