Neuroethics

Research in normal and abnormal brain function raises challenging ethical issues that have spawned a field of scholarship called “neuroethics.”1 Although this classification is of recent vintage, ethical problems have been discussed since the early days of neuroscience research. How will studies of brain function that reveal personality traits affect a person’s privacy? How will knowledge of brain function contribute to answering age-old philosophical questions of whether humans have free will and what factors comprise personal identity? Is the mind simply the brain? Is morality an evolutionarily produced brain function? Newer ethical questions have been introduced by advances in neuroimaging and neurotherapeutics. How will researchers handle incidental findings discovered on functional magnetic resonance imaging (fMRI) scans when healthy people volunteer as controls in research studies? Should fMRI be used for criminal justice purposes such as lie detection or prediction of violent behavior? Can neuroimaging abnormalities mitigate personal responsibility for antisocial acts? How will neural prostheses and neural transplantation affect personal identity and human nature? Should treatments be offered by physicians to enhance normal cognitive, affective, or neuromuscular function? I briefly address these ethical and philosophical questions in this chapter.

The definition of the word “neuroethics” is evolving. At a conference held in San Francisco in May 2002, sponsored by the Dana Foundation and co-hosted by Stanford University and the University of California San Francisco, speakers and commentators attempted to map the boundaries of the emerging field of neuroethics.2 William Safire, the New York Times journalist and Chairman of the Dana Foundation, delivered the opening address. Safire defined neuroethics as the branch of bioethics that raises unique questions because it deals with human consciousness, the centerpiece of human nature. He predicted that human personality and behavior will be changed with future advances in neuroscience, a fact that raises essential ethical issues that demand attention now. Safire suggested that it is these issues that comprise the proper focus of neuroethics.3 In an influential book published in 2005, the cognitive neuroscientist Michael Gazzaniga defined neuroethics more broadly as “the examination of how we want to deal with the social issues of disease, normality, mortality, lifestyle, and the philosophy of living informed by our understanding of underlying brain mechanisms.”4

Adina Roskies observed that the word “neuroethics” could refer either to the ethical issues raised by neuroscience research or to the neurobiological basis of human ethical behavior.5 Nearly all scholars currently using the term, however, intend the former meaning. In a recent article, Antonio Damasio addressed the latter meaning and summarized the evidence for the neural mechanisms underpinning moral behavior.6 Damasio cited natural human experiments, beginning with the celebrated 19th-century case of Phineas Gage, who survived a massive penetrating injury to both frontal lobes but developed an “immoral” personality.7 Damasio then discussed more recent experimental paradigms that used fMRI to identify the brain areas activated in normal persons during moral reflection and judgment.8

The question of who coined the term “neuroethics” remains a subject of debate. In his welcoming address at the 2002 San Francisco conference on neuroethics, Zach Hall of the University of California San Francisco attributed the word to William Safire, explaining that Safire had used it in a conversation with him in 2000 or 2001.9 However, the word “neuroethics” was used earlier. In a 1989 article in Neurologic Clinics, the neurologist-ethicist Ronald Cranford used the term “neuroethical” to describe the unique clinical-ethical problems encountered by neurologists in practice, such as those involving patients with brain death, the vegetative state, dementia, paralysis, or respiratory dependency. Cranford also used the term “neuroethicist” to describe neurologists who are trained in clinical ethics and who, therefore, can make a dual contribution when they serve as members of hospital ethics committees. Cranford argued that neurologists can help analyze neuroethical dilemmas from their unique perspective bridging the gap between clinical neurology and clinical ethics.10 I heard Cranford use the term “neuroethics” in this clinical context during the 1980s.

Most scholars who have used the term “neuroethics” since 2002 restrict it to the ethical issues raised by neuroscience research.11 Although there is no reason for categorically limiting “neuroethics” to research issues (and some recent neuroethics textbook editors also have included clinical topics12), in this chapter I use the term in the conventional sense to describe the unique ethical problems introduced by neuroscience research. The most logical rhetorical solution would be to permit its usage for both research and clinical ethics topics by describing the former as “research neuroethics” and the latter as “clinical neuroethics.” Thus, this chapter considers research neuroethics whereas chapters 11 through 18 consider clinical neuroethics.

Although a young discipline, the emerging field of neuroethics has generated a new professional society that conducts scholarly meetings13 and a new dedicated professional journal.14 The Society for Neuroscience, the premier scientific organization dedicated to neuroscience research, now includes the study of ethical issues in its mission statement.15 The scholars engaged in all these neuroethics activities generally restrict their purview to the ethical and philosophical issues raised by neuroscience research. The Dana Foundation in New York remains a leading sponsor of scholars working in this area.


NEURO-ENHANCEMENT

One of the earliest and most enduring neuroethics issues centers on the propriety of using neuropharmacology and other neurotechnologies to improve normal human function: the so-called enhancement debate. The traditional focus of medical practice has been to treat disease and disability with a goal of cure or at least re-establishment of normal functioning. The enhancement debate is controversial because it concerns individuals who already have normal functioning and asks whether it is desirable or justified to use medical means to improve their functioning to levels above normal. Of course, people have used drugs such as alcohol, nicotine, and caffeine for this purpose for centuries. The ethical issue centers on whether providing requested enhancements for the healthy is a proper activity of the profession of medicine.

It is helpful to conceptualize the ethical issues inherent in enhancement by inspecting a few prototypic cases: (1) A normal student asks a physician to write a prescription for amphetamine to improve his concentration and attention so that he can score higher on a standardized test; (2) A young mother follows the advice of the psychiatrist Peter Kramer that it is desirable for everyone to optimize mood by taking an antidepressant drug such as Prozac,16 and requests a prescription although she is not depressed; or (3) A competitive amateur swimmer asks a physician to prescribe erythropoietin to improve his exercise endurance. These cases clearly would be classified as enhancement and not therapy.

There are other cases, however, in which the distinction between enhancement and therapy is less clear. People with short stature, in some studies, are not as successful in life as people of normal stature. Many parents of children with short stature wish them to receive growth hormone treatments to achieve normal stature to improve their appearance and confidence because they believe it will add to their overall health and well-being.17 Whether one considers the prescription of growth hormone in this setting as treatment or enhancement is ambiguous and will be answered differently by physicians with different values.18

In a review of the enhancement debate as it pertains to neurologists, Anjan Chatterjee pointed out that there are three general areas in which enhancement technologies could be used in clinical neurology: (1) improving normal motor skills and movement; (2) improving normal cognitive function, concentration, attention, and memory; and (3) improving normal mood and affect.19 Chatterjee chose the infelicitous term “cosmetic neurology” to refer to these enhancement categories. I believe that the adjective “cosmetic” is misleading in this context because it erroneously suggests that the enhancements in question are intended to improve appearance or beauty whereas, in reality, they are prescribed with the intent to improve function. (The use of biotechnology for truly cosmetic purposes raises its own set of ethical problems, but they are different from those under consideration here.20) Chatterjee pointed out that although the ethical problems of enhancement technologies are serious, the “hand-wringing of ethicists” over them is unlikely to restrain their development, which he regards as inevitable. He therefore warned neurologists to become aware of these technologies and to consider their position on prescribing them, because they will soon be asked by normal persons seeking “better brains” to prescribe one or more of these agents.

Enhancement of motor activity can be accomplished with anabolic steroids, amphetamines, erythropoietin, and other agents. Enhancement of cognitive activity can be produced by amphetamines and other stimulants, cholinesterase inhibitors such as donepezil, and newer agents affecting cyclic AMP, glutamate receptors, and NMDA receptors. Mood and affect can be enhanced with selective serotonin reuptake inhibitors, corticotropin-releasing factor, neuropeptide agonists and antagonists, and other agents.21 Non-pharmaceutical enhancement technologies could include transcranial magnetic stimulation.

The ethical issues of enhancement have been debated extensively.22 In a critical review, Erik Parens pointed out that much of the problem results from a lack of clarity in the distinction between enhancement and treatment. He argued that the distinction is justified in some cases but not in others.23 In a more recent analysis, he commented that the differences between supporters and critics of the enhancement-treatment distinction have been overblown and that the two camps have more in common than they think.24 Those claiming an important distinction point out that the purpose of the medical profession is to treat disease and disability with the goal of improving deficient function to normal levels; on this view, enhancement is not an appropriate use of medical services. Those criticizing the importance of any distinction between enhancement and therapy generally embrace a broader vision of health and disease in which technologically induced improvement of function is considered therapeutic, irrespective of the starting point.

Martha Farah classified the ethical issues of brain enhancement into three general categories: (1) health issues involving safety, side effects, and unintended consequences; (2) social effects on those who do and do not choose enhancement and how one group affects the other; and (3) philosophical issues in which brain enhancement “challenges our understanding of personal effort and accomplishment, autonomy, and the value of people as opposed to things.”25 The safety issue is especially important because it would be difficult to justify a serious side effect of a brain enhancement treatment when the person taking it had no illness in the first place. Although some have argued that the doctrine of informed consent can dispose of this concern, that claim is not true when parents permit enhancement treatments of their children. Even scholars who advocate that society should permit the widespread availability of brain enhancement to consenting adults become protective when it comes to administering such treatment to children.26


The question of social effects centers on concerns about fairness. From a public policy and distributive justice perspective, how can we justify improving the lives of certain normal people and not others, when the induced improvement will provide them with an unfair advantage in a competitive society? The unfairness is magnified when the enhancement technologies are expensive and only wealthy people can afford them, thereby promoting or securing their privileged status.27 For example, in some American communities, the threshold for diagnosing attention deficit hyperactivity disorder (ADHD) and prescribing stimulant treatment has been lowered to include normal children.28 In one report, over 30% of boys in some schools took Ritalin or other stimulant drugs for alleged ADHD.29 These data raise social questions: are we witnessing direct or indirect coercion of healthy people into treatment, and widespread physician prescribing for brain enhancement rather than for disease?

The philosophical criticisms center on the nature of human beings and how, eventually, they might be ineluctably altered if the concept of enhancement were accepted unconditionally. Have people who have received brain enhancement had their humanness changed in some fundamental way? Some bioethics scholars worry that with repeated rounds of increasing enhancements over many generations, the very characteristics that make people human eventually will be lost.30 Supporters counter that if cognitive enhancement drugs were safe and effective, “they would produce significant societal gains” and “competent adults should be free to decide whether or not to use [them].”31 The most zealous advocates of enhancement, “transhumanists,” hope enhancement technology will achieve a “posthuman” future of enhanced intellect, freedom from disease, immunity to aging, increased pleasure, and novel states of consciousness.32


NEUROIMAGING: INCIDENTAL RESEARCH FINDINGS

The development of increasingly sophisticated functional neuroimaging techniques, particularly fMRI, has introduced a set of interesting neuroethical problems. One of the earliest questions involved how researchers should handle incidental findings disclosed on research fMRIs of normal volunteers who serve as control subjects.33 Consider the following two prototypic cases. (1) A normal person volunteered to serve as a control in an fMRI study of normal language function and his fMRI disclosed what might be a mass lesion in the brain. Should the researcher disclose this finding to him? How? Who is responsible for notification? What should the subject be told? Who arranges further assessment or treatment? (2) A young woman suffered a subarachnoid hemorrhage from a giant aneurysm. A year earlier she had served as a normal volunteer in an fMRI study of handedness and cerebral dominance. She now asks the researcher whether her aneurysm could or should have been detected a year earlier on her research fMRI. What responsibility do researchers have to screen research scans for clinical abnormalities? Should research scans always be interpreted by qualified radiologists or is the researcher’s review sufficient? What MRI sequences are necessary for a researcher to declare a scan normal or not worthy of comment? What is the extent of the clinical duty, if any, that researchers owe their normal volunteer subjects?

It is helpful first to know whether this is a common problem. Judy Illes and colleagues measured the frequency and severity of incidental findings on brain MRIs of research volunteers who were believed to be neurologically healthy. They classified incidental findings into four categories of seriousness: no referral, routine, urgent, or immediate. They found the overall incidence of incidental findings to be 6.6%. Older subjects and men were more likely to show abnormalities. All of the findings in the older cohort were classified as routine, but 75% (3 out of 4) of the incidental findings in the younger cohort were classified as urgent.34 In a study of notification preferences among healthy control subjects who had participated in neuroimaging studies, Matthew Kirschen and colleagues found that 97% of these subjects wished to have incidentally discovered findings communicated to them irrespective of their potential clinical significance, and 59% preferred that the communication be conducted by a physician affiliated with the research team.35


In an editorial, Robert Grossman and I offered several comments about the study by Illes et al.36 We observed that because fMRI research is performed by psychologists and other neuroscientists who are not skilled in clinical image interpretation and because the research scan sequences often are not thorough, many potential incidental findings will not even be discovered using the study protocol. If all research scans required clinical standards to be met for completeness of sequences performed and required trained radiologists to interpret them, the added costs to the research protocol would be large and would create an impediment to research. If a scan abnormality is seen in a research protocol, who should receive the result and in what setting? To what extent should researchers receive training about those incidental findings that are present normally and therefore create no reason for concern or further assessment? We suggested that however these questions ultimately are answered, the process of consent and disclosure should follow established ethical guidelines for research. The informed consent process and form should itemize all the risks of participating, including: (1) increased anxiety resulting from learning that an abnormality may be present; (2) financial costs resulting from additional testing that may become necessary to better define the purported abnormality; and (3) health risks resulting from additional tests to further clarify the purported abnormality. The consent form also should clarify whether the scan sequences are adequate for clinical purposes, whether the scan will be interpreted by a trained clinician, who would notify the subject of a possible abnormality and how, and who would be responsible for paying for any further medical care required because of the finding.

The Working Group on Incidental Findings in Brain Imaging Research published a consensus statement in an article in Science in 2006.37 They indicated that their findings and conclusions did not represent the official position of any agency and were intended simply to further the ongoing discussion. They concluded: (1) it is ethically desirable for suspicious incidental findings to be disclosed to the research subject because of the subject’s right to know, despite the lack of professional guidelines addressing whether incidental findings should be disclosed, how, and by whom; (2) the potentially harmful consequences of false-positive reports on normal volunteers have not been adequately studied; (3) wide variability exists in when and how incidental findings are reported to human subjects; (4) vulnerable subjects may require special assistance in arranging follow-up medical consultations; (5) it is desirable, when possible, to have a physician validate the presence of a suspected incidental finding; (6) it is desirable to have a physician communicate the incidental finding to the subject, when possible; (7) institutional review boards should require clarification of the complete process for handling incidental findings; (8) there is no ethical requirement for the researcher to obtain clinical scans on the subject; and (9) research needs to be performed to study the costs and benefits of identifying incidental findings and referring subjects for appropriate medical follow-up.


NEUROIMAGING: THOUGHTS AND PREFERENCES

The widespread use of new and accessible noninvasive neuroimaging techniques, such as fMRI, raises social and legal neuroethical issues including privacy of thought, prediction of violence or disease, lie detection, and personal responsibility for behavior.38 As discussed in chapter 17, there now are extra layers of confidentiality and privacy protection required for genetic test results because of their potential for abuse and unjustified discrimination if unauthorized persons were to gain access to them. A similar risk potential exists for fMRI data that might represent private human thoughts. Data on how a person thinks could influence how that person is treated by society, raising the potential for unjustified discrimination. Early experiments in the cognitive neuroscience of morality attempted to localize the brain regions activated during moral reflection and deliberation.39 More recent studies by Michael Koenigs and colleagues showed that previous damage to the ventromedial prefrontal cortex (an area necessary to generate normal social emotions) leads to an overuse of utilitarian reasoning and difficulty in distinguishing right from wrong actions.40 It is conceivable that once normal patterns of activation for moral capacity have been established, a person having an abnormal pattern may be subject to sanctions or discrimination in employment, education, housing, insurance, or the criminal justice system.

Psychologists have used functional neuroimaging to assess social attitudes and preferences. In a much discussed study of racial attitudes, Elizabeth Phelps and colleagues used fMRI to study the level of amygdala activation (thought to represent fear) in white volunteers who viewed photographs of unfamiliar black men’s faces and unfamiliar white men’s faces. They also performed two psychological tests indirectly measuring subjects’ racial attitudes. They found a moderately strong correlation between the degree of amygdala activation on fMRI when the subjects viewed the black men’s faces and high negative (prejudicial) scores on the subjects’ tests of racial attitudes.41
