Psychiatry and psychedelia

Timothy Leary, the influential psychologist who popularized the use of LSD in the 1960s, is a polarizing figure in the debate on psychiatry and psychedelic drugs. While revered by some as the father of psychedelic research, he is reviled by others for his unconventional research methods, which culminated in Leary's lab, the Harvard Psilocybin Project, being shut down by the university in 1963. Following this setback, Leary established a private laboratory in a mansion in upstate New York, where he continued conducting studies on his friends and followers. Leary's previously unseen recordings and notes from this period have recently been purchased and published by the New York Public Library, allowing us insight for the first time into the groundbreaking work done with these then-novel substances. The backlash against Leary and his colleagues' extremist views on drug use, religion and politics (Richard Nixon at one point called Leary "the most dangerous man in America") arguably discredited any real therapeutic benefits psychedelics may have for over 40 years. Experimentation and research with LSD, psilocybin (the active compound in hallucinogenic or "magic" mushrooms) and other psychedelic substances has been highly stigmatized and virtually impossible to conduct in a responsible laboratory setting. However, the restrictions on such research have gradually been lifted, and new studies are cautiously appearing that herald the clinical benefits of drugs like psilocybin and MDMA, or ecstasy. These drugs are thought to help individuals who suffer from severe anxiety, post-traumatic stress disorder (PTSD), obsessive-compulsive disorder (OCD) and depression when paired with talk therapy. Their effectiveness is thought to lie in their ability to engender feelings of empathy, love and connectedness, fostering a sense of unity and compassion for oneself and one's fellow man.
These feelings may create an easier, more open environment in which patients can discuss their concerns while being safely guided by clinicians on a therapeutic trip.

Some of the most notable research coming out of this renaissance is being conducted at Johns Hopkins University, led by Dr. Roland Griffiths. Griffiths has shown that psilocybin can effectively reduce levels of depression and anxiety in terminally ill cancer patients, anecdotally helping some to accept and come to terms with their approaching mortality.

Another promising application of this research is in patients with PTSD. With the recent announcement that antipsychotic drugs, frequently prescribed to treat PTSD in combat veterans, are no more successful than placebos at improving symptoms, it seems that a new method is needed to help those suffering from severe trauma. Antidepressants are already known to be ineffective at treating PTSD, and doctors had hoped that antipsychotic medication, a stronger class of mood-altering drugs, would be more successful at treating the associated symptoms, such as flashbacks, memory suppression, outbursts of anger, anxiety, anhedonia and depression. However, after six months of treatment only 5% of patients who received the drug had recovered, a rate not significantly different from that of patients who had received a placebo.

Some researchers are now looking at more unconventional treatments for PTSD, such as psychedelics. Clinical researchers in both the US and the Netherlands have shown that MDMA can be effective at reducing PTSD symptoms in survivors of rape and other traumatic events. Neurologically, MDMA stimulates the release of serotonin, a neurotransmitter linked to feelings of happiness and whose depletion is commonly implicated in depression. This activation takes place throughout the brain, but much of it is focused in the dorsolateral prefrontal cortex (dlPFC), a region involved in higher-order cognition, memory and associative learning. Simultaneously, there appears to be a decrease in activity in the amygdala, an area involved in fear and emotion. Taken together, these two changes in neural activity are thought to enhance memory and rational, cognitive processing of the traumatic event while down-playing the aversive negative emotions connected to it. An individual would therefore be able to replay the more painful details of a memory and rationally analyze and come to terms with them, facilitated by a boost in mood from serotonin and disconnected from the typical painful affective responses. This could help a patient "relearn" their associations with the memory, shedding the negative and building positive affective associations for these recalled experiences.

However, just as there are side effects with any drug, so too are there with MDMA. The most notable and potentially harmful is a decrease in serotonin after the high has worn off. When the brain is flooded with a neurochemical, it regulates itself to become less receptive to that neurotransmitter, adapting to restore homeostasis. The brain therefore becomes relatively depleted of serotonin in the days after taking MDMA, and after repeated use (or abuse) of the drug this effect can become pervasive and long-lasting. While the depletion is not particularly severe after minimal use, in a patient already struggling with depression or anxiety even this temporary loss could be devastating.

Banning potentially valuable clinical research because of social concerns and constraints only hurts scientific progress and the community at large. It is important to keep in mind, however, that these psychedelic substances are powerful drugs with potentially severe consequences. They should be investigated, as their benefits to clinical populations could be immense, but they should still be used carefully, as much is still unknown (just as much is unknown about most drugs, prescription or otherwise) about their mechanisms and effects. Responsible research is the best way to investigate the therapeutic possibilities of these drugs, and the existence of methodical record-keeping like Leary's can only help us in our quest to understand these substances and their effects on the mind.

The neuronal defense

There's been a lot of discussion recently about structural and hormonal changes in the brain being to blame for misbehavior, whether it's a philandering husband (or senator) or a psychopath. To some extent these are valid arguments: higher testosterone levels have been linked to sensation seeking and greater sexual desire, and abnormalities in the limbic system, particularly the amygdala, which processes fear and emotion, and the frontal cortex, which is in charge of inhibition and rational thought, are often seen in people who commit crimes. However, to use these structural phenomena as excuses, as in "My brain made me do it," is the same as proclaiming, "Yes, I did this." Obviously there are rare, extenuating circumstances in which an individual's actions are truly no longer under their own control, such as a tumor in the frontal lobe changing a person's temperament and personality. But for the vast majority of individuals, we are our brains, and saying you are "pre-wired" to cheat or fight or steal is not an excuse. If anything, it is a greater indication of potential recidivism and an added incentive for either punishment or preventative measures. Excess testosterone is not a pathology like schizophrenia or mental retardation, which can be used as defenses in court for criminal actions. Additionally, if you blame chemicals like testosterone or a lack of oxytocin for misbehavior, then what is to stop us from exonerating people who commit crimes while on a synthetic drug like crack cocaine or PCP? And seeing as how presumably not all men with increased testosterone cheat, and not all individuals with abnormal amygdalas commit crimes or become sociopaths, it is difficult to argue that your brain and neurotransmitters make you do something when these same conditions do not compel another down a similar path.

David Eagleman's article in The Atlantic is a particularly insightful and eloquent investigation into both sides of this issue, which I highly recommend. Instead of focusing on the question of guilt and the implications that recent advances in neuroscience and neuroimaging have for culpability, Eagleman wisely shifts his focus to sentencing and more constructive ways to incorporate our crude new knowledge of the brain into the justice system. He suggests concentrating on the potential for recidivism and reform when determining sentencing, instead of retribution. Drug courts have already started shifting towards this perspective, supported by the recent initiative by the Global Commission on Drug Policy marking the 40th anniversary of the War on Drugs. Not only is it important to help drug users receive treatment instead of punishment, but our economy simply cannot accommodate the deluge of drug-related crimes into the penal system, most strikingly demonstrated by California's decision this month to release 3,000 prisoners before their sentences were up due to a lack of resources.

Child criminal courts have also dealt with the issue of neuroanatomical defenses for quite some time, as it is well established that the frontal cortex is the last area of the brain to finish developing, not reaching full maturation until the mid-20s. Countless juvenile defense attorneys have used this argument to insist that their client was not a rational individual at the time of the crime and therefore should not be held accountable for their impulsive and illegal actions. This is certainly a valid point, and one that is typically taken into consideration in sentencing, with prolonged punishments thought to be excessive and insensitive to the changes the individual will undergo over the next 5-10 years. But it is important to bear in mind that not all 15-year-olds commit crimes. This universal neural stage of adolescence that we all pass through is therefore not necessarily a credible criminal defense; otherwise all teenagers would be running rampant and wreaking even more havoc than they already do. Likewise, there are innumerable studies citing the increased risk of offending in impoverished or violent areas, yet this is not used as an excuse when a crime is committed. It is absolutely a reason to reform the social system that creates these pockets of poverty and risk, but it does not compel juries to acquit defendants simply because of the neighborhood they were raised in.

At some point, people must take responsibility for their actions and face the consequences, not blame an integral part of themselves for going rogue and acting out of character. When you make a decision it is your brain acting and your neurons firing; you cannot excuse an action by claiming you could not control these impulses. There is no outside force urging you to act or not, for that is your own will being administered and carried out. Eagleman's idea of a spectrum of culpability is a sensible one that I support, and I fully agree that in the vast majority of offenses reform and rehabilitation, rather than retribution, should be the goal. But this still leaves the topic rife with ambiguity: where do you draw the line? At what point will we stand up and take responsibility for our own actions?

(Thanks to Tristan Smith for The Atlantic article.)

Pathologizing the norm: Follow-up

For those of you who are interested in this debate, there's a great new two-part article in the New York Review of Books by Marcia Angell questioning "The Epidemic of Mental Illness". The articles summarize three new books concerned with the prescription frenzy we are in the midst of and how this reliance on psychoactive medication came about. She addresses the problem of treating psychiatric disorders as chemical imbalances and the dubious efficacy of these drugs at improving symptoms at all. I highly recommend this read, as well as the second part in the series, "The Illusions of Psychiatry", for anyone concerned about our mental health system. One of the most resounding points she makes in this second piece is the perpetual expansion of the diagnoses listed in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM). With every edition of the DSM there are more and more "disorders" that we have pathologized and created, and with the upcoming publication of the DSM-V it is certain that there will be a slew of new problems we can claim for ourselves and put a name to. Angell succinctly describes this problem, stating, "Unlike the conditions treated in most other branches of medicine, there are no objective signs or tests for mental illness—no lab data or MRI findings—and the boundaries between normal and abnormal are often unclear. That makes it possible to expand diagnostic boundaries or even create new diagnoses, in ways that would be impossible, say, in a field like cardiology."

Finally, she takes to task the drug companies, which are more involved in psychiatric treatment than in any other medical field. This applies not only to clinicians and psychiatrists with private practices, but also to research institutions, hospitals, universities, policy makers, patient advocacy groups, educational organizations and the APA itself.

Angell's writing takes a good hard look at the mental health system, and while at times she makes some uncomfortable points, these are important questions that need to be addressed.

(Thanks to Emily Barnet for the Angell articles.)

When is euthanasia an ethical option?

Dr. Jack Kevorkian, the infamous suicide doctor of the 1990s, passed away last week at the age of 83. While his methods and criteria were at times questionable, seeking out publicity and media attention for his credo and often working out of his old beat-up Volkswagen van, he crucially brought the topic of euthanasia to national attention. The debate about the ethics and humaneness of this procedure will grow ever more important as the health of the baby boomer generation begins to deteriorate, and it unfortunately must be discussed as a viable option in the course of treatment. Largely instigated by Dr. Kevorkian's efforts, the Oregon Death with Dignity Act was passed in 1997, long before health care reform and the so-called "death panels" dispute. Thus far, the Act has assisted 525 individuals in ending their own lives, and similar laws have been passed and upheld in Washington state and Montana. The laws set stringent criteria determining who is eligible, and in Oregon this includes whether the patient suffers from a mental illness. On the surface this seems like a necessary, sensible and humane criterion. Upon closer inspection, however, it raises problems of feasibility, for who among those suffering from a terminal illness and wanting to end their lives is not depressed? The Beck Depression Inventory (BDI) is a widely used and respected neuropsychological questionnaire for assessing depressive symptoms. While it is by no means as comprehensive as the DSM-IV diagnostics, it does provide a relatively sufficient snapshot of an individual's current mood and state of mind. Going by its questions, though, it seems doubtful that anyone in a position to take advantage of Oregon's Act would qualify. The BDI asks about recent weight loss, insomnia, sexual interest, thoughts of suicide, and general mood and interest in life.
Surely someone who is terminally ill and considering ending their life would not be as interested in sex, food and the day-to-day goings-on around them. Assisted suicide is not merely another option for those contemplating it; it is a last resort.

Along these same lines, another contentious patient group to consider in this debate is those diagnosed with Alzheimer's disease or dementia. Anyone who has a family member suffering from these disorders knows how debilitating, humiliating and dehumanizing they are. It is difficult to imagine that individuals in the late stages of Alzheimer's take much satisfaction or joy from their lives, and there are numerous cases, both anecdotal and artistic, of patients ending their lives while they still maintain some semblance of control over them. However, these patients are also widely deemed unable to provide informed consent for medical procedures, and are thereby explicitly excluded from the above laws. This creates another problem, in which the individuals who may be most likely to elect for this assistance are not eligible to obtain it.

I realize that this is an incredibly sensitive and controversial topic, but it needs to be discussed as both the future of our medical system and the health of our parents and grandparents deteriorate. No one wants to think that they will need to consider this decision for themselves or their family members, yet the issue must be addressed as the demand for health care increases and the supply dwindles.

Apart from the logistical question of treatment availability, the much larger issue at stake here is the humaneness and ethics of this approach. Every patient suffering from a debilitating terminal illness should have the right to determine their own course of treatment, including end-of-life care plans. Do-not-resuscitate (DNR) orders are commonplace in hospitals and hospice care, yet the active rather than passive implications of such orders are much more difficult to initiate and carry out. And when the patient no longer has full mental or emotional capacity, the decision becomes all the more tenuous and ethically and emotionally demanding. Yet watching a loved one's mind and body deteriorate is torturous for all parties involved. Explicit end-of-life plans should be detailed by every individual as they age and discussed with family members in case of an emergency.

Currently Belgium, Colombia, Luxembourg, the Netherlands and Switzerland all allow physician-assisted suicide in some form, and there is a growing underground tourism industry to these countries for this specific reason. Perhaps as the demand for this type of treatment increases, policies adequately addressing it will follow.

(Thanks to Steve Smith for the idea for this post.)

Pathologizing the norm

Going into any introductory psychology course, students are warned that the basic education they are gaining does not make them experts in the field. They are cautioned against diagnosing friends and family members with their scant knowledge and are reminded that there are many nuances of both personality and personality disorders to which they are far from privy. A stirring op-ed piece in the New York Times recently highlighted the perils of ordinary citizens diagnosing themselves and their loved ones with Alzheimer’s disease or dementia. More and more, however, it seems that clinicians and researchers in psychology and psychiatry are at risk of making this same mistake by pathologizing natural neuropsychological slips and common cognitive errors. Neuropsychological assessments involve a series of challenging, and at times painstaking, tests of memory, decision-making and cognitive flexibility, among other executive functions. Standardized ranges for these scores are provided from the wider population, much as for an IQ test. These assessments are particularly useful in neurological patient populations (such as victims of a stroke or a brain tumor) and in the elderly to assess cognitive decline, just as the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders) and the MMPI (Minnesota Multiphasic Personality Inventory) are helpful in a therapist’s or clinician’s office. However, these tests, as well as “significant” real-life examples, are now being used as evidence of disorder in normal individuals.

Nowadays, misplacing your car keys can be seen as a precursor to dementia, and blanking on an old acquaintance’s name is indicative of Alzheimer’s. Likewise, niche expertise is an example of savantism and social awkwardness a sign of long undiagnosed Asperger’s syndrome, which is just a short step away from autism on the spectrum.

But this is what we have to remember and what is getting lost in this dichotomous system of diagnoses: all of these disorders or impairments lie on a spectrum. And the ultimate litmus test for a disorder is not how poor one’s verbal recall is, but instead how much distress this impairment causes. The world of psychiatric and neuropsychological diagnoses is far from clear-cut and these classifications must be based on more than just behavior. The perception and attitude of the patient must be taken into account, including whether this person even considers themselves to be a patient in the first place.

Similarly, over the past twenty years the diagnosis of ADD/ADHD (attention deficit / attention deficit hyperactivity disorder) has risen dramatically, as has the subsequent backlash against over-diagnosing and over-medicating society’s children. Before running to the doctor's office or the prescription pad, it is important to remember that kids are squirmy, and that no one, college students and professors alike, can maintain disciplined attention during a tedious lecture.

Everyone experiences memory loss as they age, just as we all feel sadness over the natural cycle of our emotions. Unhappiness is a universal human feeling that everyone goes through from time to time, and is not indicative of the pervasive, demoralizing moroseness of true depression. Emotion, attention and memory are all fluctuating human traits and must be remembered as just that: natural and transient. Our culture is so eager for a quick fix, to get rid of any feeling of discomfort and receive instant release. But sometimes it is important to experience these sentiments, to sit and work through our problems and wrestle with our shortcomings. This is in no way meant to minimize the tribulations that accompany these very real disorders, but to serve as a reminder that all of us are flawed, mentally, physically and emotionally, and if we pathologize these feelings, these struggles, then we may miss out on the robustness of life.