Monday, July 09, 2018

Too Bad To Be True: Famous Psych Experiments And How They Lied

Towards the end of my relatively brief sojourn at a small editorial services company in the early 1990s, I had an official conversation with a management consultant hired to help us to--well, I was never quite sure what he was hired to do.

But he was a genial and intelligent older guy.  In the course of our conversation he told me about what is known as the Stanford Prison Experiment, in which students simulating a prison were split into two groups, guards and prisoners.  The social scientist conducting the experiment took the role of superintendent, and his assistant played the warden.  (More details of the experiment are on the Wikipedia page.)

During the experiment the guards became authoritarian and even sadistic.  The prisoners became passive, and turned on each other.  What this all meant, the consultant said, was that we inevitably become our roles in an organization.  No matter how we think we will behave, our position in the power structure dictates how we actually will behave.  There could be no doubt of this, he said, because the experiment had been replicated many times.  He seemed to suggest he'd taken part in a similar experiment.

Shortly thereafter I left the organization, though not entirely because of this conversation, or the explanation it supposedly provided for how some people in the organization actually were behaving.  It was only recently that I learned that almost everything the consultant told me was untrue, overblown or based on fraud.

The Stanford Prison Experiment, a classic in psychology texts, was itself a fraud.  According to this recent article in Vox: "But its findings were wrong. Very wrong. And not just due to its questionable ethics or lack of concrete data — but because of deceit."
The guards were coached; a prisoner was acting.  It was fixed.  But this blatant fraud is only one of many, many lavishly publicized experiments called into question or proven to be bullshit.  The Vox article has many links to the various problems.

For supposedly scientific experiments, the biggest problem has been how few of these startling conclusions could be replicated in subsequent experiments.  (The consultant was quite wrong in saying the Stanford study was often replicated.  It was never successfully replicated.)

But even studies that apparently were replicated are rife with problems that belie their conclusions, including the most famous: the Milgram experiments, in which test subjects repeatedly gave what they thought were electric shocks to a person they thought was another test subject, because they were told to do so.  Results showed that a high percentage of participants kept giving shocks even when the receiver was evidently in pain.

That initial experiment has also been called into question: "In 2012, Australian psychologist Gina Perry investigated Milgram's data and writings and concluded that Milgram had manipulated the results, and that there was a 'troubling mismatch between (published) descriptions of the experiment and evidence of what actually transpired.' She wrote that 'only half of the people who undertook the experiment fully believed it was real and of those, 66% disobeyed the experimenter.'"

But this experiment apparently was replicated many times with the same conclusions--or was it?  Critics point out that failed attempts to replicate were unlikely to have been published.  A statistical study of the later experiments in various places and under various conditions showed that the percentage of those who gave the full shock treatment varied from 28% to 91%, which suggests at the very least that time, place and choice of test subjects matter a great deal.
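To make that file-drawer worry concrete, here is a toy simulation of my own--not drawn from any of the studies above; the 60% publication cutoff, the sample sizes and the number of labs are invented purely for illustration.  It assumes obedience rates genuinely vary from lab to lab across that 28% to 91% range, and that only the more dramatic results get published:

```python
import random
import statistics

random.seed(1)

def run_study(true_rate, n=40):
    """Simulate one replication attempt: n subjects, each fully
    complying with probability true_rate; return the observed rate."""
    return sum(random.random() < true_rate for _ in range(n)) / n

# Assumption for illustration only: the true rate of full compliance
# varies from lab to lab across the 28%-91% range reported above.
true_rates = [random.uniform(0.28, 0.91) for _ in range(200)]
observed = [run_study(rate) for rate in true_rates]

# Assumption: journals prefer striking results, so only replications
# finding at least 60% compliance make it into print.
published = [rate for rate in observed if rate >= 0.60]

print(f"mean compliance, all attempts:   {statistics.mean(observed):.0%}")
print(f"mean compliance, published only: {statistics.mean(published):.0%}")
print(f"attempts: {len(observed)}, published: {len(published)}")
```

Under those invented assumptions, the published record comes out both more uniform and more damning than the full set of attempts--which is exactly the distortion the critics suggest.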

A lot of these psych experiments just don't pass the smell test--the experiments are poorly designed, and the subject pool is too small, too limited (mostly white college kids) to merit universal conclusions.  (If you don't believe me, that's the more seasoned analysis of the eminent professor of psychology Jerome Kagan.)

Yet those universal conclusions and even more universal extrapolations are made, and not just by management consultants.  They are made above all by best-selling authors like Daniel Kahneman, Robert Sapolsky and Dan Ariely.

In a review of Ariely's 2008 book Predictably Irrational for the San Francisco Chronicle--a review I now consider one of the best I wrote--I noted some of the psychological experiments he wrote about, which, as it happens, include at least one thoroughly debunked since, as noted in the Vox article above.  I expressed my doubts about their conclusions (due especially to the age, racial and cultural biases inherent in testing mostly or only college students), but I also questioned the author's overall assertion: "While Ariely's stated goal is to understand the decision-making processes behind behavior ("yours, mine, and everybody else's"), he may be overreaching in the applicability of his conclusions. "We all make the same types of mistakes over and over, because of the basic wiring of our brains," he writes, but he presents no evidence of this causal relationship."

That is, not only are the experimental conclusions invalid, but the reason he gives for them, while typical, is totally unsupported by other evidence: that it's because of the wiring of our brains.

The metaphor of our minds or brains as computers, with certain aspects being "hard-wired," is deceptive enough even when not taken literally, though indeed most people who use the metaphor do take it that literally.

Our brains may be like computers in some respects, but mostly, not at all. And our brains or minds are definitely not actual computers, any more than they were telephone exchanges or steam engines or clockworks, which were the metaphors (often taken literally) of previous times.

There are, for one thing, no wires in our heads, hard or otherwise. It's amazing how glibly psychologists and economists assert they know anything about "wiring" in everybody's brains.  I doubt they know much about the wiring in their cars.


The Milgram experiments have been an obsession of mine for years because of a personal connection.  They were conducted at Yale from the early 1960s into the early 1970s.  I was in New Haven in 1970, and answered an ad in the newspaper for participants in an experiment--the ad was nearly identical to the ones from the '60s that enticed participants into the Milgram experiments.

Those experiments depended on deception, beginning with their purpose.  Subjects volunteered for what they thought was an experiment in memory.  I was enticed by the ad--mostly by the money offered ($25 sticks in my mind), as I was at loose ends at the time.  So I called the number, but instead of making an appointment immediately, I asked questions first.  I got only vague answers and I smelled deception, so I didn't participate.

Later I read accounts of these experiments, at least one of which claimed that nobody had refused to give the electric shocks as ordered. (The actual percentage registered in the first series was 65% compliance.)  I was furious. If I had participated, I certainly would have refused.  When I told my story to a prominent social scientist, he cautioned that I could not know how I would actually behave in the circumstances.

I may not know how I would respond under all circumstances, but I certainly do know how I would have responded under those circumstances.  If for no other reason, it was 1970 and I was 24!  I had long hair, I was a veteran peace activist and Vietnam war protester with a record of defying authority, including college science departments.  What's the likelihood that I would inflict pain on an innocent person because some asshole in a white coat told me I had to?

[Photo: New Haven Green, 1970]
And of course it wouldn't just be me.  There were students on the streets angrily rebelling against college administrations, against participation of science departments in the military industrial complex, plus hippies and Yippies and so on gleefully defying authority in general.  The idea that any of them would sit there obediently pressing the button is ludicrous.

But then, they were unlikely to volunteer in the first place.  One also suspects that if they did, they likely would have been screened out.  Which is the larger issue in the test subject problem: who are these people who volunteer to be test subjects and why?  And conversely, who wouldn't even consider it?

What is the mindset of someone who volunteers, or who is doing it for the money?  If they do it for the money, aren't they predisposed to do what they're told?  And even volunteers--why would somebody volunteer for an experiment and then refuse to take part in it?  A volunteer would more likely have faith in the experimenters, in their expertise and in the scientific experiment.  These folks were self-selected to obey.

If the results were otherwise valid, some extrapolations might be possible--to volunteer soldiers following orders, or those who agree ideologically with the authority figures, or perhaps even with the social contract of doing the work you are assigned for the money you earn.  But not the kind of universal conclusions typically made because of what these experiments purport to prove.

Similarly, that people may violate their own morality at the behest of authority figures, or that people, because of a role or situation, may find themselves committing acts they would not have believed they would commit, are phenomena that have happened under real-world conditions.  Atrocities are real, and happen with alarming frequency.  But these experiments, however sensational, are not necessary to confirm this.  They just don't seem to be very enlightening on the question of why, let alone who, when or where.

Instead, they lead us farther from possible insights, and suggest that such behavior is determined for us all.  Hard-wired.  That is perniciously false.

Behavioral psychology is probably at the apex of its power and acceptance at the same time that its methods are falling apart.  Other kinds of experiments are also being called into question--for faulty statistical methodology or outright deception, for example, as well as for the still poorly understood phenomenon of confirmation bias applied to experimental design and findings.

But a kind of confirmation bias on a larger scale is the most troubling, because such experiments confirm an ideology of determinism, of only mechanistic explanations, that dominates science.  In the life sciences particularly, the bias is towards the destructive side of human nature: violent instincts, individual competition, fights to the death.  It is against--and often doesn't see--social and conciliatory instincts, cooperation and empathy.  These were ideologically judged long ago to be evolutionary losers, selected out by the struggle for existence. There is perhaps less of this now, but it still seems to be the prevailing ideology.

These inflated conclusions supposedly confirmed by science are especially dangerous, because we begin to base our beliefs about society and ourselves on them, and therefore our behaviors and expectations.

Our societies and our lives within them are based on cooperation, conciliation, compassion and a shared sense of fairness.  As individuals as well as groups, we are mixed creatures; we are complex, as life and the world are complex.  In literature, when characters are internally compelled to act contrary to their morality, we call it tragedy, not hard-wiring or human nature.  It is part of a much more complex human nature.  We can to some extent govern our behavior through educated self-awareness and through culture.  If science can't acknowledge that, it isn't telling the truth.
