Bad Science - Critical summary review - Ben Goldacre

This microbook is a summary/original review based on the book: Bad Science: Quacks, Hacks, and Big Pharma Flacks

Available for: Read online, read in our mobile apps for iPhone/Android and send in PDF/EPUB/MOBI to Amazon Kindle.

ISBN: 9780865479180

Publisher: Farrar Straus and Giroux

Critical summary review

The Merriam-Webster Dictionary defines science as the “system of knowledge covering general truths or the operation of general laws especially as obtained and tested through scientific method.” The scientific method, in turn, involves careful observation, formulating hypotheses based on those observations, measurement-based testing of deductions drawn from the hypotheses and, finally, elimination or refinement of the hypotheses based on the experimental findings. In short, science requires not only intelligence but also induction, intuition, and integrity – which is why it is a field reserved for the select few.

Unfortunately, in the ultra-democratic world of today, scientists constantly find themselves “outnumbered and outgunned by vast armies of individuals who feel entitled to pass judgement on matters of evidence – an admirable aspiration – without troubling themselves to obtain a basic understanding of the issues.” In “Bad Science,” British physician and academic Ben Goldacre exposes the failings of these individuals and the cognitive biases that trick us into believing them, particularly in the field of medicine. So, get ready to be enlightened!

Homeopathy and the dilution problem

In the late 18th century, at a time when “doctors” were arbitrary authority figures and mainstream medicine was bloodletting, a German physician by the name of Samuel Hahnemann “decided – and there’s no better word for it – that if he could find a substance which would induce the symptoms of a disease in a healthy individual, then it could be used to treat the same symptoms in a sick person.” Afterward, he took some Cinchona bark, experienced symptoms similar to malaria and declared it a cure for the disease. Thus, homeopathy – or, the pseudoscience of “like curing like” – was born.

However, since Hahnemann soon realized that chemicals and herbs have genuine, sometimes dangerous, effects on the body, he devised the second principle of homeopathy. If properly diluted, he opined, the “spirit-like medicinal powers” of a substance are enhanced and “potentized,” while its side effects are reduced. “In fact,” writes Goldacre, “he went further than this: the more you dilute a substance, the more powerful it becomes at treating the symptoms it would otherwise induce.” Eventually, he settled on the most suitable ratio: 30C. That means that for a homeopathic remedy to work, the original substance should be diluted – in pure water, ethanol or sugar – by one drop in a hundred, 30 times over.

Even in the 19th century, many philosophers and intellectuals scoffed at these kinds of claims as sheer nonsense, yet in the two centuries since Hahnemann’s day, homeopathy has become a billion-dollar industry. Somehow, it doesn’t matter that Hahnemann knew nothing about the physiological processes going on inside our bodies, or that he based his model on outdated medical theories such as “humorism” and “miasms.” Strangely enough, it matters even less that the dilutions he advocated make the remedies chemically indistinguishable from the diluent. A single image speaks volumes about this last claim: a 30C dilution is the equivalent of putting one molecule of a substance in a sphere of water with a diameter of 150 million kilometers. That is the distance from the Earth to the Sun; it takes light eight minutes to cover it.
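To get a feel for those numbers, here is a rough back-of-the-envelope check in Python. The starting concentration of 1 mol/L for the mother tincture is an assumption made purely for illustration; the Avogadro constant and the Earth-Sun distance are standard figures.

```python
# Back-of-the-envelope check of the 30C dilution claim.
# Assumption (not from the book): the undiluted "mother tincture"
# contains about 1 mol/L of active substance.
import math

AVOGADRO = 6.022e23                 # molecules per mole
start_conc = 1.0 * AVOGADRO         # molecules per litre (assumed)

dilution_factor = 100.0 ** 30       # one part in a hundred, 30 times over = 1e60
final_conc = start_conc / dilution_factor   # molecules per litre after 30C

# Litres of remedy you would need, on average, to contain one molecule
# of the original substance.
litres_per_molecule = 1.0 / final_conc

# Volume of a sphere whose diameter is the Earth-Sun distance (~150 million km).
radius_m = 150e9 / 2
sphere_volume_litres = (4 / 3) * math.pi * radius_m ** 3 * 1000   # 1 m^3 = 1000 L

print(f"molecules per litre at 30C:     {final_conc:.1e}")
print(f"litres needed for ~1 molecule:  {litres_per_molecule:.1e}")
print(f"litres in the Earth-Sun sphere: {sphere_volume_litres:.1e}")
```

Under that assumption, the last two numbers land within a factor of two of each other, which is essentially the sphere-of-water image from the paragraph above.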

Placebos and fair trials

Homeopathy isn’t only theoretically unsound – it doesn’t work in practice either. In controlled trials, homeopathic dilutions perform about as well – or, rather, as badly – as placebo pills, because, essentially, that’s precisely what they are. People believe they work – and spend thousands of dollars on pure water or sugar – because whenever someone feels bad, they usually get better after some time. And homeopaths are smart enough to tell those who feel worse after their treatment that the remedy sometimes aggravates symptoms before relieving them. Isn’t a scar or two a small price to pay to take the hair of the dog that bit you?

We just said that homeopathic remedies work as well as placebos. But don’t placebos work? Surprisingly, the answer is “yes” – but only to an extent and only in cases where other treatments are available and can deal with the symptoms. Put simply, placebos are not intended to work: they are designed to have no therapeutic value at all. Just like all homeopathic remedies, most placebos are sugar pills. The fact that they encourage the body’s chemical processes for relieving pain says something about the relationship between our bodies and our brains, but it doesn’t say what many quacks would tell you – namely, that “it’s all in the mind.” On the contrary, even though placebos are potent and do alleviate some symptoms, no placebo has ever been demonstrated to have a tangible impact on any disease that may cause these symptoms.

Precisely because of this, sugar pills are often used in medical trials. To test the efficacy of a treatment, doctors give placebos to one group of patients and the drug being tested to another. However, a fair and controlled trial doesn’t stop there: it must also be blinded and randomized. Randomization is a method by which patients are assigned to the placebo or the intervention group by chance alone. Blinding further minimizes the differences between the groups and the risk of error by making sure that neither the experimenters nor the patients know who got the sugar pill and who got the real drug. Placebos, blinding and randomization are essential elements of the scientific method. Without them, we wouldn’t know whether we are measuring the actual effects of a drug on people’s health, or the effects of our numerous expectations and biases. Let’s turn our attention to those biases.
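As a minimal sketch of what randomization and blinding amount to in practice, consider the toy simulation below. The patient count, the outcome scale and the group labels are all invented for illustration; the point is only that allocation happens by chance alone and that outcomes are recorded under coded labels, with the key linking “A” and “B” to “placebo” and “drug” applied only at the end.

```python
# Toy randomized, blinded comparison with invented numbers.
# It illustrates two ideas from the text: assignment by chance alone,
# and outcome assessment that never sees the words "placebo" or "drug".
import random

random.seed(0)
patients = range(200)

# Randomization: every patient gets label "A" or "B" purely by chance.
assignment = {p: random.choice("AB") for p in patients}

def measure_outcome():
    # Hypothetical symptom improvement on a 0-10 scale. Both groups are
    # drawn from the same distribution here, i.e. the "drug" behaves
    # exactly like the placebo.
    return random.gauss(3.0, 1.5)

# Blinding: scores are filed under the coded labels only.
outcomes = {"A": [], "B": []}
for p in patients:
    outcomes[assignment[p]].append(measure_outcome())

for label, scores in outcomes.items():
    print(label, round(sum(scores) / len(scores), 2))

# The unblinding key is consulted only after the outcomes are in.
unblinding_key = {"A": "placebo", "B": "drug"}   # hypothetical
```

Because both groups are drawn from the same distribution, their average scores come out nearly identical, which is what trials of homeopathic remedies keep finding.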

Why clever people believe stupid things

Just like there are optical illusions, there are cognitive illusions as well. They occur because our mind is programmed to simplify complex problems for the sake of efficiency, which often leads to false beliefs. In Goldacre’s opinion, cognitive illusions “cut to the core of why we do science, rather than basing our beliefs on intuition informed by a ‘gist’ of a subject acquired through popular media: because the world does not provide you with neatly tabulated data on interventions and outcomes.” In other words, the scientific method has value because it protects us from our inherent and inherited biases. Here are a few of the most important:

  • Randomness. We have an innate ability to make something out of nothing and an overpowering propensity to find patterns in random and even sparse sequences of information. This often leads to conclusions which sound obviously correct but which are, in fact, not. Sporting and gambling streaks are good examples – both are products of chance.
  • Regression to the mean. Regression to the mean is “the phenomenon whereby, when things are at their extremes, they are likely to settle back down to the middle, or ‘regress to the mean.’” That’s why some people think that homeopathy works: most of them would have recovered even without going to a homeopath, because when things are at their worst, they generally get better. However, once someone visits a homeopath, they can’t help but ascribe their improvement to the treatment. The reason is that our minds are oversensitive to causal relationships and easily confuse simple regression with causation (see the sketch after this list).
  • The bias toward positive evidence. In addition to seeing patterns in randomness and causal relationships where there are none, we also have “an innate tendency to seek out and overvalue evidence that confirms a given hypothesis.” This is dangerous because it means that, unless controlled, researchers will ask questions that favor their hypothesis and, in the process, elicit precisely the information they need to confirm it.
  • Biased by a prior belief. When faced with evidence that goes against our preexisting views, we are more interested in finding flaws in it than in adjusting our belief system.
  • Availability. Our attention is always drawn to the exceptional; we are, in other words, vulnerable to drama. That’s why we are more afraid of sharks at the beach than of driving to the coast. That’s why, when information about “miracle cures” or success stories is made available, it becomes disproportionately prominent. Our mind can’t grasp that prominence and availability do not turn an anecdote into a fact. Hence, the need for science.
  • Social influences. This is the most self-evident flaw of all: “our values are socially reinforced by conformity and by the company we keep.” This happens partly because we spend more time with people who validate our beliefs, and partly because we expose ourselves to situations where those beliefs are apparently confirmed. Either way, “communal reinforcement” – as this process is called – is what makes testimonials more powerful than scientific evidence, and things such as astrology and religion so persistent.
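The regression-to-the-mean point lends itself to a quick simulation. In the hypothetical sketch below, symptom levels simply fluctuate around a personal baseline, a visit to the homeopath is triggered only on an unusually bad day, and the remedy does nothing at all, yet the average patient still looks noticeably better a week later.

```python
# Toy illustration of regression to the mean (all numbers invented).
# People tend to seek a remedy when symptoms are at their worst, so the
# next measurement usually looks better even if nothing was treated.
import random

random.seed(1)

improvements = []
for _ in range(10_000):
    baseline = random.gauss(5.0, 1.0)            # personal average symptom level
    today = baseline + random.gauss(0.0, 2.0)    # today's random fluctuation
    if today > 7.0:                              # feels awful -> sees a homeopath
        next_week = baseline + random.gauss(0.0, 2.0)   # inert remedy, same fluctuation
        improvements.append(today - next_week)

print(f"people who sought help: {len(improvements)}")
print(f"average 'improvement' after an inert remedy: "
      f"{sum(improvements) / len(improvements):.2f} points")
```

Nobody in this toy world received anything that works, but everyone who consulted the homeopath improved on average, which is precisely the illusion a randomized, placebo-controlled trial is designed to strip away.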

Bad media and even worse pharma

“It’s not safe to let our intuitions and prejudices run unchecked and unexamined,” writes Goldacre. “It’s in our interest to challenge these flaws in intuitive reasoning wherever we can, and the methods of science and statistics grew up specifically in opposition to these flaws.” Unfortunately, modern media grew up specifically in line with them. A few decades ago, media moguls realized that people are more interested in the sensational than in the truth, and they started selling the extraordinary, the wacky, and the dramatic for profit, thus reinforcing – rather than countering – our biases. Worse yet, they also began printing stories about scientific research that hasn’t been published or even peer-reviewed! To use a sports analogy, that’s like announcing the score of an NBA game as final at the end of the second quarter.

In many countries – the United States especially – newspapers and TV programs aggravate another, very real, problem by publishing Big Pharma ads indiscriminately. The trouble is that big pharmaceutical corporations, like all other corporations, have a duty to maximize their profits, and, as Goldacre notes, this duty “often sits uncomfortably with the notion of caring for people.” Between the two, they often choose the former, spending only a small amount of their profits on the research and development of new drugs and billions of dollars on marketing and administration.

Even so, drug research is expensive, and unconfirmed hypotheses are costly. If you spend a million dollars testing a new drug and the trials demonstrate that it works no better than an old one, you have just burned a million dollars. Unfortunately, that’s not a price pharmaceutical companies are willing to pay in the name of science. Hence, the ads. After all, if you can market the new drug as more successful than it actually is, you have a chance to cover the losses.

Since big pharma is not a big fan of losing money, in addition to advertising slightly different versions of older drugs as new medicines, it also neglects Third World diseases and countries. Most reprehensibly, despite spending 31% of their profits on marketing, U.S. companies are not licensing AIDS drugs to developing-world countries, even though in some of them, such as Botswana, a quarter of the population is HIV-positive. As a result, capitalist hacks such as the German doctor turned vitamin salesman Matthias Rath were able to make millions by traveling through Africa and promoting vitamin pills and nutritional supplements as treatments for HIV/AIDS. Tragically, Rath found the right audience in Thabo Mbeki, an HIV denier and the president of South Africa from 1999 to 2008. During this period, the combined efforts of Mbeki, Rath and their colleagues might have resulted in no fewer than 300,000 unnecessary deaths. Ultimately, that’s what bad science leads to.

Final notes

Shortlisted for the 2009 Samuel Johnson Prize, “Bad Science” is an irreverent, funny and unflinching critique of pseudoscience. It should be required reading for high school students and policymakers. Really.

12min tip

Don’t trust homeopaths, nutritionists, journalists or Big Pharma ads. Trust the scientific method. It’s the only thing designed to reveal the truth.


Who wrote the book?

Ben Goldacre is a British physician, academic, and science writer. Since 2015, he has been a senior clinical research fellow at the Nuffield Department of Primary Care Health Sciences in Oxford. A champion of open data, he is...
