
Science Fictions: The Epidemic of Fraud, Bias, Negligence and Hype in Science

Availability: Ready to download

A major exposé that reveals the absurd and shocking problems that pervade and undermine contemporary science. So much relies on science. But what if science itself can’t be relied on? Medicine, education, psychology, health, parenting – wherever it really matters, we look to science for advice. Science Fictions reveals the disturbing flaws that undermine our understanding of all of these fields and more. While the scientific method will always be our best and only way of knowing about the world, in reality the current system of funding and publishing science not only fails to safeguard against scientists’ inescapable biases and foibles, it actively encourages them. From widely accepted theories about ‘priming’ and ‘growth mindset’ to claims about genetics, sleep, microbiotics, as well as a host of drugs, allergies and therapies, we can trace the effects of unreliable, overhyped and even fraudulent papers in austerity economics, the anti-vaccination movement and dozens of bestselling books – and occasionally count the cost in human lives. Stuart Ritchie was among the first people to help expose these problems. In this vital investigation, he gathers together the evidence of their full and shocking extent – and how a new reform movement within science is fighting back. Often witty yet deadly serious, Science Fictions is at the vanguard of the insurgency, proposing a host of remedies to save and protect this most valuable of human endeavours from itself.



30 reviews for Science Fictions: The Epidemic of Fraud, Bias, Negligence and Hype in Science

  1. 5 out of 5

    BlackOxford

Scientific Meta-Hype. Here’s a rough summary of Science Fictions: 1. There is no officially established procedure called the ‘scientific method’ by which to judge the quality of research results. 2. The process by which the results of scientific research are validated for consideration by the scientific community cannot ensure the reliability of these results either. 3. Consequently, what circulates at any given time as scientific fact is mostly wrong or misleading. It takes time to discover errors. 4. Steps can be taken, mostly by non-scientists and a new kind of science, to reduce if not eliminate the amount of junk science currently being produced. In other words, science works, when it does, not because of how experimentation, theorisation, and analysis are carried out, or how the findings of individual scientists are publicised or criticised by colleagues, or because these findings are proven wrong, but because most of what is publicised will eventually be ignored as irrelevant. This Ritchie finds disturbing. A key word in the above is ‘eventually.’ For science to be science, everything that is known is tentative. And centuries of scientific experience show that everything known at any time will be ignored at some future time except as a kind of intellectual fossil. This is as close to an accurate existential definition of Science as one is likely to get. I don’t think Stuart Ritchie would disagree with this assessment. Science, like politics, is extremely messy. That is to say, Science is inherently inefficient (I use caps to designate the modern institution). It does not progress according to any definable logic since it is constantly reviewing the logic it has previously adopted. Therefore, looking back from any point in time, the resources engaged in scientific efforts - money, talent, time, administration - have largely been spent in a demonstrably fruitless way. This waste is essentially what Ritchie is writing about. A large part of his book is devoted to the errors, frauds, and bloopers in scientific research, ranging from his own field of psychology to cancer research and molecular physics. Eventually these mistakes are mostly not refuted but buried by further research. In the meantime the scientific community has wasted effort. And, he says, this waste has a serious impact because it delays the acquisition of important knowledge for health, social policy, and the general well-being of society. The waste can be reduced, he says, and he has suggestions about how to do that. Ritchie calls our current situation a “crisis.” He believes the existing institutions of Science are “corrupt.” He cites compelling evidence that “any given published scientific article is more likely to be false than true.” There have been, he says, “over 18,000 retractions in the scientific literature since the 1970’s,” largely due to forgery, conflict of interest, self-promotion or even criminal intentions.
In cancer research, Ritchie cites a 2017 study which: “… scoured the literature for studies using known misidentified cell lines found an astonishing 32,755 papers that used so-called impostor cells, and over 500,000 papers that cited those contaminated studies”. So, serious business. Perhaps the UK government, which was purportedly “following the science assiduously” at the outset of the COVID pandemic in 2020, should have read Ritchie’s book as soon as it was published. That might have saved some lives, relieved marital strife during lockdown, and avoided the immensely costly track and trace boondoggle. So what is it that the world should do to lessen the incidence of junk science, avoidably stupid science, not to mention criminal science? Surely this is an issue deserving of further investigation by the proper authorities. Well, part of Ritchie’s solution is somewhat more trivial than the problem he describes. Essentially his first recommendation is that SCIENTISTS MUST BECOME MORE VIGILANT. In more detail, this means brushing up a bit on their statistics, taking their job as peer reviewers of professional articles more seriously, mitigating the hype surrounding unusual research findings, being more watchful for professional fads, and being a little more suspicious of whatever they read in print. Hardly revolutionary, and somewhat condescending. “Become more vigilant” is about all he can say to fellow scientists if he wants to maintain credibility. Anything else, like government supervision or professional regulations about how to conduct proper science, would destroy science itself. So he directs his next recommendations at non-scientists - universities, research institutes, journal editors and foundations. He would like them to stop providing incentives to scientists and academics that promote a lack of vigilance - number of published articles, citation intensity, implicit funding demands to overstate expected research results, organisational promotion, etc. But it is at this point that Ritchie’s ship of a new science runs aground and founders. He admits that scientists themselves are complicit in the web of incentives he abhors. In fact they want them: “What’s particularly disconcerting is that the people entangled in this thicket of worthless numbers are scientists: they’re supposed to be the very people who are most au fait with statistics, and most critical of their misuse. And yet somehow, they find themselves working in a system where these hollow and misleading metrics are prized above all else.” Of course they are. So Ritchie’s killer app is an extraordinary proposal for the establishment of an essentially new profession, the “meta-scientist”: a group of scientists who study the work of other scientists. Part of this proposal is a set of suggestions for new journals devoted to this meta-science, including the reporting of the results of research flops, so-called null result studies, which didn’t lead anywhere. He also wants public “pre-registration” of research intentions and expectations, as well as “Open Source” free access to registered research and its results. He thereby cleverly keeps scientific regulation in the family, as it were, away from politicians, government bureaucrats, and the un-lettered masses. Ah yes, Dr. Ritchie, may one ask who controls the controllers? Will the world need meta-meta-science in a few years’ time? And isn’t your idea of pre-registration just a teensy bit bureaucratic and of unproven scientific worth?
It’s an idea that may be suitable for big government-funded drug studies simply because of the fortunes to be gained. But for evaluating the reaction of mice to increased testosterone, for example, such regulation seems highly inappropriate. Then there’s the issue of the scientific police who would enforce the registration and supervision of research. Would their approval be necessary for changing a study’s direction mid-stream? And would the penalties for non-compliance be civil or criminal, do you think? Is it too much to assert that the condition in which science finds itself today is no different from the one it was in when the Royal Society was founded in 1660, or for that matter in the ancient groves of Grecian academe? In fact I’m willing to bet that there are proportionately fewer scientific hacks in the world today than ever before, thanks to modern procedures of accreditation and the spread of information through modern technology. So what is the point of Ritchie’s proposals? Every example of error or malfeasance that Ritchie cites is an instance of the current community of scientists exposing and discounting flakey results. More will certainly be uncovered. Isn’t that the important fact - that they will be uncovered? Not as fast as Ritchie would like, apparently. But then can he demonstrate scientifically how much more quickly good results would become available? And at what cost? And given that eventually all scientific conclusions will be subject to correction, is it possible that he’s just blowing smoke?

  2. 5 out of 5

    Gavin

Wonderful introduction to meta-science. I've been obsessively tracking bad science since I was a teen, and I still learned loads of new examples. (Remember that time NASA falsely declared the discovery of an unprecedented lifeform? Remember that time the best university in Sweden completely cleared their murderously fraudulent surgeon?) Science has gotten a bit fucked up. But at least we know about it, and at least it's the one institution that has a means and a track record of unfucking itself. Ritchie is a master at handling controversy, at producing satisfying syntheses - he has the unusual ability to take the valid points from opposing factions. So he'll happily concede that "science is a social construct" - in the solid, trivial sense in which we all should concede it is. He'll hear out someone's proposal to intentionally bring political bias into science, and simply note that, while it's well-intentioned, we have less counterproductive options. Don't get the audiobook: Ritchie is describing a complex system of interlocking failures, and I need diagrams for that sort of thing. Ritchie is fair, funny, and actually understands the technical details. Supersedes my previous fave pop-meta-scientist, Ben Goldacre.

  3. 4 out of 5

    Julia

This is one of the most important books I’ve read in the past few years. Ritchie skillfully examines the problems plaguing modern science, looks at the motivations that cause them, and posits solutions. Science Fictions drives home the importance of skepticism in all things, even science.

  4. 5 out of 5

    Andy

This is an important topic, and the author does an excellent job explaining problems like p-hacking. But these issues are nothing new to scientists, so the main value of this book lies in whether it engages and clearly explains things for the general public. And there, I’m afraid the author may end up just increasing confusion by trying to turn everyone into a scientist. In terms of solutions to bad science, I wonder if we don’t need to start by addressing the underlying culture of corruption and incompetence, of which bad science is just one symptom (see Detroit: An American Autopsy). Nerd addendum: With nutritional research, for example, he makes a good point that the news media do a bad job, hyping all these small, shoddy or irrelevant studies. His immediate solution is to teach us all how to read a scientific paper, so that whenever you hear about an interesting study in the news, you can go and somehow (even illegally) get a copy of the study and analyze it for validity. That seems nuts and unfair. According to the book, doctors and scientists and editors of scientific journals are widely incapable of this, so how is every citizen going to master this skill? And why should you? I think if people (scientists, doctors or otherwise) are really interested in nutritional epidemiology, they should go deep and read Gary Taubes, e.g. That gives you an understanding of the research literature going back decades, explaining what is wrong with the original studies that are often cited, and giving the implications in plain language. Then if you want, you can look up a few of the studies that he has detailed and you’ll be able to know what to look for and to verify whether they say what he says they say. You have to know stuff to learn stuff. What matters is not the latest news item, but the overall weight of the best available evidence. Another problem with his commentary on nutritional epidemiology is that he goes on from there to warn in general about all observational epidemiology, without pointing to when observational epidemiology does supply robust actionable evidence (trans fats, lung cancer, SIDS, etc., etc.).

  5. 5 out of 5

    Alvaro de Menard

In 1945, Robert Merton wrote: “There is only this to be said: the sociology of knowledge is fast outgrowing a prior tendency to confuse provisional hypothesis with unimpeachable dogma; the plenitude of speculative insights which marked its early stages are now being subjected to increasingly rigorous test.” Then, 16 years later: “After enjoying more than two generations of scholarly interest, the sociology of knowledge remains largely a subject for meditation rather than a field of sustained and methodical investigation. [...] these authors tell us that they have been forced to resort to loose generalities rather than being in a position to report firmly grounded generalizations.” In 2020, the sociology of science is stuck more or less in the same place. I am being unfair to Ritchie (who is a Merton fanboy), because he has not set out to write a systematic account of scientific production – he has set out to present a series of captivating anecdotes, and in those terms he has succeeded admirably. And yet, in the age of progress studies, surely one is allowed to hope for more. If you've never heard of Daryl Bem, Brian Wansink, Andrew Wakefield, John Ioannidis, or Elisabeth Bik, then this book is an excellent introduction to the scientific misconduct that is plaguing our universities. The stories will blow your mind. For example, you'll learn about Paolo Macchiarini, who left a trail of dead patients, published fake research saying he healed them, and was then protected by his university and the journal Nature for years. However, if you have been following the replication crisis, you will find nothing new here. The incidents are well-known, and the analysis Ritchie adds on top of them is limited in ambition. The book begins with a quick summary of how science funding and research work, and a short chapter on the replication crisis. After that we get to the juicy bits as Ritchie describes exactly how all this bad research is produced. He starts with outright fraud, and then moves on to the gray areas of bias, negligence, and hype: it's an engaging and often funny catalogue of misdeeds and misaligned incentives. The final two chapters address the causes behind these problems, and how to fix them. The biggest weakness is that the vast majority of the incidents presented (with the notable exception of the Stanford prison experiment) occurred in the last 20 years or so. And Ritchie's analysis of the causes behind these failures also depends on recent developments: his main argument is that intense competition and pressure to publish large quantities of papers is harming their quality. “Not only has there been a huge increase in the rate of publication, there’s evidence that the selection for productivity among scientists is getting stronger. A French study found that young evolutionary biologists hired in 2013 had nearly twice as many publications as those hired in 2005, implying that the hiring criteria had crept upwards year-on-year. [...]
as the number of PhDs awarded has increased (another consequence, we should note, of universities looking to their bottom line, since PhD and other students also bring in vast amounts of money), the increase in university jobs for those newly minted PhD scientists to fill hasn’t kept pace.” By focusing only on recent examples, Ritchie gives the impression that the problem is new. But that's not really the case. One can go back to the 60s and 70s and find people railing against low standards, underpowered studies, lack of theory, publication bias, and so on. Imre Lakatos, in an amusing series of lectures at the London School of Economics in 1973, said that "the social sciences are on a par with astrology, it is no use beating about the bush." Let's play a little game. Go to the Journal of Personality and Social Psychology (one of the top social psych journals) and look up a few random papers from the 60s. Are you going to find rigorous, replicable science from a mythical era when valiant scientists followed Mertonian norms and were not incentivized to spew out dozens of mediocre papers every year? No, you're going to find exactly the same p<.05, tiny N, interaction effect, atheoretical bullshit. The only difference is the questionable virtue of low productivity. If the problem isn't new, then we can't look for the causes in recent developments. If Ritchie had moved beyond "loose generalities" to a more systematic analysis of scientific production, I think he would have presented a very different picture. The proposals at the end mostly consist of solutions that are supposed to originate from within the academy. But the academy has had more than half a century to do that – it feels a bit naive to think that this time it's different. Finally, is there light at the end of the tunnel? “...after the Bem and Stapel affairs (among many others), psychologists have begun to engage in some intense soul-searching. More than perhaps any other field, we’ve begun to recognise our deep-seated flaws and to develop systematic ways to address them – ways that are beginning to be adopted across many different disciplines of science.” Again, the book is missing hard data and analysis. I used to share his view (surely after all the publicity of the replication crisis, all the open science initiatives, all the "intense soul searching", surely things must change!) but I have now seen some data which make me lean in the opposite direction. Ritchie's view of science is almost romantic: he goes on about the "nobility" of research and the virtues of Mertonian norms. But the question of how conditions, incentives, competition, and even the Mertonian norms themselves actually affect scientific production is an empirical matter that can and should be investigated systematically. It is time to move beyond "speculative insights" and on to "rigorous testing", exactly in the way that Merton failed to do.

  6. 5 out of 5

    J.J.

First of all, the title slaps; this is the kind of wordplay you want in a popular science book title. Ritchie grabs your attention with some spicy cases of scientific fraud, but follows up with other pernicious problems that lead science astray. He goes on to suggest changes to the way research is conducted, funded, reviewed and published to right some of these wrongs. A worthwhile read (or listen) for researchers or mere muggles like myself.

  7. 5 out of 5

    A

8.5/10. Quite the revelatory look at "The Science", this book is. Ritchie shows how the ideal of objectivity in scientific publishing is not matched by actual practice. The hallmark of science is its replicability. If a finding is not replicable, then it must be presumed to be due to randomness or error — neither of which is reliable for understanding the world, treating illnesses, understanding the human mind, or diagnosing economic ills. The problem is that scientific studies are not replicating. ~60% of psychology studies do not replicate. If I make a bald proposition about the mind and flip a coin, I am more likely to be right than if I read a psychological study. In general, scientific studies have only a 50% replication rate. That is not good. Why is this the case? Firstly, no one can even attempt to replicate most studies. 99% of all pre-clinical medical trial papers do not provide enough information to be replicated. Where replication was attempted, only 11% of the trials could be replicated. But oftentimes, no one even tries to replicate them. Only .1% of economics studies and 1% of psychological studies have subsequent replication attempts. So we essentially have a head-over-heels rush to publish, with no one verifying anyone else's results. If verification is attempted, there is a 50% chance that it fails. This wastes at least $50 billion of research money per year. Useless studies, useless results. The drive for these results continues, however, because of academic incentives. Academics get promoted based on how many studies they publish and how much they get cited (or in China, get paid directly based on publication count). This leads to the statistical finagling of results to publish as much as possible, as well as citation rings where academics make contracts to cite one another: "I'll publish your paper if you cite three of mine". One can also choose one's peer reviewers, thus allowing you to pick your friends and simply get published. Statistical finagling works thuswise. Let's say you do a study with a large manifold of variables and get a null result for your initial hypothesis. This is bad, as scientific journals don't like null results. But you have lots of variables to work with! So you look at every possible combination of variable type and effect size, and finally find one that is statistically significant (p < .05, i.e. if there were no real effect, a result at least this extreme would turn up less than 5% of the time). But the problem is that the more you look for statistical significance in every configuration of the data, the more likely you are to find something below the cutoff that is due to pure randomness. I tell the journals, "look, I found someone who shares my birthday, how rare!", but if I talk to one million people, of course I will find someone with my birthday. And if even that fails, I can verbally spin my null result to seem as if I found something novel, impactful, or interesting.
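The reviewer's birthday-party logic is easy to make concrete. The sketch below is illustrative only, not from the book: the sample sizes, the use of simple correlations, and the random seed are arbitrary choices. It generates pure noise and then tests a hundred candidate variables against it; some inevitably cross the p < .05 line:

```python
# Minimal simulation of the multiple-comparisons trap described above.
# The data are pure noise, so every "significant" correlation is a
# false positive by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_variables = 50, 100

outcome = rng.normal(size=n_subjects)              # null outcome: noise only
predictors = rng.normal(size=(n_variables, n_subjects))

false_positives = 0
for x in predictors:
    _, p = stats.pearsonr(x, outcome)              # test each variable in turn
    if p < 0.05:
        false_positives += 1                       # "significant" by luck alone

# With 100 independent null tests, about 5 spurious hits are expected.
print(f"{false_positives} of {n_variables} null tests reached p < .05")
```

In expectation about five of the hundred tests come up "significant" even though there is nothing to find – exactly the trap of checking every possible configuration until something clears the cutoff.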
There are also many blatant cases of fraud, many committed by researchers at institutions of the highest prestige. Paolo Macchiarini found an amazing way to transplant tracheas. By reading his journal articles, you knew that he was having great success. He had done it on five different people and it was going to revolutionize medicine. But then it turned out that four of them had died . . . one of them seven weeks before he published his paper. Macchiarini simply didn't let people know. From publishing in the top journal The Lancet and being lionized by his countrymen, Macchiarini fell from grace. This story is not an outlier. 15% of scientists admit to knowing someone who has faked data. 10% of cell line images are copied from old trials and studies. Medical research (1/3 of which is funded by pharmaceutical companies) can be sliced up salami-style to make it seem like many papers support X treatment. 45% of currently used medical treatments, when properly reviewed, do not have sufficient evidence to warrant believing in their efficacy. Many ills abound in current-day science. As a practical measure, you should not trust popular science books and headlines about science. They exaggerate and hype studies out of all proportion, applying them to disparate sectors where their data do not apply. Act on old wisdom and passed-down maxims as opposed to the immediate diktats of "The Science". The Greco-Roman classics will get you far. Use Stoicism to master your passions and become stolid in your mind. Understand the Spartan and Platonic teaching that the mind's health reflects the body's health. Be at one with your nation and heritage, for your destiny is theirs. Keep natural – in all aspects. Man has always consumed meat. Eat fresh and chemical-free meat, as much as that is possible. Revive old advice from long-lost books. Those before us may have lived for a shorter time span, but they certainly were stronger and healthier (in both mind and body). Get outside. Bask in the sun. Talk with people without a mask. Be a human and do human things. Get away from modern "innovations" and revive tradition. Trust in the sages of old and ignore the charlatans of today. Most definitely, be wary of science and the fury of delusional passion about its supposed findings. It is much more corrupt than you think.

  8. 5 out of 5

    Sophia

I highly recommend this book for anyone planning (or considering) to do science, whether a bachelor's, master's or more. It's a great overview of how science is actually practiced, and how it can so easily go wrong. I also recommend this to current scientists, because it's a humbling reminder of what we're doing wrong, and also a quick update on things we might have been taught as facts that have actually been disproven in the meantime. The book is exceptionally well structured, very clear writing, very engaging, switching between as much information as needed to understand a given concept, then compelling examples, and discussion as to why it matters, what people might object to, etc. Really really good. However, the author fails to give proper due to the main strength of science: its ability to self-correct. This book is described as an "exposé", but in reality all of what he mentions has been known for decades, and in fact every single example he gives of fraud, negligence, bias, or unwarranted hype was uncovered not by external journalists but by other scientists. It was the peers who read papers that looked suspicious and did more digging, or whole careers built around developing software and tools for automatically detecting plagiarism, statistical errors, etc. It was psychology itself that "wrote the book" on bias, which was fundamental to exposing the biases of scientists themselves. And more often than not, it was just a future study that tried something better that should have worked but didn't that disproved a flimsy hypothesis. Sure; fraud, hype, bias, and negligence are dragging science down, but science isn't "broken", it's just inefficient. Wasting a lot of money on bad experiments and scientists needs to be avoided, but in the end, a better truth tends to bubble up regardless. Anyone who has had to defend science against religious diehards will be particularly aware of this. Also missing is proper consideration of why these seemingly blindingly obvious problems have been going on for so long. As an insider, here are some of my answers:
- All this p-hacking (trying different analyses until something is significant). Scientists are not neatly divided into those that immediately find their results because of how fantastically well they planned their study, and those that desperately try to make their rotten data significant. Every. Single. Study. has to fine-tune its analysis once the data are in, not before. Unless you are in fact replicating something, you have absolutely no idea what the data will look like, and what's the most meaningful way to look at it. This means you can't just tell scientists "stop p-hacking!", you need an approach that acknowledges this critical step.
Fortunately, an honest one exists that can be borrowed from machine learning: splitting your data into a "training" and "testing" dataset, where you fine-tune your analysis pipeline on a small subset, then you actually rely on the results applied to a larger one, using only and exactly the pipeline you previously developed, without further tweaking (a sketch of this idea appears after this review).
- The file drawer problem (null results not getting published). I think especially in the field of psychology, statistics courses are to blame for this; we don't reeeally understand how the stats work, so we rely on Important Things To Remember that we're taught by statisticians, and one of these is that "you can't prove a null hypothesis". This ends up getting interpreted in practice as "null results are not real results, because nothing was proven". We are actively discouraged from interpreting "absence of evidence as evidence of absence", but sometimes that is in fact exactly what we should be doing; for sure not with the same confidence and in the same way with which we interpret statistically significant positive results, but at some point, a study that should have found something but didn't is a meaningful indication that that thing might not in fact be there. A useful tool to help break through this narrow-minded focus on only positive results is equivalence testing, where you test not whether two groups are different but whether they are statistically significantly "the same" within a chosen margin. This is a huge shift in mindset for many psychologists, who suddenly learn that you can in fact have a legitimate result showing there was no difference to be found. Knowing this, I suspect, will make people less wary of null results in general.
- Proper randomization (and generally the practicalities of data collection). The author at some point calls it a mistake that a trial on the Mediterranean Diet had assigned the same diet to the same family unit, thus breaking the randomization. For the love of God, does he not know how families work? You cannot honestly ask members of the same family to eat differently! Sure, the authors should have implemented proper statistical corrections for this, but sometimes you have to design experiments for reality, not a spherical world.
- Reviewers nudging authors to cite them. This may seem like a form of blatant self-promotion, but it's worth mentioning that in reality, the peer reviewers were specifically selected as members of the EXACT SAME FIELD, and so odds are good that they have in fact published relevant work, and odds are even better that they are familiar with it enough to recommend it. That is not to say that none of it is for racking up citations, but it shouldn't be presumed so until proven otherwise, because legitimate alternative explanations exist.
Another little detail not mentioned by the author is that good science is f*cking hard. For my current experiment, I need a passing understanding of electrical engineering to run the recording equipment, a basic understanding of signal processing and matrix mathematics to clean and analyze the data, a good understanding of psychology for experimental design, a deep understanding of neuroscience for the actual field I'm experimenting in, a solid grasp of statistics, sufficient English writing skills, separate coding skills for both experimental tasks and data analysis in two different languages, and suddenly a passing understanding of hospital-grade hygiene practices to deal with COVID!
There's just SO MUCH that can go wrong, and a failure at any point is going to ruin everything else. It's exhausting to juggle all that, and honestly, it's amazing that we have any valid results coming out at all. The only real solution to this is to have larger teams; focus less on individual achievements. The more eyes you have on scripts, the fewer bugs there will be; the more assistants available to collect data, the fewer mishaps. The more people reading the paper beforehand, the fewer mistakes slip through. We need publications from labs, not author lists; the exact contribution of each person can be specified somewhere, but science needs to move away from this model of venerating the individual, because this is not the 19th century anymore: the best science comes from groups. On CVs, we shouldn't write lists of publications, we should write project descriptions (and cite the paper as "further reading", not as an end in and of itself). ~~~ Scientists need the wakeup call from this book. Journalists and interested laymen will also greatly benefit from understanding why a healthy dose of scepticism is needed towards any single scientific result, and how scientists are humans too. But the take-home message that can come across from this book – and it is not actually true – is that scientists are either incompetent or dishonest or both. The author repeatedly bashes poor science and science communication for eroding public trust in science, but ironically this book highlights those failings in neon letters and makes sure trust in science is eroded further. To some extent that is warranted, but the author could have done more to defend the institution where it is deserved, and as an insider, could have done more to talk about the realities an individual scientist faces when they make these poor decisions. It's worth mentioning that science has not gotten worse; we're still making discoveries, still disproving our colleagues, and still improving quality of life. We could just be doing it more efficiently.
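A side note on the exploratory/confirmatory split proposed in the first bullet above: here is a minimal sketch of what that machine-learning-style discipline can look like in practice. Everything specific is an assumption for illustration – the file name, the "group" and "score" columns, and the 20/80 split are not prescribed by the book or the reviewer:

```python
# Sketch of an exploratory/confirmatory data split: explore freely on a
# small subset, then run one pre-chosen test, once, on the held-out rest.
import pandas as pd
from scipy import stats
from sklearn.model_selection import train_test_split

df = pd.read_csv("study_data.csv")                 # hypothetical dataset

# Hold out most of the data; tune the analysis on the small remainder.
explore, confirm = train_test_split(df, test_size=0.8, random_state=42)

# Phase 1: develop the pipeline on `explore` only (outlier rules,
# covariates, transformations) - nothing from this phase is reported.

# Phase 2: freeze the pipeline, then apply it exactly once to `confirm`.
treated = confirm.loc[confirm["group"] == "treatment", "score"]
control = confirm.loc[confirm["group"] == "control", "score"]
t, p = stats.ttest_ind(treated, control)
print(f"confirmatory test: t = {t:.2f}, p = {p:.3f}")
```

The point of the design is that whatever analytic wandering happens in phase 1 cannot contaminate the reported result, because the confirmatory data are touched exactly once, with a pipeline fixed in advance.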

  9. 5 out of 5

    muthuvel

Nobel Laureate economist Daniel Kahneman, in his work for a public audience, Thinking, Fast and Slow (2011), talks about the certainty of priming effects, citing various psychological studies, and thereby claimed that certain responses can be produced without conscious guidance or intention, and in patterned ways. It was one of the most widely read popular bestsellers in the genre, but things became uncertain a few years later, when the studies he cited failed to replicate or turned out to have been published with inadequate data. He even acknowledged the fact that he was wrong about his certainty. What happened here? "The books we’ve just discussed were by professors at Stanford, Yale and Berkeley, respectively. If even top-ranking scientists don’t mind exaggerating the evidence, why should anyone else?" Following Kahneman, we have similar claims by NASA, pop science books like Why We Sleep, studies of austerity and Mediterranean diets, publication biases, and the issues of hacked p-values, cherry picking, salami slicing, self-citations, self-plagiarism, ghost citations and reviews from ghost peers, and coercive citations from accepting journals. Most people already in the field would know most of the replication crises discussed in the book, but I guess mostly their supervisors would have calmed them down, saying it's okay not to be able to replicate a scientific study, due to human error among other factors. It's a conditioning that has been practiced contrary to the objectives set by the founding figures of the scientific publishing community, like Boyle. After all, the practitioners of science are in the end susceptible to becoming more of an organized cult, working for incentives of various kinds – from academic survival to personal fame and status to meeting bureaucratic standards – forgetting the basic tenets of what scientific research is all about. The last book I read was a work by a Wittgenstein student showcasing how social science was massacred by social scientists (sociologists, social psychologists, economists, political scientists, to name a few), whereas this one does the same for the natural sciences. But rather than going philosophical, it is limited to how science is practiced today, not what science actually is. Maybe there are no better methods for understanding the world, but as Winch said, it's better to stay vigilant and question "the extra scientific pretensions" of scientific communities, which create their own norms and beliefs in their culture of practicing Science. Science Fictions: The Epidemic of Fraud, Bias, Negligence and Hype in Science (2020) ~ Stuart Ritchie

  10. 4 out of 5

    Agne

Extremely informative and well argued. I would suggest it to anyone who has any contact with science in their daily life (so... everyone). I loved the examples and statistics, and that it's at the same time really approachable. For the layperson, it's pretty shocking to hear how null results and replication studies have been treated by even reputable journals. There are a bunch of solutions offered at the end. The only downside is that if an aspiring scientist were to read this book, they might throw in the towel before they even start. It sort of implies that it is basically impossible to do anything worthwhile with the sort of participant samples that junior scientists have access to (alas, the dreaded p-value). Not being able to pursue your own ideas before you get that million-dollar grant, after years of being a small cog, may discourage some. It's not the romantic ideal. But I guess it's for the best. *** “Science, the discipline in which we should find the harshest scepticism, the most pin-sharp rationality and the hardest-headed empiricism, has become home to a dizzying array of incompetence, delusion, lies and self-deception.”

  11. 5 out of 5

    Hana

highly recommend to anyone interested in the behind-the-scenes of science. the scientific process is riddled with human flaws: there's fraud, bias, and negligence. publishing papers to share findings with the general public – once just the means – has in fact become the sole objective of scientific endeavours. we're hyping up the results instead of being humble about our knowledge. at some point in the book, Ritchie refers to "the natural selection of bad science", which is a pretty fitting description for the ongoing feedback loop. now, this all sounds very pessimistic. but next to skilfully describing the context surrounding all the flaws in the scientific system, Ritchie also shows a very strong notion of what science should stand for and how we can start getting closer to it. "Society takes science remarkably seriously. Scientists need to reciprocate by holding themselves to far higher standards." (also, a bunch of fun facts included)

  12. 4 out of 5

    Alex

the audience of this book is clearly lay people with no idea what's going on in science, and you know what, fine, this is probably decent for that audience, in the same way that middle school teaches you a lot of things that aren't really correct but whatever, you have to start somewhere. this is a sloppy presentation of the mainline narrative that emerged from the replication crisis, one that is uncritical and myopic. the presentation of statistics is particularly painful. it is hard to take seriously an account that presents psychology's problems as if they are universal or new; that sees science as producing results that are either correct or incorrect, rather than subject to uncertainty; that centers p-hacking and replicability as the fundamental problems; that offers no analysis that hasn't already been rehashed umpteen times; that fails to cover the vast, exciting and recent meta-scientific literature (from within psychology itself!); that... the list goes on. this book irritated the shit out of me

  13. 5 out of 5

    Chris Boutté

    Incredible book that I binged in a day. As an influencer who often references psychological studies but also knows how much bad science is out there, I’m always trying to learn more about this subject. This author did a great job not just giving examples of bad science, but he explains WHY it’s happening and offers solutions. Absolutely loved this book and hope some journalists read it as well before they keep reporting on hyped up science.

  14. 5 out of 5

    Emil O. W. Kirkegaard

Great book. Similar to Chris Chambers' 7 deadly sins book but wider in scope.

  15. 4 out of 5

    A.E. Bross

This book isn't for the faint of heart. And I don't mean that anything is too intense or anything along those lines. Ritchie, who is also the narrator/reader of this audiobook, does an excellent job of explaining his perspective, the research behind it, and the various ways the problem he outlines can be alleviated (though there's no magic-bullet method that can correct all the ills of fictions in scientific research/publishing). No, what makes it not for the faint of heart is that, once you've read it, you can't un-know the fact that many, many scientists are taking more than their share of shortcuts in the world of academia, in order to placate the great beast that is 'publish or perish.' A really excellent book on the topic; the reader can't help but have their eyes opened to what's going on, and read all the more closely when they see the newest, hottest scientific paper making the rounds in the media.

  16. 5 out of 5

    Pete

Science Fictions: The Epidemic of Fraud, Bias, Negligence and Hype in Science (2020) by Stuart Ritchie is an excellent book that looks at the many problems in science and what can be done to improve the situation. Ritchie is a psychologist at King’s College London. Science Fictions goes through how science currently works and then details the replication crisis, where attempts to replicate studies, particularly in psychology but also in other fields, demonstrated serious problems with science as it stands. The problems of outright fraud, bias, negligence and hype are then examined. P-hacking, dropping studies with null results, self-citing and chronic hype are all well described. The book has many examples of these problems. Ritchie also describes the Mertonian norms that science should seek to uphold: universalism, disinterestedness, organised skepticism and the communal sharing of results. The book gets into why scientists engage in dubious activities, namely that they want to succeed and often believe that their hypothesis is true, it just needs a bit of help. This is noble cause corruption, though Ritchie doesn’t use the term. The push for scientists to publish and increase their h-index, and for journals to raise their impact factor, is also outlined. Ritchie also describes how science can be improved. Automated checking for statistical errors (sketched below), pre-registration, open data, publication in free-access journals and credit being given for replication and for well-obtained null results can all help. Ritchie also points out that science has had lots of success recently and even with the current problems it still achieves incredible things. Science Fictions is a fine book that is well thought through, well written and fun to read. It would be very hard not to learn something from reading it.
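On the automated checking of statistical errors mentioned above: tools such as statcheck rest on the idea of recomputing a paper's reported p-values from its reported test statistics. The toy sketch below illustrates that principle with made-up numbers; it is not the implementation of any actual tool:

```python
# Toy consistency check: parse a reported result of the form
# "t(df) = value, p = value" and recompute the two-sided p-value.
import re
from scipy import stats

reported = "t(28) = 2.20, p = .04"                 # made-up example result

m = re.match(r"t\((\d+)\)\s*=\s*([\d.]+),\s*p\s*=\s*(\.\d+)", reported)
df, t, p_reported = int(m.group(1)), float(m.group(2)), float(m.group(3))

p_recomputed = 2 * stats.t.sf(t, df)               # two-sided p from t and df
consistent = abs(p_recomputed - p_reported) <= 0.005   # allow for rounding
print(f"reported p = {p_reported}, recomputed p = {p_recomputed:.3f}, "
      f"consistent = {consistent}")
```

Because the check is mechanical, it can be run over thousands of papers at a time, which is what makes it attractive as a remedy.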

  17. 4 out of 5

    Steve

Really enjoyed this description of some of the big problems in science today. This is not in any way an anti-science book; Ritchie makes clear that he wants to improve science, not to dispense with it. Along with describing problems he also describes much of the process of science, which I enjoyed. He spends a lot of time on the reproducibility crisis, p-hacking and other statistical cheating, and many other issues that one hears about when science problems get in the news. This book has been well reviewed in general publications, but I was curious how science journals would review it. The only review I found in a professional science periodical (in the ultra-prestigious Nature) was basically positive with a few criticisms.

  18. 4 out of 5

    Analia

As a scientist in training, this book simultaneously made me feel incredibly validated and threw me into a deep crisis that left me googling grants, bookmarking open-source statistical software, and looking up alternative careers. I think that's a sign of how important this book is, though. I highly recommend this to anyone who ever interacts with research findings and their interpretation (meaning literally everyone, especially with COVID), because it will give you a realistic perspective of how science works. After reading this, you will have a solid idea of how to evaluate research presented to you from any outlet, which I feel is something we all struggle with when we're flooded with as much information as we are today.

  19. 5 out of 5

    Scott Lupo

AVOID. Here are my reasons:
- From the beginning, this author lost my trust. In the preface, the author mentions how he and some colleagues wrote a null paper on the psychic experiments by Daryl Bem and were "unceremoniously rejected from the journal that published the original." That leaves the reader thinking that he never got that study published, and he moves on to the next subject. WRONG. Read the notes and you will find it did get published, just not to his liking.
- Read the notes. It's another whole book back there, with some of them paragraphs long. Many of them either refute what he originally said or alter the original meaning just enough to make you realize he's trying to pull something. It would be interesting to know how many people actually read citations or notes at the end of books (I couldn't find anything on Google). I would venture to say not many, which I think he purposefully relied on for his narrative.
- Do you enjoy abusive relationships? Me neither. However, that is what this book is like. "I come to praise science, not to bury it; this book is anything but an attack on science itself, or its methods." The next paragraph then explains that the only "fragile scrap of hope and reassurance that emerges from the Pandora's box of fraud, bias, negligence and hype" is that scientists have uncovered these things themselves. Throughout the book it's 'science is great, but science also sucks really, really bad because...'.
- He has a hard-on for Daniel Kahneman. He really doesn't like him.
- He conflates social sciences with ALL SCIENCE. Yes, social sciences are muddy and gray because they deal with human beings, who are muddy and gray in just about everything they do. Creating experiments is very difficult and interpreting results even more difficult. But to lump all of science into this category is foolish and leans towards trickery. Throughout the book he switches, sometimes within the same paragraph, from a social science to other sciences.
- Fraudsters, charlatans, flimflammers, and hustlers all use certain phrases in their toolkit of shams. Some of those are things like "you know what I mean", "let's face it", "it should be noted", "that being said". These are the phrases of all those psychics we used to see on TV (John Edward, Miss Cleo, Sylvia Browne). These phrases purposefully leave the door open for interpretation and let the listener/reader fill in the blanks themselves. Yeah, that is fraudster 101 class right there.
- He constantly brings up the oldest cases of science fraud and then tries to compare them to today's frauds. Every case he brings to light ends in one way: they were caught! Because that is what science does. Incredible claims require incredible evidence. He acts like it is the worst thing in the world that science actually caught these things. Apparently, it is never fast enough for the author, or he thinks science should be absolutely free of any errors, full stop. I am unsure whether he truly understands the scientific process.
-His conclusions on how to fix these things is paltry at best. In fact, many of his suggestions are already in use today! Others he admits would be impossible to do. My only conclusion to this book is that it is a thinly veiled hit piece on science. Every fraudster knows that if you include nuggets of truth in your parable, then it will seem like everything is truthful. That is exactly this book. He even talks about this in the book. That scientists have gotten so good at faking their results so that it doesn't look 'perfect' and people will buy it. The irony!! The author grandiosely overstates his hypothesis that there is an epidemic of fraud, bias, negligence, and hype in science. Many times I thought I was reading an Onion article made into a book because he uses all those things in this book. Okay, I have laid out my reasons but I also want to give credit when it is due. Science is not perfect and the process is not always efficient and it does not always incentivise the proper way. Welcome to the problem with scaling up and money. Sure, it would be great to have science run without any thought to money or resources. Science for the sake of science. Cool with me. Let's shoot for that and do what we can to get as close as we can to that ideal. But this is not the message in this book. I truly believe this author has problems with two things: social sciences and the philosophy of science (epistemology). He should consider writing on those subjects instead of attacking the whole of science. Especially in a dishonest way like this book. It gets a star for actually writing a book and a star for at least shedding light on some of the issues with scientific research sometimes. That's two stars. The same I gave to Michelle Malkin. Enough said.

  20. 4 out of 5

    Amirmansour Khanmohammad

    Just thinking about how many BS books I have read as science when, in reality, they're fiction, not science. A great and enjoyable read.

  21. 5 out of 5

    Anthony D'Ambrosio

    Pleasantly surprised by how often the phrase "salami slicing" appears.

  22. 4 out of 5

    Annas Jiwa Pratama

    Back in around 2017, I think, was when I was properly introduced to the crisis in psychology and the open science movement. I was doing my masters in health psychology, which in a small part was influenced by my then fascination with nudging and Brian Wansink's research in particular. I felt a little betrayed when I found out that he was a fraud, but was mostly embarrassed. Since then, I've grown more and more jaded, knowing less and less about what in my field was true and what wasn't, and I tried to stay updated on the reform movement and other crises in psych. This book feels like a summary of what I've observed (mostly via twitter and papers) over the years since. Ritchie offers a broad overview of the symptoms of bad science (frauds, biases, negligence, overselling findings), then presents possible causes and what's currently being tested and rolled out to try and prevent scientific malfeasance. It's a great introduction, especially if you've heard about the 'replication crisis' or 'open science' and wanted a glimpse of the kitchen. This is probably *the* intro book for meta-science, one that focuses mostly on why it is needed and why it might work out. If you are a practitioner or a scientist, as I (would like to) think I am, you probably won't find anything too surprising. Nonetheless, it is still a good refresher, and there are more practical sections too (the 'How to Read a Scientific Paper' appendix feels like it would make really good material for teaching students and laypersons how to apply skepticism when reading about science). I'd like to add, however, one thing to the book: organize. I discovered a chapter of scientists here in Indonesia that is laying the groundwork for better science. Policy advocacy, networking with international communities, creating open science platforms, helping each other get on top of new methods, all-around amazing stuff. Seeing people actually doing the work really makes you believe that things can get better. However, it is very important to note as well that we need to take the reform movement with the same skepticism that the book proposes we have for scientific findings in general. After all, meta-science is also a science, and the people running it have their own stakes and biases. The most popular review of the book on this site suggests that the approach Ritchie proposes is itself a kind of new 'meta-hype', just as unproven, and that 'flakey results' will eventually be subject to correction anyway. I somewhat agree, in that it is important to look critically into meta-science and reforms as well. In psychology, for example, the reform movement has been criticized for being overly focused on methodological reforms, ignoring that the field's actual need is more foundational, and for the reforms themselves lacking a formal approach (as well as the fact that reformists are not always open and welcoming themselves). However, I feel that the latter argument is misguided. False findings aren't 'automatically' found. As the book has shown, it takes a lot of work and sometimes invites a lot of pushback from powerful individuals and institutions. Corrections don't magically happen. To dismiss this book on that basis is, to put it bluntly, kind of whack.
    Tangents
    • I learned that peer review is actually a rather recent invention, which kind of makes sense when you think about the logistics. It's kind of funny reading how Einstein fumed at the thought of getting his paper reviewed pre-publication.
    • Also, this book is actually rather nuanced, if I'd say so myself. I did not think the discussion of the Trump administration's attack on climate science using the language of meta-science was going to be included, but there it was.
    • Feels kind of bad to sub-tweet a Goodreads review lmao

  23. 5 out of 5

    Nick Mclean

    A brilliant and timely look at the problems that afflict contemporary science, and thoughtful, inventive solutions to those problems. Stuart Ritchie delves into a central problem in contemporary science: many major, often lauded studies fail to replicate at scale (or at all). He examines key causes of these problems: hype, bias, negligence, and fraud, and goes into detail about how perverse incentives in our contemporary system feed each one. The examples he uses to illustrate his case are insightful, informative and unsettling. Ritchie doesn't merely analyze the problem; he also proposes innovative solutions and highlights the promising work of many scientists and reformers. If nothing else, this book is a useful guide for how to spot dubious reporting on scientific issues and sceptically analyze outsized claims. For those entering the field, this book should serve as a primer for how aspiring scientists can be constructive players in an evolving system. I agree with Ritchie that while declining public faith in some scientific fields is troubling, and we can improve how we communicate science, in the long term this is a "physician, heal thyself" moment. I hope practicing and aspiring scientists read this essential book.

  24. 5 out of 5

    Daniel Hageman

    Highly recommend.

  25. 5 out of 5

    Ben

    My first ever audiobook. Well written and convincing. A little repetitive, which is sort of the nature of the beast. When he introduces yet another study with incredible results, obviously you know it's about to be debunked.

  26. 5 out of 5

    Cam

    Essential reading for graduate science researchers, although much of the material will hopefully be familiar to them. Ritchie writes clearly. He's likeable and scientifically and statistically literate, but doesn't take himself too seriously. He's a great science populariser even when he is denigrating science! Ritchie helped kick off the well-publicised replication crisis in social science in 2012 when he attempted and failed to replicate a parapsychology paper. The original paper by Bem purported to show that we can study for a test after we have taken the test to improve our test results. Obvious nonsense, right? No surprise it failed to replicate. The major problem, as the original authors noted, is that their methods weren't all that different from those of many papers being published in social science. Essentially, social science can't be trusted. Whether a study replicates doesn't correlate with how many citations it has. Truly remarkable. Ritchie does a nice job explaining to a lay audience concepts like the p-value, statistical significance, and common dodgy statistical methods such as p-hacking and HARKing. He also outlines how the issues are exacerbated by perverse incentives in academia, such as publish-or-perish and the need for results to be statistically significant and sexy. Ritchie also recounts some good narrative non-fiction around some of the most high-profile cases of fraud, such as Diederik Stapel (the Bernie Madoff of science) and Paolo Macchiarini (who claimed he was healing people with risky procedures, as opposed to killing them!). Twitter user Alvaro De Menard is less optimistic than Ritchie. De Menard points out, with a systematic deep dive into social science papers' replicability, that this isn't a recent phenomenon, and argues that any proposed way forward to fix the crisis within academia is unlikely to succeed.
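    The p-hacking this review mentions is easy to demonstrate with a toy simulation. The sketch below (plain Python; every parameter and function name is an illustrative assumption of mine, not anything from the book) measures several unrelated outcomes in a null experiment and reports only the smallest p-value. The nominal 5% false-positive rate balloons as more outcomes are tested.

    import math
    import random
    import statistics

    def t_test_p(a, b):
        # Two-sample t statistic with a normal approximation to the
        # two-sided p-value (adequate for n = 30 and for illustration).
        se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
        t = (statistics.mean(a) - statistics.mean(b)) / se
        return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

    def hacked_study(n_outcomes, n=30):
        # One null study: every outcome is pure noise, yet the "researcher"
        # keeps only the smallest p-value across all outcomes measured.
        pvals = []
        for _ in range(n_outcomes):
            treatment = [random.gauss(0, 1) for _ in range(n)]
            control = [random.gauss(0, 1) for _ in range(n)]
            pvals.append(t_test_p(treatment, control))
        return min(pvals)

    random.seed(42)
    trials = 2000
    for k in (1, 5, 20):
        rate = sum(hacked_study(k) < 0.05 for _ in range(trials)) / trials
        print(f"{k:2d} outcomes tested -> {rate:.1%} of null studies look 'significant'")
    # Roughly 5%, 23% and 64%; the analytic rates are 1 - 0.95**k.

    The same arithmetic drives subgroup fishing and optional stopping, and it is why pre-registration helps: committing to a single analysis in advance pins the false-positive rate back at its nominal level.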

  27. 4 out of 5

    Tristan Eagling

    Goodhart's law states, "When a measure becomes a target, it ceases to be a good measure", which sums up the premise of the book perfectly. For centuries, science has tried to put a value on subjective knowledge, and academia relies on these often arbitrary metrics. But all we have done is create a system which can be gamed, and populate that system with (mostly) clever people who are heavily incentivized to game it. As someone who has published scientific research, peer reviewed others' work and worked for various funders, I found that many of the author's criticisms of 'science' and how we incentivise it hit home. Nothing in this book was particularly surprising or new to me, but I had never considered the extent of the combined effect all these individual imperfections in our system are having on the quality of the science being produced (and how much time and money is being spent on nothing more than maybe furthering someone's career). The most fascinating part of the book was the reference to the field of meta-science (the science of science), which has started to quantify just how bad malpractice in science is and to analyze aspects of the funder-scientist-journal relationships. This will be an uncomfortable read for many in the world of science, or for anyone who has advocated the findings of a popular science book to their friends. However, it is essential reading and hopefully will help us all get closer to that elusive concept of 'truth'.

  28. 4 out of 5

    Dan Giffney

    A very accessible critique of the current research and science-publishing system. It expresses admiration for the scientific method whilst describing how it has been weakened by perverse incentives, producing an outcome- rather than process-orientated system. These outcomes are sometimes of low quality. In the best-case scenario the research is repeated, causing animals to suffer and resources to be wasted. In the worst of cases, replications are discouraged by publishers, preserving incorrect notions and wasting human lives in the pursuit of speed. My one gripe with the book is that it offers solutions to these problems that I have seen before, and seen ignored for years, as the people who could make a difference by championing these policies end up embedded within the repetitive cycle. People doing considerate, well-planned science are disadvantaged in a competitive job market that uses publication number, not quality, to rank applicants, leading to a "natural selection of bad science". Are there other feasible ways to encourage honesty in the research community? Or do all dispersed, self-organising, "self-regulating" systems encourage unscrupulous and selfish behaviour as they outcompete all others?
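    The "natural selection of bad science" this review quotes is concrete enough to sketch in code. The toy model below is my own loose illustration with made-up parameters, not the published model the phrase comes from: labs differ only in methodological rigor, the job market copies whoever publishes most, and rigor erodes even though no individual intends it to.

    import random

    random.seed(1)
    N_LABS = 100
    GENERATIONS = 50

    def publications(rigor):
        # Papers per generation: real discoveries (rigor-independent) plus
        # artefacts that slip through sloppy methods, so low-rigor labs
        # publish more. The constants are arbitrary illustrative choices.
        return 1.0 + (1.0 - rigor) * 10.0

    # Start with labs spread across the rigor spectrum (0 sloppy, 1 careful).
    labs = [random.random() for _ in range(N_LABS)]

    for _ in range(GENERATIONS):
        # The most-published decile "reproduces": new hires copy the methods
        # of high-output labs, with a little mutation, and replace the rest.
        labs.sort(key=publications, reverse=True)
        parents = labs[: N_LABS // 10]
        labs = [
            min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.02)))
            for _ in range(N_LABS)
        ]

    print(f"Average rigor after {GENERATIONS} generations: "
          f"{sum(labs) / N_LABS:.2f}")  # collapses toward zero

    Under these assumptions average rigor collapses within a few dozen generations, which is the reviewer's point: nobody needs to behave badly for a system that ranks on output alone to select against careful work.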

  29. 5 out of 5

    Richard Thompson

    This was a terrific book about the problems that exist today in the accuracy of scientific publications, the causes of the problems and some good ideas about what can be done to improve the situation. It could have been written in an alarmist style with much finger pointing, but it wasn't. Sometimes the bare numbers are shocking in disclosing the size of the problem, though the author would be the first to admit that even that has to be taken with a grain of salt since numbers are always selected and presented for impact, even when the story they tell is essentially true. I liked how the author bases his arguments on the underlying values of the scientific method and builds his program around ways to reinforce those values. I would recommend this book to anyone interested in the process and philosophy of science.

  30. 5 out of 5

    Mizuki

    This book is not the usual 'science fiction', so do not expect that. It is about scientific fraud, and it is full of well-known cases. As a scientist, and even just as a reader of a lot of scientific literature, I'm quite familiar with these cases. Most of the points the author makes are painfully true; I know they are, especially about publication bias. Most scientists just want to publish their "best" data, even if it does not reproduce well enough. On the other hand, I somewhat want to believe that most scientists seek something truly important/useful. In my experience, we usually publish our best results knowing these are not really practical, and in the background we continue seeking something really practical, even though it would not be published until fully validated.
