• elashri 6 hours ago

    There is at least one thing wrong about this. This is an essay about a paper based on simulated scenarios in medical research. It then tries to generalize to "research" at large, glossing over how narrow the support for that claim is. I do think the point is true and should make us more cautious when deciding based on single studies, but things are different in other fields.

    Also, this is called research. You don't know the answer beforehand. You have limitations in the technology and tools you use. You might miss something, or not have access to information that could change the outcome. That is why research is a process. Unfortunately, popular science books talk only about discoveries and results that are considered fact, and usually say little about the history of how we got there. I would suggest a great book called "How Experiments End" [1]; enjoy going into the details of how scientific consensus is built across many experiments in different fields (mostly physics).

    [1] https://press.uchicago.edu/ucp/books/book/chicago/H/bo596942...

    • a_bonobo 5 hours ago

      I remember criticism from back when this paper first came out: it went something like, 'all this shows is that, using maths, it is possible to construct a world where most published research findings are false.'

      • ants_everywhere 3 hours ago

        I think the best way to view this paper is as a sort of meta-analysis of a wider literature around null hypothesis testing and p-values. That literature goes back at least to the 1970s with the work of people like Paul Meehl and Gene Glass. But you can push it further back, like the 1957 Lindley Paradox that Ioannidis cites.
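
        The Lindley paradox is easy to demonstrate numerically. A rough sketch (my numbers, not from the paper): test a point null mu = 0 against a unit-normal prior on the effect, hold the z-score fixed at the p = 0.05 threshold, and grow the sample size; the Bayes factor swings toward the null even though every result stays "significant":

```python
import math

def bf01(z, n, tau=1.0):
    """Bayes factor for H0 (mu = 0) versus H1 (mu ~ N(0, tau^2)),
    given a z-statistic from n unit-variance observations."""
    shrink = n * tau**2 / (n * tau**2 + 1)
    return math.sqrt(n * tau**2 + 1) * math.exp(-(z**2 / 2) * shrink)

# z = 1.96 is "just significant" at p = 0.05, regardless of n...
for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9}  BF in favor of the null: {bf01(1.96, n):.1f}")
# ...yet with enough data the same z-score is strong evidence FOR the null.
```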

        Part of the reason this paper was impactful is that it was short and punchy, took aim at all of medicine rather than a smaller subfield, and didn't require as much mathematical understanding as other papers.

        I knew some of the big names that were hit by the replication crisis. And before that I spent some time trying to talk to psychology researchers at a top school about the problems with statistical testing. But they had limited knowledge of stats and didn't want to go out on a limb when everyone else in the field seemed okay with the status quo. A paper like this can be read by everyone and makes a forceful argument.

        > It then try to generalize to "research" and avoid this very narrow support to the claim

        This is a good point. The methods in medicine and the social sciences are especially weak and prone to these sorts of criticisms. In the physical sciences, often you can run enough iterations of the experiment to overwhelm any prior.
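
        For anyone who hasn't read the paper, its core argument is a short calculation. A minimal sketch (leaving out the paper's bias term u, and writing R for the prior odds that a tested relationship is true):

```python
def ppv(R, power, alpha):
    """Post-study probability that a 'significant' finding is true:
    true positives / (true positives + false positives), with prior
    odds R, power = 1 - beta, and significance level alpha."""
    return (power * R) / (power * R + alpha)

# Well-powered study in a field with decent priors:
print(ppv(R=0.25, power=0.8, alpha=0.05))   # 0.8

# Underpowered, exploratory field chasing long-shot hypotheses:
print(ppv(R=0.02, power=0.2, alpha=0.05))   # ~0.07 -- most "findings" false
```

        This is also why repetition helps in the physical sciences: each independent replication at level alpha multiplies down the false-positive rate much faster than it erodes power, so the PPV climbs toward 1 for any nonzero prior.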

        > You have limitations in tech and tools you use. You might miss something, didn't have access to more information that could change the outcome. That is why research is a process.

        I totally agree. Science is basically a control system, or a root finding algorithm, or gradient descent. At any time t there is a gap between the best known science and the truth. But the point is that science converges to the truth over time, whereas no other alternative does.

        • thatguysaguy 6 hours ago

          I think it's clear that this paper has stood the test of time over the last 20 years. Our estimates of how much published work fails to replicate or is outright fraudulent have only increased since then.

          • giantg2 5 hours ago

            [Please consider the following with an open mind]

            Just because a study doesn't replicate doesn't make it false. This is especially true in medicine, where the potential global subject population is very diverse. You can do a small study that suggests further research based on a small sample size, or even a case study. The next study might have a conflicting finding, but that doesn't make the first one false; rather, it's a step in the overall process of gaining new information.

            • llamaimperative 5 hours ago

              I think it's much, much more powerful to think of "failure to replicate" as "failure to generalize."

              Absent actual fraud or a methodological mistake that wasn't caught in peer review, it's still extremely difficult to control for all possible sources of variation. That's especially true as you go further "up the stack" from math -> physics -> chem -> bio -> psych -> social. It is absolutely possible to honestly conduct a very high quality experiment with a real finding, but fail to account for something like "on the way here, 80% of participants encountered a frustrating traffic jam."

              Their finding could be true for people who just encountered a traffic jam, and lack of replication would be due to an unsuccessful generalization from what they found.

              • scns 5 hours ago

                Dislike being a pedant but the stack was missing math up front

                • llamaimperative 5 hours ago

                  Haha, math strikes me as a bit different from the others... but I'll add it just for you ;)

                  • gopher_space 27 minutes ago

                    Dislike being a pedant but the stack is missing philosophy up front.

              • jahewson 5 hours ago

                > Just because a study doesn't replicate, doesn't make it false.

                But it also doesn’t make it not false. It makes the null hypothesis more likely to be true.
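
                To put rough numbers on that (all of them invented for illustration): start 50/50 on the effect being real, and suppose the replication attempt had 80% power and a 5% false-positive rate.

```python
def p_null_given_failed_replication(prior_null=0.5, power=0.8, alpha=0.05):
    """Posterior probability of the null after one failed replication,
    by Bayes' rule. All defaults are illustrative, not from any study."""
    p_fail_if_null = 1 - alpha   # 0.95: a true null usually fails to replicate
    p_fail_if_real = 1 - power   # 0.20: a real effect can fail too
    num = p_fail_if_null * prior_null
    return num / (num + p_fail_if_real * (1 - prior_null))

print(p_null_given_failed_replication())  # ~0.83: belief shifts toward the null
```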

                • robwwilliams an hour ago

                  That is certainly one possible interpretation.

                  The other is the introduction or loss of critical cofactors or confounders that radically change environment and context.

                  Think of experiments of certain types before and after COVID-19.

              • tptacek 6 hours ago

                Outright research fraud is probably very rare; the cases we've heard about stick out, but people outside of academia usually don't have a good intuition for just how vast the annual output of the sciences is. Remember the famous PhD comic showing how your thesis is going to be an infinitesimal fraction of the work of your field.

                • kelipso 5 hours ago

                  Research fraud is likely very rare, but it's not just a few stories about unreplicable studies that stick out. There was a study a few years ago where they tried to replicate a bunch of top-cited psychology papers, and the majority of the experiments did not replicate. People then did the same for other disciplines and, while it wasn't as bad as psychology, there were plenty of papers they couldn't replicate.

                  • tptacek 4 hours ago

                    Every time this topic comes up I'm reminded of what Stefan Savage, a hero of mine, said about academic papers ("studies", in the sense we're discussing here): they are the beginnings of conversations, not the end. It shouldn't shock people that results in papers might not replicate; papers aren't infallible, which makes sense when you consider the process that produces them.

                    • robwwilliams an hour ago

                      That is a generous interpretation. But in many cases we try our best to dress up studies and tell good stories, preferably stories with compelling positive statistics and slick figures. The storytelling often obscures the key data.

                  • dekhn 5 hours ago

                    Is incompetence fraud? Or just incompetence? I'm asking because a fair number of the molecular biologists who get caught by Elisabeth Bik for copy/pasting images of gels insist they just made honest mistakes (with some commentary about the atrocious nature of record-keeping in modern biology).

                    I alter Ioannidis's conclusion to be instead: "Roughly 50% of papers in the quantitative biological sciences contain at least one error serious enough to invalidate the conclusion" and "Roughly 75% of really interesting papers are missing at least one load-bearing method detail that reproducers must figure out on their own" (my own observations of the literature are consistent with these rates; I was always flabbergasted at people who just took Figure 3 as correct).

                    • kelnos 3 hours ago

                      > Is incompetence fraud? Or just incompetence?

                      Fraud requires intent; it's a word that describes what happened, but also the motivations of the people involved. Incompetence doesn't assume any intent at all; it's merely a description of the (lack of) ability of the people involved.

                      Incompetent people can certainly commit fraud (perhaps to try to cover up their incompetence), but that's by no means required.

                      > ...insist they just made honest mistakes

                      If they're lying about that, it's fraud; they're either covering up their unrealized incompetence with fraud, or trying to cover up their intended fraud with protestations of mere incompetence. If they really did make honest mistakes, then it's just garden-variety incompetence. (Or just... mistakes. To me, incompetence is when someone consistently makes mistakes often. One-time or few-time mistakes are just things that happen to people, no matter how good they are at what they do.)

                      • pfdietz an hour ago

                        The legal phrase I like is "knew or should have known". If there is a situation where you should have known something was wrong, it's as bad as if you really knew it was wrong. To hold otherwise incentivizes willful blindness and plausible deniability.

                      • kelipso 5 hours ago

                        There is no one hovering over scientists all the time, ready to stick a hot poker in them when they make a mistake or get careless. I was in academia, and my impression is that there is a reluctance to double- and triple-check results as long as they match your instincts, whether out of time pressure, laziness, bias, or just being human.

                        • dekhn 4 hours ago

                          At least in my own mental model of publishing a paper (I've published only a few), I'd want my coauthors to stick hot pokers in me if I made a mistake or got careless. But then, my entire thesis was driven by a reproducible Makefile that downloaded the latest results from a supercomputer, re-ran the whole analysis, and wrote the necessary LaTeX (at least partly to avoid making trivial mistakes). It was clear that everything I was doing was just getting in the way of publishing high-prestige papers.

                          • robwwilliams an hour ago

                            All too easy to understand your situation. NIH is finally, if slowly, waking up and imposing more “onerous” (read: essential and correct) data management and sharing (DMS) requirements. Every grant applicant now submits a DMS plan following these guidelines:

                            https://grants.nih.gov/grants/guide/notice-files/NOT-OD-24-1...

                            Unfortunately, not all NIH institutes understand how to evaluate and moderate this key new policy. Oddly enough the peer reviewers do NOT have access to DMS plans as of this year.

                        • llamaimperative 5 hours ago

                          > I'm asking because a fair number of the molecular biologists who get caught by Elisabeth Bik for copy/pasting images of gels insist they just made honest mistakes

                          You're talking about (almost certainly) fraudsters denying they committed fraud. The vast majority of non-replicable results have nothing to do with these types of errors, purposeful or not.

                    • throwoutway 2 hours ago

                      > But things are different in other fields.

                      Everyone claims it's different in their field.

                      • singleshot_ 2 hours ago

                        Strangely, we don't in my field!

                    • vouaobrasil 6 hours ago

                      > In this framework, a research finding is less likely to be true [...] where there is greater flexibility in designs, definitions, outcomes, and analytical modes

                      It's worth noting, though, that in many research fields, teasing out the correct hypotheses and all the relevant factors is difficult. Sometimes it takes quite a few studies before the right definitions are even found; definitions which are a prerequisite for a useful hypothesis. Thus, one cannot ignore the usefulness of approximation in scientific experiments, not only to the truth, but to the right questions to ask.

                      Not saying that all biases are inherent in the study of sciences, but the paper cited seems to take it for granted that a lot of science is still groping around in the dark, and to expect well-defined studies every time is simply unreasonable.

                      • 3np 6 hours ago

                        This is only meaningful if "the replication crisis" is systematically addressed.

                      • SideQuark 4 hours ago

                        This paper, almost 20 years old, has plenty of follow-up work showing the claims in this original paper aren’t true.

                        One simple angle is Ioannidis simply makes up some parameters to show things could be bad. Later empirical work measuring those parameters found Ioannidis off by orders of magnitude.

                        One example https://arxiv.org/abs/1301.3718

                        There are plenty of other published papers showing other holes in the claims. Google Scholar lists the papers citing this one:

                        https://scholar.google.com/scholar?cites=1568101778041879927...

                        • dang 4 hours ago

                          Related. Others?

                          Why most published research findings are false (2005) - https://news.ycombinator.com/item?id=37520930 - Sept 2023 (2 comments)

                          Why most published research findings are false (2005) - https://news.ycombinator.com/item?id=33265439 - Oct 2022 (80 comments)

                          Why Most Published Research Findings Are False (2005) - https://news.ycombinator.com/item?id=18106679 - Sept 2018 (40 comments)

                          Why Most Published Research Findings Are False - https://news.ycombinator.com/item?id=8340405 - Sept 2014 (2 comments)

                          Why Most Published Research Findings Are False - https://news.ycombinator.com/item?id=1825007 - Oct 2010 (40 comments)

                          Why Most Published Research Findings Are False (2005) - https://news.ycombinator.com/item?id=833879 - Sept 2009 (2 comments)

                          • cb321 3 hours ago

                            As clarification, the article linked in the subject is dated 2022, BUT it is actually just "a correction" of the very famous 2005 article. The correction is eentsy-weentsy: just a missing pair of parentheses if you click through:

                                There is an error in Table 2. A set of parentheses is missing in the equation for Research Finding = Yes and True Relationship = No. Please see the correct Table 2 here.
                          • tombert 5 hours ago

                            As I’ve transitioned to more exploratory and researchy roles in my career, I have started to understand the science fraudsters like Jan Hendrik Schön.

                              When you've spent an entire week working on a test or experiment that you know should work, at least if you give it enough time, but it isn't working for whatever reason, it can be extremely tempting to invent the numbers you think it should produce, especially if your employer is pressuring you for a result. Now, obviously, the reason we run these tests is precisely because we don't actually know what the results will be, but that's sometimes more obvious in hindsight.

                            Obviously it’s wrong, and I haven’t done it, but I would be lying if I said that the thought hadn’t crossed my mind.

                            • bluefirebrand 5 hours ago

                              > When you spent an entire week working on a test or experiment that you know should work

                              I thought the whole point of doing experiments was to challenge what we "know" so we can refine our understanding?

                              • llamaimperative 5 hours ago

                                Sure in la-la-land where science isn't conducted by humans.

                                In reality, scientists are highly motivated (i.e. biased) individuals like anyone else. Therefore science cannot be done effectively by individuals.

                                The system that derives truth from experiments - the actual scientific system - is the competitive dynamic between scientists who are trying to tarnish each others' legacies and bolster their own. The scientific method etc. primarily makes scientific claims scrutinizable in detail, but without scrutiny they are still highly liable to produce false information.

                                • hiimkeks 3 hours ago

                                  A bit of a nitpick, but...

                                  > The system that derives truth from experiments - the actual scientific system...

                                  Yes!

                                  > ... is the competitive dynamic between scientists who are trying to tarnish each others' legacies and bolster their own.

                                  Hm. To some degree, sure, that is one dynamic, but (a) this leads to/presupposes a truckload of perverse incentives and (b) this is not inherent in the system if we rearrange incentives

                                  • llamaimperative 2 hours ago

                                    Do you have an idea for a better one? It is pretty darn close to natural selection, which while ugly, does produce surprisingly good results in many domains.

                                    Of course the implementation is far from perfect. For example, the interaction between impact factor and grant funding produces pressure toward ideological conformity and excessive analytical “creativity”. But the underlying principle of competitive scrutiny is probably a desirable one.

                                    • jtc331 2 hours ago

                                      How do you eliminate the personal incentive to have found a meaningful result? I don’t think that can be changed without redesigning the human psyche.

                                    • 6510 an hour ago

                                      > Sure in la-la-land where science isn't conducted by humans.

                                      If someone has a large bag of money lying around, the plan is this:

                                      There are lots of companies that will run material A through machine B for you. There are a lot of science machines. The idea is to put a lot of them into a large building and make a web page where one can order the processing of substances, in a kind of design-your-own Rube Goldberg machine.

                                      It can start with all purchasable liquids and gases: mixing, drying, heating, freezing, distilling, etc., and measuring color, weight, volume, viscosity, nuclear resonance, and so on, plus microscope video. Have as much automation as possible; collect all the machines. A robot cocktail bar, basically.

                                      Work your way up to assembling special contraptions, all ordered through the GUI.

                                      Jim can have x samples of his special cement mixture mixed and strength tested. Jack can have his cold fusion cells assembled. Stanley can have his water powered combustion engine. Howard can have his motor powered by magnets. Veljko can have his gravity powered engine. Thomas can have his electrogravitics. Wilhelm can have his orgone energy.

                                      or not... hah....

                                      If any people are involved they should not know what they are working on.

                                      It won't be cheap, but then you get a URL with your nice little test report, and opinions be damned.

                                      • wredue 5 hours ago

                                        And yet, it is still the best we got for also producing highly reliable and correct information.

                                        Personally, I think the “highly” in your statement is quite exaggerated. Humans can be convinced to produce bad science, for sure, and there are even journals set up by religious orgs that specifically exist to do just that.

                                        But at the same time, science landed humans on the moon.

                                        • mcmoor 2 hours ago

                                          This is what makes me troubled regarding medical science. I've heard tons of things about fraud and unreproducible results but new wonder drugs (that actually worked!) are deployed every year.

                                          • llamaimperative 2 hours ago

                                            Clinical trials in general are extremely, extremely above board. The level of scrutiny is extreme, and the stakes are unbelievably high for pharma companies and the individuals involved. There are better ways for an unscrupulous pharma co to gain an edge.

                                            That said, wonder drugs are few and far between. The GLPs are at least a once-in-a-decade breakthrough, so that’s probably most of the noise you’re hearing (there are a lot of brand names already).

                                            • jtc331 2 hours ago

                                              What about Vioxx?

                                              • llamaimperative 2 hours ago

                                                > in general

                                                No one is under the illusion it’s perfect or ungameable. A drug slipping by every few years is bad and often tragic, but IMO nowhere close to indicative of a systematic problem. It is a system that is worthy of a high degree of trust.

                                          • kelnos 3 hours ago

                                            > Personally, I think the “highly” in your statement is quite exaggerated.

                                            Except that the entire point of the article here is that it's not exaggerated.

                                            > But at the same time, science landed humans on the moon.

                                            Cherry-picking a highly successful, well-known example doesn't prove a point.

                                            • parodysbird 2 hours ago

                                              > But at the same time, science landed humans on the moon.

                                              That was engineering. Closely linked to science, but not the same process of inquiry.

                                              • EGreg 3 hours ago

                                                No, we have better systems now

                                            • tombert 4 hours ago

                                              In theory, but it is extremely easy to get into the mindset that your hypothesis is absolutely true, and as such your goal is to prove that hypothesis.

                                              I’ve never fabricated numbers for anything I’ve done, but there certainly have been times where I thought about it, usually after the fourth or fifth broken multi-hour test, especially if the test breakage doesn’t directly contradict the hypothesis.

                                              • DiggyJohnson 2 hours ago

                                                Thanks for stating your point so clearly. I’m a bystander to this discussion, but I agree with you about the reality of this.

                                              • ants_everywhere 3 hours ago

                                                There are externally motivated scientists who are in it for the prestige or awards. Some fields are more like this than others, but they show up in all fields.

                                                Plus these days there's a lot of pressure to run universities more like businesses. To eat, academics have to hit certain numbers, so you see behaviors common in business like faking the KPIs.

                                                • MathMonkeyMan 5 hours ago

                                                  Because of that, backing up a claim with research adds weight to the claim.

                                                  If the claim is false, though, you can still sometimes get research to support it. If you or the researcher stands to profit from the false claim, then there is a conflict of interest.

                                                  • SilasX 4 hours ago

                                                    I think that’s what the parent is acknowledging in the end of the second paragraph.

                                                    • renewiltord 5 hours ago

                                                      Well, that depends. What are you paying the guy to do?

                                                    • pragmomm 5 hours ago

                                                      You should also understand that there are external forces here, like state sponsorship programs that monetarily reward scientists simply for filing enough research findings.

                                                      The startling rise in the publication of sham science papers has its roots in China, where young doctors and scientists seeking promotion were required to have published scientific papers. Shadow organisations – known as “paper mills” – began to supply fabricated work for publication in journals there. https://www.theguardian.com/science/2024/feb/03/the-situatio...

                                                      The number of retractions issued for research articles in 2023 has passed 10,000 — smashing annual records — as publishers struggle to clean up a slew of sham papers and peer-review fraud. Among large research-producing nations, Saudi Arabia, Pakistan, Russia and China have the highest retraction rates over the past two decades, a Nature analysis has found. https://www.nature.com/articles/d41586-023-03974-8

                                                      That's why a recent article https://news.ycombinator.com/item?id=41607430, where the claim that China leads the world in 57 of 64 critical technologies was based on journal citation counts, was laughable.

                                                      • a_bonobo 5 hours ago

                                                        Talking with some Chinese colleagues in the past, they described having a 'base' salary that was not enough to support a family. For every published paper they'd get a one-time payment. So you'd have to get a bunch of papers out every year just to survive; no wonder people start to invent papers.

                                                        Of course the same thing is happening in the 'Western' world too, with a publication ratchet going on. New hire has 50 papers out? OK! The next pool of potential hires has 50, 55, 52 papers out, so obviously you take the 55 papers-person. You want outstanding people! Then the next hire needs 60 papers. And so on.

                                                        • nativeit 3 hours ago

                                                          ...an effect known as "wonkflation".

                                                        • resoluteteeth 5 hours ago

                                                          I think there are maybe two separate issues here.

                                                          Paper mills are bad but mostly from the perspective of academic institutions trying to verify people's credentials/resumes. Paper mills aren't really that much of a concern in the sense of published research results being false in the way the article is talking about because people aren't really reading the papers they publish. In that sense it doesn't really matter if there are places where non-scientists need to get one paper published to check some box to get a promotion, because nobody is really considering those papers part of established scientific knowledge.

                                                          On the other hand, scientists publishing bogus results in legitimate, non-paper-mill journals, whether intentionally (by actually falsifying data) or unintentionally (as a result of statistical effects of what is researched and what is published), causes real harm, because people believe those results. Unfortunately, the pressures that cause this (publishing papers quickly, getting publishable results, etc.) exist everywhere; they are definitely not limited to China, nor did they originate there.

                                                          • DiscourseFan 4 hours ago

                                                            This is what happens when Silicon Valley execs, trying to make their employees more replaceable, call for more STEM education; suddenly, tons of funding and institutional resources go into STEM research with no real reason, motivation, or material for it. It's like a gerbil wheel: once you get on the ride, once you get tricked into becoming a "scientist" just because a few billionaires wanted slightly thicker margins, there's no getting off. Bullshit your way through undergraduate education, bullshit your way through a PhD; finally, if you're good enough at making up statistics, you get a job training a whole host of other bullshitters to ride the gravy train.

                                                            • aleph_minus_one 3 hours ago

                                                              > tons of funding and institutional resources go into STEM research with no real reason or motivation or material for this research.

                                                              I do believe there is an insane number of (STEM) questions for which there are very good reasons to do research; much, much more than is currently done.

                                                              ---

                                                              And by the way:

                                                              > This is what happens when Silicon Valley execs, trying to make their employees more replaceable, call for more STEM education

                                                              More STEM education does not make the employees more replaceable. The reason why the Silicon Valley execs call for more STEM education is rather that

                                                              - they want to save money training the employees,

                                                              - they want to save money doing research (let rather the taxpayer pay for the research).

                                                              • randomdata 2 hours ago

                                                                > - they want to save money training the employees,

                                                                So what you're saying is that they push for STEM education to make their employees more replaceable...?

                                                          • highcountess 3 hours ago

                                                            I’ve been in a meeting with government research officials where the director of the primary global institution in that field described how, when she does research and writes papers, she first draws the graph she needs to support the point she is trying to make, and then goes looking for data to create that graph.

                                                            Maybe I’m missing something, but I do not believe that is the way it is supposed to go. Btw, she has a PhD and failed upward to a global scale.

                                                            I’ve been meaning to find out if there are any open tools to evaluate someone’s dissertation.

                                                            It was equal parts stunning and a bit traumatizing to me, considering I still remember it as if it had happened earlier today. I think what surprised me too was her open admission of it, even with external parties present.

                                                            • randomdata 2 hours ago

                                                              So she establishes a hypothesis (draws a graph or picks a point to make) and then tests it through experimentation (looks for data to support the hypothesis)? Isn't that just the scientific method worded another way?

                                                              • elashri an hour ago

                                                                Wait until the GP learns how scientists generate Monte-Carlo (MC) simulation data to see what a positive result looks like, and then do meta-analysis on both real data and MC.
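                                                                For anyone unfamiliar with the idea, here is a minimal sketch of how a null Monte-Carlo works (names and parameters are mine, purely illustrative): generate data with no real effect built in, then count how often a significance test "finds" one anyway.

```python
import math
import random

def welch_t(a, b):
    # Welch's t statistic for two independent samples
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def false_positive_rate(trials=2000, n=50, crit=1.96, seed=0):
    # Simulate experiments where the null hypothesis is TRUE by
    # construction, then count how often the test still rejects it.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]  # same distribution as a
        if abs(welch_t(a, b)) > crit:
            hits += 1
    return hits / trials

print(false_positive_rate())  # hovers around 0.05, as alpha = 0.05 promises
```

                                                                Comparing real results against a reference distribution built this way is routine, not scandalous: you need to know what chance alone produces before you can claim an effect.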

                                                          • md224 4 hours ago

                                                            Something that continues to puzzle me: how do molecular biologists manage to come up with such mindbogglingly complex diagrams of metabolic pathways in the midst of a replication crisis? Is our understanding of biology just a giant house of cards or is there something about the topic that allows for more robust investigation?

                                                            • youainti 6 hours ago

                                                              Please note the PubPeer comments discussing that follow-up research shows about 15% is wrong, not the 5% anticipated.

                                                              https://pubpeer.com/publications/14B6D332F814462D2673B6E9EF9...

                                                              • smeej 4 hours ago

                                                                This kind of report always raises the question for me of what the existing system's goals are. I think people assume that "new, reliable knowledge" is among the goals, but I don't see that the incentives align toward that goal, so I don't know that that's actually among them.

                                                                Does the world really want/need such a system? (The answer seems obvious to me, but not above question.) If so, how could it be designed? What incentives would it need? What conflicting interests would need to be disincentivized?

                                                                I think it's been pretty evident for a long time that the "peer-reviewed publications system" doesn't produce the results people think it should. I just don't hear anybody really thinking through the systems involved to try to invent one that would.

                                                                • motohagiography 6 hours ago

                                                                  i wonder if science could benefit from publishing using pseudonyms the way software has. if it's any good, people will use it, the reputations will be made by the quality of contributions alone, it makes fraud expensive and mostly not worth it, etc.

                                                                  • wwweston 6 hours ago

                                                                    People have uses for conclusions that sometimes don't have anything to do with their validity.

                                                                    So while "if it's any good, people will use it" is true and quality contributions will be useful, the converse is not true: the use or reach of published work may be only tenuously connected to whether it's good.

                                                                    Reputation signals like credentials and authority have their limits/noise, but bring some extra signal to the situation.

                                                                    • motohagiography 4 hours ago

                                                                      what's missing from this paper is a probability using its own model that it too is false. counter to the headline, it implies that by its own probable falsehood, most published research is in fact true.

                                                                      I admit to missing the joke in the first reading.

                                                                      pseudonyms may prevent the abuse of invalid papers by removing the ability of the authors to front institutional reputations for partisan claims.

                                                                      the movement for science and data to drive policy outside of their domains sounds nice until you find that the science and data are irreproducible, and the institutions have become laundering vehicles for debased opinions that wash the hands of policymakers. as though the potential for abuse has become the value.

                                                                      maybe it's a rarefied kind of funny, but the kernel of truth it reveals is that it could be time to start using pseudonyms in some disciplines to make the axis of policymakers and academics more honest.

                                                                  • withinboredom 6 hours ago

                                                                    I've implemented several things from computer science papers in my career now, mostly related to database stuff. They are mostly terribly wrong or show the exact OPPOSITE of what they claim in the paper. It's so frustrating. Occasionally they even offer the code used to write the paper, and it is missing entire features they claim are integral for it to function properly, to the point that I wonder how they even came up with the results they did.

                                                                    My favorite example was a huge paper that was almost entirely mathematics-based. It wasn't until you implemented everything that you would realize it just didn't even make any sense. Then, when you read between the lines, you even saw their acknowledgement of that fact in the conclusion. Clever dude.

                                                                    Anyway, I have very little faith in academic papers; at least when it comes to computer science. Of all the things out there, it is just code. It isn't hard to write and verify what you purport (usually takes less than a week to write the code), so I have no idea what the peer reviews actually do. As a peer in the industry, I would reject so many papers by this point.

                                                                    And don't even get me started on the (now professor) authors I email with questions, to see if I just implemented it wrong or whatever, who just never fucking reply.

                                                                    • jltsiren 5 hours ago

                                                                      This is a common failure mode when people outside academic CS read CS papers. They take the papers too literally.

                                                                      Computer science studies computation as an abstract concept. The work may be motivated by what happens in the industry, but it's not supposed to produce anything immediately applicable. Papers may include fake justifications and fake applications, because populist politicians decided long ago that all publicly funded research must have practical real-world applications. But you should not take them at face value.

                                                                      Academic CS values abstract results over concrete results, because real-world systems change too rapidly. Real-world results tend to become obsolete too quickly to be relevant on the time scales academia is supposed to operate on.

                                                                      If you are not in academic CS, you should be careful when reading the papers that you understand the context. Most of the time, you are not in the target audience. Even when there is something relevant in the paper, it's probably not the main result, but an idea related to it. And if you start investigating where that idea came from, it probably builds on many earlier results that seemed obscure and practically irrelevant on their own.

                                                                      Peer reviewers usually spend a few hours on a single review (though there is a lot of variation between fields). A week would be so expensive that most established academics would have to stop teaching and doing research and become full-time reviewers.

                                                                      • Lerc 5 hours ago

                                                                        For papers with code, I have seen a tendency to consider the code, not the paper, to be the ground truth. If the code works, then it doesn't matter what the paper says, the information is there.

                                                                        If the code doesn't work, it seems like a red flag.

                                                                        It's not an advantage that can be applied to biology or physics, but at least computer science catches a break here.

                                                                        • reasonableklout 6 hours ago

                                                                          Wow, sounds awful. Help the rest of us out - what was the huge paper that didn't work or was actively misleading?

                                                                          • withinboredom 6 hours ago

                                                                            I'd rather not, for obvious reasons. The less obvious reason is that I don't remember the title/author of the paper. It was back in 2016/17 when I was working on a temporal database project at work and was searching literature for temporal query syntax though.

                                                                          • meling 4 hours ago

                                                                            If it is as bad as you claim, it would be interesting if you could back this up with a falsification report for the papers in question.

                                                                            • DaoVeles 5 hours ago

                                                                              It is also frustrating when a paper's summary says one thing, and you pay for the full thing only to find it is the complete opposite of the claims. Waste of time and money, bleh!

                                                                            • DaoVeles 5 hours ago

                                                                              It has been said that "Publish or Perish" would make a good tombstone epitaph for a lot of modern science.

                                                                              I speak to a lot of people in various science fields and generally they are some of the heaviest drinkers I know, simply because of the system they have been forced into. They want to do good but are railroaded into this nonsense for fear of losing their livelihood.

                                                                              Like those that are trying to progress our treatment of mental health but have ended up almost exclusively in the biochemicals space because that is where the money is even though that is not the only path. It is a real shame.

                                                                              Also other heavy drinkers are the ecologists and climatologists, for good reason. They can see the road ahead and it is bleak. They hope they are wrong.

                                                                              • tdba 3 hours ago

                                                                                One study tried to replicate 100 psychology studies and only 36% attained significance.

                                                                                https://osf.io/ezcuj/wiki/home/

                                                                                • skybrian 6 hours ago

                                                                                  (2005). I wonder what's changed?

                                                                                • Animats 5 hours ago

                                                                                  How broad a range is this result supposed to cover? It seems to be mostly applicable to areas where data is too close to the noise threshold. Some phenomena are like that, and some are not.

                                                                                  "If your experiment needs statistics, you ought to have done a better experiment" - Rutherford

                                                                                  • meling 4 hours ago

                                                                                    I only read the abstract; “Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true.”

                                                                                    True vs false seems like a very crude metric, no?

                                                                                    Perhaps this paper’s research claim is also false.

                                                                                    • ninetyninenine 3 hours ago

                                                                                      So whenever someone gives me a detailed argument with cited sources, I can show them this and render the truth an unobtainable objective.

                                                                                      • Daub 3 hours ago

                                                                                        From my experience, my main criticism of research in the field of computer vision is that most of it is 'meh'. In a university that focused on security research, I saw mountains of research into detection/recognition, yet most of it offered no more than slightly different ways of doing the same old thing.

                                                                                        I also saw: a head of design school insisting that they and their spouse were credited on all student and staff movies, the same person insisting that massive amounts of school cash be spent promoting their solo exhibition that no one other than students attended, a chair of research who insisted they were given an authorship role on all published output in the school, labs being instituted and teaching hires brought in to support a senior admin's research interests (despite them not having any published output in this area), research ideas stolen from undergrad students and given to PhD students... I could go on all day.

                                                                                        If anyone is interested in how things got like this, you might start with Margaret Thatcher. It was she who was the first to insist that funding of universities be tied to research. Given the state of British research in those days it was a reasonable decision, but it produced a climate where quantity is valued over quality and true 'impact'.

                                                                                        • blackeyeblitzar 6 hours ago

                                                                                          It’s a matter of incentives. Everyone who wants a PhD has to publish and before that they need to produce findings that align with the values of their professors. These bad incentives combined with rampant statistical errors lead to bad findings. We need to stop putting “studies” on a pedestal.

                                                                                          • iskander 3 hours ago

                                                                                            I think unpopular to mention here but John Ioannidis did a really weird turn in his career and published some atrociously non-rigorous Covid research that falls squarely in the cross-hairs of "why...research findings are false".

                                                                                            • hofo 3 hours ago

                                                                                              Oh the irony

                                                                                              • carabiner 6 hours ago

                                                                                                This only applies to life sciences, social sciences right? Or are most papers in computer science or mechanical engineering also false?

                                                                                                • thatguysaguy 6 hours ago

                                                                                                  It's very bad in CS as well. See e.g.: https://arxiv.org/abs/1807.03341

                                                                                                  IIRC there was also a paper analyzing how often results in some NLP conference held up when a different random seed or hyperparameters were used. It was quite depressing.

                                                                                                • titanomachy 6 hours ago

                                                                                                  2022

                                                                                                  • angry_octet 5 hours ago

                                                                                                    ... including the junk pushed by Ioannidis. He completely trashed his credibility during COVID.

                                                                                                    • pessimizer 4 hours ago

                                                                                                      By being less wrong than almost everyone else. Since everyone else was wrong together they shunned him (as science dictates), and now agree to not talk about how wrong they were.

                                                                                                      • angry_octet 2 hours ago

                                                                                                        He used his reputation and statistical expertise to mislead the world as to the true prevalence (COVID infection rate) and supported the fantasies of Bhattacharya, Kulldorff and Gupta. It is hard to estimate what effect his misinformation had on COVID control measures, and there was no shortage of attention seeking clowns, but he stepped up to the plate and he can take credit for some of the millions of deaths. It was scientific misconduct but his position shields him from consequences.

                                                                                                    • ape4 6 hours ago

                                                                                                      So is this paper false too? .. infinite recursion...

                                                                                                      • wccrawford 6 hours ago

                                                                                                        Most probably.

                                                                                                      • giantg2 5 hours ago

                                                                                                        This must be a satire piece.

                                                                                                        It talks about things like power, reproducibility, etc., which is fine. A minority of papers have mathematical errors. What it fails to examine is what counts as "false". Their results may be valid for what they studied. Future studies may have new and different findings. You may have studies that seem to conflict with each other due to differences in definitions (eg what constitutes a "child", 12yo or 24yo?) or the nuance in perspective applied to the policies they are investigating (eg aggregate vs adjusted gender wage gap).

                                                                                                        It's about how you use them - "Research suggests..." or "We recommend further studies of larger size", etc. It's a tautology that if you misapply them they will be false a majority of the time.

                                                                                                        • austinjp 5 hours ago

                                                                                                          It's not satire. Ioannidis has a long history of pointing out flaws in scientific processes.

                                                                                                          (Edit: spelling.)

                                                                                                          • dcl 5 hours ago

                                                                                                            What part about the genuine statistical arguments made in the article would make you believe it is satire?

                                                                                                            I've found the reaction to this article can be pretty intense. We read this in a journal club many years ago and one of the mathematicians who was kind of new to the idea that research papers (in other fields) didn't more or less represent 'truth' said this article was _dangerous_.

                                                                                                            • giantg2 4 hours ago

                                                                                                              It's ironic that the paper has a correction. The title and abstract sound highly editorial, especially given that "false" is never defined. They talk about pre-study odds being an enhancement, but I didn't see them include those in their own paper. The fact that a study is small or the impact is minor doesn't make the results false, especially when these limitations are called out and further research is requested. You could even have a case study with n=1 be valid if the conclusion is properly defined. The main problem is people generalizing from things that don't have that level of support.
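                                                                                                              For what it's worth, the paper's pre-study odds argument is small enough to check yourself. A minimal sketch of its positive-predictive-value formula, leaving out the bias term (the function and variable names here are mine):

```python
def ppv(R, alpha=0.05, power=0.8):
    # Positive predictive value of a "significant" finding:
    # P(relationship is real | test came out significant).
    # R is the pre-study odds that a probed relationship is real.
    beta = 1 - power  # type II error rate
    return ((1 - beta) * R) / (R - beta * R + alpha)

print(round(ppv(1.0), 2))   # well-motivated hypothesis (1:1 odds) -> 0.94
print(round(ppv(0.01), 2))  # exploratory search (1:100 odds) -> 0.14
```

                                                                                                              At even pre-study odds a significant result is probably true; at 1:100 odds most significant results are false positives, which is where the headline claim comes from.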

                                                                                                          • marcosdumay 6 hours ago

                                                                                                            Yeah, when you try new things, you often get them wrong.

                                                                                                            Why do we expect most published results to be true?

                                                                                                            • bluefirebrand 6 hours ago

                                                                                                              Because people use published results to justify all sorts of government policy, business activity, social programs, and such.

                                                                                                              If we cannot trust that results of research are true, then how can we justify using them to make any kind of decisions in society?

                                                                                                              "Believe the science", "Trust the experts" etc sort of falls flat if this stuff is all based on shaky research

                                                                                                              • wredue 4 hours ago

                                                                                                                If government used science to back up policy, we would most definitely not be having a huge portion of the problems we currently have.

                                                                                                                • thaumasiotes 5 hours ago

                                                                                                                  > people use published results to justify all sorts of government policy, business activity, social programs, and such.

                                                                                                                  That would be a reason to expect those results to be false, not a reason to expect them to be true.

                                                                                                                  • marcosdumay 6 hours ago

                                                                                                                    > If we cannot trust that results of research are true, then how can we justify using them to make any kind of decisions in society?

                                                                                                                    Well, don't.

                                                                                                                    Make your decisions based on replicated results. Stop hyping single studies.

                                                                                                                    • XorNot 5 hours ago

                                                                                                                      > Stop hyping single studies.

                                                                                                                      This right here really. The reason people go "oh well science changes every week" is because what happens is the media writes this headline: "<Thing> shown to do <effect> in brand new study!" and then includes a bunch of text which implies it works great...and one or two sentences, out of context, from the lead researcher behind it saying "yes I think this is a very interesting result".

                                                                                                                      They omit all the actual important details like sample sizes, demographics, history of the field or where the result sits in terms of the field.

                                                                                                                      • adamrezich 5 hours ago

                                                                                                                        After decades upon decades of teaching Western society to “Trust The Science”—where “Science” means “published academic research papers”—you can't unteach society from thinking this way with a simple four-word appeal to logic.

                                                                                                                        The damage has already long since been done. It's great that people are starting to realize the mistake, but it's going to take a lot more work than just saying “stop hyping single studies” in this comments thread to radically alter the status quo.

                                                                                                                        I once knew a guy who ended his friendship of many years with me over an argument about “safe drug use sites”, or whatever they're called—those places where drug addicts can go to “safely” do drugs with medical staff nearby in case they inadvertently overdose. Dude was of the belief that these initiatives were unequivocally good, and that any common-sense thinking along the lines of, “hey, isn't that only going to encourage further self-destructive behavior in vulnerable members of the populace?” could be countered by pointing to a handful of studies that supposedly showed that these “safe shoot-up sites” had been Proven To Be Unequivocally Good, Actually.

                                                                                                                        I took a look at one of these published academic research “studies”—said research was conducted by finding local drug dealers and asking them, before and after a “safe shoot-up site” was constructed, how their business was doing. The answer they got was, “more or less the same”—so the paper concluded (by means of a rather remarkable extrapolation, if I do say so myself) that these “safe shoot-up sites” were Provably Objectively Good For Society.

                                                                                                                        After pointing this out to my friend of many years, he informed me that I had apparently become some flavor of far-right Nazi or whatever, and blocked me on all social media platforms, never speaking to me again.

                                                                                                                        You're not going to get people like him to see reason by just saying “stop hyping single studies” and calling it a day. Our entire culture revolves around placing a rather unreasonable amount of completely blind faith in the veracity of published academic research findings.

                                                                                                                        • lemmsjid 3 hours ago

                                                                                                                          I was intrigued and took a quick look at the top studies on this subject, and the metrics used are things like relative overdose deaths in an area, crime statistics, and usage of treatment programs. They say that by virtue of a number of epidemiological metrics, safe consumption sites appear to be associated with harm reduction in terms of overdoses, while not increasing crime stats. I don’t see outsized claims of objective truth being made, more of the standard, “here’s how we got the numbers, here’s the numbers, they appear to point in this direction.”

                                                                                                                          I’m not doubting your claim, but I’m wondering how that very weird paper you’re citing bubbles up to the top when there are some very middle-of-the-road meta-analyses that don’t make outsized claims like access to objective truth.

                                                                                                                          • adamrezich 2 hours ago

                                                                                                                            It's not that the paper itself made the claim of having access to objective Truth, it's that papers like these make conclusions, and these conclusions get taken in aggregate to advance various agendas, and the whole premise is treated (in aggregate) as being functionally identical to building a rocket based on conclusions reached by mathematics and physics research papers—because both situations involve making decisions based upon “scientific research”, so in both situations you can justify your actions by pointing to “Science”.

                                                                                                                          • blackbear_ 5 hours ago

                                                                                                                            So what do you suggest?

                                                                                                                            • mistermann 4 hours ago

                                                                                                                              Philosophy has all sorts of different ways to study this complex, multifaceted problem. Too bad it got kicked to the curb by science and is now mostly laughed at.

                                                                                                                              As ye sow, so shall ye reap, IRL maybe.

                                                                                                                              • adamrezich 4 hours ago

                                                                                                                                No idea—all I know how to do is recognize patterns and program computers.

                                                                                                                                But admitting to the existence of a problem is the first step toward fixing it, and, judging by the downvotes on various comments on this story here, we still have a ways to go before the existence of the problem is commonly-accepted.

                                                                                                                                • marcosdumay an hour ago

                                                                                                                                  You are treading into one of those areas that seem to replicate very well.

                                                                                                                                  The difficulty or risk of using drugs does not appear to be a bottleneck on the amount of it people use. This probably does not hold all over the world, but I'm not aware of anybody actually finding an exception.

                                                                                                                        • ekianjo 6 hours ago

                                                                                                                          because people believe that peer review improves things, but in fact it doesn't really. It's more of a stamping process

                                                                                                                          • elashri 6 hours ago

                                                                                                                            Yes, it's a misconception: many people think that peer review involves some sort of verification or replication, which is not true.

                                                                                                                            I would partly blame mainstream media for this and how they report on research without emphasizing this nature. Mainstream media is also not interested in reporting on progress, but likes catchy headlines/findings.

                                                                                                                        • debacle 6 hours ago

                                                                                                                          Most? Really?

                                                                                                                          • 23B1 5 hours ago

                                                                                                                            Imagine if tech billionaires, instead of building dickships and buying single-family homes, decided to truly invest in humanity by realigning incentives in science.

                                                                                                                            • joycesticks 5 hours ago

                                                                                                                              Damn people are getting pretty good at manifesting these days

                                                                                                                              Check out ResearchHub[1], it's a company founded by a tech billionaire that's trying to realign incentives in science

                                                                                                                              [1] - https://www.researchhub.com/

                                                                                                                              • 23B1 3 hours ago

                                                                                                                                Heh, thanks.

                                                                                                                            • breck 6 hours ago

                                                                                                                              On a livestream the other day, Stephen Wolfram said he stopped publishing through academic journals in the 1980s because he found it far more efficient to just put stuff online. (And his blog is incredible: https://writings.stephenwolfram.com/all-by-date/)

                                                                                                                              A genius who figured out that academic publishing had gone to shit decades ahead of everyone else.

                                                                                                                              P.S. We built the future of academic publishing, and it's an order of magnitude better than anything else out there.

                                                                                                                              • wahern 5 hours ago

                                                                                                                                He created his own peer reviewed academic journal and founded a corporation to publish it: https://en.wikipedia.org/wiki/Complex_Systems_(journal) That's a little different than just putting stuff online.

                                                                                                                                • paulpauper 3 hours ago

                                                                                                                                  But it's not a reputable journal at all. An impact factor of 1.2 makes it close to useless.

                                                                                                                                  • breck 4 hours ago

                                                                                                                                    Oh wow, that's amazing. I missed that.

                                                                                                                                    This is incredible: https://www.complex-systems.com/archives/

                                                                                                                                    "Submissions for Complex Systems journal may be made by webform or email. There are no publication charges. Papers submitted to Complex Systems should present results in a manner accessible to a wide readership."

                                                                                                                                    So well done. Bravo.

                                                                                                                                  • DiscourseFan 4 hours ago

                                                                                                                                    If we were all Stephen Wolfram, perhaps that would be possible. But very few academics have either the notoriety or the funds to self-publish and ensure their work isn't stolen in their highly competitive industry.

                                                                                                                                    There is a lot of academic work that is very obscure and only becomes important later, sometimes decades later, maybe even centuries, to someone else doing equally obscure work. But it always goes somewhere, and the goal is not to "move fast and break things," but to create bodies of scholarship that last far beyond any specific capitalist industry or company.

                                                                                                                                    • breck 3 hours ago

                                                                                                                                      > ensure their work isn't stolen in their highly competitive industry.

                                                                                                                                      If you published your work online, backed by git commit hashes, on a free public service like GitHub, how could someone steal it?

                                                                                                                                      If you are an academic and don't know git, why can't you pick up "Version Control with Git" from your library or buy a used copy for $5 and spend a couple days to learn it?
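                                                                                                                                      (A minimal sketch of the hash argument, assuming nothing beyond stock git and a hypothetical author identity: a commit hash covers the content, message, timestamp, and entire prior history, so a commit pushed to any public host is tamper-evident evidence of what existed when.)

```shell
# Sketch: git's content addressing gives a tamper-evident record.
# The author name/email below are placeholders, not a real identity.
git init -q provenance-demo
cd provenance-demo
git config user.name "Example Author"
git config user.email "author@example.com"
echo "draft of my results" > paper.md
git add paper.md
git commit -q -m "initial draft"
git rev-parse HEAD   # prints the 40-hex-character hash of this commit
```
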

                                                                                                                                      > the goal is not to "move fast and break things,"

                                                                                                                                      Who said that was the goal?

                                                                                                                                      Why would you want to remain wrong longer?

                                                                                                                                      If you want to move slower, why not take slower walks in the woods versus adding unnecessary bureaucracy?

                                                                                                                                    • jordigh 5 hours ago

                                                                                                                                      Genius? The one who came up with a new kind of science?

                                                                                                                                      • breck 5 hours ago

                                                                                                                                        Do you think judging someone by your least favorite work of theirs is a good strategy?

                                                                                                                                        Do you also say, "Newton a genius? The one who tried to turn lead into gold?"

                                                                                                                                  • ants_everywhere 5 hours ago

                                                                                                                                    This is a classic and important paper in the field of metascience. There are other great papers predating this one, but this one is widely known.

                                                                                                                                    Unfortunately the author John Ioannidis turned out to be a Covid conspiracy theorist, which has significantly affected his reputation as an impartial seeker of truth in publication.

                                                                                                                                    • kelipso 5 hours ago

                                                                                                                                      Ha, how meta is this comment, because the obvious inference one makes from the title is "Why Most Published Research Findings on Covid Are False", and that goes against the science politics. If only he had avoided the topic of Covid entirely, then he would be well regarded.

                                                                                                                                      • ants_everywhere 5 hours ago

                                                                                                                                        It is pretty meta I guess.

                                                                                                                                        > Why Most Published Research Findings on Covid Are False

                                                                                                                                        Well, that's why there was so much focus on replication, multiple data sources and meta-analyses. The focus was there because the assumption is each study is flawed and those tools help extract better signal from the noise of individual studies.

                                                                                                                                        > and that goes against the science politics

                                                                                                                                        I don't think I follow you here. Are you referring to the anti-science populism? That's really the only science politics I'm aware of now that creationism and climate skepticism have been firmly put to rest.

                                                                                                                                        > If only he had avoided the topic of Covid entirely, then he would be well regarded.

                                                                                                                                        I think it's more that his predictions were bad and poorly reasoned, and he chose to defend them on right-wing media outlets instead of making his case among scientists.

                                                                                                                                        He's not the first well-regarded scientist to go off on a politically-fueled side quest later in his career. Kary Mullis is a famous example.

                                                                                                                                        • kelipso 4 hours ago

                                                                                                                                          Ha ha, I suppose as long as he goes on left-wing media outlets and makes bad and poorly reasoned predictions that follow left-wing politics, then he would be well regarded.

                                                                                                                                          And please, don't pretend there is no left-wing-aligned science politics that is as much based on science as flat earthers. I assume you haven't been hibernating during the Covid times. All the doctors who did exactly that are doing fine with regard to their reputations.

                                                                                                                                          • ants_everywhere 4 hours ago

                                                                                                                                            I think you're fighting a culture war I'm out of the loop on.

                                                                                                                                            • pessimizer 4 hours ago

                                                                                                                                              Calling Ioannidis a "covid conspiracy theorist" is carrying the flag at the head of the culture war. Playing dumb doesn't make you look above the discussion, it makes you look dishonest.

                                                                                                                                              • ants_everywhere 2 hours ago

                                                                                                                                                I am a science dude. I read mostly science and talk to other science people. That's how I got my covid info. I wasn't on social media until recently. I have no idea what fringe political groups were into during the covid era. I also have no idea what flat earthers have to do with it.

                                                                                                                                      • elzbardico 5 hours ago

                                                                                                                                        Your comment reminded me to listen to 2112 from Rush. Thanks.

                                                                                                                                        • dekhn 5 hours ago

                                                                                                                                          Can you point to his statements that were conspiracy theory?

                                                                                                                                          I know about Barrington and many of his other claims, but I don't recall him actually saying anything that I would classify as conspiracy theory. Certainly in my world, a credentialled epidemiologist questioning the accuracy of government statistics during a world health crisis, and suggesting that perhaps our strategy could be different, is not conspiracy theory.

                                                                                                                                          • tripletao 4 hours ago

                                                                                                                                            He published an estimate of SARS-CoV-2 antibody seroprevalence in Santa Clara county, claiming a signal from a positivity rate that was within the 95% CI for the false-positive rate for the test. Recruitment was also highly non-random.

                                                                                                                                            https://statmodeling.stat.columbia.edu/2020/04/19/fatal-flaw...

                                                                                                                                            Such careless use of statistics is hardly uncommon; but it's funny to see that he succumbed too, perhaps blinded by the same factors he identifies in this paper.

                                                                                                                                            Beyond that, he sometimes advocated for a less restrictive response on the basis of predictions (of deaths, infections, etc.) that turned out to be incorrect. I don't think that's a conspiracy theory, though. Are the scientists who advocated for school closures now "conspiracy theorists" too, because they failed to predict the learning loss and social harm we now observe in those children? Any pandemic response comes with immense harms, which are near-impossible to predict or even articulate fully, let alone trade off in an unquestionably optimal way.

                                                                                                                                            • EnigmaFlare 4 hours ago

                                                                                                                                              During covid, people got so hyped up about trusting authorities, they threw science out the window. Well I guess they never understood science in the first place but wanted to shame anyone who disagreed with or even questioned whatever arbitrary ideas their government proposed. It was disgusting, and those people are still walking around among us ready to damage society next time some emergency happens.

                                                                                                                                              • ants_everywhere 5 hours ago

                                                                                                                                                > Certainly in my world, a credentialled epidemiologist questioning the accuracy of government statistics during a world health crisis, and suggestion that perhaps our strategy could be different, is not conspiracy theory.

                                                                                                                                                I fully agree. (Well, with some caveats. I think credentials matters less than facts. And I think epidemiology is still in its infancy, so I personally don't put much faith in any single epidemiologist.)

                                                                                                                                                Maybe conspiracy theorist is the wrong term. What he did was show a very political concern with public policy (especially IIUC his opposition to lockdowns) and very little concern about the quality of his research or the people it affected.

                                                                                                                                                This article seems pretty decent at containing details: https://www.buzzfeednews.com/article/stephaniemlee/ioannidis...

                                                                                                                                                You mention Barrington, from the Wikipedia article https://en.wikipedia.org/wiki/Great_Barrington_Declaration

                                                                                                                                                > The World Health Organization (WHO) and numerous academic and public-health bodies stated that the strategy would be dangerous and lacked a sound scientific basis.

                                                                                                                                                So I guess maybe less "conspiracy theory" and more "recklessly dangerous" or "abandonment of the Hippocratic oath".

                                                                                                                                                • dekhn 4 hours ago

                                                                                                                                                  I don't think anything he did or said was recklessly dangerous. In fact I think he believes he was acting in the US's best interest. I would be curious what the outcome would be if we had followed his approaches (which evolved during the course of the epidemic). I think he would have been much more successful if he had worked the back channels and never been so public on Twitter.

                                                                                                                                                  I saw a lot of "epidemiological immune system" activity during COVID: if you didn't toe a specific line, the larger community would attack you, right or wrong. My guess is that this is mainly from historical experience with vaccines and large-scale disease outbreaks, where having a simple, consistent message that did not freak out the population is considered more important than being absolutely technically correct.

                                                                                                                                                  • ants_everywhere 4 hours ago

                                                                                                                                                    We essentially did follow his policy and we have a sense of the impact.

                                                                                                                                                    The Lancet estimated that about 40% of US covid deaths could have been avoided if the administration had better policies. That's a bit over 400,000 deaths. That's about the same number of Americans lost during WWII.

                                                                                                                                                    Not all of that can be directly attributed to John Ioannidis's advocacy against lockdowns, but it at least gives us a sense of how big a blunder it was.

                                                                                                                                              • llm_trw 5 hours ago

                                                                                                                                                >Unfortunately the author John Ioannidis turned out to be a Covid conspiracy theorist, which has significantly affected his reputation as an impartial seeker of truth in publication.

                                                                                                                                                Ad hominem attacks against ideas can safely be ignored.

                                                                                                                                                • sunshinesnacks 2 hours ago

                                                                                                                                                  Ioannidis published a journal article with what many considered an ad hominem attack against a graduate student. In his (partial) defense, he later withdrew that portion of the paper.

                                                                                                                                                  • ants_everywhere 5 hours ago

                                                                                                                                                    (1) I don't think you can have an ad hominem against an idea?

                                                                                                                                                    (2) I'm not opposed to any ideas in this paper. I think the paper stands on its own merits.

                                                                                                                                                    • llm_trw 4 hours ago

                                                                                                                                                      >(1) I don't think you can have an ad hominem against an idea?

                                                                                                                                                      You've tried your best.

                                                                                                                                                      >(2) I'm not opposed to any ideas in this paper. I think the paper stands on its own merits.

                                                                                                                                                      You're just preemptively setting a limit to how much thinking we can do. After all you made a post in this very thread:

                                                                                                                                                      >>So I guess maybe less "conspiracy theory" and more "recklessly dangerous" or "abandonment of the Hippocratic oath".

                                                                                                                                                      Which is odd, since medical research has no Hippocratic oath or recklessly dangerous caveats. After all, they were doing gain-of-function research on coronaviruses in the very city where covid-19 started. Unless geography is now a reckless pseudoscience which we must censor for the good of all.

                                                                                                                                                      • ants_everywhere 4 hours ago

                                                                                                                                                        I honestly can't follow what you're trying to say.

                                                                                                                                                        But John Ioannidis is a physician and has served in a number of medical organizations. Even non-medical researchers are bound by IRBs for research on humans and, more generally, by all sorts of codified ethical standards.

                                                                                                                                                    • adamrezich 5 hours ago

                                                                                                                                                      Judging by the downvotes on your post, ad hominem attacks against ideas are A Good Thing, Actually—I'm sure there's a published academic research study somewhere that quite conclusively proves this to be the case.

                                                                                                                                                    • golergka 5 hours ago

                                                                                                                                                      Is he a conspiracy theorist? I googled it, and here's the interview where he explains his views on COVID: https://www.medscape.com/viewarticle/933977?form=fpf Nothing in there looks like a conspiracy theory.

                                                                                                                                                      • ants_everywhere 4 hours ago

                                                                                                                                                        Maybe conspiracy theorist is the wrong term.

                                                                                                                                                        It's more accurate to say that his ideas were dangerous and fringe and unsupported by the science. And while he was derelict in the science, he was very active promoting his opposition to lock downs to the White House and to conservative media.

                                                                                                                                                        It would be more accurate to say that he heavily fueled the conspiracy theorists rather than he was one himself.