• cperciva 7 days ago

    In 2005, my paper on breaking RSA by observing a single private-key operation from a different hyperthread sharing the same L1 cache -- literally the first publication of a cryptographic attack exploiting shared caches -- was rejected from the cryptology preprint archive on the grounds that "it was about CPU architecture, not cryptography". Rejection from journals is like rejection from VCs -- it happens all the time and often not for any good reason.
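
    For the curious, here is a minimal prime-and-probe sketch in C. To be clear, this is not the code from the paper, and the cache-geometry constants are placeholder assumptions; it only illustrates the basic primitive of timing per-set reloads to see which lines a co-resident hyperthread evicted:

        /* Minimal prime+probe sketch (illustrative only, not the paper's code).
           Assumes x86 rdtsc; LINE/WAYS/SETS are placeholder cache geometry. */
        #include <stdint.h>
        #include <stdio.h>

        #define LINE 64   /* cache line size in bytes (assumed) */
        #define WAYS 8    /* L1 associativity (assumed)         */
        #define SETS 64   /* number of L1 sets (assumed)        */

        static uint8_t buf[SETS * WAYS * LINE];

        static inline uint64_t rdtsc(void) {
            uint32_t lo, hi;
            __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
            return ((uint64_t)hi << 32) | lo;
        }

        int main(void) {
            volatile uint8_t sink = 0;

            /* Prime: touch one byte per cache line so our data fills every set. */
            for (unsigned i = 0; i < sizeof buf; i += LINE)
                sink ^= buf[i];

            /* ...victim (e.g. an RSA private-key operation) runs on the
               sibling hyperthread here... */

            /* Probe: re-time each set; sets the victim touched evicted our
               lines, so they read back slower. */
            for (unsigned set = 0; set < SETS; set++) {
                uint64_t t0 = rdtsc();
                for (unsigned way = 0; way < WAYS; way++)
                    sink ^= buf[(way * SETS + set) * LINE];
                printf("set %2u: %llu cycles\n", set,
                       (unsigned long long)(rdtsc() - t0));
            }
            return (int)sink;
        }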

    (That paper has now been cited 971 times according to Google Scholar, despite never appearing in a journal.)

    • fl4tul4 7 days ago

      The journal lost, as it would have increased their h-index and reputation significantly.

      • informal007 7 days ago

        Time is always a better evaluator than anyone in any journal

        • fliesskomma 6 days ago

          [Adding controversy] Q: If time is an "always better" evaluator, why do I see nobody out there writing about "compressed time"?

          regards...

        • davrosthedalek 7 days ago

          Is it on the arxiv? If not, please put it there.

          • ilya_m 7 days ago

            The paper is here: http://www.daemonology.net/hyperthreading-considered-harmful...

            As its author noted, the paper has done fine citation- and impact-wise.

            • cperciva 7 days ago

              Paper is here: https://www.daemonology.net/papers/cachemissing.pdf

              Your link is the website I put up for non-experts when I announced the issue.

              • davrosthedalek 7 days ago

                In this case, it's less about discoverability and more about long-term archival. Will daemonology.net continue to exist forever? Arxiv.org might also perish someday, but I am sure the community will make sure the data is preserved.

                • cperciva 7 days ago

                  I'm not too worried about that -- this paper is "mirrored" on hundreds of university websites since it's a common reference for graduate courses in computer security.

                  • ht_th 7 days ago

                    In my experience, once teachers retire or move on, or a course gets mothballed, it's only a matter of time before course websites disappear or become non-functional.

                    If the course website was even on the open web to begin with. If they're in some university content management system (CMS), chances are that access is limited to students and teachers of that university and the CMS gets "cleaned" regularly by removing old and "unused" content. Let alone what will happen when the CMS is replaced by another after a couple of years.

                    • pabs3 7 days ago

                      ArchiveTeam is trying to save some of that stuff to archive.org, obviously it can't get the non-public stuff though.

                      https://wiki.archiveteam.org/index.php/University_Web_Hostin...

                      • bigiain 6 days ago

                        I wonder if there's an Aaron Swartz of paywalled university course material out there? Someone (or a group) downloading, datahoarding, and sharing every collection of courses they can get that you need a university login to access?

          • jraph 7 days ago

            I was confused by the title because paper rejection is incredibly common in research, but that's the point and one of the goals is to fight imposter syndrome.

            It's a good initiative. Next step: everybody realizes that researchers are just random people like everybody else. Maybe that could kill any remaining imposter syndrome.

            A rejection, although common, is quite tough during your PhD though, even ignoring the imposter syndrome, because in a short time, you are expected to have a bunch of accepted papers, in prestigious publications if possible. It feels like a rejection slows you down, and the clock is still ticking. If we could kill some of this nefarious system, that'd be good as well.

            • arrowsmith 7 days ago

              It’s noteworthy because it’s from Terence Tao, regarded by many as the world’s greatest living mathematician.

              If you read the full post he’s making the exact same point as you: it’s common and normal to get a paper rejected even if you’re Terence Tao, so don’t treat a rejection like the end of the world.

              • motorest 7 days ago

                > It’s noteworthy because it’s from Terence Tao, regarded by many as the world’s greatest living mathematician.

                I think it's important to post a follow-up comment clarifying that papers are reviewed following a double blind peer review process. So who the author is shouldn't be a factor.

                Also, the author clarified that the paper was rejected on the grounds that the reviewer felt the topic wasn't a good fit for the journal. This has nothing to do with the quality of the paper, but with upholding editorial guidelines on the subject. Trying to file a document in the wrong section and being gently nudged to file it under another section hardly matches the definition of a rejection that leads authors to question their life choices.

                • ccppurcell 7 days ago

                  Just a quick point: double blind is not common for mathematics journals, at least in my area. Some TCS conferences have started it.

                  • infogulch 6 days ago

                    In many fields there are only a handful of researchers that would submit a paper in a given specialty, and a couple of them are the reviewers. At some point blinding is just futile.

                • vitus 7 days ago

                  > regarded by many as the world’s greatest living mathematician.

                  Oh?

                  Perelman comes to mind (as the only person who has been eligible to claim one of the Millennium Prizes), although he is no longer actively practicing math AFAIK. Of Abel Prize winners, Wiles proved Fermat's Last Theorem, and Szemerédi has made many number-theoretic and combinatorial contributions.

                  Recently deceased (past ~10 years) include figures such as John Nash, Grothendieck, and Conway.

                  Tao is definitely one of the most well-known mathematicians, and he's still got several more decades of accomplishments ahead of him, but I don't know that he rises to "greatest living mathematician" at this point.

                  That said, I do appreciate that he indicates that even his papers get rejected from time to time.

                  • xanderlewis 7 days ago

                    Having been a child prodigy somehow gives one fame (in the wider public consciousness) beyond anything one can achieve as an adult.

                    • vanderZwan 7 days ago

                      He's a child prodigy who didn't burn out; that does make him quite rare.

                      • xanderlewis 6 days ago

                        Well, true.

                    • kenjackson 6 days ago

                      He said “regarded by many”, which I think is probably an accurate statement.

                      • arrowsmith 5 days ago

                        I chose those words deliberately because I knew that if I phrased it too strongly then someone would reply to litigate the claim. Apparently I still phrased it too strongly, but hey, nothing's ever too pedantic for HN.

                      • flocciput 7 days ago

                        To add to your list, you can also find Richard Borcherds teaching math on YouTube.

                      • jraph 7 days ago

                        > It’s noteworthy because it’s from Terence Tao, regarded by many as the world’s greatest living mathematician.

                        I didn't know :-)

                        > If you read the full post he’s making the exact same point as you

                        Oh yeah, that's because I did read the full post and was summarizing. I should have made this clear.

                      • bisby 7 days ago

                        It's especially important coming from someone like Terence Tao. If one of the best and brightest mathematicians out there can get a paper declined, then it can happen to literally anyone.

                        • jonathan_landy 7 days ago

                          I guess it is nice to know that he is also not perfect. But it’s still the case that his accomplishments outshine my own, so my imposter syndrome remains intact.

                          • TheRealPomax 7 days ago

                            I'd counter the "like everybody": they're not. They spent a decade or more focused on honing their skills and deepening their knowledge to become experts in their subfield, and sometimes even entire fields. They are very much not random people like everybody in this context.

                            • 2-3-7-43-1807 7 days ago

                              terence tao is suffering from imposter syndrome? if anything, imposter syndrome is suffering from terence tao ... do you maybe not know who terence tao is?

                              • danielmarkbruce 7 days ago

                                It's Terence Tao trying to help others with imposter syndrome. It seems quite unlikely he himself would suffer from it...

                                • NooneAtAll3 7 days ago

                                  on the contrary, that's exactly what he states in the comment discussion below the thread

                                  having a higher reputation means a higher responsibility not to crush someone with it in the sub-fields where you aren't as proficient

                                  • danielmarkbruce 7 days ago

                                    yeah... he's telling a white lie of sorts... reread the comment. That doesn't sound like someone lacking self-confidence. "The other members collectively..." He's basically saying "get the world-leading experts in some area of math that I'm sort of interested in into a room, and between them they'll know more than I do myself". Lol. And that's happened "several times".

                                    I'm sure he's a genuinely nice, friendly person trying to do the right thing. But he is also likely confident as hell and never felt like an imposter anywhere.

                                    • jraph 7 days ago

                                      I don't think it's a white lie. Whether he has imposter syndrome is beside the point. It shows he has sympathy for his colleagues who might have it. Maybe he himself had it before which would let him understand even better what it is, and now he doesn't anymore, this would motivate him to make this point.

                                      The point he is making is all the more convincing given that he is seen as very good, whether he had imposter syndrome or not.

                                      • danielmarkbruce 7 days ago

                                        Yes, that's the point.

                                        • jraph 6 days ago

                                          Yes, I think we mostly agree, except for the "white lie of sorts" and the "never felt like an imposter anywhere" parts, which could be true but which I'm not sure about (because I have met bright people with imposter syndrome). But that was not your main point anyway; you were initially answering 2-3-7-43-1807, who apparently didn't understand my initial comment (in which I never stated anything about Terence Tao's imposter syndrome, which, no, I didn't know about, but that doesn't change anything about what I wrote).

                              • firesteelrain 7 days ago

                                  At my college, you only need one paper, not many.

                                • jraph 7 days ago

                                    In mine, I don't think there was a hard requirement, but your PhD would be seen as weak with zero papers, and having only one would be common enough, I guess, but still seen as a bit weak. It's not very important for the grade, but it's important for what follows: your career, including getting a position.

                              • bumby 7 days ago

                                Adam Grant once related an amusing rejection from a double-blind review. One of the reviewers justified the rejection with something along the lines of “The author would do well to familiarize themselves with the work of Adam Grant”

                                • orthoxerox 7 days ago

                                  Life imitates art. In a 1986 comedy "Back to School" Rodney Dangerfield's character delegates his college assignments to various subject matter experts. His English Lit teacher berates him for it, saying that not only did he obviously cheat, but he also copied his essay from someone who's unfamiliar with the works of Kurt Vonnegut. Of course, the essay was written by Vonnegut himself, appearing in a cameo role.

                                  • Cheer2171 7 days ago

                                    Fair warning: I don't know enough about mathematics to say if this is the case here.

                                      I hear this all the time, but it is actually a real phenomenon: well-known senior figures are rightfully cautious about over-citing their own work, and/or are so familiar with their own work that they don't include much of it in their literature review. For everybody else in the field, it's obvious that the work of famous person X should make up a substantial chunk of the lit review, and that the paper should be explicit about how the new work builds on X's prior, literally paradigm-shifting, work. You can do a bad job of writing about your own past work for a given audience for many different reasons, and many senior academics do so all the time, making their work indistinguishable from that of graduate students -- hence the rejection.

                                    • bumby 7 days ago

                                        I totally understand the case where an author doesn't give sufficient context because they are so close to their previous work that they take it for granted that it's obvious (or, like you said, they are wary of self-citation).

                                        I may be misremembering, but I believe the case with Grant was that the referee was using his own work to discredit his submission. I.e., "If the author were aware of the work of Adam Grant, they would understand why the submitted work is wrong."

                                    • Upvoter33 7 days ago

                                        This also happens pretty commonly. However, it's not even unreasonable! Sometimes you write a paper and you don't do a good enough job putting it in the context of your own related work.

                                      • CrazyStat 7 days ago

                                        And sometimes the reviewer didn't read carefully and doesn't understand what you're doing.

                                        I once wrote a paper along the lines of "look we can do X blazingly fast, which (among other things) lets us put it inside a loop and do it millions of times to do Y." A reviewer responded with "I don't understand what the point of doing X fast is if you're just going to put it in a loop and make it slow again." He also asked us to run simulations to compare our method to another paper which was doing an unrelated thing Z. The editor agreed that we could ignore his comments.

                                      • Metacelsus 7 days ago

                                        I, as a reviewer, made a similar mistake once! The author's initial version seemed to contradict one of their earlier papers but I was missing some context.

                                        • j-krieger 7 days ago

                                            I also made this mistake! I recommended that the author read an adjacent work, which turned out to be by the very same author. He had just forgotten to cite it in his work.

                                          • adastra22 7 days ago

                                            I’ve had it happen to me. Paper rejected because it was copying and not citing a prior message to a mailing list… the message from the mailing list was mine, and the paper was me turning it into a proper publication.

                                        • GuB-42 6 days ago

                                            Playing devil's advocate here: human memory is not flawless, and people make mistakes, so maybe Adam Grant should have reread some of his previous work as a refresher. Even if he wasn't wrong, it is possible that he missed some stuff he thought he had published, but hadn't.

                                          If, as a developer, you had the experience of looking at some terrible code, angrily searching for whoever wrote that monstrosity, only to realize that you did, that's the idea.

                                          • Nevermark 7 days ago

                                            Yes, funny the first time.

                                            Not so much the fifth!

                                          • remoquete 7 days ago

                                            I find it refreshing when researchers disclose their own failures. Science is made of negative results, errors, and rejections, though it's often characterized in a much different, unrealistic way.

                                            By the way, even though some of you may know about it, here's the link to the Journal of Negative Results: https://www.jnr-eeb.org/index.php/jnr

                                            • UniverseHacker 7 days ago

                                              I am actually quite surprised Terence Tao still gets papers rejected from math journals... but appreciate him sharing this, as hearing this from him will help newer scientists not get discouraged by a rejection.

                                                I had the lucky opportunity to do a postdoc with one of the most famous people in my field, and I was shocked how much difference the name did make -- I never had a paper rejection from top-tier journals submitting with him as the corresponding author. I am fairly certain the editors would have rejected my work for not being fundamentally on an interesting enough topic to them, if not for the name. The fact that a big name is interested in something can alone make it a "high impact subject."

                                              • vouaobrasil 7 days ago

                                                > I am actually quite surprised Terence Tao still gets papers rejected from math journals

                                                At least it indicates that the system is working somewhat properly some of the time...

                                                • 9dev 7 days ago

                                                    I find it bewildering that it wouldn't, actually. I would have expected that one of the earliest steps in the review process would be to black out the submitter's name and university, to be revealed only after the review is closed.

                                                  • vouaobrasil 7 days ago

                                                    Well, the editor still sees the name of the submitter, and can also push the reviewers for an easy publication by downplaying the requirements of the journal.

                                                  • scubbo 7 days ago

                                                    Could you elaborate on this statement? It sounds like you're implying something, but it's not clear what.

                                                    • monktastic1 7 days ago

                                                      I interpret it as saying that at least the system hasn't just degraded into a rubber stamp (where someone like Tao can publish anything on name alone).

                                                      • TN1ck 7 days ago

                                                        I think it’s that a paper submitted by one of the most famous authors in the math field is not auto approved by the journals. That even he has to go through the normal process and gets rejected at times.

                                                    • jcrites 7 days ago

                                                      Could that also be because he reviewed the papers first and made sure they were in a suitable state to publish? Or do you think it really was just the name alone, and that if you had published without him they would not have been accepted?

                                                      • UniverseHacker 7 days ago

                                                        He only skimmed them -- scientists at his level are more like a CEO than the stereotype of a scientist, with multiple large labs, startups, and speaking engagements every few days. He trusted me to make sure the papers were good -- and they were -- but his name made the difference between getting into a good journal in the field and a top "high impact" journal that usually does not consider the topic area popular enough to accept papers on, regardless of the quality or content of the paper. At some level, high-impact journals are a popularity contest: to maintain the high citation rate, they only publish from people in large popular fields, as having more peers means more citations.

                                                    • ndesaulniers 7 days ago

                                                      The master has failed more than the beginner has tried.

                                                      • TZubiri 7 days ago

                                                        "Rejection is actually a relatively common occurrence for me, happening once or twice a year on average."

                                                        This feels like a superhuman trying to empathize with a regular person.

                                                        • asah 7 days ago

                                                          A non-zero failure rate is indeed often optimal, because failures provide valuable feedback for finding the optimal operating point on various metrics, e.g. speed, quality, LPU [1], etc.

                                                          That said, given the labor involved in academic publishing and review, the optimal rejection rate should be quite low, i.e. we should find a lower-cost way to pre-filter papers. OTOH, the reviewers may get value from rejected papers...

                                                          [1] least publishable unit
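
                                                          A toy model of why the optimum is non-zero (my own invented one-parameter setup, not a claim about how real journals behave): let $q$ be the ambition of the venue you target, let $p(q)$ be a decreasing acceptance probability, and take the payoff of an acceptance to be $q$ itself. The expected payoff and its first-order condition are

                                                              $E(q) = q\,p(q), \qquad E'(q) = p(q) + q\,p'(q) = 0.$

                                                          With, say, $p(q) = e^{-q}$, we get $E'(q) = (1-q)e^{-q}$, which vanishes at $q^* = 1$, where $p(q^*) = 1/e \approx 0.37$ -- i.e. in this toy model you should be getting rejected about 63% of the time.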

                                                          • PaulHoule 7 days ago

                                                            If you stick around in physics long enough you will submit a paper to Physical Review Letters (which is limited to about four pages) that gets rejected because it isn't of general enough interest, then you resubmit to some other section of The Physical Review and get in.

                                                            These days I read a lot of CS papers with an eye on solving the problems and personally I tend to find the short ones useless. (e.g. pay $30 for a 4-page paper because it supposedly has a good ranking function for named entity recognition except... it isn't a good ranking function)

                                                            • amichail 7 days ago

                                                              Sure, even top mathematicians have paper rejections.

                                                              But I think the more important point is that very few people are capable of publishing papers in top math journals.

                                                              • tetha 7 days ago

                                                                > Because of this, a perception can be created that all of one's peers are achieving either success or controversy, with one's own personal career ending up becoming the only known source of examples of "mundane" failure.

                                                                I've found similar insights when I joined a community of musicians and also discovered the Twitch / YouTube presences of musicians I listen to. Some of Dragonforce's corona-era streams are absolutely worth a watch.

                                                                It's easy to listen to mixed and finished albums and... despair to a degree. How could anyone learn to become that good? It must be impossible, giving up seems the only rational choice.

                                                                But in reality, people struggle and fumble along at their level. Sure enough, the level of someone playing guitar professionally for 20 years is a tad higher than mine, but that really, really perfect album take? That's the one take out of a couple dozen.

                                                                This really helped me "ground" or "calibrate" my sense of how good or how bad I am and gave me a better appreciation of how much of a marathon an instrument can be.

                                                                • kittikitti 7 days ago

                                                                  Academia is a paper tiger. The Internet means you don't need a publisher for your work. Ironically, this self published blog might be one of his most read works yet.

                                                                  • snowwrestler 7 days ago

                                                                    You never needed a publisher; before the Internet you could write up your findings and mail them to relevant people in your field. Quite a lot of scientists did this, actually.

                                                                    What publication in a journal gives you is context, social proof, and structured placement in public archives like libraries. This remains true in the age of the Internet.

                                                                  • atrettel 7 days ago

                                                                    I agree with the discussion that rejection is normal and researchers should discuss it more often.

                                                                    That said, I do think that "publish or perish" plays an unspoken role here. I see a lot of colleagues trying to push out "least publishable units" that might barely pass review (by definition). If you need to juice your metrics, it's a common strategy that people employ. Still, I think a lot of papers would pass peer review more easily if researchers just combined multiple results into a single longer paper. I find those papers to be easier to read since they require less boilerplate, and I imagine they would be easier to pass peer review by the virtue that they simply contain more significant results.

                                                                    • matthewdgreen 7 days ago

                                                                      One of the issues is that we have grad students, and they need to publish in order to travel through the same cycle that we went through. As a more senior scientist I would be thrilled to publish one beautiful paper every two years, but then none of my students would ever learn anything or get a job.

                                                                      • nextn 7 days ago

                                                                        Longer papers with more claims have more to prove, not less. I imagine they would be harder to pass peer review.

                                                                        • tredre3 7 days ago

                                                                          > Longer papers with more claims have more to prove, not less. I imagine they would be harder to pass peer review.

                                                                          Yes, a longer paper puts more work on the peer reviewers (a handful of people). But splitting one project into multiple papers puts more work on the readers (thousands of people). There is a balance to strike.

                                                                          • atrettel 7 days ago

                                                                            I agree with your first part but not your second. Most authors do not make outrageous claims, and I surely would reject their manuscript if they did. I've done it before and will do it again without any issue.

                                                                            To me, the point of peer review is to both evaluate the science/correctness of the work, but also to ensure that this is something novel that is worth telling others about. Does the manuscript introduce something novel into the literature? That is my standard (and the standard that I was taught). I typically look for at least one of three things: new theory, new data/experiments, or an extensive review and summation of existing work. The more results the manuscript has, the more likely it is to meet this novelty requirement.

                                                                          • paulpauper 7 days ago

                                                                            Lots of co-authors. That is one surefire way to inflate it.

                                                                          • ak_111 7 days ago

                                                                            I always thought that part of the upside of being tenured and extremely recognised as a leader of your field is the freedom to submit to incredibly obscure (non-predatory) journals just for fun.

                                                                            • kzz102 7 days ago

                                                                              In academic publishing, there is an implicit agreement between the authors and the journal to roughly match the importance of the paper to the prestige of the journal. Since there is no universal standard on either the prestige of the journal or the importance of the paper, mismatches happen regularly, and rejection is the natural result. In fact, the only way to avoid rejections is to submit a paper to a journal of lower prestige than your estimate, which is clearly not what authors want to do.

                                                                              • directevolve 7 days ago

                                                                                It’s not an accident - if academics underestimated the quality of their own work or overestimated that of the journal, this would increase acceptance rates.

                                                                                Authors start at an attainable stretch goal, hope for a quick rejection if that’s the outcome, and work their way down the list. That’s why rejection is inevitable.

                                                                              • 23B1 7 days ago

                                                                                A similar story.

                                                                                  I actively blogged about my thesis and it somehow came up in one of those older-model plagiarism detectors (this was years and years ago; it might have been just some ham-fisted Google search).

                                                                                The (boomer) profs convened a 'panel' without my knowledge and decided I had in fact plagiarized, and informed me I was in deep doo doo. I was pretty much ready to lose my mind, my career was over, years wasted, etc.

                                                                                  Luckily I was buddies with a Princeton prof who had dealt with this sort of thing, and he guided me through the minefield. I came out fine, but my school never apologized.

                                                                                Failure is often just temporary and might not even be real failure.

                                                                                • ziofill 7 days ago

                                                                                  This is his main point, and I wholeheartedly agree: …a perception can be created that all of one's peers are achieving either success or controversy, with one's own personal career ending up becoming the only known source of examples of "mundane" failure. I speculate that this may be a contributor to the "impostor syndrome"…

                                                                                  • aborsy 7 days ago

                                                                                      Research is getting more and more specialized. Increasingly, there may not be many potential journals for a paper, and even if there are, the paper might be sent to the same reviewers (small sub-communities).

                                                                                    You may have to leave a year of work on arxiv, with the expectation that the work will be rehashed and used in other published papers.

                                                                                      • cess11 7 days ago

                                                                                        Journals are typically for-profit, and science is not, so they don't always align and we should not expect journals to serve science except incidentally.

                                                                                        • iamnotsure 7 days ago

                                                                                            Please note that, despite much work being done in the equality department, being famous is nowadays still a requirement for acquiring the status of impostor-syndrome achiever. Persons who are not really famous do not have impostor syndrome but are just simple copycats in this respect.

                                                                                          • lcnPylGDnU4H9OF 7 days ago

                                                                                            So the non-famous people who claim to have impostor syndrome are actual impostors because they claim to have impostor syndrome. Honestly, that seems like a bit of a weird take but to each their own.

                                                                                          • j7ake 7 days ago

                                                                                              We can laugh at academia, but we know of similar rejection stories in nearly all domains.

                                                                                              AirBnB being rejected for funding, musicians like Schubert struggling their entire lives, writers like Rowling living in poverty.

                                                                                              Rejection will always be the norm in competitive winner-take-all dynamics.

                                                                                            • SergeAx 7 days ago

                                                                                                We often talk about how important it is to be a platform for oneself, to self-host a blog under one's own domain, etc. Why is that not the case for science papers, articles, and issues? Like, wasn't the whole World Wide Web invented specifically for that?

                                                                                              • drumhead 7 days ago

                                                                                                  Saw the title and thought, nothing unusual in that really, then saw the domain was maths-based: it's not Terence Tao, is it?! It was Terence Tao. If one of the greats can get rejected, then there's no shame in you getting rejected.

                                                                                                • kizer 7 days ago

                                                                                                  Whether it’s a journal, a university, a tech company… never take it personally because there’s bureaucracy, policies, etc and information lost in the operation of the whole process. Cast a wide net and believe in the value you’ve created or bring.

                                                                                                  • slackr 7 days ago

                                                                                                    Reminds me—I wish someone would make an anti-LinkedIn, where the norm is to announce only setbacks and mistakes, disappointments etc.

                                                                                                    • omoikane 7 days ago

                                                                                                      There was a site where people posted company failures:

                                                                                                      https://en.wikipedia.org/wiki/Fucked_Company

                                                                                                      • 77pt77 7 days ago

                                                                                                        Just like in academia, no one cares about negative results in professional settings.

                                                                                                        • remoquete 7 days ago

                                                                                                          Folks already do. They often turn them into inspirational tales.

                                                                                                        • lupire 7 days ago

                                                                                                            It's important to remember that a journal's reputation is built by the authors who publish there, and not vice versa.

                                                                                                          • justinl33 7 days ago

                                                                                                            It’s okay Terence, it happens to the best of us.

                                                                                                            • soheil 7 days ago

                                                                                                              Should we therefore also publicize everything else that lies between success and failure?

                                                                                                              • ak_111 7 days ago

                                                                                                                - hey honey how was work today?

                                                                                                                  - it was fine, I desk rejected terence tao, his result was a bit meh and the write-up wasn't up to my standard. Then I had a bit of a quiet office hour, anyway, ...

                                                                                                                • Der_Einzige 7 days ago

                                                                                                                    I've had the surreal moment of attending a workshop where the main presenter (famous) was talking about their soon-to-be-published work and I realized that I was one of their reviewers (months after I wrote the review, so no impact on my score). In this case, I loved their paper and gave it high marks, and so did the other reviewers. Not surprising when I found out who the author was!!!

                                                                                                                    I had to not say a word about it as I talked to them, or else I could ruin the whole peer-review thing!

                                                                                                                  "Hey honey, I reviewed X work from Y famous person today"

                                                                                                                  • krisoft 7 days ago

                                                                                                                    > I have to not say a word to them as I talk to them or else I could ruin the whole peer review thing!

                                                                                                                    In what sense would it ruin peer review to reveal your role after you already wrote and submitted the review?

                                                                                                                • d0mine 7 days ago

                                                                                                                    Why do journals exist at all? Couldn't papers be published on something like arxiv.org (the way software is on github.com)?

                                                                                                                    It could easily support links/backrefs, citations (forks), questions (discussions), tags, followers, etc.

                                                                                                                  • bumby 7 days ago

                                                                                                                    Part of the idea is that journals help curate better publications via the peer review process. Whether or not that occurs in practice is up for some debate.

                                                                                                                    Having a curated list can be important to separate the wheat from the chaff, especially in an era with ever increasing rates of research papers.

                                                                                                                    • d0mine 7 days ago

                                                                                                                        Eliminating journals as a corporate monopoly doesn't eliminate peer review. For example, it should be easy to show the number of citations, and even their specific context in other articles, on the arxiv-like site. Similarly, if I like some app/library implementation on GitHub, I look at its dependencies (a citation, in a sense) to discover things to try.

                                                                                                                        Curated lists can also exist on the site. Look at the awesome* repos on GitHub, e.g. https://github.com/vinta/awesome-python

                                                                                                                        Obviously, some lists can be better than others. The usual social mechanics are adequate here.

                                                                                                                      • bumby 7 days ago

                                                                                                                        I think citation is a noisy/poor signal for peer-review. I've refereed a number of papers where I dig into the citations and find the article doesn't actually support the author's claim. Still, the vast majority of citations go unchecked.

                                                                                                                        I don't think peer-review has to be done by journals, I'm just not sure what the better solution is.

                                                                                                                        • d0mine 7 days ago

                                                                                                                            I've definitely encountered such cases myself (where the cited paper didn't actually support the author's claims).

                                                                                                                            Nothing prevents the site from introducing more direct peer review (published X papers on a topic -> get asked to review a paper).

                                                                                                                            Though if we compare the two cases, reading a paper to leave an anonymous review vs. reading a paper to cite it, the latter seems more authentic and useful (fewer perverse incentives).

                                                                                                                    • sunshowers 7 days ago

                                                                                                                      I think in math, and in many other fields, it is pretty normal to post all papers on arXiv. But arXiv has a lot of incorrect papers on it (tons of P vs NP papers for example), so journals are supposed to act as a filtering mechanism. How well they succeed at it is debated.

                                                                                                                      • d0mine 7 days ago

                                                                                                                            It is naive to think that "journal paper" means correct paper. There are many incorrect papers in journals too (remember the replication crisis).

                                                                                                                            Imagine you found a paper on an arxiv-like site: there can be metadata that might help determine quality (author credentials, citations by other high-ranked papers, comments), but nothing is certain. There may be cliques that violently disagree with each other (paper clusters with incompatible theories). The medium can help with highlighting quality results (e.g. by choosing the default ranking algorithm for the search, or introducing StackOverflow-like gamification) but it can't and shouldn't do science instead of practitioners.

                                                                                                                    • haunter 7 days ago

                                                                                                                      fwiw, editorial review =/= peer review

                                                                                                                      • abetusk 7 days ago

                                                                                                                        The second post in that thread is gold:

                                                                                                                        """

                                                                                                                        ... I once almost solved a conjecture, establishing the result with an "epsilon loss" in a key parameter. We submitted to a highly reputable journal, but it was rejected on the grounds that it did not resolve the full conjecture. So we submitted elsewhere, and the paper was accepted.

                                                                                                                        The following year, we managed to finally prove the full conjecture without the epsilon loss, and decided to try submitting to the highly reputable journal again. This time, the paper was rejected for only being an epsilon improvement over the previous literature!

                                                                                                                        ...

                                                                                                                        """

                                                                                                                        • YouWhy 7 days ago

                                                                                                                                While I'm not a mathematician, I think such an attitude on the part of the journal does not encourage healthy community dynamics.

                                                                                                                                Instead of allowing the community to join forces by breaking a larger problem into pieces, it encourages siloing and a camper mentality.

                                                                                                                          • abetusk 7 days ago

                                                                                                                                  I agree. This also reflects a lack of effort on the journal's part to set expectations about what reviewers should be looking for in an acceptable paper.

                                                                                                                                  In the journal's defense though, what most likely happened is that the reviewers were different between the two submissions and didn't know the context. Ultimately, I think, this type of rejection comes down mostly to the reviewers' discretion, and that can lead to this type of situation.

                                                                                                                            I cut off the rest of the post but Tao finished it with this:

                                                                                                                            """

                                                                                                                            ... Being an editor myself, and having had to decline some decent submissions for a variety of reasons, I find it best not to take these sorts of rejections personally,

                                                                                                                            ...

                                                                                                                            """

                                                                                                                        • jongjong 7 days ago

                                                                                                                          The high standards of those academic journals sound incredible in this day and age when media is full of misinformation and irrelevant information.

                                                                                                                                  The anecdote about the highly reputable journal rejecting each half of a result that (presumably) would have been accepted as a single combined paper is telling.

                                                                                                                          • dwaltrip 7 days ago

                                                                                                                            Hilarious irony:

                                                                                                                            > With hindsight, some of my past rejections have become amusing. With a coauthor, I once almost solved a conjecture, establishing the result with an "epsilon loss" in a key parameter. We submitted to a highly reputable journal, but it was rejected on the grounds that it did not resolve the full conjecture. So we submitted elsewhere, and the paper was accepted.

                                                                                                                            > The following year, we managed to finally prove the full conjecture without the epsilon loss, and decided to try submitting to the highly reputable journal again. This time, the paper was rejected for only being an epsilon improvement over the previous literature!

                                                                                                                            • bumby 7 days ago

                                                                                                                              A lot of the replies make it seem like there is some great over-arching coordination and intent between subsequent submissions, but I’ll offer up an alternative explanation: sometimes the reviewer selection is an utter crap shoot. Just because the first set of reviewers may offer a justification for rejection, it may be completely unrelated to the rationale of a different set of reviewers. Reviewers are human and bring all kinds of biases and perspectives into the process.

                                                                                                                                      It's frustrating, but it's the result of a somewhat haphazard process. It's also not uncommon to get conflicting comments within the same review cycle. Some of this may be attributed to a lack of clear communication by the author. But on occasion, it leads me to believe many journals don't take a lot of time selecting appropriate reviewers and settle for the first few who agree to review.

                                                                                                                              • dhosek 7 days ago

                                                                                                                                        Luck plays a large role in many vaguely similar things. I regularly submit fiction and poetry for publication (with acceptance rates of 2% for fiction and 1.5% for poetry) and so much depends on things well out of my control (which is part of why I'm sanguine about those acceptance rates: given the venues I'm submitting to, they're not unreasonable numbers, and more recent years' stats are better than that).¹ In many cases the editors like what they read, but don't have a place for it in the current issue, or sometimes they're just having a bad day.

                                                                                                                                        1. For those who care about the full messy details, I have charts and graphs at https://www.dahosek.com/2024-in-reejctions-and-acceptances/

                                                                                                                                • keepamovin 7 days ago

                                                                                                                                          And this is how we do science? How is that a good basis for scientific reality? It seems there should at least be transparency and oversight, or maybe the whole system is broken: open reviews on the web, not limited to a small committee, sound better.

                                                                                                                                  Science is about the unknown, building testable models and getting data.

                                                                                                                                  Even an AI review system could help.

                                                                                                                                  • larodi 7 days ago

                                                                                                                                    This is how we don’t do papers.

                                                                                                                                    Even though my pal did a full Gouraud shading in pure assembly using registers only (including the SP and a dummy stack segment) - absolute breakthrough back in 1997.

                                                                                                                                    We built a 4-server P3 farm seeding 40 Mbit/s of outward traffic in 1999. I myself wrote complete Perl-based binary stream unpacking, before protobuf was a thing. It’s still live, handling POS terminals.
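                                                                                                                                    For readers who haven’t hand-rolled this: binary stream unpacking mostly means peeling length-prefixed frames off a byte stream. A minimal sketch in Python (the record layout is hypothetical, not the actual POS protocol; struct stands in for Perl’s unpack):

                                                                                                                                        import struct

                                                                                                                                        # Hypothetical layout: 2-byte big-endian length prefix, then a frame
                                                                                                                                        # holding a 4-byte message id followed by the payload.
                                                                                                                                        def unpack_records(stream: bytes):
                                                                                                                                            records, offset = [], 0
                                                                                                                                            while offset + 2 <= len(stream):
                                                                                                                                                (length,) = struct.unpack_from(">H", stream, offset)
                                                                                                                                                offset += 2
                                                                                                                                                body = stream[offset:offset + length]
                                                                                                                                                if len(body) < length:
                                                                                                                                                    break  # truncated frame; a real reader would wait for more bytes
                                                                                                                                                (msg_id,) = struct.unpack_from(">I", body, 0)
                                                                                                                                                records.append((msg_id, body[4:]))
                                                                                                                                                offset += length
                                                                                                                                            return records

                                                                                                                                        # One frame carrying message id 7 and payload b"OK":
                                                                                                                                        frame = struct.pack(">HI2s", 6, 7, b"OK")
                                                                                                                                        print(unpack_records(frame))  # [(7, b'OK')]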

                                                                                                                                    We discovered a much more effective teaching methodology which almost doubled learning outcomes. Time-series compression with grammars… and many more, as we keep doing new R&D.

                                                                                                                                    None of it is going to be published as papers on time (if ever), because we really don’t want to suffer this process, which afterwards brings very little value for someone outside academia, or even for people in academia unless they pursue a PhD or similar positions.

                                                                                                                                    I’m struggling to force myself to write an article on text2sql that I have already checked and confirmed contains a novel approach to RAG that works, but do I want to suffer such rejection humiliation? Not really…

                                                                                                                                    It seems this paper ground is reserved for academics and mathematicians in a certain ‘sectarian modus operandi’, and everyone else is a sucker. Sadly, after a while the code is also lost…

                                                                                                                                    • tsurba 7 days ago

                                                                                                                                      If you are not even going to bother writing them up properly, no one is going to care. Seems fair to me.

                                                                                                                                      You don’t have to make a ”paper” out of it; maybe write a blog post or whatever, if that is more your style. Maybe upload a PDF to arXiv.

                                                                                                                                      Half the job in science is informing (or convincing) everyone else about what you made and why it is significant. That’s what conferences try to facilitate, but if you don’t want to do that, feel free to do the ”advertising” some other way.

                                                                                                                                      Complaining about journals being selective is just a lazy excuse for not publishing anything to help others. Sure the system sucks, but then you can just publish some other way. For example, ask other people who understand your work to ”peer review” your blog posts.

                                                                                                                                      • fhdjkghdfjkf 7 days ago

                                                                                                                                        > Half the job in science is informing (or convincing) everyone else about what you made and why it is significant.

                                                                                                                                        Additionally, writing is the best way to properly think things through. If you can't write an article about your work then most likely you don't even understand it yet. Maybe there are critical errors in it. Maybe you'll find that you can further improve it. By researching and citing the relevant literature you put your work in perspective, how it relates to other results.

                                                                                                                                        • larodi 6 days ago

                                                                                                                                          the point was about the paper thing, not about the joy of writing one's thoughts down or publishing in the public domain. most of my work gets published these days - code, designs, teaching, translations... i actually prefer it to keeping backups of obscure random stuff. but of course, people sure do release a lot of cool stuff on github and whatnot. one of my repos has ±300 stars and counting; thousands of papers come nowhere near it in actual impact.

                                                                                                                                          BUT... the topic is not about releasing stuff in the wild. opensource being a vehicle for research is outside the scope of the present discussion. the incentives and the barriers to writing what is called an academic paper are the topic. wild stuff does not bring impact factor, and does not get you closer to a PhD in the practical sense.

                                                                                                                                          the whole paper thing is intended for sharing purposes, yet it keeps people away very successfully. it's a system, not a welcoming one, that's all I'm saying.

                                                                                                                                        • spenczar5 7 days ago

                                                                                                                                          “do I want to suffer such rejection humiliation? Not really”

                                                                                                                                          The point of Terence Tao’s original post is that you just cannot think of rejection as humiliation. Rejection is not a catastrophe.

                                                                                                                                          • pabs3 7 days ago

                                                                                                                                            > Sadly, after a while the code is also lost…

                                                                                                                                            Get it included in the archives of Software Heritage and Internet Archive:

                                                                                                                                            https://archive.softwareheritage.org/ https://wiki.archiveteam.org/index.php/Codearchiver
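                                                                                                                                            If I remember the docs correctly, Software Heritage's "Save Code Now" API makes this a one-liner for any public repo; the endpoint shape below is from memory, so verify it against their API reference (the repo URL is a placeholder):

                                                                                                                                                import requests

                                                                                                                                                # Ask Software Heritage to archive a public git repository.
                                                                                                                                                repo = "https://github.com/example/my-research-code"  # placeholder
                                                                                                                                                resp = requests.post(
                                                                                                                                                    f"https://archive.softwareheritage.org/api/1/origin/save/git/url/{repo}/")
                                                                                                                                                print(resp.status_code, resp.json().get("save_request_status"))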

                                                                                                                                            • marvel_boy 7 days ago

                                                                                                                                              >We discovered a much more effective teaching methodology which almost doubled learning outcomes.

                                                                                                                                              Please, could you elaborate?

                                                                                                                                            • michaelt 7 days ago

                                                                                                                                              > And this is how we do science? How is that a good basis for scientific reality?

                                                                                                                                              The journal did not go out empty, and the paper did not cease to exist.

                                                                                                                                              The incentives on academics reward them for publishing in exclusive journals, and the most exclusive journals - Nature, Science, Annals of Mathematics, The BMJ, Cell, The Lancet, JAMS and so on - only publish a limited number of pages in each issue: partly because they have print editions, and partly because that limited size is exactly what makes them exclusive.

                                                                                                                                              A rejection from "Science" or "Nature" doesn't mean that your paper is wrong, or that it's fraudulent, or that it's trivial - it just means you're not in the 20 most important papers out of the 50,000 published this week.

                                                                                                                                              And yes, if instead of making one big splash you make two smaller splashes, you might well find neither splash is the biggest of the week.

                                                                                                                                              • n144q 7 days ago

                                                                                                                                                It is not a good way of doing science, but it is the best we have.

                                                                                                                                                All the alternatives, including the ones you proposed, have their own serious downsides, which is why we have kept the status quo for the past few decades.

                                                                                                                                                • eeeeeeehio 7 days ago

                                                                                                                                                  Peer review is not designed for science. Many papers are not rejected because of an issue with the science -- in fact, reviewers seldom have the time to actually check the science! As a CS-centric example: you'll almost never find a reviewer who reads a single line of code (if code is submitted with the paper at all). There is artifact review, but this is never tied to the acceptance of the paper. Reviewers focus on ideas, presentation, and the presented results. (And the current system is a good filter for this! Most accepted papers are well-written and the results always look good on paper.) However, reviewers never take the time to actually verify that the experiment code matches the ideas described in the paper, and that the results reproduce. Ask any CS/engineering PhD student how many papers (in top venues) they've seen with a critical implementation flaw that invalidates the results -- and you might begin to understand the problem.

                                                                                                                                                  At least in CS, the system can be fixed, but those in power are unable and unwilling to fix it. Authors don't want to be held accountable ("if we submit the code with the paper -- someone might find a critical bug and reject the paper!"), and reviewers are both unqualified (i.e. haven't written a line of code in 25 years) and unwilling to take on more responsibility ("I don't have the time to make sure their experiment code is fair!"). So we are left with an obviously broken system where junior PhD students review artifacts for "reproducibility" and this evaluation has no bearing whatsoever on whether a paper gets accepted. It's too easy to cook up positive results in almost any field (intentionally, or unintentionally), and we have a system with little accountability.

                                                                                                                                                  It's not "the best we have", it's "the best those in power will allow". Those in power do not want consequences for publishing bad research, and also don't want the reviewing load required to keep bad research out.

                                                                                                                                                  • Ar-Curunir 7 days ago

                                                                                                                                                    This is much too negative. Peer review indeed misses issues with papers, but by-and-large catches the most glaring faults.

                                                                                                                                                    I don’t believe for one moment that the vast majority of papers in reputable conferences are wrong, if only for the simple reason that putting out incorrect research gives an easy layup for competing groups to write a follow-up paper that exposes the flaw.

                                                                                                                                                    It’s also a fallacy to state that papers aren’t reproducible without code. Yes code is important, but in most cases the core contribution of the research paper is not the code, but some set of ideas that together describe a novel way to approach the tackled problem.

                                                                                                                                                    • izacus 7 days ago

                                                                                                                                                      I spent a chunk of my career productionizing code from ML/AI papers, and a huge share of them are outright not reproducible.

                                                                                                                                                      Mostly they lack critical information: missing values for chosen constants in equations, outright missing information on input preparation, or omitted chunks of "common knowledge" algorithms. Those that don't lack information report measurements that outright didn't fit the reimplemented algorithms, or only achieved their claimed quality on the author's handpicked, massaged dataset.

                                                                                                                                                      It's all worse than you can imagine.

                                                                                                                                                      • tsurba 7 days ago

                                                                                                                                                        That’s the difference between truly new approaches to modelling an existing problem (or coming up with a new problem) and everything else. Slightly different results or missing exact hyperparameter settings don’t really invalidate the value of the former kind of research. If the math works, and it is a nice new point of view, it’s good. It may not even help anyone with practical applications right now, but it may inspire ideas further down the line that make the work practical, too.

                                                                                                                                                        In contrast, if the main value of a paper is a claim that it increases performance/accuracy on some task by x%, then its value can be completely dependent on whether it actually is reproducible.

                                                                                                                                                        Sounds like you are complaining about the latter type of work?

                                                                                                                                                        • eeeeeeehio 6 days ago

                                                                                                                                                          > Slightly different results or missing exact hyperparameter settings don’t really invalidate the value of the former kind of research.

                                                                                                                                                          If this is the case, the paper should not include a performance evaluation at all. If the paper needs a performance evaluation to prove its worth, we have every right to question the way that evaluation was conducted.

                                                                                                                                                          • izacus 7 days ago

                                                                                                                                                            I don't think there's much value in theoretical approaches that lack important derivation data either, so there's no need to try to split the papers like this. Academic CS publishing is flooded with bad-quality papers in any case.

                                                                                                                                                        • withinboredom 7 days ago

                                                                                                                                                          I spent 3 months implementing a paper once. Finally, I got to the point where I understood the paper probably better than the author did. It was an extremely complicated paper (homomorphic encryption). At that point, I realized that it doesn't work. There was nothing about it that would ever work, and it wasn't for lack of understanding. I emailed the author asking them to clarify some specific things in the paper; they never responded.

                                                                                                                                                          In theory, the paper could work, but it would be incredibly weak (the key turned out to be either 1 or 0 -- a single bit).
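                                                                                                                                                          To make concrete how weak that is: a one-bit key space falls to exhaustive search in at most two tries. A toy sketch (the XOR "cipher" and the validity check are stand-ins, not the paper's scheme):

                                                                                                                                                              # Brute-force a one-bit key: try both possible keys, keep the
                                                                                                                                                              # one whose decryption looks like valid plaintext.
                                                                                                                                                              def brute_force(ciphertext, decrypt, looks_valid):
                                                                                                                                                                  for key in (0, 1):  # the entire key space
                                                                                                                                                                      plaintext = decrypt(ciphertext, key)
                                                                                                                                                                      if looks_valid(plaintext):
                                                                                                                                                                          return key, plaintext
                                                                                                                                                                  return None

                                                                                                                                                              demo_ct = bytes(b ^ 1 for b in b"hello")  # toy "encryption": XOR each byte with the key
                                                                                                                                                              print(brute_force(demo_ct,
                                                                                                                                                                                lambda ct, k: bytes(b ^ k for b in ct),
                                                                                                                                                                                lambda pt: pt in (b"hello", b"world")))  # -> (1, b'hello')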

                                                                                                                                                          • Ar-Curunir 7 days ago

                                                                                                                                                            Do you have a link to the paper?

                                                                                                                                                            • no_identd 7 days ago

                                                                                                                                                              +1

                                                                                                                                                          • jeltz 7 days ago

                                                                                                                                                            Anecdotally it is not. Most papers in CS I have read have been bad and impossible to reproduce. Maybe I have been unlucky but my experience is sadly the same.

                                                                                                                                                            • eeeeeeehio 6 days ago

                                                                                                                                                              > by-and-large catches the most glaring faults.

                                                                                                                                                              I did not dispute that peer review acts as a filter. But reviewers are not reviewing the science, they are reviewing the paper. Authors are taking advantage of this distinction.

                                                                                                                                                              > if only for the simple reason that putting out incorrect research gives an easy layup for competing groups to write a follow-up paper that exposes the flaw.

                                                                                                                                                              You can’t make a career out of exposing flaws in existing research. Finding a flaw and showing that a paper from last year had cooked results gets you nowhere. There’s nowhere to publish “but actually, this technique doesn’t seem to work” research. There’s no way for me to prove that the ideas will NEVER work, only that their implementation doesn’t work as well as they claimed. Authors who claim that the value is in the ideas should stick to Twitter, where they can freely dump all of their ideas without any regard for whether they work or not.

                                                                                                                                                              And if you come up with another way of solving the problem that actually works, it’s much harder to convince reviewers that the problem is interesting (because the broken paper already “solved” it!)

                                                                                                                                                              > in most cases the core contribution of the research paper is not the code, but some set of ideas that together describe a novel way to approach the tackled problem

                                                                                                                                                              And this novel approach is really only useful if it outperforms existing techniques. “We won’t share the code but our technique works really well we promise” is obviously not science. There is a flood of papers with plausible techniques that look reasonable on paper and have good results, but those results do not reproduce. It’s not really possible to prove the technique “wrong”, but the burden should be on the authors to provide proof that their technique works and on reviewers to verify it.

                                                                                                                                                              It’s absurd to me that mathematics proofs are usually checked during peer review, but in other fields we just take everyone at their word.

                                                                                                                                                              • kortilla 7 days ago

                                                                                                                                                                They aren’t necessarily wrong but most are nearly completely useless due to some heavily downplayed or completely omitted flaw that surfaces when you try to implement the idea in actual systems.

                                                                                                                                                                There is technically academic novelty so it’s not “wrong”. It’s just not valuable for the field or science in general.

                                                                                                                                                                • franga2000 7 days ago

                                                                                                                                                                  I don't think anyone is saying it's not reproducible without code, just that it's much more difficult for absolutely no reason. If I can run the code of an ML paper, I can quickly check whether the examples were cherry-picked, swap in my own test or training set... The new technique or idea is still the main contribution, but I can test it immediately, apply it to new problems, optimise the performance to enable new use-cases...
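                                                                                                                                                                  As a self-contained toy of the cherry-picking check this enables (nothing from any real paper): the same "published" model can look far better on a favorable split than on a fresh one, and runnable code is what lets you notice:

                                                                                                                                                                      import random

                                                                                                                                                                      random.seed(0)
                                                                                                                                                                      data = [(x, 2 * x + random.gauss(0, 5)) for x in range(200)]  # noisy line

                                                                                                                                                                      def mse(model, split):
                                                                                                                                                                          return sum((model(x) - y) ** 2 for x, y in split) / len(split)

                                                                                                                                                                      model = lambda x: 2 * x  # the "published" model
                                                                                                                                                                      cherry = sorted(data, key=lambda p: abs(p[1] - 2 * p[0]))[:50]  # favorable split
                                                                                                                                                                      fresh = random.sample(data, 50)                                 # your own split

                                                                                                                                                                      print("cherry-picked split MSE:", round(mse(model, cherry), 1))  # small
                                                                                                                                                                      print("fresh split MSE:        ", round(mse(model, fresh), 1))   # ~25, the noise variance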

                                                                                                                                                                  It's like a chemistry paper for a new material (think the recent semiconductor thing) not including the amounts used and the way the glassware was set up. You can probably get it to work in a few attempts, but then the result doesn't have the same properties as described, so now you're not sure if your process was wrong or if their results were.

                                                                                                                                                                  • cauch 7 days ago

                                                                                                                                                                    Sharing the code may also propagate the original implementation's biases and bugs.

                                                                                                                                                                    It's a bit like saying that to help reproduce the experiment, the experimental tools used to reach the conclusion should be shared too. But reproducing the experiment does not mean "having a different finger clicking on exactly the same button", it means "redoing the experiment from scratch, ideally with a _different experimental setup_ so that it mitigates the unknown systematic biases of the original setup".

                                                                                                                                                                    I'm not saying that sharing code is always bad; you give examples of how it can be useful. But sharing code has pros and cons, and I'm surprised to see how often people don't understand that.

                                                                                                                                                                    • HPsquared 7 days ago

                                                                                                                                                                      If they don't publish the experimental setup, another person could use the exact same setup anyway without knowing. Better to publish the details so people can actually think of independent ways to verify the result.

                                                                                                                                                                      • cauch 7 days ago

                                                                                                                                                                        But they will not make the same mistakes. If you ask two people to build a piece of software, they may use the same logic and build the same algorithm, but what are the chances they will introduce exactly the same bugs?

                                                                                                                                                                        Also, your argument seems to be "_maybe_ they will use the exact same setup". That already looks better than the solution where you provide the code and they _will for sure_ use the exact same setup.

                                                                                                                                                                        And "publish the details" corresponds to explain the logic, not share the exact implementation.

                                                                                                                                                                        Also, I'm not saying that sharing the code is bad, but I am saying that sharing the code is not the perfect solution, and people who think not sharing the code is very bad usually don't understand the dangers of sharing it.

                                                                                                                                                                        • pegasus 7 days ago

                                                                                                                                                                          Nobody said sharing the code "is the perfect solution". Just that sharing the code is way better and should be commonplace, if not required. Your argument that not doing so will force other teams to re-write the code seems unrealistic to me. If anyone wants to check the implementation they can always disregard the shared code, but having it allows other, less time-intensive checks to still happen: like checking for cherry-picked data, as GP suggested, looking through the code for possible pitfalls, etc. Besides, your argument could be extended to any specific data the paper presents: why publish numbers so people can get lazy and just trust them? Just publish the conclusion and let other teams figure out ways to prove/disprove it! Which is (more than) a bit ridiculous, wouldn't you say?

                                                                                                                                                                          • cauch 6 days ago

                                                                                                                                                                            > Just that sharing the code is way better

                                                                                                                                                                            And I disagree with that, and I think you are overestimating the gain brought by sharing the code and underestimating the possible problems that sharing the code brings.

                                                                                                                                                                            At CERN, there are two general-purpose experiments, CMS and ATLAS. The policy is that people from one experiment are not allowed to discuss ongoing work with people from the other. Note that this is officially forbidden, not "if some want to discuss, go ahead; others may choose not to". Why? Because sharing these details would ruin the fact that the two experiments are independent. If you hear from your CMS friend that they have observed a peak at 125 GeV, you are biased. Even if you are a nice guy and try to forget about it, it is too late; you are unconsciously biased: you will be drawn to check the 125 GeV region and possibly read a fluctuation as a peak that you would not have noticed otherwise.

                                                                                                                                                                            So, no, saying "I give you the code, but you may choose not to look at it" is not enough; you will still de-blind the community. As soon as some people look at the code, they are biased: if they then try to reproduce from scratch, they will come up with an implementation different from the one they would have produced without having looked at the code.

                                                                                                                                                                            Nothing too catastrophic either. Don't get me wrong, I think that sharing the code is great in some cases. But this picture of sharing the code as all-important is just a misunderstanding of how science is done.

                                                                                                                                                                            As for the other "specific data", yes, some data is also better left unshared if it is not needed to reproduce the experiment and can be a source of bias. The same could be said about everything else in the scientific process: why is sharing the code so important, but not sharing all the notes of each and every meeting? I think the person who doesn't understand that is often a software developer, who doesn't realize that the code the scientist creates is not the science; it's not the publication; it's just a tool, the same way a pen and a piece of paper once were. Software developers are paid to produce code, so code is for them the end goal. Scientists are paid to do research, and code is not the end goal.

                                                                                                                                                                            But, as I've said, sharing the code can be useful. It can help other teams working on the same subject reach the same level faster, or notice errors in the code. But in both cases, the consequence is that these other teams are not producing independent work, and this is the price to pay. (And of course, there are layers of dependence: some publications tend to share too much, others not enough, but it does not mean some are very bad and others very good. Not being independent is not the end of the world. The problem is when someone considers that sharing the code is "the good thing to do" without understanding that.)

                                                                                                                                                                      • izacus 7 days ago

                                                                                                                                                                          What you're deliberately ignoring is that omitting important information is material to a lot of papers, because the methodology was massaged into the desired results to create publishable content.

                                                                                                                                                                          It's really strange seeing how many (academic) people will talk themselves into bizarre explanations for a simple phenomenon of widespread results hacking to generate required impact numbers. Occam's razor and all that.

                                                                                                                                                                        • cauch 7 days ago

                                                                                                                                                                            If it is massaged into desired results, then it will be invalidated by the facts quite easily. Conversely, obfuscating things is also easy if you just provide the whole package and say "see, you click on the button and you get the same result; you have proven that it is correct". Not providing code means that people will build their own implementation and come back to you when they see they don't get the same results.

                                                                                                                                                                            So, no, no need to invent that academics are all part of this strange, crazy, evil group. Academics are debating and being skeptical of their colleagues' results all the time, which already contradicts your idea that the majority are motivated by fraud.

                                                                                                                                                                            Occam's razor says simply that there are some good reasons why code is not shared, ranging from laziness, to lack of expertise in code design, to the fact that code sharing is just not that important (or is sometimes plainly bad) for reproducibility; no need to invent fraud as the main reason.

                                                                                                                                                                          • izacus 7 days ago

                                                                                                                                                                            Ok, that's a bit naive now. The whole "replication crisis" is exactly the term for bad papers not being invalidated "easily". [1]

                                                                                                                                                                              Because, if you'd been in academia, you'd know that replicating papers isn't something that will allow you to keep your funding, your job, and your path to the next title.

                                                                                                                                                                              And I'm not sure why you jumped to "crazy evil group": no one is evil; everyone is following their incentives and trying to keep their jobs and secure funding. The incentives are perverse. This willing blindness to perverse incentives (which appears both in US academia and the corporate world) is a repeated source of confusion for me. Is the idea that people aren't always perfectly honest when protecting their jobs, career success, and reputation really so foreign to you?

                                                                                                                                                                            [1]:https://en.wikipedia.org/wiki/Replication_crisis

                                                                                                                                                                            • cauch 6 days ago

                                                                                                                                                                                That's my point: people here link the replication crisis to "not sharing the code", which is ridiculous. If you just click a button to run the code written by the other team, you haven't replicated anything. If you review the code, you have replicated "a little bit", but it is still not as good as if you had recreated the algorithm from scratch, independently.

                                                                                                                                                                                It's very strange to pretend that sharing the code will help the replication crisis, when the replication crisis is about INDEPENDENT REPLICATION, where the experiment is redone in an independent way, sometimes even with a totally perpendicular setup. The closer the setup, the weaker the replication.

                                                                                                                                                                                It feels like watching the finger that points at the moon: not understanding that replication does not mean "re-running the experiment and reaching the same numbers".

                                                                                                                                                                                > no one is evil; everyone is following their incentives and trying to keep their jobs and secure funding

                                                                                                                                                                                Sharing the code has nothing to do with the incentives. I will not lose my funding if I share the code. What you are adding on top of that is that the scientist is dishonest and does not share because they have cheated in order to get the funding. But this is the part that does not make sense: unless they are already established enough to be believed without proof, they will lose their funding, because the funding comes from a peer committee that will notice that the facts don't match the conclusions.

                                                                                                                                                                                I'm sure there are people who downplay fraud in the scientific domain. But pretending that fraud is a good career strategy, and that this is why people commit fraud so massively that sharing the code is rare, is just ignorance of the reality.

                                                                                                                                                                                I'm sure some people commit fraud and don't want to share their code. But how do you explain that so many scientists don't share their code? Is it because the whole community is riddled with cheaters? Including cheaters who happen to present conclusions that keep being proven correct when reproduced? Because yes, there are experiments that have been reproduced and confirmed even though the code, at the time, was not shared. How do you explain that, if the main reason not to share the code is to hide cheating?

                                                                                                                                                                              • izacus 6 days ago

                                                                                                                                                                                  I've spent plenty of my career doing exactly the type of replication you're talking about, and easily the majority of CS papers weren't replicable from the methodology written down in the paper, on a dataset that hadn't been optimized and preselected by the paper's author.

                                                                                                                                                                                  I didn't care about shared code (it's not common anyway), but about independent implementation and comparison of ML and AI algorithms. So I'm not sure why you're getting so hung up on the code part: the majority of papers described trash science even in their text, in an effort to get published and show results.

                                                                                                                                                                                • cauch 6 days ago

                                                                                                                                                                                    I'm sorry that the area you work in is rotten and does not meet minimum scientific standards. But please do not reach conclusions that are blatantly incorrect about areas you don't know.

                                                                                                                                                                                  The problem is not really "academia", it is that, in your area, the academic community is particularly poor. The problem is not really the "replication crisis", it is that, in your area, even before we reach the concept of replication crisis, the work is not even reaching the basic scientific standard.

                                                                                                                                                                                    Oh, I guess it is Occam's razor after all: "It's really strange seeing how many (academic) people will talk themselves into bizarre explanations for a simple phenomenon of widespread results hacking to generate required impact numbers". The Occam's razor explanation: so many (academic) people will not talk about the malpractice because so many (academic) people work in an area where such malpractice is exceptional.

                                                                                                                                                                                  • bumby 4 days ago

                                                                                                                                                                                    But what’s the point of the peer review process if it’s not sifting out poor academic work?

                                                                                                                                                                                      It reads as if your point is talking in circles. “Don’t blame academia when academia doesn’t police itself” is not a strong stance when they are portrayed as doing exactly that. Or, maybe more generously, you have a different definition of academia and its role.

                                                                                                                                                                                      I think sharing code can help because it’s part of the method. It wouldn’t be reasonable to omit aspects of a paper’s methodology under the guise that replicators should devise their own independent method. Explicitly sharing methods is the whole point of publication, and sharing them is necessary for evaluating soundness, generalizability, and limitations. izacus is right: a big part of the replication crisis is that there aren’t nearly as many incentives to replicate work, and omitting parts of the method makes this worse, not better.

                                                                                                                                                                                    • cauch 4 days ago

                                                                                                                                                                                        Maybe for the audience here, it is useful to consider that peer review is a bit like scrum. It's a good idea, but that does not mean that everyone who says they do scrum does it properly. And when, in some situation, it does not work, that does not mean that scrum is useless or incorrect.

                                                                                                                                                                                      And, like "scrum", "academia" is just the sum of the actors, including the paper authors. It's even more obvious that peer review is done by other paper authors: you cannot really be a paper author and blame "academia" for not doing a good peer review, because you are one of the person in charge of the peer review yourself.

                                                                                                                                                                                      As for "sharing code is part of the method", it is where I strongly disagree. Reproducibility and complete description allowing reproducibility is part of the method, but keeping enough details blinded (a balance that can be subjective) is also part of the method. So, someone can argue that sharing code is in contradiction with some part of the method. I think one of the misunderstanding is that people cannot understand that "sharing methods" does not require "sharing code".

                                                                                                                                                                                      Again, the "replication crisis" can be amplified by sharing code: people don't replicate the experiment, they just re-run it and then pretend it was replicated. Replicating the experiment means re-proving the results in an independent way, sometimes even with an orthogonal setup (that's why CMS and ATLAS at CERN are using on purpose different technologies and that they are not allowed to share their code). Using the same code is strongly biased.

                                                                                                                                                                                      • bumby 3 days ago

                                                                                                                                                                                          It seems you are conflating concepts, maybe because you take this personally, which you shouldn’t. The process can be broken without the academic being bad; they are just part of a broken process. Likewise, if scrum is a broken process, it will lead to bad results. If it isn’t “done properly”, then we seem to be saying the same thing: the process isn’t working. As I and others have said, there are misaligned incentives which can lead to a broken process. Just because it sometimes works doesn’t mean it’s a good process, any more than a broken clock is still correct twice a day. It varies by discipline, but there seem to be quite a few domains where there are actually more bad publications than good. That signals a bad process.

                                                                                                                                                                                          As others have talked about here, sometimes it becomes impossible to replicate the results. Is it because of some error in the replication process, the data, the practitioner, or is the original a sham? It's hard to deduce when there's a lot you can't chase down.

                                                                                                                                                                                          I also think you are applying an overly superficial rationalization as to why sharing code would amplify the replication issue. This is only true if people mindlessly re-run the code. The point of sharing it is so the code can be interrogated to see if there are quality issues. Your same argument could be made for sharing data; if people just blindly accepted the data, the replication issue would be amplified. Yet we know that sharing the data is what led to uncovering some of the biggest issues in replication, and I don’t see many people defending hiding data as a contradiction in the publication process. I suspect it’s for the reasons others have already alluded to in this thread.

                                                                                                                                                                                        • cauch a day ago

                                                                                                                                                                                            I'm not sure what you are saying. The peer review process works relatively well in the large majority of scientific fields. There are problems, but they are pretty anecdotal and far from counterbalancing the advantages. The previous commenter was blaming the peer review process for "bad incentives that lead to bad science", but that is an incorrect analysis. The bad science in their field is mainly due to the fact that private interests and people with a poor scientific culture get into this field more easily.

                                                                                                                                                                                          Also, let's not mix up "peer review" or "code sharing" and "bad publication" or "replication crisis".

                                                                                                                                                                                            I know people outside of science don't realise this, but publishing is only a very small element of the full scientific process. Scientists are talking together and exchanging all the time, at conferences, at workshops, ... This idea that a bad publication fools the domain experts does not correspond to reality. I can easily find a research paper mill and publish my made-up paper, but it would be 100% ignored by domain experts. Maybe one or two will have a look at the article, just in case, but it is totally wild to think that domain experts just randomly give a lot of credit to random unknown people rather than working with the groups of peers that they know well enough to know are reliable. So the percentage of "bad papers" is not a good metric: it is not at all representative of the percentage of bad papers that make it to the domain experts.

                                                                                                                                                                                            You seem to misunderstand the "replication crisis". The crisis does not arise because the replicators are bad or the initial authors are cheating. There are a lot of causes: science happens at the technological edge, and that edge is ever trickier to reach; the number of publications has increased a lot; there are more and more economic interests trying to bias the system; and there is the stupid "publish or perish" plus "publish only the good results" culture that everyone in the academic sector agrees is stupid, but which exists because of non-academic people. If you publish a scientifically interesting result that says "we have explored this way but found nothing", you get a lot of pressure from the non-academic people who are foolish enough to say that you have wasted money.

                                                                                                                                                                                            You seem to say "I saw a broken clock once, so all clocks are broken, and if you pretend otherwise, it is just because a broken clock is still correct twice a day".

                                                                                                                                                                                          > This is only true if people mindlessly re-run the code. The point of sharing it is so the code can be interrogated to see if there are quality issues.

                                                                                                                                                                                          "Mindlessly re-running the code" is one extreme. "reviewing the code perfectly" is another one. Then there are all the scenario in the middle from "reviewing almost perfectly" to "reviewing superficially but having a false feeling of security". Something very interesting to mention is that in good practices, code review is part of software development, and yet, it does not mean that software have 0 bugs. Sure, it helps, and sharing the code will help too (I've said that already), but the question is "does it help more than the problem it may create". That's my point in this discussion: too many people here just don't understand that sharing the code create biases.

                                                                                                                                                                                          > Yet we know that sharing the data is what led to uncovering some of the biggest issues in replication,

What? What are your examples of the "replication crisis" where the problem was uncovered by sharing the data? Are you mixing up the "replication crisis" with "fraud"? Even for fraud, sharing the data is not really the solution: the people who get caught are simply reckless, and they could easily have faked their data in more subtle ways. On top of that, re-running on the same data does not help if the conclusion is wrong because of a statistical fluctuation in the data (at a 95% confidence level, 5% of papers can be wrong while having 0 bugs: the data really do tell the authors that the most sensible conclusion is the one they reached, and yet that conclusion is incorrect). Re-running on independent data, on the other hand, ALWAYS exposes a fraudster.
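
To make the statistical-fluctuation point concrete, here is a minimal simulation (an illustration only, not any real study's pipeline): with no true effect and a 95% confidence threshold, about 5% of perfectly bug-free analyses still come out "significant", and re-running the same code on the same data will reproduce that wrong conclusion every time.

    # Illustration: ~5% of bug-free analyses of pure-noise data are "significant".
    import random

    random.seed(0)
    TRIALS = 10_000
    N = 50  # samples per group
    significant = 0

    for _ in range(TRIALS):
        a = [random.gauss(0.0, 1.0) for _ in range(N)]
        b = [random.gauss(0.0, 1.0) for _ in range(N)]  # same distribution: no real effect
        mean_diff = sum(a) / N - sum(b) / N
        z = mean_diff / (2.0 / N) ** 0.5  # sd of the mean difference is sqrt(2/N)
        if abs(z) > 1.96:  # the 95% confidence threshold
            significant += 1

    print(f"false positive rate: {significant / TRIALS:.3f}")  # ~0.05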

                                                                                                                                                                                          > and I don’t see many people defending hiding data as a contradiction in the publication process.

What do you mean? At CERN, sharing the data behind your newly published paper with another collaboration is strictly forbidden. Only specific samples may be shared, and only after a lengthy approval procedure. But the point is that a paper should provide enough information that you don't need the data to judge whether the methodology is sound.

                                                                                                                                                                                          • bumby a day ago

                                                                                                                                                                                            >I'm not sure what you are saying.

I'm saying the peer review process is largely broken, in both the quality and the quantity of publications. You have taken a somewhat condescending tone a couple of times now, suggesting you think you are talking to an audience unfamiliar with the peer review process, but you should know that the HN crowd goes far beyond professional coders. I am well aware of the peer review process, and I publish and referee papers regularly.

                                                                                                                                                                                            >There are problems but they are pretty anecdotal

This makes me think you may not be familiar with the actual work in this area. It varies, but some domains show that the majority (as many as two-thirds) of studies have replication issues. Replication rates are lowest in complex systems, with the 11% in biomedicine being the lowest I'm aware of. Other domains have better rates, but the problem is neither trivial nor anecdotal. Brian Nosek was one of the first I'm aware of to study this systematically, but there are others. Data Colada focuses on this problem, and even they only write up the studies that were (previously) highly regarded and cited; they don't bother to raise alarms about the less consequential work they find problems with. So, no, this is not me extrapolating from seeing "a broken clock once."

>it does not mean that software has 0 bugs

Anyone who regularly works with code knows this. But I think you're misunderstanding the intent of sharing the code. It's not just for the referees, but for the people trying to replicate the work for their own purposes. As numerous people in this thread have said, replicating can be very hard. Good professors will often assign well-regarded papers to students to show them the results are often impossible to reproduce. Sharing code helps troubleshoot.

>So the percentage of "bad papers" is not a good metric: it is not at all representative of the percentage of bad papers that actually reach the domain experts.

This is an unnecessary moving of the goalposts. The thrust of the discussion is about the peer-review and publication process. Remember the title is "one of my papers got declined today". And now you seemingly admit that the publication process is broken, but claim it doesn't matter because experts won't be fooled. Except we have examples of Nobel laureates making mistakes with data (Daniel Kahneman), of high-caliber researchers sharing their own anecdotes (Tao and Grant), and of fraudulent publications impacting millions of dollars of subsequent work (Alzheimer's). My claim is that a good process should catch both low-quality research and outright fraud. Your position is like an assembly line claiming it doesn't have a problem, when 70% of its widgets have to be thrown out, because the people at the end of the line can spot the bad widgets (even when they can't).

>What are your examples of the "replication crisis" where the problem was uncovered by sharing the data?

Early examples would be dermatology studies for melanoma where simple good practices, like balancing the datasets, were not followed. Or criminal justice studies that amplified racial biases, or where the authors didn't realize the temporal data was sorted by criminal severity. And yes, the most egregious examples are fraud, like the Dan Ariely case. That wasn't found until people went to the data source directly, rather than through the researchers. But there are countless examples of p-hacking that could be found by sharing data. If your counter is that these are examples of people cheating recklessly who could have been more careful, that doesn't make the case that the peer-review process works. It means it's even worse.
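
For the p-hacking case specifically, a toy example shows why the raw data matters (hypothetical numbers, not any of the studies above): if authors quietly test many arbitrary subgroup splits and report only the one that crossed p < 0.05, a reader who has the data can re-run all the comparisons and see the multiplicity the paper hid.

    # Toy p-hacking demo: many arbitrary subgroup tests on pure-noise data.
    import random

    random.seed(1)
    N = 200
    outcome = [random.gauss(0.0, 1.0) for _ in range(N)]  # no real effect anywhere

    hits = []
    for subgroup in range(20):
        labels = [random.random() < 0.5 for _ in range(N)]  # arbitrary split
        g1 = [x for x, keep in zip(outcome, labels) if keep]
        g2 = [x for x, keep in zip(outcome, labels) if not keep]
        diff = sum(g1) / len(g1) - sum(g2) / len(g2)
        se = (1.0 / len(g1) + 1.0 / len(g2)) ** 0.5  # sd is ~1 by construction
        if abs(diff / se) > 1.96:
            hits.append(subgroup)

    # A paper might report only the splits in `hits`; with the shared data,
    # a reader sees that 20 tests were run and the "finding" dissolves.
    print(f"nominally significant subgroups: {hits}")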

>sharing the data behind your newly published paper with another collaboration is strictly forbidden

                                                                                                                                                                                            Yup, and I'm aware of other domains that hide behind the confidentiality of their data as a way to obfuscate bad practices. But, in general, people assume sharing data is a good thing, just like sharing code should be.

>But the point is that a paper should provide enough information that you don't need the data to judge whether the methodology is sound.

Again (this has been said before): the point of sharing is to aid troubleshooting. Since we've already established that replication is hard, people need a way to understand why the results differed. Is it because the replicator made a mistake? Shenanigans in the data? A bug in the original code? P-hacking? Is the method actually broken? Or is the method not as generalizable as the original authors led the reader to believe? Many of those are impossible to rule out unless the authors share their code and data.

You bring up CERN so consistently that I tend to believe you are looking at this problem through a straw and missing the larger context of the rest of the scientific world. Yours reads as the perspective of someone inside a bubble.

                                                                                                                                                                                            • cauch 9 hours ago

I will not answer everything, because what would be the point?

Yes, sharing the code can be one way to find bugs; I've said that already. Yes, sharing the code can help bootstrap another team; I've said that already.

What people don't realize is that reproducing the algorithm from scratch is also very, very efficient. First, it is arguably a very good way to find bugs: if the other team does not get exactly the same numbers as you, you can pinpoint exactly where you diverged. And when you find the cause, in the large majority of cases it is something that sailed past several code reviewers. Reading code while asking "does this make sense?" is not an easy way to find bugs, because bugs usually live exactly where the original author's code looked fine on reading.
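
That pinpointing can be almost mechanical: feed the same input through both implementations and compare intermediate values stage by stage. A minimal sketch (the stage functions here are made up for illustration; a real pipeline would compare arrays within a tolerance):

    # Sketch: locate where two independent implementations diverge.
    # `stages_a` / `stages_b` are hypothetical stage-by-stage versions
    # of the same published algorithm, written by two different teams.

    def compare_pipelines(stages_a, stages_b, x, tol=1e-9):
        """Feed the same input through both pipelines; report the first divergence."""
        for i, (f, g) in enumerate(zip(stages_a, stages_b)):
            ya, yb = f(x), g(x)
            if abs(ya - yb) > tol:
                return f"diverged at stage {i}: {ya} vs {yb}"
            x = ya  # outputs agree; continue with the shared value
        return "pipelines agree on this input"

    # Example with a deliberate bug in team B's second stage:
    a = [lambda v: v * 2, lambda v: v + 1, lambda v: v ** 2]
    b = [lambda v: v * 2, lambda v: v - 1, lambda v: v ** 2]  # bug: - instead of +
    print(compare_pipelines(a, b, 3.0))  # -> diverged at stage 1: 7.0 vs 5.0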

And second, there is a contradiction in saying both "people will study the code intensively" and "people will go faster because they don't have to write the code".

                                                                                                                                                                                              > Remember the title is "one of my papers got declined today"

Have you even read what Tao says? He explains that he himself has rejected papers and has probably generated similar apparently paradoxical situations. His point is NOT that there is a problem with paper publication; it is that paper rejection is not such a big deal.

For the rest, you keep mixing up "peer review", "code sharing", "replication crisis", ... and because of that your logic makes no sense. I say "bad papers that turn out to have errors (involuntary or not) are anecdotal" and you answer "11% of biomedical publications have replication problems". Then, when I ask you to give examples where the replication crisis was avoided by sharing the data, you talk about bad papers that turn out to have errors (involuntary or not).

And yes, I used CERN as an example because 1) I know it well, and 2) if what you say is correct, how on earth is CERN not bursting into flames right now? You claim that sharing code or sharing data is a good idea and part of good practice. If that is true, how do you explain that CERN forbids it and is still able to produce really good papers? By your account, CERN would even be an exception where the replication crisis, bad papers and peer-review problems are almost nonexistent (and that is why I supposedly got the wrong idea). But if that is the case, how do you explain that, despite not doing what you claim helps avoid these problems, CERN does BETTER?!

By the way, at uni I became very good friends with a lot of people, some of them scientists in other disciplines. We regularly have this kind of discussion, because it is interesting to compare our different worlds. The funny part is that I did not come to think on my own that sharing the code or the data is not such a big deal (it still can be good, but it's not "the good practice"); I realised it because another person, a chemist, mentioned it.

                                                                                                                                                                                              • bumby 4 hours ago

>What people don't realize is that reproducing the algorithm from scratch is also very, very efficient.

This is where we differ. Especially if the author shares neither the data nor the code, because you can never truly be sure whether it's a software bug, a data anomaly, a bad method, or outright fraud. So you can end up burning tremendous amounts of time investigating all of those avenues. That statement (as well as others about how trivial replication is) makes me think you don't actually try to replicate anything yourself.

                                                                                                                                                                                                >there is a contradiction in saying "people will study the code intensively" and "people will go faster because they don't have to write the code".

                                                                                                                                                                                                I never said "people will go faster" because they don't have to write the code. Maybe you're confusing me with another poster. You were the one who said sharing code is worthless because people can "click on the button and you get the same result". My point, and maybe this is where we differ, is that for the ultimate goal is not to create the exact same results. The goal I'm after is to apply the methodology to something else useful. That's why we share the work. When it doesn't seem to work, I want to go back to the original work to figure out why. The way you talk about the publication process tells me you don't do very much of this. Maybe that's because of your work at CERN is limited in that regard, but when I read interesting research I want to apply it to different data that are relevant to the problems I'm trying to solve. This is the norm outside of those who aren't studying the replication crisis directly.

                                                                                                                                                                                                >I say "bad paper that turns out to have errors (involuntary or not) are anecdotal"

My answer was not conflating peer review, code sharing, and replication (although I do think they are related). My answer was to point you to researchers who work in this area, because their work shows the problem is far from anecdotal. My guess is you didn't bother to look them up because you've already made up your mind and can't be bothered.

>I ask you to give examples where the replication crisis was avoided by sharing the data, you talk about bad papers that turn out to have errors

Because it's a bad question. A study that is replicated using the same data is "avoiding the replication crisis". Did you really want me to list studies that have been replicated? Go on Kaggle or Figshare or GenBank if you want examples of datasets that have been shared (and used in replications), like CORD-19, NIH dbGaP, or the World Values Survey, or any number of other datasets. You can find plenty of published studies that use that data and try to replicate them yourself.

>how on earth is CERN not bursting into flames

The referenced authors talk about how physics is generally the most replicable field. This is largely because it has the most controlled experimental setups. Domains that do much worse on replicability are hampered by messier systems, ethical constraints, and other factors that limit the scientific process. In the larger scheme of things, physics is the anomaly, and it is not a good basis for extrapolating to the state of science as a whole. I tend to think being in that bubble has caused you to over-extrapolate and draw too strong a conclusion. (You should also review the HN guidelines, which urge commenters to avoid using caps for emphasis.)

                                                                                                                                                                                                >"sharing the code...but it's not "the good practice""

I'm not sure if you think a single unsourced quip is convincing but, your anecdotal discussion aside, lots of people disagree with you and your chemist friend. Enough so that sharing data and code has become more and more common practice (and even a requirement at some journals). Maybe that has changed since your time at uni, and probably for the better.

                                                                                                                                                                      • pastage 7 days ago

More code should be released, but code is dependent on the people and environment that run it. When I release buggy code I almost always have to spend time supporting others in getting it to run. That is not what you want to be doing for a proof of concept whose purpose is to demonstrate an idea.

I am not published, but I have implemented a number of papers as code, and it works fine (hashing, protocols and search, mostly). I have also used code dumps to test something directly. I think I spend less time on code dumps, and if I fail I give up more easily. That is the danger: you start blaming the tools instead of asking how well you have understood the ideas.

I agree with you that more code should be released. It is not a solution for good science on its own, though.

                                                                                                                                                                    • DiogenesKynikos 7 days ago

                                                                                                                                                                      > It's not "the best we have", it's "the best those in power will allow". Those in power do not want consequences for publishing bad research, and also don't want the reviewing load required to keep bad research out.

                                                                                                                                                                      This is a very conspiratorial view of things. The simple and true answer is your last suggestion: doing a more thorough review takes more time than anyone has available.

                                                                                                                                                                      Reviewers work for free. Applying the level of scrutiny you're requesting would require far more work than reviewers currently do, and maybe even something approaching the amount of work required to write the paper in the first place. The more work it takes to review an article, the less willing reviewers are to volunteer their time, and the harder it is for editors to find reviewers. The current level of scrutiny that papers get at the peer-review stage is a result of how much time reviewers can realistically volunteer.

                                                                                                                                                                      Peer review is a very low standard. It's only an initial filter to remove the garbage and to bring papers up to some basic quality standard. The real test of a paper is whether it is cited and built upon by other scientists after publication. Many papers are published and then forgotten, or found to be flawed and not used any more.

                                                                                                                                                                      • ksenzee 7 days ago

                                                                                                                                                                        > Reviewers work for free.

                                                                                                                                                                        If journals were operating on a shoestring budget, I might be able to understand why academics are expected to do peer review for free. As it is, it makes no sense whatsoever. Elsevier pulls down huge amounts of money and still manages to command free labor.

                                                                                                                                                                        • withinboredom 7 days ago

                                                                                                                                                                          I think it has to be this way, right? Otherwise a paid reviewer will have obvious biases from the company.

                                                                                                                                                                          • ksenzee 7 days ago

                                                                                                                                                                            It seems to me that paying them for their time would remove bias, rather than add it.

                                                                                                                                                                            • nativeit 7 days ago

                                                                                                                                                                              How is that?

                                                                                                                                                                              • flir 7 days ago

                                                                                                                                                                                I guess the sensible response is "what bias does being paid by Elsevier add that working for free for Elsevier doesn't add?"

                                                                                                                                                                                The external bias is clear to me (maybe a paper undermines something you're about to publish, for example) but I honestly can't see much additional bias in adding cash to a relationship that already exists.

                                                                                                                                                                                • ksenzee 6 days ago

                                                                                                                                                                                  Exactly. At least if the work is paid, the incentive to do it is clearer.

                                                                                                                                                                          • eeeeeeehio 6 days ago

                                                                                                                                                                            >The real test of a paper is whether it is cited and built upon by other scientists after publication. Many papers are published and then forgotten, or found to be flawed and not used any more.

This does seem true, but it overlooks the downstream effects of publishing flawed papers.

                                                                                                                                                                            Future research in this area is stymied by reviewers who insist that the flawed research already solved the problem and/or undermines the novelty of somewhat similar solutions that actually work.

                                                                                                                                                                            Reviewers will reject your work and insist that you include the flawed research in your own evaluations, even if you’ve already pointed out the flaws. Then, when you show that the flawed paper underperforms every other system, reviewers will reject your results and ask you why they differ from the flawed paper (no matter how clearly you explain the flaws) :/

                                                                                                                                                                            Published papers are viewed as canon by reviewers, even if they don’t work at all. It’s very difficult to change this perception.

                                                                                                                                                                            • DiogenesKynikos 6 days ago

                                                                                                                                                                              If you get such a simple-minded reviewer, you can push back in your response, or you can even contact the editor directly.

                                                                                                                                                                              Reviewers are not all-powerful, and they don't all share the same outlook. After all, reviewers are just scientists who have published articles in the past. If you are publishing papers, you're also reviewing papers. When you review papers, will you assume that everything that has ever passed peer review is true? Obviously not.

                                                                                                                                                                        • eru 7 days ago

                                                                                                                                                                          > It is not a good way of doing science, but it is the best we have.

                                                                                                                                                                          What makes you think so? We already have and had plenty of other ways. Eg you can see how science is done in corporations or for the military or for fun (see those old gentlemen scientists, or amateurs these days), and you can also just publish things on your own these days.

                                                                                                                                                                          The only real function of these old fashioned journals is as gatekeepers for funding and career decisions.

                                                                                                                                                                          • n144q 7 days ago

I have heard first-hand accounts from multiple people who ran into a different set of problems (from academia's) when publishing papers in corporations. Publishing is never simple or easy. If you have concrete examples, or better, generally recognized studies showing there is an objectively better way to do research, I'd very much like to know about them.

Because, as a PhD who knows dozens of other PhDs in both academia and industry, and who has never heard of this magic new approach to doing science, it would be quite a surprise.

                                                                                                                                                                            • eru 6 days ago

                                                                                                                                                                              I wasn't talking about publishing, I was talking about doing science.

Publishing can be one part of doing science, but it's not the be-all and end-all.

                                                                                                                                                                              And yes, I have no idea how great corporate research or military research etc are, I just brought them up as examples of research outside of academia that we can look to and perhaps learn something from.

                                                                                                                                                                              (And I also strongly suspect research at TSMC will be very different from research at Johnson & Johnson and that's very different from how Jane Street does research. So not all corporate research is the same.)

> Because, as a PhD who knows dozens of other PhDs in both academia and industry, and who has never heard of this magic new approach to doing science, it would be quite a surprise.

And why would you expect your PhD friends to have heard of it? PhDs are very much in academia, and very much embedded in academia's publish-or-perish.

                                                                                                                                                                              • bumby 4 days ago

                                                                                                                                                                                There are plenty of PhDs doing science outside of academia

                                                                                                                                                                              • bumby 7 days ago

I think the distinction in the examples given (corporations, military) is that science is being done, but much less openly.

                                                                                                                                                                            • psychoslave 7 days ago

So the lesson is that there is no single good way to do science (or anything, really): whatever approach is retained, there will be human biases involved.

So the least brittle option might be to go through all possible approaches, but that is obviously more resource-demanding, and we still have the problem of synthesizing the insights accumulated across approaches, a synthesis which can itself be undertaken in various ways. Under that perspective, it's more of an indefinitely deep spiral.

Another perspective is to consider the expected outcomes of the stakeholders. A shiny academic career? An attempt to bring some enlightenment about deep cognitive patterns to the luckiest fellows who have the resources at hand to follow your high-level intellectual gymnastics? A pursuit of ways to improve the human condition through relevant and sound bodies of knowledge? There are definitely many others.

                                                                                                                                                                              • Panoramix 7 days ago

We kept it mostly out of inertia, and because it's the most profitable arrangement for the journals (everybody does the work for free and they don't have to invest in new systems), not because it's the best for science and scientists.

                                                                                                                                                                                • tuyiown 7 days ago

                                                                                                                                                                                  > It is not a good way of doing science, but it is the best we have.

                                                                                                                                                                                  It may have been for some time, but there is human social dynamics in play.

                                                                                                                                                                                  • fastball 7 days ago

                                                                                                                                                                                    What is the serious downside of open internet centric review?

                                                                                                                                                                                    • Al-Khwarizmi 7 days ago

                                                                                                                                                                                      If by "open" you mean that the paper is there and people just voluntarily choose to review it, rather than having some top-down coordinated assignment process, the problem is that papers by the superstars would get hundreds of reviews while papers from unknown labs would get zero.

                                                                                                                                                                                      You could of course make it double blind, but that seems hard to enforce in practice in such an open setup, and still, hyped papers in fashionable topics would get many reviews while papers that are hardcore theoretical, in an underdog domain, etc. would get zero.

                                                                                                                                                                                      Finally, it also becomes much more difficult to handle conflicts of interest, and the system is highly vulnerable to reviewer collusion.

                                                                                                                                                                                      • daemontus 7 days ago

                                                                                                                                                                                        As others have mentioned, the main problem is that open systems are more vulnerable to low-cost, coordinated external attacks.

This is less of an issue with systems where there is little monetary value attached (I don't know anyone whose mortgage is paid by their Stack Overflow reputation). Now imagine that the future prospects of a national lab with a multi-million yearly budget are tied to a system that can be (relatively easily) gamed with a Chinese or Russian bot farm for a few thousand dollars.

                                                                                                                                                                                        There are already players that are trying hard to game the current system, and it sometimes sort of works, but not quite, exactly because of how hard it is to get into the "high reputation" club (on the other hand, once you're in, you can often publish a lot of lower quality stuff just because of your reputation, so I'm not saying this is a perfect system either).

                                                                                                                                                                                        In other words, I don't think anyone reasonable is seriously against making peer review more transparent, but for better or worse, the current system (with all of its other downsides) is relatively robust to outside interference.

So, unless we (a) make "being a scientist" much more financially accessible, or (b) untangle funding from this new "open" measure of "scientific achievement", the open system would probably not be very impactful. Of course, (a) is unlikely, at least in most high-impact fields; CS was an outlier for a long time, not so much today. And (b) would mean that funding agencies would still need something else to judge your research by, which would most likely still be some closed, reputation-based system.

Edit TL;DR: Describe how an open peer-review system could be used to distribute funding among researchers while being reasonably robust to coordinated attacks. Then we can talk :)

                                                                                                                                                                                        • reilly3000 7 days ago

                                                                                                                                                                                          The open internet.

                                                                                                                                                                                          i.e. trolls, brigades, spammers, bots, and all manner of uninformed voices.

                                                                                                                                                                                          • bruce511 7 days ago

                                                                                                                                                                                            To expand on this - because if the barrier to publishing is zero, then the "reputation" of the publisher is also zero.

                                                                                                                                                                                            (Actually, we already have the "open publishing" you are suggesting - it's called Blogging or social media.)

                                                                                                                                                                                            In other words, if we have open publishing, then someone like me (with zero understanding of a topic) can publish a very authentic-looking pile of nonsense with exactly the same weight as someone who, you know, has actually done some science and knows what they're talking about.

                                                                                                                                                                                            The common "solution" to this is voting - like with StackOverflow answers. But that is clearly trivial to game and would quickly become meaningless.

                                                                                                                                                                                            So human review it is - combined with the reputation that a journal brings. The author gains reputation because some reviewers (with reputation) reviewed the paper, and the journal (with reputation) accepted it.

                                                                                                                                                                                            Yes, this system is cumbersome, prone to failure, and subject to outside influences. It's not perfect. Just the best we have right now.

                                                                                                                                                                                            • eru 7 days ago

                                                                                                                                                                                              > To expand on this - because if the barrier to publishing is zero, then the "reputation" of the publisher is also zero.

                                                                                                                                                                                              That's fine. I don't read eg Astral Codex Ten because I think the reputation of Substack is great. The blog can stand entirely on its own reputation (and the reputation of its author), no need for the publisher to rent out their reputation.

                                                                                                                                                                                              See also Gwern.net for a similar example.

                                                                                                                                                                                              No need for any voting.

                                                                                                                                                                                              • ricksunny 7 days ago

Reviewers could themselves have reputation levels that weight how visible their reviews are. This would make brigading more costly. There might still be a pseudoscientific brigade trying to take down (or boost) a particular paper, one that clusters so much that it builds its own competing reputation, but that's okay. The casual reader can decide which high-vote reviews to follow on their own merits.
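
A minimal sketch of what that weighting could look like (a hypothetical scoring rule, not any existing platform's formula): each review's visibility is its votes scaled by the reviewer's earned reputation, so a brigade of fresh accounts has to outweigh established reviewers rather than just outnumber them.

    # Hypothetical reputation-weighted ranking of reviews for one paper.
    from dataclasses import dataclass

    @dataclass
    class Review:
        reviewer: str
        reputation: float  # earned from past reviews the community found sound
        votes: int         # raw upvotes on this review

    def visibility(r: Review) -> float:
        # Votes count, but are scaled by reviewer reputation, so a bot farm
        # of fresh zero-reputation accounts contributes almost nothing.
        return r.votes * r.reputation

    reviews = [
        Review("established_critic", reputation=9.0, votes=12),
        Review("new_account_1", reputation=0.1, votes=200),  # brigade
        Review("domain_expert", reputation=7.5, votes=8),
    ]

    for r in sorted(reviews, key=visibility, reverse=True):
        print(f"{r.reviewer:20s} visibility={visibility(r):7.1f}")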

                                                                                                                                                                                      • hanche 7 days ago

                                                                                                                                                                                        > sometimes the reviewer selection is an utter crap shoot

                                                                                                                                                                                        Indeed, but when someone of Tao's caliber submits a paper, any editor would (should) make an extra effort to get the very best researchers to referee the paper.

                                                                                                                                                                                        • crote 7 days ago

                                                                                                                                                                                          But isn't that exactly why the submission should be anonymous to the reviewer? It's science, the paper should speak for itself. You don't want a reviewer to be biased by the previous accomplishments of the author. An absolute nobody can make groundbreaking and unexpected discoveries, and a Nobel prize winner can make stupid mistakes.

                                                                                                                                                                                          • aj7 7 days ago

                                                                                                                                                                                            In subfields of physics, and I suspect math, the submitter is never anonymous. These people talk at conferences, have a list of previous works, etc., and fields are highly specialized. So the reviewer knows with 50-95% certainty who he is reviewing.

                                                                                                                                                                                            • gus_massa 7 days ago

I agree; also, many papers near the beginning say

                                                                                                                                                                                              > We are exending our previous work in [7]

                                                                                                                                                                                              or cite a few relevant papers

                                                                                                                                                                                              > This topic has been studied in [3-8]

                                                                                                                                                                                              Where 3 was published by group X, 5 by group Y, 7 by group Z and 4, 6 and 8 by group W. Anyone can guess the author of the paper is in group W.

                                                                                                                                                                                              Just looking at the citations, it's easy to guess the group of the author.

                                                                                                                                                                                              • hexane360 7 days ago

In many subfields, no one even attempts to hide the submitter from the reviewers. And usually the reviewers can themselves be guessed with high accuracy by the submitters.

                                                                                                                                                                                              • hoten 7 days ago

                                                                                                                                                                                                The reviewer wouldn't need to know, just the one coordinating who should review what.

                                                                                                                                                                                                • sokoloff 7 days ago

                                                                                                                                                                                                  Inherent in the editor trying to "get the very best researchers to [review] the paper" is likely to be a leak of signal. (My spouse was a scientific journal editor for years; reviewers decline to review for any number of reasons, often just being too busy and the same reviewer is often asked multiple times per year. Taking the extra effort to say "but this specific paper is from a really respected author" would be bad, but so would "but please make time to review this specific paper for reasons that I can't tell you".)

                                                                                                                                                                                                  • bumby 7 days ago

                                                                                                                                                                                                    I didn’t read the comment to mean the editor would explicitly signal anything was noteworthy about the paper, but rather they would select referees from a specific pool of experts. From that standpoint, the referee would have no insight into whether it was anything special (and they couldn’t tell if the other referees were of distinction either).

                                                                                                                                                                                                    • sokoloff 7 days ago

                                                                                                                                                                                                      The editor is already selecting the best matched reviewers though, for any paper they send out for review.

                                                                                                                                                                                                      They have more flexibility on how hard they push the reviewer to accept doing the specific review, or for a specific timeline, but they still get declines from some reviewers on some papers.

                                                                                                                                                                                                      • bumby 7 days ago

                                                                                                                                                                                                        I know that’s the ideal, but my original post ends with some skepticism of this claim. I’ve had more than a few papers come across my desk that are a poor fit. I try to be honest with the editors about why I decline the chance to review them. If I witness it more than a few times, they obviously aren’t being as judicious in their assignments as the ideal assumes.

                                                                                                                                                                                                  • wslh 7 days ago

                                                                                                                                                                                                    When submitting papers to high-profile journals, the expectations are very high for all authors. In most cases, the editorial team can determine from the abstract whether the paper is likely to meet their standards for acceptance.

                                                                                                                                                                                                    • taneq 7 days ago

                                                                                                                                                                                                      Doesn’t that just move the source of bias from the reviewer to the coordinator? Some ‘nobody’ submitting a paper would get a crapshoot reviewer while a recognisable ‘somebody’ gets a well regarded fair reviewer.

                                                                                                                                                                                                    • derefr 7 days ago

                                                                                                                                                                                                      Full anonymity may be valuable if the set of a paper's reviewers has to stay fixed throughout the review process.

                                                                                                                                                                                                      If peer review worked more like other publication workflows (where documents are handed across multiple teams that review them for different reasons), I think partial anonymity (e.g. rounding authors down to a citation-count number) might actually be useful.

                                                                                                                                                                                                      Basically: why can't we treat peer review like the customer service gauntlet?

                                                                                                                                                                                                      - Papers must pass all levels from the level they enter up to the final level, to be accepted for publication.

                                                                                                                                                                                                      - Papers get triaged to the inbox of a given level based on the citation numbers of the submitter.

                                                                                                                                                                                                      - Thus, papers from people with no known previous publications go first to the level-1 reviewers, who exist purely to distinguish and filter off crankery/quackery. They're just there so that everyone else doesn't have to waste time on this. (This level is what non-academic publishing houses call the "slush pile.") However, they should be using criteria that give only false positives [treating bad papers as good] but never false negatives [treating good papers as bad]. The positives pass on to the level-2 ("normal") stream.

                                                                                                                                                                                                      - Likewise, papers from pre-eminent authors are assumed to not often contain stupid obvious mistakes, and therefore, to avoid wasting the submitter's time and the time of reviewers in levels 1 through N-1, these papers get routed straight to final level-N reviewers. This group is mostly made up of pre-eminent authors themselves, who have the highest likelihood of catching the smallest, most esoteric fatal flaws. (However, they're still using criteria that require them to be extremely critical of any obvious flaws as well. They just aren't supposed to bother looking for them first, since the assumption is that they won't be there.)

                                                                                                                                                                                                      - Papers from people with an average number of citations end up landing on some middle level, getting reviewed for middling-picky stuff by middling-experienced people, and then either getting bounced back for iteration at that point, or getting repeatedly handed up the chain with those editing marks pre-picked so that the reviewers on higher levels don't have to bother looking for those things and can focus on the more technically-difficult stuff. It's up to the people on the earlier levels to make the call of whether to bounce the paper back to the author for revision.

                                                                                                                                                                                                      (Note that, under this model, no paper is ever rejected for publication; papers just get trapped in an infinite revision loop, under the premise that in theory, even a paper fatally-flawed in its premise could be ship-of-Theseus-ed during revision into an entirely different, non-flawed paper.)

                                                                                                                                                                                                      You could compare this to a software toolchain — first your code is "reviewed" by the lexer; then by the parser; then by the macro expansion; then by any static analysis passes; then by any semantic-model transformers run by the optimizer. Your submission can fail out as invalid at any step. More advanced / low-level code (hand-written assembler) skips the earlier steps entirely, but that also means talking straight to something that expected pre-picked output and will give you very terse, annoyed-sounding, non-helpful errors if it does encounter a flaw that would have been caught earlier in the toolchain for HLL code.
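
                                                                                                                                                                                                      A minimal sketch of this leveled triage in Python, purely to make the routing concrete; the citation thresholds, the number of levels, and the function names are all invented for illustration:

                                                                                                                                                                                                          # Hypothetical leveled-review triage: a paper enters at a level set by the
                                                                                                                                                                                                          # submitter's citation count, then must pass every level from there to the top.
                                                                                                                                                                                                          LEVEL_THRESHOLDS = [0, 100, 1_000, 10_000]  # citations needed to enter levels 1..4 (made up)

                                                                                                                                                                                                          def entry_level(citations: int) -> int:
                                                                                                                                                                                                              """Return the 1-based review level a submission enters at."""
                                                                                                                                                                                                              level = 1
                                                                                                                                                                                                              for i, threshold in enumerate(LEVEL_THRESHOLDS[1:], start=2):
                                                                                                                                                                                                                  if citations >= threshold:
                                                                                                                                                                                                                      level = i
                                                                                                                                                                                                              return level

                                                                                                                                                                                                          def review(paper, citations, reviewers_by_level):
                                                                                                                                                                                                              """Route a paper from its entry level up to the final level; never reject outright."""
                                                                                                                                                                                                              level = entry_level(citations)
                                                                                                                                                                                                              while level <= len(reviewers_by_level):
                                                                                                                                                                                                                  if reviewers_by_level[level - 1](paper) == "revise":
                                                                                                                                                                                                                      return f"bounced back to the author for revision at level {level}"
                                                                                                                                                                                                                  level += 1
                                                                                                                                                                                                              return "accepted"

                                                                                                                                                                                                      Under this sketch, an unknown author (0 citations) enters at level 1 and a pre-eminent one (say 50,000 citations) goes straight to the final level, matching the slush-pile and fast-track cases above.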

                                                                                                                                                                                                      • bumby 7 days ago

                                                                                                                                                                                                        I agree with a lot of this premise but this gave me pause:

                                                                                                                                                                                                        >under this model, no paper is ever rejected for publication; papers just get trapped in an infinite revision loop

                                                                                                                                                                                                        This could mean a viable paper never gets published. Most journals require that you only submit to one journal at a time, so if it didn’t meet the criteria for whatever reason (even a bad scope fit), it would never get a chance at a better fit somewhere else.

                                                                                                                                                                                                        • davrosthedalek 7 days ago

                                                                                                                                                                                                          Typically, papers are reviewed by 1 to 3 reviewers. I don't think you realistically can have more than two levels -- the editor as the first line, and then one layer of reviewers.

                                                                                                                                                                                                          You can't really blind the author names. First, the reviewers must be able to recognize if there is a conflict of interest, and second, especially for papers on experiments, you know from the experiment name who the authors would be.

                                                                                                                                                                                                          • satellite2 7 days ago

                                                                                                                                                                                                            Assuming citations follow a Zipf distribution, almost all papers would have to go through all levels.
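
                                                                                                                                                                                                            A quick simulation of that point, assuming (as stand-ins) numpy's Zipf sampler and an arbitrary 100-citation cutoff for entering anything above the lowest level:

                                                                                                                                                                                                                # With heavy-tailed (Zipf-like) citation counts, nearly all submitters sit
                                                                                                                                                                                                                # near the bottom, so nearly all papers would enter at level 1.
                                                                                                                                                                                                                import numpy as np

                                                                                                                                                                                                                rng = np.random.default_rng(0)
                                                                                                                                                                                                                citations = rng.zipf(a=2.0, size=100_000)  # exponent chosen for illustration
                                                                                                                                                                                                                print((citations < 100).mean())            # ~0.99 of papers start at the lowest level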

                                                                                                                                                                                                            • melagonster 7 days ago

                                                                                                                                                                                                              Unfortunately, reviewers do not get a salary for this...

                                                                                                                                                                                                          • httpsterio 7 days ago

                                                                                                                                                                                                            Depending on the publication, the reviewers might not even know who the authors are.

                                                                                                                                                                                                            • sharth 7 days ago

                                                                                                                                                                                                              But the journal editor should.

                                                                                                                                                                                                          • foxglacier 7 days ago

                                                                                                                                                                                                            Or maybe it doesn't matter. He got them published anyway and just lost some prestigious journal points on his career. Science/math was the winner on the day and that's the whole point of it. Maybe some of those lower ranked journals are run better and legitimately chipping away at the prestige of the top ones due to their carelessness.

                                                                                                                                                                                                            • bumby 7 days ago

                                                                                                                                                                                                              Research and publication incur opportunity costs. For every manuscript that has to be reworked and submitted elsewhere, you’re losing the ability to do new research. So a researcher is left trying to balance the cost/benefit of additional time investment. Sometimes that results in a higher quality publication, sometimes it results in abandoning good (or bad) work, and sometimes it just wastes time.

                                                                                                                                                                                                              • melagonster 7 days ago

                                                                                                                                                                                                                foxglacier offered a very good point! If someone is as talented as Tao, perhaps this is the time to use that influence to improve the journals (like what he did here).

                                                                                                                                                                                                            • nine_k 7 days ago

                                                                                                                                                                                                              It's as if big journals are after some drama. Or excitement at least. Not just an important result, but a groundbreaking result in its own right. If it's a relatively small achievement that finishes a long chain of gradual progress, it had better be about some really famous problem, like Fermat's last theorem, Poincaré's conjecture, etc.

                                                                                                                                                                                                              I wonder if it's actually optimal from the journal's selfish POV. I would expect it to want to publish articles that would be cited most widely. These should be results that are important, that is, hubs for more potential related work, rather than impressive but self-contained results.

                                                                                                                                                                                                              • Salgat 7 days ago

                                                                                                                                                                                                                This is all due to the perverse incentives of modern academia prioritizing quantity over quality, flooding journals with an unending churn of low-effort garbage.

                                                                                                                                                                                                                • bruce511 7 days ago

                                                                                                                                                                                                                  There are easily tens of thousands of researchers globally. If every one did a single paper per year, that would still be way more than journals could realistically publish.

                                                                                                                                                                                                                  Since it is to some extent a numbers game, yes, academics (especially newer ones looking to build reputation) will submit quantity over quality. More tickets in the lottery means more chances to win.

                                                                                                                                                                                                                  I'm not sure though how you change this. With so many voices shouting for attention it's hard to distinguish "quality" from the noise. And what does it even mean to prioritize "quality"? Is science limited to 10 advancements per year? 100? 1000? Should useful work in niche fields be ignored simply because the fields are niche?

                                                                                                                                                                                                                  Is it helpful to have academics on staff for multiple years (decades?) before they reach the standard of publishing quality?

                                                                                                                                                                                                                  I think perhaps the root of the problem you are describing is less one of "quantity over quality" and more one of an ever-growing "industry" where participants are competing against more and more people.

                                                                                                                                                                                                                  • Salgat 7 days ago

                                                                                                                                                                                                                    Perhaps you have better insight into this: why do you think the quantity of papers published is an appropriate primary incentive for professors/researchers? Or are you saying that it's simply unfixable and we must accept it? As far as I'm aware, the quantity of papers published has no relevance to their value in contributing to the scientific record, and focusing on quantity is a very inappropriate and misleading metric of a researcher's actual contributions. And it isn't purely a numbers game for most people: your average professor has their entire career tied to quantity, from getting PhD candidates through in a timely manner to acquiring grants. All of it hinges on quantity.

                                                                                                                                                                                                                    • eru 7 days ago

                                                                                                                                                                                                                      > [...] way more than journals could realistically publish.

                                                                                                                                                                                                                      In what sense? If you put it on a website, you can publish a lot more without breaking a sweat.

                                                                                                                                                                                                                      People who want a dead tree version can print it out on demand.

                                                                                                                                                                                                                      • bruce511 7 days ago

                                                                                                                                                                                                                        Publishing in the sense of reviewing, editing, etc. Distribution is the easy part.

                                                                                                                                                                                                                        • eru 7 days ago

                                                                                                                                                                                                                          Well, but that scales with the number of people.

                                                                                                                                                                                                                          The scientists themselves are working as reviewers.

                                                                                                                                                                                                                          More scientists writing papers also means more scientists available for reviewing papers.

                                                                                                                                                                                                                          And as you say, distribution is easy, so you can do reviewing after publishing instead of doing it before.

                                                                                                                                                                                                                          • bumby 7 days ago

                                                                                                                                                                                                                            The featured article demonstrates that good review may not be a function of the number of reviewers available. I personally think that with a glut of reviewers, there's a higher chance an editor will assign a referee who doesn't have the capability (or time!) to perform an adequate review and manuscripts will be rejected for poor reasoning.

                                                                                                                                                                                                                            • eru 6 days ago

                                                                                                                                                                                                                              Yes, that's why I am suggesting to 'publish first, review at your leisure'.

                                                                                                                                                                                                                              Just like what we are doing with blog posts or web comics or novels.

                                                                                                                                                                                                                              • bumby 6 days ago

                                                                                                                                                                                                                                I think the problem with a “publish first” paradigm is that it creates an enormous amount to sift through that is of unverified quality. We’re already publishing ever increasing amounts even with journal gatekeepers.

                                                                                                                                                                                                                                Replicating research is already difficult. Finding quality research under the publish first approach will be like trying to find a needle in a haystack and I fear considerable research will be wasted on dead ends.

                                                                                                                                                                                                                                • eru 3 days ago

                                                                                                                                                                                                                                  > Replicating research is already difficult. Finding quality research under the publish first approach will be like trying to find a needle in a haystack and I fear considerable research will be wasted on dead ends.

                                                                                                                                                                                                                                  I don't think I ever heard anyone complain that eg Arxiv makes replicating research harder?

                                                                                                                                                                                                                                  • bumby 3 days ago

                                                                                                                                                                                                                                    Well, I guess I'm one, if you expand the consideration beyond a single arbitrary paper. The operative word in my previous comment is “quality”. Since we operate in the real world with constraints on resources like time and labor, making the haystack needlessly big can become a problem for replication. I want to focus my time on the highest-quality papers because they are the most likely to be replicated in a useful manner. There is no sorting mechanism in open publishing, so I may waste inordinate amounts of time trying to read and replicate crap. Publishing is far from perfect, but it does help separate the wheat from the chaff. (This point has already been discussed in this thread, so I won't belabor the conversation further.)

                                                                                                                                                                                                                  • grepLeigh 7 days ago

                                                                                                                                                                                                                    What's the compensation scheme for reviewers?

                                                                                                                                                                                                                    Are there any mechanisms to balance out the "race to the bottom" observed in other types of academic compensation? e.g. increase of adjunct/gig work replacing full-time professorship.

                                                                                                                                                                                                                    Do universities require staff to perform a certain number of reviews in academic journals?

                                                                                                                                                                                                                    • hanche 7 days ago

                                                                                                                                                                                                                      Normally, referees are unpaid. You're just supposed to do your share of referee work. And then the publisher sells the fruits of all that work (research and refereeing) back to universities at a steep price. Academic publishing is one of the most profitable businesses on the planet! But universities and academics are fighting back. They have been for a few years, but the fight is not yet over.

                                                                                                                                                                                                                      • throwaway2037 7 days ago

                                                                                                                                                                                                                        If unis "win", what is the likely outcome?

                                                                                                                                                                                                                        • bumby 7 days ago

                                                                                                                                                                                                                          More/easier/cheaper dissemination of research.

                                                                                                                                                                                                                      • SJC_Hacker 7 days ago

                                                                                                                                                                                                                        > Do universities require staff to perform a certain number of reviews in academic journals?

                                                                                                                                                                                                                        No. Reviewers mostly do it because it's expected of them, and because they want to publish their own papers so they can get grants.

                                                                                                                                                                                                                        In the end, the university only cares about the grant (money), because they get a cut - somewhere between 30% and 70%, depending on the institution/field - for "overhead".

                                                                                                                                                                                                                        It's like the mafia - everyone has a boss they kick up to.

                                                                                                                                                                                                                        My old boss (PI on an R01) explained it like this:

                                                                                                                                                                                                                        Ideas -> Grant -> Money -> Equipment/Personnel -> Experiments -> Data -> Paper -> Submit/Review/Publish (hopefully) -> Ideas -> Grant

                                                                                                                                                                                                                        If you don't review, go to conferences, etc., then it's much less likely your own papers will get published, and you won't get approved for grants.

                                                                                                                                                                                                                        Sadly, there is still a bit of a "junior high popularity contest", scratch-my-back-and-I'll-scratch-yours dynamic present in even "highly respected" science journals.

                                                                                                                                                                                                                        I hear this from basically every scientist I've known. Even successful ones - not just the marginal ones.

                                                                                                                                                                                                                        • davrosthedalek 7 days ago

                                                                                                                                                                                                                          While most of what you write is true to some extent, I do not see how reviewing will get your paper published, except maybe in cases where the authors can guess the reviewer. It's normally anonymous.

                                                                                                                                                                                                                          • SJC_Hacker 7 days ago

                                                                                                                                                                                                                            The editor does, though, and they all know each other. They would know who's not refereeing - and word gets around.

                                                                                                                                                                                                                        • tokinonagare 7 days ago

                                                                                                                                                                                                                          I don't think it's a money problem. It's more like a framing issue, with some reviewers being too narrow-minded or lacking background knowledge on the topic of the paper. It's not uncommon to have a full lab of people focusing on very different things; when you look at the details, the researchers' exact interests don't overlap much.

                                                                                                                                                                                                                          • davrosthedalek 7 days ago

                                                                                                                                                                                                                            Typically, at least in physics (but as far as I know in all sciences), it's not compensated, and the reviewers are anonymous. Some journals try to change this, with some "reviewer coins", or Nature, which now publishes reviewer names if a paper is accepted and if the reviewer agrees. I think these are bad ideas.

                                                                                                                                                                                                                            Professors are expected to review by their employer, typically, and it's a (very small) part of the tenure process.

                                                                                                                                                                                                                            • paulpauper 7 days ago

                                                                                                                                                                                                                              It's implicitly understood that volunteer work makes the publishing process 'work'. It's supposed to be a level playing field where money does not matter.

                                                                                                                                                                                                                              • jasonfarnon 7 days ago

                                                                                                                                                                                                                                > Do universities require staff to perform a certain number of reviews in academic journals?

                                                                                                                                                                                                                                Depends on what you mean by "require". At most research universities it is a plus when reviewing tenure files, bonuses, etc. It is a sign that someone cares about your work, and the quality of the journal seeking your review matters. If it were otherwise, faculty wouldn't list the journals they have reviewed for on their CVs. If no one could ever find out about a reviewer's efforts - e.g., if the process were double-blind to everyone involved - the setup wouldn't work.

                                                                                                                                                                                                                                • canjobear 7 days ago

                                                                                                                                                                                                                                  There is no compensation for reviewers, and usually no compensation for editors. It’s effectively volunteer work. I agree to review a paper if it seems interesting to me and I want to effectively force myself to read it a lot more carefully than normal. It’s hard work, especially if there is a problem with the paper, because you have to dig out the problem and explain it clearly. An academic could refuse to do any reviews with essentially no formal consequences, although they’d get a reputation as a “bad citizen” of some kind.

                                                                                                                                                                                                                                  • acomjean 7 days ago

                                                                                                                                                                                                                                    I know from some of my peers who reviewed biology (genetics) papers that they weren't compensated.

                                                                                                                                                                                                                                    I was approached to review something for no compensation as well, but I was a bad fit.

                                                                                                                                                                                                                                  • wrsh07 7 days ago

                                                                                                                                                                                                                                    Right - it's somewhat similar to code review

                                                                                                                                                                                                                                    Sometimes one person is looking for an improvement in this area while someone else cares more about that other area

                                                                                                                                                                                                                                    This is totally reasonable! (Ideally if they're contradicting each other you can escalate to create a policy that prevents future contradictions of that sort)

                                                                                                                                                                                                                                  • bradleyjg 7 days ago

                                                                                                                                                                                                                                    This seems reasonable?

                                                                                                                                                                                                                                    Suppose the full result is worth 7 impact points, which is broken up into 5 points for the partial result and 2 points for the fix. The journal has a threshold of 6 points for publication.

                                                                                                                                                                                                                                    Had the authors held the paper until they had the full result, the journal would have published it, but neither part was significant enough.

                                                                                                                                                                                                                                    Scholarship is better off for them not having done so, because someone else might have gotten the fix, but the journal seems to have acted reasonably.
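
                                                                                                                                                                                                                                    A toy rendering of that arithmetic, assuming the hypothetical "impact point" scale above (the numbers are from the comment; the scale itself is invented):

                                                                                                                                                                                                                                        # The bundled result clears the journal's bar, but neither part does alone.
                                                                                                                                                                                                                                        THRESHOLD = 6
                                                                                                                                                                                                                                        partial, fix = 5, 2

                                                                                                                                                                                                                                        print(partial >= THRESHOLD)        # False: the partial result is rejected
                                                                                                                                                                                                                                        print(fix >= THRESHOLD)            # False: the later fix is rejected
                                                                                                                                                                                                                                        print(partial + fix >= THRESHOLD)  # True: the held-back full result is accepted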

                                                                                                                                                                                                                                    • tux3 7 days ago

                                                                                                                                                                                                                                      If people thought this way - internalizing this publishing-point idea - it would incentivize sitting on your incremental results, fiercely keeping them secret until you can prove the whole bigger result by yourself. However long that might take.

                                                                                                                                                                                                                                      If a series of incremental results were as prestigious as holding off to bundle them, people would have reason to collaborate and complete each other's work more eagerly. Delaying an almost-complete result for a year so that a journal will think it has enough impact points seems straightforwardly net bad; it slows down both progress and collaboration.

                                                                                                                                                                                                                                      • gwerbret 7 days ago

                                                                                                                                                                                                                                        > If people thought this way - internalizing this publishing-point idea - it would incentivize sitting on your incremental results, fiercely keeping them secret until you can prove the whole bigger result by yourself. However long that might take.

                                                                                                                                                                                                                                        This is exactly what people think, and exactly what happens, especially in winner-takes-all situations. You end up with an interesting tension between how long you can wait to build your story, and how long until someone else publishes the same findings and takes all the credit.

                                                                                                                                                                                                                                        A classic example in physics involves the discovery of the J/ψ particle [0]. Samuel Ting's group at MIT discovered it first (chronologically) but Ting decided he needed time to flesh out the findings, and so sat on the discovery and kept it quiet. Meanwhile, Burton Richter's group at Stanford also happened upon the discovery, but they were less inclined to be quiet. Ting found out, and (in a spirit of collaboration) both groups submitted their papers for publication at the same time, and were published in the same issue of Physical Review Letters.

                                                                                                                                                                                                                                        They both won the Nobel 2 years later.

                                                                                                                                                                                                                                        0: https://en.wikipedia.org/wiki/J/psi_meson

                                                                                                                                                                                                                                        • jvanderbot 7 days ago

                                                                                                                                                                                                                                          Wait, how did they both know that they both discovered it, but only after they had both discovered it?

                                                                                                                                                                                                                                          • davrosthedalek 7 days ago

                                                                                                                                                                                                                                            People talk. The field isn't that big.

                                                                                                                                                                                                                                          • ahartmetz 7 days ago

                                                                                                                                                                                                                                            They got an optimal result in that case, isn't that nice.

                                                                                                                                                                                                                                          • chongli 7 days ago

                                                                                                                                                                                                                                            The reasonable thing to do here is to discourage all of your collaborators from ever submitting anything to that journal again. Work with your team, submit incremental results to journals who will accept them, and let the picky journal suffer a loss of reputation from not featuring some of the top researchers in the field.

                                                                                                                                                                                                                                            • bennythomsson 7 days ago

                                                                                                                                                                                                                                              To supply a counter viewpoint here... The opposite is the "least publishable unit", which leads to loads and loads of almost-nothing results flooding the journals and other publication outlets. It would be hard to keep up with all that if there weren't a reasonable threshold. If anything, I find that threshold currently too low rather than too high. The "publish or perish" principle also pushes people that way.

                                                                                                                                                                                                                                              • lupire 7 days ago

                                                                                                                                                                                                                                                That's much less of a problem than the fact that papers are such poor media for sharing knowledge. They are published too slowly to be immediately useful versus just a quick chat, and simultaneously written in too rushed a way to comprehensively educate people on progress in the field.

                                                                                                                                                                                                                                                • bennythomsson 7 days ago

                                                                                                                                                                                                                                                  > versus just a quick chat,

                                                                                                                                                                                                                                                  Everybody is free to keep a blog for this kind of informal chat/brainstorming communication. Paper publications should be well-written, structured, thought-through results that make it worthwhile for the reader to spend their time. Anything else belongs in a blog post.

                                                                                                                                                                                                                                                  • ahartmetz 7 days ago

                                                                                                                                                                                                                                                    The educational and editorial quality of papers from before 1980 or so beats just about anything published today. That is what publish or perish - impact factor - smallest publishable unit culture did.

                                                                                                                                                                                                                                                • slow_typist 7 days ago

                                                                                                                                                                                                                                                  Don’t know much about publishing in maths, but in some disciplines it is clearly incentivised to create the biggest possible number of papers out of a single research project, leading automatically to incremental publishing of results. I call it atomic publishing (from Greek atomos, indivisible), since such a paper contains only one result that cannot be split up any further.

                                                                                                                                                                                                                                                  • lupire 7 days ago

                                                                                                                                                                                                                                                    Andrew Wiles spent 6 years working on 1 paper, and then another year working on a minor follow-up.

                                                                                                                                                                                                                                                    https://en.m.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27...

                                                                                                                                                                                                                                                    • hanche 7 days ago

                                                                                                                                                                                                                                                      Or cheese slicer publishing, as you are selling your cheese one slice at a time. The practice is usually frowned upon.

                                                                                                                                                                                                                                                      • dataflow 7 days ago

                                                                                                                                                                                                                                                        I thought this was called salami slicing in publication.

                                                                                                                                                                                                                                                      • SoftTalker 7 days ago

                                                                                                                                                                                                                                                        Science is almost all incremental results. There's far more incentive to get published now than there is to "sit on" an incremental result hoping to add to it to make a bigger splash.

                                                                                                                                                                                                                                                        • Too 7 days ago

                                                                                                                                                                                                                                                          Academic science discovers continuous integration.

                                                                                                                                                                                                                                                          In the software world, a steady stream of small, individually reviewable commits, each delivering an incremental slice of value, is often the desired workflow.

                                                                                                                                                                                                                                                          Dropping a 20,000-files-changed bomb titled "Complete rewrite of the Linux kernel audio subsystem" is not seen as prestigious. Repeated, gradual contributions and involvement in the community are.
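
                                                                                                                                                                                                                                                          To put a toy number on why small reviewable units win (the superlinear cost model below is purely my own assumption, not data from anywhere):

                                                                                                                                                                                                                                                              def review_effort(lines_changed: int) -> float:
                                                                                                                                                                                                                                                                  # Assumed: reviewer effort grows superlinearly with diff size.
                                                                                                                                                                                                                                                                  return lines_changed ** 1.5

                                                                                                                                                                                                                                                              ten_small = sum(review_effort(100) for _ in range(10))
                                                                                                                                                                                                                                                              one_big = review_effort(1000)
                                                                                                                                                                                                                                                              print(f"ten 100-line commits: {ten_small:.0f}")  # -> 10000
                                                                                                                                                                                                                                                              print(f"one 1000-line commit: {one_big:.0f}")    # -> 31623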

                                                                                                                                                                                                                                                          • bradleyjg 7 days ago

                                                                                                                                                                                                                                                            The big question here is whether journal space is a limited resource. Obviously it was at one point.

                                                                                                                                                                                                                                                            Supposing it is, you have to trade off publishing these incremental results against publishing someone else’s complete result.

                                                                                                                                                                                                                                                            What if it had taken ten papers to get there instead of two? For a sufficiently important problem, sure, but the interesting case is a problem that is just barely interesting enough to publish in complete form.

                                                                                                                                                                                                                                                            • parpfish 7 days ago

                                                                                                                                                                                                                                                              The limiting factor isn't journal space but the audience's attention. (In theory) a journal's publishing restrictions help to filter and condense information, so that the audience is maximally informed given that they will only read a fixed amount.

                                                                                                                                                                                                                                                              • btilly 7 days ago

                                                                                                                                                                                                                                                                Journal space is not a limited resource. Premium journal space is.

                                                                                                                                                                                                                                                                That's because every researcher has a hierarchy of journals that they monitor. Prestigious journals are read by many researchers. So you're essentially competing for access to the limited attention of many researchers.

                                                                                                                                                                                                                                                                Conversely, publishing in a premium journal has more value than a regular journal. And the big scientific publishers are therefore in competition to make sure that they own the premium journals. Which they have multiple tricks to ensure.

                                                                                                                                                                                                                                                                Interestingly, their tricks only really work in science. That's because in the humanities, it is harder to establish objective opinions about quality, whereas in science everyone can agree that Nature generally has the best papers. So attempting to raise the price on a prestigious science journal works. Attempting to raise the price on a prestigious humanities journal results in its circulation going down, which makes it less prestigious.

                                                                                                                                                                                                                                                                • waldrews 7 days ago

                                                                                                                                                                                                                                                                  Space isn't a limited resource, but prestige points are deliberately limited, as a proxy for the publications' competition for attention. We can appreciate the irony while considering the outcome reasonable: after all, the results weren't kept out of the literature. They just got published with a label that more or less puts them lower in the search ranking for the next mathematician who looks up the topic.

                                                                                                                                                                                                                                                                • jvanderbot 7 days ago

                                                                                                                                                                                                                                                                  Hyper-focusing on a single journal publication is going to lead to absurdities like this. A researcher is judged by the total delta of their improvements, at least by their peers and by future humanity (the sum of all points, not the max).
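
                                                                                                                                                                                                                                                                  A toy version of the sum-not-max scoring, with numbers entirely made up:

                                                                                                                                                                                                                                                                      career_steady = [3, 2, 4, 3, 5]   # many modest results
                                                                                                                                                                                                                                                                      career_onehit = [5]               # one big result

                                                                                                                                                                                                                                                                      # Judged by the max, the careers look identical; by the sum, they don't.
                                                                                                                                                                                                                                                                      print(sum(career_steady), max(career_steady))  # 17 5
                                                                                                                                                                                                                                                                      print(sum(career_onehit), max(career_onehit))  # 5 5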

                                                                                                                                                                                                                                                                  • krick 7 days ago

                                                                                                                                                                                                                                                                    It is easy to defend either side of the argument by inflating the pitfalls of the other approach ad absurdum. This is silly. Obviously, balance is the key, as always.

                                                                                                                                                                                                                                                                    Instead, we should look at the side on which the, uh, industry currently tends to err. And this is definitely not the "sitting on your incremental results" side. The current motto of academia is to publish more. It doesn't matter if your papers are crap, it doesn't matter if you already have significant results and are working on something big: you have to publish to keep your position. How many crappy papers you release is the KPI of academia.

                                                                                                                                                                                                                                                                    I mean, I can imagine a world where it would have been a good idea. I think it's a better world, one where science journals don't exist. Instead, anybody can put any crap on ~arxiv.org~ Sci-Hub and anybody can leave comments and upvote/downvote stuff; papers have actual links and all the other modern social-network mechanics, up to the point where you can have a feed of the most interesting new papers tailored specially for you. This is open-source and non-profit; 1/1000 of what universities used to pay for journal subscriptions is used to maintain the servers.

                                                                                                                                                                                                                                                                    Most importantly, because of some nice search screens or whatever, the paper's metadata becomes more important than the paper itself, and in the end we are able to assign a simple 10-word summary of the current community consensus on the paper: whether it proves anything, "almost proves" anything, has been disproved 10 times, 20 research teams failed to reproduce the results, or 100 people (see names in the popup) tried to read this gibberish and failed to understand it. Nothing gets retracted, ever.
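
                                                                                                                                                                                                                                                                    (A minimal sketch of the data model I'm imagining; every name here is invented, not any real system:)

                                                                                                                                                                                                                                                                        from dataclasses import dataclass, field

                                                                                                                                                                                                                                                                        @dataclass
                                                                                                                                                                                                                                                                        class Review:
                                                                                                                                                                                                                                                                            reviewer: str
                                                                                                                                                                                                                                                                            vote: int        # +1 or -1
                                                                                                                                                                                                                                                                            comment: str = ""

                                                                                                                                                                                                                                                                        @dataclass
                                                                                                                                                                                                                                                                        class Paper:
                                                                                                                                                                                                                                                                            title: str
                                                                                                                                                                                                                                                                            url: str
                                                                                                                                                                                                                                                                            reviews: list = field(default_factory=list)

                                                                                                                                                                                                                                                                            def score(self) -> int:
                                                                                                                                                                                                                                                                                # The consensus label is just an aggregate; nothing is retracted.
                                                                                                                                                                                                                                                                                return sum(r.vote for r in self.reviews)

                                                                                                                                                                                                                                                                            def consensus(self) -> str:
                                                                                                                                                                                                                                                                                s = self.score()
                                                                                                                                                                                                                                                                                if s < 0:
                                                                                                                                                                                                                                                                                    return "disputed"
                                                                                                                                                                                                                                                                                return "endorsed" if s > 10 else "under review"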

                                                                                                                                                                                                                                                                    Then it would be great. But as things are and all these "highly reputable journals" keep being a plague of society, it is actually kinda nice that somebody encourages you to finish your stuff before publishing.

                                                                                                                                                                                                                                                                    Now, should this paper of Tao's have been rejected? I don't know; I think not. Especially the second one. But it's somewhat refreshing.

                                                                                                                                                                                                                                                                    • YetAnotherNick 7 days ago

                                                                                                                                                                                                                                                                      Two submissions in a medium-reputation journal do not carry significantly less prestige than one submission in a high-reputation journal.

                                                                                                                                                                                                                                                                      • JJMcJ 7 days ago

                                                                                                                                                                                                                                                                        Gauss did something along these lines and held back mathematical progress by decades.

                                                                                                                                                                                                                                                                        • lupire 7 days ago

                                                                                                                                                                                                                                                                          Gauss had plenty of room for slack, giving people time to catch up on his work.

                                                                                                                                                                                                                                                                          Every night Gauss went to sleep, mathematics was held back a week.

                                                                                                                                                                                                                                                                          • JJMcJ 6 days ago

                                                                                                                                                                                                                                                                            During his college/grad school days, he was going half nuts; ideas would come to him faster than he could write them down.

                                                                                                                                                                                                                                                                            Finally one professor saw what was happening and insisted that Gauss take some time off; being German, that involved walking in the woods.

                                                                                                                                                                                                                                                                      • Arainach 7 days ago

                                                                                                                                                                                                                                                                        These patterns are ultimately detrimental to team/community building, however.

                                                                                                                                                                                                                                                                        You see it in software as well: as a manager in calibration meetings, I have repeatedly seen how it is harder to convince a committee to promote or give a high rating to someone who has delivered a large pile of crucial but individually small projects than to someone with a single large project.

                                                                                                                                                                                                                                                                        This is discouraging to people whose efforts seem to go unrewarded, and it creates a bad incentive to hoard work and avoid sharing until one can land a single large impact. It's disastrous when (as on most software teams) those people don't have significant autonomy over which projects they're assigned.

                                                                                                                                                                                                                                                                        • mlepath 7 days ago

                                                                                                                                                                                                                                                                          Hello, fellow Metamate ;)

                                                                                                                                                                                                                                                                        • cvoss 7 days ago

                                                                                                                                                                                                                                                                          The idea that a small number of reviewers can accurately quantify the importance of a paper as some number of "impact points," and the idea that a journal should rely on this number and an arbitrary cutoff to decide publication, are both unreasonable.

                                                                                                                                                                                                                                                                          The journal may have acted systematically, but the system is arbitrary and capricious. Thus, the journal did not act reasonably.

                                                                                                                                                                                                                                                                          • remus 7 days ago

                                                                                                                                                                                                                                                                            > This seems reasonable?

                                                                                                                                                                                                                                                                            In some sense, but it does feel like the journal is missing the bigger picture somewhat. Say the two papers are A and B, and we have A + B = C. The journal is saying they'll publish C, but not A and B!

                                                                                                                                                                                                                                                                            • Nevermark 7 days ago

                                                                                                                                                                                                                                                                              How many step papers before a keystone paper seems reasonable to you?

                                                                                                                                                                                                                                                                              I suspect readers don't find partial-result papers as exciting to read. Unless there is an open invitation to compete on their completion, which would have a purpose and be fun. If papers are not page-turners, then the journal is going to have a hard time keeping subscribers.

                                                                                                                                                                                                                                                                              On the other hand, publishing a proof of a Millennium Problem as several installments is probably a fantastic idea. Time to absorb each contributing result. And the suspense!

                                                                                                                                                                                                                                                                              Then republish the collected papers as a signed special leather limited series edition. Easton, get on this!

                                                                                                                                                                                                                                                                              • slow_typist 7 days ago

                                                                                                                                                                                                                                                                                Publishing partial results is always an invitation to compete on the completion, unless the completion depends on special lab capabilities that take time and money to acquire. There is no need to literally invite anyone.

                                                                                                                                                                                                                                                                                • Nevermark 7 days ago

                                                                                                                                                                                                                                                                                  I meant if the editors found the paper’s problem and progress especially worthy of a competition.

                                                                                                                                                                                                                                                                                • remus 7 days ago

                                                                                                                                                                                                                                                                                  > I suspect readers don't find partial-result papers as exciting to read. Unless there is an open invitation to compete on their completion, which would have a purpose and be fun. If papers are not page-turners, then the journal is going to have a hard time keeping subscribers.

                                                                                                                                                                                                                                                                                  Yeah, I agree, a partial result is never going to be as exciting as a full solution to a major problem. Thinking on it a little more, it seems more of a shame that the journal wasn't willing to publish the first part, as that sounds like it was the bulk of the work toward the end result.

                                                                                                                                                                                                                                                                                  I quite like that he went to publish a less-than-perfect result, rather than sitting on it in the hopes of making the final improvement. That seems in the spirit of collaboration and advancing science, whereas the journal rejecting the paper because it's 98% of the problem rather than the full thing seems a shame.

                                                                                                                                                                                                                                                                                  Having said that, I guess as a journal editor you have to make these calls all the time. I'm sure every author pitches their work in the best light ("There's a breakthrough just around the corner...") and plenty of those ideas turn out to be dead ends.

                                                                                                                                                                                                                                                                                • cubefox 7 days ago

                                                                                                                                                                                                                                                                                  ... A and B separately.

                                                                                                                                                                                                                                                                                • pinkmuffinere 7 days ago

                                                                                                                                                                                                                                                                                  I agree this is reasonable from the individual publisher standpoint. I once received feedback from a reviewer that I was "searching for the minimum publishable unit", and in some sense the reviewer was right -- as soon as I thought the result could be published I started working towards the publication. A publisher can reasonably resist these kinds of papers, as you're pointing out.

                                                                                                                                                                                                                                                                                  I think the impact to scholarship in general is less clear. Do you immediately publish once you get a "big enough" result, so that others can build off of it? Or does this needlessly clutter the field with publications? There's probably some optimal balance, but I don't think the right balance is immediately clear.

                                                                                                                                                                                                                                                                                  • nextn 7 days ago

                                                                                                                                                                                                                                                                                    Why would publishing anything new needlessly clutter the field?

                                                                                                                                                                                                                                                                                    Discovering something is hard, proving it correct is hard, and writing a paper about it is hard. Why delay all this?

                                                                                                                                                                                                                                                                                    • bumby 7 days ago

                                                                                                                                                                                                                                                                                      Playing devil's advocate: there isn't a consensus on what is incremental vs what is derivative. In theory, the latter may not warrant publication, because anyone familiar with the state of the art could connect the dots without reading about it in a publication.

                                                                                                                                                                                                                                                                                    • SilasX 7 days ago

                                                                                                                                                                                                                                                                                      Ouch. That would hurt to hear. It's like they're effectively saying, "yeah, obviously you came up with something more significant than this, which you're holding back. No one would be so incapable that this was as far as they could take the result!"

                                                                                                                                                                                                                                                                                      • pinkmuffinere 7 days ago

                                                                                                                                                                                                                                                                                        Thankfully the reviewer feedback was of such low quality in general that it had little impact on my feelings, haha. I think that’s unfortunately common. My advisor told me “leave some obvious but unimportant mistakes, so they have something to criticize, they can feel good, and move on”. I honestly think that was good advice.

                                                                                                                                                                                                                                                                                    • saghm 7 days ago

                                                                                                                                                                                                                                                                                      If this was actually how stuff was measured, it might be defensible. I'm having trouble believing that things are actually done this objectively rather than the rejections being somewhat arbitrary. Do you think that results can really be analyzed and compared in this way? How do you know that it's 5 and 2 and not 6 and 1 or 4 and 3, and how do you determine how many points a full result is worth in total?

                                                                                                                                                                                                                                                                                      • omoikane 7 days ago

                                                                                                                                                                                                                                                                                        But proportionally, wouldn't a solution without an epsilon loss be much better than a solution with epsilon?

                                                                                                                                                                                                                                                                                        I am not sure what exact conjecture the author solved, but if the epsilon difference is between an approximate solution and an exact solution, and the journal rejected the exact solution because it was "only an epsilon improvement", I might question how reputable that journal really is.

                                                                                                                                                                                                                                                                                        • Brian_K_White 7 days ago

                                                                                                                                                                                                                                                                                          It's demonstrably (there is one demonstration right there) self-defeating and counter-productive, and so by definition not reasonable.

                                                                                                                                                                                                                                                                                          Each individual step along the way merely has some rationale, but rationales come in the full spectrum of quality.

                                                                                                                                                                                                                                                                                          • sunshowers 7 days ago

                                                                                                                                                                                                                                                                                            Given the current incentive scheme in place it's locally reasonable, but the current incentives suck. Is the goal to score the most impact points or to advance our understanding of the field?

                                                                                                                                                                                                                                                                                            • mnky9800n 7 days ago

                                                                                                                                                                                                                                                                                              In my experience, it depends on the scientist. But it's hard to know what an advance is. Like, people long searched for evidence of the æther before giving up and accepting that light doesn't need a medium to travel in. Perhaps 100 years from now people will laugh at the "Attention Is All You Need" paper that led to the LLM craze. Who knows. That's why it's important to give space to science. From my understanding, Lorenz worked for five years as a research scientist without publishing before writing his atmospheric circulation paper. That paper essentially created the field of chaos theory. Would he be able to do the same today? Maybe? Or maybe counting papers, impact factors, and all these other metrics turned science into a game instead of an intellectual pursuit. Shame we cannot ask Lorenz or Maxwell about their times as scientists. They are dead.

                                                                                                                                                                                                                                                                                            • Ar-Curunir 7 days ago

                                                                                                                                                                                                                                                                                              I don't think that's a useful way to think about this, especially when there's so little information provided. Reviewing is a capricious process.

                                                                                                                                                                                                                                                                                            • stevage 7 days ago

                                                                                                                                                                                                                                                                                              It actually seems reasonable for a journal that has limited space and too many submissions. What's the alternative: accept one or two of the half-proofs, and bump one or two other papers in the process?

                                                                                                                                                                                                                                                                                              • generationP 7 days ago

                                                                                                                                                                                                                                                                                                To be the devil's advocate: Breaking a result up into little pieces to increase your paper count ("salami-slicing") is frowned upon.

                                                                                                                                                                                                                                                                                                Of course this is not what Terry Tao tried to do, but it was functionally indistinguishable from it to the reviewers/editors.

                                                                                                                                                                                                                                                                                                • JJMcJ 7 days ago

                                                                                                                                                                                                                                                                                                  Do Reddit mods also edit math journals?

                                                                                                                                                                                                                                                                                                  • dumbfounder 7 days ago

                                                                                                                                                                                                                                                                                                    Sort of. But it makes sense. They missed out the first time and don't want to be an also-ran. If he had gone for the glory from the start, it might have been different. The prestigious journals probably don't want incremental papers.

                                                                                                                                                                                                                                                                                                    • gxs 7 days ago

                                                                                                                                                                                                                                                                                                      Are you sure this wasn’t an application to the DMV or an attempt to pull a building permit?

                                                                                                                                                                                                                                                                                                      • paulpauper 7 days ago

                                                                                                                                                                                                                                                                                                        Don't you hate it when you lose your epsilon, only to find it when it's too late?

                                                                                                                                                                                                                                                                                                        I wonder what the conjecture was?

                                                                                                                                                                                                                                                                                                        • pentae 7 days ago

                                                                                                                                                                                                                                                                                                          So it's basically like submitting an iOS app to the app store.

                                                                                                                                                                                                                                                                                                        • tinktank 7 days ago

                                                                                                                                                                                                                                                                                                          I wish I had an IQ that high...

                                                                                                                                                                                                                                                                                                          • aleph_minus_one 7 days ago

                                                                                                                                                                                                                                                                                                            If you want to become smarter in math, read and attempt to understand brutally hard math papers and textbooks. Torture yourself harder than any time before in your life. :-)

                                                                                                                                                                                                                                                                                                            • revskill 7 days ago

                                                                                                                                                                                                                                                                                                              IQ means interesting questions.