• gus_massa an hour ago

    > We therefore conclude that theoretically motivated experiment choice is potentially damaging for science, but in a way that will not be apparent to the scientists themselves.

    They are analyzing a toy model of science. The details are in figure 1. They have a search space made of a few Gaussians, like

    f(x,y,z) = A0 * exp(-(x-x0)^2-(y-y0)^2-(z-z0)^2) + A1 * exp(-(x-x1)^2-(y-y1)^2-(z-z1)^2)

    but maybe in more than 3 dimensions and maybe with more than 2 Gaussians.
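
    Something like this sketch, I think (the centers and amplitudes below are made-up placeholders, not the paper's actual values):

        import numpy as np

        # Toy search landscape: a mixture of isotropic Gaussian bumps.
        # Centers and amplitudes are placeholders, not the paper's numbers.
        centers = np.array([[0.2, 0.3, 0.7],
                            [0.8, 0.6, 0.1]])   # (x0, y0, z0), (x1, y1, z1)
        amps = np.array([1.0, 0.6])             # A0, A1

        def f(point):
            # f = sum_i A_i * exp(-||point - center_i||^2)
            return sum(a * np.exp(-np.sum((point - c) ** 2))
                       for a, c in zip(amps, centers))

        print(f(np.array([0.2, 0.3, 0.7])))     # largest near the bump centers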

    They want the agents to find all of the Gaussians.

    It's somewhat similar to a maximization problem, which is easier. There are many strategies for that, from gradient ascent to random sampling to a million other variants. I like simulated annealing.

    They claim that the best method is random sampling, which only works when the search space is small. It breaks quite fast for high-dimensional problems, unless the Gaussians are so big that they cover most of the space, and perhaps I'm being too optimistic. Add noise and overlapping Gaussians and the problem gets super hard.
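
    A quick back-of-the-envelope check of that claim (the radius and the unit box here are arbitrary choices of mine): sample uniformly and see how often you land near a bump center as the dimension grows.

        import numpy as np

        # How often does a uniform random sample land within a fixed radius of a
        # bump center as the dimension grows? (radius and box are arbitrary here)
        rng = np.random.default_rng(0)
        radius, n_samples = 0.2, 100_000
        for d in (2, 5, 10, 20):
            center = np.full(d, 0.5)
            samples = rng.random((n_samples, d))          # uniform in [0, 1]^d
            hits = np.sum(np.linalg.norm(samples - center, axis=1) < radius)
            print(d, hits / n_samples)                    # hit rate collapses toward 0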

    Let's get to a realistic example: all the molecules with 6 Carbons and 12 Hydrogens. Let's try to find all of them and their stable 3D configurations. This is chemistry from the first year of university, perhaps earlier; no cutting-edge science.

    You have 18 atoms, so 18 * 3 = 54 dimensions, and the surface of -energy has a lot of mountain ranges and nasty stuff, most of it very sharp. Let's try to find the local points of maximal -energy, which is much easier than mapping the full surface. These are the stable molecules, which (usually) have names.

    * There is a cyclic one with 6 Carbons, where each Carbon has 2 Hydrogens: https://en.wikipedia.org/wiki/Cyclohexane Note that it actually has two different 3D variants.

    * There is one with a cycle of 5 Carbons and 1 Carbon attached to the cycle: https://en.wikipedia.org/wiki/Methylcyclopentane

    * There are variants with shorter cycles, but I'm not sure how stable they are and Wikipedia has no page for them.

    * There are also 3 linear versions, where the 6 Carbons form a wavy line and there is a double bond in one of the steps: https://en.wikipedia.org/wiki/1-Hexene I'm not sure why the other two versions have no page in Wikipedia. I think they should be stable, but sometimes it's not a local maximum, or the local maximum is too shallow and the double bond jumps and the Hydrogens reorganize.

    * And there may be other nasty stuff; take a look at the complete list: https://en.wikipedia.org/wiki/C6H12

    And don't try to make the complete list of molecules that also include a few Nitrogens, because the number of molecules explodes exponentially.
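
    For context, the standard way to even attempt this is many local descents from random starting geometries, not blind sampling. A rough sketch with a stand-in energy function (a real C6H12 surface would come from a force field or quantum-chemistry code, not this toy):

        import numpy as np
        from scipy.optimize import minimize

        # Stand-in "energy" with many basins; only illustrates the multi-start idea.
        def energy(coords):
            return np.sum(np.sin(3 * coords) + 0.1 * coords ** 2)

        rng = np.random.default_rng(0)
        n_dims = 54                          # 18 atoms * 3 coordinates
        minima = []
        for _ in range(50):                  # local descent from random geometries
            result = minimize(energy, rng.uniform(-3.0, 3.0, n_dims))
            if not any(np.allclose(result.x, m, atol=1e-2) for m in minima):
                minima.append(result.x)
        print(len(minima), "distinct local minima found so far")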

    So this random sampling method they propose does not even work for an elementary Chemistry problem.

    • Eisenstein an hour ago

      They address this specifically and hand-wave it away:

          Moreover, both random and all other experimentation strategies we examined require constructing a bounded experimental space, a challenge that lies beyond the scope of the current work (see Almaatouq et al., 2024, for further discussion).
      
      I think their conclusion is still important to consider, though. It makes a point beyond the practicalities and more towards the philosophy of approach.
    • MarkusQ 2 hours ago

      This is really interesting, but it appears to hinge on an unstated (and unjustified) assumption: that scientists learn by back propagation, or something sufficiently similar that back propagation is a reasonable model.

      It also:

      * Bakes in the assumption that there are no internal mechanisms to be discovered ("Each environment is a mixture of multivariate Gaussian distributions")

      * Ignores the possibility that their model of falsification is inadequate (they just test more near points with high error).

      * Does a lot of "hopeful naming", which makes the results easy to misinterpret as saying more about like-named things in the real world than they actually do.

      • mjburgess 2 hours ago

        The existence of "experiments" to choose from in the first place is already theory-given. As soon as you've formulated a space of such experiments to explore, almost all your theory work is done.

        • SJMG an hour ago

          What's more, the existence of data (and therefore the differentiation of what is and isn't data) is theory-laden.

      • armchairhacker 29 minutes ago

        In real life, can you choose an experiment perfectly randomly?

        You can ask many people to propose hypotheses and choose one at random, and perhaps with a good sample you get better experiments. You can query a Markov chain until it produces an interpretable hypothesis. But the people, or the Markov chain (because of English itself), have significant bias.

        Also, some experiments have wider-reaching implications than others (this is probably more relevant for the Markov chain, because I expect the hypotheses it forms to be like "frogs can learn to skate").

        • Zobat an hour ago

          I fully admit that I only skimmed the abstract, but I was reminded of an article in Wired about Sergey Brin and his "search for a Parkinson's cure".

          https://www.wired.com/2010/06/ff-sergeys-search/

          He went backwards and started by just collecting an absurd amount of data. Later, while talking to a researcher, he could confirm years of research with a "simple" search of his database.

          • youknownothing 2 hours ago

            This is a thought-provoking idea but, even if true, I don't think it will gain much traction. We humans like to be right and earn awards for our predictions. A Nobel wouldn't feel quite the same if given to someone who just happened to randomly stumble upon something.

            • pixl97 2 hours ago

              I mean a lot of discoveries are things found along the way in search of something else. Look at something like the initial discovery of super glue.

            • selridge 3 hours ago

              Weird that this doesn’t mention grounded theory, a social theory toolkit which people poo-poo for Popperian purposes.

              • MarkusQ 2 hours ago

                I think they poo-poo it because it tends to produce just-so stories that "explain" known facts while saying nothing about anything beyond them. To an extent, all hypotheses arise from observations (and more specifically, the frisson between observations and theoretical expectations), but you can't just stop there. Grounded theory just feels like empiricism with a soft blur filter.

                (This problem is not just limited to social scientists. I think you could, for example, construct a plausible objection to dark matter as an "explanation" that just "saves appearances" on the same basis.)

                • selridge 2 hours ago

                  Yeah, I’m aware of those critiques and they are all correct or at least draw blood.

                  What’s interesting about this paper is the suggestion that perhaps empiricism could do with a soft blur.

                  One might even invoke KJ Healy’s “Fuck Nuance” here as well.

              • lutusp 20 minutes ago

                This idea suffers from a number of practical obstacles:

                One, in a sufficiently advanced field of study, an idea's originator may be the only person able to imagine an experimental test. I doubt that many physicists would have immediately thought that Mercury's unexplained orbital precession would serve to either support or falsify Einstein's General Relativity -- but Einstein certainly could. Same with deflected starlight paths during a solar eclipse (both these effects were instrumental in validating GR).

                Two, scientists are supposed to be the harshest critics of their own ideas, on the lookout for a contradicting observation. This was once part of a scientist's training -- I assume this is still the case.

                Three, the falsifiability criterion. If an experimental proposal doesn't include the possibility of a conclusive falsification, it's not, strictly speaking, a scientific idea. So an idea's originator either has (and publishes) a falsifying criterion, or he doesn't have a legitimate basis for a scientific experiment.

                Here's an example. Imagine if the development of the transistor relied on random experimentation with no preferred outcome. In the event, the inventors at Bell Labs knew exactly what they wanted to achieve -- the project was very focused from the outset.

                Another example. Jonas Salk (polio vaccine) knew exactly what he wanted to achieve, his wasn't a random journey in a forest of Pyrex glassware. It's hard to imagine Salk's result arising from an aimless stochastic exploration.

                So it seems science relies on people's integrity, not avoidance of any particular focus. If integrity can't be relied on, perhaps we should abandon the people, not the methods.