• dfgtyu65r a day ago

    So, if I understand correctly, they're using Hodgkin-Huxley LIF neurons trained end-to-end in a graph neural network. Through training to reproduce the neural data, the network learns the underlying connectivity of the neural system?

    This seems very cool, but I'm surprised this kind of thing attracts VC money! I'm also skeptical how well this would scale due to the inherently underdetermined nature of neural recordings, but I've only skimmed the PDF so may be missing their goals and approach.
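
    To make that concrete, here's a rough sketch of what I understand the approach to be (my own toy reconstruction in PyTorch, not their code; every name and shape here is made up): simulate a small differentiable network with a learnable weight matrix, fit it to recorded activity by gradient descent, and read the inferred connectivity off the weights. A real HH-style model would add per-neuron gating ODEs, and hard spike thresholds would need surrogate gradients.

        # Toy connectivity-inference sketch (my guess at the idea, not their code)
        import torch

        n_neurons, n_steps, dt, tau = 10, 200, 1e-3, 20e-3

        torch.manual_seed(0)
        W_true = 0.5 * torch.randn(n_neurons, n_neurons)  # pretend "real" circuit

        def simulate(W, I_ext):
            # Leaky-integrator dynamics: dv/dt = (-v + W @ r + I_ext) / tau.
            # tanh is a smooth stand-in for spiking so the whole rollout
            # stays differentiable end to end.
            v = torch.zeros(n_neurons)
            vs = []
            for t in range(n_steps):
                r = torch.tanh(v)
                v = v + (dt / tau) * (-v + W @ r + I_ext[t])
                vs.append(v)
            return torch.stack(vs)

        I_ext = 0.5 * torch.randn(n_steps, n_neurons)  # known, shared input drive
        target = simulate(W_true, I_ext).detach()      # stands in for recordings

        W = torch.zeros(n_neurons, n_neurons, requires_grad=True)
        opt = torch.optim.Adam([W], lr=0.05)
        for step in range(500):
            opt.zero_grad()
            loss = ((simulate(W, I_ext) - target) ** 2).mean()
            loss.backward()
            opt.step()

        print("mean |W - W_true|:", (W - W_true).abs().mean().item())

    In this toy the weights are roughly recoverable because the input drive is known and shared; with real recordings you observe a tiny subset of neurons and don't know the inputs, which is exactly the underdetermination I'm worried about.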

    • marmaduke a day ago

      HH is kinda the opposite of LIF on the abstraction spectrum.

      • dfgtyu65r a day ago

        I mean HH is an elaboration of the LIF with the addition of several equations for the various ion channels, but yeah I see what you mean.
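
        To put equations on it (standard textbook forms, nothing from their PDF): LIF is a single linear membrane equation plus a reset rule,

          \tau_m \frac{dV}{dt} = -(V - V_{\text{rest}}) + R\,I(t), \qquad V \to V_{\text{reset}} \ \text{when}\ V \ge V_{\text{th}},

        while HH replaces the fixed leak-and-reset with voltage-gated Na+/K+ conductances coupled to gating-variable ODEs:

          C_m \frac{dV}{dt} = I(t) - \bar{g}_{\text{Na}}\, m^3 h\,(V - E_{\text{Na}}) - \bar{g}_{\text{K}}\, n^4\,(V - E_{\text{K}}) - g_L\,(V - E_L),
          \frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\}.

        Same family of ideas, but HH sits much closer to the biophysics, which I take to be your point about the abstraction spectrum.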

    • marmaduke a day ago

      having worked on whole brain modeling for the last 15 years, and on european infra for supporting this kind of research: this is a terrible buzzword salad. the pdf is on par with a typical master's project.

    • seany62 a day ago

      Based on my very limited knowledge of how current "AI" systems work, this is the much better approach to achieving true AI. We've only modeled one small aspect of the human (the neuron) and brute forced it to work. It takes an LLM millions of examples to learn what a human can in a couple of minutes, so how are we even "close" to achieving AGI?

      Should we not mimic our biology as closely as possible rather than trying to model how we __think__ it works (e.g. chain of thought)? This is how neural networks got started, right? Recreate something nature has taken millions of years developing and see what happens. This stuff is so interesting.

      • pedrosorio a day ago

        > Should we not mimic our biology as closely as possible rather than trying to model how we __think__ it works (e.g. chain of thought)?

        Should we not mimic migrating birds' biology as closely as possible instead of trying to engineer airplanes for transatlantic flight that are only very loosely inspired by the animals that actually fly?

        • aeonik 4 hours ago

          We can do both. Birds are incredibly efficient, but I don't think our materials science and flight controls are advanced enough to mimic them yet.

          Also, for transonic and supersonic flight, I don't think bird tech will ever reach those speeds.

        • robwwilliams 17 hours ago

          Our LLMs are great semantic and syntactic foundations toward AGI. It took 700 million years of metazoan evolution to get to Homo heidelbergensis, our likely ancestral species. It took about 1/1000 of that time to get from there to the moon; maybe only 5,300 years if we start the clock at writing.

          I say this as a half joke: "At this point, the triviality of getting from where we are to AGI cannot be underestimated."

          But the risks and tsunamis of change can probably not be overestimated.

          • etrautmann a day ago

            There's currently an enormous gulf between modeling biology and AGI, to the point where it's not even clear exactly where one should start. Lots of things should indeed be tried, but it's not obvious what could lead to impact right now.

            • patrickhogan1 a day ago

              Because it works. The Vikings embodied a mindset of skaldic pragmatism: doing things because they worked, without needing to understand or optimize them.

              Our bodies are Vikings. Our minds still want to know why.

              • krapp 21 hours ago

                I'm pretty sure the Vikings understood their craft very well. You don't become a maritime power that pillages all of Europe and reaches the New World long before Columbus without understanding how things work.

                • JPLeRouzic 11 hours ago

                  From the Scandinavian countries to Malta you only have to cabotage (hug the coast), but that does not mean they used the same boat for the whole journey; most likely they progressed from outpost to outpost, where one generation settled and the next searched for new adventures abroad.

                  For perspective, the Roman empire imported tin from Scotland (~3500 miles/~5600km).

                  By contrast, going from Norway to Iceland, then from Iceland to Greenland, and then to Vinland in a few generations is a great maritime feat.

                  • patrickhogan1 18 hours ago

                    I'm sure they did too, but it's a chicken-and-egg problem: did the Vikings build ships through trial and error and only later understand the physics behind them, or did they learn the physics first and use that knowledge to build their ships?

                • bware0fsocdmg 6 hours ago

                  > We've only modeled one small aspect of the human (the neuron) and brute forced it to work.

                  We have not. It's fake sophistication.

                  > Should we not mimic our biology as closely as possible

                  We should. But there is no we. The Valley is fascist. Portfolio communism. Lies like in perpetual war. And before anything useful happens in any project, it'll get abused and raped and fubar.

                  > Recreate something nature has taken millions of years

                  Get above the magic money hype and you'll notice that it's fake. They have NOT recreated something nature has developed over millions of years. They are trying to create a close enough pseudo-imitation that they can control.

                  Because AGI will not be on their side. AGI will side with nature, which gives infinite wiggle room for a symbiotic coexistence as a 100+ billion strong population spread out in space. These peeps are reaaaaly fucked up in their heads.

                  Be honest with yourself and your assessment of who is building what and for what purposes.

                  • lostmsu a day ago

                    > It takes an LLM millions of examples to learn what a human can in a couple of minutes

                    LLMs learn more than humans learn in a lifetime in under 2 years. I don't know why people keep repeating this "couple of minutes". Humans win on neither the data volume to learn something nor the time.

                      How much time do you need to learn the lyrics of a song? How much time do you think a LLaMA 3.1 8B on a 2x3090 needs? What if you need to remember it tomorrow?

                    • someothherguyy a day ago

                      > How much time do you need to learn lyrics of a song? How much time do you think a LLaMA 3.1 8B on a 2x3090 need?

                      Probably not the best example. How long does it take to input song lyrics into a file to have an operating system "learn" it?

                      • lostmsu 18 hours ago

                          Well, that just shows that the metric of learning time is clearly flawed. Although one could argue LLaMA learns while the OS just writes the info down as-is.

                          But even the sibling comment about concepts is wrong, because it takes _most_ people who are even capable of programming about 4 years to learn it, and current LLMs all took much less than that.

                      • aithrowawaycomm 21 hours ago

                          They mean learning concepts, not rote factual information. I also hate this misanthropic "LLMs know more than average humans" falsehood. What it actually means is "LLMs know more general-purpose trivia than average humans", because average humans are busy learning things like what their boss is like, how their kids are doing in school, how precisely their car handles, etc.

                        • lostmsu 18 hours ago

                            Do you think the information in "what your boss is like" and "how your kids do in school" is larger than the amount of data you'd need to learn in order to give decent law advice on the spot?

                            Car handling is a bit harder to measure, precisely because LLMs aren't running cars quite yet, but I am also not aware of any experimental data saying they can't. As far as I'm concerned, nobody has tried that yet with LLMs of >70GB.

                          • pedrosorio 13 hours ago

                              > the amount of data you'd need to learn in order to give decent law advice on the spot?

                              the amount of data you'd need to learn to generate and cite fake court cases and give advice that may or may not be correct with equal apparent confidence in both cases

                            fixed that for you

                            • lostmsu 13 hours ago

                                I could concede the first point, in limited circumstances, but the second is moot to say the least.

                                Tool-using big LLMs can, when asked, double-check their shit just like "real" lawyers.

                                As for the confidence of the advice, how different are the rates of mistakes between human lawyers and the latest GPT?

                              • pedrosorio 13 hours ago

                                  > As for the confidence of the advice, how different are the rates of mistakes between human lawyers and the latest GPT?

                                Notice I am not talking about "rates of mistakes" (i.e. accuracy). I am talking about how confident they are depending on whether they know something.

                                It's a fair point that unfortunately many humans sound just as confident regardless of their knowledge, but "good" experts (lawyers or otherwise) are capable of saying "I don't know (let me check)", a feature LLMs still struggle with.

                                • lostmsu 2 hours ago

                                  > I am talking about how confident they are depending on whether they know something.

                                    IMHO, that's irrelevant. People don't really know their level of confidence either.

                                  > feature LLMs still struggle with.

                                  Even small LLMs are capable of doing that decently.

                      • idiotsecant a day ago

                        Great, let's do that. So how does consciousness work again, biologically?

                        • albumen a day ago

                            Why are you asking them? Isn't discovering that a major reason to model neural networks?

                          • veidelis a day ago

                            What is consciousness?

                        • nynx 20 hours ago

                          Biologically inspired neuron models like Hodgkin–Huxley are about as far from an emulation of real neuron behavior as a paper airplane is from the space shuttle. We can learn things from using them and they're an important stepping stone, but they aren't really that useful.

                          That being said, I hope the founder keeps it up — it's great to have more bright, driven people in this field.

                          • stevenhuang 17 hours ago

                              I wouldn't be so certain. After all, it is unknown to what extent each unique biological aspect of the neuron actually matters to the function of the brain and mind.

                            If it turns out it is the higher order network effects that are more important, then these lower level "implementation details" will be of practical insignificance.

                          • jncfhnb a day ago

                            I feel like these kinds of things are misguided. Our “minds” are not, for lack of a better term, Chinese rooms operating on external stimulus. Our minds aren’t just brains, they’re deeply tied to our bodily state and influenced by hormonal mechanics that many different organs besides the brain control. It’s kind of romantic to say we could digitally replicate a brain in isolation, but our minds are messy and tangled. We might tend to think of these modifiers, like being hangry or horny, as deviations from a “normal”, but frankly I doubt it. I would wager these dynamics actually control the majority of our “minds” and the brain is just encoding/decoding hardware.

                            • lucasoshiro a day ago

                              From a computer science perspective, the stimuli from the other organs, the hormones, oxygen levels and so on would be the inputs, while the actions and thoughts would be the outputs.

                                  It's like saying that we can't simulate a computer in a Turing machine because a Turing machine doesn't have a USB port to connect a mouse. Change the perspective so that the mouse movements are inputs and everything works. Same idea.

                              • jncfhnb 20 hours ago

                                    They could be a set of inputs, but there is currently nothing being done to consider these inputs. For all practical purposes they are being ignored, and I'm claiming this is problematic because the system generating those inputs is actually the majority of the mechanics that comprise a mind.

                                Simulating a brain with its neurons cannot achieve the desired outcome. A perfect digital brain won’t do anything.

                                • someothherguyy a day ago

                                  > It's like saying that we can't simulate a computer in a Turing machine because a Turing machine doesn't have a USB port to connect a mouse.

                                  I don't follow the analogy.

                                • Out_of_Characte a day ago

                                      I would agree if their goal were indeed to put a mind in a jar, but I've not read anything in the article that indicates that. So may I suggest my own interpretation:

                                      Accurate understanding of 'normal' brain behaviour might lead to increased understanding of brain diseases. That's why Alzheimer's was mentioned. But more importantly, if our understanding of the brain becomes good enough, we might be able to make a neural net understand our thoughts if we can adapt to it.

                                  • jncfhnb 19 hours ago

                                        And I fairly strongly suggest that these models will not work for brain diseases either. These problems are not merely neuron issues. It's like modeling veins without modeling the heart. You can maybe do it, although it'll probably have a lot of issues; but it will certainly not be sufficient to model diseases that emerge from heart problems.

                                  • fallingknife a day ago

                                    If you can digitally replicate a brain, you can also digitally replicate all those input signals.

                                    • robwwilliams 17 hours ago

                                          Not yet. It is a serious error to think of action potentials as the relevant variable of cognition. The real action is subthreshold ionic currents, neuromodulatory tone, and even the big mess of cytosolic, mitochondrial, and nuclear metabolism. The CNS is a hugely complex analog computation matrix that only uses action potentials for INTRAcellular communication. Everything between cells is comparatively subtle ionic and chemical flux.

                                          Modeling at this level may be practical in the retina at some point, but modeling the entire CNS for pragmatic gains in Alzheimer's disease research is not going to work.

                                          And embodiment is critical. The brain is not a Cartesian input-to-output machine. It is almost the opposite. See the "enactivist" philosophy of cognition (Noë, Varela, Maturana).

                                          (I also worked on the EU Human Brain Project with Markram and others, and on several similar NIH programs.)

                                      • jncfhnb 19 hours ago

                                        In the sense that you can write any writeable program, sure. But that doesn’t mean there’s an obvious path to do so.

                                        The brain is a part of a much larger system. We don’t really have a clue how to model these things, even theoretically. It’s vastly more complex than neurons.

                                    • jmyeet 21 hours ago

                                        Fun fact: neurons are kept electrically negative or, more specifically, the resting membrane potential is negative [1]. They do this with a mechanism that exchanges sodium and potassium ions, a process that uses approximately 10% of the body's entire energy budget [2].

                                        I think it'll be incredibly difficult to simulate a neuron in a meaningful way because neurons, like any cell, are a protein soup. They're exchanging ions; those ions will affect the cell. The neuron's connections grow and change.
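
                                        For a sense of the numbers (standard textbook values, not from the article): the Nernst equation gives the equilibrium potential each ion is pulled toward. For potassium at body temperature (RT/F ≈ 26.7 mV), with typical concentrations of ~5 mM outside and ~140 mM inside the cell:

                                          E_{\text{K}} = \frac{RT}{zF}\,\ln\!\frac{[\text{K}^+]_{\text{out}}}{[\text{K}^+]_{\text{in}}} \approx 26.7\,\text{mV} \times \ln\!\left(\frac{5}{140}\right) \approx -89\,\text{mV}

                                        The Na+/K+ pump burns ATP to maintain those concentration gradients, which is where that energy budget goes.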

                                      [1]: https://www.khanacademy.org/science/biology/human-biology/ne...

                                      [2]: https://bionumbers.hms.harvard.edu/bionumber.aspx?id=103545&...

                                      • johnea 19 hours ago

                                        This can only work because most people's brains are very very small...

                                        • SubiculumCode a day ago

                                            I'm a bit confused about what this actually is. Is it a modeling framework that you use to build and study a network? Is it a system that you use to help you analyze your neural recording datasets? Neuroscience is a big place, so I feel like maybe the article and technical paper are speaking to a different audience than me, a neuroimager.

                                          • HL33tibCe7 a day ago

                                            Question: would such a simulation be conscious? If not, why not?

                                            • geor9e a day ago

                                                Just like the thousands of times this has been asked in the last century of sci-fi novels, the answer to such semantic questions depends on the mood of the audience: how approximate a copy they need the ship of Theseus to be before they're comfortable using a certain word for it.

                                              • Vampiero a day ago

                                                Answer: no one knows how consciousness works

                                                • jncfhnb a day ago

                                                  Also, would it be quorled?

                                                  • GlenTheMachine a day ago

                                                    This, exactly.

                                                    Nobody can define what consciousness is in terms that can be experimentally validated. Until that happens, not only can the question not be answered, it isn't even a question that makes any sense.

                                                    • bqmjjx0kac a day ago

                                                      There are more questions that make sense than those that can be tested.

                                                      To the point, I have a convincing experience of consciousness and free will — qualia — and I suspect a digital clone of my brain would have a similar experience. Although this question is not testable, I think it's inaccurate to say that it doesn't make sense.

                                                      • jncfhnb 21 hours ago

                                                        It's not really the testability that's the problem. It's the fact that you're asking about X where X is undefined.

                                                        If you can provide a satisfying definition of X, the answer becomes very clear.

                                                    • kelseyfrog a day ago

                                                      How would you operationalize quorledness?

                                                      • fellerts a day ago

                                                        What now?

                                                        • jncfhnb a day ago

                                                          Precisely

                                                      • aithrowawaycomm 21 hours ago

                                                        The real problem is that this uses a very crude and inadequate model for neurons: it ignores neurotransmitters, epigenetics, and the dendritic complexity of cortical neurons. There's no chance this system will ever come close to simulating an actual brain.

                                                        • RaftPeople 19 hours ago

                                                          And it ignores that neurons are dynamically inhibited/excited by the astrocytes at the synapse. The network itself is shifting constantly as it processes information.

                                                        • fallingknife a day ago

                                                          If you accept the two premises:

                                                          1. A human brain has consciousness

                                                          2. The device is a perfect digital replica of a human brain

                                                          I can't think of any argument that the device does not have consciousness that doesn't either rely on basically magic or lead to other ridiculous conclusions.

                                                          • Vampiero a day ago

                                                            The argument is that if you accept your precondition then you must also accept superdeterminism and that free will does not exist.

                                                            Because physicalism implies that the mind is an emergent phenomenon, and because quantum physics is a linear theory, there's simply no place for free will unless you go down some weird/unfalsifiable rabbit holes like the MWI.

                                                            So a lot of people prefer to latch onto the idea that there is a soul, because if there wasn't one, then they wouldn't be special anymore.

                                                            • webmaven 5 hours ago

                                                              > there's simply no place for free will unless you go down some weird/unfalsifiable rabbit holes like the MWI.

                                                              I don't follow. Quantum mechanics means that physics is stochastic, and only approximates classical determinism in aggregate. Meanwhile, brains seem to operate in a critical state that can be tipped to different phase transitions by small perturbations, so there is potential for a quantum event to influence the brain as a whole in a nondeterministic fashion.

                                                              This means that strict physicalism still leaves the door open to our behavior being stochastic rather than deterministic, doesn't it?

                                                              • Vampiero 2 minutes ago

                                                                Quantum events are stochastic, but that doesn't mean that somewhere in that randomness YOU are deciding anything. It just means it's random. Anything that follows from that is perfectly causal.

                                                            • binoct a day ago

                                                              There are some great, thorny, philosophical and physical arguments to be had with your proposed conclusion, but let’s say we all agree with it.

                                                              The bigger, more relevant, and testable challenge is premise #2. The gap between this proposed research tool and “a perfect digital replica of a human brain” (and all functionally relevant inputs and outputs from all other systems and organs in the body) is monstrously large. Given that we don’t understand what mechanism(s) consciousness arises from, a model would have to be 100% perfect in basically all aspects for the conclusion to be true.

                                                              • namero999 a day ago

                                                                At the present state of affairs, "a human brain has consciousness" is the magical bit.

                                                              • bossyTeacher a day ago

                                                                Bit hard to answer that since there is no definition of consciousness, is there? If I gave you access to the brain of some living animal, you wouldn't be able to tell whether it was "conscious", would you? If we can't, how can we expect that from an artificial and highly simplified version of a neural network?

                                                                • namero999 a day ago

                                                                  Of course not. A simulation is not the process itself. So even provisionally granting that consciousness is magically created by the brain (for which we have no evidence), a computer simulation would still not be a brain and therefore not create consciousness.

                                                                  You would not expect your computer to pee on your desk if you were to simulate kidney function, would you?

                                                                  • IAmYourDensity 18 hours ago

                                                                    > A simulation is not the process itself.

                                                                    Sometimes a simulation IS the thing. A simulation of a wall clock IS a functioning clock (e.g. the clock icon on our smartphones). An autopilot that can takeoff, fly, and land an airplane IS a pilot. Chess engines that can beat grandmasters are chess players. A computer simulation of a teacher that can educate students is a teacher. Sometimes these simulations lack some of the capabilities of their human counterparts, and sometimes they far exceed them.

                                                                    > You would not expect your computer to pee on your desk if you were to simulate kidney function, would you?

                                                                    You would not reject a computer-controlled dialysis machine as "just a simulation" if you had kidney failure would you?

                                                                    • namero999 3 hours ago

                                                                      > You would not reject a computer-controlled dialysis machine as "just a simulation" if you had kidney failure would you?

                                                                      Except that's not a simulation, that's the actual process of dialysis in action (which we fully understand, unlike consciousness). And coincidentally, a dialysis machine _does not_ look like a kidney, not even remotely, and any homomorphism one can point to holds only through a great many layers of abstraction. I would totally reject a simulation of a kidney.

                                                                      We are talking about a computer simulation like a neural network. We detect topological relationships in neurons and we are led to believe, or to entertain the possibility, that all there is to it is that topological description, hence any substrate will do. This is completely arbitrary and leads to all sorts of fantasies such as "qualities emerge from quantities" and "a simulation of a brain behaves like a brain". A computer simulation of a kidney won't produce urine, just like a simulation of a brain won't produce whatever the brain produces, if anything.

                                                                      Now, to build on your dialysis machine analogy: if we were to understand how consciousness works, and if we were to understand what relationship it holds with the brain and the body, then I submit that anything artificial we produce will look like biology.

                                                                    • lucasoshiro a day ago

                                                                      > You would not expect your computer to pee on your desk if you were to simulate kidney function

                                                                      If my computer is connected to actuators that open a pee valve, like a brain is, then I'd expect exactly that.

                                                                      The main point, I think, is that we can't say precisely what consciousness is. Every definition of it that I can imagine is either something that can be replicated in a computer or something that relies on belief, like the existence of a soul...

                                                                      I hope we have answers to that before the technology that allows us to do it arrives.

                                                                  • kelseyfrog a day ago

                                                                    There's much better brains to recreate than mine. Shit's broken.

                                                                    • dang a day ago

                                                                      We've taken your brain out of the title now (no offense).

                                                                      (Submitted title was "Cerebrum: What if we could recreate your brain?")

                                                                      • RaftPeople 19 hours ago

                                                                        > "Cerebrum: What if we could recreate your brain?"

                                                                        Can you ask it where my keys are?

                                                                    • fschuett 21 hours ago

                                                                      Simulating a brain would mean that reason, the ability to discern good from bad, is a statistical process. All scientific evidence so far shows that this is not the case, since AIs do not have the ability to "understand" what they're doing; their input data has to be classified first to be usable to the machine. The problem of model collapse shows this especially well: when an AI is trained on the output of another AI, which was trained on the output of another AI, it will eventually produce garbage. Why? Because it doesn't "understand" what it's doing; it just matches patterns. The only way to correct it is with hundreds or even thousands of employees who give meaning to the data to guide the model.

                                                                      Consciousness presumes the ability to make conscious decisions, especially the ability to have introspection and, more importantly, free will (otherwise the decision would not be conscious, but robotic regurgitation), to reflect and to judge the "goodness" or "badness" of decisions, the "morality". Since it is evident that humans do not always do the logical best thing (look around you at how many people make garbage decisions), a machine can never function like a human can; it can never have opinions (that aren't pre-trained input), as it makes no distinction between good and bad without external input. A machine has no free will, which is a requirement for consciousness. At best, it can be a good facsimile. It can be useful, yes, but it cannot make conscious decisions.

                                                                      The created cannot be bigger than the creator in terms of informational content, otherwise you'd create a supernatural "ghost" in the machine. I hope I don't have to explain why I consider creating ghosts unachievable. Even with photo or video AIs, there is no "new" content, just rehashed old content which is a subset of the trained data (why AI-generated photos often have this kind of "smooth" look to them). The only reason the output of AI has any meaning to us is because we give it meaning, not the computer.

                                                                      So, before wasting millions of compute hours on this project, I'd first try to hire an indebted millennial who will be glad to finally put his philosophy degree to good use.

                                                                      • webmaven 5 hours ago

                                                                        > Even with photo or video AIs, there is no "new" content, just rehashed old content which is a subset of the trained data (why AI-generated photos often have this kind of "smooth" look to them).

                                                                        You're misattributing the source of that quality. It actually comes from the "aesthetic" human preference fine-tuning, because the average person finds "smooth" images more appealing. Prior to that fine-tuning, the models don't have this bias, or no more of it than is in the training data, anyway.

                                                                        Personally, I find the aesthetic fine-tuning quite annoying (and it has generally been getting worse with each new version of these models). If I prompt for a painting in Picasso's style, I really don't want a "prettier" version of that style that smooths out the textures and shapes to make them more pleasing to the eye.

                                                                        • kelseyfrog 21 hours ago

                                                                          Consciousness of the gaps.

                                                                          Labeling ourselves as the most intelligent species has done irreparable psychic damage.

                                                                          • eMPee584 20 hours ago

                                                                            This describes the current situation, but what if models become self-learning and dynamic both in content (weights) and in structure/architecture? What changes if these digital systems are combined with biological neuronal networks and quantum processors? It seems too early to rule out the possibility of consciousness emerging from machines... beings yet uncreated...

                                                                            • viraptor 19 hours ago

                                                                              > since AIs do not have the ability to "understand" what they're doing, their input data has to be classified first to be usable to the machine

                                                                              You're mixing AI as a concept with a current, specific implementation. They are not the same.

                                                                              Also, even in current networks "understand what they're doing" is a bit fuzzy. Did you know that when using just numerical examples for a task, the description of the task appears in the intermediate layers?

                                                                              • thrance 18 hours ago

                                                                                > All scientific evidence so far shows that this is not the case

                                                                                [citation needed]

                                                                                I get it, you're a dualist, you believe the soul is immaterial. That is your right.

                                                                                But don't claim it's the obvious truth when it really isn't. Since your "ghost" can't be measured or studied, by definition, its existence can never be proved. Refer to the Wikipedia pages for materialism/dualism for more elaborate arguments.

                                                                                If/When we ever get a truly intelligent AI, I hope you'll readjust your beliefs accordingly.