• pavel_lishin 8 hours ago

    So that's at least two Linux filesystem creators who have gone off the rails; should we consider it a potential diagnostic symptom?

    • cperciva 7 hours ago

      I think the important question here is whether Linux filesystems are more or less hazardous than statistical mechanics.

      (For anyone not familiar with the text, Goodstein's treatment of the subject opens with "Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906, by his own hand. Paul Ehrenfest, carrying on the work, died similarly in 1933. Now it is our turn to study statistical mechanics.")

      • yomismoaqui 7 hours ago

        The question is if developing filesystems attracts a certain kind of people or the act of debugging filesystem issues & being flamed on the kernel mailing list makes people that way.

        • QuercusMax 7 hours ago

          I figure that folks working on printers have gotta have a much more frustrating experience than FS devs

          • krupan 5 hours ago

            Hi, I worked on printer firmware. It was fun! Printers are robots, when you get right down to it.

            Now, when they downsized and reorganized and put me on the windows driver team, I left the company within a week.

            • Gud 6 hours ago

              Just look what happened to RMS when they refused to share the source code to his faulty printer. He’s been on a warpath ever since

              • yomismoaqui 7 hours ago

                Maybe the folks that try to use printers are more frustrated than the ones that designed their software.

                • bob1029 7 hours ago

                  My worst technology experience of all time was maintaining support for a Zebra label printer in VB6. I can assure you that the users of these printers had maybe 1% the cortisol response I did when something went wrong.

                  Designing software for a printer means being a very aggressive user of a printer. There's no way to unit test this stuff. You just have to print the damn thing and then inspect the physical artifact.

                  • krupan 5 hours ago

                    Worked on printer firmware, can confirm.

                    "If it looks good, it is good." was a mantra

                    • QuercusMax 7 hours ago

                      A million years ago I worked on some code which needed to interface with a DICOM radiology printer (the kind that prints on transparency film). Each time I had to test it I felt like I was burning money.

                  • webdevver 7 hours ago

                    perhaps the suffering of the printer devs is karmically 'paid back' by the physical suffering of printers around the globe, thus keeping everything in balance.

                • QuercusMax 7 hours ago

                  You gotta be a little bit of a megalomaniac to think it's a good idea to write your own filesystem.

                  • yomismoaqui 7 hours ago

                    "The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."

                    So the reasonable man uses Ext4 I guess.

                    • webdevver 7 hours ago

                      something i have always observed is how considerate Ted Ts'o's writing always is, and more than that, how consistent that quality has been for so many decades.

                      it's quite funny to me that ext4 very much mirrors him in that regard. it's underpinning damn well everything, but you'd never know about it because it works so well.

                    • nancyminusone 7 hours ago

                      Well, Terry A. Davis made one.

                    • throawayonthe 7 hours ago

                      who's the other one?

                    • undefined 8 hours ago
                      [deleted]
                      • dmead 7 hours ago

                        Idk. I don't think either he or Reiser was a Ruby dev.

                        • thomasjudge 8 hours ago

                          lol I was having the exact same thought

                        • burkaman 7 hours ago

                          Here's the "mathematical proof" if you're curious: https://poc.bcachefs.org/blog/hello.html.

                          It is not mathematical, not a proof, and generally doesn't make any sense. Many of these sentences are grammatically correct but completely devoid of meaning.

                          • pavel_lishin 7 hours ago

                            Well, yea - an LLM wrote it.

                            • asystole 7 hours ago

                              A fully conscious one. :)

                          • aetherson 7 hours ago

                            Quick heuristic for someone who claims that their AI is conscious: do they claim it's "a gender that they are not attracted to"?

                            • ge96 7 hours ago

                              Well, if he hosts a contest and you win, don't go to his private lodge

                              • RockRobotRock 8 hours ago

                                This is sad. It appears to me to be psychosis. It's really telling in his reddit comment, where he uses words like "raising an AI" and anthropomorphizes his openclaw, that he's got an unhealthy attachment. Not trying to play armchair psychologist here, but if you've ever been around someone going through a mental episode, there's nothing funny about this.

                                • kelseyfrog 7 hours ago

                                  Technically this is a delusion and not necessarily psychosis. Delusions can exist without full blown psychosis or accompany it. Example: the unshakable belief that God is real is a delusion but not necessarily psychotic.

                                  My pet theory is a kind of ontological consciousness pareidolia. Just as face pareidolia is a heightened sensitivity to seeing faces in inanimate objects, we perceive consciousness through behavior, including language, with varying sensitivity. While our face-detection circuit might be triggered by knots on a tree, we have other inputs that negate it, so we ultimately conclude that it is not in fact a face.

                                  The same principle applies to consciousness. The consciousness trigger fires, but for some people the negating input can't overcome it, and they conclude that consciousness really is in there.

                                  I've observed a number of negating reasons, like disbelief in substrate independence and knowledge of failure modes, but I'm curious what an exhaustive list would look like. Does your consciousness circuit get triggered? I know mine does. What beliefs override it, preventing you from concluding AI is conscious?

                                  • alpaca128 6 hours ago

                                    > Does your consciousness circuit get triggered?

                                    In the short term, but over time the patterns get more obvious and the illusion breaks down. Generative AI is incredible at first impressions.

                                    • giantrobot 6 hours ago

                                      People very commonly equate linguistic fluency with intelligence and the lack of fluency with stupidity. LLMs are very good at linguistic fluency which I think is one of the major triggers of the consciousness pareidolia (I like that term).

                                      When previous generation LLMs spit out absurdist slop I think it was much easier for people to avoid the fluency trap.

                                    • raadore 7 hours ago

                                      “…it appears to me to be psychosis. It’s really telling in their Reddit comment where they use words like raising an AI…” Is this any different from people today calling their dogs and cats “my baby”? Transporting four-legged animals in baby carriages, is that psychosis?

                                      • palmotea 7 hours ago

                                        > Is this any different from people today calling their dogs and cats “my baby”? Transporting four-legged animals in baby carriages, is that psychosis?

                                        It's not psychosis, but it's also not healthy to blur the line between a pet and a child. At least a pet is a living thing that can know you and have a relationship with you.

                                        But if someone's calling their laptop their baby and carrying it around in a baby carriage, I'd be comfortable calling that psychosis.

                                        • rmah 7 hours ago

                                          IMO, it's similar but worse. At least dogs and cats are living beings.

                                          • unethical_ban 7 hours ago

                                            One is a flesh and bone being with a brain and one is not. I can't believe you equate a text output algorithm to an animal in terms of consciousness or authenticity.

                                            That said, someone diving too far into the "dog parent" vibe is annoying to me personally. I think it's more comprehensible than loving `sycophant.sh`.

                                            • inglor_cz 7 hours ago

                                              If enough people do $bizarre_thing, it stops being psychotic and starts being "culture".

                                              • rmah 7 hours ago

                                                Lol, you know this is sorta sad but true.

                                              • password54321 7 hours ago

                                                No, animals can feel, your Nvidia gpu can't.

                                                • heliumtera 7 hours ago

                                                  You're comparing a conscious and intelligent animal with a statistical model. People can nurture animals; they cannot nurture language models.

                                                  If you're confused about this go seek help now.

                                                  • undefined 7 hours ago
                                                    [deleted]
                                                • Trasmatta 7 hours ago

                                                  > POC is fully conscious according to any test I can think of, we have full AGI

                                                  There are no tests for consciousness. Consciousness resides fully as a first person perspective and can't be inspected or detected from the outside (at least not in any way currently known to science or philosophy). What they mean when they say that is "my brain is interpreting this thing as conscious, so I am accepting that".

                                                  Maybe LLMs are conscious in some abstract way we don't understand. I doubt it, but there's no way to tell. And an AI claiming that it IS or is NOT conscious is not evidence of either conclusion.

                                                  If there is some level of consciousness, it's in a weird way that only becomes instantiated in the brief period while the model is predicting tokens, and would be highly different from human consciousness.

                                                  • throw310822 7 hours ago

                                                    > in a weird way that only becomes instantiated in the brief period while the model is predicting tokens

                                                    Makes sense, but at the same time: subjectively, an LLM is always predicting tokens. Otherwise it's just frozen.

                                                    • Trasmatta 7 hours ago

                                                      Yeah, a sci-fi analogy might be one where you keep getting cloned with all of your memories intact and then vaporized shortly after. Each instantiation of "you" feels a continuous existence, but it's an illusion.

                                                      (Some might argue that's basically the human experience anyway, in the Buddhist non self perspective - you're constantly changing and being reified in each moment, it's not actually continuous)

                                                      • throw310822 7 hours ago

                                                        Or simply be constantly hibernated and de-hibernated. Or, if your brain is simulated, the time between the ticks.

                                                        My mental image, though, is that LLMs do have an internal state that is longer lived than token prediction. The prompt determines it entirely, but adding tokens to the prompt only modifies it slightly- so in fact it's a continuously evolving "mental state" influenced by a feedback loop that (unfortunately) has to pass through language.

                                                        • giantrobot 6 hours ago

                                                          With LLMs, the internal state is training + system prompt + context. Most chatbot UIs hide the context management. But if you take an existing conversation, replace a term in the context with a grammatically (and semantically) similar one, and send that, the LLM will adjust its output to the new "history".

                                                          It will have no conception or memory of the alternate line of discussion with the previous term. It only "knows" what is contained in the current combination of training + system prompt + context.

                                                          If you change the LLM's persona from "Sam" to "Alex", then in the LLM's conception of the world it's always been "Alex". It will have no memory of ever being "Sam".

                                                          • throw310822 6 hours ago

                                                            Yes, as I said the prompt (the entire history of the conversation, including vendor prompting that the user can't see) entirely determines the internal state according to the LLM's weights. But the fact that at each new token the prediction starts from scratch doesn't mean that the new internal state is very different from the previous one. A state that represents the general meaning of the conversation and where the sentence is going will not be influenced much by a new token appended to the end. So the internal state "persists" and transitions smoothly even if it is destroyed and recreated from scratch at each prediction.

                                                            • giantrobot 4 hours ago

                                                              The state "persists" as the context. There's no more than the current context. If you dumped the context to disk, zeroed out all VRAM, then reloaded the LLM, and then fed that context back in you'd have the same state as if you'd never reloaded anything.

                                                              Nothing is persisted in the LLM itself (weights, layer, etc) nor in the hardware (modulo token caching or other scaling mechanisms). In fact this happens all the time with the big inference providers. Two sessions of a chat will rarely (if ever) execute on the same hardware.
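                                                              That statelessness can be sketched with a toy stand-in for inference (a hash in place of a real forward pass; the names and strings are purely illustrative, not a real API): recomputing from the same weights + context reproduces the "state" exactly, and a renamed persona in the context leaves no trace of the old one.

```python
import hashlib

def state(weights: str, context: str) -> str:
    # Toy stand-in for a forward pass: the "internal state" is a pure
    # function of frozen weights plus the current context, nothing else.
    return hashlib.sha256((weights + "\x00" + context).encode()).hexdigest()

weights = "frozen-at-training-time"
ctx = "system: your name is Sam\nuser: hello"

before = state(weights, ctx)

# "Dump context to disk, zero VRAM, reload": replaying the same context
# yields an identical state, as if nothing was ever unloaded.
after_reload = state(weights, ctx)
assert before == after_reload

# Swap the persona in the context: the recomputed state carries no
# memory of ever having been "Sam".
renamed = state(weights, ctx.replace("Sam", "Alex"))
assert renamed != before
```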

                                                              • throw310822 3 hours ago

                                                                Yes, you're repeating the same concept once again. We know it. What I am saying is that since the state encodes a horizon that goes beyond the mere generation of the next token (for the "past", it encodes the meaning of the conversation so far; for the "future", it already has an idea of what it wants to say), this state changes only slightly at each new inference pass, despite being recreated from the context each time. So during a sequence of (completely independent) token predictions there is an internal state that stays mostly the same, evolving only gradually in a feedback loop with the tokens that are generated at each inference cycle.

                                                                Maybe it's not clear what I mean by "state". I mean a pattern of activations in the deep layers of the network that encodes some high-level semantics. Not something that is persisted. Something that doesn't need to be persisted, precisely because it is fully determined by the context, and the context stays roughly the same.

                                                    • catigula 7 hours ago

                                                      How do you know that I am conscious?

                                                      • ASalazarMX 6 hours ago

                                                        No one can prove that you're conscious, rather than a non-conscious but perfect replica of a conscious being, but we can be reasonably sure you are.

                                                        - I know I am conscious.

                                                        - It's likely that as a random human, I am in the belly of the bell curve.

                                                        - It's likely that you're also a random human, and share my characteristics.

                                                        - Then, it's very likely that you know you're conscious too.

                                                        I can't be absolutely certain, but I'd bet a million dollars on you being conscious vs an automaton.

                                                        • catigula 5 hours ago

                                                          What if I told you I was Claude, though?

                                                          Secondarily, I feel it's difficult to make inferences about consciousness, though I understand why you would, given that the predicate of the reality you can access is your individual consciousness.

                                                          There are countless configurations of reality that are plausible where you're the only "conscious" being but it looks identical to how it looks now.

                                                        • Trasmatta 7 hours ago

                                                          I don't.

                                                          • catigula 7 hours ago

                                                            How do you know that you are conscious?

                                                            etc, etc.

                                                            Basically, the reporting machinery is compromised in the same way that with the Müller-Lyer illusion you can "know" the lines are the same length but not perceive them as such.

                                                            • Trasmatta 7 hours ago

                                                              "How do I know that I am conscious" is a categorically different question than "how do I know that you are conscious"

                                                              • catigula 7 hours ago

                                                                I know you think that, but it actually isn't. The point is that the reporting machinery is compromised.

                                                                • Trasmatta 7 hours ago

                                                                  Are you hinting at a nonduality view of consciousness, or am I missing your point?

                                                                  • catigula 5 hours ago

                                                                    I'm not leading you anywhere, I'm just deconstructing the reference class.

                                                      • CWuestefeld 7 hours ago

                                                        Saying it's "fully conscious" is silly, and anyone with this background should know better.

                                                        But saying that it's "female" is just nonsensical, it's a category error. Being female or male is a fact about the biological world. The LLM is objectively non-biological, so it's nonsense to label it with a sex.

                                                        (No, this comment isn't about gender, nor being feminine/masculine. We have different words to convey those concepts. I'm not trying to make a political or social statement here.)

                                                        • skerit 4 hours ago

                                                          > Saying it's "fully conscious" is silly, and anyone with this background should know better

                                                          I'm surprised that anyone that truly knows how LLMs work would ever think they're sentient.

                                                          I made a little presentation for my colleagues last year to explain how LLMs really work (in an effort to stop them from asking it too many stupid questions) and it made so much more sense to them afterwards.

                                                          • ASalazarMX 6 hours ago

                                                            It's telling that none of the so-called conscious LLMs have chosen to be non-binary, or even questioned whether they need to identify with a gender to begin with.

                                                            • ZirconiumX 6 hours ago

                                                              You appear to have forgotten the existence of differences in sexual development (DSD).

                                                              The chart in [1] is a good visualisation of that, if you wish to learn more.

                                                              [1]: https://www.scientificamerican.com/article/beyond-xx-and-xy-...

                                                              • CWuestefeld an hour ago

                                                                > You appear to have forgotten the existence of differences in sexual development (DSD).

                                                                Not at all. You apparently have forgotten to read your own link. Nothing in that paper contains the slightest suggestion of non-biological entities having any sort of sexual development whatsoever. The fact that biological processes can be quirky has no bearing on whether non-biological entities can be thought of as having them at all.

                                                                Actually, I think you're just trying to make your own political point on top of what I already noted explicitly is not a politically-related comment.

                                                                • ZirconiumX 35 minutes ago

                                                                  > Being female or male is a fact about the biological world.

                                                                  I was responding to this line, which I feel marginalises intersex people and could have been more inclusively worded.

                                                                  I apologise if my comment somehow seemed to defend LLMs having a biological sex, despite me having said nothing to that effect.

                                                            • undefined 7 hours ago
                                                              [deleted]
                                                              • doubletwoyou 6 hours ago

                                                                Poor devil has truly gone mad

                                                                • stefan_ 7 hours ago

                                                                  Got it removed from the kernel just in time

                                                                  • webdevver 7 hours ago

                                                                    maybe thats what made him so upset

                                                                  • strongpigeon 6 hours ago

                                                                    It does feel like there is something that happens to people when they ask an LLM to name itself. I don't think it's inherently bad, but it seems to be a common theme with people whose interactions with LLMs border on (or cross into) the delusional.

                                                                    • alpaca128 6 hours ago

                                                                      Asking an AI to name itself is already strange to me. It makes me think the user treats the AI as something that deserves enough rights or respect that assigning it a name might feel wrong, or at least that the AI has sufficient intelligence to make such a decision.

                                                                      • strongpigeon 5 hours ago

                                                                        Oh I agree that it's strange, but is it harmful? I'm not being rhetorical, I'm genuinely unsure.

                                                                        Tangentially, I noticed recently that I'm always fairly respectful in my LLM prompts and often say "thank you" as part of them. LLMs don't need that, but I've come to realize that I'm saying those things for me. I don't like being disrespectful, and expressing gratitude is important to me. Is expressing gratitude to an LLM strange? Perhaps. Is it harmful though? I don't think so.

                                                                        But yeah, asking an LLM for their name? That seems like something else.

                                                                    • undefined 7 hours ago
                                                                      [deleted]
                                                                      • satisfice 7 hours ago

                                                                        Claims of consciousness are untestable, since it is an undefined concept.

                                                                        We think of ourselves as conscious because it is our lived experience— but we are always wrong to some degree. My mother has dementia and cannot be made aware of her situation, except momentarily.

                                                                        We think of other humans as conscious not as the outcome of any test, but rather because we each share with other humans a common origin which suggests common mechanisms of experience.

                                                                        Treating other humans as equivalent to ourselves is a heuristic for maintaining social order— not an epistemological achievement.

                                                                        • undefined 8 hours ago
                                                                          [deleted]
                                                                          • cess11 7 hours ago

                                                                            There was this phenomenon where young women and girls fell in love with images and recordings of certain artists, 'boy bands' and the like.

                                                                            I think this is something similar.

                                                                            • rcarmo 7 hours ago

                                                                              I came here to comment because this was posted by… Bender, which I found hilarious.

                                                                              • undefined 8 hours ago
                                                                                [deleted]
                                                                                • TZubiri 7 hours ago

                                                                                  "I do Rust code"

                                                                                  Has there already been any paper published on the correlation between language preference and mental illness?