• thom 3 days ago

    How funny if we eventually come full circle and such knowledge bases end up like low-background steel in the economy of AI training. Either way, Eurisko remains one of the most interesting pieces of software ever created. I'd be fascinated if someone managed to create some sort of chain of thought system based on its heuristics but driven by modern LLMs. I don't think this is the direction in which AGI lies, but in the right domains it could be a powerful exploratory or debugging system.

    • thomastjeffery 2 days ago

      The ultimate irony of all these ontological portraits is that they are each so isolated that no matter how much I actively read about them, I keep finding completely new rabbit holes.

      If it weren't for your comment, how could I have heard or read about Eurisko?

      • AstralStorm 2 days ago

        Funny thing: after early deployments of a few Garbage Producing Transformers, the knowledge base will be drowned out by the mess they create, and even humans will have trouble telling what is right, much less a machine. It will take an ever-increasing amount of work to sort out what is sense and what is a convincing lie.

        Dead Internet Theory is already in motion.

        • entropicdrifter 2 days ago

          This was already true of the internet. IMO staying within relatively curated spaces keeps the occurrence of this kind of noise low.

          The issue of the public not being able to tell truth from fiction has existed since the beginning of society. Every improvement in communications technology has led to a massive increase in propaganda and confusion that spikes and then eventually levels out as people learn to tune it out.

          Increasing the rate at which people can deploy lies has happened many times in history. This is just one more instance.

      • fuzzfactor 3 days ago
        • gibsonf1 2 days ago

          The fundamental flaw with Lenat's approach is that humans do not think with first order predicate calculus. We do not juggle a bunch of statements/rules the way Cyc does, but have a very fast, efficient and powerful conceptual approach to the world with a massive amount of recursive conceptual inference.

          Effectively, you need to understand (compute with) what concepts are before using them in statements. And the most common mistake is to confuse the definition of a concept with the concept itself, as happens typically with knowledge graphs.
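
          To make that concrete, here is a toy (in Lisp, and entirely my own illustration, nothing like Cyc's actual machinery) of what "juggling statements/rules" means: knowledge held as explicit facts plus a quantified rule, applied by brute-force matching:

              ;; Toy rule system: facts as s-expressions, one explicit rule.
              (defparameter *facts* '((bird tweety) (bird opus) (penguin opus)))

              (defun fact-p (fact)
                (member fact *facts* :test #'equal))

              ;; Rule: forall x. bird(x) & not penguin(x) => can-fly(x)
              (defun can-fly-p (x)
                (and (fact-p (list 'bird x))
                     (not (fact-p (list 'penguin x)))))

              ;; (can-fly-p 'tweety) => T, (can-fly-p 'opus) => NIL

          A conceptual approach, on this view, applies the concept "bird" directly instead of enumerating such axioms.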

          • arethuza 2 days ago

            That argument sounds awfully like Drew McDermott's "A Critique of Pure Reason" from 1987, which was one of the things that made me get out of the "good old fashioned" AI world in the mid-1990s.

            • gbasin 2 days ago

              this was my main takeaway from 20th century philosophy lol

              • freilanzer 2 days ago

                > The fundamental flaw with Lenat's approach is that humans do not think with first order predicate calculus. We do not juggle a bunch of statements/rules the way Cyc does, but have a very fast, efficient and powerful conceptual approach to the world with a massive amount of recursive conceptual inference.

                Source?

                • asynchronous 2 days ago

                  This isn’t Reddit, evaluate the statement without needing to link to some mainstream media outlet.

                  • freilanzer 2 days ago

                    The OP is making a statement about how humans think and why this is at odds with Cyc. I don't think it is wrong to ask about a source on this. Otherwise I can claim anything. This has nothing to do with Reddit.

                    > Humans do think with first order predicate calculus. We juggle a bunch of statements/rules the way Cyc does, and don't have a very fast, efficient and powerful conceptual approach to the world with a massive amount of recursive conceptual inference.

                    Great discussion, thanks.

                    • gibsonf1 2 days ago

                      I am the source (I've been working on conceptual computing since the late 90s) and have built a conceptual computing platform based on it (in Common Lisp, of course): https://graphmetrix.com/trinpod-server

              • mikemorrow 3 days ago

                “Lenat argues that we just don't have the data needed to reach common sense through these newer methods. Common sense isn't written down. It's not on the Internet. It's in our heads. And that’s why he continues to work on Cyc.”

                “At one point, Lenat remembers, it suggested he could win the game by changing the rules. But each morning, Lenat would rejigger the system, pushing Eurisko away from the ridiculous and toward the practical. In other words, he would lend the machine a little common sense.”

                Really enjoyed this 2016 article.

                For safety-critical applications that use black-box ML models, I feel like a rules engine paired with a knowledge graph needs to sandwich the ML black box: "prompt engineer" the input, then ensure the output aligns with common sense and safety standards.
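
                A minimal sketch of that sandwich (all names here are made up for illustration, not a real rules-engine or Cyc API):

                    ;; Toy "sandwich": rules gate the input, an opaque model runs,
                    ;; then rules gate the output. Hypothetical names throughout.
                    (defparameter *input-rules*
                      (list (lambda (x) (stringp x))))               ; e.g. only accept text

                    (defparameter *output-rules*
                      (list (lambda (y) (not (search "unsafe" y))))) ; toy safety check

                    (defun rules-pass-p (rules value)
                      (every (lambda (rule) (funcall rule value)) rules))

                    (defun guarded-call (model input)
                      "Run MODEL on rule-approved input; NIL unless the output also passes."
                      (when (rules-pass-p *input-rules* input)
                        (let ((output (funcall model input)))
                          (when (rules-pass-p *output-rules* output)
                            output))))

                    ;; (guarded-call (lambda (s) (string-upcase s)) "hello") => "HELLO"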

                Edit: clarity

                • moomin 3 days ago

                  When Margaret Hamilton developed the Apollo guidance systems, she ran three teams which all wrote completely independent guidance systems. She then wrote herself a program to arbitrate between them when they disagreed.
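
                  Conceptually the arbiter is just a 2-out-of-3 majority vote, something like this toy sketch (obviously not the actual flight code):

                      ;; Toy arbiter: take three independently computed answers and
                      ;; return the majority value, or NIL if all three disagree.
                      (defun vote (a b c)
                        (cond ((equal a b) a)
                              ((equal a c) a)
                              ((equal b c) b)
                              (t nil)))

                      ;; (vote 42 42 41) => 42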

                  • sgt101 2 days ago

                    I thought that the Apollo system was barely able to execute on the available hardware, so there was no redundancy.

                    I think that the space shuttle did feature this kind of system though.

                    • moomin 9 hours ago

                      I've got a nasty feeling you're right. I can't find any references at all really. My understanding was the shuttle was using very similar tech to Apollo but I can verify almost nothing.

                    • robertlagrant 2 days ago

                      Really, they should've had three programs deciding.

                  • 12_throw_away 3 days ago

                    It appears that a 1981 version of the source code is both available and runnable: https://github.com/seveno4/EURISKO

                    • versteegen 2 days ago

                      (The sources for both AM and Eurisko were discovered at the same time.)

                      I was reading through the source code for both (and the Interlisp manual, but mostly Eurisko) earlier this year and found it quite inspiring. Some of the most meta(-meta-heuristic search) code ever written! Now I'm working on a similar project and am looking for parts of AM's and Eurisko's algorithms which I can incorporate! I'll write something up if I have any success.

                      Eurisko is far more abstract than AM, but the data structures it uses (structures with named 'slots' (members)) make the code vastly more readable (AM is pretty damn unreadable despite plenty of comments). Lenat himself said the better representations are what made Eurisko possible, unsurprisingly. If you do want to understand the implementation of AM in incredible detail you can read Lenat's PhD thesis; no detailed documentation of Eurisko exists. Also, both programs contain lots of code to print out what the system is currently doing, which is incredibly helpful for deciphering the code.
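
                      To give a flavour of those named slots, here's a toy reconstruction in Common Lisp (slot names only loosely modelled on Lenat's English/Worth/If-Then slots, nothing like Eurisko's actual Interlisp code):

                          ;; Each unit is a symbol; its property list holds the slots.
                          (defun set-slot (unit slot value) (setf (get unit slot) value))
                          (defun get-slot (unit slot) (get unit slot))

                          (set-slot 'h-forget :english
                                    "If a unit has proven worthless, forget it")
                          (set-slot 'h-forget :worth 300)
                          (set-slot 'h-forget :if-potentially-relevant
                                    (lambda (u) (< (or (get-slot u :worth) 0) 100)))

                          ;; Heuristics are themselves units with slots, so heuristics
                          ;; can inspect and modify other heuristics - the "meta" part.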

                      • JasserInicide 2 days ago

                        The inline comments noting when a function was last edited, as very rough version control, are quite something.

                        • wiz21c 3 days ago

                          At that time, rule-based systems (expert systems) were very popular.

                          • graemep 3 days ago

                            I wrote one as part of my first paid job (holiday job between school and university) in the late 80s. My prototype was eventually turned into a production system that was used for many years.

                          • undefined 2 days ago
                            [deleted]
                          • h4ck_th3_pl4n3t 2 days ago

                            Related:

                            [1] OpenCyc CVS repository on sourceforge: https://sourceforge.net/p/opencyc/code/

                            [2] Somebody made the effort to download/upload everything together on GitHub (with the state of OpenCyc as of 03/2018): https://github.com/asanchez75/opencyc

                            • versteegen 2 days ago

                              OpenCyc is mostly just the core ontology (knowledge graph). I'm told it is nothing in comparison to Cyc: none of the interesting algorithms, just some basic inference. It was so terribly named that they had to discontinue it.

                              But I imagine it's still a great resource. Haven't played with it.

                            • tempaway456456 2 days ago

                              Related thread from when Doug Lenat passed away in 2023

                              https://news.ycombinator.com/item?id=37354000

                              • nosianu 3 days ago

                                This does not detract from the potential usefulness of the effort, since the work does not depend on any word's definition at all; this is merely a side note, "FYI": it appears that the "common" in common sense does not exist. It could only be found in "plainly worded, factual claims about physical reality"; "We also find limited presence of collective common sense, undermining universalist claims and supporting skeptics" (from the study abstract).

                                https://www.theguardian.com/commentisfree/2024/sep/30/i-took...

                                The article links to the study.

                                > Common sense is not that common: a recent study from the University of Pennsylvania concludes the concept is “somewhat illusory”. Researchers collected statements from various sources that had been described as “common sense” and put them to test subjects.

                                > The mixed bag of results suggested there was “little evidence that more than a small fraction of beliefs is common to more than a small fraction of people”.

                                What the study does show, and what is relevant to the submitted article, is that whatever he trains the AI on may not be something "common".

                                • lupusreal 3 days ago

                                  I think nitpicking the commonness of common sense is taking the term a bit too literally. I see it as another way to talk about intuition or gut feelings; knowledge that has some uncertain or wishy-washy origin, probably an amalgamation of experience (which naturally differs for everybody) and informal automatic reasoning (which may often err, but is nonetheless important for the way people navigate life because it can be done more efficiently than more rigorous reasoning.)

                                  • undefined 2 days ago
                                    [deleted]
                                  • torginus 3 days ago

                                    I am not really on board with this study's interpretation of common sense - it seems to measure how close what I think is to the average, which is a very literal take on 'common' sense. Considering the tests have a ton of questions about spirituality and advanced technologies, for me the exercise devolved into trying to guess what others would think (which turned out to be the point, after all).

                                    And the author herself seems really concerning. She is someone who, by her own admission, struggles with grade-school math such as fractions, yet proclaims her intellectual superiority over people who believe 'obviously' silly things, like Ivermectin curing covid.

                                    The idea of horse medicine curing Covid makes about as much sense as heart medicine helping with erections.

                                    Although, somewhat amusingly, she seems to score below average on the common sense test.

                                    The very cynical take that journalists are both substandard critical thinkers and unwilling to consider alternative viewpoints seems to be true in this case.

                                    • inglor_cz 2 days ago

                                      "heart medicine helping with erections."

                                      Why should it be nonsense? Erections are a function of the circulatory system. One could expect them to improve if the circulatory system improves as well.

                                  • chiph 2 days ago

                                    I'm wondering if Chris McKinstry used this as an inspiration for his MindPixel project, which was a curated collection of short, validated true/false statements. Cyc uses a structured grammar while Mindpixels were in English.

                                    His intent (as he stated online; it's not mentioned in the Wikipedia page) was to have a collection of sentences that could be used to give an AI some idea of what it was like to be human.

                                    https://en.wikipedia.org/wiki/Mindpixel

                                    • barbs 3 days ago
                                      • spywaregorilla 2 days ago

                                        What exactly is it? I get the sense it's basically a knowledge graph and an inference engine. But what was it actually doing in terms of taking queries and spitting out takeaways in human-readable form?

                                        I'm extremely skeptical about the anecdotes about the game as an indicator of this thing's competence. It seems unlikely that it actually encoded any sort of game state or nuanced simulation; it was probably just spitballing vague strategies that happened to find some cheese (twice?). I'm guessing they had to play the strats until one of them proved valuable, and it's kind of weird and surprising that they thought this was a good use of their time and model.

                                        • wpietri 3 days ago

                                          This should perhaps have [2016] in the title.

                                          • pfdietz 3 days ago

                                            RIP, Doug.

                                            He died on August 31, 2023, at age 72.

                                            • kolinko 3 days ago

                                              2016.

                                              Interesting read considering it was written right before the advent of LLMs.

                                              • hydrolox 2 days ago

                                                Are there any good sources online for how this is used concretely/queried? And what does CycL look like? I have tried finding this online out of interest, but all the sources only describe the system generally and don't show any examples of what you could derive.

                                                • brrrrrm 2 days ago

                                                  An oracle filtering a random generator will always produce oracle-grade results. Is the computer actually getting better, or is Lenat just acting as the oracle and the writer running with it?

                                                  • yu3zhou4 2 days ago

                                                    Perhaps the structured dataset he built can now be used to train language models on a small corpus of text to get reasonable common sense?

                                                    • gyomu 3 days ago

                                                      Alright, talk is cheap, show me the code.

                                                      Is Cyc essentially a 15-million-line Prolog program? Based on the article's hand-wavy description, that's the best I can put together.

                                                      • wiz21c 3 days ago

                                                        It's a discovery system: https://en.wikipedia.org/wiki/Discovery_system_(AI_research)

                                                        And from that wikipedia article: "The dream of building systems that discover scientific hypotheses was pushed to the background with the second AI winter and the subsequent resurgence of subsymbolic methods such as neural networks. Subsymbolic methods emphasize prediction over explanation, and yield models which works well but are difficult or impossible to explain which has earned them the name black box AI. A black-box model cannot be considered a scientific hypothesis, and this development has even led some researchers to suggest that the traditional aim of science - to uncover hypotheses and theories about the structure of reality - is obsolete.[7][8] Other researchers disagree and argue that subsymbolic methods are useful in many cases, just not for generating scientific theories."

                                                        which sounds very intriguing to me :-)

                                                        • mepian 2 days ago

                                                          The inference engine is implemented in SubL (see https://cyc.com/archives/glossary/subl/ and https://web.archive.org/web/20120513111513/http://www.cyc.co...) which is a variant of Common Lisp, and the knowledge base is implemented in CycL which is divided into the Epistemological Level (EL) and the Heuristic Level (HL).

                                                          There was a source-available version called OpenCyc with a small subset of the knowledge base that is now retired and no longer officially available, but it's still easy to find on the net.
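
                                                          To give hydrolox (above) a rough idea of what CycL looks like, going from memory of the old OpenCyc documentation (#$ marks a Cyc constant, ?X a variable), assertions are s-expressions like:

                                                              (#$isa #$AbrahamLincoln #$Person)
                                                              (#$genls #$Person #$Mammal)
                                                              (#$implies
                                                                (#$isa ?X #$Person)
                                                                (#$isa ?X #$Mammal))

                                                          where #$isa relates an individual to a collection and #$genls a collection to a supercollection; the inference engine can then answer a query like (#$isa #$AbrahamLincoln #$Mammal).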

                                                          • transfire 3 days ago

                                                            That’s about right, but written in Lisp.

                                                          • the_origami_fox 3 days ago

                                                            2016

                                                            • injidup 3 days ago

                                                              Around that time, the early-to-mid 1980s, I did some work in a programming language called Savvy, some kind of tool for building expert systems based on data. Google doesn't seem to know about it. It was very obscure, but my father and I built a tool for the Institute of Metals and Material Australia for assisting engineers/metallurgists in picking the right kind of metal alloy for a specific job. It's the first ever useful software I wrote, but I can't find a record of it anywhere. Time has erased it.

                                                              We had used a primitive rolling hand scanner to scan in huge tables of metal alloy data along with recommended usages. My father wrote the query engine using Savvy and I wrote the text based windowing UI.

                                                              It would be neat if someone could turn up some record of it somewhere. My father has long been retired and has no records of the system either.

                                                            • undefined 2 days ago
                                                              [deleted]