• RicoElectrico 5 hours ago

    I've found that, for the most part, the articles I want summarized are the ones that only fit the largest-context models such as Claude; otherwise I can just skim-read the article, possibly in reader mode for legibility.

    Is llama 2 a good fit considering its small context window?

    • tcsenpai 4 hours ago

      Personally I use llama3.1:8b or mistral-nemo:latest, which have decent context windows (even if usually smaller than the commercial ones). I am also working on a token calculator / content-splitting method, but it's still very early; the rough idea is sketched below.
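
      A minimal sketch of that splitting idea, assuming tiktoken as a stand-in tokenizer (the real counts depend on each model's own tokenizer, so treat max_tokens as an approximation, not the actual implementation):

        # Rough idea: split an article into chunks that each fit the model's
        # context window, summarize each chunk, then summarize the summaries.
        import tiktoken  # stand-in; a llama tokenizer would count differently

        def chunk_by_tokens(text: str, max_tokens: int = 6000):
            enc = tiktoken.get_encoding("cl100k_base")
            tokens = enc.encode(text)
            for i in range(0, len(tokens), max_tokens):
                yield enc.decode(tokens[i:i + max_tokens])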

    • donclark 5 hours ago

      Can we get this as the default for all newly posted HN articles? Please and thank you.

      • chx 2 hours ago

        Help me understand why people are using these.

        I presume you want information of some value to you; otherwise you wouldn't bother reading the article. But then you feed it to a probabilistic algorithm, so you cannot know what the output has to do with the input. Take https://i.imgur.com/n6hFwVv.png: you can somewhat decipher what this slop wants to be, but what if the summary leaves out, invents, or inverts some crucial piece of info?

        • andrewmcwatters 2 hours ago

          People write too much. Get to the point.

          • ranger_danger 41 minutes ago

            I think you just insulted every journalist on Earth.

            • throwup238 2 hours ago

              Even if I want to read the entirety of a piece of long-form writing, I'll often summarize it first (with Kagi's key-points mode) so that I know what the overall points are and can follow the writing better. Too much long-form writing is structured like a mystery thriller, where the writer has to unpack an entire storyline before stating the main thesis, so it helps my reading comprehension to know the point going in. The personal-interest stories that precede the main content always land better that way.

              • chx 2 hours ago

                Any point? Regardless of what's written? Does that work for you?