• kriro 17 minutes ago

    Really happy to see this and will give it a good spin. They seem to be doing things the right way in my subjective opinion:

    """ To implement this filter, we begin by ranking URL domains according to the volume of texts they contribute to the FineWeb (Penedo et al., 2024a) and FineWeb-2 (Penedo et al., 2025) corpus, as an approximation of web-level English and multilingual data. From this ranking, we select the top one million English domains and the top one million non-English domains. Due to domain overlap and the fact that some sites are now offline, the total number of accessible robots.txt files is smaller than two million. For each domain that remains reachable, we retrieve its robots.txt file as of January 2025 and examine the directives relevant to AI training. In particular, we focus on those targeting the AI-specific user agents listed in Appendix A. Any contents blocked by the current robots.txt is removed retroactively from the entire 2013-2024 range of the training dataset. We follow an opt-out policy, that is, if the corresponding robots.txt files are not available, we consider the data usable for training. The filtering process results in an estimated token loss of approximately 8% in English data and 4% in multilingual data. """

    • denysvitali 3 days ago

      Report: https://github.com/swiss-ai/apertus-tech-report/raw/refs/hea...

      Key features

      Fully open model: open weights + open data + full training details including all data and training recipes

      Massively Multilingual: 1811 natively supported languages

      Compliant: Apertus is trained while respecting the opt-out consent of data owners (even retrospectively), and avoiding memorization of training data

      • lyu07282 3 days ago

        Their struggle with the Nvidia driver bugs they had to work around was very relatable. You'd think that if someone buys 10,752 of their high-end GPUs, they'd get some support with them.

        • _zoltan_ 3 hours ago

          did I miss a blog on this?

          • lllllm 12 minutes ago

            We haven't had time to write one yet, but the tech report already has a lot of details.

        • Bromeo 3 days ago

          Looks like the performance is pretty decent, somewhere around Llama 3.1 for general knowledge (Table 17) but still a bit behind in Code and Reasoning (Table 18). Llama 3.1 was released about a year ago.

          • esafak 3 hours ago

            There's an interesting "Swiss AI Charter" on pg. 107.

          • dcreater 27 minutes ago

            I want this to succeed and hope it does. But the tea leaves don't look good at the moment:

            - Model sizes the industry was at 2-3 generations ago (Llama 3.1 era)
            - Conspicuous lack of benchmark results in the announcements
            - Not on OpenRouter, no GGUFs yet

          • tarruda 33 minutes ago

            Is there any practical method to verify that the model was trained on the reported dataset?

            • lllllm 20 minutes ago

              We released 81 intermediate checkpoints from the whole pretraining phase, plus the code and data to reproduce it, so a full audit is certainly possible. Still, it depends on what you consider 'practical' here.
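
              As a rough starting point, one could pull an intermediate checkpoint from the Hub and evaluate it against the released data; the repo id and revision here are placeholders, the model cards list the real identifiers:

                  from transformers import AutoModelForCausalLM, AutoTokenizer

                  # Placeholder repo id and revision; the actual identifiers for the
                  # 81 intermediate checkpoints are listed on the released model cards.
                  repo = "swiss-ai/Apertus-8B"
                  revision = "pretrain-checkpoint-branch"

                  tokenizer = AutoTokenizer.from_pretrained(repo, revision=revision)
                  model = AutoModelForCausalLM.from_pretrained(repo, revision=revision)

                  # From here, evaluate the checkpoint on a held-out slice of the
                  # released pretraining data and compare against the published run.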

            • nickpsecurity 3 days ago

              Upvoting to encourage discussion of these differentiators:

              "Apertus is a 70B and 8B parameter language model designed to push the boundaries of fully-open multilingual and transparent models. The model supports over 1000 languages and long context, it uses only fully compliant and open training data, and achieves comparable performance to models trained behind closed doors."

              "pretrained on 15T tokens with a staged curriculum of web, code and math data"

              "open weights + open data + full training details including all data and training recipes"

              "Apertus is trained while respecting opt-out consent of data owners (even retrospectivey), and avoiding memorization of training data"

              • Mars008 3 days ago

                At least not "open source"

                > "open weights + open data + full training details including all data and training recipes"

                Is it reproducible?

                > respecting opt-out consent of data owners (even retrospectively)

                Were they notified and given an option to opt out? Owners and authors are not the same. Data owners aren't copyright owners either.

                > avoiding memorization of training data

                Not convincing.

            • lastdong 3 days ago

              In my opinion, we need more models trained on fully traceable and clean data instead of closed models that we later find out were trained on Reddit and Facebook discussion threads.

              • WanderPanda 3 hours ago

                Imagine regulators doing their job for once and creating clean regulation that removes the uncertainty about liability for such releases, so that the authors could just slap Apache or MIT on it, call it a day, and not have to collect personal data to comply with an "acceptable use policy".

                • SilverElfin 3 days ago

                  Apparently a project of https://www.swiss-ai.org/

                  • habi 2 hours ago

                    https://apertus.org/ has existed for 15 years; interesting choice of name.

                    • titaniumrain 3 days ago

                      Seems DOA.

                      • sschueller 3 days ago

                        How so?

                      • cmdrk 3 days ago

                        Does their training corpus respect copyrights, or do you have to follow their opt-out procedure to keep them from consuming your data? Assuming it's the latter, it's open-er, but still not quite there.

                        • SparkyMcUnicorn 3 hours ago

                          Your question is addressed in the opening abstract: https://github.com/swiss-ai/apertus-tech-report/raw/refs/hea...

                          > Unlike many prior models that release weights without reproducible data pipelines or regard for content-owner rights, Apertus models are pretrained exclusively on openly available data, retroactively respecting robots.txt exclusions and filtering for copyrighted, non-permissive, toxic, and personally identifiable content.

                          • traspler 3 hours ago

                            AFAIK they respect robots.txt at crawl time, and later, when using the data, they re-check robots.txt and exclude the data if it was updated to deny access. They do further data filtering as well, but for that you're better off checking the technical report.