• danielodievich 42 minutes ago

    I post under my real name here, pretty much the only place I post. It keeps me honest and straight in what I say when I choose to say it. I tried talking to my children about leaving as clean of a footprint on the internet as one can in anticipation of future people/systems taking that into consideration. I don't know what it will be but I would expect some adversarial stuff. Trying to keep clean is what I'd prefer for myself and my kids.

    On the other hand, Neal Stephenson's Fall; or, Dodge in Hell has an interesting idea in an early phase of the book, where a person agrees to what we now know as "flood the zone with sh*t" (Steve Bannon's sadly very effective strategy) to battle some trolls. Instead of trying to keep clean, the intent is just to spam like crazy with anything so nobody can find the core. It is cleverly explored in the book, albeit too briefly before the story moves into virtual reality. I think there are a few people out here right now practicing this.

    • DrewADesign 17 minutes ago

      > I tried talking to my children about leaving as clean of a footprint on the internet as one can in anticipation of future people/systems taking that into consideration.

      I don’t think you’re wrong, but the fact that people consider it inevitable we’ll all have an immutable social acceptance grade that includes everything from teenage shitposts to things you said after a loved one died, or getting diagnosed with cancer, makes me regret putting even a moment of my professional energies towards advancing tech in the US.

      • monksy 2 minutes ago

        I think he's wrong, and I'm willing to say that. The fundamental attribution error is well known, and it takes major resources for people to move beyond it. Anyone who posts a comment with easy attribution is implicitly expected to future-proof their words. That is not possible, and it is extremely suppressive of self-expression.

        For example: "Ellen Page is fantastic in the Umbrella Academy TV show". Innocent, accurate, supportive, and positive in 2019.

        The same comment read after 1 Dec 2020 (when Elliot Page came out as transgender): insensitive, demeaning, inaccurate.

      • qsera 7 minutes ago

        > as clean of a footprint on the internet

        The only winning move here is not to play.

        • KPGv2 a minute ago

          Fifteen years or so ago I read an article arguing that by the time Millennials are nearing retirement and have more political power, people will give less of a shit about what you did online in your twenties because we will have, out of necessity, learned that asshattery in your twenties is largely irrelevant to your trustworthiness in your sixties.

          When I was that age, you could tell the kids who had political ambitions self-censored online. But now everyone is buck wild, so you have to ignore that when looking at people.

          For example, a MASSIVE portion of Millennials and younger looking at the Maine election are pretty chill about the leading Democratic candidate having a Nazi tattoo because of this very thing. Basically, "dumb, drunk, deployed Marines will get cool skull and crossbones tattoos in their early twenties, and so what if he said a couple ill-worded somewhat misogynistic things in his twenties, that was decades ago, and he's obviously a different person."

          Contrast with Bill Clinton, where he literally had to explain away university marijuana usage TWENTY YEARS AFTER THE FACT.

          • pavel_lishin 33 minutes ago

            That whole book seemed like a collection of interesting threads that ultimately go nowhere.

            I honestly don't even think I understood the ending. Or the middle, if I'm being extra honest.

            I think Anathem addressed the "flood the zone with shit" much better in something like three paragraphs.

            • ectospheno 17 minutes ago

              I expect more people over time to use local LLMs to write every single post they make online.

            • john_strinlai an hour ago

              many people tend to overlook how little information is needed for successful de-anonymization.

              i like to introduce students to de-anonymization with an old paper "Robust De-anonymization of Large Sparse Datasets" published in the ancient history of 2008 (https://www.cs.cornell.edu/~shmat/shmat_oak08netflix.pdf):

              "We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix [...]. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber’s record in the dataset."
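              the core of that attack is easy to sketch. below is a toy illustration of the idea only (not the paper's actual algorithm or weights): score every anonymized record against the adversary's auxiliary knowledge, and accept the top candidate only if it clearly stands out from the runner-up:

```python
# Toy sketch of the Narayanan-Shmatikov matching idea (simplified):
# records and auxiliary knowledge are dicts of movie -> (rating, day).

def score(record, aux):
    """Reward shared movies, with bonuses for close ratings and dates."""
    s = 0.0
    for movie, (rating, day) in aux.items():
        if movie in record:
            r, d = record[movie]
            s += 1.0                                  # movie itself matches
            s += max(0.0, 1 - abs(r - rating) / 4)    # rating proximity
            s += max(0.0, 1 - abs(d - day) / 14)      # date within ~2 weeks
    return s

def best_match(records, aux, gap=1.5):
    """Return the best-scoring record id only if it is a clear standout."""
    scored = sorted(((score(rec, aux), rid) for rid, rec in records.items()),
                    reverse=True)
    (s1, rid1), (s2, _) = scored[0], scored[1]
    return rid1 if s1 - s2 >= gap else None

records = {
    "u1": {"Heat": (5, 100), "Alien": (4, 103), "Brazil": (2, 200)},
    "u2": {"Heat": (3, 400), "Shrek": (5, 401)},
    "u3": {"Alien": (1, 50)},
}
aux = {"Heat": (5, 101), "Brazil": (2, 199)}  # a little outside knowledge

print(best_match(records, aux))  # -> u1
```

              the paper's point is how few such (movie, rating, date) triples are needed before exactly one record stands out, even in a dataset of 500,000 subscribers.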

              and that was 20 years ago! de-anonymization techniques have improved by leaps and bounds since then, alongside the massive growth in various technology that enhances/enables various techniques.

              i think the age of (pseudo-)anonymous internet browsing will be over soon. certainly within my lifetime (and im not that young!). it might be by regulation, it might be by nature of dragnet surveillance + de-anonymization, or a combination of both. but i think it will be a chilling time.

              • Jerrrrrrrry 11 minutes ago

                Throwaway accounts using "clever" turns of phrase can often be de-anonymized by double-clicking, right-clicking -> googling their witty pun, and finding the sole other instance of it elsewhere, on Twitter, Facebook, etc.

                If I see a couple of words I don't know in a row, I can infer a poster's real name.

                I'd be more specific, but any example would literally be doxxing.

                • DalasNoin an hour ago

                  That's a great background paper on the Netflix attack; we make a pretty direct comparison in section 5. We also try to use similar methods for comparison in sections 4 and 6. In section 5 we transform people's Reddit comments into movie reviews with an LLM and then see if LLMs beat Narayanan's method purely on movie reviews. LLMs are still much better (getting about 8%, but the average person only had 2.5 movies and 48% only shared one movie, so matching is very difficult).

                  • john_strinlai an hour ago

                    >we make a pretty direct comparison in section 5

                    awesome, i saw the mention in the introduction but i havent yet had a chance for a thorough read through of the paper -- ive just skimmed it. looking forward to reading it in-depth!

                • aplomb1026 an hour ago

                  The practical implications here are sobering. We've long known stylometric analysis could fingerprint authors, but the combination of LLM reasoning + web search makes this orders of magnitude more accessible. No specialized tooling needed anymore.

                  What concerns me most is the asymmetry: defenders need to sanitize every post across every platform, while attackers only need a few correlated data points. And the countermeasures suggested (paraphrasing, persona separation) are exactly the kind of operational security that breaks down under sustained use.
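                  To make "fingerprint" concrete: the classical stylometric approach reduces each author to a vector of features that are hard to control consciously, such as function-word frequencies, and compares vectors. A minimal sketch (this tiny feature list is invented for illustration; real systems use hundreds of features):

```python
# A minimal stylometric fingerprint: frequencies of common function
# words, compared by cosine similarity. Real tooling uses far richer
# feature sets (n-grams, punctuation, sentence length); this is just
# the shape of the idea.
import math
import re

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "i", "it", "for", "not", "but", "however", "though"]

def fingerprint(text):
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return [words.count(w) / total for w in FUNCTION_WORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Two samples in a similar register vs. a very different one:
a1 = "I think the point is that it is not the tooling but the features."
a2 = "I suspect the issue is that it is not the model but the data."
b  = "BUY NOW!!! limited offer, best prices, click here, click here"

assert cosine(fingerprint(a1), fingerprint(a2)) > cosine(fingerprint(a1), fingerprint(b))
```

                  The asymmetry shows up here too: computing and comparing such vectors across millions of accounts is cheap for an attacker, while consistently suppressing your own function-word habits across every post is not.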

                  • DalasNoin an hour ago

                    We use semantic information inferred from comments and submissions. I think using stylometry would be a great addition, but it would be hard to google for "guy who writes fancifully using many puns" rather than "indie developer in Switzerland". I think stylometry could be better used for verification: once you have a small set of candidates, stylometry could further narrow them down and be used to make a decision.

                    • switchbak 40 minutes ago

                      Time to scrub those naughty Glassdoor rants!

                    • JohnMakin an hour ago

                      As people will point out, the OSINT techniques described are nothing new - typically, in the past, you could de-anonymize based on writing style or niche topics/interests. Total deanonymization can occur if any of these accounts link to profiles containing pictures of their faces, which can then be web-searched to link to a real identity. It's astounding how many people re-use handles on stuff like porn sites linked very easily to their IRL identity.

                      While people will point out this isn't new, the implication of this paper (and something I have suspected for 2 years now but never played with) is that this will become trivial: what would take a human investigator a fair bit of time, even using common OSINT tooling, becomes automated.

                      You should never assume you have total anonymity on the open web.

                      • ghywertelling an hour ago

                        If LLMs can identify a person across websites, I can ask an LLM to read up on his posts and write like him, impersonating him, and then this feeds back into the tools identifying him. I can probabilistically malign a person this way.

                        • JohnMakin an hour ago

                          This already is a thing people did at least as far back as I started getting into web privacy, which was ~10 years ago. I have been the target of it before.

                          LLM's are probably better at it, but I don't know if this is as destructive as people may guess it would be. Probably highly person dependent.

                          The micro-signals this paper discusses are more difficult to fake.

                          • john_strinlai an hour ago

                            stylometry is only one aspect of de-anonymization. what you describe is certainly a threat that we will have to deal with, but there is a lot more to credible impersonation than just being able to mimic a writing style

                            • functionmouse an hour ago

                              So this means deanonymization doesn't work? Rejoice?

                              • Jerrrrrrrry an hour ago

                                How to conduct a psy-op

                                https://youtu.be/YTGQXVmrc6g

                              • warkdarrior an hour ago

                                I think the implication is this will become trivial and trivially automated, no human investigator needed. I bet there will be plugins in one year's time to right click on a post and get a full report on who the author is.

                                • JohnMakin an hour ago

                                  agreed and the new frontier here will probably be obfuscation by creating false positives with these same tools, but that kind of renders the web unusable in my mind.

                                  • arctic-true 29 minutes ago

                                    I had this same thought. Seems fairly easy to just give off a strong false signal. If you don’t want anyone to know that you live in Finland, make a point to constantly mention how much you enjoy living in Peru.

                                  • 0xdeadbeefbabe an hour ago

                                    Wouldn't it also become trivial to pretend to be another author?

                                    • john_strinlai an hour ago

                                      it may become more trivial to llm your comments/blog/whatever into a different "voice", but there is so much that can be used for de-anonymization that llm-assisted rewriting doesn't address.

                                      for example, you may change the content of your comments, but if you only ever comment on the same topic, the topic itself is a signal. so are when you post (both day and time), frequency of posts, topics of interest, usernames (e.g. themes or patterns), and much more.
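                                      the timing signal alone is worth illustrating. a toy sketch (not from any particular paper) that compares accounts purely by their hour-of-day posting histograms:

```python
# Even with perfectly rewritten text, behavioral metadata remains a
# signal. Toy sketch: fingerprint an account by when it posts.

def hour_histogram(post_hours):
    """Normalize a list of posting hours (0-23) into a 24-bin histogram."""
    counts = [0] * 24
    for h in post_hours:
        counts[h % 24] += 1
    total = sum(counts)
    return [c / total for c in counts]

def overlap(h1, h2):
    # histogram intersection: 1.0 means an identical activity pattern
    return sum(min(a, b) for a, b in zip(h1, h2))

night_owl_a = hour_histogram([23, 0, 1, 2, 23, 1])   # pseudonym 1
night_owl_b = hour_histogram([0, 1, 23, 2, 2, 0])    # pseudonym 2
nine_to_five = hour_histogram([9, 10, 12, 14, 16, 11])

assert overlap(night_owl_a, night_owl_b) > overlap(night_owl_a, nine_to_five)
```

                                      real linkage attacks combine many such weak signals; each one alone is noisy, but together they narrow the candidate set fast.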

                                • kseniamorph 2 hours ago

                                  I'm not sure the practical implications are as dramatic as the paper suggests. Most adversaries who would want to deanonymize people at scale (governments, corporations) already have access to far more direct methods. The people most at risk from this are probably activists and whistleblowers in jurisdictions where those direct methods aren't available, not average users.

                                  • gwern 9 minutes ago

                                    Attacks can be chained, and this can all be automated. For example, imagine pig-butchering scams... except, similar to some voice-cloning scams, the contact is there just to get enough data to stylometrically fingerprint you for future reference. You make sure never to comment too much or too spicily under your real name, but someone slides into your DMs with a thoughtful, informative, high-quality comment, and you politely strike up an interesting conversation which goes well; you think nothing of it and have forgotten it a week later - and 5 years later you're in jail or fired or have been doxed or framed. 'Direct methods' can't deliver that kind of capability post hoc, even for actors who do have access to those methods (which is a vanishing percentage of all actors). No one has cheap enough intelligence and skilled labor to do this right now. But they will.

                                    • ceejayoz 2 hours ago

                                      > Most adversaries who would want to deanonymize people at scale (governments, corporations) already have access to far more direct methods.

                                      Easier methods probably means more adversaries.

                                      • gmuslera an hour ago

                                        And different agendas. Governments and corporations don't try social engineering attacks, scams, or things that end in, e.g., ransomware attacks.

                                        • 5o1ecist 29 minutes ago

                                          - The U.S. NSA ran fake LinkedIn and Facebook profiles to phish foreign targets, as revealed in Snowden leaks, posing as recruiters to install malware.

                                          - UK's GCHQ conducted "Operation Socialist," using false personas on social media for spear-phishing against telecom firms worldwide.

                                          - In 2016, Russian GRU operatives (targeting Western elections) used spear-phishing on Democratic Party emails, but U.S. agencies mirrored similar tactics in counter-ops per declassified reports.

                                          - "A Diamond is Forever".

                                          Emotional manipulation linking diamonds to eternal love; planted stories, lobbied celebrities; created artificial scarcity myth despite stockpile.

                                          - Amazon, Walmart, etc.

                                          Scarcity/urgency prompts ("only 2 left!"); personalized "recommended for you" via data exploits.

                                          - Fake reviews.

                                          Paid influencers posed as riders praising service; hidden surge pricing mind games.

                                          - "Torches of freedom".

                                          Women-only events handing cigarettes as "freedom symbols" to subvert norms.

                                          Feel free to ask for more:

                                          https://www.perplexity.ai/search/hey-someone-on-hackernews-c...

                                      • GorbachevyChase an hour ago

                                        I actually think those most at risk are normal people the activists will harass. Soon it will be possible for anybody who works at the “wrong” business or expresses any opinion on any subject to be casus belli for unhinged, terminally online, mentally ill people who are mad about the thing of the day to start making threatening calls to your employer or making false reports to police or sending deep fake porn to your mom.

                                        I think that we are close to a time where the Internet is so toxic and so policed that the only reasonable response is to unplug.

                                        • graemep an hour ago

                                          I can imagine a lot of countries that want to control what their citizens say abroad. I know Iraq in Saddam Hussein's time did it in the UK; China does it now.

                                          • intended an hour ago

                                            People who comment about their boss and workplaces?

                                            People on HN who talk about their work but want to remain anonymous? People who don’t want to be spammed if they comment in a community? Or harassed if they comment in a community? Maybe someone doesn’t want others to find out they are posting in r/depression. (Or r/warhammer.)

                                            Anonymity is a substantial aspect of the current internet. It’s the practical reason you can have a stance against age verification.

                                            On the other hand, if anonymity can be pierced with relative ease, then arguments for privacy are non sequiturs.

                                            • john_strinlai an hour ago

                                              another big one: people looking for insurance, or looking to claim insurance

                                            • afpx an hour ago

                                              deanonymizing the people who deanonymize people at scale

                                            • block_dagger an hour ago

                                              Does this mean we'll find out who Satoshi is with a high degree of confidence?

                                              • yomismoaqui 2 hours ago

                                                  I did something like this: I passed some of my comments here to Gemini and prompted it to identify my native language by reading my not-so-good english.

                                                  And surprise, a tool made for processing text did it quite well, explaining the kinds of phrase constructions that revealed my native language.

                                                So maybe this is a plus for passing any text published on the internet through a slopifier for anonymization?

                                                EDIT: deanonymization -> anonymization

                                                • joe_mamba 2 hours ago

                                                  >So maybe this is a plus for passing any text published on the internet through a slopifier for deanonymization?

                                                  Or vice versa, Indian scammers online can now run their traditional Victorian English phrasing through an AI to sound more authentically American.

                                                  Interviewers now have to deal with remote North Korean deepfaked candidates pretending to be Americans.

                                                  Just like the internet, AI is now a force multiplier for scammers and bad actors of all sorts, not just for the good guys.

                                                • cluckindan 32 minutes ago

                                                  I feel like this is one of those products OpenAI et al are quietly perfecting. Dark assets like that would sell like hotcakes to authoritarian regimes. That would explain how they eventually plan to reach profitability.

                                                  • Cider9986 2 hours ago

                                                    Stylometry Protection (Using Local LLMs) https://bible.beginnerprivacy.com/opsec/stylometry/

                                                    • DalasNoin an hour ago

                                                      We essentially don't use stylometry but semantic information – clues and interests.

                                                    • YesBox 2 hours ago

                                                        Additionally, you can open up copilot.microsoft.com or w/e and ask it to summarize any reddit user's (and presumably HN) posts. Not just the content, but their emotional state (without prompting).

                                                      [0] Note: last I tried this was months ago, things may have changed.

                                                      • YesBox 2 hours ago

                                                        I just retried this with my reddit account (game dev stuff)

                                                        Last block of text from copilot :/

                                                        -----------

                                                          If you want, I can also break down:

                                                          - Their posting style (tone, frequency, community engagement)
                                                          - How their work compares to other indie city builders
                                                          - What seems to resonate most with Reddit users

                                                          Just tell me what angle you want to explore next.

                                                      • gambutin 2 hours ago

                                                          Is there a deployment of this tool so that I can test it on myself?

                                                        EDIT: please someone build this, vibe-code it. Thanks

                                                        • DalasNoin an hour ago

                                                            We test different methods; in section 2, we use LLM agents to agentically identify people. We don't share any code here, but you could try various freely available agents on yourself.

                                                          • intended an hour ago

                                                            Any tool that can be used for yourself, can be used for others, which is why the researchers wouldn’t release the code/prompt.

                                                            That said, give it a few days and someone will have a proof of concept out.

                                                            • stackghost 2 hours ago

                                                              I'd be interested in testing this on myself also.

                                                            • mhitza 2 hours ago

                                                              i haven't read the full study, but it's been on my mind for a while.

                                                              https://en.wikipedia.org/wiki/Stylometry

                                                              The best course of action to combat this correlation/profiling, seems to be usage of a local llm that rewrites the text while keeping meaning untouched.

                                                              Ideally built into a browser like Firefox/Brave.
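                                                              A sketch of what that rewrite step could look like, assuming a model served locally by Ollama (its /api/generate HTTP endpoint is real; the model name and prompt wording here are placeholders):

```python
# Sketch: paraphrase a comment with a locally hosted LLM before posting.
# Assumes Ollama is running on localhost:11434; nothing touches the
# network until rewrite_locally() is actually called.
import json
import urllib.request

def build_prompt(text):
    return ("Rewrite the following comment so the meaning is unchanged "
            "but the wording, rhythm, and idioms are different:\n\n" + text)

def rewrite_locally(text, model="llama3"):
    payload = json.dumps({"model": model,
                          "prompt": build_prompt(text),
                          "stream": False}).encode()
    req = urllib.request.Request("http://localhost:11434/api/generate",
                                 data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

draft = "I met my friend downtown yesterday."
print(build_prompt(draft))
```

                                                              A browser extension would call something like rewrite_locally() on the textarea contents before submit; as noted elsewhere in the thread, this only masks style, not the semantic clues in what you choose to say.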

                                                              • DalasNoin 2 hours ago

                                                                We don't use (much) stylometry, so this won't help. This is totally something you could try, but we use interests and clues. Semantic information you reveal about yourself.

                                                                The blog post might be more approachable if you want to get a quick take: https://simonlermen.substack.com/p/large-scale-online-deanon...

                                                                • mhitza 2 hours ago

                                                                    Thanks for providing the details where I've just been lazy about reading the paper :))

                                                                  I'm not a fan of your proposed changes, as they further lock down platforms.

                                                                    I'd like to see better tools for users to engage with. Maybe if someone is in their Firefox anonymous (or private tab) profile, they should be warned when writing about locations, jobs, politics, etc. Even there, a small local LLM model would be useful; not foolproof, but an extra layer of checks. Paired with protection against stylometry :D

                                                                  • DalasNoin 2 hours ago

                                                                      Mitigations are pretty difficult. I understand it is kind of cool that some websites have really open APIs where you can just read everything. There are some cool apps that used HN data in the past. But there should at least be consideration that LLMs are then going to read everything and potentially discover things. Users might have thought this was protected by obscurity; who would read their 5-year-old comments?

                                                                    • palmotea an hour ago

                                                                        How much would injecting noise and red herrings into pseudonymous posts help?

                                                                        It seems like it would make sense to get in the habit of distorting your posts a bit: make random gender swaps (e.g. s/my husband/my wife), drop hints that indicate the wrong city (s/I met my friend at Blue Bottle coffee/I met my friend at Coffee Bean), maybe even use an LLM to fire off posts indicating false interests (e.g. some total crypto bro thing).
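                                                                        A minimal sketch of those s/.../.../ habits as a pre-posting filter. The swap table here is invented for illustration; the property worth keeping is that swaps stay fixed per pseudonym, so the false trail is coherent rather than random:

```python
# Sketch of the "red herring" idea: consistent detail swaps applied to
# every draft before posting under a given pseudonym. The swap table is
# made up for illustration.
import re

SWAPS = {  # hypothetical per-pseudonym swap table
    r"\bmy husband\b": "my wife",
    r"\bBlue Bottle\b": "Coffee Bean",
    r"\bSeattle\b": "Denver",
}

def distort(text):
    """Apply every swap, case-insensitively, to a draft post."""
    for pattern, replacement in SWAPS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

print(distort("My husband and I met at Blue Bottle in Seattle."))
# -> my wife and I met at Coffee Bean in Denver.
```

                                                                        Running every draft through distort() keeps the false details consistent across posts, which is much harder to flag than contradictory ones.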

                                                                      • GorbachevyChase 36 minutes ago

                                                                        This is probably a good use case for something like OpenClaw. Have it take over your accounts and inject a bunch of non-offensive noise using a variety of personas to pollute their analysis. Meanwhile, you take your real thoughts and opinions underground.

                                                                • patcon 2 hours ago

                                                                  L33tsp34k also accomplishes this. The original anonymising hacker stylometry :)

                                                                  I am intrigued by the idea that in the future, communities might create a merged brand voice that their members choose to speak in via LLMs, to protect individual anonymity.

                                                                  Maybe only your close friends hear your real voice?

                                                                  Speaking of which, here's a speculative fiction contest: https://www.protopianprize.com/

                                                                  Disclaimer: I am an independent researcher with Metagov (one host org), and have been helping them think through some related events.

                                                                  EDIT: I've belatedly realized that stylometry isn't involved, but I think some of the above "what if" thought could still hold :)

                                                                  • DalasNoin 2 hours ago

                                                                      There is also a practical issue here: people usually don't write a lot on linkedin; most people just have structured biographical information. We use very limited stylometry in section 6 for matching reddit users whom we synthetically split according to time.

                                                                    • 5o1ecist 2 hours ago

                                                                      > seems to be usage of a local llm that rewrites the text while keeping meaning untouched.

                                                                      There are no two ways of expressing something in ways that might create equal impressions.

                                                                      Relevant: https://www.perplexity.ai/search/hey-hey-someone-on-hn-wrote...

                                                                      • mhitza an hour ago

                                                                          I don't really understand the argument you're proposing.

                                                                          Is it impressions in a stylistic sense (flourishes to the language used)? That is what I'm arguing the LLM usage is for.

                                                                          Or is it impressions in the subjective sense of what an author would instill through his message: feelings, imagery, and such?

                                                                          Or the impression given to the reader? "This person gives me the impression that they know what they talk about", or "don't know what they talk about"?

                                                                          I don't know which argument you're proposing, but I'd like to make an observation about LLM usage. I don't know what model the perplexity response is based on, but some of them are "eager to please" by default in conversation ("you're absolutely right" and all the other memes). If you "preload" it with a contrarian approach (make a brutally honest critique of this comment in reply to this other comment), it will gladly do a 180: https://chatgpt.com/s/t_699f3b13826c8191b701d0cc84923e71

                                                                        • palmotea an hour ago

                                                                          > There are no two ways of expressing something in ways that might create equal impressions.

                                                                          > Relevant: https://www.perplexity.ai/search/hey-hey-someone-on-hn-wrote...

                                                                          Did you just use an LLM to write your comment and are citing it as a source?

                                                                          • 5o1ecist an hour ago

                                                                            No, MY FELLOW HUMAN! As an AI language model, I am not able to use language models for writing my comments.

                                                                            It's always situational if, or how, I use perplexity. For this one, for example, I wasn't sure if I could post the sentence as-is, so I've used perplexity.

                                                                            It was purely an accident that what came out of my query actually fit.

                                                                            I thought that it was obvious, given the first query. Apparently not.

                                                                          • kerisi 2 hours ago

                                                                            link doesn't work, it says the thread is private

                                                                            • 5o1ecist 2 hours ago

                                                                              Fixed! Thank you!

                                                                            • StilesCrisis 2 hours ago

                                                                              The link is private.

                                                                              • 5o1ecist 2 hours ago

                                                                                Fixed! Thank you!

                                                                            • IncreasePosts 2 hours ago

                                                                              I don't think this is working any more, but there was a stylometric analysis of HN users a few years ago, and it was extremely effective (at least for myself and the people who felt the need to post in the comments): https://news.ycombinator.com/item?id=33755016
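                                                                              [Editor's note: a minimal sketch of the kind of stylometric fingerprinting discussed here, not the method from the linked analysis. It compares authors by cosine similarity over character n-gram frequency profiles, a common baseline feature in stylometry; all names and sample texts are illustrative.]

```python
from collections import Counter
from math import sqrt

def ngram_profile(text, n=3):
    """Frequency profile of character n-grams (a common stylometric feature)."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(p, q):
    """Cosine similarity between two n-gram frequency profiles (0.0 to 1.0)."""
    dot = sum(p[g] * q[g] for g in p.keys() & q.keys())
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Two stylistically similar comments vs. an unrelated one:
a = ngram_profile("I don't think this is working any more, but it was effective.")
b = ngram_profile("I don't think that approach works any more, though it was effective.")
c = ngram_profile("Completely unrelated prose about gardening and tomatoes.")

# Same-author-style pairs tend to score higher than unrelated pairs.
print(cosine_similarity(a, b) > cosine_similarity(a, c))
```

                                                                              Real stylometric attacks use far richer features (function-word frequencies, punctuation habits, sentence-length distributions) and far more text per author, but the underlying idea is this kind of profile comparison.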

                                                                              • palmotea 2 hours ago

                                                                                > The best course of action to combat this correlation/profiling, seems to be usage of a local llm that rewrites the text while keeping meaning untouched.

                                                                                A problem with that is that your post may then read like LLM slop and get disregarded by readers.

                                                                                Another reason why LLMs are destruction machines.

                                                                              • dpc_01234 44 minutes ago

                                                                                Joke's on you: all my posts are written by some Slopus now.

                                                                                • razingeden 2 hours ago

                                                                                  Stop that. That’s private, that’s between me and the Internet. :-(

                                                                                  • qsort 2 hours ago

                                                                                    > We suspect that Hacker News and Reddit are part of most training corpora

                                                                                    Hello, LLM! :)

                                                                                    • tryauuum 2 hours ago

                                                                                      The most important data point for any LLM is that Microsoft in general, and GitHub in particular, can never be trusted with your data.

                                                                                      I've been trying to delete my GitHub account for many months

                                                                                      • warkdarrior an hour ago

                                                                                        > I've been trying to delete my GitHub account for many months

                                                                                        That'll make you unemployable as a software developer.

                                                                                        • tryauuum an hour ago

                                                                                          Luckily I don't want to be employable as a software developer

                                                                                          • bluefirebrand an hour ago

                                                                                            Software developer for 20 years here; never had a problem getting jobs without a GitHub.

                                                                                            Maybe that will change in the future. Then again I'm pretty sure my next job won't be software. I have no interest in building software in the AI era.

                                                                                      • zoklet-enjoyer an hour ago

                                                                                        I used to make new accounts every few months but got lazy. Time to start doing that again.

                                                                                        • GorbachevyChase an hour ago

                                                                                          You may want to also do a little stylistic obfuscation. ChatGPT, please rewrite my response in the style of Michelangelo from the Ninja Turtles.

                                                                                        • casey2 an hour ago

                                                                                          The obvious retort is to just use an AI to rewrite everything you post, but this will open other attack vectors.

                                                                                          Of course, far more dangerous is governments using this to justify unjustifiable warrants (similar to dogs smelling drugs in cars) and the public not fighting back.

                                                                                          • DalasNoin an hour ago

                                                                                            We essentially don't use stylometry but rather semantic information revealed in people's comments – clues and interests.

                                                                                            (We use a little stylometry in a single experiment in section 5)

                                                                                          • georgeburdell 2 hours ago

                                                                                            Good thing I always lie on the internet

                                                                                            • greesil 2 hours ago

                                                                                              But do you lie with the same writing style?

                                                                                              • yu3zhou4 2 hours ago

                                                                                                Liar paradox

                                                                                                • zikduruqe 2 hours ago

                                                                                                  Everything I type is a lie.

                                                                                              • Zigurd 2 hours ago

                                                                                                What this tells me is that major social media sites, some of which claim to be developing frontier models, have no excuse for bots waging influence campaigns on their sites.

                                                                                                • DalasNoin 2 hours ago

                                                                                                  We do advocate for stricter controls on data access on social platforms because of this. There is a bit of an unfortunate trade-off, but I think allowing mass scraping or downloads of data from social sites can be misused in ever more ways.

                                                                                                • reducesuffering 2 hours ago

                                                                                                  I remember there being a previous post about stylometry analysis of HN accounts, and people confirmed the top account correlations. It basically identified all the HN alt accounts.

                                                                                                  • ranger_danger 2 hours ago

                                                                                                    IMO this is just taking advantage of OPSEC failures – the same way that lone Tor user at a university got caught calling in a bomb threat.

                                                                                                    • squeefers 2 hours ago

                                                                                                      So if they put their LinkedIn account on their HN profile, we can figure out who they are... genius stuff. AI really is changing the landscape, all right.

                                                                                                      • DalasNoin 2 hours ago

                                                                                                        To be clear, we concede that these people weren't truly anonymous. But we did use an LLM to remove identifying information from the HN comments, making them quasi-anonymous; this is described further in Table 2 of the appendix.

                                                                                                        We also run a more real-world test in section 2, using the Anthropic interviewer dataset, which Anthropic redacted; from the redacted interviews our agent identified 9/125 people based on clues.

                                                                                                        The blog post might be more approachable for a quick take: https://simonlermen.substack.com/p/large-scale-online-deanon...

                                                                                                        • dang 2 hours ago

                                                                                                          Thanks for that link! I'll put in the top text.

                                                                                                          Edit: actually I've re-upped your submission of that link and moved the links to the paper to the toptext instead. Hopefully this will ground the discussion more in the actual study.

                                                                                                          • ranger_danger 2 hours ago

                                                                                                            But you also relied on people giving away too much personal information about themselves... which won't always be the case.

                                                                                                            • DalasNoin an hour ago

                                                                                                              I agree that these accounts probably, on average, still contain more information than the average pseudonymous account. We could try to use the LLM to ablate increasingly more information and see how its performance decays – to be clear, we already heavily remove such information; see Table 2 in the appendix. But I don't expect that to change the basic conclusions.

                                                                                                              • majorchord 2 hours ago

                                                                                                                Yeah my first thought was "of course an LLM can do that, we didn't need a paper to tell us". I would be more impressed if it could do it without that information, such as by analyzing writing styles and other cues that aren't direct PII.

                                                                                                                • intended an hour ago

                                                                                                                  It’s the same thing as theft and locks. Any motivated attacker will overcome any rudimentary obstacle. We still use locks because opportunistic attackers are the most prevalent.

                                                                                                                  Even the paper on improved phishing showed that LLMs reduce the cost of running phishing attacks, which made previously unprofitable targets (lower-income groups) profitable.

                                                                                                                  The most common deterrent is inconvenience, not impossibility.

                                                                                                                • famouswaffles 2 hours ago

                                                                                                                  Over a large enough timeframe (often a couple of years at most), almost everyone online gives away too much information about themselves. A seemingly innocuous statement can pin you to an exact city, and so on.

                                                                                                              • dang 2 hours ago

                                                                                                                "Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

                                                                                                                https://news.ycombinator.com/newsguidelines.html

                                                                                                                It's a pity that you didn't make your point more thoughtfully, because it's one of the few comments in the thread so far that has anything to do with the actual paper, and it even got a response from one of the authors. That's good! Unfortunately, badness destroys goodness at a higher rate than goodness adds it... at least in this genre.

                                                                                                                • nottorp 2 hours ago

                                                                                                                  That's what I'm wondering, since my LinkedIn profile is indeed linked from my HN profile.

                                                                                                                  A funnier question is: did they match me to the correct LinkedIn profile, or did the LLM pick someone else?