Some psychologists fine-tuned a Llama-family LLM on psychology data - “a data set called Psych-101, which contained data from 160 previously published psychology experiments, covering more than 60,000 participants who made more than 10 million choices in total” - so I guess maybe 10 million tokens?
That part seems to make sense, but I cannot rightly comprehend the confusion that follows.
Some psychology researchers are claiming it has become a model of human cognition? (Because it can imitate the way a psychology study participant answers psychology study questions?)
Other psychology researchers are disputing this by testing its reaction time and digit span memory? (Are they administering an IQ test? A cranial nerves exam?)
Reminds me of Jipi and the Paranoid Chip[1] by Neal Stephenson.
[1] https://web.archive.org/web/20060830131222/http://www.vanemd...
Well that was a disturbing read...
I don’t think healthy scepticism is (or should be) controversial. But I find it interesting how willing certain people are to confidently claim that a model does or does not accurately model human cognition when we clearly still _barely understand human cognition_.
Where do people derive their certainty, which seems to me largely misplaced?
Is the goal of the pitch to meaningfully and objectively advance the reach of human knowledge, in which case self-scrutiny is certainly critical? Or is the goal of the pitch to obtain $$$ funding and investment, get something publishable, gain personal renown, etc., in which case self-scrutiny is counterproductive?
The tech industry has a very long and proud history of gaining a surface-level understanding of a different industry, then immediately claiming to "disrupt" it.
The unearned confidence of tech bros should be studied
> But I find it interesting how willing certain people are to confidently claim that a model does or does not accurately model human cognition when we clearly still _barely understand human cognition_.
Wait a second. Being confident in a claim of understanding some very-not-understood thing is certainly dubious.
But consider some rando who says "I understand X very-not-understood-thing" without strong evidence one way or another. Yes, I feel moderately confident they are wrong. And I think your statement presents a rather problematic false equivalence between these two situations.
I don't understand the "does or does not" framing, as if there's symmetry to the debate. One side is claiming that we're on the verge of creating the very thing you say we barely understand. The other side should remain certain that no such thing is happening until there's proof that it is. And what would that proof even look like when, without said understanding, all we can do is point to output from a program that looks or sounds like its training data?
Looking and sounding like your training data != recreating the source.
Is ChatGPT a human mind?
The burden of proof is on the one making the (extraordinary) claim.
I have the same objections to the term "IQ".
Neuroscience is still struggling to understand the basic operations of the brain, let alone the "mind." There's no agreed-upon definition of "intelligence." Can you define cognition (literally, "knowing") without defining intelligence?
These fields are making remarkable strides, but they're still in their infancy. Whoever writes these breathless press releases probably has a degree in marketing.
The Science article (published one day earlier) seems to have more information: https://www.science.org/content/article/researchers-claim-th...
Current HN post points to a different article, which points to that science article; perhaps the HN post’s link should be updated.