I'm interested in this topic, but it seems to me that the entire scientific pursuit of copying the human brain is absurd from start to finish. Any attempt to do so should be met with criminal prosecution and immediate arrest of those involved. Attempting to copy the human brain or human consciousness is one of the biggest mistakes that can be made in the scientific field.
We must preserve three fundamental principles:

* our integrity
* our autonomy
* our uniqueness
These three principles should form the basis of laws worldwide that prohibit cloning or copying human consciousness in any form or format. They should be fundamental to any attempt to research, or even try to make, copies of human consciousness.
Just as human cloning was banned, we should also ban any attempts to interfere with human consciousness or copy it, whether partially or fully. This is immoral, wrong, and contradicts any values that we can call the values of our civilization.
I wouldn't be surprised if in (n hundreds/thousands of years) we find out that copying consciousness is fundamentally impossible (just like it's fundamentally impossible to copy an elementary particle).
It’s named after the multi-decade data compression test image https://en.wikipedia.org/wiki/Lenna
Buy the book! https://qntm.org/vhitaos
Just sharing that I bought Valuable Humans in Transit some years ago and I concur that it's very nice. It's a tiny booklet full of short stories like Lena that are way out there. Maximum cool per gram of paper.
That feels grossly inappropriate
If you read the original text, what happens in that story is also grossly inappropriate. Maybe that's the parallel.
could you be more specific?
Lena is no longer used as a test image because it's porn. It's banned from several journals because it's porn. As in they will reject any paper that uses Lena no matter the technical content.
The reasons usually given for choosing this image are all just rationalisations — Lena is used the most because it's porn and image compression researchers are all male. It belongs as part of a test set, sure, but there's no reason it should be the single most used image. Except because it's porn.
The woman herself says she never had a problem with it being famous. The actual test image is obviously not porn, either. But anything to look progressive, I guess.
From the link above
> Forsén stated in the 2019 documentary film Losing Lena, "I retired from modeling a long time ago. It's time I retired from tech, too... Let's commit to losing me."
> Lena is no longer used as a test image because it's porn.
The Lenna test image can be seen over the text "Click above for the original as a TIFF image." at [0]. If you consider that to be porn, then I find your opinion on what is and is not porn to be worthless.
The test image is a cropped portion of porn, but if a safe-for-work image would be porn but for what you can't see in the image, then any picture of any human ever is porn as we're all nude under our clothes.
For additional commentary (published in 1996) on the history and controversy about the image, see [1].
[0] <http://www.lenna.org/>
[1] <https://web.archive.org/web/20010414202400/http://www.nofile...>
Same person who wrote SCP Antimemetics Division which is great too
If you liked this piece, please, go play SOMA, you will love it.
qntm is a really talented sci-fi writer. I have read Valuable Humans in Transit and There Is No Antimemetics Division and both were great, if short. Can only recommend.
I loved There Is No Antimemetics Division. I haven't read the new update to the end, but the prose and writing are greatly improved. The idea of anomalous anti-memes is scary. I mean, we do have examples of them, somewhat, see Heaven's Gate and the Jonestown massacre, though they're more like "memes" than "antimemes" (we know what the ideas were and they weren't secrets).
I remember being very taken with this story when I first read it, and it's striking how obsolete it reads now. At the time it was written, "simulated humans" seemed a fantastical suggestion for how a future society might do scaled intellectual labor, but not a ridiculous suggestion.
But now, with modern LLMs, it's impossible to take it seriously. It was a live possibility then; now, it's just a wrong turn down a garden path.
A high variance story! It could have been prescient, instead it's irrelevant.
This is a sad take, and a misunderstanding of what art is. Tech and tools go "obsolete". Literature poses questions to humans, and the value of art remains to be experienced by future readers, whatever branch of the tech tree we happen to occupy. I don't begrudge Clarke or Vonnegut or Asimov their dated sci-fi premises, because prediction isn't the point.
The role of speculative fiction isn't to accurately predict what future tech will be, or become obsolete.
Yeah, that's like saying Romeo and Juliet by Shakespeare is obsolete because Romeo could have just sent Juliet a snapchat message.
You're kinda missing the entire point of the story.
100% agree, but I relish the works of William Gibson and Burroughs, who pose those questions AND get the future somewhat right.
I think that's a little harsh. A lot of the most powerful bits are applicable to any intelligence that we could digitally (ergo casually) instantiate or extinguish.
While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets instead of a simulation of a human brain, the way we might treat them isn't any less thought-provoking.
when you read this and its follow-up "driver" as a commentary on how capitalism removes persons from their humanity, it's as relevant as it was on day one.
good sci fi is rarely about just the sci part.
That is the same categorical argument as what the story is about: scanned brains are not perceived as people so can be “tasked” without affording moral consideration. You are saying because we have LLMs, categorically not people, we would never enter the moral quandaries of using uploaded humans in that way since we can just use LLMs instead.
But… why are LLMs not worthy of any moral consideration? That question is a bit of a rabbit hole with a lot of motivated reasoning on either side of the argument, but the outcome is definitely not settled.
For me this story became even more relevant since the LLM revolution, because we could be making the exact mistake humanity made in the story.
I actually think it was quite prescient and still raises important topics to consider. Irrespective of whether the weights are uploaded from an actual human, if you dig just a little bit under the surface details, you still get a story about the ethical concerns of a purely digital sentience. Not that modern LLMs have that, but what if future architectures enable them to grow an emergent sense of self? It's a fascinating text.
“Irrelevant” feels a bit reductive while the practical question of what actually causes qualia remains unresolved.
Lena isn't about uploading. https://qntm.org/uploading
I have never seen it as a prediction of actual technology, but mostly as a horror story. And a warning, I guess, in the unlikely case that brain uploading becomes a thing.
Found the guy who didn't play SOMA ;)
I always laugh at such fantasies.
You can't copy something you haven't the slightest idea about, and nobody at the moment knows what consciousness is. We as humanity haven't even started down the (obviously) very long path of researching and understanding what consciousness is.