I have to say that I think the photo without the Lightroom processing actually looks better. The second one hasn't just added a bitcoin, it has also added the "AI shimmer" that seems to be a part of a lot of generated images. I can't put my finger on exactly what the characteristic is, but my first instinct (separate from the bitcoin, which was hard to find) was "that's an AI picture." Anyone who doesn't like those glints should just spend some time with the blur tool.
I don't think there's any AI-fication going on in that photo. The modified version has a more compressed tone curve to bring out more detail, along with jacked up saturation (especially evident for water). This is similar to what most cell phones do by default to make photos look more pleasing.
I do agree that the original looks better, but the author of the post clearly prefers the modified version.
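For the curious, that default phone look is easy to approximate. A crude Pillow sketch (file names made up; real pipelines use proper tone curves, not global contrast/saturation knobs):

    from PIL import Image, ImageEnhance

    img = Image.open("bird.jpg")
    img = ImageEnhance.Contrast(img).enhance(0.85)    # flatten the tone curve a bit
    img = ImageEnhance.Brightness(img).enhance(1.10)  # lift midtones
    img = ImageEnhance.Color(img).enhance(1.30)       # jack up saturation
    img.save("phone-look.jpg")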
Clipped highlights in digital photography are simply impossible to eliminate in post-processing without conjuring nonexistent information. Even if you shoot raw. Different raw processors use different tricks, color propagation and such, but naturally they work best when the clipped areas are really small. I would not be surprised if tools like Lightroom invoke ML at the first hint of clipping (because why not, if all you have is a hammer…).
Pro tip: digital sensors are much less forgiving than negative film when it comes to exposing highlights. With a bit of foresight, they are best tackled at shooting time. Highlights from water/glass reflections are tamed by a fairly cheap polarizing filter, and if you shoot raw you should do the opposite of what you'd do with negative film and always underexpose a scene with bright highlights (especially if the highlights are large or are on your subject of interest). Let it be dark; you will have more noise, but noise is manageable without having to invent what doesn't exist in the parts most noticeable to the human eye.
> Clipped highlights in digital photography are simply impossible to eliminate in post-processing without conjuring nonexistent information. Even if you shoot raw.
Huh? This used to be true twenty years ago, but modern sensors in prosumer cameras capture a lot more dynamic range than can be displayed on the screen or conveyed in a standard JPEG. If you shoot raw, you absolutely have the information needed to rescue nominally clipped highlights. You get 2-4 stops of latitude without any real effort.
The problem here is different. If the rest of the scene is exposed correctly, you have to choose one or the other: overexposed highlights or an underexposed subject. The workaround is to use tone mapping or local contrast tricks, but these easily give your photos a weird "HDR" look.
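If you want to see the raw latitude yourself, here's a minimal rawpy/LibRaw sketch (file name and settings are made up; Lightroom's Exposure/Highlights sliders do something comparable but fancier):

    import imageio.v3 as iio
    import rawpy

    with rawpy.imread("bird.nef") as raw:
        rgb = raw.postprocess(
            no_auto_bright=True,                       # don't let LibRaw re-brighten
            exp_shift=0.5,                             # pull exposure down one stop
            highlight_mode=rawpy.HighlightMode.Blend,  # blend clipped highlights
        )
    iio.imwrite("recovered.jpg", rgb)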
You may be confused on at least two counts:
— If you think there is “clipping” and then some separate thing called “nominal clipping”.
Clipping is binary. Either your camera’s pixel clipped (i.e., accumulated enough photons to get fully saturated, and therefore offers no useful information for debayering), or it did not. If it clipped, then you lost colour data in that part of the image. If you lost colour data, and you want a colour photo, then in almost all cases one way or another you will have to rescue it by conjuring up information for the photo to look authentic and aesthetically good. (A lot of raw development software does it for you by default.) If the clipped bit is small, it is easy to do subtly. If it is big, which is a real danger with sun reflection bokeh, then… whoopsie. (A quick way to check for clipping is sketched after my second point.)
— If you think modern sensors are safe from clipping.
This applies to any digital camera, from consumer to top professional models. Sensor bit depth is not even remotely enough to expose a scene with extreme highlights (e.g., the sun or its reflections) without taking special measures regarding exposure at capture time: a 14-bit sensor spans at most about 14 stops, while a scene containing direct sun reflections can easily exceed that by a wide margin. If you don’t tame highlights with a polarizing filter, you must dramatically underexpose the scene or they will clip.
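To make the “binary” in my first point concrete, here's a quick sketch of counting saturated photosites in a raw file with rawpy (hypothetical file name; strictly you'd also account for per-channel black levels, which I'm glossing over):

    import rawpy

    with rawpy.imread("bird.nef") as raw:
        data = raw.raw_image_visible       # pre-debayer sensor values
        clipped = data >= raw.white_level  # fully saturated photosites
        print(f"{clipped.mean():.3%} of photosites clipped")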
They specifically clarified that they were talking about the JPEG/picture on screen, which has less dynamic range than the sensor and can clip without losing pixel information.
We are specifically discussing scene-referred image data: the raw pixel values obtained from a digital camera sensor. That is where unrecoverable information loss occurs at capture time. It is when parsing that data that LR et al. engage various highlight reconstruction algorithms to fill in the missing information. My point is that a lot of that loss is often avoidable at capture time.
Nah -- even in the supplied JPG the histogram shows they're not clipping the highlights, and if you crank down the brightness, you can see the detail.
You may be confusing display-referred and scene-referred image data.
1. Clipping occurs in the scene-referred raw camera sensor pixel data at capture time.
2. Raw processing software fills in the highlight by supplying the missing colour information. Often this happens by default.
3. When you obtain a display-referred JPEG, the missing information for clipped highlights has already been supplied. Tone curves and other adjustments are applied after highlight reconstruction, making the final result look more organic.
In other words, with modern digital photography processing workflows you will never see any clipping on a histogram for the final JPEG (unless something went really wrong or it was done intentionally for a look), so it is a poor foundation for assumptions about what clipping had or had not occurred at the capture stage.
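A toy numpy illustration of just the tone-curve half of this (ignoring reconstruction entirely; the shoulder function is an arbitrary stand-in for whatever a real raw developer applies): a chunk of the scene-referred values are pegged at maximum, yet nothing in the output histogram touches 255.

    import numpy as np

    # Toy linear sensor values normalized to [0, 1]; 1.0 means clipped.
    rng = np.random.default_rng(0)
    raw = np.clip(rng.lognormal(mean=-1.5, sigma=0.9, size=100_000), 0, 1)
    print(f"clipped at capture: {(raw == 1.0).mean():.2%}")

    # A soft shoulder curve maps 1.0 to ~0.89, i.e. well below 255.
    jpeg = np.round(255 * (1 - np.exp(-2.2 * raw))).astype(np.uint8)
    print(f"pixels at 255 in the 'JPEG': {(jpeg == 255).mean():.2%}")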
What you can safely assume, however, is that a normally exposed digital photo featuring any sun reflections from water or glass will almost certainly have blown highlights.
Without the raw image, we won't really know whether there was clipping. However, it is clear that the poster wanted to remove the highlights and was okay with introducing some information that wasn't in the photo to do it, hence the use of an AI filter. That means there would have been no issue with using the clone stamp or the paintbrush tool either.
This sounds about right. However, regarding your first sentence, I’d be a bit less cautious: sun reflections from water on a clear day guarantee clipping unless a polarizing filter is used or the scene is dramatically underexposed, and based on the general look of the photo (even knowing it’s after processing) my intuition tells me that neither took place.
Hence my original advice: don’t leave it up to ML when a little forethought can do wonders in terms of preserving information.
What are you talking about? I see clipped histograms come out of cameras all the time. I’ve been working with digital images and digital photo manipulation for thirty years.
Regardless, it’s not generative AI making up details. The differentiated pixels are there in the camera jpeg even if traditional image processing techniques have to be used to make them visible onscreen. The complete structure of the feathers that isn’t visible in the camera jpeg, for example, is plainly visible just modifying the contrast.
Subject domain knowledge mismatch appears to be too strong for further constructive discussion.
Oh, yeah, compression of the dynamic range combined with increased brightness makes sense. It's not exactly just Stable Diffusion that produces that look; things like ML filters do too.
This reminds me of the soap-opera effect[0] on modern TVs. I have difficulty watching a movie on someone’s TV with it enabled, but they don’t even seem to notice.
A truly bizarre effect. One of the first times in my life I was ever thinking "wait, this looks TOO smooth. it's weird". As if my eyes just instinctively knew there were fake frames (before I understood the concept of "frames").
That could be the VAE? The "latent" part of latent diffusion models is surprisingly lossy. And however much the image is getting inpainted, the entire thing gets encoded and decoded.
Edit: I'll note some new models (SD3 and Flux) have a wider latent dim and seem to suffer from this problem less.
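The lossiness is easy to measure if you want to check this. A sketch with diffusers, assuming the public "stabilityai/sd-vae-ft-mse" SD1.x VAE and a local photo.png (what Adobe actually runs is anyone's guess):

    import numpy as np
    import torch
    from diffusers import AutoencoderKL
    from diffusers.utils import load_image

    vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
    img = load_image("photo.png").convert("RGB").resize((512, 512))
    x = torch.from_numpy(np.asarray(img)).float() / 127.5 - 1.0
    x = x.permute(2, 0, 1).unsqueeze(0)  # NCHW, values in [-1, 1]

    with torch.no_grad():
        z = vae.encode(x).latent_dist.mean  # 4-channel 64x64 latent
        y = vae.decode(z).sample            # decode back to pixels

    mse = torch.mean((x - y) ** 2).item()
    print(f"round-trip PSNR: {10 * np.log10(4.0 / mse):.1f} dB")  # peak-to-peak = 2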
AI generated images are also biased strongly towards medium lightness. The photographer's adjusting of the tone curve may simply give it that "look".
You absolutely do not need to encode and decode the whole image, even in ComfyUI. All you need to do is composite the changed areas back into the original photo. There are nodes for that, and I’m sure that’s what Adobe does as well, if they even encode in the first place. These tools don’t really work quite like inpainting. There’s no denoise value - it’s all or nothing.
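The compositing step itself is a one-liner, e.g. with Pillow (file names hypothetical; mask is white where the edit happened):

    from PIL import Image

    original = Image.open("original.png").convert("RGB")
    generated = Image.open("generated.png").convert("RGB")
    mask = Image.open("mask.png").convert("L")  # white = edited region

    # Take generated pixels where the mask is white, original elsewhere,
    # so untouched areas never pass through the model at all.
    Image.composite(generated, original, mask).save("result.png")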
I’ve used Photoshop’s generative fill many times on singular images and there’s no loss on the ungenerated parts.
If you zoom in and squint your eyes, it does look like some kind of shiny coin.
What I'd like to know, though... is how the model is so bad that when you tell it to "remove this artifact"... instead of looking at the surroundings and painting over it with some DoF-ed out ocean... it slaps an even more distinct artifact in there? Makes no sense.
A lot of current inpainting models have quite a lot of "signal leak". They're more for covering stuff vs removing it entirely.
Ironically, some older SD1/2-era models work a lot better for complete removal.
I mean, this is notable because it screwed up. It usually does a pretty good job. Usually.
In this case there are better tools for the job anyways. Generative fill shines when it’s over something that’d be hard to paint back in - out of focus water isn’t that.
Heaven forbid your picture has a woman in it somewhere though; Adobe's AI will refuse to fill half the time. I've taken to censoring out blocks of the image with black squares if it has any body parts showing (still clothed), then filling, copying, and undoing the censoring. It's pretty ridiculous for a paid subscription.
For a paid product, even if the content explicitly contained nudity or depicted sexual activity, it should still be allowed, as those are valid use cases for Lightroom and Photoshop. The censorship in AI is stupid; babysitting users should not be part of the tool's responsibility. It's like banning kitchen knives to keep people from using them for violence.
Apparently this isn't an isolated incident: https://www.reddit.com/r/photoshop/comments/1e5nyt7/generati...
“Large company ships half-baked product” has gotta be the least interesting story to read.
Sure, if you view this as an isolated incident. But I think of it more as the latest example of a larger trend: the industry has gone mad, actively making its products worse with half-baked AI features. That is a more interesting story.
And this is the closest thing the professional imaging world has to a broadly-available tool designed for high-end professional use cases. In its current state it's barely good enough for throwaway blog headers when it comes to large edits, and for small edits it's exactly 0 percent better than the removal tools they added 20 years ago. Adobe better start giving some love to its professional users, because the alternatives are getting better and their ecosystem is getting worse. It's like they're trying to put themselves in a position where they're competing with Canva and Instagram rather than Affinity, Procreate, Rebelle, etc. If it's not clear to them yet that they're not right around the corner from having their AI tools be a drop-in replacement for their regular stuff, they're gonna have a bad time.
Is it actively worse, though? My impression is that all of the other, classical-based in-painting methods are still alive and well in Adobe products. And I think their in-painting works well, when it does work. To me, this honestly sounds like an improvement, especially in a creative tool like Lightroom or Photoshop --- the artist has more options to achieve their vision; it's usually understood to be up to the artist to use the tools appropriately.
I'm not an artist or an Adobe customer, but when I see products adding half-baked or defective features, it tarnishes their brand and would definitely make me re-consider trusting their product. Especially for professional use, and regardless of whether the rest of the product still works fine. It's a general indicator of carelessness and tolerance of quality problems.
Unfortunately, the Adobe ecosystem on a whole is like OS-level complex, and they're pretty much ignoring anything that isn't directly related to generative AI, and that stuff is only consistently good for the lowest-level use cases. Miles more useful than Comfy, etc. for most artists and designers, but not close for people that need to do more skillful work. The value of Adobe as an ecosystem was their constant upgrading and keeping up with the times, and now, if it's not some cockamamie generative AI feature, it's going nowhere. They're even worse with bug fixes than they were before.
The fact that we call it the least interesting story shows exactly how far gone things are: we just accept that companies are expected to ship broken slop.
I don't see where in the picture the bitcoin close-up is zoomed from.
You have to click into it as it's not visible from the preview.
Bottom left.
It's in the second picture, not the first.
Down from the tip of the bird's right wing, near the very bottom of the image.
Question for people who know Adobe Lightroom: could this feature be compromised? Is it just making API calls to some remote service?
Lightroom has a local Heal/Remove feature, and at least with LR Classic you have to tick a box for the AI remove, which processes it on Adobe servers.
As for whether it can be compromised... Probably? It sends all or some of your photo to a remote server, so that path can certainly be attacked.
I mean, getting the model to behave this way looks too easy, and I'd guess Adobe does QC on the features it releases, so I'm not sure I see an alternative explanation - unless Adobe's QC is poor or nonexistent.
I'm not sure what you mean by compromised but I'm pretty sure Adobe Firefly AI features are server-based. These features are too good to be done locally.
Plus even if it could be done locally, doing it server-side has the side benefit (for Adobe) of making it trivial to prevent pirates from ever being able to use those features.
By compromised I mean something like someone gaining access to Adobe's servers where this runs and uploading troll models or toying with the model's responses.
It’s almost like integrating a poorly understood black box with your software is a bad idea.
It's telling you what it's mining in the background with those extra gpu cycles.
I had to look up "butlerian jihad" (from one of the bsky comments) and now I want one too. Yow.
They'll just add a disclaimer somewhere.
So HN, any theories on how this happened?
They used a circular mask and the model overfitted on Bitcoins as likely examples of circles? Adobe's models are only trained on their stock image library and they have a whopping 646,136 Bitcoin images.
that seems like too many
Correction, they have nearly a million Bitcoin images once you unfilter the ones tagged as AI generated, which are hidden by default. I assume they don't train their models on those though.
All of them AI slop of course. They train on this? I guess it's garbage in, garbage out.
Seems like they used some AI tool to remove speckles from an image. The tool has to generate a likely replacement. And one of the speckles looked a bit like a coin.
That's the bit that puzzles me the most though: you want that bit gone, so why would it fill the area in with something that looks like the shimmer? If there's an airplane in the sky that you want gone from a medieval movie frame, and you select the airplane and choose "remove", surely it doesn't fill in another airplane just because that's the closest match for what's being selected?
I must be missing something obvious but I don't see it mentioned in the submission or comments, or perhaps I'm not making the connection
What does gone mean though? You aren’t just drawing a black or transparent spot underneath what you want removed. You’re filling it in with the most likely background. Which is not a trivial operation. It’s an AI generative fill which is obviously unpredictable. You’d expect it to generate ocean but it drew a bitcoin.
That’s just how AI generative fill works. You keep running it until it looks how you want.
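For the curious, an open-source stand-in for what's happening under the hood, via diffusers (using the public "stabilityai/stable-diffusion-2-inpainting" checkpoint; Adobe's Firefly is proprietary and surely differs in the details):

    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = load_image("bird.jpg").resize((512, 512))
    mask = load_image("mask.png").resize((512, 512))  # white = regenerate

    # The masked region is synthesized from noise, conditioned on the
    # surroundings; nothing guarantees "out-of-focus ocean" comes back.
    result = pipe(prompt="", image=image, mask_image=mask).images[0]
    result.save("filled.jpg")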
Your explanation could make sense for TFA, but definitely not for the case below:
https://www.reddit.com/r/photoshop/comments/1e5nyt7/generati...
Maybe cryptobros with zero skills and no talent created too many images by copy-pasting the same bitcoin image over everything, which poisoned the training data.
I also suspect there's a bug involved, where the user selects a circular mask, but the outer edge is slightly blurred (feathered), which, when multiplied with a white background, gives a light circle-shaped contour. After that, Photoshop fills in not a "hole in the sky" but a "light circle-shaped object" in the sky.
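Pure speculation, but here's a toy numpy version of the failure mode I mean: a feathered mask pre-multiplied against white bakes a bright halo into the very region the model is then asked to fill.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    size = 128
    yy, xx = np.mgrid[:size, :size]
    disk = (((xx - 64) ** 2 + (yy - 64) ** 2) < 30 ** 2).astype(float)
    feathered = gaussian_filter(disk, sigma=4)  # soft-edged circular mask

    # Pre-multiplying against white instead of treating edge pixels as
    # "unknown" leaves a light circle-shaped contour in the hole.
    sky = np.full((size, size), 0.4)  # mid-toned background
    hole = feathered * 1.0 + (1 - feathered) * sky
    print(f"hole values span {hole.min():.2f} to {hole.max():.2f}")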
I read through Google News without logging in, and ever since that thing happened in November, it’s been flooded with crypto stories decorated with AI art. It totally makes sense to me that BTC would appear in this photo, since crypto bros seem to be the ones using AI images the most.
Sorry, tangent, but does anyone remember the AI zoom in some old phone camera that was hallucinating a celebrity's face? It was much before the moon zoom hallucinations.
I think you are referring to this: https://petapixel.com/2020/08/17/gigapixel-ai-accidentally-a...
Discussed 4 years ago: https://news.ycombinator.com/item?id=24196650
AI is starting to backmask. Even it recognizes Bitcoin as the way forward
Or AI is falling into the same hype event horizon.
Any reason why "AI made a flub" is still newsworthy? We know it does this.
Because of people posting about the singularity and AGI prospects: we need to keep the feed of counterexamples active. After all, this is training data too.
Gotta find out the vulnerabilities of the paperclip maximizer; disguising ourselves as a cryptocurrency could be our only hope.
Maybe AI is using bitcoin to transact in the black market and this is evidence of it
You only had to add "with drones" and you'd have the trifecta. No wait: quantum drones.
Because this is Adobe Lightroom, which is meant to be, like, a real product that is used for real things.
It causes a lot of emotions which is good for attention online.
1. It makes people feel good because bad AI can’t take jobs away
2. It makes people feel bad because it’s further enshittification of experiences that are already getting worse
Because many don't know how fallible and useless these unnecessary products are!
Ok I'm getting tired of this habit of deep linking to apps. The apps either override existing web pages featuring the same content, or there's a web redirect to an app I might have installed but am not signed into. So in multiple cases, it's difficult to just read whatever the information is.
Web pages are universally accessible. Everyone has a browser on their device of choice. The web is 35 years old. Access to information is a solved problem.
Guess I'm just complaining. There are just so many links being submitted to HN these days that require accounts or specific apps installed. Feels like there should be a rule around this.
The link works fine for me? Not logged in nor do I have bluesky. Loads in chrome, I can see the 3 images, and quite a few comments beneath their post
What is being deeplinked? I was able to read the page and click on the pictures all in my browser (iOS safari).
They mean how the phone lets apps handle browser urls instead of just opening the url in the browser.
That’s not deep linking (i.e., something set up by the person posting the link). It’s simply the specific device being configured to handle regular web links differently.
I know what you're referring to. However there's no ability on iOS to control this behavior.
There is, but it's not intuitive. Long-press on a link and you can open it without being sent to the app, and apparently this choice sticks.
I think this must be a setting on your device to open links in the app.
If you’re on iOS, long-pressing on the bsky.app link in Safari and choosing “open” will open the link in Safari and remember your choice; the same works for YouTube links, etc.
Nah totally agreed, pet peeve of mine is trying to open a link in my browser to a website and my phone decides it knows better and insists on redirecting me to "The App" (which I've never installed and don't want, so it sends me to download it)
I don't have the bsky app, nor am I logged in on my phone's browser, and this opened fine for me. No app redirects.
What are you even talking about? It is literally a website.
I have the Bluesky app installed but I'm not logged in. So the website link is also being treated as an app deep link. It opens the app instead of just opening the website.
I don't have a lot of familiarity with app deep links, but my understanding is that originally deep links required a special non-domain to be registered that was separate from a normal web url.
Somewhere along the way that's changed, and now regular web links like this default to opening in apps if the user has them installed.
There is no such thing as a deep link. If you don't like how your client device handles links, re-configure it or, if that's not possible, use a different device. The problem here is on your end.
But that's not OP deep linking to an app - it's just a normal website URL. It's you having your user agent configured to launch an app for this particular domain. Nothing that OP can do about that.
I use Bluesky all the time and I have never installed the app on any platform. I suggest getting rid of it, if you're such a fan of the web.
> my understanding is that originally deep links required a special non-domain to be registered that was separate from a normal web url.
You might be thinking of protocol handlers, like oacon:// to open something in software installed to handle that (in this case, launching openarena to connect directly from a dpmaster mirror with such application links included). I don't think they were called deep links back then, just different protocols, like http in http://example.org is a protocol that your browser is configured to handle and ftp:// used to be as well
These are still in relatively common use today, but on mobile devices it has become the norm to hijack specific domain names or even a path (e.g. F-Droid will try to handle repositories on third-party domains for you by trying to hook¹ any URL that contains */fdroid/repo/* -- so far, this has always been useful to me, but I can see the flip side). This link hijacking is often a pain for me as anyone linking to any Google product will make my phone try to open some Play Services component, which is largely not functional. I can't get rid of the system component (e.g. replace it with microG) without installing a custom ROM, which I can't do without getting rid of half the device's special features (no point having this phone then), but I also don't want it pinging back to the mothership so... a pain it shall be
As for your problem, reset the app's settings and the next time you click one of these links it'll re-prompt you for which app should open them. It should do that any time there is (newly) more than one app that can handle a given URL
¹ https://github.com/f-droid/fdroidclient/blob/be028d71c2a25b9...
Is your device running iOS or iPadOS? This is default and non-configurable behavior. Even github.com will open in the GitHub app if you have it installed.
I understand the confusion (and frustration), but this is just a normal URL. The .app is just an ordinary TLD (but one of the newer ones.)
On iOS the app can ask to take over certain https urls. The web site also needs to have a special file to grant this access. Bluesky is choosing to do this to you. I used this for a login / reset flow about a decade ago, so it's been around for a while.
There also is the ability to register for a URL "scheme" (the bit that replaces "https"), which I believe is what you're thinking of and it does predate the https thing, but both have been around for a while. I'm guessing companies have just gotten more aggressive about using the https one.
Edit: and yes it is annoying, I've uninstalled the GitHub app because of this.
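For reference, the "special file" is apple-app-site-association, served from /.well-known/ on the site's domain. A minimal example of its classic shape (team and bundle IDs here are placeholders):

    {
      "applinks": {
        "apps": [],
        "details": [
          { "appID": "TEAMID.com.example.app", "paths": ["*"] }
        ]
      }
    }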
On Apple devices at least, this is a feature of Universal Links, which are generally more secure than deeplinks for various reasons [1]. Not sure you can disable it completely, but in some cases you can override it by long-pressing the link and choosing to open it in the browser.
[1] https://developer.apple.com/documentation/xcode/defining-a-c...
Remove the app then?
Having an app installed but logged out is such a rare case, and one that’s easily solved.
The nefarious thing is some people will tell you there’s nothing wrong with the link and gaslight you into thinking you’re crazy – because they don’t have the Bluesky app installed.
What is it that is supposed to be wrong with the link? .app is a regular TLD, the host name resolves to a regular IP address and there is a regular web server returning regular HTML.
On iOS and iPadOS, even github.com links will do this with the GitHub app installed.
We're complaining about a completely normal URL that links to a normal webpage. This is very, very silly.
What is the use of a GitHub app on iPhone or iPad? I guess I could see it being useful for reviewing pull requests and making comments if it's urgent (it's not) or if your main dev device is out of reach (it can wait)...
I wanted to have a widget of my commit graph; it's nice to see the little squares filled up. When I clicked a github link and my screen started bouncing and sliding around, I was very displeased.
There is nothing wrong with the link. The only thing wrong is your poor understanding of your user agent's behavior!
> and gaslight you
I have the Bluesky app installed and did not see a pop-up. It’s not “gaslighting”. It’s people having different experiences due to different OSes and settings.
It’s amazing how meaningless the word gaslighting has become.
You're saying the OP has configured their user agent to launch an app for this URL and somehow that's me gaslighting?
I imagine that's the joke they're making, yes
I don’t know why you’re getting dogpiled over this, it’s a terrible experience of creeping non-consensual computation.
I had the official bsky app installed, but I ended up with Graysky at some point and just forgot about it, and somewhere along the way my browser vendor (Apple) and the app vendor (bsky) decided that I would probably tolerate this horseshit. Reddit pulls the same shit but that’s extra special because it’s broken, so it goes to the App Store screen for Reddit and you just can’t load it at all without a laptop.
If you’re one of the people arguing this is cool you’re ugly and stupid like people who disagree with Linus.
Your device is misconfigured. That's not HN's fault, what has been posted is a completely ordinary link that would function in a perfectly ordinary way if you had configured your device properly.
This wouldn't even be CLOSED WONTFIX, there is literally nothing that anyone but you could do to fix this.
> non consensual computation
Gonna steal this one.
Because the problem is perfectly avoidable with very little technical understanding, and this is Hacker News.