Trying to reason about Postgres is something of an enigma when you're forced to do it; generally the only reason you have to as a programmer is that something went wrong, and then the mindset is a mix of nervousness and panic, followed by incredulity at some of the seemingly unintuitive behaviors. I suspect this might be true of any large, complex system at the edges.
Interesting! MVCC mechanics aside, it's also worth remembering that work_mem is only 4MB by default [0], so large intermediate results will likely spill to disk (e.g. external sorts for ORDER BY operations).
[0] https://www.postgresql.org/docs/current/runtime-config-resou...
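A quick way to see the spill in action, as a minimal sketch (the `spill_demo` table and its columns are made up for illustration):

```sql
-- Check the current setting (4MB is the shipped default).
SHOW work_mem;

-- Scratch table that produces a result set larger than work_mem.
CREATE TABLE spill_demo AS
SELECT g AS id, md5(g::text) AS payload
FROM generate_series(1, 1000000) AS g;

-- With the default 4MB work_mem the sort won't fit in memory; the plan
-- should report something like: Sort Method: external merge  Disk: ...kB
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM spill_demo ORDER BY payload;

-- Raise work_mem for this session and the same sort should be able to
-- stay in memory (Sort Method: quicksort  Memory: ...kB).
SET work_mem = '256MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM spill_demo ORDER BY payload;
```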
In Oracle, this happens because of delayed block cleanout: a later reader discovers that the transaction that touched a block has in fact committed, and it cleans the block out itself.
https://www.databasejournal.com/oracle/delayed-block-cleanou...
Things get even weirder when you use extensions. I remember being profoundly confused using Timescale 1 while doing a lot of concurrent writes on a hypertable with a foreign key (while also inserting into the other table): I would get transaction deadlocks even in scenarios where they shouldn't normally have been possible. That's how I found out that doing DML on a "hypertable" actually does DDL under the hood, with all of the associated problems that brings.
That’s confusing. What DDL did it do? Create new partitions?
Likely creating child tables for new chunks, which kicks in periodically (depending on your hypertable chunking policy). Used to hit these all the time; quite annoying.
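You can see those chunks with plain catalog queries. A rough sketch, assuming TimescaleDB is installed; the `conditions` hypertable and its columns are invented for illustration:

```sql
-- Hypothetical hypertable, roughly following the TimescaleDB docs.
CREATE TABLE conditions (
    time        timestamptz NOT NULL,
    device_id   int,
    temperature double precision
);
SELECT create_hypertable('conditions', 'time');

-- Each chunk is a real table created under the hood as rows arrive,
-- which is why "plain" INSERTs can end up doing DDL and taking
-- heavier locks than you'd expect.
INSERT INTO conditions
SELECT t, 1, random() * 30
FROM generate_series(now() - interval '30 days', now(), interval '1 hour') AS t;

-- Standard catalog query: list the inheritance children (the chunks).
SELECT c.relname
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
WHERE i.inhparent = 'conditions'::regclass;
```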
Great article! I learned about blocks/pages a long time ago when I needed to debug a performance issue, but not as deeply as this article goes. Will share it with my teammates; it'll be funny to see the looks on their faces :D
Similar things can also happen with file systems: ext4 mounted -o ro will let the driver do filesystem recovery even if userspace writes are prevented.
That seems like it violates the principle of least surprise.
Hmmm, yes and no. If I set / to mount read-only in some embedded Linux context, my intention is just that the contents of the disk shouldn't change because some program decided to write something somewhere; I would be quite surprised if a recoverable metadata bit flip caused the system to irrecoverably fail to boot just because the read-only flag also prevented fsck from fixing errors.
However if I have a faulty drive that I connect to my system to recover data from it and I don't want it to experience any more writes because I'm worried further writes may break it further, I would be quite surprised if 'mount -o ro' caused the driver to write to it.
Recovery and mounting should be separate operations. If the filesystem is not clean, it should not be allowed to mount at all.
You can tell the driver not to load the journal; it should (I haven't checked!) leave the recovery information untouched then. You also need this when you have a decade of version difference and get an error on mount: `mount -o ro,noload`
“Recovering” an otherwise error free journaled or logged filesystem is considered a normal operation. Unclean just doesn’t mean an error. That’s how this works and I don’t see very many interested in changing this behavior.
At the same time, you want to be able to read the files in the normal use case. Being able to read them (after recovery) only if the filesystem is mounted read-write seems counterintuitive. This is one of those cases where right or wrong depends on the use.
Do changes need to go on disk for that to work?
Also how you can end up with silly things like ro-but-i-really-mean-it-this-time flags
The forensics people I know don't worry about flags, and just use a write blocker for everything.
Yeah, and clone everything before even touching it, and then only ever work on the copy.
Haha
TLDR: it can be caused by hint bit updates, as well as page pruning - both can be kicked off by a select query, and will be counted as part of the query’s statistics.
However, the article as a whole is both a much wider and deeper dive. I recommend giving it a read in full!
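For anyone who just wants to see the hint-bit effect, a minimal sketch (the `hint_demo` table is made up); the first read after a bulk load should report dirtied buffers even though it's "just a SELECT":

```sql
-- Freshly loaded rows don't have their hint bits set yet.
CREATE TABLE hint_demo AS
SELECT g AS id, md5(g::text) AS payload
FROM generate_series(1, 1000000) AS g;

-- First read after the load: the SELECT sets hint bits as a side effect,
-- so the BUFFERS output should show dirtied (and possibly written) pages:
--   Buffers: shared hit=... read=... dirtied=...
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM hint_demo;

-- Second read: the hint bits are already set, so nothing gets dirtied.
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM hint_demo;
```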
Thanks, a TLDR should be mandatory for articles of this length :)
As articles (especially about postgres) go, this isn't that long, but you can always get your own AI summary if it's too long for you.
Firefox reader mode (necessary to read this, as the font size and color choices are poor) estimated this at a 30+ minute read. It would be a courtesy to readers for authors to provide a summary. That way people can decide if they want to spend time reading further. This is why academic papers have an abstract up front.