We also know the optimal circuits if you want to compute two boolean functions of four variables at the same time: https://cp4space.hatsya.com/2020/06/30/4-input-2-output-bool....
Surprised not to see Karnaugh maps mentioned here, as a tool for humans to intuitively find these simplifications.
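If you want to see what that simplification looks like mechanically, here is a rough sketch using sympy's SOPform, which runs Quine–McCluskey, essentially the algorithmic version of the grouping you would do by eye on a K-map (the example function and minterms below are mine, purely for illustration):

```python
from sympy import symbols
from sympy.logic import SOPform

a, b, c = symbols('a b c')

# Minterms of the 3-input majority function: 011, 101, 110, 111.
minterms = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]

# Quine-McCluskey minimization: the same adjacent-cell grouping a K-map
# makes visual, done symbolically.
print(SOPform([a, b, c], minterms))  # (a & b) | (a & c) | (b & c)
```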
Could one do this directly with transistors or standard cells? Seems very useful for ASICs, particularly structured ASICs which are mapped from FPGA lookup tables of size 4-6.
This isn't quite as useful in practice as it seems: NOT isn't always free, you can almost always eliminate common subexpressions, and gates with more than two inputs are often cheaper than doing everything with two-input gates.
The standard Floyd–Warshall is fairly easily parallelizable. I wonder how fast you could solve this problem with today's GPUs, and whether a(6) might be attainable in some reasonable time.
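As a rough sketch of why it parallelizes well (NumPy as a stand-in for a GPU kernel; the function and names here are mine, not from the article): only the loop over k is inherently sequential, and for a fixed k every (i, j) relaxation is independent.

```python
import numpy as np

def floyd_warshall(dist):
    """All-pairs shortest paths. dist: (n, n) array, np.inf where there is no edge."""
    dist = dist.copy()
    n = dist.shape[0]
    for k in range(n):  # only this loop has to run in order
        # Every (i, j) update for this k is independent: one elementwise min
        # over the whole matrix, which is exactly what maps well to a GPU.
        dist = np.minimum(dist, dist[:, k, None] + dist[None, k, :])
    return dist
```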
The example parity function for 3 variables appears to be flipped. Instead of being true if the number of true inputs is odd, it's true if the number of true inputs is even.
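For reference, here is what the odd-parity function of three inputs should return (a quick check of mine, not the article's code):

```python
def parity3(a, b, c):
    # Odd parity: true iff an odd number of the inputs are true.
    return a ^ b ^ c

assert parity3(True, False, False) is True   # one true input: odd
assert parity3(True, True, False) is False   # two true inputs: even
assert parity3(True, True, True) is True     # three true inputs: odd
```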
How is Russ so f'ing cracked. The brain on this human. 99.9% of us will never touch his intelligence.
Using the * operator for AND is very non-standard. Unicode provides ¬ for negation, ∧ for conjunction and ∨ for disjunction. These are commonly used in CS literature, along with bar(s) over variables or expressions to denote negation, which are definitely a mixed bag for readability.
From what I’ve had exposure to, the conjunction, disjunction, and negation symbols are common if you’re discussing logic [1].
Boolean algebra then uses product, sum, and complement [2].
Both can express the same thing. In this case `*` is easier to type than `·`.
Isn't the AND operation often represented using multiplication notation (dot or star) because it is basically a boolean multiplication?
It's not so much that it is "boolean multiplication" (how would you define that? and with the usual digital representation of booleans, integer multiplication already applies) as that AND follows laws similar to multiplication's; in particular, AND distributes over OR in the same way multiplication distributes over addition. [Example: a * (b + c) <=> a * b + a * c] Because it follows similar rules, writing it with the familiar operators helps with some forms of intuition about patterns.
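A quick exhaustive check of that distributive law over the booleans (a throwaway snippet of mine):

```python
from itertools import product

# AND distributes over OR, mirroring a * (b + c) = a * b + a * c.
for a, b, c in product([False, True], repeat=3):
    assert (a and (b or c)) == ((a and b) or (a and c))
```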
It's somewhat common in set notation to use * and + for set intersection and set union, for very similar reasons. Some programming languages even use this in their type language (a product of two types is A * B and a sum is A + B).
Interestingly, this is in part why Category Theory exists: to describe the similarities between operators in mathematics, such as how * and ∧ resemble and contrast with each other. Category Theory gets a bad rap for being the origin of monads and fun phrases like "monads are a monoid in the category of endofunctors", but it also answers a few fun questions, such as why * and ∧ are so similar. (They are similar operations that act in different "categories".) Admittedly that's a very rough, lay gloss on it, but it's still an interesting perspective on what people talk about when they talk about Category Theory.
Do you really need to introduce category theory for that?
Seems like overkill; abstract algebra is sufficient to categorize both boolean logic and integer operations as having the common structure of a ring.
Of course you don't "need" to introduce category theory for that, which is why I saved it for fun at the end; I just think it is neat. It's also one of those bridges to "category theory is simpler than it sounds", which is also why I disagree with calling it "overkill" in general: that framing keeps category theory in the "too complex for real needs" box, which I think is the wrong box. Case in point:
> […] abstract algebra seems sufficient to categorize both boolean logic and integer operations as having the common structure of a ring.
I don't think Ring Theory is any easier than Category Theory to learn or teach; rather, I think Category Theory is a subset of some of the best parts of abstract algebra, especially Group Theory, boiled down to just the parts needed to describe (among other things) practical function-composition tools for computing.
I would normally interpret "Boolean multiplication" as multiplication over GF(2), where + would be XOR. This notation is fairly common when discussing things like cryptography or CRCs.
Thanks for your thorough explanation! I don’t know much about these things; I was just thinking about the similarities in the algebraic properties, especially with regard to the zero element: 0*1=0.
> digital representation of booleans implies that integer multiplication still applies
Yes. Multiplication of unsigned 1-bit integers is the same function as boolean AND.
boolean multiplication is well-defined: it is multiplication mod 2, which is exactly the AND operator.
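A tiny sanity check of both claims (again, my own snippet): over {0, 1}, multiplication mod 2 is AND and addition mod 2 is XOR.

```python
for a in (0, 1):
    for b in (0, 1):
        assert (a * b) % 2 == (a & b)  # multiplication mod 2 == AND
        assert (a + b) % 2 == (a ^ b)  # addition mod 2 == XOR (GF(2) addition)
```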
It is not so uncommon to see it represented by a dot. I guess a star is like a dot, but doesn’t require finding any weird keys. It isn’t ideal but it is obvious enough what they mean.
(From 2011)