After a 2-year Clojure stint I find it very hard to explain the clarity that comes with immutability to programmers used to triggering effects with a mutation.
I think it may be one of those things you have to see in order to understand.
I think the explanation is: When you mutate variables it implicitly creates an ordering dependency - later uses of the variable rely on previous mutations. However, this is an implicit dependency that isn't modeled by the language so reordering won't cause any errors.
With a very basic concrete example:
x = 7
x = x + 3
x = x / 2
Vs
x = 7
x1 = x + 3
x2 = x1 / 2
Reordering the first will have no error, but you'll get the wrong result. The second will produce an error if you try to reorder the statements.
Another way to look at it is that in the first example, the 3rd calculation doesn't have "x" as a dependency but rather "x in the state where addition has already been completed" (i.e. it's 3 different x's that all share the same name). Doing single assignment is just making this explicit.
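To make that concrete, here is a minimal Rust sketch of the same example (just an illustration); because each intermediate value gets its own name, reordering the middle lines is a compile error rather than a silently wrong result:

    fn main() {
        let x = 7;
        let x1 = x + 3;  // depends on x
        let x2 = x1 / 2; // depends on x1
        // Swapping the two lines above fails to compile:
        // "cannot find value `x1` in this scope"
        assert_eq!(x2, 5);
    }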
The immutable approach doesn't conflate the concepts of place, time, and abstract identity, like in-place mutation does.
In mutating models, typically abstract (mathematical / conceptual) objects are modeled as memory locations. Which means that object identity implies pointer identity. But that's a problem when different versions of the same object need to be maintained.
It's much easier when we represent object identity by something other than pointer identity, such as (string) names or 32-bit integer keys. Such a representation allows us to materialize different versions (or even the same version) of an object in multiple places at the same time. This allows us to concurrently read or write different versions of the same abstract object. It's also an enabler for serialization/deserialization. Not requiring an object to be materialized in one particular place allows saving objects to disk or sending them around.
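A rough sketch of that idea, with a hypothetical Account type whose identity is an integer key plus a version number rather than a pointer:

    use std::collections::HashMap;

    // Hypothetical domain object; its identity is NOT its address in memory.
    #[derive(Clone, Debug)]
    struct Account {
        owner: String,
        balance: i64,
    }

    fn main() {
        // Identity = key 42; several versions of that one abstract object
        // can be materialized at the same time.
        let mut versions: HashMap<(u32, u64), Account> = HashMap::new();
        versions.insert((42, 1), Account { owner: "alice".into(), balance: 100 });
        versions.insert((42, 2), Account { owner: "alice".into(), balance: 250 });

        // Readers can look at either version concurrently, serialize it, etc.
        println!("{:?}", versions.get(&(42, 1)));
        println!("{:?}", versions.get(&(42, 2)));
    }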
The hardware that these programs are running on stores objects in linear memory, so it makes sense to treat them as such.
DRAM is linear memory. Caches, less so. Register files really aren't. CPUs spend rather a lot of transistors and power to reconcile the reality of how they manipulate data within the core against the external model of RAM in a flat linear address space.
I agree that the explicit timeline you get with immutability is certainly helpful, but I also think it's much easier to understand the total state of a program. When an imperative program runs you almost always have to reproduce a bug in order to understand the state that caused it; fairly often in Clojure you can actually deduce what's happening.
That's right - immutability enables equational reasoning, where it becomes possible to actually reason through a program just by inspection and evaluation in one's head. The only context one needs to load is contained within the function itself - not the entire trace, where anything along the thread of execution could factor into your function's output, since anybody can just mutate anybody else's memory willy-nilly.
People jump ahead to using AI to improve their reading comprehension of source code, when there are still basic practices of style, writing, and composition that for some reason are not yet widespread throughout the industry, despite having a long-standing tradition in practice and pretty firm grounding in academia.
In theory it’s certainly right that imperative programs are harder to reason about. In practice programmers tend to avoid writing the kind of program where anything can happen.
> In practice programmers tend to avoid writing the kind of program where anything can happen.
My faith in this presumption dwindles every year. I expect AI to only exacerbate the problem.
Since we are on the topic of Carmack, "everything that is syntactically legal that the compiler will accept will eventually wind up in your codebase." [0]
Yet even Rust allows you to shadow a variable with another one of the same name. Yes, they are two different variables, but for a human reader they have the same name.
I think that Rust made this decision because the x1, x2, x3 style of code is really a pain in the ass to write.
In idiomatic Rust you usually shadow variables with another one of the same name when the type is the only thing meaningfully changing. For example
    let x = "29";
    let x = x.parse::<i32>();
    let x = x.unwrap();
Once you actually "change" the value, for example by dividing by 3, I would consider it unidiomatic to shadow under the same name. Either mark it as mutable or, preferably, make a new variable with a name that represents what the new value now expresses.
In a Clojure binding this is perfectly idiomatic, but bindings sharing a symbol are not shadowed; they are immutably replaced. Mutability is certainly available, but it is explicit. And the type dynamism of Clojure is a breath of fresh air for many applications, despite the evangelism of junior developers steeped in laboratory Haskell projects at university. That being said, I have a Clojure project where dynamic typing is thoroughly exploited at a high level, allowing flexible use of Clojure's rational math mixed with floating point (or one or the other entirely), while for optimization deeper within the architecture a Rust implementation via JVM JNI is used for native performance, ensuring homogeneous unboxed types are computed to make the overall computation tractable. Have your cake and eat it too. Types have their virtues, but not without their excesses.
Another idiomatic pattern is using shadowing to transform something using itself as input:
    let x = Foo::new().stuff()?;
    let x = Bar::new(x).other_stuff()?;
So with the math example and what the poster above said about type changing, most rust code I write is something like:
let x: plain_int = 7;
let x: added_int = add(x, 3);
let x: divided_int = divide(x, 2);
where the function signatures would be fn add(foo: plain_int, int); fn divide(bar: added_int, int);
and this can't be reordered without triggering a compiler error.
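A runnable sketch of that approach, with made-up newtype wrappers standing in for plain_int / added_int / divided_int:

    // Newtype wrappers so each step of the computation has its own type.
    struct PlainInt(i32);
    struct AddedInt(i32);
    struct DividedInt(i32);

    fn add(x: PlainInt, n: i32) -> AddedInt {
        AddedInt(x.0 + n)
    }

    fn divide(x: AddedInt, n: i32) -> DividedInt {
        DividedInt(x.0 / n)
    }

    fn main() {
        let x = PlainInt(7);
        let x = add(x, 3);
        let x = divide(x, 2);
        // Swapping the add/divide lines is a type error: divide expects an AddedInt.
        assert_eq!(x.0, 5);
    }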
Or they got inspired by how this is done in OCaml, which was the host language for the earliest versions of Rust. Actually, this is a behaviour found in many FP languages. Regarding OCaml, there was even an experimental version of the REPL where one could access the different variables carrying the same name using an ad-hoc syntax.
I do find shadowing useful. If you're writing really long code blocks in which it becomes an issue, you are probably doing too much in one place.
Sometimes keeping a fixed shape for the variable context across the computation can make it easier to reason about invariants, though.
Like, if you have a constraint is_even(x) that's really easy to check in your head with some informal Floyd-Hoare logic.
And it scales to extracting code into helper functions and multiple variables. If you must track which set of variables form one context x1+y1, x2+y2, etc I find it much harder to check the invariants in my head.
These 'fixed state shape' situations are where I'd grab a state monad in Haskell and start thinking top-down in terms of actions+invariants.
What's the difference between immutable and constant, which has been in use far longer? Why are you calling it mutable?
"Constant" is ambiguous. Depending on who you ask, it can mean either:
1. A property known at compile time.
2. A property that can't change after being initially computed.
Many of the benefits of immutability accrue to properties whose values are only known at runtime but which are still known not to change after that point.
"Constant" implies a larger context.
As in - it's not very "constant" if you keep re-making it in your loop, right?
Whereas "immutable" throws away that extra context and means "whatever variable you have, for however long you have it, it's unchangeable."
> As in - it's not very "constant" if you keep re-making it in your loop, right?
You can't change a constant though.
He’s implying that the variable is being defined within the loop. So, constant, but repeatedly redefined.
That's the opposite of what any reasonable engineer means by "constant".
That’s the point, you’re just haggling about scopes now. All the way from being new per program invocation to new per loop.
Immutability doesn’t have this connotation.
How? I think the same argument applies: If it's changing from loop to loop, seems mutable to me.
I think you’re after something other than immutability then.
You’re allowed to rebind a var defined within a loop, it doesn’t mean that you can’t hang on to the old value if you need to.
With mutability, you actively can’t hang on to the old value, it’ll change under your feet.
Maybe it makes more sense if you think about it like tail recursion: you call a function and do some calculations, and then you call the same function again, but with new args.
This is allowed, and not the same as hammering a variable in place.
No? It has a lifetime of one loop duration, and is constant during that duration. Seems perfectly fine to me.
In plenty of languages, there's not really a difference. In Rust, there is a difference between a `let var_name = 10;` and `const var_name: u64 = 10;` in that the latter must have its value known at compile-time (it's a true constant).
> Why are you calling it mutable?
Mostly just convention. Rust has immutable by default and you have to mark variables specifically with `mut` (so `let mut var_name = 10;`). Other languages distinguish between variables and values, so var and val, or something like that. Or they might do var and const (JS does this I think) to be more distinct.
Immutable and constant are the same. rendaw didn't use the word mutable. One reason someone might use the word "mutable" is that it's a succinct way of expressing an idea. Alternative ways of expressing the same idea are longer words (changeable, non-constant).
In languages like JavaScript, immutable and constant may be theoretically the same thing, but in practice "const" means a variable cannot be reassigned, while "immutable" means a value cannot be mutated in place.
They are very, very different semantically, because const is always local. Declaring something const has no effect on what happens with the value bound to a const variable anywhere else in the program. Whereas, immutability is a global property: An immutable array, for example, can be passed around and it will always be immutable.
JS has always had 'freeze' as a kind of runtime immutability, and tooling like TS can provide readonly types that give immutability guarantees at compile time.
Arrays are a very notable example here. You can append to a const array in JS and TS, even in the same scope it was declared const.
That’s always felt very odd to me.
That's because in many languages there is a difference between a stored reference being immutable and the contents of the thing the reference points to being immutable.
But we already had the word variable for values that can change. On both counts it seems redundant.
They aren't the same for object references. The reference can't be changed, but the properties can.
It would be nicer if you gave x1 and x2 meaningful names.
What would those names be in this example?
In a real application meaningful names are nearly always possible, eg:
    const pi = 3.1415926
    const twoPi = 2 * pi
    const circumference = twoPi * radius
I had a similar experience with Scheme. I could tell people whatever I wanted; they wouldn't really realize how much cleaner and easier to test things could be if we just used functions instead of mutating things around. And since I was the only one who had done projects in an FP language, and they had only used non-FP languages like Java, Python, JavaScript and TypeScript before, they would continue to write things based on needless mutation. The issue was also that in Python it can be hard to write functional-style code in a readable way. Even JS seems to lend itself better to that. What's more, one will probably find oneself hard pressed to find the functional data structures one might want to use, and one needs to work around recursion due to the limitations of those languages.
I think it's simply the difference between the curious mind, who explores stuff like Clojure off the job (or is very lucky to get a Clojure job) and the 9 to 5 worker, who doesn't know any better and has never experienced writing a FP codebase.
JS is much more of a functional language than it was given credit for a long time. It had first-class functions and closures from day one if I'm not mistaken.
I would say it's more than immutability - it's the "feel" of working with values. I've worked with at least 6 languages professionally and likely more for personal projects over last 20 years. I can say that Clojure was the most impactful language I learned.
I tried to learn Haskell before but I just got bogged down in the type system and formalization - that never sat right with me (ironically, in retrospect monads are a trivial concept that the community obfuscated to oblivion; "yet another monad tutorial" was a meme at the time).
I used F# as well but it is too multi paradigm and pragmatic, I literally wrote C# in F# syntax when I hit a wall and I didn't learn as much about FP when I played with it.
Clojure had the lisp weirdness to get over, but its homoiconicity combined with the powerful semantics of the core data structures made it the first time the concept of working with values vs objects 'clicked' for me. I would still never use it professionally, but I would recommend it to everyone who does not have a background in FP and/or lisp experience.
I have dreams of being at a “Clojure shop” but I fear daily professional use might dull my love for the language. Having to realize that not everyone on my team wants to learn lisp (or FP) just to work with my code (something I find amazing and would love to be paid to do) was hard.
On a positive note I have taken those lessons from clojure (using values, just use maps, Rich’s simplicity, functional programming without excessive type system abstraction, etc) and applied them to the rest of my programming when I can and I think it makes my code much better.
The way I like to think about it is that with immutable data as the default and pure functions, you get to treat the pure functions as black boxes. You don't need to know what's going on inside, and the function doesn't need to know what's going on in the outside world. The data shape becomes the contract.
As such, localized context, everywhere, is perhaps the best way to explain it from the point of view of a mutable world. At no point do you ever need to know about the state of the entire program, you just need to know the data and the function. I don't need the entire program up and running in order to test or debug this function. I just need the data that was sent in, which CANNOT be changed by any other part of the program.
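For instance, a pure function over plain data can be exercised with nothing but that data; a small sketch with made-up Order/total names:

    // The data shape is the whole contract between caller and function.
    struct Order {
        prices_cents: Vec<u32>,
        discount_pct: u32,
    }

    // Pure: the output depends only on the input; no outside state is read or written.
    fn order_total(order: &Order) -> u32 {
        let sum: u32 = order.prices_cents.iter().sum();
        sum * (100 - order.discount_pct) / 100
    }

    fn main() {
        // No "rest of the program" needs to be running to exercise this.
        let order = Order { prices_cents: vec![1000, 250], discount_pct: 10 };
        assert_eq!(order_total(&order), 1125);
    }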
Sure modularity, encapsulation etc are great tools for making components understandable and maintainable.
However, don't you still need to understand the entire program as ultimately that's what you are trying to build.
And if the state of the entire program doesn't change - then nothing has happened. I.e. there still has to be mutable state somewhere - so where is it moved to?
In functional programs, you very explicitly _do not_ need to understand an entire program. You just need to know that a function does a thing. When you're implementing a function-- sure, you need to know what it does. But you're defining it in such a way that the user should not know _how_ it works, only _what_ it does. This is a major distinction between programs written with mutable state and those written without. The latter is _much_ easier to think about.
I often hear from programmers that "oh, functional programming must be hard." It's actually the opposite. Imperative programming is hard. I choose to be a functional programmer because I am dumb, and the language gives me superpowers.
I think you missed the point. I understand that if you writing a simple function with an expected interface/behaviour then that's all you need to understand. Note this isn't something unique to a functional approach.
However, somebody needs to know how the entire program works - so my question was where does that application state live in a purely functional world of immutables?
Does it disappear into the call stack?
It didn't disappear; there's just less of it. Only the stateful things need to remain stateful. Everything else becomes single-use.
Declaring something as a constant gives you license to only need to understand it once. You don't have to trace through the rest of the code finding out new ways it was reassigned. This frees up your mind to move on to the next thing.
A pretty basic example: I write a lot of data pipelines in Julia. Most of the functions don't mutate their arguments, they receive some data and return some data. There are a handful of exceptions, e.g. the functions that write data to a db or file somewhere, or a few performance-sensitive functions that mutate their inputs to avoid allocations. These functions are clearly marked.
That means that 90% of the time, there's a big class of behavior I just don't need to look for when reading/debugging code. And if it's a bug related to state, I can pretty quickly zoom in on a few possible places where it might have happened.
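That split isn't Julia-specific; a rough sketch of the same convention in Rust terms (hypothetical Record type), where the few mutating functions announce themselves in their signatures:

    #[derive(Clone)]
    struct Record {
        value: f64,
    }

    // The common case: data in, new data out; the input cannot be changed.
    fn normalize(records: &[Record], max: f64) -> Vec<Record> {
        records.iter().map(|r| Record { value: r.value / max }).collect()
    }

    // The clearly marked exception: mutates in place to avoid allocations.
    fn scale_in_place(records: &mut [Record], factor: f64) {
        for r in records.iter_mut() {
            r.value *= factor;
        }
    }

    fn main() {
        let data = vec![Record { value: 2.0 }, Record { value: 4.0 }];
        let mut normalized = normalize(&data, 4.0);
        scale_in_place(&mut normalized, 100.0);
        // `data` is untouched; only `normalized` was mutated, and only where we said so.
        assert_eq!(data[0].value, 2.0);
    }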
> However, don't you still need to understand the entire program as ultimately that's what you are trying to build.
Of course not, that's impossible. Modern programs are way too large to keep in your head and reason about.
So you need to be able to isolate certain parts of the program and just reason about those pieces while you debug or modify the code.
Once you identify the part of the program that needs to change, you don't have to worry about all the other parts of the program while you're making that change as long as you keep the contracts of all the functions in place.
> Once you identify the part of the program that needs to change,
And how do you do that without understanding how the program works at a high level?
I understand the value of clean interfaces and encapsulation - that's not unique to functional approaches - I'm just wondering in the world of pure immutability where the application state goes.
What happens if the change you need to make is at a level higher than a single function?
> However, don't you still need to understand the entire program as ultimately that's what you are trying to build.
Depends on what I'm trying to do. If what I'm trying to handle is local to the code, then possibly not. If the issue is what's going into the function, or what the return value is doing, then I likely do need that wider context.
What pure-functional functions do allow is certainty the only things that can change the behaviour of that function are the inputs to that function.
> there still has to be mutable state somewhere - so where is it moved to?
This is one way of thinking about it: https://news.ycombinator.com/item?id=45701901 (Simplify your code: Functional core, imperative shell)
It lets you refine when and where it happens more than other methods of restricting state change, such as in imperative OOP.
It's moved toward the edges of your program. In a lot of functional languages, places that can perform these effects are marked explicitly.
For example, in Haskell, any function that can perform IO has "IO" in the return type, so the "printLine" equivalent is "putStrLn :: String -> IO ()". (I'm simplifying a bit here). The result is that you know that a function like "getUserComments :: User -> [CommentId]" is only going to do what it says on the tin - it won't go fetch data from a database, print anything to a log, spawn new threads, etc.
It gives similar organizational/clarity benefits as something like "hexagonal architecture," or a capabilities system. By limiting the scope of what it's possible for a given unit of code to do, it's faster to understand the system and you can iterate more confidently with code you can trust.
I think the advantage is often oversold, and people often miss how things actually exist on a continuum; just plainly opposing mutable and immutable sidesteps a lot of complexity.
For example, it's endlessly amusing to me to see all the efforts the Haskell community makes to basically reinvent mutability in a way which is somehow palatable to their type system. Sometimes they fail to even realise that that's what they are doing.
In the end, the goal is always the same: better control and guarantees about the impact of side effects with minimum fuss. Carmack's approach here is sensible. You want practices which make things easy to debug and reason about while maintaining flexibility where it makes sense, like iterative calculations.
> For example, it's endlessly amusing to me to see all the efforts the Haskell community makes to basically reinvent mutability in a way which is somehow palatable to their type system.
That's because Haskell is predominantly a research language, originally intended for experimenting with new programming language ideas.
It should not be surprising that people use it to come up with or iterate on existing features.
If you read through the Big Red Book¹ or its counterpart for Kotlin², it's quite explicit about the goals with these techniques for managing effects, and goes over rewriting imperative code to manage state in a "pure" way.
I think the authors are quite aware of the relationship between these techniques and mutable state! I imagine it's similar for other canonical functional programming texts.
Besides the "pure" functional languages like Haskell, there are languages that are sort of immutability-first (and support sophisticated effects libraries), or at least have good immutable collections libraries in the stdlib, but are flexible about mutation as well, so you can pick your poison: Scala, Clojure, Rust, Nim (and probably lots of others).
All of these go further and are more comfortable than just throwing `const` or `.freeze` around in languages that weren't designed with this style in mind. If you haven't tried them, you should! They're really pleasant to work with.
----
1: https://www.manning.com/books/functional-programming-in-scal...
2: https://www.manning.com/books/functional-programming-in-kotl...
> If you read through the Big Red Book
This is a thoughtful response, but I can't help but chuckle at a response that starts with "just read this book!"
> Sometimes they fail to even realise that that's what they are doing.
Because that’s not what they’re doing. They’re isolating state in a systemic, predictable way.
Lenses are mutation by another name. You are basically recreating state on top of an immutable system. Sure, it's all immutable underneath, but conceptually it doesn't really change anything. That's what makes it hilarious.
In the end, the world is stateful and even the purest abstractions have to hit the road at some point. But the authors of Haskell were fully aware of that. The monadic type system was conceived as a way to easily track side effects after all, not banish them.
But there isn’t anything hilarious about that.
It’s a clear-minded and deliberate approach to reconciling principle with pragmatic utility. We can debate whether it’s the best approach, but it isn’t like… logically inconsistent, surprising, or lacking in self awareness.
I guess I'm not that good a programmer, because I don't really understand why variables that can't be varied are useful, or why you'd use that.
How do you write code that actually works?
The concept is actually pretty simple: instead of changing existing values, you create new values.
The classic example is a list or array. You don't add a value to an existing list. You create a new list which consists of the old list plus the new value. [1]
This is a subtle but important difference. It means any part of your program with a reference to the original list will not have it change unexpectedly. This eliminates a large class of subtle bugs you no longer have to worry about.
[1] Whether the new list has completely new copy of the existing data, or references it from the old list, is an important optimization detail, but either way the guarantee is the same. It's important to get these optimizations right to make the efficiency of the language practical, but while using the data structure you don't have to worry about those details.
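A toy sketch of that guarantee (nothing like a real persistent collection implementation, just the simplest possible structural sharing): the "new" list reuses the old cells, and the old list is never touched.

    use std::rc::Rc;

    // A minimal persistent (immutable) singly linked list.
    enum List {
        Nil,
        Cons(i32, Rc<List>),
    }

    // "Adding" allocates one new cell and shares the rest of the old list.
    fn prepend(list: Rc<List>, value: i32) -> Rc<List> {
        Rc::new(List::Cons(value, list))
    }

    fn to_vec(list: &List) -> Vec<i32> {
        match list {
            List::Nil => Vec::new(),
            List::Cons(v, rest) => {
                let mut out = vec![*v];
                out.extend(to_vec(rest));
                out
            }
        }
    }

    fn main() {
        let old = prepend(Rc::new(List::Nil), 1); // [1]
        let new = prepend(Rc::clone(&old), 2);    // [2, 1], shares the [1] cell
        assert_eq!(to_vec(&old), vec![1]);        // the original is unchanged
        assert_eq!(to_vec(&new), vec![2, 1]);
    }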
> It means any part of your program with a reference to the original list will not have it change unexpectedly.
I don't get why that would be useful. The old array of floats is incorrect. Nothing should be using it.
That's the bit I don't really understand. If I have a list and I do something to it that gives me another updated list, why would I ever want anything to have the old incorrect list?
State exists in time, a variable is usually valid at the point it's created but it might not be valid in the future. Thus if part of your program accesses a variable expecting it to be from a certain point in time but it's actually from another point in time (was mutated) that can cause issues.
If you need new values you just make new things.
If you want to do an operation on fooA, you don't mutate fooA. You call fooB = MyFunc(fooA) and use fooB.
The nice thing here is you can pass around pointers to fooA and never worry that anything is going to change it underneath you.
You don't need to protect private variables because your internal workings cannot be mutated. Other code can copy it but not disrupt it.
> If you want to do an operation on fooA, you don't mutate fooA. You call fooB = MyFunc(fooA) and use fooB.
This is the bit I don't get.
Why would I do that? I will never want a fooA and a fooB. I can't see any circumstances where having a correct fooB and an incorrect fooA kicking around would be useful.
As Carmack points out, naming the intermediate values aids in debugging. It also helps you write code, as you can give a name to every mutation.
But also keep in mind that correct and incorrect is not binary. You might want to pass a fooA to another class that does not want the fooB mutation.
If you just have foo, you end up with situations where a copy should have happened but didn't and then you get unwanted changes.
> If you need new values you just make new things.
> If you want to do an operation on fooA, you don't mutate fooA. You call fooB = MyFunc(fooA) and use fooB.
The beautiful thing about this is you can stop naming things generically, and can start naming them specifically what they are. Comprehension goes through the roof.
It forces you to consider when, where and why a change occurs and can help reason later about changes. Thread safety is a big plus.
Okay, so for example I might set something like "this bunch of parameters" immutable, but "this 16kB or so of floats" are just ordinary variables which change all the time?
Or then would the block of floats be "immutable but not from this bit"? So the code that processes a block of samples can write to it, the code that fills the sample buffer can write to it, but nothing else should?
Sounds like you have a data structure like `Array<Float>`. The immutable approach has methods on Array like:
   Array<Float> append(Float value);
   Array<Float> replace(int index, Float value);
The methods don't mutate the array, they return a new array with the change.
The trick is: How do you make this fast without copying a whole array?
Clojure includes a variety of collection classes that "magically" make these operations fast, for a variety of data types (lists, sets, maps, queues, etc). Also on the JVM there's Vavr; if you dig around you might find equivalents for other platforms.
No it won't be quite as fast as mutating a raw buffer, but it's usually plenty fast enough and you can always special-case performance sensitive spots.
Even if you never write a line of production Clojure, it's worth experimenting with just to get into the mindset. I don't use it, but I apply the principles I learned from Clojure in all the other languages I do use.
> The methods don't mutate the array, they return a new array with the change.
But then I need to update a bunch of stuff to point to the new array, and I've still got the old incorrect array hanging around taking up space.
This just sounds like a great way to introduce bugs.
It ends up being quite the opposite - many, many bugs come from unexpected side effects of mutation. You pass that array to a function and it turns out 10 layers deeper in the call stack, in code written by somebody else, some function decided to mutate the array.
Immutability gives you solid contracts. A function takes X as input and returns Y as output. This is predictable, testable, and thread safe by default.
If you have a bunch of stuff pointing at an object and all that stuff needs to change when the inner object changes, then you "raise up" the immutability to a higher level.
    Universe nextStateOfTheUniverse = oldUniverse.modifyItSomehow();
Old states don't hang around if you don't keep references to them. They get garbage collected.
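A small sketch of that "raise it up" shape (Universe and its fields are placeholders): one visible mutable binding at the top, and a pure step function that returns the next state.

    struct Universe {
        tick: u64,
        score: u32,
    }

    // Pure step: takes the old state, returns the next one; nothing is edited in place.
    fn next_state(u: &Universe) -> Universe {
        Universe {
            tick: u.tick + 1,
            score: u.score + 10,
        }
    }

    fn main() {
        // The single `mut` binding here is the one place state visibly changes.
        let mut universe = Universe { tick: 0, score: 0 };
        for _ in 0..3 {
            // The previous state is dropped here as soon as nothing refers to it.
            universe = next_state(&universe);
        }
        println!("tick={} score={}", universe.tick, universe.score);
    }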
Clojure also makes it very easy; it'd require too much discipline to do such a thing in Python. Even Carmack, who I think still does Python mostly by himself rather than with a team, is having issues there.
> it'd require too much discipline to do such a thing in Python
Is Python that different from JavaScript? Because it's easy in JavaScript. Just stop typing var and let, and start typing const. When that causes a problem, figure out how to deal with it. If all else fails: "Dear AI, how can I do this thing while continuing to use const? I can't figure it out."
I agree that Python is not too different and in general I treat my Python variables as const. One thing, however, where I resort to mutating variables more often than I'd like is when building lists & dictionaries. Lambdas in Python have horrible DX (no multi-line, no type annotations, bad type checker support even in obvious cases), which is why the functional approach to build your list, using map() and filter() is much more cumbersome than in JS. As a result, whenever a list comprehension becomes too long, you end up building your list the old-fashioned way, using a for loop and the_list.append().
Carmack is talking about variable reassignment here, which Clojure will happily let you mutate.
For example:
  (let [result {:a 1}
        result (assoc result :b 2)]
    ...)
clj-kondo has a :shadowed-var rule, but it will only find cases where you shadow a top-level var (not the case in my example).
That's not mutation though.
The `assoc` on the second binding is returning a new object; you're just shadowing the previous binding name.
This is different than mutation, because if you were to introduce an intermediate binding here, or break this into two `let`s, you could be holding references to both objects {:a 1} and {:a 1 :b 2} at any time in a consistent way - including in a future/promise dereferenced later.
It's more nuanced, because the shadowing is block-local, so when the lexical scope exits the prior bindings are restored.
I think in practice this is the ideal middle ground of convenience (putting version numbers at the end of variables being annoying), but retaining mostly sane semantics and reuse of prior intermediate results.
The flash of enlightenment I had when I understood the incredible power the rules of functional programming give you as a coder is probably the biggest one I've had in my career so far. Idempotence, immutability and statelessness on their own let you build a thing once in a disciplined way and then use it all willy-nilly anywhere you want, without having to think about anything other than "things go into process, other things come out", and it's so nice.
salutes from a WestLondonCoder
I try to keep deeper mutation to where it belongs, but I'll admit to shadowing variables pretty often.
If I have a `result` and I need to post-process it, I'm generally much happier doing `result = result.process()` rather than having something like `preresult`. Works nicely in cases where you end up moving it into a condition, or commenting it out to test an assumption while developing. If there's an obvious name for the intermediate result, I'll give it that, but I'm not over here naming things `result_without_processing`. You can read the code.
You're using really generic terms which I have to think is mostly because you're talking about it in the abstract. In most scenarios I find there are obvious non-generic names I can use for each step of a calculation.
I disagree you'd find "obvious" non-generic names easily. After all, "naming" is one of the hardest things in computer science.
I mean, I use `result` in a function named `generate` within a class `JSON < Generator`. Stuff like this is pretty common.
if you're already committing to generic names, what's wrong with a name like `processed_result`?
In the flow he describes you end up with processed_processed_processed_result.
Java mentioned!
AbstractFactoryResultFactoryProcessedResultProcessedResultProcessorBeanFactory
That name is kind of redundant, since `result` implies `processed` in the first place.
I think what they're getting at is that they sometimes use composition of functions in places where other people might call the underlying functions as one procedure and have intermediate results.
At the end of the day, you're all saying different ways of keeping track of the intermediate results. Composition just has you drop the intermediate results when they're no longer relevant. And you can decompose if you want the intermediates.
> Stuff like this is pretty common.
Common != Good
Yes, but there are often FP tricks and conveniences that make this unnecessary.
Like chaining or composing function calls.
result = x |> foo |> bar |> baz
(-> x foo bar baz)
Or map and reduce for iterating over collections.
Etc.
Yea, very true. Not every language makes this nice though.
> result.process()
What result? What process?
...says every person who has to read your code later.
result.process()
That doesn’t make logical sense. You already have a result. It shouldn't need processing to be a result.
It also doesn't make sense for `process()` to be an attribute of `result`. Why would you instantiate a class and call it result‽
> Why would you instantiate a class and call it result‽
Are you suggesting that the results of calculations should always be some sort of primitive value? It's not clear what you're getting hung up on here.
A more common example for me at work is getting a response from a URL. Then you gotta process it further, like response.json() or response.header or response.text etc., and then again select the necessary array index or doc value from it. Giving a name like pre_result or result_json etc. would just become cumbersome.
I would never do `response = response.json()`. I use it when it's effectively the same type, but with further processing which may be optional.
Depends on how clear it is.
I usually write code to help local debuggability (which seems rare). For example, this allows one to trivially set a conditional breakpoint and look into the full response:
    response = get_response()
    response = response.json()
and I think is just as clear as this:
    response = get_response().json()
I completely agree with the assertion and the benefits that ensue, but my attention is always snagged by the nomenclature.
I know there are alternate names available to us, but even in the context of this very conversation (and headline), the thing is being called a "variable."
What is a "variable" if not something that varies?
Variables are called variables because their values can vary between one execution of the code and the next. This is no different for immutable variables. A non-variable, aka a constant, would be something that has the same value in all executions.
Example:
  function circumference(radius)
      return 2 * PI * radius
It doesn’t have to be a function parameter. If you read external input into a variable, or assign to it the result of calling a non-pure function, or of calling even a pure function but passing non-constant expressions as arguments to it, then the resulting value will in general also vary between executions of that code.
Note how the term “variable” is used for placeholders in mathematical formulas, despite no mutability going on there. Computer science adopted that term from math.
In the cases we're interested in here the variable does vary, what it doesn't do is mutate.
Suppose I have a function which sums up all the prices of products in a cart, the total so far will frequently mutate, that's fine. In Rust we need to mark this variable "mut" because it will be mutated as each product's price is added.
After calculating this total, we also add $10 shipping charge. That's a constant, we're (for this piece of code) always saying $10. That's not a variable it's a constant. In Rust we'd use `const` for this but in C you need to use the C pre-processor language instead to make constants, which is kinda wild.
However for each time this function runs we do also need to get the customer ID. The customer ID will vary each time this function runs, as different customers check out their purchases, but it does not mutate during function execution like that total earlier, in Rust these variables don't need an annotation, this is the default. In C you'd ideally want to label these "const" which is the confusing name C gives to immutable variables.
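Roughly those three cases in Rust syntax, with made-up names:

    const SHIPPING_CENTS: u32 = 1000; // a true constant: the same in every run

    // `customer_id` and `prices` are immutable variables: they vary between calls,
    // but never change during one execution of the function.
    fn checkout_total(customer_id: u64, prices: &[u32]) -> (u64, u32) {
        // `total` is explicitly `mut`: it is mutated as each price is folded in.
        let mut total: u32 = 0;
        for p in prices {
            total += p;
        }
        (customer_id, total + SHIPPING_CENTS)
    }

    fn main() {
        println!("{:?}", checkout_total(42, &[250, 499]));
    }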
Even if the term 'variable' has roots in math where it is acceptable that it might not mutate, I think for clarity, the naming should be different. It's uneasy to think about something that can vary but not mutate. More clear names can be found.
> In the cases we're interested in here the variable does vary, what it doesn't do is mutate.
Those are synonyms, and this amounts to a retcon. The computer science term "variable" comes directly from standard mathematical function notation, where a variable reflects a quantity being related by the function to other variables. It absolutely is expected to "change", if not across "time" then across the domain of the function being expressed. Computers are discrete devices, and a variable that "varies" across its domain inherently implies that it's going to be computed more than once. The sense Carmack is using, where it is not recomputed and just amounts to a shorthand for a longer expression, is a poor fit.
I do think this is sort of a wart in terminology, and the upthread post is basically right that we've been using this wrong for years.
If I ever decide to inflict a static language on the masses, the declaration keywords will be "def" (to define a constant expression) and "var" (to define a mutable/variable quantity). Maybe there's value in distinguishing a "var" declaration from a "mut" reference and so maybe those should have separate syntaxes.
> Those are synonyms, and this amounts to a retcon.
The point is that it varies between calls to a function, rather than within a call. Consider, for example, a name for a value which is a pure function (in the mathematical sense) of the function's (in the CS sense) inputs.
Or between iterations of the loop scope in which it's defined, const/immutable definitions absolutely change during the execution of a function. I understand the nitpicky argument, I just think it's kinda dumb. It's a transparent attempt to justify jargon that we all know is needlessly confusing.
Ah! Actually this idea that the immutable variables in a loop "change during execution" is a serious misunderstanding and some languages have tripped themselves up and had to fix it later when they baked this mistake into the language.
What's happening is that each iteration of the loop these are new variables but they have the same name, they're not the same variables with a different value. When a language designer assumes that's the same thing the result is confusing for programmers and so it usually ends up requiring a language level fix.
e.g. "In C# 5, the loop variable of a foreach will be logically inside the loop"
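The "fresh variable each iteration" idea is easiest to see with closures; a small Rust sketch of the principle (not the C# example itself):

    fn main() {
        let mut fns: Vec<Box<dyn Fn() -> i32>> = Vec::new();
        for i in 0..3 {
            // Each iteration introduces a new `i`; the closure captures that one.
            fns.push(Box::new(move || i));
        }
        let captured: Vec<i32> = fns.iter().map(|f| f()).collect();
        assert_eq!(captured, vec![0, 1, 2]); // three distinct values, not three copies of the last
    }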
Seems like you're coming around to my side of the fence that calling these clearly distinct constant expressions "variables" is probably a mistake?
I don't think so? I've been clear that there are three distinct kinds of thing here - constants, immutable variables, and mutable variables.
In C the first needs us to step outside the language to the macro pre-processor, the second needs the keyword "const" and the third is the default
In Rust the first is a const, the second we can make with let and the third we need let mut, as Carmack says immutable should be the default.
There are surely more than three! References can support mutation or not, "constants" may be runtime or compile time.
The point is that the word "variable" inherently reflects change. And choosing it (à la your malapropism-that-we-all-agree-not-to-notice "immutable variables") to mean something that doesn't change (1) is confusing and (2) tends to force us into worse choices[1][2] elsewhere.
A "variable" should reflect the idea of something that can be assigned.
[1] In rust, the idea of something that can change looks like a misspelled dog, and is pronounced so as to imply that it can't speak!
[2] In C++, they threw English out the window and talk about "lvalues" for this idea.
Well maybe global constants shouldn't be called "variables", but I don't see how your definition excludes local immutable variables from being called "variables". E.g.
  fn sin(x: f64) -> f64 {
    let x2 = x / PI;
    ...
Anyway this is kind of pointless arguing. We use the word "variable". It's fine.
It's a variable simply because it doesn't refer to a specific object, but to whatever object is assigned to it, either as a function argument or as the result of a computation.
It's in fact we programmers who are the odd ones out, compared to how the word variable has been used by mathematicians and logicians for a long time.
> What is a "variable" if not something that varies?
If I define `function f(x) { ... }`, even if I don't reassign x within the function, the function can get called with different argument values. So from the function's perspective, x takes on different values across different calls/invocations/instances.
I try to avoid this ambiguity by calling such variables "values".
Some languages like Kotlin have var and val, introducing a distinction between variables (which are expected to get reassigned, to vary over time) and values, which are just that: a value that has been given a name. I like these small improvements.
(unfortunately, Kotlin then goes on and introduces "val get()" in interfaces, overloading the val term with the semantics of "read only, but may very well change between reads, perhaps you could even change it yourself through some channel other than simple assignment which is a definite no")
You could always interpret a variable from the perspective of its memory address. It is clearly variable in the sense that it can and will change between allocations of that address; however, an immutable variable is intended to remain constant as long as the current allocation of it remains.
Right, yeah, it’s a funny piece of terminology! The sense in which a ‘variable’ ‘varies’ isn’t that its value changes in time, but that its value is context-dependent. This is the same sense of the word as used in math!
A common naming is value. You can call them immutable values and mutable variables.
Another way to look at it is a variables are separate from compile time constants whether you mutate them or not.
The term 'variable' is from mathematics. As others have said, the values of variables do vary but they do not mutate.
Yes, and math has the notion of "free variable" and "bound variable" [1].
[1] https://en.wikipedia.org/wiki/Free_variables_and_bound_varia...
That's why in some languages they don't call them variables, but bindings instead.
(let [a 10] a)
Let the symbol `a` be bound to the value `10` in the enclosing scope.
Yeah I wish variables were immutable by default and everything was an expression
Oh well, continues day job as a Clojure programmer that is actively threatened by an obnoxious Python takeover
As a Python programmer at day job, that is Clojure-curious and sadly only gets to use it for personal projects, and is currently threatened by an obnoxious TypeScript takeover, I feel this.
If you avoid metaprogramming and stick to the simple stuff, python and typescript are almost the same language.
To be fair, comprehensions (list/object expressions) are a nice feature that I miss a lot in JS/TS. But that's about it.
In the context of the original discussion, TypeScript (and ES6) has const and let.
Neither let nor even const are immutable (const prevents reassignment but not mutation if the value is of a mutable type like object or array).
Fair enough about const and let, the obnoxiousness for me is a combination of the language ergonomics, language ecosystem, but mostly the techno-political decision making behind it.
well yeah except const doesn't make objects or arrays immutable
I feel that Java’s “final” would have been a better choice than “const”. It doesn’t have the same confusing connotation.
Removing barriers to sloppy code is a language feature.
That is why vibe coding, JavaScript and Python are so attractive.
Removing barriers to civil engineering building design is a feature.
Who needs to calculate load bearing supports, walls, and floors when you can just vibe oversize it by 50%.
Well if it does the job. So what?
Rust taught me that a language does not have to be purely functional to have everything be an expression, and ever since I first used Rust years ago I've been wishing every other language worked that way. It's such a nice way to avoid or limit the scope of mutations
Clojure will always be faster than Python. So you have that, at least.
You are not a Clojure programmer. You use Clojure to solve problems in a professional context. I'm sorry that there's a political tribal war based on language going on at your workplace.
But especially now that coding agents are radically enabling gains in developer productivity, you don't need to feel excluded by the artificial tribal boundaries.
If you haven't, I recommend reading: https://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-pr...
You know I read this when it came out but have gotten out of the habit of applying it.
Thanks for the reminder. Will work on putting these ideas back into practice again.
> I wish it was the default, and mutable was a keyword.
I wish the IDE would simply provide a small clue, visible but graphically unobtrusive, that it was mutated.
In fact, I end up wishing this about almost every language feature that passes my mind. For example, I don't need to choose whether I can or can't append to a list; just make it unappendable if you can prove I don't append. I don't care if it's a map, list, set, listOf, array, vector, arrayOf, Array.of(), etc unless it's going to get in my way because I have ten CPU cores and I'll optimize the loop when I need to.
In my IntelliJ (a recent version), if I write a small Java function like this:
    private static void blah()
    {
        final int abc = 3;
        for (int def = 7; def < 20; ++def)
        {
            System.out.print(def);
        }
    }
then IntelliJ underlines `def` (it gets reassigned by the loop) but not `abc`, so I can see at a glance which variables actually vary.
As an aside, you might also enjoy the inline inferred annotations.
https://www.jetbrains.com/help/idea/annotating-source-code.h...
Seeing @NotNull in there even if the author hasn't specifically written that can help in understanding (and not needing to consider) various branches.
1st) you use ++def in a loop, don't be weird; 2nd) if 'abc' is to be used in the loop body, define in the loop, e.g. for (int def = 7, abc =3; ...); 3rd) this is an IntelliJ bug - both 'def' and 'abc' in the sample are always defined.
3) looks like you read 'underlined' as 'undefined'
true that, thanks!
The only thing that is weird is your lack of understanding of temporary variables.
perhaps... yet, Java doesn't have a definition for temporary variables
This works in RustRover as well! Super useful.
Rust's type system specifically facilitates more powerful tools: https://github.com/willcrichton/flowistry
I don't think this is the best option, there could be very hard bugs or performance cliffs. I think I'd rather have an explicit opt-in, rather than the abstraction changing underneath me. Have my IDE scream at me and make me consider if I really need the opt-in, or if I should restructure.
Although I do agree with the sentiment of choosing a construct and having it optimize if it can. Reminds me of a Rich Hickey talk about sets being unordered and lists being ordered, where if you want to specify a bag of non-duplicate unordered items you should always use a set to convey the meaning.
It's interesting that small hash sets are slower than small arrays, so it would be cool if the compiler could notice size or access patterns and optimize in those scenarios.
Right, sql optimizers are a good example - in theory it should "just know" what is the optimal way of doing things, but because these decisions are made at runtime based on query analysis, small changes to logic might cause huge changes in performance.
I use Swift for work. The compiler tells you this. If a mutable variable is never mutated it suggests making it non-mutable. And vice versa.
Yup, it's pretty great. You get into the habit of suspiciously eyeing every variable that's not a constant.
As will TypeScript, at least when using Biome to lint it.
My very minor complaint about TypeScript is you have to use `const`, which is 2 additional letters.
Seriously though, I do find it slightly difficult to reason about `const` vars in TypeScript because while a `const` variable cannot be reassigned, the value it references can still be mutated. I think TypeScript would benefit from more non-mutable value types... (I know there are some)
Swift has the same problem, in theory, but it's very easy to use non-mutable value types in Swift (`struct`) so it's mitigated a bit.
eslint has this too: https://eslint.org/docs/latest/rules/prefer-const
Your IDE probably supports this as an explicit action. JetBrains has a feature that can find all reads and writes to a variable
It also has the ability to style mutated variables differently.
Yes, depending on your highlighting scheme. Not every highlighting scheme shows this by default, unfortunately.
To me, this seems initially like some very minor thing, but I find it very helpful working with non-trivial code. For larger methods you can directly discern whether a variable that isn't declared immutable nonetheless behaves immutably.
I don’t have any useful ideas here but if you make a linter for this sort of thing, I suggest calling it “mutalator.”
Could Pylint help? It at least has a check for variable redefinition: https://pylint.pycqa.org/en/latest/user_guide/messages/refac...
If you write in erlang, emacs does this by default ;)
Clang-tidy's misc-const-correctness warns for this. Hook it up to claude code and it'll const all non mutated mutables.
Agree. After working seriously on a large production Haskell codebase for several years I definitely took it for granted. Now that I’m writing stuff in C again I do think immutability should be the default.
const isn’t really it though. It could go further.
Well, in C you actually can not mutate something, you can only reassign, as it is always pass-by-value. You need to work around that by passing a pointer to the object instead. In that sense mutability is kind of a language keyword: '&'. When you want to just pass the object, you pass object; if you need to modify it, you need to pass &object. This is something I hate in C++: random function invocations can mutate arguments without it being obvious in the call syntax.
I think that's why the * is generally preferred over the & for this purpose. It also can give some hints about ownership issues. This "pass by reference" thing is syntactic sugar and sometimes is great to have, but as Perlis said, "Syntactic sugar causes cancer of the semicolon" [1].
Are Rust's defaults far enough?
> ... making almost every variable const at initialization is good practice. I wish it was the default, and mutable was a keyword.
Rust mentioned!
Here's a relevant comment on Rust: https://x.com/ID_AA_Carmack/status/1094419108781789184
That was 6 years ago. I'd like to see how that feeling developed.
And Zig :)
Plus F# and a whole family of FP languages.
Years ago I did a project where we followed a lot of strict immutability for thread safety reasons. (Immutable objects can be read safely from multiple threads.)
It made the code easier to read because it was easier to track down what could change and what couldn't. I'm now a huge fan of the concept.
You should check out Rust
Rust wasn't available at the time.
It probably won't come as a surprise to you, but I am a big fan of Rust.
I like the idea of immutable-by-default, and in my own musings on this I've imagined a similar thing except that instead of a mutable keyword you'd have something more akin to Python's with blocks, something like:
    # Immutable by default
    x = 2
    items = [1,2,3]
    with mutable(x, items):
        x = 3
        items.append(4)
    # And now back to being immutable, these would error
    x = 5
    items.append(6)  
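You can get something in that spirit in Rust today without new syntax; a rough sketch: rebind as mutable inside a block, then hand back plain immutable bindings at the end.

    fn main() {
        let x = 2;
        let items = vec![1, 2, 3];

        // Mutation is confined to this block; the results come out as new
        // immutable bindings that shadow the old ones.
        let (x, items) = {
            let mut x = x;
            let mut items = items;
            x += 1;
            items.push(4);
            (x, items)
        };

        // Out here, reassigning `x` or calling `items.push(...)` is a compile error.
        assert_eq!(x, 3);
        assert_eq!(items, vec![1, 2, 3, 4]);
    }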
This is in essence a mutable borrow - by looking at Rust's borrow checker, one can see the complexities of the concept.
Clojure has transients—a similar idea I believe. Basically bounded mutation.
Without a borrowck, inside your mutable block another variable can reference the mutable version of your x or items, which can then be mutated outside of that block.
Not if you're only allowed to get an immutable reference from an immutable variable.
When I started programming in Haskell, where all variables are immutable, I felt like I was in a straitjacket
Then, suddenly, the enlightenment
How do you know it’s real? When one is in a straitjacket for a long time, the trauma might make the mind disassociate from the body, giving an illusion of freedom.
The brain is essentially dreaming it’s escaped while the body is still programming in Java.
That made me laugh, thank you.
The lovely thing with Haskell is you have the ST monad that lets you have as many mutable variables as you want, as long as they stay within ST.
How does Haskell deal with things like memory-mapped I/O or signal handlers where the value of a variable can be changed by factors outside of the programmer's control?
IORefs usually, which can only be manipulated within the IO monad, so they tend to only get used at the top level and passed down to pure functions as parameters.
Side effecting computations that depend on the "real world" go into an IO monad. The game in Haskell is shifting as much of the codebase as possible into pure functions/non-side-effecting code, because it's easier to reason about and prove correct.
I've had this experience with going from PHP and JS to typed languages. Many years ago I was a type sceptic, but after being forced to use them, I now can't stand not having strict typing.
I'm sure others have written about this, but these days I think good code is code which has a very small area of variability. E.g. code which returns a value in a single place as a single type, has a very limited number of params (also of a single type), and has no mutable variables. If you can break code into chunks of highly predictable logic like this, it's so, so much easier to reason about your code and prevent bugs.
Whenever I see methods with 5+ params and several return statements I can almost guarantee there will be subtle bugs.
Personally I'd tweak your last sentence to "return statements in the middle of a function."
Early returns at the very top for things like None if you pass in an Option type don't increase the risk of bugs, but if you have a return nested somewhere in the middle it makes it easier to either write bugs up front or especially have bugs created during a refactor. I certainly have had cases where returns in the middle of a beefy function caused me headaches when trying to add functionality.
> Whenever I see methods with 5+ params
I don't see why that's a problem. If a function implements an algorithm with several parameters (e.g. a formula with multiple variables), those values have to be passed somehow. Does it make a difference if they're in a configuration object or as distinct parameters?
I initially started programming with C++, then did a bunch of scripting languages and then Rust and C#. I feel like there are pros and cons to strictness.
I don't like all scripting languages, but Python, for example, has a simple syntax and is easy and fast to learn and write. I think it also has a good standard library. Some things can be simpler because of the missing guardrails. But that's also the weakness and I wouldn't use it for large and complex software. You have to be more mindful to keep a good style because of the freedoms.
C++ and Rust are at the opposite end. Verbose to write and more difficult syntax. More stuff to learn. But in the end, that's the cost for better guarantees on correctness. But again, they don't fit all use cases as well as scripting languages.
I've experienced the good and the bad in both kinds of languages. There are tradeoffs (what a surprise) and it's probably also subjective what kind of language one prefers. I think my _current_ favorites are Rust, C# and Python.
How fast this got to the top, you would think John Carmack just invented nuclear fusion.
I have no doubt that he could, not only invent nuclear fusion, but get it running on a Pentium 90.
Right? This isn't even a hot take - it's just standard software engineering advice we all learn in school or on the job.
I was part of the Carmack cult but the illusion was broken when I saw him use the same authoritative tone on a subject I'm more knowledgeable about.
I don't think Gell-Mann amnesia is really a material issue. Carmack is indubitably an expert in his field, but that doesn't mean he's an expert in every field (like aerospace or AI). I'm an expert in some things, but I've probably said some stupid shit in other fields where I dabble, such as cooking, playing music, raising cats.
Such as? Any links to this?
Good old Gell-Mann Amnesia! It doesn't mean he's incompetent in his core area of expertise, though.
How many hours of discussions went into topics like code formatting and naming things like variables, endpoints, classes, etc?
Sometimes it's nice to be reminded of some basic good ideas. Even if you already know. Also https://xkcd.com/1053/
Just like AGI he was supposedly brought on board for but…..checks notes. Nothing.
>Just like AGI he was supposedly brought on board for but…..checks notes. Nothing.
His AGI work was entirely his own? As in he literally stepped down from a high level corporate role where he was responsible for Oculus (3D games/applications) to do this in his own time. Similar to his work on Armadillo Aerospace.
Dude isn't a god.
That said it's worth listening when he chimes in about C/C++ or optimisation as he has earned his respect when it comes to these fields.
Will he crack AGI? Probably not. He didn't crack rockets either. Doesn't make him any less awesome, just makes him human.
There are certain figures who are very experienced and knowledgeable in certain domains, so when they speak up about a topic it's usually worth listening to them. That doesn't mean they're always going to be correct, and they shouldn't be worshiped as superhuman entities, but it's almost always a bad idea to completely ignore them.
People worship this guy, but other than being a good C++ graphics programmer, it isn't clear what he's actually done.
Well some people do, I'm sure, but most people just pay ordinary levels of attention to him. And they do that because he's made interesting contributions to multiple products that people like using - which is enough, surely?
(Regarding this specific tweet, this seems to be him visiting his occasional theme of how to write C++ in a way that will help rather than hinder the creation of finishable software products. He's qualified to comment.)
Doom, which has been countless fun up to this day for the people who make mods? (E.g. the great Myhouse.wad, which was perhaps FPS of the year... 2023)
Quake, which was a good game, but arguably a better engine that led to things like Half-Life 1?
Other games?
Shared the code to Doom and Quake?
I guess you don't understand how big of a game Doom was. The first episode holds up surprisingly well to this day, even after hundreds of doom-clones, as they used to call FPS games.
But he didn't design the Doom game. He designed its graphics engine.
His engines are open source, and graphics are far from the only interesting thing about them. If you don't know what he's done that's on you; it's no secret.
So again, what has he done successfully besides C++ graphics?
I also default to const in javascript. Somehow a "let" variable feels so dirty, but I don't really know why. I guess at this point my experience forged an instinct but I can't even put a theory on it. But it's sometimes necessary and I use it of course.
JS's const doesn't go far enough since you can still mutate the object via its methods. In C++, you can only call const methods (which can't mutate the object) on const variables.
In JS, by using const, you are signalling to the reader that they don’t need to look out for a reassignment to understand the code. If you use let, you are saying the opposite.
In JetBrains editors it's possible to highlight mutable variables, at least in the languages where the distinction exists. My go to setting in Kotlin is to underscore all `var`'s, for two reasons:
- this makes them really stand out, much easier to track the mutation visually,
- the underscore effect is intrusive just-enough to nudge you to think twice when adding a new `var`.
Nothing like diving into a >3k lines PR peppered with underscores.
I'm going to maybe out myself as having limited experience here ...
I don't mind the idea here, seems good. But I also don't move a block of code often and discover variable assignment related issues.
Is the bad outcome more often seen in C/C++ or specific use cases?
Granted my coding style doesn't tend to involve a lot of variables being reassigned or used across vast swaths of code either so maybe I'm just doing this thing and so that's why I don't run into it.
In Python, no user object is modified by a simple assignment to a name. It just binds it.
It is not about mutable/immutable objects, it is about using a name for a single purpose within a given scope.
    a = 1
    b = 2
    a = b
Though both "single purpose" and immutability may be [distinct] good ideas.
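A slightly longer sketch of the same binding-vs-mutation point (the values are arbitrary):

    a = [1, 2]
    b = a          # binds a second name to the same list object
    a = [3, 4]     # rebinds the name 'a'; the original list is untouched
    print(b)       # [1, 2]
    b.append(5)    # this, however, mutates the shared object itself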
So essentially he gives 2 arguments:
1) You get intermediate results visible in the debugger / accessible for logs, which I think is a benefit in any language.
2) You get an increased safety in case you move around some code. I do think that it applies to any language, maybe slightly more so in C due to its procedural nature.
See, the following pattern is rather common in C (not so much in C++):
- You allocate a structure S
- If S is used as input, you prefill it with some values. If it's used as output, you can leave it uninitialized or zero-fill it.
- You call function f providing a pointer to that struct.
Lots of C APIs work that way: sockets addresses, time structures, filesystem entries, or even just stack allocated fixed size strings are common.
CMV: HN should just automatically replace x links with xcancel
In Python it's common to see code like this:
  df = pd.concat([df, other_df])
  df = df.select(...)
  ...
  df |> 
    rbind(other_df) |> 
    select(...)
My eyes would hurt more if I had to look all day at the vertical misalignment of that |> operator
You could always switch to a better font like Fira Code which has a ligature for this.
The gp's comment wasn't made regarding the look of the operator in its ascii representation `|>` but about the vertical misalignment.
Typically you align a pipeline like so:
     df
     |> rbind(other_df)
     |> select(...)
Elixir too (Explorer library; default backend is Pola.rs based)
- https://github.com/elixir-explorer/explorer - https://hexdocs.pm/explorer/Explorer.html
pandas has a .pipe operator which works exactly like this
Wouldn't that be simply:
  df = pd.concat([df, other_df]).select(...)
> Making almost every variable const at initialization is good practice. I wish it was the default, and mutable was a keyword.
It's funny how functional programming is slowly becoming the best practice for modern code (pure functions, no side-effects), yet functional programming languages are still considered fringe tech for some reason.
If you want a language where const is the default and mutable is a keyword, try F# for starters. I switched and never looked back.
I want a language that helps me write software. I do not need a language that's hellbent on expressing a particular ideology.
Pure functional works great until it doesn't. For a lot of systems-y and performance-oriented code you need the escape hatches or you'll be in for a lot of pain and annoyance.
As a practical observation, I think it was easier to close this gap by adding substantial functional capabilities to imperative languages than the other way around. Historically, functional language communities were much more precious about the purity of their functional-ness than imperative languages were about their imperative-ness.
"Pure functional works great until it doesn't. "
That's why F# is so great.
Functional is default, but mutable is quite easy to do with a few mutable typedefs.
The containers are immutable, but nobody stops you from using the vanilla mutable containers from .net
It's such a beautiful, pragmatic language.
Unfortunately, unless there's an explicit way to state what has side effects, like a mut keyword, a lot of FP advantages lose value, because most devs default to mutable stuff, so the FP benefits don't compound.
I think this works quite well with IO in Haskell. Most of my code is pure, but the parts which are really not, say OpenGL code, is all marked as such with IO in their type signatures.
Also, the STM monad is the most carefree way of dealing with concurrency I have found.
F# isn’t purely functional, though, and strikes a nice balance. I just don’t really like the .NET ecosystem, being 100% Linux these days, as it always seems slightly off to me somehow.
Same. It strikes me as a great shame that there aren't great F#-like languages targeting the JVM.
Really? What's wrong with .NET? It's one of the nicest cross-platform platforms out there. What Microsoft has done with it is amazing.
Hehe, purity is one helluva drug!
For some reason, this makes me think of SVG's foreignObject tag that gives a well-defined way to add elements into an SVG document from an arbitrary XML namespace. Display a customer invoice in there, or maybe a Wayland protocol. The sky's the limit!
On the other hand, HTML had so many loose SVG tags scattered around the web that browsers made a special case in the parser to cover them without needing a namespace.
And we all know how that played out.
Posted from an xhtml foreignObject on my SVGphone
F# is not a pure functional language. It is a functional-first language that does imperative and OOP better than imperative and OOP languages.
You can program F# just like you would Python, and it will be more readable, concise, performant, and correct.
> you need the escape hatches
Isn't this a strawman? Even Haskell has unsafePerformIO and Debug.Trace etc. It just also provides enough ergonomics that you don't need them so often, and they "light up" when you see them, which is what we want: to know when we're mutating.
> If you want a language where const is the default and mutable is a keyword, try F# for starters. I switched and never looked back.
Rust is also like this (let x = 5; / let mut x = 5;).
Or you can also use javascript, typescript and zig like this. Just default to declaring variables with const instead of let / var.
Or swift, which has let (const) vs var (mutable).
FP got there first, but you don't need to use F# to have variables default to constants. Just use almost any language newer than C++.
In Java you can use final[1]. And yes, if final points to an ArrayList you can change it, but you can also use final together with immutable data structures[2].
Did you know that "final" does not actually mean final in Java (as in: the variable cannot necessarily be constant folded)? Reasons include reflection and serialization (the feature that nowadays nobody uses, but which the Java language developers always have to worry about due to backwards compatibility). There was an excellent talk about this recently, I think triggered by the new "stable values" JEP: https://youtu.be/FLXaRJaWlu4
In typescript and js you get immutable references, but the data is mutable. Definitely not the same thing
You have `as const`. Yes I know it's not enforced at runtime, but the type system does support it.
There’s Object.freeze for enforcing at runtime.
Are there languages that automatically extend this to things like data structure members? One of the things I like about the C++ const keyword is that if you declare an instance of a struct/class as const it extends that to its members. If the instance isn’t const, you can still mutate them (as long as they aren’t declared const within the structure itself)
Rust works this way, yes. There are escape hatches though, which allow interior mutability.
Yeah, here's an example: https://play.rust-lang.org/?version=stable&mode=debug&editio...
If I understand what you’re asking correctly, rust is also like this. If you have a non-mut value of (or non-mut reference to) an object, you only get non-mut access to its members.
Agreed, and Carmack as always was ahead of the curve here.
In case anyone here hasn’t seen it, here’s his famous essay, Functional Programming in C++:
http://sevangelatos.com/john-carmack-on/
He addresses your point:
> I do believe that there is real value in pursuing functional programming, but it would be irresponsible to exhort everyone to abandon their C++ compilers and start coding in Lisp, Haskell, or, to be blunt, any other fringe language. To the eternal chagrin of language designers, there are plenty of externalities that can overwhelm the benefits of a language, and game development has more than most fields.
Functional programming languages (almost always?) come with the baggage of foreign looking syntax. Additionally, imperative is easier in some situations, so having that escape hatch is great.
I think that's why we're seeing a lot of what you're describing. E.g. with Rust you end up writing mostly functional code with a bit of imperative mixed in.
Additionally, most software is not pure (human input, disk, network, etc.), so a pure-first approach ends up being weird for many people.
At least based on my experience.
Rust is not very suitable for functional programming because it is aggressively non-garbage-collected. Any time Rustaceans want to do the kind of immutable DAG thing that gives functional languages so much power, they seem to end up either taking the huge performance and concurrency hit of fine-grained reference counting, or they just stick all their nodes in a big array.
Using a big array has good performance though?
Computer memory is already a big array. Probably what you are thinking of is that processing array items sequentially is a lot faster than following pointers, but in the cases I'm talking about, you end up using array indices instead of pointers, which isn't much faster.
Yeah, but now logic bugs can cause memory leaks, doing use-after-frees, etc, without any kind of tooling to prevent it (nothing like valgrind to catch them). Sure, they won't crash the program, but sending money from a wrong account is worse than a segfault, imo.
Foreign looking is just a person's biases/experience. C is just as foreign looking to a layperson, you just happened to start programming with C-family languages. (Also, as a "gotcha" counterpoint, I can just look up a Haskell symbol in hoogle, but the only language that needs a website to decipher its gibberish type notation is C https://cdecl.org/ )
Nonetheless, I also heavily dislike non-alphabetical, library-defined symbols (with the exception of math operators), but this is a cheap argument and I don't think this is the primary reason FPs are not more prevalent.
I think you could construct a stronger version of this complaint as FP languages not prioritizing usability enough. C was never seen as a great language but it was relatively simple to learn to the point that you could have running code doing something you needed without getting hit with a ton of theory first. That code would probably be unsafe in some way but the first impression of “I made a machine do something useful!” carries a lot of weight and anyone not starting with an academic CS background would get that rush faster with most mainstream languages.
> Functional programming languages (almost always?) come with the baggage of foreign looking syntax.
That increases the activation energy, I guess, for people who have spent their whole programming life inside the algol-derived syntax set of languages, but that’s a box worth getting out of independently of the value of functional programming.
> Functional programming languages (almost always?) come with the baggage of foreign looking syntax.
At least for me, this was solved by Gleam. The syntax is pretty compact, and, from my experience, the language is easily readable for anyone used to C/C++/TypeScript and even Java.
The pure approach may be a bit off-putting at first, but the apprehensions usually disappear after a first big refactoring in the language. With the type system and Gleam's helpful compiler, any significant changes are mostly a breeze, and the code works as expected once it compiles.
There are escape hatches to TypeScript/JavaScript and Erlang when necessary. But yeah, this does not really solve the impure edges many people may cut themselves on.
> come with the baggage of foreign looking syntax
Maybe they're right about the syntax too though? :)
Which one, Erlang, Lisp, or ML?
ML superfan here. I don’t mind lisp simplicity either. Erlang is too alien for me, but maybe once I was used to it.
If you start programming in it though, syntax only matters during the first day. Familiarity comes very fast, and if you do five programming exercises, maybe one a day, 'implement a hash map', 'make a small game', etc. you will have no problems whatsoever once the week is done.
If you have a course where one day you're supposed to do Haskell and another Erlang, and another LISP, and another Prolog, and there's only one exercise in each language, then you're obviously going to have to waste a lot of time on syntax, but that's never a situation you encounter while actually programming.
Eh, I'd say it depends.
I write way more algol-derived code than ML, yet piped operations are beautiful to me in a way I've never felt about my C#/C++/etc code, compared to either chains of dot operators (where every function returns the answer or itself depending on the context) or inside-out nested calls. And even Lisp can do that style with the arrow macros that exist in at least CL and Clojure (dunno about Racket/other Schemes).
I disagree, at least for my own case. I greatly prefer reading ML code than C style syntax.
Lisp syntax is objectively superior because you can write a predicate like thus:
  (< lower-bound x-coordinate upper-bound)
In Python or Icon (or Unicon) that's
    lower_bound < x_coordinate < upper_bound
    lower_bound <= x_coordinate < upper_bound
In Python this is a magical special case, but in Icon/Unicon it falls out somewhat naturally from the language semantics; comparisons don't return Boolean values, but rather fail (returning no value) or succeed (returning a usually unused value which is chosen to be, IIRC, the right-hand operand).
And in SQL you have
    `x-coordinate` between `lower-bound` + 1 and `upper-bound` - 1
Honestly I find ML derived languages the most pleasant to look at.
Exactly this! I’d love a modern C++ like syntax with the expressiveness of python and a mostly functional approach.
C# is not that far I suppose from what I want
Everybody's mileage will vary, but I find contemporary C# to be an impressively well rounded language and ecosystem. It's wonderfully boring, in the most positive sense of the word.
I can't stand modern C#. They've bunged in a bunch of new keywords and features of dubious benefit every release.
I'm interested: what are those new keywords and features of dubious benefit?
There is a huge amount of syntactic sugar that has been added over the years that doesn't do a whole lot IMO. It is often imported from other languages (usually JavaScript and/or Python).
e.g. Just a very simple example to illustrate the point
    if (customer != null)
    {
        customer.Order = GetCurrentOrder();
    }
    if (customer is not null)
    {
        customer.Order = GetCurrentOrder();
    }
In your sample there really is no benefit to using the "is" operator over just checking for null (assuming you haven't overloaded the "!=" operator). However, the "is" operator is a lot more powerful, you can match an expression against a pattern with it. Would you say that these samples show no benefit to using the "is" operator?
if (obj is string s) { ... }
if (date is { Month: 10, Day: <=7, DayOfWeek: DayOfWeek.Friday }) { ... }
https://learn.microsoft.com/en-us/dotnet/csharp/language-ref... https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
> In your sample there really is no benefit to using the "is" operator over just checking for null
Microsoft gives the same example though. I understand what he's saying; there's conceptual overlap between is and ==. Many ways to do the same thing.
Why couldn't it just be...
if (obj == string s) { ... }
The issue is that I dislike the overall mentality of just adding a bunch of language features. Things just seem to be dumped in each release and I think to myself "When am I going to use that?".
> Would you say that these samples show no benefit to using the "is" operator?
I didn't say no benefit. I said dubious benefit.
I didn't really want to get into discussing specific operators, but let's just use your date example:
   if (date is { Month: 10, Day: <=7, DayOfWeek: DayOfWeek.Friday }) { ... }
    static bool IsFirstFridayOfOctober(DateTime date)
    {
        return date.Month == 10
            && date.Day <= 7
            && date.DayOfWeek == DayOfWeek.Friday;
    }
    if (IsFirstFridayOfOctober(date)) {
       ...
    }
Each release there seems to be more of these language features and half the time I have a hard time remembering that they even exist.
Each time I meet with other .NET developers either virtually or in Person they all seem to be salivating over this stuff and I feel like I've walked in on some sort of cult meeting.
I agree that they should not add new stuff lightly, but the "is" operator actually should be looked at together with the switch expression in the context of pattern matching. How else could you enable powerful and succinct pattern matching in C#?
Arguments about whether the is and switch operators should exist are missing the forest for the trees. I am sure there are circumstances where they are very useful.
It isn't any one language feature it is the mentality of both the developer community and Microsoft.
> I agree that they should not add new stuff lightly
It seems they kinda do though. I am not the first person to complain that they add syntactic sugar that doesn't really benefit anything.
e.g. https://devclass.com/2024/04/26/new-c-12-feature-proves-cont...
I have a raft of other complaints outside of language features. Some of these are to do with the community itself, which only recognises something as existing once Microsoft has officially blessed it; it is like everyone has received official permission to talk about a feature. Hot Reload was disabled in .NET 6 IIRC, for dubious reasons.
In Python '==' and 'is' are not the same thing. '==' checks for equality, 'is' for identity.
I am aware. I probably should have said "inspired".
Scala
We are still living with the hangover of C, which was designed for the resource-starved machines of eons ago, and whose style later languages felt they had to copy to get any kind of adoption. (And as you point out, that is how things turned out.)
My bet is functional programming will become more and more prevalent as people figure out how to get AI-assisted coding to work reliably. For the very reasons you stated, functional principles make the code modular and easy to reason about, which works very well for LLMs.
However, precisely because functional programming languages are less popular and hence under-represented in the training data, AI might not work well with them and they will probably continue to remain fringe.
Just use OCaml in which you can mix imperative, functional, and OOP. I use all of them in a single codebase, whichever, wherever appropriate.
> It's funny how functional programming is slowly becoming the best practice for modern code (pure functions, no side-effects),
I once mentioned both these concepts to a room of C# developers. Two of them were senior to me and it was a blank expression from pretty much everyone.
> yet functional programming languages are still considered fringe tech for some reason.
You can use the same concepts in non-functional programming languages without having to buy into all the other gumpf around functional programming languages. Also other programming languages have imported functional concepts either into the language itself or into the standard libraries.
Past that, it is very rare to be able to get a job using them. The number of F# jobs I've seen advertised over the last decade, I could count on one hand.
I’ve done a significant amount of functional programming (including F#) and still reach for it sometimes, but I don’t think it provides substantial advantages for most use-cases. Local mutability is often clearer and more maintainable.
Also, category theorists think how excited people get about using the word monad but then most don’t learn any other similar patterns (except maybe functors) is super cringe. And I agree.
It's because you want a tasteful mix of both.
I believe Scala was pretty ahead here by building the language around local mutability with a general preference for immutable APIs, and I think this same philosophy shows up pretty strongly in Rust, aided by the borrow checker that sort of makes this locality compiler-enforced (also, interior mutability)
Worth noting that idiomatic Scala uses constants by default; variables are discouraged and frankly rare.
one thing I've learned in my career is that escape hatches are one of the most important things in tools made for building other stuff.
dropping down into the familiar or the simple or the dumb is so innately necessary in the building process. many things meant to be "pure" tend to also be restrictive in that regard.
Functional languages are not necessarily pure though. Actually outside Haskell don't most functional first languages include escape hatches? F# is the one I have the most experience with and it certainly does.
For what it's worth, Haskell has plenty of escape hatches itself as well.
While it's amusing, I think it's sensible: One of the main tasks in most businessy programming is to take what a human wants, translate it to code, reverse-translate it back to human understanding later, modify it, and translate it again to slightly different code.
This creates friction between casual stakeholder models of a mutable world, versus the assumptions an immutable/mostly-pure language imposes. When the customer describes what they need, that might be pretty close to a plain loop with a variable that increments sometimes and which can terminate early. In contrast, it maps less cleanly to a pure-functional world; if I'm lucky there will at least be a reduce-while utility function, so I don't need to make all my own recursive plumbing.
So immutability and pure-functions are like locking down a design or optimizing--it's not great if you're doing it prematurely. I think that's why starting with mutable stuff and then selectively immutable'ing things is popular.
Come to think of it, something similar can be said about weak/strong typing. However the impact of having too-strong typing seems easier to handle with refactoring tools, versus the problem of being too-functional.
I think it's because (I'm looking at Haskell in particular) there are a lot of great ideas implemented in them, but the purity makes writing practical or performant time-domain programs high friction. But you don't need both purity and the various tools they provide. You can use the tools without the pure-functions model.
In particular: My brain, my computing hardware, and my problems I solve with computers all feel like a better match for time-domain-focused programming.
My problem with functional languages is there never seems to be any easy way to start using them.
Haskell is a great example here. Last time I tried to learn it, going on the IRC channel or looking up books it was nothing but a flood of "Oh, don't do that, that's not a good way to do things." It seemed like nothing was really settled and everything was just a little broken.
I mean, Haskell has like what, 2, 3, 4? Major build systems and package repositories? It's a quagmire.
Lisp is also a huge train wreck that way. One does not simply "learn lisp". There are like 20+ different lisp-like languages.
The one other thing I'd say is a problem that, especially for typed functional languages, they simply have too many capabilities and features which makes it hard to understand the whole language or how to fit it together. That isn't helped by the fact that some programmers love programming the type system rather than the language itself. Like, cool, my `SuperType` type alias can augment an integer or a record and knows how to add the string `two` to `one` to produce `three` but it's also an impossible to grok program crammed into 800 characters on one line.
> Lisp is also a huge train wreck that way. [...] There's like 20+ different lisp like languages.
Lisp is not a language, but a descriptor for a family of languages. Most Lisps are not functional in the modern sense either.
Similarly, there are functional C-like languages, but not all C-likes are functional, and "learn c-likes" is vague the same way "learn lisp" is.
You’re right and this is also a bit of a pet peeve of mine. “Lisp” hasn’t described a single language for more than forty years, but people still talk about it as if it were one.
Emacs lisp and Clojure are about as similar as Java and Rust. The shared heritage is apparent but the experience of actually using them is wildly different.
Btw, if someone wants to try a lisp that is quite functional in the modern sense (though not pure), Clojure is a great choice.
> I mean, Haskell has like what, 2, 3, 4? Major build systems and package repositories? It's a quagmire.
Don't know when was the last time you've used Haskell, but the ecosystem is mainly focused on Cabal as the build tool and Hackage as the official package repository. If you've used Rust:
- rustup -> ghcup
- cargo -> cabal
- crates.io -> hackage
- rustc -> ghc
It's admittedly been years.
ghcup didn't exist, AFAIK. Cabal was around but I think there was a different ecosystem that was more popular at the time (Started with an S, scaffold? Scratch? I can't find it).
> If you want a language where const is the default and mutable is a keyword
what's the difference between const and mutable?
"const" means something can't be changed. "mutable" means it can be changed.
You don't need both a "const" keyword and a "mutable" keyword in a programming language. You only need 1 of the keywords, because the other can be the default. munchler is saying the "const" keyword shouldn't exist, and instead all variables should be constant by default, and we should have a "mutable" keyword to mark variables as mutable.
As opposed to how C++ works today, where there is no "mutable" keyword, variables are mutable by default, and we use a "const" keyword to mark them constant.
> you don't need both a "const" keyword and a "mutable" keyword
What if the lang has pointers? How do you express read-only?
compiler flags with line and column number seems like the easiest way
Don't get too comfortable, history likes oscillation. If it becomes the norm then someone will have a "actually writing imperatively makes things go super fast".
Decades of blog posts, articles and books about "best practices" led to a suspicion of anything that wasn't bog standard OOP. This site in particular has also contributed to that, especially in the early 2000s, where you can easily find comments disparaging FP.
I think there's a deeper mechanism in these discussions. They all tend to go "If you do <insert_arbitrary_method_here> fully correct[0], it solves all of your problems."
But [0] is never possible.
I get your sentiment, but I side on it's infuriating that it's taken this long. Lol.
F# is wonderful. It is the best general purpose language, in my opinion. I looked for F# jobs for years and never landed one and gave up. I write Rust now which has opened up the embedded world, and it's nice that Rust paid attention to F# and OCaml.
Const by default is not functional programming.
Indeed, but it's one of the (many) good ideas from functional programming that have filtered into more mainstream languages.
It’s because FP, great as it is, is most beneficial when 80/20’d.
Trying to do it 100% pure makes everything take significantly longer to build and also makes performance difficult when you need it. It’s also hard to hire/onboard people who can grok it.
functional programming has a lot of wonderful concepts, which are very interesting in theory, but in practice, the strictness of it edges on annoying and greatly hurts velocity.
Python has a lot of functional-like patterns and constructs, but it's not a pure functional language. Similarly, Python these days allows you to add as much type information as you want, which can provide a ton of static checks, but it's not forced on you like in other typed languages. If some random private function is too messy to annotate and not worth it, you can just skip it.
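As a rough sketch of that mix (the function names and fields are invented for illustration): annotate the public function where the static checks pay off, and leave the messy private helper alone.

    from typing import Sequence

    def total_discounted_price(prices: Sequence[float], discount: float) -> float:
        # fully annotated: a type checker can verify every call site
        return sum(prices) * (1.0 - discount)

    def _messy_internal_helper(blob):
        # deliberately unannotated; the checker largely leaves it alone
        return blob.get("items", []) if isinstance(blob, dict) else []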
I like the flexibility, since it leads to velocity and is also just straight up more enjoyable.
> In C/C++, making almost every variable const at initialization is good practice. I wish it was the default, and mutable was a keyword.
I know it's irrelevant to his point, and it's not true of C, and it doesn't have the meaning he wants, but the pedant in me is screaming and I'm surprised it hasn't been said in the comments:
In C++ mutable is a keyword.
It's also really funny, since C++ is like a malicious genie with this. The author wished for mutable to be a keyword, and C++ made the wish come true: it's a keyword that removes some of the guarantees of const!
Rediscovering one of the many great decisions that Erlang made
To be fair, Carmack has advocated for immutability and other good practices at least since the 2000s.
erlang predates that.
I doubt the GP tried to imply the opposite.
maybe, i don't really know.
his comment was in response to the
    Rediscovering one of the many great decisions that Erlang made
and seemed to me to insinuate just that.
Maybe he read SICP then. But he still hasn't written a compiler since.
I still have a habit of naming variables foo0, foo1, foo2, from a time working in Erlang many years ago.
I had enormous fun optimizing C++ via self-modifying assembly to squeeze the utmost performance out of some critical algorithms, and now this drive towards immutable everything feels like cutting my hands and legs off and forcing me to do subpar engineering.
Compiler optimizations make up for it, usually. That's been my experience at least.
An important thing to keep in mind is just how far compilers have come over the last 40 years. Often, with immutable languages, they can do extremely efficient compile-time optimizations because of the guaranteed immutability by default.
It's not true in every case, of course, but I think for most situations compilers do a more than good enough job with this.
Same here, I grew up in a world where you had a handful of registers and a bunch of memory locations, and those were your variables.
                clc
                lda value
                adc #1
                sta value
                lda value+1
                adc #0
                sta value+1
    value       .byte 0,0
    x = 20
    x = x + func(y)
    x = x / func(z)
    x = 20
    x1 = x + func(y)
    x2 = x1 / func(z)
To me this also seems wasteful: even if not for the computer, it wastes the amount of working state I can keep in my head, which is very limited.
Totally agree. const auto for me is the default.
Const/final by default - preach it, brother, amen and hallelujah. Mutability by default is the rule in the big languages and the world would be a better place if it weren’t.
Even better is to use tail calls instead of loops and eliminate mutable variables entirely.
Would be beneficial if Rich Hickey's opinion and experience regarding mutable state was given more weight than John Carmack's.
Immutability was gaining huge traction with Java... then large parts of the industry switched to golang and we hardly make anything immutable.
I would kill for an immutable error type
Go desperately needs support for immutable structs that copy cleanly.
I want to be able to write `x := y` and be sure I don't have mutable slices and pointer types being copied unsafely.
Go is the new PHP.
Much like PHP, you can actually get stuff done unlike a lot of other programming languages.
Oh? Which “lot of” other programming languages can’t you “actually get stuff done” in? Are you sure the problem lies with the programming language?
I find there are some environments where you have a positive feedback loop while working in them. PHP is one of them, Go is another at least for me.
I find with many other languages I am constantly fighting dependency managers, upgrades and all sorts of other things.
It’s an exaggeration perhaps but I get the sentiment. FP is elegant and beautiful and everything, but it can lead you to spend all day puzzling out the right abstractions for some data transformation that takes 5 minutes with a dumb for loop in Go.
I'm curious where this fits in with single assignment semantics:
    int x = 3;
    x = 4; // error!
    int* p = &x;
    *p = 4; // is that an error?
At least with clang it's a warning:
    f.c:4:8: warning: initializing 'int *' with an expression of type 'const int     *' discards qualifiers [-Wincompatible-pointer-types-discards-qualifiers]
    4 |   int* p = &x;
      |        ^   ~~
    1 warning generated.
Yes, because line 3 would implicitly be: const int * p
Is he referring to something specific with "true" iterative calculations vs. plain old iterative ones, assuming they are in some way "non-true" or less "true"? Like, avoiding i+=x in favor of ++i or something? Or maybe am I just tired today.
I think he's just saying that mutation is ok if it's something loopy, like changing the loop counter or updating some running sum. So both i+=1 and ++i are fine.
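A minimal Python sketch of that reading (not anything from the tweet itself): the loop counter and running sum below do mutate, but nothing escapes the loop, so it still reads as a single calculation.

    def mean(values):
        total = 0.0
        count = 0
        for v in values:
            total += v      # running sum: mutation confined to the loop
            count += 1
        return total / count if count else 0.0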
Love Carmack, but hard disagree on this and a lot of similar functional programming dogma. I find this type of thing very helpful:
    classList = ['highlighted', 'primary']
    if discount:
        classList.append('on-sale')
    classList = ' '.join(classList)
Some potential alternatives to consider:
1.
    classList = ['highlighted', 'primary']
        .concatif(discount, 'on-sale')
        .join(' ')
2.
    classList = ' '.join(['highlighted', 'primary'] + (['on-sale'] if discount else []))
3.
    mut classList = ['highlighted', 'primary']
    if discount:
        classList.append('on-sale')
    classList = ' '.join(classList)
    freeze classList
4.
    def get_class_list(discount):
        mut classList = ['highlighted', 'primary']
        if discount:
            classList.append('on-sale')
        classList = ' '.join(classList)
        return classList
    classList = get_class_list(discount)
We're all already doing this as the compiler turns everything into SSA form, silly goose.
> and it avoids problems where you move a block of code and it silently uses a version of the variable that wasn’t what it originally had.
I find that keeping functions short also helps a ton with that.
No, shorter than that. Short enough that the only meaningful place to "move a block of code" is into another function. Often, by itself.
It helps with that, but it has other trade-offs: indirection isn't free for readability.
> indirection isn't free for readability
Yes, but also no. If its a mostly side-effect free function with a good name and well defined input/output types its basically free.
I'll go further than that and say that indirection significantly increases cognitive load and hurts readability.
I have consistently found the opposite to be the case across decades of programming experience, as regards the extraction of helper functions. This is not "indirection" in the same sense as with data structures. It is abstraction of currently-irrelevant detail.
Reading and understanding code is a process of answering "what are the immediate steps of this task?", without thinking about what those steps consist of or entail. It is not a process of answering "where is the variable representing ..., and the code that manipulates this?", especially since this makes assumptions that may prove incorrect.
Yeah, I also wish it was the default. But it's a little too verbose to just sprinkle on every variable in C++. Alas. Rust gets this right, but I'm stuck with C++ at work.
> But it's a little too verbose to just sprinkle on every variable in C++
It's worth it, though. Every variable that isn't mutated should be const. Every parameter not mutated should be const, and so should every method that doesn't mutate any fields. The mutable keyword should be banned.
100%, life's too short.
Ultra-pedantic const-correctness (vs tasteful const-correctness on e.g. pass-by-reference arguments or static/big objects) catches nearly no bugs in practice and significantly increases the visual noise of your code.
If you have luxury of designing a new language or using one with default mutability then do so, but don't turn C coding styles into C++-envy, or C++ coding styles into Rust-envy.
This would require coming up with an order of magnitude more variable names which is just unnecessary cognitive load.
An order of magnitude? That sounds like pretty outrageous hyperbole. A variable getting reassigned 10 times sounds extremely rare, the average in my experience has to be less than 1 reassignment. I think the approach requires coming up with maybe 10% more names.
Usually there are good, obvious names for intermediate calculations in my experience.
I'm open though - what kinds of things are you doing that require reassigning variables so much?
Probably exaggerated a bit with that phrasing (“outrageous” seems similarly hyperbolic ;))
But any variable which I’ve not already marked as const is pretty much by definition going to be modified at least once. So now instead of 1 variable name you need at least two.
So now the average number of variables per non-const variable is >= 2 and will be much more if you’re doing for example DSP related code or other math heavy code.
You can avoid it with long expressions but that in principle is going against the “name every permutation” intention anyway.
Fair enough re: "outrageous"!
It's actually math heavy code (or maybe medium heavy?) where I really like naming every intermediate. fov, tan_fov, half_tan_fov, center_x, norm_x
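Something in that spirit, as a made-up Python projection helper just to show the naming style (the formula itself is illustrative, not anyone's real code):

    import math

    def normalized_x(pixel_x, screen_width, fov_degrees):
        fov = math.radians(fov_degrees)
        tan_fov = math.tan(fov)
        half_tan_fov = tan_fov / 2.0
        center_x = screen_width / 2.0
        norm_x = (pixel_x - center_x) / center_x * half_tan_fov
        return norm_x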
I spent a decade or so working on video codecs with an international team, and there was sort of an unwritten rule that code shouldn’t have comments and variable names shouldn’t be descriptive (english language couldn’t be assumed).
Which sounds really awful, but after a while it forces you to parse the logic itself instead of being guided by possibly-out-of-date comments and variable names.
I now prefer less verbosity so that probably explains why I’m a little out of distribution on this topic.
If you looked at any of my code prior to that job, it was the polar opposite with very “pretty” code and lengthy comments everywhere.
No, not at all. You still have the advantages of scopes, name shadowing, namespaces, and collection types. If your language supports them, you can also use algebraic data types to further reduce the number of names you need to deal with.
Either you don't understand what you're talking about, or you've missed the word "strive" in the tweet.
For performance critical code, you want to reuse L1D cache lines as much as possible. In many cases, allocation of a new immutable object boils down to malloc(). Newly allocated memory is unlikely to be found on L1D cache. OTOH, replacing data in recently accessed memory and reusing the memory is very likely to become L1D cache hit in runtime.
For performance critical code, you wouldn't use malloc()-allocation at all, though whether using an arena allocator or putting stuff on the stack, your argument is still sane. Data locality is speed.
"Immutability" from a programming language statement perspective doesn't necessarily imply that the implementation duplicates memory or variables.
Similar to how "tail recursion can (usually) be lifted/lowered to a simple loop", immutability at the level of language statements can often be "collapsed" into mutating a single variable. There may be one or two "dances" you need to do, like adding helper functions or structuring your code _slightly_ differently to get there, but that's similar to any kind of performance-sensitive code.
Example foo(bar(baz(), bar_alt(baz_alt(), etc...))) [where function call nesting is "representing" an immutability graph] ...yeah, that'd have a lot of allocations and whatever.
But: foo().bar().bar_alt().baz().baz_alt().etc(...) you could imagine is always just stacking/mutating the same variable[s] "in place".
...don't get hung up on the syntax (it's wildly wrong), but imagine the concept. If all the functions "in the chain" are pure (no globals, no modifications), then they can be analyzed and reduced. Refer back to the "Why SSA?" article from a week or two ago: https://news.ycombinator.com/item?id=45674568 ...and you'll see how the logical lines of statements don't necessarily correspond to the physical movement of memory and registers.
You’re describing an edge case. Generally speaking, memory is only reused after old objects are deallocated. And here’s the relevant quote from the OP’s post:
> Having all the intermediate calculations still available is helpful in the debugger
This is the kind of wisdom that comes after hours of debugging and discovering the bug was your own variable reuse.
Wouldn't this be an easy task for an SCA tool, e.g. Pylint? It has at least a warning against variable redefinition: https://pylint.pycqa.org/en/latest/user_guide/messages/refac...
This is only for redefinitions that change the type. If you re-assign with the same type, there's no warning. However, pylint does issue warnings for other interesting cases, such as redefining function arguments and variables from an outer scope.
Mutability was by far the most difficult thing when learning Python, and mutating objects while iterating over their items does get confusing, even as a senior.
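The classic example of that confusion, as a small illustration (not anyone's real code), is removing items from a list while iterating over it:

    nums = [1, 2, 2, 3]
    for n in nums:
        if n == 2:
            nums.remove(n)   # shifts the remaining items under the iterator
    print(nums)              # [1, 2, 3] - one of the 2s survives

    # building a new list avoids the problem entirely
    nums = [1, 2, 2, 3]
    nums = [n for n in nums if n != 2]
    print(nums)              # [1, 3]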
When I was first learning I thought all methods would mutate. It has a certain logic to it
I use JavaScript mostly. Or TypeScript actually, these days. I remember when ES2015 introduced `let` because `var` had weird scoping issues. But ever since, I barely use either of them. Everything is `const` these days, as it should be.
const prevents reassignment of the variable but it does not make the object the variable points to immutable.
To do the latter, you have to use Object.freeze (prevent modification of an object’s properties, but it is shallow only so for nested objects you need to recurse) and Object.seal (prevent adding or removing properties, but not changing them).
Many people use immutable.js or Immer for ergonomic immutable data structures.
That is an excellent point, and indeed a problem when debugging. When I log objects to the console, often they don't get serialized until I actually click on them, which means I don't get to see the object as it was at the time, but after a bunch of later changes.
`var` doesn't have weird scoping issues, it's just different from other languages. `var` is function scoped, thus all var declarations are hoisted to the top of the function during execution.
This is why the single var pattern used to be recommended.
Except const is not sufficient. It will prevent the reference from being reassigned but the const can still reference a mutable object.
This is the mental distinction of what something represents vs what is its value. The former should never change regardless of mutability (for any sane program…), and the latter would never change for a const declaration.
The value (pun intended) of the latter is that once you’ve arrived at a concrete result, you do not have to think about it again.
You’re not defining a “variable”, you’re naming an intermediate result.
Does anyone have any real naming conventions, patterns for doing this in ds programming in notebooks? I've got a bad habit of doing:
  df = pd.read_excel()
  df = df.drop_duplicates().blahblah_other_chained_functions()
  [20 cells later]
  df = df.even_more_fns()
Dumb question from a primarily Python programmer who mostly writes (sometimes lengthy) scripts: if you have a function doing multiple API calls - say, to different AWS endpoints with boto3 - would you be expected to have a different variable for each response? Or do you delete the variable after it’s handled, so the next one is “new?”
If they're representing different data from different API calls, yeah, I'd be strongly inclined to give them different names.
    order_data = boto.get_from_dynamodb()
    customer_data = boto.get_from_rds()
    branding_assets = boto.get_from_s3()
    return render_for_user(order_data, customer_data, branding_assets, ...)
I think rebinding an old variable name is a common and sensible way to free a resource in Python. If there are no names left referring to an object, it will be garbage collected. Which is different from languages like C++ with manual memory management.
John Carmack is apparently a C++ programmer who still has a lot to learn about Python.
Not "a resource", but memory specifically. If there's a proper resource (e.g. a file), you should ensure it's explicitly released instead of relying on the GC (using with/close/etc.) And if memory usage is really important, you should probably explicitly delete the variable.
Anything else is wishful thinking, trying to rely on the GC for deterministic behaviour.
In the vast majority of cases, developer ergonomics are much more important than freeing memory a little earlier. In other scenarios, e.g., when dealing with large data frames, the memory management argument carries more weight. Though even then there are usually better patterns, like method chaining.
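For example, a small method-chaining sketch with pandas (the column names are made up): each step produces a new frame, and the caller only ever binds the final result.

    import pandas as pd

    def clean(raw: pd.DataFrame) -> pd.DataFrame:
        return (
            raw
            .drop_duplicates()
            .rename(columns=str.lower)
            .assign(total=lambda df: df["price"] * df["qty"])
            .query("total > 0")
        )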
FYI John Carmack is a true legend in the field. Despite his not being a lifelong Python guy, I can assure you he is speaking from a thorough knowledge of the arguments for and against.
But wouldn't you do that inside a function or a loop body?
In TFT, he mentions
> [...] outside of true iterative calculations
// !!!: SWIFTY VAR & LET SUPPORT
#define var __auto_type
#define let const __auto_type
Good point. It had never occurred to me that keeping the intermediate steps helps with debugging. Obvious in hindsight.
For the same reason almost all my functions end with this:
  const result = ... ;
  return result;
This is almost standard in modern languages:
For example in Dart you make everything `final` by default.
A real default doesn’t need a keyword.
Why loops specifically? Why not conditionals?
A lot of code needs to assemble a result set based on if/then or switch statements. Maybe you could add those in each step of a chain of inline functions, but what if you need to skip some of that logic in certain cases? It's often much more readable to start off with a null result and put your (relatively functional) code inside if/then blocks to clearly show different logic for different cases.
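Something like this, as a rough Python sketch (the pricing helpers are invented and trivially stubbed just so it runs):

    def apply_bulk_pricing(order):      # hypothetical helper
        return order["list_price"] * order["qty"] * 0.8

    def apply_staff_discount(order):    # hypothetical helper
        return order["list_price"] * order["qty"] * 0.7

    def price_as_listed(order):         # hypothetical helper
        return order["list_price"] * order["qty"]

    def price_order(order, customer_type):
        # start with an empty result and let exactly one branch fill it in;
        # each branch itself stays side-effect free
        result = None
        if customer_type == "wholesale":
            result = apply_bulk_pricing(order)
        elif customer_type == "employee":
            result = apply_staff_discount(order)
        else:
            result = price_as_listed(order)
        return result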
There’s no mutating happening here, for example:
  if cond:
      X = "yes"
  else:
      X = "no"
  let X = if cond { "yes" } else { "no" };
Swift does let you declare an immutable variable without assigning a value to it immediately. As long as you assign a value to that variable once and only once on every code path before the variable is read:
    let x: Int
    if cond {
        x = 1
    } else { 
        x = 2
    }
    // read x here
Same with Java and final variables, which should be the default as Carmack said. It’s even a compile time error if you miss an assignment on a path.
IMO, both ternary operator form & Rust/Haskell/Zig syntax works pretty well. Both if expression syntax can be easily composed and read left-to-right, unlike Python's `<true-branch> if <cond> else <false-branch>`.
I wish C++ did some sane things like if I have a const member variable, allow me to initialize it as I wish in my constructor - it's a constructor for crying out loud.
Don't be silly and assume if I assign it multiple times in an if condition it's mutable - it's constructing the object as we speak, so it's still const!!!
C# gets this right among many other things (readonly vs const, init properties, records to allow immutability by default).
And the funny thing is the X thread has lots of genuine comments like 'yeah, just wrap it in a lambda to ensure const correctness', as if that's an okay option here? The language is bad to the point that it forces good, sane people into weird "clever" patterns all the time, in some sort of contest for the highest ELO rating in the cleverest, evilest C++ "solution".
I was hoping Carbon was the hail mary for saving C++ from itself. But alas, looks like it might be googlified and reorged to oblivion?
Having said that, I still like C++ as a "constrained C++" language (avoid clever stuff) as it's still pretty good and close to metal.
Is there a ruff rule for this?
There are [1] and [2] for function arguments and loop variables, respectively, but nothing for the general case. Note that a type checker will complain if you re-assign with a different type. Pylint also has [3] for redefining variables from an outer scope, but Ruff doesn't implement that yet.
[1] https://docs.astral.sh/ruff/rules/redefined-argument-from-lo...
[2] https://docs.astral.sh/ruff/rules/redefined-loop-name/
[3] https://pylint.pycqa.org/en/latest/user_guide/messages/warni...
Jonathan Blow had strong objections to the const keyword. I forget why, because I did not understand it at the time. Does anyone with Jai experience have a counterpoint to that?
I believe the standard counterargument goes:
- either it's transitive, in which case your type system is very much more complicated
- or it isn't, in which case it's a near useless liability
Naturally C++ runs with the latter, with bonus extra typing for all the overloads it induces.
How isn't it transitive in C++? If the variable/reference is const, you can't modify fields, and you can only call const methods. What else do you need?
He also stopped shipping things.
In JavaScript, I really like const and have adopted this approach. There are some annoying situations where it doesn't work though, to do with scoping. Particularly:
- if (x) { const y = true } else { const y = false } // y doesn't exist after the block
- try { const x = foo } catch (e) { } // x doesn't exist after the try block
JavaScript’s `const` has the bigger issue that while things can’t be reassigned, they can still mutate. For example:
  const myArray = [1,2,3]
  myArray.push(4)
  myArray // [1, 2, 3, 4]
In what way is that an issue?
Because this isn't immutability. The goal is to have a way to define an object that will never change after initialisation, and JS's const isn't it.
You could do an absolutely disgusting IIFE if you need the curly brace spice in your life, instead of a typical JS ternary.
  const y = (() => {
    if (x) {
      return true;
    } else {
      return false;
    }
  })();
Technically you could just use an assignment ternary expression for this:
    const y = (x === true) ? true : false;
    Composite.prototype.SetPosition = function (x, y, z) {
        x = (isNumber(x) && x >= 0 && x <= 1337) ? x : null;
        y = (isNumber(y) && y >= 0 && y <= 1337) ? y : null;
        z = isNumber(z) ? z : null;
        if (x !== null && y !== null && z !== null) {
            // use clamped values
        }
    };
I typically only use ternaries for single operations and extract to a function if it's too big. Although they are quite fun in JSX. For your code I'd probably do:
  function SetPosition(x, y, z) {
    if (!(isNumber(x) && isNumber(y) && isNumber(z))) {
      // Default vals
      return;
    }
    x = clamp(x, 0, 1337);
    y = clamp(y, 0, 1337);
    z = z;
  }
I always call this the difference of return branch styles. Yours I'd describe as "fast fail", aka return false as quickly as possible (for lack of a better terminology), whereas I personally prefer to have a single return-false case at the bottom of my function body, and the other validation errors (e.g. in Go) are usually in the else blocks.
In JS, errors are pretty painful due to try/catch, which is why these days I would probably recommend using Effect [1] or a similar library to get a failsafe workflow for error cases.
Errors in general are pretty painful in all languages, in my opinion. The only language where I thought "oh, this might be nice" was Koka, which is designed around effect types and handlers [2].
nitpick: cleaner without the ()'s, as '=' has the second-lowest precedence, just above the comma operator.
I love it.
Why not do:
const y = x ? true : false;
That sounds like a more complicated way to write
    const y = (bool)x;
    const bool y = x;

I'm talking about cases with additional logic that's too long for a ternary.
Ditto. These days those are the only cases where I use "let" in JS. The thing I miss most from Kotlin is the ability to return values from blocks, e.g.
    val result = if (condition) {
        val x = foo()
        val y = bar(x)
        y + k // return of last expression is return value of block
    } else {
        baz()
    }
Or:
    val q = try {
        a / b
    } catch (e: ArithmeticException) {
        println("Division by zero!")
        0 // Returns 0 if an exception occurs
    }
Wouldn't this needlessly allocate RAM for every step in a calculation?
not if it gets optimized out
On a similar note, I’ve always liked the idea of being able to mark functions as pure (for some reasonable definition of pure).
The principle of reducing state changes and side effects feels like a good one.
For what it's worth, I was experimenting with this idea for Python (in an almost completely vibe-coded fashion) here: https://github.com/jlmcgraw/pure-function-decorators
Whether it's of any actual utility is debatable
Did you ever have one of those days when variables won't and constants aren't?
Do as much as you can in a spreadsheet, then start a new spreadsheet.
This Carmack guy knows his stuff.
It's fascinating how even when Carmack says something rather obvious and unoriginal, that many people have said before, sometimes decades ago, it still spawns a 400+ comment thread on HN. I really don't get it, it's almost like a cult of personality at this point.
It always was.
the comments/replies to his tweet remind me why I usually avoid twitter
It is really amazing in how many ways C/C++ made the wrong default/implicit choices, in retrospect.
Hindsight's 20/20, of course. But still.
In other words, he wishes he used Rust.
Proposing immutable-by-default for C or C++ doesn't make sense for backwards-compatibility reasons. New languages like Rust have an easier time making better choices, such as immutable by default.
They could just add a "use immutable;" directive that you place at the top of your file.
C# does this with the null hole. I wish more languages would take a versioning approach to defaults at the file-level.
Maybe the new C++ profiles that are supposedly going to make C++ a safe language could do it.
Could be a compiler flag. -const-by-default. Would probably mean you need to scatter mutable across the codebase to get it to compile, but I expect some people would like to have every local annotated as const or mutable.
cpp2/cppfront could plausibly do this, right? Except he doesn't want to:
https://github.com/hsutter/cppfront/wiki/Design-note%3A-cons...
Well, if they are willing to break backwards compatibility, a lot of things can be improved, including this.
This shouldn't be a hard and fast rule for everything. It should be treated as a guideline, with some wiggle room for the programmer to reuse variables in situations where that makes sense.

Going too hard on avoiding mutation means you usually end up with larger structures that are recreated wholesale. Those then become the point of mutation themselves. This can be helpful if the larger object needs a lot of validation and internal cross-field rules (e.g. you can't set 'A' if 'B' is true; you can validate that when recreating the larger object, whereas if 'A' were mutable on its own, someone might set it and cause the issue much later, which will be a pain to track down).

Anyway, the outcome of trying to avoid mutation is that instead of simply setting player.score you get something like player = new Player(oldPlayerState, updates). This is of course slow as hell: you're recreating the entire player object to update a single variable. While it does technically mutate only a single reference rather than everything individually, it's not really beneficial in such a case.

Unless you have an object with a lot of internal rules across its variables (the can't-set-'A'-if-'B' example above), it's probably wrong to push the mutation up the stack like that. The simple fact is that a complex program will need to mutate something at some point (it's literally not a Turing machine if it can't), so when avoiding mutation you're really just pushing the mutation into a higher-level data object. "Avoid mutating data that has dependencies" is probably the correct rule to apply. Dependencies need to be bundled, and this is why it makes sense not to allow 'A' in the above example to be mutated individually, but instead force the programmer to update A and B together.
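For illustration, here's a minimal JavaScript sketch of the trade-off described above; the player object and fields are hypothetical, and the spread-based rebuild just stands in for the new Player(oldPlayerState, updates) pattern:

  const player = { name: 'p1', score: 0 };

  // Mutable style: change the field in place.
  player.score += 10;

  // Immutable style: rebuild the object to change one field.
  // A shallow copy via spread; still a new allocation per update.
  const updatedPlayer = { ...player, score: player.score + 10 };
  console.log(updatedPlayer.score); // 20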
Use F#, compile it to .NET, Rust, JavaScript, TypeScript and Python. Problem solved.
He would like Rust very much.
Carmack has said he likes Rust, he just isn't a language zealot. In Rust, though, the idiom would be to shadow intermediate variables often, which removes the debugger benefit.
> the idiom would be to shadow intermediate variables often
There is no idiom about this. Do it if you like, but Clippy doesn't warn about any of it.
There isn't an enforced one, and the opinion is bifurcated, but I regularly find crates that clearly treat this as idiomatic practice.
Variable is by definition mutable.
Constant is by definition immutable.
Why can't people get it through their heads in 2025? (I'm looking at you, Rust)
That depends on your definition. Programming languages often deviate from mathematics when it comes to the definition of variables, functions, etc. That is by choice; Haskell tried to stay as close to the mathematical definitions as possible.
You don’t need to make a mathematical argument, “variable” and “constant” have clear meanings in colloquial use which match the definitions of your parent comment.
So does 'parent'.
A constant is a variable that _always_ has the same value throughout its lifetime, e.g.

    const int a{10};
    void some_function(const int a);
> A constant is a variable...
No, it is not. A constant is the direct opposite of a variable.
> An immutable variable..
There is no such thing. You can decide not to mutate it, but a variable is by definition mutable.
If you want to argue mutability, then you have to talk about the data structure or memory footprint of the constant or variable that it points to or represents, not the concept of variable or constant itself.
In other words, we can have var foo = 5 and const bar = 5. foo can be changed by being reassigned another value with a simple foo = 6, whereas bar cannot, as bar = 6 should cause a panic/exception/...

On the other hand, we can have var foo = {value: 5} and const bar = {value: 5}, and now it depends on the language how it handles complex types like a struct/object, as the operation now bypasses the guards on the variable/constant assignment itself. Will it guard against mutation or not? It should, but that is rarely the case. Hence, in most languages, we will be able to do foo.value = 6 and also bar.value = 6, even though we should not. But again, now we are arguing about the mutability of the data type or memory representation, not the variable/constant itself.

Most languages don't care about mutability, so we end up with this flawed thinking where we are simply unable to strictly define what data is actually mutable and what data is not. Rust uses the borrow checker; that is one approach, but generally this should be handled properly by the language spec and compiler itself, and we should not even be having this conversation where programmers cannot make a distinction between variables and constants, let alone comprehend what those terms mean in the first place, since those meanings have been thrown out of the window by the folks designing the languages.
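A quick JavaScript illustration of the distinction being drawn (reassigning a const always throws, but the data it points to stays mutable):

  let foo = 5;
  const bar = 5;

  foo = 6;        // fine: the binding is a variable
  // bar = 6;     // TypeError: assignment to constant variable

  const box = { value: 5 };
  box.value = 6;  // allowed: const guards the binding, not the object it refers to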
I’m not sure why witx’s comment was downvoted. This is absolutely the standard usage in math and computer science.
Depends on how you look at it.
A true constant won't change between runs of the code. I.e. it is essentially a symbolic name for a literal.
A constant variable OTOH, varies in different executions of the code. So, its invariance is linked to an execution context.
What about the result of a computation within a giant algorithm that I decided to name, and don't plan on rewriting it anymore?
That's surely not a constant like PI, is it?
Constants are useful for reasoning about code, but anyone who focuses only on making everything immutable is missing the point. The main goal should be achieving referential transparency.

It can be perfectly fine to use mutable variables within a block, like a function, when absolutely needed - for example, in JavaScript's try/catch and switch statements that need to set a variable for later use. As long as these assignments are local to that block, the larger code remains side-effect free and still easy to reason about, refactor and maintain.
https://rockthejvm.com/articles/what-is-referential-transpar...
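As a rough JavaScript sketch of that point (the config-parsing scenario is made up for illustration): a let that never escapes the function keeps the function referentially transparent from the caller's point of view.

  // The local is reassigned inside the try/catch, but nothing outside can
  // observe that, so the function stays pure: same input, same output.
  function parseConfig(text) {
    let config;
    try {
      config = JSON.parse(text);
    } catch (e) {
      config = {}; // fall back to an empty config on bad input
    }
    return config;
  }

  parseConfig('{"port": 80}'); // { port: 80 }
  parseConfig('not json');     // {}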
I don't mean to start a holy war, that's not the point, but isn't this a side effect of C++ footguns rather than Python allowing you to be lazy?
I mean there is good reason to keep variables well scoped, and the various linters do a reasonable job about scope.
But I've only really known C++ people[1] to want everything to be a const.
[1] Yes, you functional people also, but, lets not get into that.
> I wish it was the default, and mutable was a keyword.
Well yeah, that's what sane languages that aren't Python, C, or C++ do. See F# and Rust.
Really should invent a new name if you don't want your variables to vary
Is there another programming language that comes close to Clojure’s persistent data structures?
If designing your hypothetical ideal language, what are some intuitive/cute keywords you would choose for mutables/immutables?
`let` is so 2020. `const` is too long. `static` makes me fall asleep at the keyboard. `con` sounds bad. How about
`law`?
    law pi = 3.142 (heh typing on Mac autocompleted this)
    law c = 299792458
    law law = "Dredd"
There was a proposal in Java a few years back to introduce "val".
I think it never gained traction but it would have been nice to have this in Java
    val firstName = "Bob";
    val lastName = "Tables";
    var fullName = "";
    fullName = firstName + " " + lastName;

That's an aesthetically awkward and also bug-prone syntax: a difference of just one letter (and a similar-looking one at that) to mean the completely opposite thing?? Nah, you don't want that, and I don't either.
Kotlin uses var/val too[0], which is what Java was looking to copy. I have never written any Kotlin code before, so I don't know whether this would be a problem in practice. On the plus side, var and val are the same length, so variable declarations line up properly, and the names are intuitive as far as I can tell. In theory, I'd probably be okay with it.
Not a problem in practice, as you use val in 99.99% of cases (which shows why immutability should be the default, because that's what is most often needed), and IDEA underlines any mutable references, so they stick out. It also suggests val when a var is not actually mutated.
They're intuitively named. A value is a value. A variable is a variable.
A variable is also a value.
Wouldn't this "val" be the same as "final"?
Also related, it annoys me that Java has final but otherwise poor/leaky support for immutability. You can mark something final, but most Java code (and a lot of the standard library) uses mutable objects, so the final does basically nothing... C++ "const" desperately needs to spread to other languages.
What is wrong with let?
Nothing, it's just been done, just trying to think of some better/newer ways to say it :)
Meh... I see where he's coming from (when working on large projects, mutability can introduce bugs), but many languages have solved this similarly, by having two declaration keywords like 'let' and 'var', or a 'const' in front of a variable.

But there is practicality in being able to change a var, and in not having to create a new object every time you change one of its members.
It models real nature/physics better.
It looks like he is asking that 'const' be the default and 'var' be explicit, which makes sense.
Hey Carmack, I heard a few hackers are working on a highly experimental language called Rust where everything is immutable by default and you have to declare mutable things 'mut'.
Eh, Rust is kind of "immutable by default", but it doesn't really enforce that plus "constants" in the way I think Carmack advocates for. Other languages do better in that regard. Examples: https://play.rust-lang.org/?version=stable&mode=debug&editio...
Yeah the encouragement of shadowing is a little weird (learning Rust coming from Go where it is sort of discouraged)
Discouraging shadowing in languages with unclear lifetimes makes plenty of sense.
Clippy offers lints for (three?) distinct kinds of shadowing, because in most cases it turns out that people who don't like shadowing only had a problem with one specific kind (e.g. same-type unrelated shadowing, or different-type same-value shadowing), and since that varies, why not offer to diagnose the specific problem this programmer doesn't like.
To be concrete some people are worried about things like:
  // shadowing with a different type:
  let sparrows = get_a_list_of_sparrows();
  // ....
  let sparrows = sparrows.len() + EXTRA_SPARROWS;

  // shadowing with the same type but an unrelated value:
  let sparrows = find_wild_sparrows();
  // ....
  let sparrows = find_farmed_sparrows();
I hope this experimental language becomes usable eventually; all I hear is how you have to continuously rewrite to work around the borrow checker, accept worse performance to make everything message-based, or just ignore the type system and write VB6-style code where everything is an index into an array to obfuscate ownership, and paste 'unsafe' when it moans.
Can we stop linking the racist website that twitter has become?
Kinda curious on what jblow would say about this.
blojo? Ask him and report back.
There are languages nobody uses, and languages people complain about. Computing is about change, otherwise there is nothing to compute. The mere fact that it's called a "variable" makes it obvious that it's supposed to change.
This is a viewpoint commonly held by students who were exposed to imperative programming before having any class in maths. However it shouldn't survive long after that.
Bjarne's excuse is very silly; it's like the Laffer curve but for programming-language defects. It pins down one edge case nobody cares about, then tries to imply that's proof of a claim no sane person could agree with and for which there is no evidence. Bjarne says languages with no users attract no complaints regardless of how terrible they are (nobody was arguing that they do); therefore, implies Bjarne, the fact that people complain about my language just means it is popular. Bzzt, wrong. They're complaining because it's so riddled with problems.
Variables are distinct from constants. It's a problem that C and C++ use the keyword "const" to signify immutability instead; indeed, as a result C++ needed three more keywords, "constexpr", "constinit" and "consteval", to try to grapple with the problem.
One area where I like to have immutability is function argument passing. In JavaScript (and many other languages), I find it weird that arguments to a function act differently depending on whether they are simple (strings, numbers) or complex (objects, arrays).

I want everything that passes through a function to be a copy unless I put in a symbol or keyword that says it's supposed to be passed by reference.
I made a little function to do deep copies but am still experimenting with it.
  function deepCopy(value) {
    if (typeof structuredClone === 'function') {
      try { return structuredClone(value); } catch (_) {}
    }
    try {
      return JSON.parse(JSON.stringify(value));
    } catch (_) {
      // Last fallback: return original (shallow)
      return value;
    }
  }

> I want everything that passes through a function to be a copy unless I put in a symbol or keyword that says it's supposed to be passed by reference.
JavaScript doesn't have references; it is clearer to reserve "passed by reference" terminology for code in a language that does have them, like C++ [0].
In JavaScript, if a mutable object is passed to a function, then the function can change the properties on the object, but it is always the same object. When an object is passed by reference, the function can replace the initial object with a completely different one, that isn’t possible in JS.
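A small sketch of that distinction (the function and property names are just illustrative):

  function tweak(obj) {
    obj.count = 99;       // visible to the caller: same object, property changed
    obj = { count: -1 };  // invisible to the caller: only rebinds the local parameter
  }

  const thing = { count: 1 };
  tweak(thing);
  console.log(thing.count); // 99, not -1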
Better is to distinguish between immutable values (numbers, strings in JS) and mutable ones. A mutable object can be made effectively immutable in JS using Object.freeze [1].
[0] https://en.wikipedia.org/wiki/Reference_(C%2B%2B)
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
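And a quick sketch of the Object.freeze route mentioned above; note that freezing is shallow, and that writes to frozen properties only throw in strict mode (outside it they fail silently):

  'use strict';

  const frozen = Object.freeze({ value: 5, nested: { n: 1 } });

  // frozen.value = 6;  // TypeError in strict mode, silently ignored otherwise
  frozen.nested.n = 2;  // still allowed: freeze is shallow, nested objects stay mutable
  console.log(Object.isFrozen(frozen)); // true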
Thinking on this subject a bit more, I have to wonder if C++ added references just so there would be a fast and efficient way to mutate without having to make a copy?

Maybe someone should work on a way to make references in JavaScript land. Sort of like Immer, but baked in.
I guess in javascript world, the phrasing I am looking for would be
I wish all arguments were copies unless I put some symbol that says: alright, go ahead and give me the original to mutate.
It seems like this way, you reduce side effects, and if you want the speed of just using the originals, you could still do that by using special notation.
Problem is that a "copy" of an object is not well defined. It could be a "shallow" copy or a "deep" copy. The only models where "copy" is well defined are simple "memory"/"value" models (like C) and immutable models (like Haskell).
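A small JavaScript illustration of why "copy" is ambiguous (structuredClone, the deep-copy path the snippet above tries first, is available in modern runtimes):

  const original = { name: 'a', tags: ['x'] };

  const shallow = { ...original };         // copies the top level only
  shallow.tags.push('y');                  // also mutates original.tags

  const deep = structuredClone(original);  // copies nested structures too
  deep.tags.push('z');                     // original.tags is untouched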
It is a similar idea to what Carmack is writing about. Go and Clojure do something similar to what I am talking about above, so I'm not sure of the motivation behind the downvoting of the comment.