• AdieuToLogic 4 days ago

    The short answer - F# and similar languages provide confidence in correctness due to being strongly typed.

    The long answer - https://bartoszmilewski.com/2014/10/28/category-theory-for-p...

    • lispisok 4 days ago

      Immutability by default contributes just as much if not more

      • snorremd 3 days ago

        This is the number one thing that made Clojure work for me despite being dynamically typed. Having confidence that values did not change under my feet after sending them into functions or across thread boundaries was so refreshing. In languages with immutable values, even if you're technically passing a reference to an immutable value type, you can at least practically treat it as pass by value.

        I never really got into F# or Haskell (more than some tutorials) so can't really comment on the type safety part.

        • funcDropShadow 3 days ago

          The value of static typing depends very much on the application domain. In a closed-world application domain where you can be reasonably sure that you know upfront all entities, their attributes, their valid ranges, etc, static typing is extremely valuable. That applies to domains like compilers, system software, embedded systems, and more.

          In open-world domains, like business information systems, static typing is often an obstacle to fast adaptation.

          Whereas immutability provides value in every domain, unless the performance requirement cannot be met.

          • bad_user 3 days ago

            I've heard that argument before, but it never really clicked, and I'm a former fan of dynamic typing.

            The application domain is not relevant because you rarely know the domain up-front. Even if the domain is fully known by business stakeholders, it's not known by the application developers, and those domains can be vast. Application development is a constant process of learning the domain and of extending the existing functionality.

            This is why all the talk about how LLMs are going to make it possible to replace programmers with people using prompts in English doesn't make much sense. Because the act of programming is primarily one of learning and translating requirements that are initially confusing and context dependent into a precise language. Programming is less about making the computer dance, and more about learning and clarifying requirements.

            Static typing helps with refactoring, A LOT!

            So when your understanding of the domain changes, YOU WANT static typing because you want to safely change already existing code. You want static typing precisely because it gives you “fast adaptation”.

            It's the same argument for why someone would pick Clojure over other dynamic languages. Clojure gives you some guarantees due to the pervasive use of immutability, such that it gives you a clearer view of the API's contract and how you can change it. But statically typed FP goes even further.

            I've been involved in projects using dynamic typing (PHP, Perl, Ruby, Python) and, without exception, the code became a mess due to the constant evolution. This is one reason why we preferred a microservices architecture: it forces you to think of clear boundaries between services, and then you can just throw away or rebuild services from scratch. Large monoliths are much more feasible in statically typed languages, due to the ability to refactor. And no, while unit testing is always required, IMO, it isn't the same thing.
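
            For example, a minimal F# sketch of the refactoring point (the types are made up): when the model changes, the compiler lists every place that has to change with it.

                // Adding a new case later, e.g. "| Voucher of code: string",
                // turns every non-exhaustive match below into a compiler warning
                // until it is handled.
                type PaymentMethod =
                    | Card of last4: string
                    | BankTransfer

                let describe payment =
                    match payment with
                    | Card last4 -> sprintf "card ending in %s" last4
                    | BankTransfer -> "bank transfer"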

            • randomdata 3 days ago

              Static typing exists on a gradient. An ounce of static typing is useful to help with refactoring, as you suggest, but the tradeoffs seem to quickly take over once you go beyond typing basics. Not even the static typing die-hards are willing to write line-of-business applications under a complete type system.

              • bad_user 3 days ago

                It's a spectrum, of course.

                I, for one, prefer more static typing, rather than less. I prefer Scala, OCaml, F#, or Rust. And I've seen some difficult refactorings accomplished in Scala due to its expressive type system, although I can understand why it can be a turnoff.

                The downside of having more static typing is a bigger learning curve, so you end up sacrificing horizontal scaling of software development (hiring juniors fast) for vertical scaling (doing more with fewer, more senior people).

                Another downside is an often slower compiler, which changes how you work. Once the code compiles, it may well be correct, but you end up doing less interactive development, working more in the abstract instead of interactively playing with the code. E.g., an equivalent of Python's `pdb.set_trace()` is rarely available in static languages. I've always found this difference between dynamic and static languages quite interesting.

                • randomdata 3 days ago

                  > I, for one, prefer more static typing, rather than less. I prefer Scala, OCaml, F#, or Rust.

                  Why, then, don't you prefer languages with more static typing? Scala, Ocaml, F#, and Rust are middle of the road at best.

                  It seems you're echoing that the pragmatic choice for a business application is to stick to typing basics (within some margin of what is considered basic).

                  • bad_user 3 days ago

                    I recognize there are diminishing returns, and also, while I don't want first-tier mainstream languages, at the very least I want second-tier mainstream languages :)

                    Haskell, for example, is harder to pick, and I wouldn't pick Idris even if I founded my own company.

                    • randomdata 3 days ago

                      [flagged]

                      • bad_user 3 days ago

                        Indeed, and the same answer was in my first reply to you; read my first sentence again.

                        And I was pointing out my preferences, thinking we may have an interesting discussion. It appears not.

                        Cheers mate,

                        • randomdata 3 days ago

                          [flagged]

              • funcDropShadow 3 days ago

                I am a former fan of extreme static typing, think Haskell higher-order type-classes fan. So, yes, I understand the value. But everything you build is very brittle in the face of changing requirements. When requirements change because developers are still learning the domain, that is fine; this churn is unavoidable. But if business people tell you that you have to pass some additional information through your system without doing anything to it, and you answer that you have to refactor all your type definitions, then you have some explaining to do.

                • consteval 3 days ago

                  > I have to refactor all my type definitions

                  The data model did change though. Adding an extra field, even if you don't use it, changes the shape of your data.

                  Ultimately your data is going to be typed with or without your approval. It's unavoidable, because eventually the data needs to be bits on a disk or on the wire. It's just a matter of how aware of it you want to be.

                  If you can reasonably keep the shape, and all possible shapes, in your head then fine. But I think you'll find this becomes less feasible as systems grow, and even less feasible in a corporate environment when teams come and go.
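
                  As a hypothetical F# sketch of that point: adding one pass-through field is a one-line change to the type, and the compiler then lists the construction sites that have to supply it.

                      type Order =
                          { Id: int
                            Total: decimal
                            // New pass-through field: code that only reads the existing
                            // fields keeps compiling, but every construction site must
                            // now supply a TraceId.
                            TraceId: string }

                      let summarize (o: Order) = sprintf "order %d: %M" o.Id o.Total

                      let example = { Id = 1; Total = 9.99m; TraceId = "abc-123" }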

                  • freilanzer 3 days ago

                    > But everything you build is very brittle in the face of changing requirement.

                    It's supposed to be 'brittle', in the sense that the compiler verifies your code and if the logic changes, the compiler complains. Everything else is a bug.

                    > But, if you have business people tell you, you have to pass some additional information through your system without doing anything to it, and you answer, I have to refactor all my type definitions, then you have some explaining to do.

                    And the explanation is that this is software development; there is no "without doing anything to it". If there's a new requirement, I need to adapt the code; take it or leave it, I'm not a wizard. And yes, I have already done that in some form and it was almost always received well. 'Business people' sometimes have no understanding of software development.

                    The logic "code changes -> only dynamic typing" isn't valid, in my opinion.

              • undefined 3 days ago
                [deleted]
                • iLemming 3 days ago

                  Apologies for being pedantic here, but Clojure too is a strongly typed, dynamically typed language. This means types are inherent to the values, not the variables that hold them, providing both safety and flexibility: even though variables don't have fixed types, every value in Clojure has a type.

                  In contrast, JavaScript is weakly typed. Yet ClojureScript, which compiles to JS, retains Clojure's strong typing principles even in the weakly typed JavaScript runtime. That, to a certain degree, provides benefits that even TypeScript cannot.

                  TypeScript's type checking is limited to the program's boundaries and often doesn't cover third-party libraries or runtime values. TypeScript relies on static type analysis and type inference to catch type errors during development. Once the code is compiled to JS, all type information is erased, and the JS engine treats all values as dynamic and untyped.

                  ClojureScript uses type inference, runtime checks, immutable data, and dispatch mechanisms, optimized by its compiler, to achieve strong typing for the code running in the JS engine.

                • high_na_euv 3 days ago

                  I feel like immutability is overrated as hell

                  A robust result type gives way more benefit.

                  • fire_lake 3 days ago

                    Immutability and types go hand in hand. Why? Well with immutability you can do more modelling of your domain as data and then the type system helps you manipulate this data only in valid ways. No type system (I am aware of) can model and check mutable domain models.
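
                    A small F# sketch of what modelling the domain as immutable data can look like (the domain here is made up):

                        type CardNumber = CardNumber of string

                        // Immutable data: the only way to reach a new state is to build a new value.
                        type PaymentState =
                            | Unpaid
                            | Paid of amount: decimal * card: CardNumber
                            | Refunded of amount: decimal

                        // Invalid manipulations, such as refunding an order that was never paid,
                        // simply have no representation.
                        let refund state =
                            match state with
                            | Paid (amount, _) -> Some (Refunded amount)
                            | Unpaid | Refunded _ -> None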

                    • AdieuToLogic 3 days ago

                      > I feel like immutability is overrated as hell

                      Logic defined in terms of immutability can trivially support "undo" and "replay" functionality, amongst other more interesting workflows such as event streaming.

                      Logic defined in terms of mutable collaborations cannot do so with the same ease.
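
                      For instance, a minimal sketch of that idea, assuming a simple event list (the names are illustrative):

                          type Counter = { Value: int }

                          // Because states are immutable values, replay is just a fold over the events.
                          let apply state delta = { Value = state.Value + delta }
                          let replay events = List.fold apply { Value = 0 } events

                          // Undo is dropping the last event and replaying the rest.
                          let undo events =
                              replay (List.truncate (max 0 (List.length events - 1)) events)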

                  • manusachi 3 days ago

                    *Statically typed ;)

                    • AdieuToLogic 3 days ago

                      Nice catch, thanks :-).

                      I was thinking of the benefit statically typed languages offer and not the difference between them and dynamically typed ones.

                    • two_handfuls 4 days ago

                      I feel that shortened it too much.

                      • AdieuToLogic 3 days ago

                        > I feel that shortened it too much.

                        No worries, here's a slightly longer description.

                        Languages such as F# have static type systems which can catch logic errors during compilation. The kinds of defects type systems can catch are often "low hanging fruit" issues such as logic errors and typos.

                        In many ways, programming languages with a sufficiently mature type system provide the same benefit as pair programming. When they are used in conjunction with mathematically sound constructs, such as immutability, Functors, Monads, Monoids, etc., these languages can be a productivity/correctness force multiplier.
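
                        A trivial example of the "low hanging fruit" kind of catch (the type here is illustrative):

                            // A single-case union turns a "stringly typed" value into its own type,
                            // so passing a raw string (or swapping two string arguments) no longer compiles.
                            type EmailAddress = EmailAddress of string

                            let send (EmailAddress address) (body: string) =
                                printfn "sending to %s: %s" address body

                            send (EmailAddress "a@example.com") "hi"
                            // send "a@example.com" "hi"   // rejected at compile time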

                      • OptionOfT 3 days ago

                        Also Option types which enforce explicit null / None checking.
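
                        For instance (a small sketch): the Option has to be unwrapped explicitly, so the missing case is visible in the code instead of surfacing as a runtime exception.

                            // List.tryFind returns an Option rather than null.
                            let firstEven numbers = List.tryFind (fun n -> n % 2 = 0) numbers

                            let describe numbers =
                                match firstEven numbers with
                                | Some n -> sprintf "first even number: %d" n
                                | None -> "no even numbers"   // omitting this branch is a compiler warning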

                        • bazoom42 3 days ago

                          Strict dependency order is independent of the type system, so strong typing is definitely not all of it.

                        • GiorgioG 4 days ago

                          Also because Microsoft spends relatively little on it and so it’s not changing at a fast pace.

                          • agarren 3 days ago

                            Records, tuples, pattern matching, immutability, error handling with Result<'a>, distaste for nulls: it's got everything that Microsoft is trying to shoehorn into C# today, without all the baggage that C# brings with it. It seems like F# (fortunately) doesn't have many reasons /to/ change.
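
                            For example, a rough sketch of the Result<'a> style (the validation rules are made up):

                                // Errors are ordinary values, not exceptions.
                                let parseAge (input: string) : Result<int, string> =
                                    match System.Int32.TryParse(input) with
                                    | true, age when age >= 0 -> Ok age
                                    | true, _ -> Error "age must be non-negative"
                                    | false, _ -> Error "not a number"

                                // Result.map / Result.bind compose the happy path.
                                let nextBirthday input = parseAge input |> Result.map (fun age -> age + 1)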

                            • GiorgioG 3 days ago

                              Agreed - the tooling and advocacy need changing. How many times has VS shipped with broken F# functionality?

                            • fluidwizard 3 days ago

                              Kudos to Don Syme for not letting the language get too bloated, btw. It's hard to imagine it is almost 20 years old.

                            • 6gvONxR4sf7o 3 days ago

                              Computation expressions [0] seem like a really cool language feature, but I've never used F#. Has anyone used them or seen them go well/poorly? Also curious what other languages' alternatives there are, besides just do notation.

                              [0] https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...

                              • fire_lake 3 days ago

                                They are used everywhere in F# codebases. I think the main complaint is that they are not as powerful as Haskell's (you don't get monad transformers).
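
                                For a taste, this is roughly what a minimal hand-rolled builder looks like (an illustrative sketch, not production code):

                                    // A tiny "maybe" computation expression: let! unwraps an Option,
                                    // and the whole block short-circuits on the first None.
                                    type MaybeBuilder() =
                                        member _.Bind(x, f) = Option.bind f x
                                        member _.Return(x) = Some x

                                    let maybe = MaybeBuilder()

                                    let addFirstTwo (xs: int list) =
                                        maybe {
                                            let! a = List.tryItem 0 xs
                                            let! b = List.tryItem 1 xs
                                            return a + b
                                        }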

                              • AndriyKunitsyn 4 days ago

                                >Strict dependency order

                                >In F#, all variables, functions, types and files can only depend on variables, functions, types and files defined earlier. The benefits of this are the fact that a circular dependency is not possible by default and extra clarity with “what depends on what”, which helps during code analysis and PR reviews

                                Ehm. So it's like C... with no forward declarations?

                                • p4bl0 3 days ago

                                  No, it's not the same thing as in C. What they're trying to explain, but phrase quite badly, is that F# (or OCaml, on which it is based) is lexically scoped: values of non-local variables are captured in definitions (so they must exist at that time, but also, more importantly, they cannot be changed later, because of immutability by default).

                                  In C you can have a function using a global variable, and a change to this global variable will affect the function's behavior.

                                  • ReleaseCandidat 3 days ago

                                    No, it's not just about lexical scoping, but actually about having to have every definition before its usage (yes, some escape hatches exist, like "namespace rec" and "module rec"), which is an implementation detail of the compiler. So you have to order your files correctly when compiling. This did break sometimes, because MS' build system/project files don't care about order, since C# doesn't care.
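
                                    Concretely, a tiny illustration of the define-before-use rule:

                                        // This does not compile, because g is used before it is defined:
                                        let f () = g ()
                                        let g () = 42

                                        // Swapping the two definitions, or opting into "let rec ... and ...",
                                        // "module rec", or "namespace rec", makes it compile. The same rule
                                        // applies across files via their compilation order.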

                                    • p4bl0 3 days ago

                                      I see. Thanks for correcting me (I'm quite familiar with OCaml but never used F#, so this subtle difference between the two eluded me). I wouldn't have imagined this being put forward as an advantage in terms of robustness and reliability, unlike lexical scoping and functions actually being closures.

                                    • pjtr 3 days ago

                                      This seems fine within a file, but isn't this problematic across files? Now the file order is significant, so not only is the Visual Studio XML project file an essential part of the language semantics, you also can't organize your files in subdirectories freely? Or did they fix that at some point? How does that scale to larger projects?

                                      • neonsunset 3 days ago

                                        The more expressive nature of F# means you don't have that many files. C# is luckily and finally moving in that direction too. There was no reason for the "one file per class" policy anyway, but it was still widely adopted historically.

                                        Here's an example of a worst-case scenario (GUI frameworks and their extensions have a notoriously huge amount of code): https://github.com/fsprojects/Avalonia.FuncUI/blob/master/sr...

                                        But realistically an average project would look closer to this instead: https://github.com/DiffSharp/DiffSharp/blob/dev/src/DiffShar...

                                        Once you have enough files, it might be a good idea to factor out separate concerns into different projects.

                                        • fire_lake 3 days ago

                                          You can organize the files however you like, but you must list them in the correct order.

                                          The XML is not part of the language. You could invoke FSC manually (again, with the files listed in the correct order).

                                          It scales very well IME.

                                          Would C# be better with circular library dependencies?

                                      • veber-alex 3 days ago

                                        I don't understand how this is called an advantage.

                                        Dealing with cycles in Python is a total pain, full of cryptic error messages, while in Rust cycles are fine: all top-level items "come to exist" at the same time, so they can depend on each other without any issues, and it makes refactoring a breeze.

                                        Sounds like a compiler limitation touted as a feature.

                                      • moi2388 3 days ago

                                        F# is wonderful. And every iteration of C# I see it go more and more in F#’s direction and this makes me incredibly happy.

                                        • DeathArrow 3 days ago

                                          They have been discussing bringing discriminated unions to C# for at least 10 years. It still hasn't happened, and they are still discussing it and forming committees that hold talks about how it should be implemented.

                                      • p4bl0 4 days ago

                                        Why are algebraic data types called "discriminated unions" here? Is there a subtle difference between the two or is it just that F# calls the same feature differently than OCaml?

                                        • Jtsummers 3 days ago

                                          Discriminated unions are one kind of algebraic data type (sum and product types being the most common kinds). The section discussing them is specifically calling out a feature related to discriminated unions (sum types) in F#, and is not about algebraic data types broadly.
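
                                          In F# syntax the distinction looks like this (illustrative types):

                                              // A product type: a value has an Id AND a Name.
                                              type User = { Id: int; Name: string }

                                              // A sum type, which F# calls a discriminated union:
                                              // a value is EITHER LoggedIn with a User OR Anonymous.
                                              type Session =
                                                  | LoggedIn of User
                                                  | Anonymous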

                                        • chenzhekl 4 days ago

                                          Microsoft explained why we love Rust so much ;)

                                          • DeathArrow 3 days ago

                                            Except F# is far more pleasant to use than Rust.

                                            • akkad33 3 days ago

                                              Is it? There is a lot of uphill battle in using DotNet libraries that are mainly developed with C# in mind. So all the nice things about F# like currying, no nulls, etc. go out the door because you're interacting with C# code. Rust is a first-class citizen in its own ecosystem, so you never have to compromise on the strictness of your codebase.

                                              • neonsunset 3 days ago

                                                In practical terms it's mostly source-generator-based functionality and nullability that are an issue. F# 9 gains the ability to transparently understand nullable annotations emitted by Roslyn for C# and expresses them as `T | null` unions, or just as T if T is non-nullable. Currying and other F#-specific features are simply syntax; there is no inherent limitation to defining a simple curried expression that ends up calling into a method implemented in a different .NET language.
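
                                                For instance, a curried, pipeline-friendly wrapper over an ordinary BCL instance method is a one-liner (an illustrative sketch):

                                                    // String.Replace is a normal .NET method defined in C#; the wrapper
                                                    // gives a curried function that works with |> and partial application.
                                                    let replace (oldValue: string) (newValue: string) (s: string) =
                                                        s.Replace(oldValue, newValue)

                                                    let dashesToSpaces = replace "-" " "   // partial application
                                                    let cleaned = "f-sharp-rocks" |> dashesToSpaces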

                                                The bigger issues are non-technical: very few companies use F# in production. As for the technical ones, there are not that many and most of them are known (as mentioned above, code-generation related, so the criticism usually cites the same limited set every time, like code-first model definition in EF Core).

                                          • foretop_yardarm 3 days ago

                                            If you’re a python user, F# is an excellent next step to making your code safer, easier to maintain and more performant.

                                            • akkad33 3 days ago

                                              The functional syntax puts people off and the DotNet libraries are an uphill battle

                                              • aksss 3 days ago

                                                Anything new or different is an uphill battle, but I have a hard time believing that anyone who has spent even the slightest amount of time with pip and conda, getting their python execution house of cards built.. just.. so.. is going to struggle with .net libraries and packages.

                                                Edit: or do you mean the comparative lack of libraries?

                                                • akkad33 a day ago

                                                  Yes, the comparative lack of libraries and also the roughness of existing ones. I've only toyed a bit with F#, but getting libraries to work that were designed for C# was a bit of a struggle, unlike Python, where the resistance to getting something to work the first time is low. The inconsistencies would turn up, but usually much later.

                                                  • aksss 2 days ago
                                              • xiaodai 4 days ago

                                                Haskell, OCaml, Scala

                                                Anything else?

                                                • chx 4 days ago

                                                  Erlang, Elixir.

                                                  • 22c 3 days ago

                                                    Julia?

                                                    • Joel_Mckay 3 days ago

                                                      My favorite feature reduced to a single character:

                                                      https://www.geeksforgeeks.org/broadcasting-across-arrays-in-...

                                                      I also like the constraint programming support, combined with the performance of implicit parallelism:

                                                          sudo apt-get install minizinc-ide minizinc libgecode-dev
                                                          curl -fsSL https://install.julialang.org | sh -s -- --default-channel=lts --add-to-path=yes --startup-selfupdate=3600

                                                      Then, in the Julia REPL:

                                                          import Pkg
                                                          Pkg.add("CUDA")
                                                          Pkg.add("MiniZinc")
                                                          using CUDA
                                                          CUDA.versioninfo()

                                                      Enjoy the fun =3

                                                      • bdjsiqoocwk 3 days ago

                                                        I love Julia, but what is your example trying to illustrate?

                                                        • Joel_Mckay 3 days ago

                                                          Just highlighting my favorite tools other people may find useful.

                                                          Constraint programming is not Julia specific =)

                                                  • genter 4 days ago

                                                    Rust, ReasonML

                                                    • chx 4 days ago

                                                      Rust doesn't apply. You need a functional language for the kind of robustness we are talking about. Rust got some influences from there but it's not one.

                                                      • __s 4 days ago

                                                        What are "we" talking about?

                                                        - Immutability by default. Check

                                                        - Discriminated unions with exhaustive check. Check

                                                        - No nulls by default. Check

                                                        - No exceptions in the business logic. Check

                                                        - Strict dependency order. Rust doesn't have this

                                                        - Warnings on unused expression results. Check

                                                        - Typed primitives. Given the level of ergonomics this is implemented with in F#, I'll say Rust doesn't have this (see the units-of-measure sketch after this list)

                                                        - Explicit conversions. Check

                                                        - Functional approach to concurrency. I think Rust's compile time safety against data races gives this a check. I've used channels for concurrent server processes, it's nice. Check

                                                        - Explicit dependency injection. I've never understood what this means

                                                        7/10
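
                                                        Regarding typed primitives: the F# ergonomics referred to above come from units of measure. A minimal sketch:

                                                            [<Measure>] type usd
                                                            [<Measure>] type eur

                                                            let price = 10.0<usd>
                                                            let fee = 2.0<eur>

                                                            let total = price + 0.5<usd>   // fine: same unit
                                                            // let bad = price + fee       // compile error: usd and eur don't mix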

                                                        • Horusiath 4 days ago

                                                          Most of these points are related to a strict type system. If that were what defined functional programming, then Lisp wouldn't be a functional programming language.

                                                          IMO the first and foremost principle of functional programming languages is that they are optimised around building programs in terms of function composition. And anyone who has had to work with the borrow checker and closures for 5 seconds knows that this is not the case for Rust.

                                                          • frogulis 4 days ago

                                                            I think you've taken it backwards. The comment you were replying to is listing features that lead to robustness (many of which appear in strongly-typed functional languages in the ML family), not essential aspects of functional programming languages.

                                                            • __s 3 days ago

                                                              Indeed. I was listing the specific items the article (titled 'Why is F# code so robust and reliable?') lists. Functional programming languages only came up when someone rejected the suggestion of Rust as a language that could be considered robust/reliable in the context of the article's reasons for F# being robust/reliable.

                                                            • kazinator 3 days ago

                                                              Lisp is a family of languages, most of which are not functional.

                                                              • undefined 3 days ago
                                                                [deleted]
                                                              • Jtsummers 4 days ago

                                                                > - Explicit dependency injection. I've never understood what this means

                                                                The article seems to mean what's described in this other article as "dependency parameterization" where the dependency is explicitly passed to the function (and every function it calls that also needs that same dependency). This is as opposed to, in OO languages, setting the dependency (typically) during construction (however the object is constructed). Or it's otherwise set in some larger scope than the functions which make use of it (global, module, object instance, whatever is appropriate to the language and task).

                                                                https://fsharpforfunandprofit.com/posts/dependencies/

                                                                https://fsharpforfunandprofit.com/posts/dependencies-2/
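
                                                                A rough sketch of what that parameterization looks like in practice (the function names here are made up, not taken from those articles):

                                                                    // The function takes the capability it needs as an ordinary argument
                                                                    // instead of reaching for a global or an injected field.
                                                                    let greetUser (loadUserName: int -> string) (userId: int) =
                                                                        sprintf "Hello, %s!" (loadUserName userId)

                                                                    // Production code passes the real thing; a test passes a stub.
                                                                    let fromDb (id: int) = sprintf "user-%d" id   // stand-in for a DB lookup
                                                                    let greeting = greetUser fromDb 42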

                                                                • pjtr 3 days ago

                                                                  Is "let-over-lambda" (variable capture / closure) not possible in F#? Is that not a form of implicit dependency injection?

                                                                  • Zerot 3 days ago

                                                                    Closures are possible, yeah. But F# also has partial application (and currying), so you don't need to use a closure to do this.

                                                                    • fire_lake 3 days ago

                                                                      Yes it’s possible.

                                                          • brikym 3 days ago

                                                            > No nulls by default ...While many modern languages added some control for doing more null checks to prevent NullReferenceException, F# avoided them from the inception. This means such exceptions are nearly impossible

                                                            A great feature that Golang couldn't get right years later.

                                                            • topspin 3 days ago

                                                              The fact that golang was designed with the "billion dollar mistake" as recently as it was is pretty astonishing.

                                                              • diarrhea 3 days ago

                                                                Yes. The concept of references (which includes that of null pointers) and that of optionality (which includes that of nothingness) are orthogonal. The former is just an artefact of computer architecture; the latter is everyday business logic. Go mixes the two into a single concept, which is painful.

                                                                As a newcomer to Go, I find myself struggling to express business logic all the time. Zero-valued types make the situation even worse. You’re constantly constructing invalid values of types (nil/zero-valued), by design. Go is “so easy”, but when you inevitably run into one of the footguns it’s “yeah just don’t do that”.

                                                                One of the primary reasons for people to dislike Python is its dynamic typing. When Go came out, that was totally fair. But since then, Python has evolved and improved massively. It/mypy now supports type-safe structural pattern matching, for example. It’s very expressive, and safely so.

                                                                Meanwhile, Go has barely evolved. Generics landed just recently. They're only now experimenting with iteration, lifting it from a purely magic, compiler-intrinsic concept. And still no enums, of course. The "type system" is structs, or very leaky type wrappers (nowhere near the safety of Rust newtypes, for example). People are obsessed with primitives.

                                                                I can see the appeal of a simple, stable platform, but Go really ran too far with that idea.

                                                                • chx 3 days ago

                                                                  Wait, can you even run into a null pointer reference error in Go?

                                                                  • topspin 3 days ago

                                                                    You can do this much:

                                                                        func main() {
                                                                           var p *int
                                                                           *p = 1
                                                                        }
                                                                    
                                                                        panic: runtime error: invalid memory address or nil pointer dereference
                                                                        [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x466462]
                                                                    
                                                                    That's memory safe. It's also common and terrible. As is Java's NullPointerException, C#'s NullReferenceException, Javascript's "is not a non-null object" and all the other, unnecessary and awful, yet memory safe manifestations of the same design mistake plaguing contemporary software.

                                                                    Understand that when Hoare coined the phrase "billion-dollar mistake," he was referring to his addition of null references to ALGOL W. ALGOL W references are not C pointers that allow willy-nilly pointer arithmetic and all the UB that comes with that. An explicit design goal of ALGOL W was to prevent unsafe dereferences. A dereference of a null reference in ALGOL W is defined behavior and yields a runtime error.

                                                                        NULL OR UNDEFINED REFERENCE
                                                                            An attempt has been made to access a record field using a null or
                                                                            never initialized reference.
                                                                    
                                                                    That said, ALGOL W isn't memory safe either: it attempts to mitigate memory violations, but this is not comprehensive.
                                                                    • neonsunset 3 days ago

                                                                      If you add <WarningsAsErrors>nullable</WarningsAsErrors>, the below will not compile:

                                                                        var bytes = (byte[]?)null;
                                                                        Console.WriteLine(bytes.Length);
                                                                      
                                                                      That and, well, all variables must be assigned before use. You'd be right to point out it's a band-aid, a convenient one but still.
                                                                      • topspin 19 hours ago

                                                                        As you know, that has no benefit for any dependencies you have in play.

                                                                        • neonsunset 6 hours ago

                                                                          What do you mean?

                                                                          • topspin 36 minutes ago

                                                                            Making these warnings into errors applies to the code you're writing. That's great: your code won't dereference nulls for the most part. However, your code nearly always relies on dependencies. Those dependencies can still dereference nulls, and you can still encounter exceptions.

                                                                            • neonsunset 24 minutes ago

                                                                              This is true. This is also rare in modern code, and impossible with the standard library. Other libraries, too, are expected to adhere to nullability, which is the default whenever you create a new project, and many use the same WarningsAsErrors: nullable option on top of Nullable: enable set everywhere.

                                                                              At the end of the day, scenarios like a library spawning a goroutine that dereferences a nil where it doesn't expect one, hitting an unhandled panic, and crashing the application are practically impossible in .NET. It is not necessarily watertight, especially around JSON serialization, but it certainly evokes a "can't believe you guys still struggle with nulls/nils" kind of reaction.

                                                                  • brikym 3 days ago

                                                                    Yes! And people will defend this awful language!

                                                                    • topspin 3 days ago

                                                                      I don't go that far. Mistakes were made, but golang has great properties as well.

                                                                      I'm just amazed that, given that golang's inception is relatively recent, somehow this old lesson had not been learned by the designers. Accomplished and learned people created this language. How did that happen?

                                                                      • veber-alex 3 days ago

                                                                        Go's design is laser focused on fast compile times at the expense of everything else.

                                                                        • topspin 2 days ago

                                                                          While it's true that fast compilation was probably the primary aspiration of golang's design, it wasn't the only concern, and there is nothing about eliminating nullability that would have compromised compile time. So I'm left to imagine that they just didn't know.

                                                                          • fire_lake 2 days ago

                                                                            Which is crazy. Even at Google scale the bottleneck to delivery is human reasoning about code changes, not so much build times.

                                                                      • agarren 3 days ago

                                                                        I agree, but it doesn't change the fact in my mind that despite golang's nulls, I've found it to be a ridiculously productive language to work with in a lot of cases. I'd credit that to the simplicity of the language (if not the runtime), and, at least from that perspective, it's something Golang shares with F#. F# has the obvious advantage of a significantly better type system and the disadvantage of not sharing the more familiar Algol/C syntax.