Ada is a criminally underrated tool that is unfortunately probably doomed to perpetually take the backseat to Rust, despite Rust not solving all the problems Ada does. It's really sad that so many people's idea of safe programming is limited pretty strictly to memory safety, and that because Ada's baseline memory safety (SPARK is a different story) isn't as robust as Rust's borrow checker (in the sense that it has no borrow checker at all, favoring instead the avoidance of dynamic allocation whenever possible), they write it off as a relic of the past.
Ada's type system, SPARK, general clarity on behavior, etc. allow you to structure programs in a manner that makes it hard to Hold It Wrong, especially when dealing with embedded firmware that has all sorts of hardware gotchas. I haven't gotten the chance to use the tasking primitives in anger yet, but I have a strong suspicion that they're going to bail my ass out of a lot of problems just like the rest of the language has.
My team started at a new employer and made the jump from C to Ada for MCU firmware. We control things that spin real fast and our previous experiences with C definitely resulted in some screwups that left you weak in the knees for a bit. There was some initial hesitation but nobody has any interest in going back now. Rust was floated but we're all glad we didn't opt for it -- memory safety on a system that never allocates memory doesn't hold a candle to Ada's bitfield mapping/representation clauses, ranged types, decimal types, reference manual, formal verification options, concern from the powers that be about providing a stable and trustworthy environment for doing shit that you really don't want to get wrong, etc.
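To give a flavor of the ranged and decimal types in question, here's a minimal sketch (the names and bounds are made up, not our actual firmware):

    procedure Sketch is
       --  Integer type with hard bounds: statically out-of-range values
       --  are rejected, dynamic ones raise Constraint_Error.
       type RPM is range 0 .. 30_000;
       --  Decimal fixed point: exact steps of 0.01, no float rounding.
       type Duty_Cycle is delta 0.01 digits 5 range 0.0 .. 100.0;

       Commanded : RPM        := 15_000;
       Throttle  : Duty_Cycle := 42.50;
    begin
       Commanded := RPM'Last;  --  OK; assigning 40_000 would be caught
    end Sketch;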
Ada is a lot of fun and a great thing, which is ruined (and blessed) by the fact that there's de facto only one implementation and company behind it out in the open, and that it is semi-closed / a licensing PITA. There were improvements over the years by AdaCore, but I think this altogether hurt the adoption of such a great language in general - no other wide open implementation (like Rust has). If you want to see an extreme example of such hurt, take a look at Allegro CL and Franz; imagine having that out in the open and what it'd do for CL. But at least CL has great alternatives in the open like SBCL, whereas Ada doesn't.
Not at all, there are 7 Ada compiler vendors still in business.
https://www.ghs.com/products/ada_optimizing_compilers.html
https://www.ptc.com/en/products/developer-tools/apexada
https://www.ddci.com/products_score/
http://www.irvine.com/tech.html
http://www.ocsystems.com/w/index.php/OCS:PowerAda
http://www.rrsoftware.com/html/prodinf/janus95/j-ada95.htm
And AdaCore sponsors GNAT, with Ada being one of the few official GCC languages for two decades now.
> out in the open
This part was an important part of the sentence you might've missed.
The original point is that there is only one open implementation. This is a link to that one open implementation. You are pushing this conversation around in circles.
How many open implementations are there for plenty of other languages - the main implementations, driving the whole ecosystem, not partial implementations with ifs and buts?
How many open implementations are there of Rust?
(Serious question; I genuinely don't know. Always assumed it was about 1, though.)
Only 1; additionally there are two competing ones based on GCC that are still WIP, and an alternative backend (Cranelift) without the performance capabilities LLVM can offer.
Do they all support the latest standards?
A couple of them do; it isn't as if the C and C++ FOSS compilers fully do, either.
C++23 and C23 are the latest, and there are plenty of missing features from previous standards.
It's FOSS and is actually included with GCC, but the toolchain is still a PITA to install just because no one (other than Debian and Arch Linux) bothers packaging it. I think Alire is supposed to make it easy to install but I haven't used it much: https://alire.ada.dev/
SPARK 2014 itself is the same too, AFAIK; the problem is there's a lot of auxiliary static analysis tools and plugins gated behind AdaCore's sales wall (and of course they'd never deign to sell licenses affordable to individuals).
The situation was pretty bad a few years ago, and the licensing was confusing, but it's pretty straightforward now with Alire. (AdaCore also got rid of their "Community Edition", which had the weird license restrictions.)
I'm doing a presentation at FOSDEM next year called "Get started with Ada in 2 minutes or less!"[1], because (on macOS and Linux) I can go from not having a toolchain at all to compiling hello world in under 2 minutes (I've timed it).
Here are the steps:
1. open your terminal, run the following command: curl --proto '=https' -sSf https://www.getada.dev/init.sh | sh
Congratulations, you now have Alire!
2. Run the following command: alr init --bin hello
Now you have a fully structured Ada project, gpr files and all!
3. Edit your program in hello/src/hello.adb
You can use VSCode to open the hello folder with the Ada Language Server, or just run "alr edit" to open your editor with all of the files built in.
4. Compile with: alr build
Alire will automatically grab the latest native toolchain (gnat, gprbuild, etc) and compile the program.
If you want to use another toolchain, such as for cross-compiling, or another version of GNAT, simply run: alr toolchain --select
5. Run the program: hello/bin/hello
On Windows, you can just download the Windows installer; Alire is also on FreshPorts for the BSDs.
[1] https://fosdem.org/2025/schedule/event/fosdem-2025-5056-get-...
Your step 1 violated my very hard rule against piping curl into a shell.
…and is ironic advice for a safety-critical system, no?
——-
(Yes, I’ve read arp242.net/curl-to-sh.html - but my point being that as I’d be new to Ada then I don’t know who to trust; I’ve never heard of getada.dev therefore I don’t trust it)
It's an understandable rule, and you can definitely just download the binary from alire.ada.dev and add it to $PATH. But with such a harsh stigma around how difficult it is to get the toolchain set up, GetAda follows the precedent of Rust, where you can grab the toolchain via "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh" (thanks to Rustup).
It was received positively at the time:
Show HN: Getada: rustup-like installer for Ada's toolchain/package manager 194 points | 115 comments: https://news.ycombinator.com/item?id=40132373
Is it really so bad to run it twice, with cat/bat replaced with sh on the second pass? If you're really paranoid about it you could save the first run to a file, so you know for certain that the server didn't do a switcheroo for the second one, but if you think about it, that's a low-payout move for Bad Guy #N compared to just sending the pwnage the first time around.
This is beside the fact that we're talking about downloading and running a user-owned binary, which, unlike the shell script, is impractical to inspect in any detail, and has the same privileges as the shell script we're supposed to worry about.
I view "don't curl to shell" as about 90% theatrics basically. Sure, read it first, I do in fact do that. But it's a silly 'very hard rule'.
I routinely save to a file before execution, but it's not really about being attacked. It's about the fact that those scripts, in order to be "fire and forget", make many assumptions, often very large ones, about where I want things or under what user, etc.
Many of them are actually quite well-written under the hood and can be easily moved to other directories, have comments about what it is doing, etc.
Here are the Rust installation instructions, which apparently are sooo much better:
https://www.rust-lang.org/learn/get-started
> The primary way that folks install Rust is through a tool called Rustup, which is a Rust installer and version management tool.
A couple of lines below
> curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
I'm pretty sure that Gentoo also packages Ada, but it doesn't get installed by default, as it's behind a USE flag.
As far as I'm aware, the compiler AdaCore sells is just GCC. You can install GCC built with Ada support from your distros package manager and it will just work. You can also download builds from here: https://github.com/alire-project/GNAT-FSF-builds
> As far as I'm aware, the compiler AdaCore sells is just GCC.
The compiler yes, but I'm convinced FOSS gnatprove must be outdated in some way: Last time I tried following AdaCore's SPARK manuals, certain basic aspects and pragmas didn't work correctly on the latest version.
Not to mention when SPARK aspects sometimes broke the LSP and formatter.
If things haven't changed, FSF builds are a year behind the libre version (by design), and libre version is GPL3 cancer which is not suitable for commercial development. You're then stuck either with a year-old version or buying into AdaCore's Pro version of it. Not great, not terrible... but that's kind of the only game out in the open, which is what makes it different from most other languages out there.
GNAT CE isn't a thing anymore, only FSF and Pro exist. And AdaCore now sponsors Alire, which installs FSF GNAT, and relicensed some of their tools more permissively.
Haven't looked in a while. That's great news then! Rust steals a lot of thunder these days though.
> and libre version is GPL3 cancer which is not suitable for commercial development.
YTF would GPL v. 3 be "cancer" for commercial development? (Shades of Ballmer; is Linux also "a cancer" to you?)
EULAs have all kinds of stupid shit in them already. Just make yours GPL3-compatible.
Why would the license of the compiler matter? It doesn't require you to license the code you compile with it accordingly.
runtime matters
Why? The code you compile isn't infected by the runtime license?
???
Yeah, to this day Ada, though a beautiful language, is a hard choice to make if you are concerned about certain targets. How would I get my program to compile for the Android NDK? Even Swift has a better story with its tooling.
That's before we even talk about important stuff like libraries.
Unless you are using the Android NDK with the officially provided clang for C and C++, anything else is a pain for masochists who would rather endure yak-shaving the whole infrastructure than code in either C or C++ alongside either Java or Kotlin.
What are the options required to build an android gcc these days?
None, since Google dropped GCC support; it's up to anyone willing to do the work to make it work again.
> no other wide open implementation (like Rust has)
So (at least according to pjmlp in https://news.ycombinator.com/item?id=42548360) Rust also only has a single "wide open" implementation.
Which means Rust doesn't have any "other" open implementation either, right?
Honestly, it's hard to know what of all the pro-Rust stuff one sees (here and elsewhere on-line) to take seriously, when its advocates constantly -- consciously or not -- exaggerate its virtues like this.
The popularity of a programming language is not always about what the language offers. I would say a comprehensive, well-documented, mature set of standard libraries for its target audience is far more important (notable examples being R, Python, and Go). Last time I checked, Ada doesn't even have a de facto, high-quality TLS/crypto library, let alone various essential protocol/format codecs, yet the core team (AdaCore I assume) puts a lot of resources into offering a few sophisticated flagship IDEs that potential hobbyists would never use (they already have vim, emacs, or vscode). I understand that as a business they have to sell something for revenue, and they cannot sell standard libraries. So that's probably the dilemma: we can't have the nice things Ada needs to take off.
There's some thick bindings to libtls that coincidentally happen to be written by the author of the article. There's also some OpenSSL bindings in Dmitry Kazakov's Simple Components and some in Ada Web Server by AdaCore, although they're pretty minimal.
I think most applications of Ada are in embedded systems where you don't often want anything not in the standard library.
> I think most applications of Ada are in embedded systems...
Ada is heavily used in, and carries a historical influence over, not only the embedded software space but also the hardware space: VHDL is one of the two major hardware description languages used in ASIC design and FPGA implementations. (The other language is Verilog, based on - you guessed it - C, because of its popularity.)
"Due to the Department of Defense requiring as much of the syntax as possible to be based on Ada, in order to avoid re-inventing concepts that had already been thoroughly tested in the development of Ada, VHDL borrows heavily from the Ada programming language in both concept and syntax." - https://en.wikipedia.org/wiki/VHDL#History
And databases on the software space, PL/SQL is heavily influenced by Ada, and pg/SQL is influenced by PL/SQL.
Also a package manager - Rust's is excellent and a huge reason to use it over C/C++. I see Ada has Alire but that seems like a fairly recent development and I don't know how it compares.
Very few of which are needed or even wanted for lower-level MCU development. I assume when the GP was talking about spinning things they were talking about ESC software.
For what it's worth, many Rust developers (including myself) are also Ada fans.
Note that ranged types, decimal types, etc. can fairly easily be emulated in Rust, with what I find is a clearer error mechanism.
SPARK is, of course, extremely cool :) There are several ways to work with theorem provers and/or model checkers in Rust, but nothing as nicely integrated as SPARK so far.
I would quibble with the "fairly easily" part. It will likely become possible to make them as ergonomic as the Ada variety if Rust's const generic and constant evaluation facilities are extended far enough, but this would also open up the can of worms of essentially giving Rust the full capabilities of a dependently-typed language (at least in its compile-time subset), which Rust's dev community may not necessarily be OK with.
I'll grant you that Rust is not nearly as ergonomic as Ada in this domain, but doing it manually is fairly easy. Turning it into a library is a bit more complicated - these days, I'd do it with macros. Of course, making sure that the compiler knows about it for optimization purposes would require lots of const generic.
The real benefit of Ada's typing is that it is so easy to utilise, often preventing logic errors.
And this is definitely a strong benefit.
The benefit of Rust's typing is that (in the absence of `unsafe` or bugs in the compiler or stdlib), it's a simple theorem prover. Much less powerful than the theorem provers you can use with SPARK, but it's a start :)
Steve Klabnik (of Rust fame) wrote a (very generous IMO) article about Ada, interesting comparison: https://steveklabnik.com/writing/learning-ada/
It suffered from high compiler prices when it had an opportunity, plus Modula-2 and Object Pascal being good enough for those that cared about safety on 16-bit home computers.
It also didn't help that with the UNIX vendors that had Ada compilers, like Sun, it was an additional purchase on top of the development SKU that already had C and C++ included.
I never even knew that this "Ancient Language" had dependent types. Always thought it was a "modern" invention of snazzy newer academic languages like Idris, etc.
But it's easy to figure out why it didn't become popular. C, C++, and every other top-10 language all had free compilers available to everyone; Ada didn't during the explosive era of computing expansion. Also, not a problem nowadays with IDE auto-complete/snippets, but the language was too verbose for the older generation of programmers.
Verbosity was genuinely expensive at the time, in two ways. First: until the mid-80s, 5 1/4" floppies held between 100 and 250kB depending on format, so a program which used up three times as many bytes (I think that's a good multiplier from C to Ada) made a meaningful difference for transfer, backups, and storage.
What's probably more important is that 80 columns was far and away the likely maximum for a screen, and 40 columns wasn't unheard of. The word PROCEDURE took up 11 to 22% of the column width! This wasn't a show-stopper; Pascal uses a similar syntax (both of them derived from Algol, of course) and was pretty popular, but plenty of people complained about Pascal's verbosity as well, and Ada is definitely more verbose than even Pascal.
The lack of autocomplete (even things like snippets were relatively uncommon) didn't help, but mainly, verbosity imposed real costs which are mitigated or irrelevant now.
You talk about C and C++ yet call Ada "ancient": C is from 1969 and C++ from 1979.
Whereas Ada's first version is from 1980, with the first standardised version (different from the 1980 one) in 1983. Yeah, "ancient."
I'm a bit disappointed that we've ended up with Rust in the kernel but not Ada. The kernel relies on GCC and there's already an Ada compiler in GCC, so it wouldn't require adding another compiler as a build requirement like Rust does.
There are a couple of major advantages that Ada could have over Rust in Linux for safe drivers:
1. Having the option to use SPARK for fully verified code provides a superset of the correctness guarantees provided by Rust. As mentioned in the parent comment, Rust focuses purely on memory safety whereas SPARK can provide partial or complete verification of functional correctness. Even without writing any contracts, just preventing things like arithmetic overflows is incredibly useful.
2. Representation clauses almost entirely eliminate the need for the error-prone manual (de-)serialisation that's littered all over Linux driver code: https://www.adaic.org/resources/add_content/standards/22rm/h...
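To give a sense of (2), here's what a hypothetical status-register mapping looks like (the field names and layout are invented for illustration):

    package HW is
       type Status_Register is record
          Enabled  : Boolean;
          Error    : Boolean;
          Channel  : Natural range 0 .. 15;
          Reserved : Natural range 0 .. 1023;
       end record;

       --  Pin every field to exact bit positions; the compiler generates
       --  the shifts and masks, and checks that each field actually fits.
       for Status_Register use record
          Enabled  at 0 range 0 .. 0;
          Error    at 0 range 1 .. 1;
          Channel  at 0 range 2 .. 5;
          Reserved at 0 range 6 .. 15;
       end record;
       for Status_Register'Size use 16;
    end HW;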
Agreed. It would also alleviate the maintenance issue, as Ada was designed to reduce software costs. Realistically the only way this has any chance is if a few developers were funded full-time to work on it. There seemed to be a few full-time devs pushing Rust support and some drivers before it was taken seriously. Honestly, Ada is the best language that I have seen for drivers, for network stacks, or for registers received as bytes. Linux is missing out.
> 2. Representation clauses almost entirely eliminate the need for the error-prone manual (de-)serialisation that's littered all over Linux driver code:
Do representation clauses let you specify endianness? From a quick glance at that link it didn't appear so. I would imagine that invalidates most use cases for them.
You can specify endianness, but only over the entire record, not an individual field. The way it works is a little complicated: https://www.adacore.com/gems/gem-140-bridging-the-endianness...
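Roughly, going off that article (a GNAT-specific sketch: Scalar_Storage_Order is a GNAT attribute, not standard Ada):

    with System;

    package Net is
       type Header is record
          Version : Natural range 0 .. 15;
          Length  : Natural range 0 .. 4095;
       end record;

       --  Lay out the whole record big-endian, whatever the host CPU is;
       --  Bit_Order has to agree with the storage order.
       for Header'Bit_Order use System.High_Order_First;
       for Header'Scalar_Storage_Order use System.High_Order_First;

       for Header use record
          Version at 0 range 0 .. 3;
          Length  at 0 range 4 .. 15;
       end record;
       for Header'Size use 16;
    end Net;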
Interesting, thank you. I think per-record is probably good enough for most applications, and less verbose than per-member. But it's not part of the language (that page calls it "implementation specific") and quite recent (that page is undated but references Ada 2012, so it must be from after that). It wouldn't have helped the Ada project I'm working on, which had an endianness crisis with serialisation a few decades ago.
Rust is in the kernel to attract young developers, which Ada doesn't.
GetIntoGamedev is not old, he's in his 20s. The issue is that people are not prepared to try something if it doesn't look like C or C++.
But he's one. See how many people are jumping on the Rust bandwagon…
>ended up with Rust in the kernel but not Ada
Linus hated Ada. I suspect he doesn't exactly like Rust either but the Tribe is just too strong within Linux.
Do you have a source for that? I've found his remarks on Pascal and C++ but not Ada.
Linus' opinion on Ada (vs Rust):
> We've had the system people who used Modula-2 or Ada, and I have to say Rust looks a lot better than either of those two disasters.
https://www.infoworld.com/article/2247741/linux-at-25-linus-...
> I'm a bit disappointed that we've ended up with Rust in the kernel but not Ada.
Why? Do you program in Ada or Coq?
People can't be bothered to track lifetimes; what makes you think they are ready to track pre/post-conditions and invariants, and to do it efficiently and thoroughly?
Having the option there is good, even if not everyone uses it. The same thing applies to Rust.
For Rust and Ada to be in the kernel, they need to be accepted by kernel maintainers.
And seeing that they can't stand Rust, whose constraints are much weaker than Ada/SPARK's, what chance does it have?
To paraphrase some computer science guru: the cyclical nature of programming languages is all down to the collective of programmers. Rather than being guided by mathematical rigor, they are guided by a fashion sense.
Memory safety and the borrow checker are useful even in the absence of dynamic memory allocation. This still doesn't bring rust and ada to the same place, but it is important to clarify that piece.
SPARK supports borrowing, which is easier to use than Rust's now. It also prevents memory leaks.
Ideally, Rust would start adopting excellent features like Ada's SPARK, and vice versa, Ada would get inspired by Rust's good parts as well.
> We control things that spin real fast and our previous experiences with C definitely resulted in some screwups that left you weak in the knees for a bit.
Ha, you could almost read this as a stuxnet joke
The verbose syntax contributes to its unpopularity as well: prose-like Ada code is extremely hard to skim, read, and comprehend.
For me it's the other way around: I can skim a program written in Ada and figure out what it's doing almost immediately, because it is highly structured and enforces correctness with no ambiguity through its syntax.
I can't really do the same for Rust, which tends to lend itself to a more confusing format.
About two years ago, I was able to dive into the Ada reference manual formatter - initial commit March 2000, about 45k lines of code - and add MDX output pretty easily.
Other languages focus on terseness and expressiveness. Ada expresses a bunch of constraints directly and forces you to do things in organized ways (e.g. use namespaces appropriately).
Wouldn't Rust's symbol-heavy syntax contribute the same way?
I think that since a significant portion of Rust developers come from a C++ background, and C++ uses basically the same set of symbols, it's not a huge barrier to adoption
Rust actually has a bunch of oddities, to the point they test them [0].
[0] https://github.com/rust-lang/rust/blob/master/tests/ui/weird...
There's really only one sigil in there that isn't in C++ (the ' single-quote to name lifetimes and labels). And it's missing several ambiguities that plague older C++ grammars (i.e. is >> a greater than, or closing two template expressions?)
> (i.e. is >> a greater than, or closing two template expressions?)
I thought it was a pointer-dereference signifier(, or maybe an object-string-stream thingy)? If you mean C++, that is.
I like Ada, but I tend to agree. "End Something_Something_Something" is really a mouthful (compared to "}"). And programmers are superficial like that. Ada wouldn't be the first decent language dismissed over inconsequential aspects like this one.
Context free curly braces in deeply nested code make me crazy. Labels to match up with the open symbol would be super helpful.
My company’s style guide requires them on closing braces for namespaces.
Let me say that I do agree with you.
However, I’d add that this job could easily be done by the IDE. For a reason that I fail to grasp, after being around for multiple decades, IDEs are still incredibly bad at presenting useful information about the code. Apart from coloration and, more recently, type hints, there has never been any real innovation in just helping to read code structure. For some reason we are still stuck with a tree of files beside some glorified text editor.
Interestingly, we have made incredible progress in code comprehension. We have incredibly mature AST parsers, LSP servers, … our IDEs know everything about our code structure, but we still fail to do anything else with that data than autocompletion and doc popups.
It’s called “sticky scroll” in VSCode and Visual Studio [1]. It pins the opening line of a scope to the top when it scrolls out of view and it does it multiple levels deep so you can see the class, function definition, conditionals, etc at the top of the source file when browsing deeply nested code.
[1] https://learn.microsoft.com/en-us/visualstudio/ide/editor-st...
Okay that’s cool :)
There is some great work being done here - I'm watching GToolkit advances, and while I don't fully buy the "moldable" hype, the UX of reading and writing code in GT feels like sci-fi sometimes.
So, this is a problem with not using good editors. I don't know if VSCode or any similar editors have a way to select the current block / current function, but in the Emacs (and most likely Vi(m)) world this is just part of knowing the ropes. So, giving extra emphasis to the end of the function (or block) is completely unnecessary; it just reduces entropy.
Other problems with this: one of the ways to navigate the code is by searching for the function (procedure) name. Now you have double the number of matches.
Also, when I find code with comments that add labels like "end of xxx", I automatically delete all of them. They don't help me at all, and only make searches in the code more difficult. Even bad editors like VSCode probably have the functionality to jump to a matching delimiter, which would easily reveal the name of the block you are working with.
And your company guidelines... oh, it should be fun when someone renames a namespace, but forgets to update the comments. At least, in Ada it's guaranteed that the end and the beginning will match. Also, sometimes I prefer not to invoke any kind of automatic refactoring, but rather rename a function and see what other code breaks, to quickly find dependencies. It's really annoying that this renaming must happen twice.
If you can't see the other end of a curly brace inside a function, I'm pretty tempted to say you're doing too much in one spot.
Extracting sequences of statements into a function sometimes improves readability, when those sequences together perform some recognizable operation; but other times such an extraction worsens readability a lot, because now you also need to open other pages to see what the program really does, when the statements in a sequence are not really related.
Even in programs optimally written for readability it is frequent to have iterations or selection statements much bigger than a page, especially where the coding rules enforce a small length for program lines, e.g. 80 characters.
In languages that follow Algol 60 - like Pascal, C, C++ and so on - which have a single kind of statement bracket, it is frequently necessary to add comments on the final braces or "end", to avoid confusion when reading the program.
This is much more cumbersome than in the languages that follow Algol 68, e.g. Ada and bash, where there are several distinct pairs of statement brackets.
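For illustration, a tiny sketch of Ada's distinct closers:

    procedure Demo is
       Total : Integer := 0;
    begin
       for I in 1 .. 10 loop
          if I mod 2 = 0 then
             Total := Total + I;
          end if;    --  closes the "if", nothing else
       end loop;     --  closes the "for ... loop"
    end Demo;        --  the optional name must match the unit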
I do occasionally have a long switch statement that doesn't lend itself to be broken up. If all the branches are simple and operate on the same conceptual level, breaking them out into separate functions that wouldn't be useful anywhere else doesn't make sense to me.
But it's definitely not a frequent enough occurrence to merit replacing closing braces with lengthy names that need to be kept in sync with their opening counterparts.
In real life you’ll mostly have to deal with code you haven’t written yourself.
Agreed but it’s not always my own code I am reading.
Pretty sure Java in VSCode and Android Studio places block names after }'s because it's hard to read otherwise.
End names are optional in Ada, so "fixing" that is just a style guide away. Meanwhile, Scala 3 added named `end` to help with long blocks on its indentation syntax.
Like in all such discussions: the problem isn't with what you write, but with what you read. There's tons of Ada code already written like that, so future changes to the style guide aren't going to do much. Also, nobody's changing that anyway.
I got used to it after writing a lot of complex SQL. I even developed a preference for uppercase keywords.
Philosophically this is a fair point but practically it is not. Eventually any programmer will start relying more and more on tools to help with skim/read/comprehend code bases, no matter the language. There's a reason that every text editor and IDE used for programming includes helpers to find subroutine calls, jump to matching symbol (curly braces, closing parenthesis, end statement, or tab-depth indicator), etc, etc. No language is so "easy to skim/read and comprehend" that you'd be happy with a realistically significant code base and only the navigation keys on your keyboard.
There's a very fine line between nice language syntax and ease of use via tools you use to interact with the language (with APL-influenced languages being the only exception I can think of, but even there I've heard of programmers having physical key map symbols overlaid on keyboards).
It was considered verbose years ago, but nowadays it is IMHO better than messy C++ templates or Rust syntax.
I really wanted to use Ada, or at least learn it. The concepts are nice, but I gave up when I started looking into Unicode support. It was wild, a bit discouraging. Or has the situation changed? What's the Unicode status in Ada?
You can embed and work with UTF-8 strings with no issue (I have source with emoji string literals), but if you need complex manipulation of code points vs glyphs etc., I’m not sure how robust the libraries are for what you are trying to do.
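For instance, something like this works (a minimal sketch; with GNAT you'd compile with -gnatW8 so the source is read as UTF-8):

    with Ada.Wide_Wide_Text_IO; use Ada.Wide_Wide_Text_IO;

    procedure Hello_Unicode is
       --  Wide_Wide_String holds 32-bit code points, so any Unicode
       --  character can appear directly in the literal.
       Greeting : constant Wide_Wide_String := "héllo, wörld";
    begin
       Put_Line (Greeting);
    end Hello_Unicode;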
Thank you, this is very useful information.
Coming from the type theory side with only a passing glance at Ada, I am nevertheless sure: this is not what type theorists mean when they talk about dependently typed languages. Such languages derive from the formulation of Per Martin-Löf (also called Intuitionistic Type Theory), they include dependent sum and dependent product types, and they allow type checkers to prove complex statements about code. (The history of dependent types is intertwined with the history of formalizing mathematics; dependent types were designed to encode essentially arbitrary mathematical statements.)
The interesting feature of Ada here seems to be what it calls "subtype predicates". As you've explained, these come in a "dynamic" flavor, which are a nice syntax for runtime assertions, and a static flavor, which are compile-time checked but restricted to certain static expressions (per https://ada-lang.io/docs/arm/AA-3/AA-3.2#p15_3_3.2.4).
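For concreteness, a minimal sketch of the two flavors (made-up names):

    package Predicates is
       --  Dynamic_Predicate: arbitrary boolean expression, checked at
       --  runtime on assignment, conversion, parameter passing, etc.
       subtype Even is Integer
         with Dynamic_Predicate => Even mod 2 = 0;

       --  Static_Predicate: restricted to static expressions, so the
       --  compiler can reason about it (e.g. case-statement coverage).
       type Day is (Mon, Tue, Wed, Thu, Fri, Sat, Sun);
       subtype Weekend is Day
         with Static_Predicate => Weekend in Sat | Sun;
    end Predicates;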
An example of something you can do in a dependently typed language is write a sorting function in such a way that the type checker proves that the output will always be in sorted order. I am pretty sure this cannot be done in Ada; checking at runtime does not count!
I do believe (having heard from multiple sources) that Ada's type system was ahead of its time and its success in creating practical programs that are likely to be correct is probably underrated. But I'm not here just to legislate semantics; one should be aware that there is something vastly more powerful out there called "dependent types" (even if that power is not likely to come into most people's day-to-day).
(Unfortunately Wikipedia is quite poor on this topic; you will see, for example, that on the Talk page someone asked "Is Ada really dependently typed?" two years ago; no reply. And it makes no sense to say that Ada has "tactics" but not "proof terms"; tactics are a way of generating proof terms. There are many better resources out there (especially ones associated with the languages Agda, Coq (currently being renamed Rocq), and Lean, e.g. https://lean-lang.org/theorem_proving_in_lean4/dependent_typ...). But be warned, there is no "short version": dependent types cannot be explained in a sentence, and they are not something you will arrive at with enough "hacking away".)
> An example of something you can do in a dependently typed language is write a sorting function in such a way that the type checker proves that the output will always be in sorted order. I am pretty sure this cannot be done in Ada; checking at runtime does not count!
It actually can be done in Ada, but not purely with the type system; instead we rely on SPARK, which converts Ada code and passes it through various automatic theorem provers. Some examples of fully proven sorting functions are here: https://github.com/AdaCore/spark2014/blob/master/testsuite/g...
You can also see from the above code just how good theorem provers and SPARK are now with the reasonably low number of assertions required to both prove that the output is sorted and prove that the input and output contain the same elements, not to mention all the hidden proofs relating to integer overflow, out-of-bounds access, etc..
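For a taste, the spec side of such a proof looks roughly like this (a trimmed-down sketch, not the actual testsuite code; the permutation half is omitted and the body is elided):

    package Sorting with SPARK_Mode is
       type Int_Array is array (Positive range <>) of Integer;

       --  Ghost function: exists only for specification and proof, and
       --  is compiled away in the final binary.
       function Is_Sorted (A : Int_Array) return Boolean is
         (for all I in A'First .. A'Last - 1 => A (I) <= A (I + 1))
       with Ghost;

       procedure Sort (A : in out Int_Array)
         with Post => Is_Sorted (A);
    end Sorting;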
You could maybe do all this with types and SPARK, but it's not the approach that would usually be taken.
Ah, very interesting. It does seem that the Ada community has done serious engineering work to build in powerful formal verification, in a way that is somehow parallel to the (much slower, for practical purposes, if more elegant) arc of type theory...
What is the flow for working through this kind of proof? Is there an interactive proof mode like you find in a lot of dependent type provers? Or is there some other guiding mechanism for telling you that you haven't provided enough guidance with asserts?
SPARK will give you some guidance, but there's no particularly fancy interactive tools. Here's an example of working through a different sorting algorithm: https://blog.adacore.com/i-cant-believe-that-i-can-prove-tha...
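Mostly the flow is: gnatprove reports which checks it couldn't discharge, and you strengthen the loops with invariants until everything goes through. A hedged sketch of the shape this takes (not code from the linked post):

    package Fill_Example with SPARK_Mode is
       type Int_Array is array (Positive range <>) of Integer;

       procedure Zero_Fill (A : in out Int_Array)
         with Post => (for all I in A'Range => A (I) = 0);
    end Fill_Example;

    package body Fill_Example with SPARK_Mode is
       procedure Zero_Fill (A : in out Int_Array) is
       begin
          for I in A'Range loop
             A (I) := 0;
             --  Without this, the provers can't carry what earlier
             --  iterations did across the loop back edge, and the
             --  postcondition stays unproved.
             pragma Loop_Invariant
               (for all J in A'First .. I => A (J) = 0);
          end loop;
       end Zero_Fill;
    end Fill_Example;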
Do note however that the "proof" part of dependent types requires being able to evaluate arbitrary parts of the program at "compile time". (As a fact of the matter, there isn't even a clean phase separation between compile time and run time in dependently-typed languages; the distinction can only be reintroduced after-the-fact via "program extraction".) So, in a sense, types may depend on values in a dependently-typed language but this is merely an elaborate form of meta-programming, it need not establish a true dependence from runtime values. Whether Ada qualifies as a 'true' dependently-typed language, in this view, would depend on how strong its forms of guaranteed compile-time evaluation and meta-programming are.
It does look like the "discriminant" system of Ada shares key similarities with what dependently-typed languages call a "dependent sum", a generalized record type where "later" elements of the record can depend on "earlier" ones.
"Dependent products", on the other hand, can be viewed as an extended version of generic functions, although they may also suffice to account for e.g. the examples given by OP of Ada's runtime range types and runtime-sized arrays. The key here being that the representation of a type is indeed given as a "function" of the value parameters that are "depended" on.
SPARK allows you to statically prove properties about Ada code. Proving a sort implementation is a classic example : https://blog.adacore.com/i-cant-believe-that-i-can-prove-tha...
It looks really difficult to prove even a "hello world" algorithm. I'm afraid you can easily run into the problem of not understanding what you're proving, and end up not proving what you actually wanted.
What’s nice is that you can do it in steps: you may have a hard time proving the full specification, but you can prove the absence of bad behavior like buffer overruns, etc., and go from there.
Maybe they're not implying this kind of limited dependent type system but surely it is still dependently typed? It's just not the "full fat" dependent typing.
Another example of a language with limited dependent typing is Sail. It has "lightweight" dependent types for integers and array lengths (pretty similar to Ada from what it sounds like).
It's very good in my experience - it lets you do a lot of powerful stuff without having to have a PhD in formal verification (again, sounds similar to Ada).
> An example of something you can do in a dependently typed language is write a sorting function in such a way that the type checker proves that the output will always be in sorted order.
Yeah, you can't do that, but you can have the type checker say things like "n^m is positive if n is positive or even" or "foo(x) -> (bits(n), bits(m)) with m+n=x" (not the actual syntax). I'm pretty sure you can't do that stuff in a type system without dependent types, right?
Well, "dependently typed" is widely used to mean something like "derived from Martin-Löf type theory, including arbitrary dependent sums and dependent products"; in other words, "dependent types" means "full fat dependent types", and it's the things that are less powerful that require qualifiers.
(So when Sail says it has "lightweight dependent types", that seems fine to me (it does seem to do more than it could with simple types or polymorphic types), but if it simply asserted that it "had dependent types" I would feel misled.)
The wording is subtle and language does change, but what I want to push back on is the confusion I see from time to time that "if I can write anything that looks like a function from values to types, I have the same thing that everybody talking about dependent types has". If you think this you don't know what you're missing, or even that there is something you're missing, and what you're missing is very cool!
Super interesting! Thanks for the link.
Would you say Lean is a somewhat learnable language for somebody who only has cursory exposure to functional programming and static types? I’ve almost exclusively used typescript for the last few years, except for some clojure in the last few months.
Sometimes I find a neat language, but my very TS-oriented brain has a hard time getting into it.
I would say dependent types are going into the deep end; unless you have a real need to prove things, it may be hard to see the motivation to learn such abstractions.
In between ad hoc types like TypeScript and dependently-typed languages like Agda, Coq/Rocq, and Lean are well-typed, polymorphic (but not dependent) languages like OCaml, F#, or Haskell ("ML family" or "Hindley-Milner" are related terms). Those are what I'd suggest checking out first!
If you’d like to dive into the deep end and learn a “we tried to make as much type system power available as conceptually simply and elegantly as possible” language, Agda [1] is a very fun language to fool around with. It’s superficially a Haskell-like, but you’ll likely only really learn it as a toy that nevertheless blows your mind… like Prolog etc.
The first thing I tried to do after learning all the basic syntax is write a function f: Nat -> Set (from numbers to types), and then stick it in a function signature (i.e. g: f (3).)
[1] https://people.inf.elte.hu/divip/AgdaTutorial/Index.html
I made some small changes based on your comment https://en.wikipedia.org/wiki/Talk:Dependent_type#Ada2012 but I hope you or others with more knowledge improve the page!
For anyone interested in trying Ada, I've written an installer that brings in the entire toolchain with a copy-and-paste command line on macOS and Linux: https://getada.dev
Here is some previous discussion:
Show HN: Getada: rustup-like installer for Ada's toolchain/package manager 194 points | 115 comments: https://news.ycombinator.com/item?id=40132373
There is a quickstart with a link to a large tutorial: https://www.getada.dev/how-to-use-alire.html
If you wanna try out Ada without even installing anything, you can also check out https://learn.adacore.com/
One other neat thing about discriminated records is that you're not limited to just a single field with a variable size; you can also write something like this:
type My_Record (A, B : My_Integer) is record
X : My_Array (1 .. A);
Y : My_Array (1 .. B);
end record;
A record created from this will have those arrays tightly packed, rather than leaving space at the end of the first one like you might expect (this might be compiler-dependent, but it's definitely how GCC does it). Also note that these values can be set at runtime, so this isn't just a generic in disguise (although in Ada you can also instantiate generics at runtime).

Very interesting. As a Zig enthusiast, I've long suspected that Ada has the ability to express more of the invariants and types that we will want in Zig (some, like ranges, are planned, last I heard). I'm definitely interested to learn more about Ada.
I’d love to work with Ada but never had the opportunity. Anyone know which companies hire for it?
I've seen a single company that does warehouse management software out of Sweden that advertises (in job listings) that they're using it. Otherwise, it's pretty slim pickings if you're not applying in its wheelhouse (high integrity systems -- aerospace, defense, medical, etc).
If you do microcontroller firmware development, I'd say it's perfectly reasonable to float it for a smaller project and just give it a spin. The language is significantly more modern/sane than C, so you're not really exposing yourself to much talent risk. There are no gaping holes in the environment, experienced firmware devs will adjust easily, and new devs will feel more at home with the facilities provided.
What was the company in Sweden? Thankfully I live in Stockholm!
I think that'd be Lund Sweden, or Consafe Logistics. Similar systems at both.
Consafe was who I was thinking of.
Probably more jobs are available for VHDL (influenced by Ada) than for Ada itself. Of course, as a hardware description language, you're on the hardware side of things. Also worth noting it's more popular in Europe (Verilog seems to have won out in the US).
The military
I have been out of the defense industry for quite a while now, but even back then, more and more projects were using C/C++, because it was so hard to hire Ada developers.
I am not versed in Ada, but Ada does not seem to have dependent types at all; in fact the author doesn't seem to understand what dependent types are. All his examples seem to revolve around arrays and bounded integers, so I will stick to those examples (although DT are far richer than that).
In a language with dependent types you don't merely have arrays with arbitrary bounds, but you get a proof that array access is always within bounds. AFAICT this is missing in Ada, even when SPARK is employed. Similarly with bounded integers. In a DT language you don't get runtime bound checks; it's all compile-time proofs.
In Ada, even the name of the feature makes it pretty clear that it's just runtime verification:
type My_Record (Top, Bottom : My_Integer) is record
Field : My_Array(Bottom .. Top);
end record
with Dynamic_Predicate => Bottom <= Top;
It's not a dynamic predicate in either a language with DT or one with refinement types! It's all compile-time proofs!

SPARK does attempt to prove certain properties of programs automatically, including things like bounds checking, which is great, but it's all best effort and it's nowhere near the same level of capability as DT (or even refinement types). Of course it's far more lightweight (although it can be argued that systems based on refinement types are just as lightweight).
It's very clear that people developing SPARK know a lot about type theory and formal verification, and use as much of the theory as possible to make verification of Ada programs as cheap and ergonomic as possible, but to claim that Ada has DT is quite a stretch. People are using DT to do formal verification of C programs but that doesn't mean that C has dependent types.
> you get a proof that array access is always within bounds
But the way you get 'proofs' in dependently-typed languages is just by building a tree of function evaluations that demonstrate a given logical inference at compile time, and that disappear (are translated to a unit type) when 'program extraction' is performed on the code. This is ultimately a kind of meta-programming. And in a DT language you would still need runtime checks within your extracted code to ensure that program inputs provided at runtime satisfy the logical conditions that the "proof" part of the program relies on: the need for runtime checking is thus reduced to a bare minimum, but not fully eliminated.
> This is ultimately a kind of meta-programming.
There is a connection between advanced type systems and metaprogramming; you don't even need dependent types to reach it. GHC can express, for example, typed symbolic differentiation of compiled terms [1], something that would be of interest to a Lisp programmer. This is not a surprise: System Fω has a copy of the simply typed lambda calculus at the type level.
> But the way you get 'proofs' in dependently-typed languages is just by building a tree of function evaluations that demonstrate a given logical inference at compile time, and that disappear (are translated to a unit type) when 'program extraction' is performed on the code.
As someone who works on a dependently-typed language, I have no idea what you mean. Programs in most DT languages run directly (Lean, Idris, Agda, etc.); code extraction is a Coq thing (and Isabelle, but Isabelle does not use DT). Some languages have type erasure, some don't. Some are explicitly concerned about type erasure (Idris), some aren't.
> And in a DT language you would still need runtime checks within your extracted code to ensure that program inputs provided at runtime satisfy the logical conditions that the "proof" part of the program relies on: the need for runtime checking is thus reduced to a bare minimum, but not fully eliminated.
In a typed language you need to parse the input exactly once and construct a typed term. The type system itself will prevent you from constructing an ill-typed term -- that is, the parsing routine itself is type-checked (at compile time). Yes, parsing needs to happen; this is true for any language, not just a DT one. But the act of parsing itself produces a dependently-typed term, and there are no additional checks happening on the term while it is being used at runtime. The fact that a parsed term cannot, for example, cause an integer overflow inside your program no matter what is quite a massive guarantee.
[1] https://mail.haskell.org/pipermail/haskell/2004-November/014...
> Programs in most DT languages run directly (Lean, Idris, Agda, etc.); code extraction is a Coq thing (and Isabelle, but Isabelle does not use DT). Some languages have type erasure, some don't. Some are explicitly concerned about type erasure (Idris), some aren't.
Lean has proof irrelevance which means that any information that may be contained within the "proof" or "logical" part of the program is erased at runtime. It amounts to largely the same thing.
One thing that’s kind of interesting about SPARK in particular: all the contracts get compiled to WhyML (Why3's intermediate language) before running through the solvers. If there are any VCs that can’t be discharged using the automatic provers, you can manually prove them using Coq: https://blog.adacore.com/using-coq-to-verify-spark-2014-code
I think the lines between what some consider true DT and what is possible w/ Ada might be more blurred than people expect.
The two stacks thing is really cool. It seems like it solves a lot of the problems that dynamic allocation + RAII solves. Is there more written about this?
You don't need explicit language support to do this -- it's a fairly common practice in videogame development, but we usually call it an arena/scratch buffer. Ryan Fleury has a wonderful article where he goes at length into different memory management strategies [1].
It's just a static buffer that you can use for temporary allocations, e.g. to return an array of 50 int32's to a parent stack frame you can just allocate `50*sizeof(int32)` bytes on that buffer and return a pointer to the beginning, instead of a heap-allocated `std::vector<int32>`. Every allocation advances the head of the buffer, so you can have multiple allocations alive at the same time.
Every once in a while you need to free the buffer, which just means resetting its head back to the start - in the case of games the beginning of a new frame is a convenient point to do that.
This doesn't solve dynamic memory in general, as it can't be used for long-lived allocations, it just saves you a bunch of temporary heap allocations. It doesn't solve all problems that RAII does either, as RAII is often used for resources other than memory - files, sockets, locks, ...
[1] https://www.rfleury.com/p/untangling-lifetimes-the-arena-all...
Explicit language support is very nice, as the compiler can do a lot of optimizations, free the space as early as possible on all control paths, and ensure full memory safety.
For example, when a function returns a thing on the second stack, the compiler can arrange that, before returning, the thing is moved to the second-stack position that was current at the start of the function. This releases the memory that was used for other things by the callee. Then the caller knows the second-stack depth after the call and can release its own second-stack usage just as well.
Doing frees/moves/copies on the second stack makes it harder to track the lifetime of allocations, and restricts what you can use it for - to free a block on the stack you necessarily have to free everything after it.
Most programs have a point where you know nothing in it is used, and it's convenient (and very performant) to free the entire thing. That makes it way easier to reason about: when you alloc from it, you know your block of memory is valid until a <RESET> point:
- For a video game you can <RESET> at the start of every frame
- For a web server you can alloc a new stack for every incoming request and <RESET> after the request is over
- For a short-lived program you may not even need to <RESET> at all
The <RESET> point is indeed what you need to look for, but why does it need to be a stack? I guess it's just a naming thing. I've always looked at it as a scratchpad (that was heap-allocated).
You can just pop the second stack at each function return, though that does limit its use to only really scratch-space for dynamically sized objects. Like this: https://nullprogram.com/blog/2023/09/27/. “Arenas” are passed by value so their end pointer will auto reset when the function returns.
Ada has the concept of storage pools: the heap is one such pool, and you can define your own.
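A minimal sketch of defining your own (a bump/arena pool; Reset and all the names here are mine, not a standard API):

    with System;
    with System.Storage_Elements; use System.Storage_Elements;
    with System.Storage_Pools;

    package Bump_Arenas is
       type Bump_Arena (Capacity : Storage_Count) is
         new System.Storage_Pools.Root_Storage_Pool with record
          Data : Storage_Array (1 .. Capacity);
          Head : Storage_Count := 0;
       end record;

       overriding procedure Allocate
         (Pool                     : in out Bump_Arena;
          Storage_Address          : out System.Address;
          Size_In_Storage_Elements : Storage_Count;
          Alignment                : Storage_Count);

       --  Individual frees are no-ops: memory is reclaimed in bulk.
       overriding procedure Deallocate
         (Pool                     : in out Bump_Arena;
          Storage_Address          : System.Address;
          Size_In_Storage_Elements : Storage_Count;
          Alignment                : Storage_Count) is null;

       overriding function Storage_Size
         (Pool : Bump_Arena) return Storage_Count is (Pool.Capacity);

       --  The <RESET> point: everything allocated so far becomes garbage.
       procedure Reset (Pool : in out Bump_Arena);
    end Bump_Arenas;

    package body Bump_Arenas is
       overriding procedure Allocate
         (Pool                     : in out Bump_Arena;
          Storage_Address          : out System.Address;
          Size_In_Storage_Elements : Storage_Count;
          Alignment                : Storage_Count)
       is
          --  Bump the head up to the requested alignment (sketch:
          --  assumes Data itself is suitably aligned).
          Align : constant Storage_Count :=
            Storage_Count'Max (Alignment, 1);
          Start : constant Storage_Count :=
            ((Pool.Head + Align - 1) / Align) * Align;
       begin
          if Start + Size_In_Storage_Elements > Pool.Capacity then
             raise Storage_Error;
          end if;
          Storage_Address := Pool.Data'Address + Start;
          Pool.Head := Start + Size_In_Storage_Elements;
       end Allocate;

       procedure Reset (Pool : in out Bump_Arena) is
       begin
          Pool.Head := 0;
       end Reset;
    end Bump_Arenas;

An access type is then tied to an instance with "for Node_Access'Storage_Pool use My_Arena;", and every allocation via "new" through that type comes out of the arena.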
Technically one does not need a second stack to implement this. One can use the main stack to place all dynamically sized things. The trick is then, on return, to copy the dynamically sized thing so that it sits just before the caller's return address stored on the stack. The caller then sees it as if the new thing was allocated on its stack after the call.
But using a second stack is just simpler, avoids the extra copies, and is more compatible with the mainstream ABI.
Not really much other than tangential mentions. I did write an addendum to the OP going over the secondary stack as well as where you still use normal dynamic allocation in Ada: https://nytpu.com/gemlog/2024-12-27-2
All my knowledge on it comes from some documentation on configuring the secondary stack for embedded targets and from comments+diagrams in the GCC GNAT source code (and maybe I saw a conversation on it on the comp.lang.ada Usenet group at some point…): https://docs.adacore.com/gnat_ugx-docs/html/gnat_ugx/gnat_ug... https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=gcc/ada/libgnat/...
Forth is another language with separate control flow and data stacks, for similar reasons, although at a much lower level - the stacks are a part of the language spec that is fully exposed to you, not just an implementation detail.