> But the best programming language is like the best wine. Different people like different things, and that's OK. Drink whatever is the best fit for your palate, and code in whatever is the best fit for your brain.
never has anything rung truer
Hi :) You should check out Gerbil Scheme (https://cons.io). It is built on top of Gambit Scheme (https://gambitscheme.org) and has the generics you are looking for.
Oh, I completely forgot about your blog. I only had the JPL page bookmarked.
HN is how I heard the story many years ago, and I posted it again because (a) It's my favourite story and (b) a discussion elsewhere in HN reminded me of it. So I had to.
Didn't expect it to bubble up on the front page... that says a lot about the enduring quality of your field experience. Thanks for publishing about it!
Can you comment on the relation between a live programming environment (no separate "staged" compilation; there is always a dev environment, the REPL, in a deployment) and dynamic types?
The typing available in CL, is it like Python type hints in that it doesn't affect the meaning of a program at all?
Type declarations in ANSI CL are promises you make to the compiler in order to allow it to generate faster code. The compiler can also use this information to generate compile-time warnings and errors, but it is not required to. This makes CL's native compile-time type system good for making your code fast, not so much for making it reliable. But it's straightforward to layer a proper modern type checker on top of CL, and in fact this has been done. It's called Coalton:
https://coalton-lang.github.io/
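Going back to the native declarations mentioned above, here is a minimal sketch of what they look like (the function is made up for illustration; how strictly the promises are checked varies by implementation and by the safety setting):
```
;; Declarations are promises to the compiler: with low safety they may be
;; trusted blindly to generate fast unboxed float code, with higher safety
;; some implementations insert checks or emit warnings instead.
(defun sum-of-squares (xs)
  (declare (type (simple-array double-float (*)) xs)
           (optimize (speed 3) (safety 1)))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (dotimes (i (length xs) acc)
      (incf acc (* (aref xs i) (aref xs i))))))
```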
IMHO this is the Right Answer: types when you want/need them, dynamism when you don't. It seems like a no-brainer to me. I've never understood why so many people think it has to be one or the other. It seems to me like arguing over whether the window on the bike shed should be on the left or the right. If there is disagreement over this, just put in two windows!
I think you're describing "gradual typing", which had lots of promise and attention, but arguably achieves the worst of both worlds: https://www.ncameron.org/blog/a-response-to-a-decade-of-deve...
What it comes down to is a reconciliation between these two views: https://ncatlab.org/nlab/show/intrinsic+and+extrinsic+views+...
I think the "reliability" you mentioned is "type safety". And I think the dynamicism I sometimes miss in a language with static types is, in a bizarre way, the lack of runtime type errors. If you load a configuration file, necessarily at runtime, but the configuration file has the wrong format, an error should occur. What kind of error? In essence, a type error...
Reusing the type checking machinery at runtime is a benefit to some. At the same time, the possibility of type errors (type unsafety) is a threat to the reliability of programs.
I have trouble wrapping my head around it, but I ultimately want the bikeshed with both windows.
> arguably achieves the worst of both worlds
The author makes a number of unfounded claims, including:
> there is huge benefit in throwing away the prototype
This is at odds with:
https://en.wikipedia.org/wiki/Second-system_effect
And this is the problem, in response to which I hereby formulate Ron's Third Law (*): For any position regarding the proper design of a programming language there exists an authoritative-sounding aphorism to support it.
> good type inference ... good language design
This is begging the question. Of course* you want good type inference and good language design. Who in their right mind would advocate for bad type inference and bad language design? The problem is specifying what the word "good" means* here.
---
(*) Ron's first two laws are: 1) All extreme positions are wrong, and 2) The hardest part of getting what you want is figuring out what it is. (The difficulty of defining "good" is a corollary to this.)
> Ron's Third Law (*): For any position regarding the proper design of a programming language there exists an authoritative-sounding aphorism to support it.
And let's not forget the Law's corollary:
"For any such position there also exists an authoritative-sounding aphorism to oppose it."
I agree with everything you said, and just wish you'd respond to the strongest points, not the weakest ;)
Specifically, the two views of typing.
But my fault for linking to a mediocre blog post.
> Specifically, the two views of typing.
That was in the other article, the one I didn't comment on at all.
There are two reasons I didn't comment on it. First, I don't actually understand what the author means by things like "only phrases that satisfy typing judgements have meanings" and "a typing judgement is an assertion that the meaning of a phrase possesses some property". I can kinda-sorta map this onto my intuitions about compile-time and run-time typing, but when I tried to actually drill down into the definitions of things like "phrase" and "typing judgement" I got lost in category theory. Which brings me to the second reason, which is that this is a very, very deep rabbit hole. To do it justice I'd have to write a whole blog post at the very least. I could probably write a whole book about it. But here's my best shot at an HN-comment-sized reply:
I've always been skeptical of the whole static typing enterprise because its advocates sweep certain practical issues under the rug of trivial examples. The fundamental problem is that as soon as you start to do arithmetic you run headlong into Godel's theorem. If you can do arithmetic, you can build a Turing machine. So your type system will either reject correct code, accept incorrect code, or make you wait forever for an answer. Pick your poison.
Now, this might not matter in practice. We manage to get a lot of practical stuff done on computers despite the halting problem. But in fact it does turn out to matter in practice because in practice the way typed languages treat arithmetic generally imposes a huge cognitive load on the programmer.
To cite but one example: nearly every typed programming language has a type called Int. It is generally implied (though rarely actually stated) that it is meant to represent a mathematical integer (hence the name). But computers are finite state machines which cannot actually represent mathematical integers. So in practice Int generally means something like Int32 or UInt64 or maybe, if you're lucky, a generic numerical type that will automatically grow into a bignum rather than overflow or (if you're unlucky) wrap around if you try to add the wrong values.
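A quick sketch of the "lucky" case, using CL as the example (the exact fixnum limit is implementation-dependent, but the promotion behaviour is standard):
```
;; CL integers silently promote to bignums instead of wrapping or overflowing.
(+ most-positive-fixnum 1)  ; => a bignum one larger than the fixnum limit
(expt 2 100)                ; => 1267650600228229401496703205376, exactly
```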
Sometimes these details matter and sometimes they don't. Sometimes when I'm adding things I really care about how it's done under the hood. Maybe I'm writing something mission-critical that can absolutely not fail, or maybe I'm writing an inner loop that has to run as fast as possible at the cost of possibly failing on occasion. But sometimes I don't care about any of this and I just want to add, say, two elliptic curve points by writing P1+P2 rather than EllipticCurveAdd(P1,P2) -- or was that EdwardsCurveAdd(P1,P2)? or Curve25519donnaPointsAdd(P1,P2)? -- and I don't care if it takes a few milliseconds because I'm just noodling around with something. If I have to take even 30 seconds to read the docs to remember the name of the function that adds elliptic curve points, I've lost. Thirty seconds might not matter much if you only have to do it once, but this sort of cognitive load infects every nook and cranny of a real program, and if you have to pay it all the time it can be the difference between getting your code to run in a week and getting it to run in ten minutes. And God help you if you should ever have to make a change to an existing code base.
Those are the sorts of things I care about. Those concerns seem worlds away from the kinds of things type theorists care about.
> Those concerns seem worlds away from the kinds of things type theorists care about.
It’s true there’s a tradition, voiced by Dijkstra, that emphasizes correctness at a point in time, sometimes at the expense of long-term adaptability.
> “Unfathomed misunderstanding is further revealed by the term software maintenance, as a result of which many people continue to believe that programs – and even programming languages themselves – are subject to wear and tear. Your car needs maintenance too, doesn’t it? Famous is the story of an oil company that believed that its PASCAL programs did not last as long as its FORTRAN programs ‘because PASCAL was not maintained’.” (Dijkstra, “On the cruelty of really teaching computer science”, EWD1036, 1988)
As for Gödel, I’d say invoking incompleteness is like invoking Russell’s paradox — important for foundations, but often a distraction in practice. And ironically, type theory itself arose to tame Russell’s paradox. So while Gödel tells us no system can prove everything, that doesn’t undercut the usefulness of partial, checkable systems — which is the real aim of most type-theoretic tools.
Among the “three poisons,” I’m most comfortable rejecting “correct” code. If a system can’t see your reasoning, how sure are you it’s correct? Better to strengthen the language until it can express your argument. And since that relies on pushing the frontiers of our knowledge, there are times when you need an escape hatch --- that is 50% of how I understand "dynamicism".
The intrinsic vs. extrinsic view of types cuts to the heart of this. The extrinsic (Curry) view — types as sets of values --- aligns with tools like abstract interpretation, where we overlay semantic properties onto untyped code. The intrinsic (Church) view builds meaning into the syntax itself. In practice, we need both: freedom to sketch, and structure to grow.
-----
On the cognitive load of remembering names like `EllipticCurveAdd(P1, P2)` vs. just writing `P1 + P2` --- that pain is real, but I don't really see it as being about static typing. It’s more about whether the system supports good abstraction. Having a single name like `+` across many domains is possible in static systems --- that’s exactly what polymorphism and type classes are for. The most comfortable system for me has been Julia, because of pervasive multiple dispatch and its focus on generic-programming correctness. I don't think this works well unless the multiple dispatch is indeed pervasive (which requires attention paid to performance), and I'm pretty sure CLOS falls short here (I am no expert on this).
The difference between Julia and "static types" here is more about whether you are forced to contend with precisely the meaning of `+`. In the type class view, you bundle operations like `+` and `*` into something like "Field". Julia is much sloppier and simultaneously more flexible. It does not have formal interfaces (yet), which allows interfaces to emerge organically in the ecosystem. That is also a pretty major footgun in very large production use cases....
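To make the "single name across many domains" point concrete in CL terms, here is a rough CLOS sketch (the point and polynomial types are invented and the "addition" bodies are stand-ins, not real formulas; note also that CL:+ itself is an ordinary function, so a generic version needs its own name or a shadowing package):
```
(defstruct ec-point x y)   ; hypothetical elliptic-curve point
(defstruct poly coeffs)    ; hypothetical polynomial

;; One generic name, dispatched on the types of both arguments.
(defgeneric add (a b))

(defmethod add ((a number) (b number))
  (+ a b))

(defmethod add ((a ec-point) (b ec-point))
  ;; stand-in for the real curve-addition formula
  (make-ec-point :x (+ (ec-point-x a) (ec-point-x b))
                 :y (+ (ec-point-y a) (ec-point-y b))))

(defmethod add ((a poly) (b poly))
  (make-poly :coeffs (mapcar #'+ (poly-coeffs a) (poly-coeffs b))))
```
This shows the dispatch mechanism, not the performance story; whether it is fast enough to be "pervasive" is a separate question.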
FWIW, I have been a huge fan of "Lisping at JPL" for many years now and it is validating in many ways. I especially enjoyed the podcast with Adam Gordon Bell. This is also validating: https://mihaiolteanu.me/defense-of-lisp-macros (discussed on HN).
I think we are more or less in violent agreement here. Our disagreements are on the fringes, and could well just be a result of my ignorance or misunderstanding. That said...
> invoking incompleteness is like invoking Russell’s paradox — important for foundations, but often a distraction in practice
Yes, but the operative word here is "often". "Often a distraction" is logically equivalent to "sometimes relevant". The problem is that whether or not it is relevant to you depends on your goals, and different people have different goals. For example, one of my goals is pedagogy, and so it's really handy for me to be able to fire up a CL REPL and do this:
Clozure Common Lisp Version 1.12.1 (v1.12.1-10-gca107b94) DarwinX8664
? (expt (sqrt -1) (sqrt -1))
#C(0.20787957 0.0)
so that I can tell a student, "See? i to the i is a real number! Isn't that cool?" But if your goal is to build a social media site, that has no value for you. Different strokes.
> The intrinsic (Church) view builds meaning into the syntax itself.
This is what I don't get. I can't even wring any coherent meaning out of the phrase "build meaning into the syntax itself". Programs don't have meaning, they are descriptions of processes. Programs don't mean things, they do things.
Even for natural language sentences, which do mean things, I don't understand what it could possibly mean to build meaning into the syntax. "The dog ate my license plate" has meaning but "The dog ate my car crash" does not despite the fact that those two sentences are syntactically identical.
(BTW, imbuing syntax with meaning sounds more like Chomsky than Church to me. But what do I know?)
> I’m most comfortable rejecting “correct” code.
Again, one of my goals is pedagogy. Towards that goal I once wrote this:
https://flownet.com/ron/lambda-calculus.html
After writing that, I thought this might be a good time to learn Haskell. If CL is good for showing off the lambda calculus, surely Haskell will be even better? I'm guessing I don't need to explain to you why that did not go to plan.
And BTW, thanks for the kind words.
Ah, your example reminds me of a quirk in Julia where some methods are type stable, and others are not. (This is just an aside. I'll provide a response in another comment.)
```
julia> sqrt(-1.0)
ERROR: DomainError with -1.0:
sqrt was called with a negative real argument but will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).
Stacktrace:
```
The justification is "type stability". The return type of a function should be constant across different values of the same input type. This is not a requirement, but a guidance that informs much of the design, including the DomainError on `sqrt(-1)`, instead of returning a complex result.
> ERROR: DomainError with -1.0:
> sqrt was called with a negative real argument but will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).
This is the sort of thing that drives me absolutely nuts. In no possible world that I can imagine is generating an error a more useful result than returning a complex value. I could forgive it if the system didn't know about complex numbers, but in this case it is clear that the system is perfectly capable of producing a correct and useful result, but it just refuses to do so in service of some stupid ideology. I have zero tolerance for this sort of nonsense.
I'm not sure how familiar you are with Julia, but it is very spiritually aligned with Lisp. The reason for the DomainError is "type stability", and it's not stupid ideology unless you consider high-performing numerical code stupid ideology.
To your point, goals can differ.
I know very little about Julia. It is on my list of Things To Look Into Some Day.
> type stability [is] not stupid ideology unless you consider high-performing numerical code stupid ideology.
"Stupid ideology" may have been putting it a bit strongly, and there are times when I want numerical code to be performant. But I don't always want that, and I don't want a language that makes me pay for it even when I don't want it. Almost always I prefer high-fidelity [1] over speed. But even when I want speed, I always want to start with high-fidelity, get that working, and then make it fast if I need to. 99% of the time I don't need to because 99% of my code turns out to be I/O bound or memory-latency bound. It's extremely rare for my code to be compute-bound, and even in those cases I can almost always just find a C library somewhere that someone else has written that I can call through an FFI. So for me, a programming language that forces me to pay for run-time performance whether I want it or not has negative value.
But honestly, I can't actually think of a single instance in my entire 40 year career when the run-time performance of my CL code was a limiting factor. The limiting factor to my productivity has almost always been my typing speed and the speed of my internet connection.
---
[1] By "high fidelity" I mean how well the semantics of the language reflects the domain model, which, in the case of numerical code, is mathematics. Fixnums and floats are a fast but low-fidelity model because they don't actually behave like the integers and reals that they purport to model. Bignums and rationals are a high(er)-fidelity model. A really high fidelity model would let me do something like (* (sqrt 2) (sqrt 2)) and get back exactly 2 by having some sort of exact representation of algebraic numbers. I don't know of any programming language that provides that natively, but some languages, like CL and Python, let me extend the language to add that kind of capability myself. For me, that kind of extensibility is table stakes.
> The intrinsic (Church) view builds meaning into the syntax itself.
I shouldn’t have phrased it that way --- the distinction between syntax and semantics is awkward for me, especially when there are discussions of "static" vs "dynamic" semantics. What I meant is closer to this: in intrinsic systems, the typing rules define which terms are meaningful at all. Terms (programs) that do not type do not have meaning. This is the sense in which "correct programs get rejected".
> Programs don't have meaning, they are descriptions of processes.
Despite my confusion about the previous point, one clear thing I can point to, however, is the funny realization that there are some programming languages where the primary point is not to run the program, but just to type check it. This is the case for many programs written in languages such as Lean and Rocq. In these situations, the programs do have meaning and are not descriptions of processes. A much more common example that makes it clear that programs aren't just descriptions of processes is in the "declarative" languages. In SQL different execution plans might implement the same logical query, and writing down the SQL is less about describing a process and more about assertion or specification.
> depends on your goals
That’s fair, and I agree (perhaps violently??) I focused on your mention of Gödel's incompleteness because it’s often brought up (as I think you did here) to cast doubt on the viability of static typing in practice, but I don't think it does that at all. I think what casts much more doubt is a connection that I have felt is missing for a while now, which is about interactivity or perhaps "live programming".
The Lean and Rocq interactive theorem proving environments are very "live", but not with a REPL. It's fascinating. But it's also because those programs, in a sense, don't do anything...
There is a type theory rabbit hole, for sure. I think there is also a Smalltalk and Lisp rabbit hole. I would like them to meet somehow.
> Lean and Rocq
Yep, I get that. But I don't think of these as programming languages, I think of them as proof assistants. Related for sure, but different enough that IMHO it is unwise to conflate them.
> SQL
Again, a whole different beast. Database query languages are distinct from programming languages which are distinct from proof assistants. All related, but IMHO they ought not to be conflated. There is a reason that SQLite3 is written in C and not in SQL.
> it’s often brought up (as I think you did here) to cast doubt on the viability of static typing in practice
I bring up Godel just to show that static typing cannot be a panacea. You are necessarily going to have false positives or false negatives or both. If you're OK with that, fine, more power to you. But don't judge me if I'm not OK with it.
> There is a type theory rabbit hole, for sure. I think there is also a Smalltalk and Lisp rabbit hole. I would like them to meet somehow.
You want Coalton :-)
https://coalton-lang.github.io/
> But I don't think of these as programming language
I don’t want to get too caught up in what counts as a programming language, but you can absolutely write programs in a language like Lean, e.g. computing factorial 10. The fact that you can also write proofs doesn’t negate a system's status as a programming language. It’s worth noting that the official Lean documentation reflects this dual use: https://lean-lang.org/documentation/.
So there’s no conflation... just a system that supports both orientations, depending on how you use it.
> SQL [...] ought not to be conflated
I agree that SQL isn’t a general-purpose programming language, but you said something to the effect of "I don't understand what meaning a program has beyond what it does", and if you know about SQL, then I think you do understand that a program can have "computational meaning" that is separate from its meaning when you run it through an interpreter or processor.
> You want Coalton :-)
Thanks for the pointer. I am going to read https://coalton-lang.github.io/20211212-typeclasses/ , hoping to eventually achieve clarity on the relation between multiple dispatch and type classes.
That said, my interest in generic programming pushes pretty hard toward dependent types. I’m increasingly convinced that many patterns in dynamic programs, especially those involving runtime dispatch, require dependent types in order to be typed. Coalton might also help me better understand the relation between interactivity and static typing. But it's not going to be a panacea either ;)
> I bring up Godel just to show that static typing cannot be a panacea. You are necessarily going to have false positives or false negatives or both. If you're OK with that, fine, more power to you. But don't judge me if I'm not OK with it.
If you want a static type system that is sound and decidable, it will necessarily reject some programs that would behave just fine at runtime. That’s not really Gödel’s fault. It follows more directly from the halting problem.
But that's a quibble, and I still don't quite know what to make of your stance. The thing you're not okay with is having the system "get in your way", which is your false positive (the polarity doesn't matter, if that's what I'm getting wrong). The point is you want to say "my program is correct, now run it for me", and you will hate it if the system rejects it.
What you don't hate is runtime type errors.
There is no judgement on my side on the preferences. But I'm confused. If you don't have any static type checking, then it's like having a system that always gives negatives, and some of those will be false (again, assuming I got the polarity right. if not, just swap here and earlier). I'm pretty sure you're okay with this, even though the text of your last message leads me to believe otherwise.
To summarize, there’s a fundamental trade-off:
- If you want a sound type system (never accepts unsafe programs),
- And you want it to be decidable (terminates on all inputs),
- Then it cannot also be complete: it will reject some programs that would have run just fine.
> you want to say "my program is correct, now run it for me"
No. I want to say "Run this program whether or not it is correct because I just want to see what it does in its current state." Sometimes I know that it will produce an error and what I want is the information in the backtrace.
> What you don't hate is runtime type errors.
It depends. I hate "NIL is not of the expected type" errors. So I have a bee in my bonnet about that too, it's just not something that comes up nearly as often as static-vs-dynamic. I think typed nulls are a good idea, but very few languages have those.
> If you don't have any static type checking
I love static type checking. The more information I can get at compile time the better. I just think that the output of the type checker should be warnings rather than errors.
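As one concrete example of that behaviour (SBCL is used here as the example implementation; others warn more or less aggressively), compiling a form with an obvious type conflict produces a compile-time warning but still compiles:
```
;; SBCL derives at compile time that a string can't be a NUMBER and warns,
;; but the function is still defined; calling it signals a runtime TYPE-ERROR.
(defun oops ()
  (+ 1 "two"))
```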
Thanks for sharing the BINDING-BLOCK macro, it solves one of my perennial problems with Lispy languages (I believe I have ranted about this on HN too). The best I could come up with was something that required reimplementing defun, etc.
And thanks for the "pass" tip for Python. I gave up using Python a few months ago, and ported all my Python scripts to C and ObjC because I just couldn't handle indentation bugs after every refactor. Never realised the solution could be this simple.
you mention compiling down Lisp code. Did that come with many restrictions, in the end, on how you wrote the systems? Or would you basically write Lisp "as-is" and get decent results?
> Tooth's processors didn't have nearly enough RAM to run Lisp directly [1] so instead we used a custom-designed compiler written in Lisp to generate 6811 code.
Was thinking about this point in the 2002 article.
As a Python user, I could imagine a lot of pain going into "try and compile my Python code" (notably downstream of hashing and operator overloading, meaning that I might end up shipping the entire Python object model in my binary). But perhaps Common Lisp avoids a lot of that mess by being "just" functions?
and it looks just like normal code, but really you are just passing a data structure to the compile-bunny function. The direct equivalent in Python is awful:
but you can build up a pretty decent EDSL in Python for this sort of thing with operator overloading, similar to what Sympy does:
x.set(y[i] + x).compile()
That looks a lot more like normal Python code, but getting it working is a bigger hassle than in more conventional Lisps where you can just quote whatever S-expression. (Of course, your compiler has to handle each type of expression sooner or later, so this isn't as big a deal as it might sound.)
But neither one of these is compiling your Python code. Python's bytecode may give you a head start on that if you do want to do it.
> your compiler has to handle each type of expression sooner or later, so this isn't as big a deal as it might sound
It's a bigger deal than you think. Not only can you leverage the CL parser on the front end so that you can convert code to data simply by prefixing it with a quote, but you can also leverage the CL compiler on the back end to generate the final code. So you don't have to build a complete back end. All you have to do is translate your language into Common Lisp and then pass that on to the CL compiler. That is almost always orders of magnitude simpler than writing a complete back end. The only time it doesn't work is if your language has semantics that are fundamentally different from CL. (The classic example of this is call/cc.) But 99% of the time you don't need this, and so 99% of the time writing a compiler in CL is 1% of the work it otherwise would be.
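Here is a toy sketch of that workflow (the mini-language and the TRANSLATE/COMPILE-TOY names are invented for illustration; the point is that the front end is QUOTE and the back end is the host's COMPILE):
```
;; Source in the toy language is just a quoted s-expression:
;; the CL reader is the parser.
(defun translate (form)
  "Translate a toy-language form into ordinary Common Lisp."
  (if (atom form)
      form
      (ecase (first form)
        (add `(+ ,@(mapcar #'translate (rest form))))
        (mul `(* ,@(mapcar #'translate (rest form)))))))

;; The CL compiler is the back end: no native code generation of our own.
(defun compile-toy (args body)
  (compile nil `(lambda ,args ,(translate body))))

;; (funcall (compile-toy '(x y) '(add x (mul y y))) 2 3) => 11
```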
This is awkward because I'm mansplaining your own project back to you, but, as I understand it, you were compiling to the 68(HC?)11, which your CL compiler back end didn't support, so I don't think it helped you in that way, to generate the final code. Unless I've misunderstood something fundamental?
Depending on how the compiler was structured, it could have helped you in some other way, like doing target-independent optimizations, if there was a way to get some kind of IR out of it instead of native code for the wrong CPU.
I agree that there are lots of cases where the approach you're describing works very well and has a huge payoff, though I've only done it with C, JS, and Lua compilers rather than CL compilers. Usually it's been more like 20% of the work rather than 1% of the work, but maybe that depends on the level of optimization you're hoping for.
It's a little more complicated than that, and I can't fully explain it in an HN comment. I'd have to give you a crash course in compiler design. But the TL;DR is that if you're building a custom language, Lisp gives you the AST (abstract syntax tree -- look it up if you don't already know) for free. You can also sometimes leverage parts of the Lisp compiler depending on what machine you're targeting, though for the 6811 that was not the case.
yeah I got that you get the AST.... I suppose you get the macro capabilities "for free" as well so in a lot of cases the actual code you're looking to transform might be quite simple?
I was mostly worried if the underlying CL is so dynamic that it would be hard to compile things down cleanly but I'm probably overthinking it.
This reminds me a bit of Emscripten's problem though. Emscripten does "machine code" (I believe LLVM IR) to Javascript translation and in order to get good performance it has to try really hard to rediscover things like switch statements and for loops in the IR, since using those in the JS will lead to good perf.
Anyways, I suppose that the core Lisp semantics are going to be so small that it all works out quite well.
CL and Scheme mostly aren't that dynamic. CL has functions like map and elt which operate on runtime-determined sequence types, but it's more common to use things like mapcar and nth or aref which aren't polymorphic at all. Arithmetic is pretty polymorphic in both, which can be an efficiency problem.
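A quick sketch of that distinction:
```
;; MAP and ELT dispatch on the sequence type at runtime...
(map 'vector #'1+ #(1 2 3))   ; => #(2 3 4)
(elt '(a b c) 1)              ; => B
;; ...while MAPCAR/NTH are list-only and AREF is array-only, so the
;; representation is known up front.
(mapcar #'1+ '(1 2 3))        ; => (2 3 4)
(aref #(1 2 3) 0)             ; => 1
```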
A lot of things you'd do dynamically in Python (with operator overloading or the like) are done statically (with macros, as you allude) in more traditional Lisps like CL and Scheme.
The core Scheme semantics are pretty small. Common Lisp has significantly hairier core semantics.
I left JPL in 2004, never to return :-) I'm mostly retired now but I have a part-time consulting gig that uses CL for developing a bespoke custom chip design tool.
Very cool! Sorry to hear that Clozure CL isn't getting the love it deserves. I really wish the lisp community tried banding together under a single implementation, but we all know about the curse :)
That's what Common Lisp itself is: one Lisp to rule them all. And it actually works pretty well. There are libraries that work across a wide variety of CL implementations.
Yeah, but then there are the Schemes and Racket and Janet and PicoLisp and so on too. Then there are the different CL implementations, right? (e.g., Allegro, SBCL, that old one the Barski book recommends, etc.) Too many implementations and not enough users, I guess.
Heh, thanks! But it really was a bit of whinging when I wrote it. Lisp in space was ultimately a failure, and I've always had very mixed feelings about telling the story. But I'm glad it resonates.
> At the time it was more or less taken for granted that AI work was done in Lisp. C++ barely existed. Perl was brand new. Java was years away. Spacecraft were mostly programmed in assembler, or, if you were really being radical, Ada.
Given the choices, Lisp made a lot of sense when they started. After 2001-2004 there were other options - not to say they were necessarily better, but a mainstream language that enables a large number of people to work together (interchangeably) has its value. Lisp is indeed well suited to "one-of-a-kind, highly dynamic applications that must be developed on extremely tight budgets and schedules" - but it has a reputation for fostering lone geniuses and for being a poor fit for large teams working together and maintaining legacy codebases.
I think the late and slow standardization process hurt it a lot. Too many cooks trying to shove their recipes in. Scheme went too far the other way and made it too minimalistic. Imagine a modern Lisp with a single canonical implementation and a kitchen-sink standard library, like Go. Clojure is the closest we have to that, I think, but running on the JVM also hurt its wider adoption a bit.
While I was at Amazon, just before AWS, the entire internal network was monitored by a Lisp agent. I'm not sure if that is still true but it was kind of secret, and the internal wiki (only a few sentences) that documented its existence was removed with no deletion record.
Right before my position was outsourced to an entire remote overseas team, we had rolled out AAA* which conceivably cut out any unauthorized automated agents from the loop.
I also worked on a team at Amazon that did (still does to this day, as far as I know) lisp development. A system that was responsible for automated customer support workflows. And it had a visual programming environment for solutions architects. I worked on the Java backend of that.
I was just talking to someone about Java and they said that it's outdated but afaik it's still a major part of their internal infrastructure. Ofc, they're trying to eat their own dogfood which means they're trying as much as possible to move to AWS for internal services which is basically a mix of customized OSS services programmed in a wide mix of languages, but I imagine mostly C/++ and other low level languages encapsulated in various virtualization solutions.
> Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem. The story of the Remote Agent bug is an interesting one in and of itself.
Not quite the same scale but one of my personal favourite stories involved hardware buried under a highway. The Ethernet stopped working, but the PoE was still ok. We had the foresight to install a serial line to the console on the equipment too. This meant that I could power cycle the hardware at will (through the managed PoE switch) and talk to the boot loader (U-boot) over serial. While not exactly a REPL in the conventional sense, it had enough functionality to be able to talk directly to the MAC and PHY to determine what was going on.
Sadly we couldn’t convince it to work, even at 10 Mbit. My suspicion is salt water ingress into the vault. What we did manage to do, though… There were just enough tools installed on it that I could cross-compile zmodem at home, convert it to a hex file, upload the hex file by essentially just running cat > on the target, convert it back into a binary using… Perl I think? Or xxd? And then doing the daily data offload over zmodem every night instead of over TCP as was originally planned. It was a crazy weekend…
Neat. I’m curious about long-run serial as someone who’s only done Arduino stuff at 5V: what does industrial serial talking over a miles-long connection look like? Higher voltage? Repeaters?
Often, though evidently not in this case, people use RS422, which is differential, so you can get megabits per second or kilometers, though not both. Shared-bus RS422 is RS485, like LocalTalk or DMX512. The voltages are actually lower than the ±12V normally used by RS232. Converting back and forth between RS232 and RS422 is easy and cheap. https://www.ti.com/lit/an/slla070d/slla070d.pdf is a TI appnote with an overview.
Ahhhh it was standard RS232 which uses -12V and +12V instead of the 0-5V signalling that TTL serial uses. Otherwise very similar and easy to convert with eg a MAX232.
A lot of machines that use u-boot won't give you a prompt unless you connect to them over a serial connection. Ideally, these ARM VM8850-and-up based netbooks should have a u-boot prompt with a keybinding. Also, it would be far better if they had an Open Firmware (Forth) based BIOS.
> during a task’s release of a lock, but before its actual release, the task may get interrupted by the daemon if the property gets broken. This means that the task terminates without releasing the lock. The error is particularly nasty in the sense that all code, except the lock releasing itself, had been protected against this situation: in case of an interrupt the lock releasing would be executed.
> The modeling effort, i.e. obtaining a PROMELA model from the LISP program, took about 12 man weeks during 6 calendar weeks, while the verification effort took about one week. ... The translation phase was non-trivial and time consuming due to the relative expressive power of LISP when compared with PROMELA.
> Java PathFinder (JPF) is a translator from a non-trivial subset of Java to PROMELA.
> The translator is written in 6000 lines of LISP, and was developed over a period of 8 months. JPF has been applied to a number of case studies, amongst them a 1500 line game server, a NASA file transfer protocol for satellites, and a NASA data transmission protocol for the space shuttle ground control.
I've spent the last 6 months getting into podcasts and you're my favourite. You don't let your opinions get in the way, always have good pacing and have gotten better with time. I'm enjoying going through your pods one by one, starting from the first ones and it's a special joy. Thank you.
I enjoyed that interview, Adam and Ron. Thanks for doing it!
Also, to anyone reading, let me plug the corecursive podcast, in general... the stories that Adam brings out are unwaveringly humane, surprising, illuminating, and encouraging to me, your friendly neighbourhood average joe programmer.
i believe the point is more that bootstrapping a higher-level programming environment up from machine code shouldn't be the sole domain of so-called genius. a forth basically writes itself once you know how the execution model works, but these days we don't focus on teaching the end to end skills required to take arbitrary hardware and target it with a new, simple language.
This; it's not rocket science. I have no university education (maybe something comparable to a community college), and yet I think understanding how a Forth and/or a Lisp bootstraps itself from a small core is something every programmer should experience at least once, even just from books. If SICP for Scheme and the ones for Common Lisp are too complex, the author of https://t3x.org has two or three for Scheme, one of them for almost a micro Common Lisp (smaller than Elisp, actually), a Forth, and several more. Oh, and the code runs on 8086 PCs, Unix, and some of them even under CP/M 2.2. It's crazy, and eye-opening too.
With a Forth you can literally see how floats are built from 'integer' blocks in RAM. Heck, you can even see how floats are done by using integer- and string-related words (printing them). And you can see the 'odd' integer-based stack live once you've entered a double or a float. Binary representation on the spot.
I'm familiar with SICP and t3x. Sorry if my statements made it sound like only a genius could figure this out. I think most coders I know are just trying to solve a concrete problem with the existing building blocks. Really smart (or interested) people tend to go a bit broader and deeper. I don't think HN folks are very representative of the overall coding industry - not even close.
In every country in Europe, in order to call yourself a CS engineer, you must have a Bachelor's degree or higher. The requirements are far higher than for so-called software engineers. Nothing to do with genius; just years of studying.
I work with engineers from Europe and since they immigrated I also assume they are academically inclined, and I don’t think any of them know about writing forth interpreters.
Come on, it's a spectrum. In any given class you'd have a handful of savants who would do that in their sleep, a bunch of those struggling to make a nested loop work, and everything in between.
True evergreen... nostalgic memories of stories from the old 20%-time-and-don't-be-evil Google days. Sigh. I still hope the AI takeoff somehow revives Lisp-as-a-human-friendly-computer-language, or something in that vein.
I think it might, after the blackbox LLM craze has blazed over and people go back to "AI" that is understandable and can be reasoned about. It comes and goes in waves; I am pretty sure there is going to be time again for research into a Lisp/Prolog style of AI.
We used Forth back in the day as well, running on 6811 microcontrollers. Forth was also used on the Galileo Magnetometer, and we used Lisp to develop a patch for it in flight.
Author here. This pops up on HN regularly, which I'm happy to see, but it's pretty dated at this point. Here is a more recent update:
https://blog.rongarret.info/2023/01/lisping-at-jpl-revisited...
And, as always, AMA.
> But the best programming language is like the best wine. Different people like different things, and that's OK. Drink whatever is the best fit for your palate, and code in whatever is the best fit for your brain.
never has anything rang truer
Hi :) You should check out Gerbil Scheme (https://cons.io). It is built on top of Gambit Scheme (https://gambitscheme.org) and has the generics you are looking for.
Oh, I completely forgot about your blog. I only had the JPL page bookmarked.
HN is how I heard the story many years ago, and I posted it again because (a) It's my favourite story and (b) a discussion elsewhere in HN reminded me of it. So I had to.
Didn't expect it to bubble up on the front page... that says a lot about the enduring quality of your field experience. Thanks for publishing about it!
Can you comment on the relation been a live programming environment (no separate "staged" compilation. There is always a dev environment, the REPL, in a deployment), and dynamic types?
The typing available in CL, is it like Python type hints in that they don't affect the meaning of a program whatsoever?
Type declarations in ANSI CL are promises you make to the compiler in order to allow it to generate faster code. The compiler can also use this information to generate compile-time warnings and errors, but it is not required to. This makes CL's native compile-time type system good for making your code fast, not so much for making it reliable. But it's straightforward to layer a proper modern type checker on top of CL, and in fact this has been done. It's called Coalton:
https://coalton-lang.github.io/
IMHO this is the Right Answer: types when you want/need them, dynamism when you don't. It seems like a no-brainer to me. I've never understood why so many people think it has to be one or the other. It seems to me like arguing over whether the window on the bike shed should be on the left or the right. If there is disagreement over this, just put in two windows!
I think you're describing "gradual typing", which had lots of promise and attention, but arguably achieves the worst of both worlds: https://www.ncameron.org/blog/a-response-to-a-decade-of-deve...
What it comes down to is a reconciliation between these two views: https://ncatlab.org/nlab/show/intrinsic+and+extrinsic+views+...
I think the "reliability" you mentioned is "type safety". And I think the dynamicism I sometimes miss in a language with static types is, in a bizarre way, the lack of runtime type errors. If you load a configuration file, necessarily at runtime, but the configuration file has the wrong format, an error should occur. What kind of error? In essence, a type error...
Reusing the type checking machinery at runtime is a benefit to some. At the same time, the possibility of type errors (type unsafety) is a threat to the reliability of programs.
I have trouble wrapping my head around it, but I ultimately want the bikeshed with both windows.
> arguably achieves the worst of both worlds
The author makes a number of unfounded claims, including:
> there is huge benefit in throwing away the prototype
This is at odds with:
https://en.wikipedia.org/wiki/Second-system_effect
And this is the problem, in response to which I hereby formulate Ron's Third Law (*): For any position regarding the proper design of a programming language there exists an authoritative-sounding aphorism to support it.
> good type inference ... good language design
This is begging the question. Of course* you want good type inference and good language design. Who in their right mind would advocate for bad type inference and bad language design? The problem is specifying what the word "good" means* here.
---
(*) Ron's first two laws are: 1) All extreme positions are wrong, and 2) The hardest part of getting what you want is figuring out what it is. (The difficulty of defining "good" is a corollary to this.)
> Ron's Third Law (*): For any position regarding the proper design of a programming language there exists an authoritative-sounding aphorism to support it.
And let's not forget the Law's corollary:
"For any such position there also exists an authoritative-sounding aphorism to oppose it."
I agree with everything you said, and just wish you'd respond to the strongest points, not the weakest ;)
Specifically, the two views of typing.
But my fault for linking to a mediocre blog post.
> Specifically, the two views of typing.
That was in the other article, the one I didn't comment on at all.
There are two reasons I didn't comment on it. First, I don't actually understand what the author means by things like "only phrases that satisfy typing judgements have meanings" and "a typing judgement is an assertion that the meaning of a phrase possesses some property". I can kinda-sorta map this onto my intuitions about compile-time and run-time typing, but when I tried to actually drill down into the definitions of things like "phrase" and "typing judgement" I got lost in category theory. Which brings me to the second reason, which is that this is very, very deep rabbit hole. To do it justice I'd have to write a whole blog post at the very least. I could probably write a whole book about it. But here's my best shot at an HN-comment-sized reply:
I've always been skeptical of the whole static typing enterprise because they sweep certain practical issues under the rug of trivial examples. The fundamental problem is that as soon as you start to do arithmetic you run headlong into Godel's theorem. If you can do arithmetic, you can build a Turing machine. So your type system will either reject correct code, accept incorrect code, or make you wait forever for an answer. Pick your poison.
Now, this might not matter in practice. We manage to get a lot of practical stuff done on computers despite the halting problem. But in fact it does turn out to matter in practice because in practice the way typed languages treat arithmetic generally imposes a huge cognitive load on the programmer.
To cite but one example: nearly every typed programming language has a type called Int. It is generally implied (though rarely actually stated) that it is meant to represent a mathematical integer (hence the name). But computers are finite state machines which cannot actually represent mathematical integers. So in practice Int generally means something like Int32 or UInt64 or maybe, if you're lucky, a generic numerical type that will automatically grow into a bignum rather than overflow or (if you're unlucky) wrap around if you try to add the wrong values.
Sometimes these details matter and sometimes they don't. Sometimes when I'm adding things I really care about how it's done under the hood. Maybe I'm writing something mission-critical that can absolutely not fail, or maybe I'm writing an inner loop that has to run as fast as possible at the cost of possibly failing on occasion. But sometimes I don't care about any of this and I just want to add, say, two elliptic curve points by writing P1+P2 rather than EllipticCurveAdd(P1,P2) -- or was that EdwardsCurveAdd(P1,P2)? or Curve25519donnaPointsAdd(P1,P2)? -- and I don't care if it takes a few millisecond because I'm just noodling around with something. If I have to take even 30 seconds to read the docs to remember what the name of the function is that adds elliptic curve points, I've lost. It doesn't matter so much if I only have to do it once, but in practice this sort of cognitive load infects every nook and cranny of a real program. Thirty seconds might not matter much if you only have to do it once. But if you have to do it all the time it can be the difference between getting your code to run in a week and getting it to run in ten minutes. And God help you if you should ever have to make a change to an existing code base.
Those are the sorts of things I care about. Those concerns seem worlds away from the kinds of things type theorists care about.
> Those concerns seem worlds away from the kinds of things type theorists care about.
It’s true there’s a tradition, voiced by Dijkstra, that emphasizes correctness at a point in time, sometimes at the expense of long-term adaptability.
> “Unfathomed misunderstanding is further revealed by the term software maintenance, as a result of which many people continue to believe that programs – and even programming languages themselves – are subject to wear and tear. Your car needs maintenance too, doesn’t it? Famous is the story of an oil company that believed that its PASCAL programs did not last as long as its FORTRAN programs ‘because PASCAL was not maintained’. “ (Dijkstra, “On the cruelty of really teaching computer science“, EWD1036 (1988))
As for Gödel, I’d say invoking incompleteness is like invoking Russell’s paradox — important for foundations, but often a distraction in practice. And ironically, type theory itself arose to tame Russell’s paradox. So while Gödel tells us no system can prove everything, that doesn’t undercut the usefulness of partial, checkable systems — which is the real aim of most type-theoretic tools.
Among the “three poisons,” I’m most comfortable rejecting “correct” code. If a system can’t see your reasoning, how sure are you it’s correct? Better to strengthen the language until it can express your argument. And since that relies on pushing the frontiers of our knowledge, then there are times when you need an escape hatch --- that is 50% of how I understand "dynamicism".
The intrinsic vs. extrinsic view of types cuts to the heart of this. The extrinsic (Curry) view — types as sets of values --- aligns with tools like abstract interpretation, where we overlay semantic properties onto untyped code. The intrinsic (Church) view builds meaning into the syntax itself. In practice, we need both: freedom to sketch, and structure to grow.
-----
On the cognitive load of remembering names like `EllipticCurveAdd(P1, P2)` vs. just writing `P1 + P2` --- that pain is real, but I don't really see it as being about static typing. It’s more about whether the system supports good abstraction. Having a single name like `+` across many domains is possible in static systems --- that’s exactly what polymorphism and type classes are for. The most comfortable system for me has been Julia, because of pervasive multiple dispatch, and this focus on generic programming correctness. I don't think this works well unless the multiple dispatch is indeed pervasive (which requires attention paid to performance), and I'm pretty sure CLOS falls short here (I am no expert on this).
The difference between Julia and "static types" here is more about whether you are forced to content with precisely the meaning of `+`. In the type class view, you bundle operations like `+` and `*` into something like "Field". Julia is much more sloppier and simultaneously flexible. It does not have formal interfaces (yet), which allows interfaces to emerge organically in the ecosystem. It is also a pretty major footgun in very large production use cases....
FWIW, I have been a huge fan of "Lisping at JPL" for many years now and it is validating in many ways. I especially enjoyed the podcast with Adam Gordon Bell. This is also validating: https://mihaiolteanu.me/defense-of-lisp-macros (discussed on HN).
I think we are more or less in violent agreement here. Our disagreements are on the fringes, and could well just be a result of my ignorance or misunderstanding. That said...
> invoking incompleteness is like invoking Russell’s paradox — important for foundations, but often a distraction in practice
Yes, but the operative word here is "often". "Often a distraction" is logically equivalent to "sometimes relevant". The problem is that whether or not it is relevant to you depends on your goals, and different people have different goals. For example, one of my goals is pedagogy, and so it's really handy for me to be able to fire up a CL REPL and do this:
so that I can tell a student, "See? i to the i is a real number! Isn't that cool?" But if your goal is to build a social media site that has no value for you. Different strokes.> The intrinsic (Church) view builds meaning into the syntax itself.
This is what I don't get. I can't even wring any coherent meaning out of the phrase "build meaning into the syntax itself". Programs don't have meaning, they are descriptions of processes. Programs don't mean things, they do things.
Even for natural language sentences, which do mean things, I don't understand what it could possibly mean to build meaning into the syntax. "The dog ate my license plate" has meaning but "The dog ate my car crash" does not despite the fact that those two sentences are syntactically identical.
(BTW, imbuing syntax with meaning sounds more like Chomsky than Church to me. But what do I know?)
> I’m most comfortable rejecting “correct” code.
Again, one of my goals is pedagogy. Towards that goal I once wrote this:
https://flownet.com/ron/lambda-calculus.html
After writing that, I thought this might be a good time to learn Haskell. If CL is good for showing off the lambda calculus surely Haskell will be even better? I'm guessing I don't need to explain to you why that did not go to plan.
And BTW, thanks for the kind words.
Ah, your example reminds me of a quirk in Julia where some methods are type stable, and others are not. (This is just an aside. I'll provide a response in another comment.)
The justification is "type stability". The return type of a function should be constant across different values of the same input type. This is not a requirement, but a guidance that informs much of the design, including the DomainError on `sqrt(-1)`, instead of returning a complex result.However, the same choice is not made universally:
```> ERROR: DomainError with -1.0: > sqrt was called with a negative real argument but will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).
This is the sort of thing that drives me absolutely nuts. In no possible world that I can imagine is generating an error a more useful result than returning a complex value. I could forgive it if the system didn't know about complex numbers, but in this case it is clear that the system is perfectly capable of producing a correct and useful result, but it just refuses to do so in service of some stupid ideology. I have zero tolerance for this sort of nonsense.
I'm not sure how familiar you are with Julia, but it is very spiritually aligned with lisp. The reason for the DomainError is "type stability" and it's not stupid ideology unless you consider high-performing numerical code stupid ideology.
To your point, goals can differ.
I know very little about Julia. It is on my list of Things To Look Into Some Day.
> type stability [is] not stupid ideology unless you consider high-performing numerical code stupid ideology.
"Stupid ideology" may have been putting it a bit strongly, and there are times when I want numerical code to be performant. But I don't always want that, and I don't want a language that makes me pay for it even when I don't want it. Almost always I prefer high-fidelity [1] over speed. But even when I want speed, I always want to start with high-fidelity, get that working, and then make it fast if I need to. 99% of the time I don't need to because 99% of my code turns out to be I/O bound or memory-latency bound. It's extremely rare for my code to be compute-bound, and even in those cases I can almost always just find a C library somewhere that someone else has written that I can call through an FFI. So for me, a programming language that forces me to pay for run-time performance whether I want it or not has negative value.
But honestly, I can't actually think of a single instance in my entire 40 year career when the run-time performance of my CL code was a limiting factor. The limiting factor to my productivity has almost always been my typing speed and the speed of my internet connection.
---
[1] By "high fidelity" I mean how well the semantics of the language reflects the domain model, which, in the case of numerical code, is mathematics. Fixnums and floats are a fast but low-fidelity model because they don't actually behave like the integers and reals that they purport to model. Bignums and rationals are a high(er)-fidelity model. A really high fidelity model would let me do something like (* (sqrt 2) (sqrt 2)) and get back exactly 2 by having some sort of exact representation of algebraic numbers. I don't know of any programming language that provides that natively, but some languages, like CL and Python, let me extend the language to add that kind of capability myself. For me, that kind of extensibility is table stakes.
> The intrinsic (Church) view builds meaning into the syntax itself.
I shouldn’t have phrased it that way --- the distinction between syntax and semantics is awkward for me, especially when there are discussions of "static" vs "dynamic" semantics. What I meant is closer to this: in intrinsic systems, the typing rules define which terms are meaningful at all. Terms (programs) that do not type do not have meaning. This is the sense in which "correct programs get rejected".
> Programs don't have meaning, they are descriptions of processes.
Despite my confusion about the previous point, one clear thing I can point to, however, is the funny realization that there are some programming languages where the primary point is not to run the program, but just to type check it. This is the case for many programs written in languages such as Lean and Rocq. In these situations, the programs do have meaning and are not descriptions of processes. A much more common example that makes it clear that programs aren't just descriptions of processes is in the "declarative" languages. In SQL different execution plans might implement the same logical query, and writing down the SQL is less about describing a process and more about assertion or specification.
> depends on your goals
That’s fair, and I agree (perhaps violently??) I focused on your mention of Gödel's incompleteness because it’s often brought up (as I think you did here) to cast doubt on the viability of static typing in practice, but I don't think it does that at all. I think what casts much more doubt is a connection that I have felt is missing for a while now, which is about interactivity or perhaps "live programming".
The Lean and Rocq interactive theorem proving environments are very "live", but not with a REPL. It's fascinating. But it's also because those programs, in a sense, don't do anything...
There is a type theory rabbit hole, for sure. I think there is also a Smalltalk and Lisp rabbit hole. I would like them to meet somehow.
> Lean and Rocq
Yep, I get that. But I don't think of these as programming languages, I think of them as proof assistants. Related for sure, but different enough that IMHO it is unwise to conflate them.
> SQL
Again, a whole different beast. Database query languages are distinct from programming languages which are distinct from proof assistants. All related, but IMHO they ought not to be conflated. There is a reason that SQLite3 is written in C and not in SQL.
> it’s often brought up (as I think you did here) to cast doubt on the viability of static typing in practice
I bring up Gödel just to show that static typing cannot be a panacea. You are necessarily going to have false positives or false negatives or both. If you're OK with that, fine, more power to you. But don't judge me if I'm not OK with it.
> There is a type theory rabbit hole, for sure. I think there is also a Smalltalk and Lisp rabbit hole. I would like them to meet somehow.
You want Coalton :-)
https://coalton-lang.github.io/
> But I don't think of these as programming language
I don’t want to get too caught up in what counts as a programming language, but you can absolutely write programs in a language like Lean, e.g. computing factorial 10. The fact that you can also write proofs doesn’t negate a system's status as a programming language. It’s worth noting that the official Lean documentation reflects this dual use: https://lean-lang.org/documentation/.
So there’s no conflation... just a system that supports both orientations, depending on how you use it.
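For instance, something along these lines is perfectly ordinary Lean (a small sketch from memory, so take the exact syntax with a grain of salt):

    def fact : Nat → Nat
      | 0     => 1
      | n + 1 => (n + 1) * fact n

    #eval fact 10   -- 3628800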
> SQL [...] ought not to be conflated
I agree that SQL isn’t a general-purpose programming language, but you said something to the effect of "I don't understand what meaning a program has beyond what it does", and if you know about SQL, then I think you do understand that a program can have "computational meaning" that is separate from its meaning when you run it through an interpreter or processor.
> You want Coalton :-)
Thanks for the pointer. I am going to read https://coalton-lang.github.io/20211212-typeclasses/, hoping to eventually achieve clarity on the relation between multiple dispatch and type classes.
That said, my interest in generic programming pushes pretty hard toward dependent types. I'm increasingly convinced that many patterns in dynamic programs, especially those involving runtime dispatch, require dependent types in order to be typed. Coalton might also help me understand the relation between interactivity and static typing. But it's not going to be a panacea either ;)
> I bring up Gödel just to show that static typing cannot be a panacea. You are necessarily going to have false positives or false negatives or both. If you're OK with that, fine, more power to you. But don't judge me if I'm not OK with it.
If you want a static type system that is sound and decidable, it will necessarily reject some programs that would behave just fine at runtime. That’s not really Gödel’s fault. It follows more directly from the halting problem.
But that's a quibble, and I still know what to make of your stance. The thing you're not okay with is having the system "get in your way", which is your false positive (the polarity doesn't matter if that's the part I'm getting wrong). The point is that you want to say "my program is correct, now run it for me", and you will hate it if the system rejects it.
What you don't hate is runtime type errors.
There is no judgement on my side about your preferences. But I'm confused. If you don't have any static type checking, then it's like having a system that always gives negatives, and some of those will be false (again, assuming I got the polarity right; if not, just swap here and earlier). I'm pretty sure you're okay with this, even though the text of your last message leads me to believe otherwise.
To summarize, there’s a fundamental trade-off:
- If you want a sound type system (never accepts unsafe programs),
- And you want it to be decidable (terminates on all inputs),
- Then it must reject some safe programs.
If you're not okay with that, then... sorry :D
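A toy illustration (in CL syntax, with made-up names) of the kind of program caught in that trade-off: the string arithmetic below can never actually execute, because (* 2 n) is always even, but a sound, decidable checker that can't prove that fact has to account for both branches and flag the program anyway.

    (defun tricky (n)
      (declare (type integer n))
      (if (evenp (* 2 n))
          (+ n 1)                  ; the only branch that ever runs
          (+ "unreachable" 1)))    ; dead code, but the checker can't know that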
> you want to say "my program is correct, now run it for me"
No. I want to say "Run this program whether or not it is correct because I just want to see what it does in its current state." Sometimes I know that it will produce an error and what I want is the information in the backtrace.
> What you don't hate is runtime type errors.
It depends. I hate "NIL is not of the expected type" errors. So I have a bee in my bonnet about that too; it's just not something that comes up nearly as often as static-vs-dynamic. I think typed nulls are a good idea, but very few languages have those.
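The canonical way this bites me (a REPL-ism, not from any real program):

    ;; GETHASH returns NIL both for "not found" and for a stored NIL,
    ;; and the NIL then blows up somewhere downstream:
    (let ((table (make-hash-table)))
      (1+ (gethash 'missing table)))
    ;; => an error along the lines of "The value NIL is not of type NUMBER"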
> If you don't have any static type checking
I love static type checking. The more information I can get at compile time the better. I just think that the output of the type checker should be warnings rather than errors.
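For example (this is roughly SBCL's behaviour as I recall it; other implementations differ in how loudly they complain):

    (defun add1 (x)
      (declare (type fixnum x))
      (the fixnum (+ x 1)))

    (defun oops ()
      (add1 "not a number"))   ; flagged at compile time, but still compiled;
                               ; the error, if any, happens when OOPS is called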
Thanks for sharing the BINDING-BLOCK macro, it solves one of my perennial problems with Lispy languages (I believe I have ranted about this on HN too). The best I could come up with was something that required reimplementing defun, etc.
And thanks for the "pass" tip for Python. I gave up using Python a few months ago, and ported all my Python scripts to C and ObjC because I just couldn't handle indentation bugs after every refactor. Never realised the solution could be this simple.
Thanks for the kind words!
You mention compiling down Lisp code. Did that come with many restrictions in the end result for how you were coding systems? Or would you basically write Lisp "as-is" and get decent results?
I'm not sure what you mean by "compiling down Lisp code." Can you elaborate? That phrase doesn't appear in either article.
> Tooth's processors didn't have nearly enough RAM to run Lisp directly [1] so instead we used a custom-designed compiler written in Lisp to generate 6811 code.
Was thinking about this point in the 2002 article.
As a Python user, I could imagine a lot of pain going into "try and compile my Python code" (notably, downstream of hashing and operator overloading, I might end up having to ship the entire Python object model into my binary). But perhaps Common Lisp avoids a lot of that mess by being "just" functions?
In Lisp you can write
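    ;; (illustrative reconstruction -- the original snippet didn't survive
    ;;  formatting; BLINK-LED and WAIT-MS are made-up names)
    (compile-bunny
      '(defun blink ()
         (loop (blink-led) (wait-ms 500))))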
and it looks just like normal code, but really you are just passing a data structure to the compile-bunny function.

The direct equivalent in Python is awful, but you can build up a pretty decent EDSL in Python for this sort of thing with operator overloading, similar to what Sympy does. That looks a lot more like normal Python code, but getting it working is a bigger hassle than in more conventional Lisps, where you can just quote whatever S-expression. (Of course, your compiler has to handle each type of expression sooner or later, so this isn't as big a deal as it might sound.)

But neither one of these is compiling your Python code. Python's bytecode may give you a head start on that if you do want to do it.
> your compiler has to handle each type of expression sooner or later, so this isn't as big a deal as it might sound
It's a bigger deal than you think. Not only can you leverage the CL parser on the front end so that you can convert code to data simply by prefixing it with a quote, but you can also leverage the CL compiler on the back end to generate the final code. So you don't have to build a complete back end. All you have to do is translate your language into Common Lisp and then pass that on to the CL compiler. That is almost always orders of magnitude simpler than writing a complete back end. The only time it doesn't work is if your language has semantics that are fundamentally different from CL. (The classic example of this is call/cc.) But 99% of the time you don't need this, and so 99% of the time writing a compiler in CL is 1% of the work it otherwise would be.
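A minimal sketch of the pattern (the toy infix language and every name here are invented for illustration): translate your surface syntax into ordinary Lisp forms, then let the built-in COMPILE do the entire back end.

    (defun translate (expr)
      "Turn a toy infix expression like (1 + (2 * 3)) into Lisp prefix form."
      (if (atom expr)
          expr
          (destructuring-bind (lhs op rhs) expr
            (list op (translate lhs) (translate rhs)))))

    (defun compile-toy (params expr)
      "Compile a toy expression to native code via the CL compiler."
      (compile nil `(lambda ,params ,(translate expr))))

    ;; (funcall (compile-toy '(x) '(x + (2 * x))) 5)  => 15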
This is awkward because I'm mansplaining your own project back to you, but, as I understand it, you were compiling to the 68(HC?)11, which your CL compiler back end didn't support, so I don't think it helped you in that way, to generate the final code. Unless I've misunderstood something fundamental?
Depending on how the compiler was structured, it could have helped you in some other way, like doing target-independent optimizations, if there was a way to get some kind of IR out of it instead of native code for the wrong CPU.
I agree that there are lots of cases where the approach you're describing works very well and has a huge payoff, though I've only done it with C, JS, and Lua compilers rather than CL compilers. Usually it's been more like 20% of the work rather than 1% of the work, but maybe that depends on the level of optimization you're hoping for.
You are correct. I was talking about writing compilers in CL in general, not specifically for the 6811.
Oh, I agree then.
It's a little more complicated than that, and I can't fully explain it in an HN comment. I'd have to give you a crash course in compiler design. But the TL;DR is that if you're building a custom language, Lisp gives you the AST (abstract syntax tree -- look it up if you don't already know) for free. You can also sometimes leverage parts of the Lisp compiler depending on what machine you're targeting, though for the 6811 that was not the case.
yeah I got that you get the AST.... I suppose you get the macro capabilities "for free" as well so in a lot of cases the actual code you're looking to transform might be quite simple?
I was mostly worried if the underlying CL is so dynamic that it would be hard to compile things down cleanly but I'm probably overthinking it.
This reminds me a bit of Emscripten's problem though. Emscripten does "machine code" (I believe LLVM IR) to Javascript translation and in order to get good performance it has to try really hard to rediscover things like switch statements and for loops in the IR, since using those in the JS will lead to good perf.
Anyways, I suppose that the core Lisp semantics are going to be so small that it all works out quite well.
CL and Scheme mostly aren't that dynamic. CL has functions like map and elt which operate on runtime-determined sequence types, but it's more common to use things like mapcar and nth or aref which aren't polymorphic at all. Arithmetic is pretty polymorphic in both, which can be an efficiency problem.
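For instance:

    ;; MAP picks the result type and handles any sequence type at run time:
    (map 'vector #'1+ '(1 2 3))   ; => #(2 3 4)
    (map 'list   #'1+ #(1 2 3))   ; => (2 3 4)

    ;; MAPCAR and AREF are monomorphic: lists only, arrays only.
    (mapcar #'1+ '(1 2 3))        ; => (2 3 4)
    (aref (vector 1 2 3) 1)       ; => 2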
A lot of things you'd do dynamically in Python (with operator overloading or the like) are done statically (with macros, as you allude) in more traditional Lisps like CL and Scheme.
The core Scheme semantics are pretty small. Common Lisp has significantly hairier core semantics.
I imagine - though I don't know - it might have some similarities with GOAL, see https://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp and https://opengoal.dev/docs/intro/
Are you still writing lisp at JPL or NASA, or as a hobbyist?
I left JPL in 2004, never to return :-) I'm mostly retired now, but I have a part-time consulting gig that uses CL for developing a bespoke chip-design tool.
Very cool! Sorry to hear that Clozure CL isn't getting the love it deserves. I really wish the Lisp community would band together under a single implementation, but we all know about the curse :)
That's what Common Lisp itself is: one Lisp to rule them all. And it actually works pretty well. There are libraries that work across a wide variety of CL implementations.
Yeah, but then there are the Schemes and Racket and Janet and PicoLisp and so on too. And there are the different CL implementations, right (e.g., Allegro, SBCL, that old one the Barski book recommends, etc.)? Too many implementations and not enough users, I guess.
Would you send another Lisp program to space, just for the joy of it?
Of course! :-)
Related. Others?
Lisping at JPL Revisited - https://news.ycombinator.com/item?id=34557347 - Jan 2023 (100 comments)
The rise and fall of Lisp at the Jet Propulsion Lab (2002) - https://news.ycombinator.com/item?id=34524552 - Jan 2023 (145 comments)
Lisping at JPL (2020) - https://news.ycombinator.com/item?id=28113434 - Aug 2021 (9 comments)
Lisping at JPL (2002) - https://news.ycombinator.com/item?id=22087419 - Jan 2020 (307 comments)
Lisping at JPL (2002) - https://news.ycombinator.com/item?id=13626074 - Feb 2017 (37 comments)
Lisping at JPL (2002) - https://news.ycombinator.com/item?id=7989328 - July 2014 (19 comments)
The rise and fall of Lisp at the Jet Propulsion Lab (2002) - https://news.ycombinator.com/item?id=2212211 - Feb 2011 (36 comments)
The Rise and Fall of Lisp at the Jet Propulsion Lab. - https://news.ycombinator.com/item?id=304736 - Sept 2008 (72 comments)
Wow. I never imagined that this bit of whinging would end up being my ticket to immortality. :-/
I (we) don't see it as whinging, but as a bit of badass computer history :)
Heh, thanks! But it really was a bit of whinging when I wrote it. Lisp in space was ultimately a failure, and I've always had very mixed feelings about telling the story. But I'm glad it resonates.
> At the time it was more or less taken for granted that AI work was done in Lisp. C++ barely existed. Perl was brand new. Java was years away. Spacecraft were mostly programmed in assembler, or, if you were really being radical, Ada.
Given the choices, Lisp made a lot of sense when they started. After 2001-2004 there were other options - not to say they were necessarily better, but a mainstream language that enables a large number of people to work together (interchangeably) has its value. Lisp is indeed well suited to "one-of-a-kind, highly dynamic applications that must be developed on extremely tight budgets and schedules" - but it has a reputation for fostering lone geniuses and being bad for large teams working together and maintaining legacy codebases.
(I write this as a big fan of Lisp.)
I think the late and slow standardization process hurt it a lot. Too many cooks trying to shove their recipes in. Scheme went too far the other way and made it too minimalistic. Imagine a modern Lisp with a single canonical implementation and a kitchen-sink library, like Go. Clojure is the closest we have to that, I think, but running on the JVM also hurt its wider adoption a bit.
While I was at Amazon, just before AWS, the entire internal network was monitored by a Lisp agent. I'm not sure if that is still true, but it was kind of secret, and the internal wiki page (only a few sentences) that documented its existence was removed with no deletion record.
Right before my position was outsourced to an entire remote overseas team, we had rolled out AAA* which conceivably cut out any unauthorized automated agents from the loop.
* https://www.geeksforgeeks.org/what-is-aaa-authentication-aut...
I also worked on a team at Amazon that did (still does to this day, as far as I know) lisp development. A system that was responsible for automated customer support workflows. And it had a visual programming environment for solutions architects. I worked on the Java backend of that.
I was just talking to someone about Java, and they said that it's outdated, but AFAIK it's still a major part of their internal infrastructure. Of course, they're trying to eat their own dogfood, which means moving as much as possible to AWS for internal services. That, in turn, is basically a mix of customized OSS services programmed in a wide mix of languages, but I imagine mostly C/C++ and other low-level languages encapsulated in various virtualization solutions.
Tells one of my all-time favourite stories.
> 1994-1999 - Remote Agent
> Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem. The story of the Remote Agent bug is an interesting one in and of itself.
Not quite the same scale but one of my personal favourite stories involved hardware buried under a highway. The Ethernet stopped working, but the PoE was still ok. We had the foresight to install a serial line to the console on the equipment too. This meant that I could power cycle the hardware at will (through the managed PoE switch) and talk to the boot loader (U-boot) over serial. While not exactly a REPL in the conventional sense, it had enough functionality to be able to talk directly to the MAC and PHY to determine what was going on.
Sadly we couldn’t convince it to work, even at 10 Mbit. My suspicion is salt water ingress into the vault. What we did manage to do, though… There were just enough tools installed on it that I could cross-compile zmodem at home, convert it to a hex file, upload the hex file by essentially just running cat > on the target, convert it back into a binary using… Perl I think? Or xxd? And then doing the daily data offload over zmodem every night instead of over TCP as was originally planned. It was a crazy weekend…
Neat. I'm curious about long-run serial: as someone who's only done Arduino stuff at 5V, what does industrial serial talking over a miles-long connection look like? Higher voltage? Repeaters?
Often, though evidently not in this case, people use RS422, which is differential, so you can get megabits per second or kilometers, though not both. Shared-bus RS422 is RS485, like LocalTalk or DMX512. The voltages are actually lower than the ±12V normally used by RS232. Converting back and forth between RS232 and RS422 is easy and cheap. https://www.ti.com/lit/an/slla070d/slla070d.pdf is a TI appnote with an overview.
Ahhhh, it was standard RS232, which uses -12V and +12V instead of the 0-5V signalling that TTL serial uses. Otherwise it's very similar, and easy to convert with e.g. a MAX232.
Compared to Forth, u-boot is very odd to use.
You can load things into memory and run them with u-boot, so somewhat similar. :)
Can you please elaborate? Thanks.
A lot of machines that use u-boot won't give you a prompt unless you connect to them over a serial connection. Ideally, these ARM VM8850-and-up based netbooks should have a u-boot prompt available with a keybinding. Also, it would be far better if they had an Open Firmware (Forth-based) BIOS.
Thanks, never realized Open Firmware is written in Forth!
Unfortunately, the relevant link in the TFA is dead, but IA has it: <https://web.archive.org/web/20111019054900/http://ti.arc.nas...>.
Here are some interesting quotes:
> during a task’s release of a lock, but before its actual release, the task may get interrupted by the daemon if the property gets broken. This means that the task terminates without releasing the lock. The error is particularly nasty in the sense that all code, except the lock releasing itself, had been protected against this situation: in case of an interrupt the lock releasing would be executed.
> The modeling effort, i.e. obtaining a PROMELA model from the LISP program, took about 12 man weeks during 6 calendar weeks, while the verification effort took about one week. ... The translation phase was non-trivial and time consuming due to the relative expressive power of LISP when compared with PROMELA.
> Java PathFinder (JPF) is a translator from a non-trivial subset of Java to PROMELA.
> The translator is written in 6000 lines of LISP, and was developed over a period of 8 months. JPF has been applied to a number of case studies, amongst them a 1500 line game server, a NASA file transfer protocol for satellites, and a NASA data transmission protocol for the space shuttle ground control.
Version with some pictures here:
https://corecursive.com/lisp-in-space-with-ron-garret/
It's an interview I did with Ron Garret about the history of Lisp at JPL.
I've spent the last 6 months getting into podcasts and you're my favourite. You don't let your opinions get in the way, always have good pacing and have gotten better with time. I'm enjoying going through your pods one by one, starting from the first ones and it's a special joy. Thank you.
Thanks!!
I enjoyed that interview, Adam and Ron. Thanks for doing it!
Also, to anyone reading, let me plug the corecursive podcast, in general... the stories that Adam brings out are unwaveringly humane, surprising, illuminating, and encouraging to me, your friendly neighbourhood average joe programmer.
Thanks!! The podcast is basically powered by positive feedback so I appreciate it.
> The demise of Lisp at JPL is a tragedy. The language is particularly well suited for the kind of software development that is often done here
That is a shame, but the same can be said about many languages of times past. Do schools even teach Lisp these days?
IMO, another casualty of our Web-only environment :(
Very tiny Lisps, Forths, and some Pascal-likes at http://t3x.org
One of them implements whole numbers as lists. I saw no floats, but there are fractional numbers.
If you want to know what Lisp is truly about:
Easy mode:
http://t3x.org/zsp/index.html
You are Alonzo Church reincarnated:
http://t3x.org/clc/code.html
>engineer >can't do anything with 128MB. In 2002.
In my country, an engineer with a bachelor's degree would implement a Forth in a few KB within days, just by reading the specs or books on building one.
A micro-Lisp, maybe in weeks.
That's an interesting statement. I think that would be rare in the states, but there are plenty of genius coders out there.
I believe the point is more that bootstrapping a higher-level programming environment up from machine code shouldn't be the sole domain of so-called geniuses. A Forth basically writes itself once you know how the execution model works, but these days we don't focus on teaching the end-to-end skills required to take arbitrary hardware and target it with a new, simple language.
This; it's not rocket science. I am not university educated (maybe something comparable to a community college), and yet understanding how a Forth and/or a Lisp bootstraps itself from a small core is something every programmer should experience at least once, even just from books. If SICP for Scheme and the equivalent books for Common Lisp are too complex, the author of https://t3x.org has two or three for Scheme, one for almost a micro Common Lisp (smaller than Elisp, actually), a Forth, and several more. Oh, and the code runs on 8086 PCs, Unix, and some of it even under CP/M 2.2. It's crazy, and eye-opening too.
With a Forth you can literally see how floats are built from 'integer' blocks in RAM. Heck, you can even see how floats are implemented by using the integer- and string-related words (printing them). And you can watch the 'odd' integer-based stack live once you've entered a double or a float. Binary representation on the spot.
I'm familiar with SICP and t3x. Sorry if my statements made it sound like only a genius could figure this out. I think most coders I know are just trying to solve a concrete problem with the existing building blocks. Really smart (or interested) people tend to go a bit broader and deeper. I don't think HN folks are very representative of the overall coding industry - not even close.
What do you think this has to do with your country?
In every country in Europe, in order to call yourself a CS engineer you must have a bachelor's degree or higher. The requirements are far higher than for so-called software engineers. Nothing to do with genius; just years of studying.
I work with engineers from Europe, and since they immigrated I also assume they are academically inclined, and I don't think any of them know about writing Forth interpreters.
Come on, it's a spectrum. In any given class you'd have a handful of savants who would do that in their sleep, a bunch of those struggling to make a nested loop work, and everything in between.
Also, as a very tiny and minimal Lisp:
http://t3x.org/zsp/index.html
Easier than SICP for Scheme and Intro to Symbolic Computation for Common Lisp.
A true evergreen... nostalgic memories of stories from the old 20%-time-and-don't-be-evil Google days. Sigh. I still hope an AI takeoff somehow revives Lisp-as-a-human-friendly-computer-language, or something in that vein.
I think it might, after the black-box LLM craze has blazed over and people go back to "AI" that is understandable and can be reasoned about. It comes and goes in waves; I am pretty sure there is going to be time again for research into a Lisp/Prolog style of AI.
Forth would be a good choice too. No GC, use 'forget'.
We used Forth back in the day as well, running on 6811 microcontrollers. Forth was also used on the Galileo Magnetometer, and we used Lisp to develop a patch for it in flight.
https://github.com/rongarret/gll-mag-patch/
Forth, in fact, is. https://www.forth.com/resources/space-applications/
'Was' is more likely. The list is from 2003, and if FORTH, Inc., the (only? biggest?) remaining Forth company, hasn't updated it...
Forth is used in firmware and tons of places. There's ANS Forth and no need to 'update' anything, as there are tons of implementations.
As with Lisp, it can bootstrap itself from very few primitives.
I read a comment on /r/Forth the other day from someone at NASA, implying that Forth was still actively used for something.