The short version appears to be that the author has managed to replace standard linear algebra notation with an alternative (but mathematically equivalent) graphical notation, which is harder to do computations and write programs in.
This submission is the latest in a trend of what I would classify as "math-lite" category theory and Haskell articles that reach the HN front page, which purport to explain something interesting but end up just rehashing standard mathematics in opaque ways.
I really wish this would stop. Fortunately (unfortunately?) I've been down this road a few times and know to avoid getting sucked in, but I can easily see a bright, curious person wasting a lot of time before realizing that the content is merely linguistic, not mathematical.
Is (for example) category theory a useful organizational tool in certain abstract branches of mathematics? Sure (those branches being, basically, algebraic topology and algebraic geometry). Is it a grand unified theory of math? No, it's just some useful vocabulary. Most mathematicians go their entire lives without writing a paper mentioning categories. Getting excited about category theory is like getting excited about matrix notation – useful, sure, but not where the meat is.
I also find any claims that category theory is relevant to the average working programmer to be dubious at best.
(And yes, I realize the graphical notation presented in this article is not category theory. I am making a broader point about certain kinds of articles I see, which also applies here.)
Someone needs to write a buyer's guide for GPUs and LLMs. For example, what's the best course of action if you don't need to train anything but do want to eventually run whatever model becomes the first locally runnable equivalent to ChatGPT? Do you go with Nvidia for the CUDA cores or with AMD for more VRAM? Do you do neither and wait another generation?
Thank you for making an actually relevant point about syntax. I agree with this 100%, and I love Janet, and was recently doing a lot of interactive Janet programming for a generative art playground.
So I added postfix function application: instead of (f (g x)), you can write (g x | f).
I liked the syntax a lot, but it looked really weird with operators: (calculate x | + 1). So I made operators automatically infix: (calculate x + 1).
I also didn't like that the transformation from foo to (foo :something) (record field access) required going back and adding parentheses before the value, so I added foo.something syntax that means the same thing.
The result is something that's very easy for me to type and read:
  (def eyes
    (eye-shapes
      | color (c + 0.5)
      | color (c - (dot normal (normalize eye-target) - 0.72
          | clamp -1 0
          | step 0))
      | color [c.b c.g c.r]))
Is this even Janet anymore? I dunno. It's a Janet dialect, and it's implemented as regular old Janet macros. But it's much easier for me to type like this. I recognize that it makes my code necessarily single-player, but that's fine for the sorts of dumb projects that I do for fun.
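(For the curious: here is a rough sketch of how such a pipe can be implemented as an ordinary macro. It's in Common Lisp for illustration, with an invented name; it is not the author's actual Janet code.)

  (defmacro pipe (x &rest stages)
    ;; Fold each stage around the accumulated expression, so that
    ;; (pipe x (g) (f)) expands to (f (g x)).
    (reduce (lambda (acc stage) (append stage (list acc)))
            stages :initial-value x))

  (pipe 5 (+ 1) (* 2))   ; expands to (* 2 (+ 1 5)) => 12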
I think a lot of lisp programmers use paredit exactly so that they can write (f (g x)) in the order g x f, but with their editor re-positioning their cursor and taking care of paren-wrapping automatically. But I don't use paredit, and I don't want to wed myself to a particular mode of editing anyway. So I like a syntax that lets me type in the order that I think.
Much of the complexity and error reporting that exists in the lexer or parser in a non-Lisp language just gets kicked down the road to a later phase in a Lisp.
Sure, s-exprs are much easier to parse. But the compiler or runtime still needs to report an error when you have an s-expr that is syntactically valid but semantically wrong like:
(let ())
(1 + 2)
(define)
Kicking that down the road is a feature because it lets macros operate at a point in time before that validation has occurred. This means they can accept as input s-exprs that are not semantically valid but will become so after macro expansion.
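A tiny Common Lisp illustration of that feature (the macro name is invented; the machinery is standard): (1 + 2) is not a valid form on its own, but it is a perfectly good s-expr to hand to a macro that rewrites it before evaluation.

  (defmacro infix ((lhs op rhs))
    ;; Destructure the "invalid" s-expr and rearrange it into an
    ;; ordinary prefix call before evaluation ever happens.
    (list op lhs rhs))

  (infix (1 + 2))   ; expands to (+ 1 2) => 3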
But it can be a bug because it means later phases in the compiler and runtime have to do more sanity checking and program validation is woven throughout the entire system. Also, the definition of what "valid" code is for human readers becomes fuzzier.
One way I've come to answer "why Lisp syntax" is through the following proxy question:
> If you want extensible syntax, what should the base syntax be?
The regularity and austerity of Lisp syntax comes from this idea. If Lisp, by default, were loaded up with all sorts of syntactic constructs [1] many of us take for granted today (which may in and of themselves be good!), then it leaves less room for the programmer to extend it in their own way. It turns out that the syntaxes we take for granted today—like for(;;){} loops or pipe | operators—are perfectly serviceable in their S-expression-equivalent form to the working Lisp programmer.
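For instance, C's for(;;){} maps directly onto Common Lisp's standard DO, one of many serviceable spellings:

  ;; for (int i = 0; i < 10; i++) { print(i); }
  (do ((i 0 (+ i 1)))
      ((>= i 10))
    (print i))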
The author is right about why Common Lisp's syntax extension facilities (macros) work; the language is in a sort of syntactic Goldilocks zone.
[1] To properly discuss Lisp, we really ought to distinguish meta-syntax (the parentheses) and syntax (the grammar of symbols and lists). Common Lisp has lots of syntax, like (to give a few representative examples):
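  (let ((x 1) (y 2))                     ; LET's binding-pair syntax
    (+ x y))
  (cond ((= x 1) 'one)                   ; COND's clause syntax
        (t 'many))
  (loop for i from 1 to 10 collect i)    ; LOOP's keyword sub-language
  (defun f (a &optional (b 0))           ; lambda-list syntax
    (list a b))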
All of this is different syntax! These are different rules about how symbols etc. are allowed to be arranged to produce a semantically meaningful program. But they all wear the same clothes of meta-syntax, which is relatively small and mostly based off of the fundamental idea of S-expressions: an expression is either an atom (a symbol, number, string, etc.) or a parenthesized list of expressions.
Ordinary macros allow extension of the former class of syntax, while reader macros allow extension of the latter class of syntax. When talking about "macros" unqualified, we usually mean the former, but Common Lisp supports both.
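To make the distinction concrete, here is one sketch of each. The bracket notation is invented for illustration; the machinery is standard Common Lisp.

  ;; Ordinary macro: extends the grammar of symbols and lists.
  (defmacro unless* (test &body body)
    `(if ,test nil (progn ,@body)))

  ;; Reader macro: extends the meta-syntax itself, so that [a b c]
  ;; reads as (list a b c).
  (set-macro-character #\[
    (lambda (stream char)
      (declare (ignore char))
      (cons 'list (read-delimited-list #\] stream t))))
  (set-macro-character #\] (get-macro-character #\)))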
The classic demonstration is the metacircular evaluator, which uses a Lisp to define itself. This means, roughly, that if you understand enough Lisp to understand this program (and the little recursive offshoots like eval-cond), there is nothing else you have to learn about Lisp. You have officially read the whole language reference, and it is all down to libraries after that. Compare with, e.g., trying to write Rust in Rust: I don't think it could be such a short program, which is part of why it takes years to feel like you fully understand Rust.
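For flavor, here is a drastically shortened sketch of such an evaluator, in Common Lisp. It is a toy with no primitives or error handling, not the exact program being referenced.

  (defun lookup (sym env) (cdr (assoc sym env)))

  (defun eval* (form env)
    (cond ((symbolp form) (lookup form env))
          ((atom form) form)                  ; numbers etc. self-evaluate
          ((eq (car form) 'quote) (second form))
          ((eq (car form) 'if)
           (if (eval* (second form) env)
               (eval* (third form) env)
               (eval* (fourth form) env)))
          ((eq (car form) 'lambda) (list :closure form env))
          (t (apply* (eval* (car form) env)
                     (mapcar (lambda (a) (eval* a env)) (cdr form))))))

  (defun apply* (fn args)
    ;; fn looks like (:closure (lambda (params) body) env)
    (destructuring-bind (tag (lam params body) env) fn
      (declare (ignore tag lam))
      (eval* body (append (pairlis params args) env))))

  (eval* '((lambda (x) (if x x 'nope)) 7) '())   ; => 7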
Indirectly this also means that lisps are very close at hand for "I want to add a scripting language onto this thing but I don't want to, say, embed the whole Lua interpreter," and it allows you to store user programs in, say, a JSON column. You can also adapt this to serialize environments, so that you can send a read-only lexical closure from computer to computer; there are plenty of situations like that.
Aside from the most famous, you have things like this:
1. The heart of logic programming is also only about 50 lines of Scheme, if you want to read that; a sketch of its core operation, unification, appears just after this list.
3. The object model available in Common Lisp was more powerful than languages like Java/C++ because it had to fit into Lisp terms (“the art of the metaobject protocol” was the 1991 book that explained the more powerful substructure lurking underneath this object system), so a CL programmer could maybe use it to write a quick sort of aspect-oriented programming that would match your needs.
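To make item 1 plausible: unification, the core operation of such a system, fits in about a dozen lines. Here is a sketch in Common Lisp (a toy, not the 50-line Scheme program referenced above):

  ;; Variables are symbols starting with ? ; substitutions are alists.
  (defun var-p (x)
    (and (symbolp x) (char= (char (symbol-name x) 0) #\?)))

  (defun walk (term subst)
    ;; Follow a variable through the substitution to its value.
    (let ((b (and (var-p term) (assoc term subst))))
      (if b (walk (cdr b) subst) term)))

  (defun unify (a b subst)
    (let ((a (walk a subst)) (b (walk b subst)))
      (cond ((eql a b) subst)
            ((var-p a) (cons (cons a b) subst))
            ((var-p b) (cons (cons b a) subst))
            ((and (consp a) (consp b))
             (let ((s (unify (car a) (car b) subst)))
               (if (eq s 'fail) 'fail (unify (cdr a) (cdr b) s))))
            (t 'fail))))

  (unify '(f ?x 2) '(f 1 ?y) '())   ; => ((?Y . 2) (?X . 1))

And a small taste of the CLOS point: auxiliary :before and :around methods give you aspect-oriented-style advice using only standard machinery. The class and methods below are invented for illustration.

  (defclass account ()
    ((balance :initform 100 :accessor balance)))

  (defgeneric withdraw (account amount))

  (defmethod withdraw ((a account) amount)
    (decf (balance a) amount))

  ;; "Advice" attached after the fact, without touching the code above:
  (defmethod withdraw :before ((a account) amount)
    (format t "~&audit: withdrawing ~a~%" amount))

  (defmethod withdraw :around ((a account) amount)
    (if (> amount (balance a))
        (error "insufficient funds")
        (call-next-method)))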
There is a strange... kind of poetry with Haskell. It is like math on wheels, math applied to procedures, math with... time.
Its appeal to me is like the appeal of math to me, not like real analysis math but abstract algebra math. The beauty, the purity of mathematics of younger days that once became lost after encountering the sad complexities of the world. Understanding every little aspect and being able to prove every part is now a luxury, and interfacing without real understanding is the more practical approach in the turbulent waters of poorly connected technological and social systems.
But still there is hope and there are dreams. We like to drench ourselves in dream qualia sometimes, and Haskell and pure math are that medium. The abstractions of it, the consistency of it, the purity of it... When Haskell is called a pure language, it almost goes beyond the static definition of functions being pure, and describes the general feeling that occurs when writing Haskell. You feel pure. You feel like you are taking these small parts and creating greater parts in an elegant buildup of abstractions, traversing one level higher and one level lower at your whim.
Lisp... maybe it's the parentheses, maybe it's something else... it never really caught on with me the way Haskell did. Haskell feels pure and dream-like, and perhaps unsuited to a world where (if you really get down to it) abstractions and types are just useful 'human' inventions, unfit for every usage. The world is for getting down and dirty, and mathematics, or at least the pure side of it, really isn't. The representative mathematics of Haskell is Category Theory, and it is about as far from "real" as it can be. More abstract than abstract algebra, if you will.
Abstraction itself is an intellectual operation that is also rooted in emotional detachment. Perhaps Haskell represents that kind of ideal in a modern world where practicality pays before purity.
What's the deal with the name "Kubernetes"? Does it mean anything, or have some tech significance, or is it really just because it's the Greek word for "helmsman"?
Lisp is an acceptable language for a variety of things, not just "scripting". I work on many projects written in Guile Scheme that range from game engines to static site generators to dynamic web applications to package managers, and I write one-off scripting tasks with it, too. There's a lot of mystique around Lisp and Scheme, and people tend to write them off as relics of academia, but I use them for practical needs every day. Lisps enable a type of development via live coding that I have yet to see a non-Lisp match. I've used the REPL for everything from everyday calculator tasks to live coding video games, and I've used it for customizing an X11 window manager in the past, too.
But rather than reading me confess my love for Lisp, I recommend that people just pick a Lisp that looks fun and take it for a spin. You might find a lot to like.