Tree Calculus is awesome, with implications beyond this website.

It's a shame the website doesn't attribute the creator and author, Prof. Barry Jay. (Seems to be a pattern for them, sadly; not sure why.)

See Jay's book on GitHub for more https://github.com/barry-jay-personal/tree-calculus/blob/mas...

Note that it refers to his book on the "Specification" page :)

> Seems to be a pattern for them sadly, not sure why

Can you elaborate? Agreed I could (and will) attribute more explicitly on the website, but the intention is in no way to grab credit. I just posted this reply for more background on everything: https://news.ycombinator.com/item?id=42375914

  • sulam
English is likely not your first language and it’s fairly obvious what you mean, but the word you’re using a lot is spelled “intention”. Not at all how it sounds, stupid English.

(It’s also likely that this minor error is spawned by the use of intension, a very uncommon word, in the description of Tree Calculus.)

Ooops, thanks for catching, typo fixed. That Tree Calculus is intenSional (https://plato.stanford.edu/entries/logic-intensional/) is one of its main selling points, so that spelling must've rewired too many motor neurons.
As I understand, he uses both "intention" and "intension". "Intention" as a desire to do something, and "intensional" as opposite of "extensional": https://www.collinsdictionary.com/dictionary/english/intensi...
Oh cool, his Bondi Language / Pattern Calculus was the first thing I thought of when seeing this, so I guess I wasn't too far wrong.
I think this is really cool. It's at least shaped like something really cool. But I need to have my hand held a little bit more than this page is set up for. Is there like a... for dummies version?
It's a stripped down model of computation like the SKI calculus [0] or its cousin the lambda calculus, which are formal systems with precise rules for mechanically evaluating or reducing expressions.

It differs from the SKI calculus in that it can reflect on its own program structure in ways the SKI calculus cannot - deciding whether two programs are equal, for instance [1]. Further, unlike the lambda calculus, reducing a program with the reduction rules given [2] eventually converges upon a stable "normal form" of the program, which is expressed in irreducible terms, instead of leading to possibly infinite chains of reduction [3]. This allows for reflection without needing to "quote" or serialize the program into a stable data structure or other representation to sidestep the possibility of infinite reduction. This is similar to the notion of homoiconicity in Lisp.

[0] https://en.wikipedia.org/wiki/SKI_combinator_calculus

[1] https://github.com/barry-jay-personal/tree-calculus/blob/mas...

[2] https://treecalcul.us/specification/

[3] https://sci-hub.se/https://dl.acm.org/doi/abs/10.1016/j.tcs....

  • cjfd
"reducing a program with the reduction rules given eventually converges upon a stable "normal form" of the program".

How is that possible if the reduction rules define a Turing complete language?

It would appear that programs even in SKI calculus and other combinatory logics can reduce to base combinators in finitely many steps, but when those combinators are applied to arguments, the possibility of non-termination arises [0]:

    Combinatory logic [9,12] is very close to λ-calculus, and widely considered to be equivalent to it [8], but there are
    important differences. In particular, by modifying the usual account of recursion, i.e. by modifying the usual account of
    fixpoint functions, it is possible to ensure that all programs are represented by stable combinators, i.e. combinators that are
    irreducible or in normal form. Only when applied to arguments does non-termination become possible.
    That is, we may define a program to be a combinator in normal form. As such it is both an executable, since applications
    of it will trigger evaluation, and also a syntax tree, i.e. a binary tree of applications, with leaves labelled by the primitive
    combinators, say S and K.
[0] https://sci-hub.se/https://dl.acm.org/doi/abs/10.1016/j.tcs....
This thread is perhaps phrased confusingly. If I understand correctly, the claim is only that any program (including recursive ones) can be expressed in a finite normal form, which may of course diverge when "run" (applied to an argument). The same is true for basically every language though, e.g. consider the following Python:

    omega = lambda x: x(x)
    omega(omega)(42)
The expression `omega(omega)` diverges, but only because we've applied it to an argument. The function `omega` itself is in an irreducible normal form, namely `lambda x: x(x)`.

On the other hand, consider the function call with argument `42`: in that case, the function is `omega(omega)`, which is not in a normal form, and in fact diverges. In a system like SK, and presumably this tree calculus, we can form a similar expression, where the "function part" (i.e. left sub-tree) diverges with no normal form. That's unavoidable, due to the halting problem.

I think the claim is that we never need to write such expressions, since there's always an alternative which behaves the same but whose "function part" has a normal form.

As a (slightly) more practical example, consider some code like:

    adjust = (lambda x: x + 1) if increase else (lambda x: x - 1)
The function `adjust` is not in a normal form, since it depends on the value of `increase` (when truthy, `adjust` will increment; when falsy it'll decrement). Yet most programmers would probably avoid that, in favour of this:

    adjust = lambda x: x + 1 if increase else x - 1 
Both implementations of `adjust` will behave the same, but the second is in a normal form.
  • calf
I'm rusty, but doesn't `increase` behave differently between the two? The second `adjust` lets you decide to assign a value to the variable `increase` much later, whereas the first `adjust` fixes the value of `increase` at the point of evaluating that very assignment statement. There's a name for this, dynamic scoping or something.
Yes, the two are not identical - in the former, `increase` is not part of the closure; in the latter it is.

Test:

  increase = True

  adjust1 = (lambda x: x + 1) if increase else (lambda x: x - 1)
  adjust2 = lambda x: x + 1 if increase else x - 1

  print(adjust1(42))  # 43
  print(adjust2(42))  # 43

  increase = False

  print(adjust1(42))  # 43 -- the branch was chosen when adjust1 was defined
  print(adjust2(42))  # 41 -- increase is read each time adjust2 is called
Yes, my argument relies on immutability of all values (including `increase`). Also, under lazy evaluation we can call `adjust2` immediately, since we know it's a `lambda`; yet attempting to call `adjust1` will force evaluation of `increase`; not good if `increase = provable(collatz_conjecture)`!
Reading the technical papers around Tree Calculus, I now see this claim a little more clearly. It's based around Tree Calculus rejecting eta-equivalence.

In Lambda Calculus, eta-equivalence says that `λf. λx. f x` is equivalent to `λf. f`, i.e. a function which passes its argument straight into `f` and returns the result unmodified, is indistinguishable from the function `f` itself. Those two functions could beta-reduce in a different order, depending on the evaluation strategy we use; but such distinctions are unobservable from within Lambda Calculus itself. In fact, we can make a rule, called eta-reduction, which transforms one into the other:

    λx. F x --> F
This is sound and, if we apply it before beta-reduction, ensures that both forms will also reduce in the same way (and hence have the same normal forms, if they exist). Note that Python has a single evaluation strategy, which will not reduce an expression like `lambda f: lambda x: f(x)` to `lambda f: f`; hence it doesn't implement eta-reduction.
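
To see that concretely, here's a tiny Python sketch (my own illustration): an eta-expanded wrapper behaves exactly like the function it wraps when called, yet Python keeps them as distinct objects and never reduces one to the other.

    f = abs
    g = lambda x: f(x)   # the eta-expansion of f

    print(f(-3), g(-3))  # 3 3  -- indistinguishable by calling
    print(f is g)        # False -- no eta-reduction happens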

In SK combinatory logic, eta-reduction is also sound; e.g. if 'xyz' reduces to 'yz' then 'xy' is equivalent to 'y'. This is obvious with a combinator like I = SKK, since Iy reduces to y via the ordinary S and K rules, so Iyz reduces to yz in the same way, and the above implication is trivial. However, there are other combinators which don't reduce all the way until they get another argument, i.e. a combinator W where Wy does not reduce to y, but Wyz does reduce to yz (the Tree Calculus papers give examples, referred to as "wait" combinators). It's fine to use an eta-equivalence rule like "if xyz ~= yz then xy ~= y" for reasoning about SK programs, but to actually reduce real expressions we may need a whole bunch of rules, to cover the most common "wait" combinators. The reason this is sound is that the only ways an SK expression can "use" an argument it's applied to are to either (a) discard it, (b) apply some other expression to it (potentially copying it a few times), or (c) apply it to some other expression. Cases (a) and (b) do not depend on the particular value of the expression, and the behaviour of (c) respects eta-equivalence.

(Side note, I recently implemented SK in egglog, and wrote a search procedure for finding eta-equivalent expressions and setting them equal http://www.chriswarbo.net/blog/2024-05-10-sk_logic_in_egglog... )

Tree Calculus can also use a given expression in another way: it can branch on the structure. That doesn't respect eta-equivalence, and hence Tree Calculus can use this to distinguish between two combinators P and Q even if Px == Qx for all x.

Thanks for the links, I'll be reading up on these.

When I signed up for a computer science degree, I was hoping I'd learn this stuff (if not this calculus, then enough context to grapple with it at least). What I actually got was software engineering. Snore.

  • calf
The Theoretical Programming Languages class I took for my minor at Princeton was okay; basically they just taught from Benjamin Pierce's textbook. I'm sure there are other undergrad-level intro class syllabi available, if that's something you have spare time to self-study.
It's a very funny way to structure the main landing page—it takes cues from trendy programming language and framework websites (single-word headings that feel slightly buzzwordy, animated code samples, etc), but the body text is in overwhelmingly dense and lengthy academic language, and yet it doesn't actually provide enough detail in that academic language to really understand what is going on.

I spent a while parsing the paragraphs in the hope of understanding what this is, only to discover that in spite of its verbosity the text on the landing page is no more informative than PL landing pages usually are—it tells you a lot about what the author thinks is great about the language while not actually explaining how any of it works. I guess I need to go to the specification for that?

> it tells you a lot about what the author thinks is great about the language

I did not get so far as to understand that it was a language. I thought maybe it was some kind of higher-order function to be implemented in a language of your choosing. Like a map/reduce sort of thing.

It's an example of Combinatory Logic. For another example check out SKI combinators. SKI is actually kind of close to this, but by using three combinators instead of one it's a bit simpler to understand (IMO).

https://en.wikipedia.org/wiki/Combinatory_logic

https://en.wikipedia.org/wiki/SKI_combinator_calculus

Note that the I combinator is redundant since `I = SK`, so all we really need is `SK`. There are alternatives like Iota with a single combinator, but they're essentially an obfuscated mix of S and K; so it's usually clearer to stick with SK.

Note that the first two rules of this Tree Calculus are precisely those of K and S.

  • tromp
> I = SK

Wrong. I = S K M where M can be anything:

    I x = S K M x = K x (M x) = x
K and S K represent True and False, since K x y = x and S K x y = y.

> essentially an obfuscated mix of S and K

Most alternatives are tuples of S and K like λz. z S K or λz. z K S K. But the arguably shortest one is indeed a proper mix:

    A = λx λy λz. x z (y (λ. z)) = λx λy λz. x z (y (K z))
Oops, sorry; I got part way through writing I = S K K, but went off to another tab to double-check, and forgot to update that expression when I came back!

You're absolutely right.

You know, it's entirely possible that it isn't a language and I just assumed that into existence. It's a very confusing landing page.

Edit: no, it's definitely a language of some sort:

> The syntax and semantics of Tree Calculus are minimal and self-contained, relying on no platform-specific concepts or abstractions. As a result, it is trivial to write interpreters on any platform or in any programming language (demo). ...

> The ability to bootstrap the full power of Tree Calculus anywhere makes it an excellent configuration as code language in a heterogeneous system. Developers could write Tree Calculus programs that generate, say, JSON, and use an interpreter ...

Thanks for the feedback! Tree Calculus is a calculus/logic; see the Specification page or the book by Barry Jay (linked on that page) for way better and more detailed explanations. It only defines what I chose to call "t" on the website; Barry uses "Δ" in his book and papers.

So without anything else, we'd have to talk about programs in terms of "(t t) t ..." or binary trees, which gets unwieldy quickly. The first natural step, for practical purposes, is to allow definitions "foo = ...", then some syntactic sugar for lists, functions, etc. Ooops, and now we have a "language". If you open the "Playground" page there's a summary of what exactly TC is and what is syntactic sugar (and really nothing more!) on top of it.

I like to think that the line is so blurry precisely because TC needs nothing but a bit of syntactic sugar to feel like a usable PL haha.

Right, it's a programming language the way the lambda calculus or pi calculus or whatever are programming languages—I did understand that much!

I love the idea of having a website like this to introduce people to one of the less popular calculi, and the playground is a great touch. It might be helpful to have an introductory paragraph that explains exactly what the tree calculus is, starting from what a "calculus" is—your target audience seems to be people who aren't otherwise going to go out of their way to read Barry's papers, which means you can't assume as much background as you currently do. As a reference, I'm a casual PL nerd who actually has read academic papers related to some of the less well-known calculi with an eye towards implementing them, so I'm on the more informed end of the spectrum of your target audience.

Have you considered making this site open source? No pressure if not, but if so I'd be happy to contribute to polishing up the landing page. I'm very interested in learning more about this anyway, and I'm more than willing to help!

Agreed!
I will also try to give some explanation:

The specification says that the syntax of expressions in this thing is `E ::= t | E E`. This is a bit confusing, because it might lead you to believe that all expressions just look like `t t t t t t t`. In reality, you are supposed to keep the implied bracketing, so expressions really look like `(t t) (t ((t t) (t t)))` (note that at the top level and in each parenthesis, there are always exactly two subexpressions). Essentially, the space character becomes a binary operator (similar to how we often write multiplication with no symbol).

The expressions are a bit heavy on parentheses, so we say that this binary operator is left-associative. This means that an expression `a b c` should be interpreted as `(a b) c`, an expression `a b c d` should be interpreted as `((a b) c) d`, and so on. If you think about it, this means that you can always get rid of an opening parenthesis at the left edge of an expression, i.e. we can assume that an expression always starts without one.

With this out of the way, we can now understand where the trees come from: As there is only one terminal symbol, `t`, after removing unnecessary parentheses, every expression will always start with `t`, which is followed by a number of other expressions. To draw this as a tree, draw a node representing the initial `t`, and a sub-tree for each of the follow-up expressions (by applying the same procedure to them).

In this view, the semantic rules at the top of the specification page now tell you how to "simplify" a tree whenever there is a node with three or more sub-trees, or alternatively, how to reduce an expression that is a `t` followed by three or more sub-expressions. (In the syntax view, you replace the `t` and the first three expressions following it by whats on the right of the arrow. In the tree view, you replace the node and its first three children by some other sub-tree, then you attach the remaining children to the root of the new sub-tree.)

  • Exuma
Great extra detail - thanks
A defining sentence at the top of the page, something like “Tree Calculus is a [noun phrase] for [summary of purpose],” would be helpful. Wikipedia articles typically begin with such sentences:

“Lambda calculus (also written as λ-calculus) is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution.”

“In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices.”

“The felicific calculus is an algorithm formulated by utilitarian philosopher Jeremy Bentham (1748–1832) for calculating the degree or amount of pleasure that a specific action is likely to induce.”

Quick summary:

It's a programming language whose programs (and whose values) are unlabeled trees.

An unlabeled tree is a tree-shaped data structure whose nodes hold no data. The children of a node are ordered, though. The "Tree Calculus" defines a set of rules by which you can "evaluate" an unlabeled tree to get another unlabeled tree. If you apply these rules repeatedly, either you'll get into an infinite loop, or you'll get a tree that the rules say doesn't change anymore. The rules are designed so that they don't affect binary trees, so if you evaluate a binary tree you'll get the same tree back out and the computation is "done". These rules are written as a small-step semantics (a standard way to write down evaluation rules in PL) on the "Specification" page.

They claim that:

- The evaluation rules for trees are Turing Complete, meaning that you can express any computation, e.g. any JS program, using the Tree Calculus. More precisely, the claim is that there's a straightforward way to convert any (say) JS Program into a tree, and any tree into a JS value, and now you can use tree evaluation to run a JS program by doing (i) convert the JS program into a tree, (ii) evaluate the tree to get another tree, and finally (iii) convert the tree into a JS value, which is the correct output of the JS program. To prove this you wouldn't actually use JS as the language, you'd use something simpler that we already know is Turing complete like the lambda calculus, but it's the same idea. Though glancing at the page they might have actually done this for JS.

- The evaluation is asymptotically optimal, meaning that for any programming language P (like JS), there's a conversion f(prog) from programs in P to Tree Calculus trees and constants a and b such that running_time(f(prog)) <= a+b*running_time(prog). That is, you can run programs in any language using the Tree Calculus with ~constant overhead. This is true for all the programming languages you love, e.g. you could write a JS program that takes a Java program, compiles it to bytecode, and run that bytecode, and unless you did this reasonably badly the running time won't be worse than a factor of 1,000,000.

- A whole bunch of other stuff too. It's all plausible at first glance (i.e., I don't think they're making any of it up), but not obviously consequential.

What's it good for:

Some PL people might think it's cool. Conceivably useful to simplify some theory of computation proofs.

If you find this sort of thing interesting, though, I'd recommend learning the lambda calculus instead. The lambda calculus is simpler, more well known, and more useful (it's a model of functions, instead of some made up rules about trees).

> The lambda calculus is simpler, more well known, and more useful (it's a model of functions, instead of some made up rules about trees).

Lambda calculus is actually quite tricky; e.g. I've implemented it many times over the decades, both for fun and for serious applications, and have still never properly implemented capture-avoiding substitution (I've tried, but usually give up and reach for an existing implementation).

Also, notice that the tree calculus reduction rules are confluent, disjoint and branch on their left subtrees (similar to SK calculus). Hence they are also a "model of functions" like lambda calculus; or, alternatively, lambda calculus is also "some made up rules about [expression] trees".

> If you find this sort of thing interesting, though, I'd recommend learning the lambda calculus instead. The lambda calculus is simpler, more well known, and more useful (it's a model of functions, instead of some made up rules about trees).

I was with you all the way up until here.

The lambda calculus is only more useful because it's become the basis for a lot of existing programming languages. Its made-up rules are no more fundamental than the made-up rules of any other calculus, tree calculus included. They just seem more fundamental because they form the basis of most functional programming today.

I'm also unconvinced that the lambda calculus itself is meaningfully simpler. It typically has three syntactic components and two reduction operations, compared to one syntactic component and five reduction operations—that gives the lambda calculus an edge in moving parts, but a very small one.

The only way in which I could agree with you is that learning the lambda calculus first is going to be easier, because there's so much more material on it. But that's not because of the fundamental properties of the lambda calculus; it's because Church came first.

> The lambda calculus is only more useful because it's become the basis for a lot of existing programming languages.

But there's a reason for that. Functions make a good basis for programming languages. It's not the only good basis! Concatenative combinators make a good basis too: see Factor and Joy.

If you take the lambda calculus and add numbers and a few operations on them, it's easy to program in it. Likewise for the concatenative calculus. But not so for NAND gates or SK combinators. You certainly can do anything you want with them, but doing so feels more like solving a logic puzzle than programming. I am likewise skeptical about the tree calculus.

I'm skeptical that we'd feel that way if Church didn't come first. His work defined the field and shaped the way we all think about programming in profound ways that make it very difficult to distinguish how natural one is over the other.
  • thesz
Turing published the same year as Church, a couple of months later. Church's advantage was very small.

A Turing machine includes writing over some place, thus it models a calculating man with infinite paper, an infinite pencil and an infinite eraser. Lambda calculus models a calculating man with infinite paper and an infinite pencil. Lambda calculus has a very substantial advantage here.

Maybe for some people. Personally I find the concatenative calculus more natural to think about/in than the lambda calculus!
  • junon
Thank you for this, way more digestible.

So if it's just an unlabeled tree (nodes hold no data), then the only information is the child order/count, correct? So part of it is somehow mapping some high-level information to combinations of nodes and children, and back (after some manipulation), correct? Or am I misunderstanding everything?

Correct. I think of it this way: the reduction rules prescribe an encoding for functions, but don't prescribe one for other (traditional) data. But there are very canonical choices of course, which the demos on the website follow:

* false := t and true := t t
* Pairs are t first_elem second_elem
* Lists could be encoded as (t first_elem (t second_elem (t third_elem ...))), with the empty list being t
* Natural numbers as lists of booleans (LSB first)
* Strings as lists of natural numbers (Unicode code points)

These choices will affect what the functions that operate on data look like, concretely.
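
Here's a quick Python sketch of those encodings (my own illustration, using the nested-list representation Leaf = [], Stem x = [x], Fork a b = [a, b] that also appears elsewhere in this thread):

    Leaf = []
    Stem = lambda x: [x]
    Fork = lambda a, b: [a, b]

    false = Leaf       # false := t
    true = Stem(Leaf)  # true  := t t

    def encode_list(items):
        # (t x1 (t x2 (t x3 ...))), with the empty list being a bare t
        tree = Leaf
        for item in reversed(items):
            tree = Fork(item, tree)
        return tree

    def encode_nat(n):
        # natural number as a list of booleans, least significant bit first
        bits = []
        while n:
            bits.append(true if n & 1 else false)
            n >>= 1
        return encode_list(bits)

    def encode_str(s):
        # string as a list of natural numbers (Unicode code points)
        return encode_list([encode_nat(ord(c)) for c in s])

    print(encode_nat(6))  # [[], [[[]], [[[]], []]]]  -- bits 0,1,1, LSB first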

Thanks, your comment really helped me understand what's going on here!

It'd be great if you could add some beginner-friendly introduction to your website. :)

  • ufo
> I'd recommend learning the lambda calculus instead

Not to mention, it'll be hard to understand what's going on here without some previous experience with lambda calculus and combinators (SKI, etc).

People are always so impressed by this kind of stuff, but I'll never get why. Doesn't your intuition already lead you to recognize what's going on? Or do you recognize it and are still impressed?

Look, if it walks like an expression tree, talks like an expression tree, and quacks like... something else... then it's basically an expression tree with extra sugar and spice. That doesn't mean the sugar and spice isn't fun; it means the basic idea is exactly what you think it is - expression trees (plus, in this case, enough semantics to define combinators).

I'm very much impressed with this. I have never seen a programming language that allowed me to de/serialize functions. Let alone calculate the inverse of a function. If you're saying one or more languages with these features already exist then I'm very interested in names, links or references.
  • pdw
Modern APL dialects have an inverse operator (https://aplwiki.com/wiki/Inverse). For example, in Dyalog APL (https://tryapl.com/):

        ⍝ Convert Fahrenheit to Celsius
        f_to_c ← (÷∘1.8)∘(-∘32)

        f_to_c 68
    20

        (f_to_c ⍣ ¯1) 20
    68
Now, I'm an APL noob; I don't know how deeply this is implemented. I suspect it's very ad hoc.

More interesting are logic languages such as Prolog. If you stick to the core logical features, you get the inverse of a function for free:

    ?- append([1,2,3], [4,5,6], X).
    X = [1, 2, 3, 4, 5, 6].

    ?- append(X, [4,5,6], [1,2,3,4,5,6]).
    X = [1, 2, 3].

    ?- append(X, Y, [1,2,3,4]).
    X = [], Y = [1, 2, 3, 4] ;
    X = [1], Y = [2, 3, 4] ;
    X = [1, 2], Y = [3, 4] ;
    X = [1, 2, 3], Y = [4] ;
    X = [1, 2, 3, 4], Y = [].
> I have never seen a programming language that allowed me to de/serialize functions.

You can pickle functions in Python. You can trivially serialize any Lisp function (I'm not a Lisp fan). Plenty of programming languages have both macros and first-class function objects (those that can be passed around and thus have data representations).

> Let alone calculate the inverse of a function

Note it says "try to compute the inverse" because actually computing inverses is equivalent to the halting problem.

"If it seems to good to be true it probably is" could be adapted here to "If it seems too magical to be true, it's probably just cherry-picked".

> You can pickle functions in Python. You can trivially serialize any Lisp function (I'm not a Lisp fan).

The point of the tree calculus appears to be that it doesn't require the intermediate step of "pickling" or, as the author calls it, "quoting" the program to produce a data structure or other representation of the program [0]:

    Previous accounts of self-interpretation had to work with programs that were not
    normal forms, that were unstable. Stability was imposed by first quoting the program to
    produce a data structure, by putting on some make-up. In tree calculus, the programs are
    already data structures, so that no pre-processing is required; both of the self-evaluators
    above act on the program and its input directly. In short, tree calculus supports honest
    reflection without make-up.
It sounds similar to the notion of homoiconicity in Lisp, but stated more precisely, or even more strongly.

> Plenty of programming languages have both macros and first-class function objects (those that can be passed around and thus have data representations).

A language may have first-class function objects, but their actual structure may be opaque and not open to reflection or manipulation (beyond, of course, just munging the source code as plaintext). You can maybe create a function literal, pass the function around and to higher-order functions, but you can't inspect or modify its internal structure, or decide program equality (based on either exact structure, or one program reducing to another according to the reduction rules of the calculus).

Lastly the tree calculus would also appear to differ from the lambda calculus in that programs are stable and won't reduce infinitely, instead converging on some normal form of irreducible terms. [1]

[0] https://github.com/barry-jay-personal/tree-calculus/blob/mas...

[1] https://sci-hub.se/https://dl.acm.org/doi/abs/10.1016/j.tcs....

He sounds confused if he thinks that quotation involves turning programs into source code and then later recompiling them.

What he implemented IS quotation, despite his objections.

More precisely, the distinction would seem to be that programs in the tree calculus can analyze themselves with reference only to the reduction rules of the calculus, not needing to reach for some meta-language or theory outside the calculus that works on source code or some AST representation of the program [0]:

    Reflective programs are programs that can act on themselves to query their own
    structure. The querying is important: that the identity function can be applied to itself
    does not make it reflective. A simple example is the size function defined in Chapter 5.
    When applied to itself, the result is (the tree of) the number 508 [...] Self-evaluation
    in a calculus provides good evidence for the ability to perform program analysis and
    optimisation within the calculus itself. Traditionally, self-interpreters were allowed to
    act on the syntax tree of a program, i.e. its quotation. [...] When quotation lies
    outside of the formal calculus then interpretation is separated from computation proper,
    so that some sort of staging is required.
A demo of a size function is given here [1], implemented directly in the tree calculus:

    size = \x (y $ \self \x compose succ $ triage id self (\x \y compose (self x) (self y)) x) x 0
[0] https://github.com/barry-jay-personal/tree-calculus/blob/mas...

[1] https://treecalcul.us/live/?example=demo-program-optimizatio...

  • pizza
If I’m not mistaken this seems like something that would be possible in Bend/HVM
  • cat5e
Straight up dude. Can we get an explanation for the rest of us?
If it turns out to be broadly useful (to working programmers) I'm sure someone will write up a better explanation.

At its current state of development, it might be more appropriate for the blog "Lambda the Ultimate" than HN.

By the measuring stick of "anything that gratifies one's intellectual curiosity," I'd say this submission has been a roaring success for a lot of people. The nice thing about HN is that if something on the front page doesn't gratify your personal curiosity there are 29 more things that might.
And if the other 29 don't scratch your itch either, then there's a "More" link...
  • toxik
Do you know what a Y combinator is?
Yes. And I needed to know to follow some of the code ("constructions"?) in the OP.
Although it might turn out to be a waste of everyone's time, it is quite sincere (not a hoax).
It's a sign of the times here when people decry computer science constructs they can't understand as hoaxes.
Well, "hoax" may indeed not be the best word, because it implies intent. I've known of another highly intelligent software developer that invented similar symbology, but was suffering from a mental disorder which made him believe his work was useful even though it's only uses were 'self-referential'. Highly intelligent people, including software developers, are not immune from such disorders.
The author has been publishing math and compsci papers since the 80s [0] with another on the way [1].

Maybe stick to TypeScript?

[0] https://scholar.google.com/citations?user=eWreFm4AAAAJ&hl=en

[1] https://popl25.sigplan.org/details/pepm-2025-papers/5/Typed-...

Ok, thanks. I'll avoid the Turing Tarpits.

https://en.wikipedia.org/wiki/Turing_tarpit

Is the Y combinator a hoax?

  λf. (λx. f (x x))(λx. f (x x))
The Y combinator is simple syntax for representing the concept of passing a function to itself, so it's useful for symbolic manipulation.

However, the "Tree Calculus", just from those 5 little rules, is supposedly able to do all kinds of magic. I'm not buying it. It's nothing but musings and examples of binary trees, and how you can convert various data structures to binary trees.

Thanks for writing what I was afraid to post.
The homepage says "Democratizing Functions" and "Democratizing Metatheory". Whatever that means, I have a strong feeling that this is an abuse of the word "democratizing".
This usage is quite common; it's the second definition on Britannica: "to make (something) available to all people : to make it possible for all people to understand (something)"

https://www.britannica.com/dictionary/democratize

Yeah but functions are available to all people. I mean, it's an abstract mathematical concept. When you find a 30% discount sticker on an item in a store and compute in your head what the resulting price is going to be, you're arguably applying a function.

Also, to say that this makes functions available to all people, and to immediately follow that with text such as "The reduction rules act on trees of higher degree, until they are binary" suggests that well, actually, this is only for PL gurus, not all people (and that's totally fine IMO, just don't use the word "democratizing")

I get your point, but would disagree that being able to apply a function is the same as understanding functions. As with most other things in CS&math, I would argue that you only start to "really understand" functions when you get proficient with higher-order functions, which is what this is about.
I never suggested "really understand"; I talked about democratization, which is used here to mean "to make available to all people". To remind you, I'm solely writing all this in support of the comment that "democratizing" is the wrong choice of word here.

Sidenote: you can't possibly claim that this site helps anyone understand anything about functions they didn't already, right? I mean, it's cool, but it's not really easy to understand or anything.

Falcor's point seems clear enough: the authors have a generous definition of "make available" for "understanding"; they seek to teach people how to fish.
Thanks for the feedback! The target audience I had in mind was certainly developers (like me), not "all people". And the wording was indeed inspired by PL talks and blog posts I consumed over the years.

Here is a (slightly provocative) thought: we developers were promised "first-class functions" with functional programming languages. And it's true, you can pass them around like any other value! Cool. But first: what about inspecting those values? I can "look inside of" a number or string or array any day. But functions, I can only call. Huh, so an intensional view (not that anyone thinks that out loud) for all kinds of values, except functions. Yes sure, many languages do allow you to dig into functions. But it is all not the same as, or as powerful as, supporting it right down at the logic level! TC is also not the first to do that. But IMO it is the most compact and practical calculus/language to do so, yet.

Second, a practical example: we have had "first-class functions" for decades now. But where is our mainstream config language (JSON etc) that has first-class support for them? Of course the answer is: because it remains tricky. In industry I've seen all sorts of workarounds, usually involving strings that get compiled/interpreted on the fly. Which usually means some scary security implications, and no static guarantees whatsoever. With TC, a parser/runtime for arbitrary (but pure) functions is a few dozen lines of code. And thanks to being intensional, one can imagine type-checking those functions at the time of loading the config, not only when executing! A concrete demo/blog post for exactly this is in the works.

So anyways, I do believe this enables truly fundamental shifts, hence "democratizing".

Why don't you put a short version of this explanation on your main website instead of a vague "democratising functions"? What you wrote makes sense, but if all I see when I'm visiting your website is "democratizing functions/metatheory" and some contextless code examples, I'm not gonna be able to tell why I should care.
In some programming languages you can inspect functions, either by reflection on the bytecode or as in Picolisp:

    $ pil +
    : (de blepis (x y) (+ x y]
    -> blepis
    : (car 'blepis)
    -> ((x y) (+ x y))
I think "available as a tool, not a fixture" is more what is meant by democratizing. As in: everyone can use them, invent them, rearrange them, even abuse them.

Unlike school, where you are taught to reproduce them, apply them, fire and forget them.

Yes, exactly like the ability to spend unlimited amounts of money on political advertising, campaign contributions, and "political gratuities" is available to all Americans. From Elon Musk to the family sleeping under the overpass, the system is available for everyone's money.

Just because something is available doesn't mean it's democratic. Even in a democracy.

In this case, they are not really democratizing metatheory. Am I understanding metatheory any better now?

I do not know what it would even mean to democratize functions. Functions just are. Can you democratize the number 5?

  • xpe
Yes, it is a fact that the definition exists on Britannica. That is a positive (descriptive) statement, not a normative one.

But we, together, also shape how words are used. Just because someone uses a word doesn’t mean we have to like it or accept the context. We certainly don’t have to blindly repeat it.

We all put this into practice. We choose to reinforce patterns we find useful. And ignore ones we don’t.

We don’t need to disempower ourselves by acting like we have no power over language.

As for me, I would rather democratize mean something meaningful. So when someone uses it for self-interested purposes, I try to call it out.

I’m not a stickler for no reason. It is because I care — the world is still only marginally and fleetingly democratic. I don’t want marketing speak to ooze into yet another domain. Instead, I want such efforts to cause eye-rolls and backpressure.

But, yes, I will grant the printing press “democratized” books. And the Internet “democratized” information. These kinds of societal breakthroughs are worthy of the word. It also so happens that such broad empowerings mutually reinforce democracy.

It is absolutely common usage. Nothing about the parent is patronizing or gaslighting.
  • xpe
I agree. See: “What democratize really means”

https://intage.us/articles/words/democratize/

> Remember, language is shaped by culture, and you are part of that culture. You do not have to abdicate responsibility. You have options.

Here are some pictures I made for myself trying to "feel" the logic of Tree Calculus's reduction rules: https://latypoff.com/tree-calculus-visualized/ — might be handy for other people who are visual thinkers.
That is very helpful. I particularly appreciate the carefully written text.

I think the second figure, captioned "Stem with a single leaf child", has a mistake: the line down from the triangle descends to a square, but that square should be a circle.

Thank you for your kind words and also for noticing the mistake — fixed.
Super cool! I think a (potentially animated) version of this would be an excellent addition for the website's "Specification" page :)
Let me know if I can help. Or just feel free to take the pics as they are.
Thanks! For now, I just added a link to your website to https://treecalcul.us/specification/.
Is everyone upvoting this just pretending to understand what it is?
It's democratising metatheory! What's not to understand?
  • jeltz
I think so. To me it seems like yet another implementation of lambda calculus with a poorly made webpage which does not explain why it is interesting.
This is one of those articles which I hoped would generate an interesting discussion, and hence I upvoted it on that basis.
Same. Pretty much upvoted because I didn't understand but wanted to, and thought other people might be able to provide more insight.
We ARE on YCombinator here, after all. And there are lambdas there. Good enough?
I upvoted in the hope someone would explain it to me.
Not I, but I upvoted you for telling it how it is.
Darnit, I upvoted. Guilty
Can someone explain how this is not Lisp with a different syntax? Or Forth?

I don’t mean that as a criticism or midwit dismissal. Just want to understand.

Lisp is based on lambda calculus, which has no tools to modify programs written in itself; this is apparently a useful feature, so there are many Lisp-like languages with added macros, and as such there are many different implementations of these features. Even something as prevalent in Lisp-likes as eval isn't part of lambda calculus, which only has abstraction, application and variables (no env). If the notion of reflection is well defined and if tree calculus is reflective, then it's definitely not just Lisp (and certainly not Forth) with a different syntax.

I'm no expert so take this post with a mountain of salt. Without confusing the issue: In practice it's just a slow lisp, in theory it's different from lambda calculus and can be used to implement that slow lisp-like in a simpler way.

> Lisp is based on lambda calculus which has no tools to modify programs written in itself

Lisp is not based on lambda calculus, and Lisp can certainly modify programs written in itself.

https://www.reddit.com/r/lisp/comments/e467tk/is_it_more_cor...

Lambda abstraction (which is used by this “tree calculus”) is a form of quotation.

Nit picky point: quote/eval is an integral part of Lisp, not a later addon.

If it doesn't have mandatory parentheses all around and indentation sensitivity, that's already a huge plus for people with my tastes.

It's fine other people have different tastes really. I'm just sad that homoiconicity is mostly trapped within lisp dialects.

Interesting. I tried to convert the Z combinator in SKI to this using the lambda calculus example, then print out the tree. Untested:

    z = (t (t (t t (t (t (t (t (t t (t (t (t (t (t t)) (t t))) (t (t (t t)) (t t))))) (t (t (t t (t (t (t t (t (t (t t t)) t))) (t t)))) (t (t (t t (t (t (t (t (t t)) (t t)))))) (t t))))) (t t (t (t (t (t (t t))))))))) (t (t (t (t (t t (t (t (t (t (t t (t (t (t t (t (t (t t t)) t))) (t t)))) (t (t (t t t)) t))) (t t (t t))))) (t (t (t t (t (t (t (t (t t (t (t (t t (t (t (t t t)) t))) (t t)))) (t (t (t t t)) t))) (t t (t t))))) (t (t (t t (t (t (t t t)) t))) (t t))))) (t t (t (t (t (t (t t (t (t (t (t (t t)) (t t))) (t (t (t t)) (t t))))) (t (t (t t (t (t (t t (t (t (t t t)) t))) (t t)))) (t (t (t t (t (t (t (t (t t)) (t t)))))) (t t))))) (t t (t (t (t t (t (t (t (t (t t (t (t (t (t (t t (t (t (t t (t (t (t t t)) t))) (t t)))) (t (t (t t t)) t))) (t t (t t))))) (t (t (t t (t (t (t (t (t t (t (t (t t (t (t (t t t)) t))) (t t)))) (t (t (t t t)) t))) (t t (t t))))) (t (t (t t (t (t (t (t (t t)) (t t)))))) (t t)))))))) (t t)))))))
The original, tested but unoptimized, and also converted using a tool:

    var K = a => b => a;
    var S = a => b => c => a(c)(b(c));
    var Z = S(K(S(S(K(S(S(K)(K))(S(K)(K))))(S(K(S(K(S))(K)))(S(K(S(S(K)(K))))(K))))(K(S(S(K))))))(S(S(K(S(S(K(S(K(S))(K)))(S))(K(K))))(S(K(S(S(K(S(K(S))(K)))(S))(K(K))))(S(K(S))(K))))(K(S(S(K(S(S(K)(K))(S(K)(K))))(S(K(S(K(S))(K)))(S(K(S(S(K)(K))))(K))))(K(S(K(S(S(K(S(S(K(S(K(S))(K)))(S))(K(K))))(S(K(S(S(K(S(K(S))(K)))(S))(K(K))))(S(K(S(S(K)(K))))(K))))))(K))))));
https://en.wikipedia.org/wiki/Fixed-point_combinator
This code is so easy to read.
It's equivalent to binary machine code for a processor with 5 opcodes.

Why would it be readable?

  • ktm5j
No kidding... I didn't think parenthesis hell could get worse than Lisp.
Well really this language just takes the next logical step by removing all but a single symbol.

The next one would be () = t, to finally get a language consisting solely of parentheses.

Any source code in any programming language can in theory be encoded in unary, requiring only a single symbol (and without any need for extra grouping or whitespace). It's not gonna be efficient, though.
Yeah, I think I'm going to take the author's advice and use Tree Calculus to generate JSON config files. That way I won't need to use a bloated tool with a JSON parser just to output configuration.
  • tromp
Z = \f. (\x. f (\v. x x v)) (\x. f (\v. x x v)) can be much simpler, namely S (S (K (S S (K (S (S K K) (S K K))))) K) (S (K (S S (K (S (S K K) (S K K))))) K)
If you mean that it would be like this:

    var Z = S(S(K(S(S)(K(S(S(K)(K))(S(K)(K))))))(K))(S(K(S(S)(K(S(S(K)(K))(S(K)(K))))))(K));
Unfortunately it doesn't work with the `count_fn` function from the Wikipedia page :( it fails with `too much recursion`.
It’s great to see Johannes experimenting with tree calculus, and making explicit the possibilities which are merely implicit in my book GitHub.com/barry-jay-personal/tree-calculus/tree_book.pdf Now that (finally) there is a typed tree calculus I have started blogging (all at GitHub.com/barry-jay-personal)
A working link to the book is: https://github.com/barry-jay-personal/tree-calculus/blob/mas... (look for the download button on the right)
I'm very interested in the "Typed Program Analysis Without Encodings" paper. But I can't seem to find it online. Where do you suggest I look for it?
Just spent a bunch of time with this and had a couple insights that might help (particularly if you have some familiarity with the lambda calculus or formal semantics and are trying to get a foothold on this):

- I had to go down to the OCaml implementation to work out what the small-step semantics were saying, in part because I couldn't see what the underlying tree structure was. In each of the four-element reductions in the definition, put parentheses around the first three to see what is applying to what. Also I think the right-hand sides are under-parenthesised. So:

    (t (t) a)     b       -> a           (1)
    (t (t a) b)   c       -> (a c) (b c) (2)
    (t (t a b) c) t       -> a           (3a)
    (t (t a b) c) (t u)   -> b u         (3b)
    (t (t a b) c) (t u v) -> (c u) v     (3c)
Relatedly, the table is missing some cases because (I think) the authors see them as "obviously" falling out from the associativity of the written syntax, but I think it's helpful to add:

    t             a       -> (t a)       (0a)
    (t a)         b       -> (t a b)     (0b)
Now you can look at an expression with the syntax E E and more cleanly apply these semantic reductions to it.

- So wtf is all this doing? In the same way that working out the lambda calculus is frequently about bundling a lambda to "choose" between two options, this tree calculus is built to "choose" between three options based on whether it's presented with a node that is a leaf, a "stem" (one child), or a "fork" (two children). This is the core of rules 3a, 3b, 3c. If the "function" being applied is a fork whose left child is a fork, we think of the left-left grandchild as A, the left-right grandchild as B, and the right child as C; and if applied to a leaf, we use A, if applied to a stem we apply B to the stem's sole child, and if applied to a fork we apply C to the fork's two children. That three-way "choosing" is going to be how the system builds up the rest of the things you can do with it.

Thank you, this should have been on the front page.

That makes this a fun calculus, I guess, but isn't necessarily any more amenable to inversion (hard), serialisation (easy), or compilation (easy-ish) than something like SKI or lambda calculus already is.

Correct! The huge value-add of TC is that it is also intensional, which SKI and LC are not. This is a property one is not commonly confronted with (which is a shame), but it means that all the things you mention (and more) can be defined right in TC. The crucial thing to see is that, while I can surely write a (say) program analysis for C in C, the analysis will have to work on a quoted version of programs! For instance a string, or an AST. In TC, thanks to being intensional, you can do this directly. Concretely, say you had a function "add": in one line you can call "add 123 234", and in the next line you can call "type_check add yadayada" to check its type. Or to serialize it. Or to compile it into x86.

To be very clear, there are other calculi and even programming languages that are intensional, TC is not first. But it is the most compact formulation, having just one combinator. And IMO the most practical, as I try to prove with the various demos on the website. E.g. I'd recommend looking at https://treecalcul.us/live/?example=demo-fusion which demos stream fusion. All demos are 100% self contained, in line 1 nothing but "t" is defined, and in less than 200 lines we have both a working program optimization and a little demo of it!

In Python:

    Leaf = []
    Stem = lambda x: [x]
    Fork = lambda a, b: [a, b]

    is_leaf = lambda x: len(x)==0
    is_stem = lambda x: len(x)==1
    is_fork = lambda x: len(x)==2

    def apply(a, b):
        """ From https://treecalcul.us/specification/ (OCaml) """
        if is_leaf(a): return Stem(b)
        if is_stem(a): return Fork(a[0], b)
        x, y = a       # a == Fork(x, y)
        if is_leaf(x): return y
        if is_stem(x): return apply(apply(x[0], b), apply(y, b))
        u, v = x       # x == Fork(u, v)
        if is_leaf(b): return u
        if is_stem(b): return apply(v, b[0])
        s, t = b       # b == Fork(s, t)
        return apply(apply(y, s), t)

    T = {}
    T["false"] = Leaf
    T["true"]  = Stem(Leaf)
    T["not"]   = Fork (Fork (T["true"], Fork (Leaf, T["false"])), Leaf)

    def show(tree):
        # print the first matching name, or the raw tree if none matches
        names = [k for k in T if T[k] == tree]
        print(names[0] if names else tree)

    show(apply(T["not"], T["false"])) # true
    show(apply(T["not"], T["true"]))  # false
And here it is in Racket or Scheme. Stems need their own representation: if both stems and forks were pairs, Stem x and Fork x Leaf would be indistinguishable, so stems are one-element vectors here.

    ; Leaf is '(), Stem x is (vector x), Fork a b is (cons a b)
    (define (apply-tree a b)
      (cond ((null? a) (vector b))                      ; t b -> Stem b
            ((vector? a) (cons (vector-ref a 0) b))     ; (t x) b -> Fork x b
            ((null? (car a)) (cdr a))                   ; rule (1)
            ((vector? (car a))                          ; rule (2)
             (apply-tree (apply-tree (vector-ref (car a) 0) b)
                         (apply-tree (cdr a) b)))
            ((null? b) (caar a))                        ; rule (3a)
            ((vector? b) (apply-tree (cdar a) (vector-ref b 0)))       ; rule (3b)
            (else (apply-tree (apply-tree (cdr a) (car b)) (cdr b))))) ; rule (3c)

    (define t-false '())
    (define t-true (vector '()))
    (define t-not (cons (cons (vector '()) (cons '() '())) '()))

    (apply-tree t-not t-false)  ; => #(())  i.e. t-true
    (apply-tree t-not t-true)   ; => ()     i.e. t-false

Leaf is the empty list, Stem is a one-element vector and Fork is cons.
Here is a visualization I made of the tree calculus rules as pattern matching on binary trees: https://github.com/unrealwill/tree-calculus-visualizer
On paper, it's really cool and I hope that more comes out of it. In practice, I'm not sure that taking 9k eval steps to turn a string lowercase is viable for anything.
But you can write an arbitrary compiler within the language itself to reduce certain patterns to fast native instructions. It's interesting as a universal intermediate representation.
Damn, that's true. I keep coming back to this thread hoping for more insight because the idea is very pretty.
This depends a lot on what an eval step actually is. Could you give us an example or a reference to compare it to?
The idea might be nice but the syntax is so easy to mess up for humans that in the spec itself the author gets the translation of `not true` wrong (maybe a copy-paste from `not false`?).

Should be "t (t (t t) (t t t)) t (t t)".

Agreed, as someone who spent far too long writing out tree-like expressions similar to this, it's very easy to get them wrong. Of course an editor with bracket-matching will _help_ but not to get the subtrees correct.

('signatures', in case you were wondering)

Barry Jay's got an upcoming paper at PEPM regarding typed tree calculus. Good read too.
I googled that, and I found that it is a sister conference of POPL: https://popl25.sigplan.org/details/pepm-2025-papers/5/Typed-...

I was inclined to think that Barry Jay is some kind of crackpot

See the Coq proofs in https://github.com/barry-jay-personal/tree-calculus/ for the pre-typed tree content as well.
It's a good thing there are Coq proofs. However, I would still like to see the paper though. A paper is supposed to be more than just a syntactic theory. Typically there is some motivation included as well.
  • js8
AFAIK there are many variations (I think infinitely many, even) of "tree calculi". You can build one easily from combinatory logic by using only one universal combinator, which will be implied at the leaves of the tree.

John Tromp in his repo has some research pertaining to short universal lambda expressions (each can be expressed as combinators) of this type: https://github.com/tromp/AIT/blob/master/Bases.lhs

Super cool, thanks for the pointer! I'll note, though, that one of the main value adds of (this) TC is that it is also intensional.

See website for some elaboration and examples, I'd particularly recommend looking at https://treecalcul.us/live/?example=demo-fusion which demos a little stream fusion optimizer, entirely from scratch, including small test.

  • js8
Oh, OK! I think it would be interesting to know which universal combinator (or lambda expression) the 't' is related to.
Reduction rules (1) and (2) correspond to those of K and S, respectively. Reduction rule (3) corresponds to the elimination form for a Scott-encoding of a datatype like `X = Leaf | Stem X | Fork X X`. The cases of rule (3) are essentially parsing the argument into one of these three forms, and dispatching on the result.
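
For intuition, here's a small Python sketch of that Scott-encoding reading (my own illustration, not from the papers): each constructor becomes a function that, given one handler per case, invokes the handler for its own shape, which is exactly the three-way dispatch that rule (3) performs on its argument.

    # X = Leaf | Stem X | Fork X X, Scott-encoded as dispatch functions
    leaf = lambda on_leaf, on_stem, on_fork: on_leaf
    stem = lambda x: lambda on_leaf, on_stem, on_fork: on_stem(x)
    fork = lambda a, b: lambda on_leaf, on_stem, on_fork: on_fork(a, b)

    # "triage": case analysis on the top constructor, e.g. counting nodes
    def size(tree):
        return tree(1,
                    lambda x: 1 + size(x),
                    lambda a, b: 1 + size(a) + size(b))

    print(size(fork(stem(leaf), leaf)))  # 4
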
This strikes me as a good idea in the same way that counting in unary is a good idea.

Theoretically profound but ultimately needing to be somewhat removed from its application to be of much practical use.

So wait. How is this not a Lisp? And consider that a compliment; we need more Lisp in our lives.
This is like a weird child of Lisp/Forth and Prolog (?). I do think it is neat though; it got me thinking about how you could implement and parse it.

Thanks for the inspiration!

  • golol
Lambda calculus consists pretty much just of function compositions, which in principle have a tree structure. Is this just lambda calculus in a dress?
  • tromp
Close, but not quite; it's combinatory logic plus introspection in a tree dress.
This is a good counterexample to "syntax is not sufficient for semantics". It is sufficient when there is no distinction between data and code. Code can reflect on itself as data. Like bootstrapped autocompilers, Gödel's arithmetization, or neural nets. In all cases syntax is both data and behavior; it is deep, self-reflective and self-generative.
> This is a good counterexample to "syntax is not sufficient for semantics"

Perhaps I'm misunderstanding. At the top of https://treecalcul.us/specification/ it shows 5 lines of small-step semantics. If you threw these away, could you recover them from the syntax?

syntax becomes semantics after sufficient iterative updates in the environment
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
So a given function would be represented by an unlabeled tree, and its result calculated by applying rules which transform the source tree into a binary tree?

Then how do I "call" such a tree with some specific arguments? Do I have to create a new tree that represents both the function, and the arguments it is called with?

How do I represent numbers, and strings, and arrays?

Presumably you could implement Church numerals [0] or something similar in the tree calculus.

[0] https://en.wikipedia.org/wiki/Church_encoding

Timur Latypoff's page (elsewhere in this thread but also: https://latypoff.com/tree-calculus-visualized/ ) shows a graphical view of this; you get to lists first—easy because tree structure is native here—and then a (natural) number is just a list of booleans, of arbitrary length, least significant first. Very elegant.
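
A quick sketch of that encoding in plain OCaml (my own illustration, ignoring the underlying tree representation): naturals as lists of booleans, least significant bit first.

    (* to_bits 6 = [false; true; true], i.e. binary 110 read LSB-first. *)
    let rec to_bits n =
      if n = 0 then []
      else (n mod 2 = 1) :: to_bits (n / 2)

    let rec of_bits = function
      | [] -> 0
      | b :: rest -> (if b then 1 else 0) + 2 * of_bits rest
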
Cool, so Church encoding of numbers is not needed after all. It seems much simpler to do as you describe: numbers are lists of bits, which is how they are represented in computer memory too.

But, I thought the nodes of the tree in Tree Calculus were supposed to be "unlabeled". How can they then be booleans?

Ah, a short read through the link you provided explains it simply. Thanks.

The specification talks about numbers, strings, lists.

The playground has concrete, runnable (bash, JS) examples.

I think the main problem people encounter in understanding this thing is just parsing the expression tree for the rules.

It isn't stated, but all trees in question are binary trees.

Even though the expression tree syntax is really simple:

    E := t | E E

the mental gymnastics needed to visualize that the string

    not = t (t (t t) (t t t)) t

corresponds to the only tree drawn in https://treecalcul.us/specification/ is quite overwhelming.

The left-associative notation to remove unneeded parentheses makes it even harder.

It could be explained so much better if the author made the 10 pictures corresponding to the transformation rules of the trees, perhaps highlighting the subtrees a, b, c in matching colors before and after.

Brains are used to pattern-matching images, not abstractly defined syntax, unless you have been trained in grammar theory.
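
For what it's worth, here is a small OCaml sketch (my own illustration, with hypothetical names) of how that string parses, with application as an explicit constructor:

    (* A term is t, or the application of one term to another. *)
    type term = T | App of term * term

    (* Left-associative application: app f [a; b] = App (App (f, a), b) *)
    let app = List.fold_left (fun f x -> App (f, x))

    let tt   = App (T, T)                      (* t t   *)
    let ttt  = App (tt, T)                     (* t t t *)
    let _not = app T [ app T [ tt; ttt ]; T ]  (* t (t (t t) (t t t)) t *)
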

  • wslh
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I'm genuinely curious: while skimming Jay's book, I couldn't help but notice parallels with LISP-based approaches, which are also tree-structured. How does this differ from those?

Additionally, I have another question: where is this theory [potentially] applied, even if only in niche areas?

Finally, I think there is a glitch in [1] where it defines "false = t, true = t, etc." I think they meant false = f? I was mesmerized by a tree representation of not, though.

[1] https://treecalcul.us/specification/

> Additionally, I have another question: where is this theory [potentially] applied, even if only in niche areas?

Intermediate languages - write an interpreter, a JIT, or a compiler for the tree calculus. Then write a high-level language that reduces to tree calculus.

Immediately I'm reminded of the SKI calculus as an extremely minimal model for computation [0].

Edit: in fact, they even define the SKI combinators in their demonstration of Turing completeness [1], so I wonder how the tree calculus differs aside from being based on only a single combinator.

[0] https://en.wikipedia.org/wiki/SKI_combinator_calculus

[1] https://treecalcul.us/live/?example=demo-turing-intensional

The point of tree calculus, as I understand it, is that all programs are valid data structures, so they can be modified and analyzed at runtime with nothing but other programs.
That is also true of Lisp. And Forth. And a dozen other homoiconic languages.
Any Turing-complete language should be able to express programs that do this.
  • tromp
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
It's not based on a single combinator. It's based on the two standard basis combinators S and K, and a non-combinator that can inspect normalized terms to distinguish between degrees 0, 1, and 2.
As far as I understand it so far, the idea that the trees are "unlabeled" oversimplifies.

The most common kind of binary tree is defined as

    binary-tree = nil | node of binary-tree label binary-tree

for example

    empty-tree
    node nil Alice nil
    node nil Bob (node nil Carol nil)
    node (node nil David nil) Edward (node nil Fred nil)
    node (node nil George nil) Harold nil

If we erase the labels, there remains an implicit label Alice = 00, Bob = 01, Carol = 00, David = 00, Edward = 11, Fred = 00, George = 00, Harold = 10 encoded by the pattern of empty trees.

The trees in the article seem to be doing it slightly differently, with implicit labels 0, 1, or 2. Edward is erased, leaving an implicit 2, and the erasure of both Bob and Harold leaves an implicit 1 for both of them, removing the distinction between 01 and 10.

Edited to add: I'm up to page 27 (the PDF reader says page 39 of 176) and some nodes have three children. 0, 1, or 2 children represent "values". "It follows that trees containing a node with three or more branches is a computation that is not a value, and so must be evaluated."
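
Put as code, the value criterion in that quote might look like this (a rough sketch over an assumed n-ary node type, not the book's definitions):

    (* A node with any number of children. *)
    type tree = Node of tree list

    (* A tree is a value iff every node has at most two children; a node
       with three or more children is a computation still to be evaluated. *)
    let rec is_value (Node children) =
      List.length children <= 2 && List.for_all is_value children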

A combinator is just a term without free variables. All the rules can be expressed as combinators; it's just that they don't have common names like S, K, I, C, B (…).
An Architecture for Combinator Graph Reduction (TIGRE) https://users.ece.cmu.edu/~koopman/tigre/index.html
Your example reminds me of Binary Lambda Calculus [0]:

[0] https://tromp.github.io/cl/Binary_lambda_calculus.html

  • xigoi
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
The difference seems to be that tree-calculus is intensional – you can distinguish different trees within the language even if they have the same computational value.
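
A tiny OCaml analogy for that distinction (my own illustration, using SK terms rather than tree calculus): S K K and S K S both behave as the identity, since S K y x reduces to K x (y x) and then to x, yet they are different trees.

    type t = S | K | App of t * t

    (* Extensionally equal programs... *)
    let i1 = App (App (S, K), K)  (* S K K *)
    let i2 = App (App (S, K), S)  (* S K S *)

    (* ...that an intensional system can still tell apart. *)
    let () = assert (i1 <> i2)
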
Right, two questions: how do you do infinite loops, and how do you do side effects?

All it can do is take programs and return values.

I can see how this could be useful for configuration generation.

But for it to be a programming language, you would have to make it produce an "execution plan" to be executed on a more classical virtual machine that can do system calls and decide what to do next depending on what it gets back.

So it’s based on a universal combinator like Iota?
Yes. And the author is Johannes Bader, who wrote about that one as well:

https://lambada.pages.dev

https://johannes-bader.com

  • tromp
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
No, it's based on K and S, as is clear from the two rules

    (t (t) a)     b       -> a           (1)
    (t (t a) b)   c       -> (a c) (b c) (2)
Sorry, I don't understand what I just saw.
  • p4bl0
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
On the semantics page, the reductions presented in the definition are not the same as what the OCaml code does. Or if it is actually equivalent, it should be thoroughly commented, because it really is not obvious.

But this criticism applies to most of the website. Nothing is really clear.

Reminds me of Wolfram Physics: it's just "nodes" that exist in relation to each other (and are in all other ways identical), with some rules for manipulating them leading to all possible computations?

It's certainly not a practical way to write programs.

I like this idea. It's hard to think about ways that trees might combine or operate on other trees. This is fantastic to get into. Thank you for posting and the other commenters for sources.
  • yayr
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Can someone explain to me what is so special here? It seems to be just a simple binary abstract syntax tree, which, with varying syntax, can be represented in almost any programming language.
Is there any programming language whose implementation uses tree calculus underneath? (Akin to how most programming languages use some variation of the lambda calculus for their ASTs.)
Oof. I thought I had a grip on my ADHD but these non-seekable, side-by-side demos are giving me whiplash.

EDIT: oh no, oh god, they are text, so maybe Asciinema, so, we're light-years ahead of a GIF/mp4, but also, please don't disable the seekbar, come on.

:/ Even worse, they don't scroll. They don't need to be animated at all. Come on frontend people, just stop. Please, just... stop.

EDIT2: I have zero idea what "stream fusion" is, and the 4 inscrutable paragraphs of text don't explain it either. But maybe I'm just dumb and don't know what that is.

Stream fusion is a fancier version of the concept of loop fusion: https://en.wikipedia.org/wiki/Loop_fission_and_fusion

At a high level and skipping details, suppose you want to do something like this:

  (filter predicate (map f sequence))
Naively, this is two linear passes over the data. Fusion would allow you to combine these into one single linear pass.

This has particular uses when there are intermediate, temporary allocations that can be eliminated; it's not just about time performance.
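
A minimal sketch of the idea in OCaml (my own illustration, not the site's optimizer):

    (* Naive: two passes, with an intermediate list allocated by map. *)
    let naive f p xs = List.filter p (List.map f xs)

    (* Fused: one pass, no intermediate list. *)
    let fused f p xs =
      List.fold_right
        (fun x acc -> let y = f x in if p y then y :: acc else acc)
        xs []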

> Come on frontend people, just stop. Please, just... stop.

I suspect this is part of why you're getting downvoted. This site seems to be a bit of a gag in its presentation. It's over-the-top on purpose.

  • doall
  • ·
  • 1 week ago
  • ·
  • [ - ]
Is that something like Transducers in Clojure?
This is more typically called lazy evaluation.
I couldn't imagine trying to type the lambda character that much.
I'm too dumb for this
I've got a mathematics degree and I can't work it out either if that's any consolation.
Reminds me of Refal, which is somewhat surprisingly Turing-complete.
Isn't it one-to-one with the Iota language/combinator?
The ability to invert a function sounds crazy to me. I never imagined that was possible.
Feels like Clojure from afar.
This is simpler to understand than Clojure.
I'm not sure that's true since most of the top-level comments here are saying they don't understand it.
There's some kind of problem with the presentation which is causing people not to engage with it. It's no more complex a concept than, say, Lisp; everyone here has learned more complex programming topics than this. But people are not really processing the information presented; e.g., there's confusion about whether or not a page with code examples is describing a programming language.

I think the presentation is effective but intimidating? There's something about it that causes people to tune out? But if you actually pay it some attention, it's pretty clear. I'm a bit baffled; the presentation looks good to me, and I know these commenters aren't dumb, so I just don't understand what went wrong here.

The presentation is confusing because of a mismatch between the headings/blurbs and what is being shown in the code examples.

The heading of the entire page is "the essence of computation" but I'm not shown why the code examples (which at a cursory glance look like a run-of-the-mill FP language) warrant that claim.

Then we read that there is "one operator", but that's clearly not visible from the examples. You have to read the comments in the playground to understand that things like definitions, lambdas, lists, etc. are syntactic sugar added on top of TC (which is fine, but it's just not transparent initially). In general, the distinction between TC (a formal calculus) and the programming language implemented here based on TC is not made clear enough.

The code examples themselves are fine (and there's some interesting things in there), but I stumbled over the "try to invert function" example. While the example itself is reasonably clear, it's not at all clear what that would mean in the general case - you clearly can't invert all functions (especially not non-injective ones). That should definitely be explained better.

This is a good analysis. Thanks.
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
The text on the website linked to is entirely vacant and content-free. It doesn’t actually describe what the language is or how it works.
You're expecting to see some forest of information which isn't there, because it's such a little language.

At the top of https://treecalcul.us/specification/ you have 5 rules. Your (valid) program will match one of the left-hand-sides. To execute your program, turn whichever LHS your program is into its corresponding RHS. Keep doing that until you can't, and you'll be left with your final answer.

It also provides 2 reference interpreters (OCaml and JS) on the same page. Each interpreter is just a single recursive function 'apply' (which does what I described in the above paragraph). Note the 'match a with' section in the OCaml version. That's essentially the 5 rules from the top translated to OCaml. (I'm not sure why they're slightly different but there's probably a good reason).
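
From memory, here is a rough reconstruction of those 5 rules as a single OCaml step function (my own sketch, not the site's reference interpreter; consult the spec page for the authoritative rules):

    (* A term is t, or an application of two terms. *)
    type term = T | A of term * term

    (* Try one reduction at the root; None means no rule matches here. *)
    let step = function
      | A (A (A (T, T), a), _)                             -> Some a                        (* 1 *)
      | A (A (A (T, A (T, a)), b), c)                      -> Some (A (A (a, c), A (b, c))) (* 2 *)
      | A (A (A (T, A (A (T, a), _)), _), T)               -> Some a                        (* 3 *)
      | A (A (A (T, A (A (T, _), b)), _), A (T, u))        -> Some (A (b, u))               (* 4 *)
      | A (A (A (T, A (A (T, _), _)), c), A (A (T, u), v)) -> Some (A (A (c, u), v))        (* 5 *)
      | _ -> None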

The JS interpreter is harder to read ("length"? "push"? "pop"? these aren't in the spec above). My guess is that a machine-friendly translation was needed so that JS wouldn't die from recursing too deeply.

I clicked on 'runjs.app' link, looked at the examples at the bottom, and added my own.

    apply(_not, apply(_not, _true));
and sure enough it yielded

    [ [] ]
which is what it calls '_true':

    const _true = [[]];
There's a lot of precise, technical language, including a specification - that seems very contentful and specific to me.

The code examples seem very straightforward and compelling to me.

    > # Setup: Simple function on strings     
    > f = λa λb concat ["Hello ",a," ",b,"!"] 
    > f "Jane" "Doe"                          
    Hello Jane Doe!                           
This requires some familiarity with languages like Lisp or Forth to follow, but otherwise it's very straightforward.

    > # What else can we do with a function?  
    > g = f "Admiral"                         
    > # Try to invert it:                     
    > invert g "Hello Admiral Alice!"
    Alice
That's a really concise and impressive demo to me! I've never seen a language that can invert functions!
  • bux93
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
> I've never seen a language that can invert functions!

Prolog

Hi all, author of the website here (I'm https://johannes-bader.com/). Wow, thanks for the reactions and many good suggestions! I thought I'd add a bit of context here.

As has correctly been pointed out, Tree Calculus was developed by Barry Jay! The "Specification" page links to his book. And a preview of his PEPM 2025 paper (hi chewxy!) can now be found here: https://github.com/barry-jay-personal/typed_tree_calculus/bl...

Compared to how long Barry has been professionally researching this, I entered the picture yesterday and joined the effort to help. Potential mistakes on the website are mine, not Barry's, but I do firmly believe in some of the "ambitious" wording there. Blog posts and more concrete demos or details to come!

Just for the curious, here is my angle in all this:

I happened to (hobbyist!) research tiny, yet practical, languages for over a decade (see e.g. https://lambada.pages.dev/). In that work I started noticing that the Iota combinator (https://en.wikipedia.org/wiki/Iota_and_Jot) is not a combinator in the most traditional sense: It is usually defined as behaving like "\x x S K", but like, how can we just refer to a lambda in the definition of a logic? One could write reduction rules such as "Iota Iota x -> x", but now Iota appears on the left hand side, in argument position! Doesn't that allow reflecting on arguments? Horror! I just started realizing that, while SK is perfectly Turing complete and sane, forcing down the number of combinators from 2 to 1 magically forces some amount of intensionality! Curious, isn't it?
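
To illustrate that point with a toy (my own sketch over a small SK term type, not anything from the website):

    type t = S | K | Iota | App of t * t

    (* One reduction step at the root. The first two cases are the
       ordinary S and K rules; the last is "Iota f -> f S K". Writing
       instead "Iota Iota x -> x" would put Iota in argument position,
       i.e. the rule would have to inspect its argument's structure. *)
    let step = function
      | App (App (App (S, x), y), z) -> Some (App (App (x, z), App (y, z)))
      | App (App (K, x), _)          -> Some x
      | App (Iota, f)                -> Some (App (App (f, S), K))
      | _                            -> None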

And that's when I came across Barry's work and book! A calculus that embraces intensionality. So I reached out to see if I could help. How can I be most useful? Barry has been figuring out the theory, professionally, since before I could think. So I felt like helping spread the word, maybe to a less academic developer crowd, would be a more effective way to contribute. I spent my entire career building developer tools, so I have some ideas about what could be useful and what might be necessary to convince various kinds of developers. That's how the website came to be, and it is very much a work in progress. We sync regularly; for instance, Barry had excellent ideas for demos, and I suggested slightly tweaked reduction rules at some point (not more powerful in theory, just more convenient in practice); see "triage calculus" in typed_program_analysis.pdf.

  • ·
  • 2 weeks ago
  • ·
  • [ - ]
It would help a lot if somewhere at the very top it explained what tree calculus is (maybe extend the animation of the addition example to first show what the t is).

It took me a while on the website to understand what it was all about. As it is, it looks more like a website for a functional programming language.

  • skavi
  • ·
  • 1 week ago
  • ·
  • [ - ]
Are the tweaked reduction rules what you have on this website? I might be misunderstanding, but rule 3 certainly looks different on your website than in Barry’s book.

Could you link typed_program_analysis.pdf? I can’t seem to find it.

This looks like OCaml/F# to me.
  • Lerc
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
This doesn't seem any more esoteric than lambda calculus. In fact I'm not entirely sure that it isn't a notational variant of lambda calculus.

It will take some more investigation than a quick glance to see what truly distinguishes it.
