https://news.ycombinator.com/context?id=43948657
Thesis:
1. LLMs are bad at counting the number of r's in strawberry.
2. LLMs are good at writing code that counts letters in a string.
3. LLMs are bad at solving reasoning problems.
4. Prolog is good at solving reasoning problems.
5. ???
6. LLMs are good at writing prolog that solves reasoning problems.
Common replies:
1. The bitter lesson.
2. There are better solvers, ex. Z3.
3. Someone smart must have already tried and ruled it out.
Successful experiments:
Plain Prolog's way of solving reasoning problems is effectively:
for person in [martha, brian, sarah, tyrone]:
    if timmy.parent == person:
        print("solved!")
You hard code some options, write a logical condition with placeholders, and Prolog brute-forces every option in every placeholder. It doesn't do reasoning.

Arguably it lets a human express reasoning problems better than other languages, by letting you write high level code in a declarative way instead of allocating memory and choosing data types and initializing linked lists and so on, so you can focus on the reasoning - but that is no benefit to an LLM, which can output any language as easily as any other. And while that might have been nice compared to Pascal in 1975, it's not so different from modern garbage collected high level scripting languages. Arguably Python or JavaScript will benefit an LLM most because there are so many training examples of them, compared to almost any other language.
SLD-Resolution with unification (Prolog's automated theorem proving algorithm) is the polar opposite of brute force: as the proof proceeds, the cardinality of the set of possible answers [1] decreases monotonically. Unification itself is nothing but a dirty hack to avoid having to ground the Herbrand base of a predicate before completing a proof; which is basically going from an NP-complete problem to a linear-time one (on average).
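(If you haven't seen unification in action, here's the whole thing in one toplevel query - two terms unify by binding variables on both sides in a single step, no search involved:)

?- foo(X, bar(Y)) = foo(hello, bar(42)).
X = hello,
Y = 42.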
Besides which I find it very difficult to see how a language with an automated theorem prover for an interpreter "doesn't do reasoning". If automated theorem proving is not reasoning, what is?
___________________
[1] More precisely, the resolution closure.
In the sense that it cuts off part of the search tree where answers cannot be found?
?- member(X, [1,2,3,4]),
   X > 5,
   slow_computation(X, 0.001).
will never do the slow_computation - but if it did, it would come up with the same result. How is that the polar opposite of brute force, rather than an optimization of brute force?

If a language has tail call optimization then it can handle deeper recursive calls with less memory. Without TCO it would do the same thing and get the same result, but using more memory, assuming it had enough memory. TCO and non-TCO aren't polar opposites; they are almost the same.
So basically Resolution gets rid of more and more irrelevant ...stuff as it goes. That's what I mean that it's "the polar opposite of brute force". Because it's actually pretty smart and it avoids doing the dumb thing of having to process all the things all the time before it can reach a conclusion.
Note that this is the case for Resolution, in the general sense, not just SLD-Resolution, so it does not depend on any particular search strategy.
I believe SLD-Resolution specifically (which is the kind of Resolution used in Prolog) goes much faster, first because it's "[L]inear" (i.e. in any Resolution step one clause must be one of the resolvents of the last step) and second because it's restricted to [D]efinite clauses and, as a result, there is only one resolvent at each new step and it's a single Horn goal so the search (of the SLD-Tree) branches in constant time.
Refs:
J. Alan Robinson, "A computer-oriented logic based on the Resolution principle" [1965 paper that introduced Resolution]
https://dl.acm.org/doi/10.1145/321250.321253
Robert Kowalski, "Predicate Logic as a Programming Language"
https://www.researchgate.net/publication/221330242_Predicate... [1974 paper that introduced SLD-Resolution]
I really recommend that anyone with an interest in CS and AI read at least J. Alan Robinson's paper above. For me it really blew my mind when I finally found the courage to do it (it's old and a bit hard to read). I think there's a trope in wushu where someone finds an ancient scroll that teaches them a long-lost kung-fu and they become enlightened? That's how I felt when I read that paper, like I gained a few levels in one go.
Resolution is a unique gem of symbolic AI, one of its major achievements and a workhorse: used not only in Prolog but also in one of the two dominant branches of SAT-solving (i.e. the one that leads from Davis-Putnam to Conflict Driven Clause Learning), and even in machine learning, in one of the two main branches of Inductive Logic Programming (which I study), which is based on trying to perform induction by inverting deduction, and so by inverting Resolution. There's really an ocean of knowledge that flows never-ending from Resolution. It's the bee's knees and the aardvark's nightgown.
I sincerely believe that the reason so many CS students seem to be positively traumatised by their contact with Prolog is that the vast majority of courses treat Prolog as any other programming language and jump straight to the peculiarities of the syntax and how to code with it, and completely fail to explain Resolution theorem proving. But that's the whole point of the language! What they get instead is some lyrical waxing about the "declarative paradigm", which makes no sense unless you understand why it's even possible to let the computer handle the control flow of your program while you only have to sort out the logic. Which is to say: because FOL is a computational paradigm, not just an academic exercise. No wonder so many students come off those courses thinking Prolog is just some stupid academic faffing about, and that it's doing things differently just to be different (not a strawman: actual criticism that I've heard).
In this day and age, where confusion reigns about what it even means to "reason", it's a shame that the answer, which is to be found right there under our noses, is neglected or ignored because of a failure to teach it right.
The way to learn a language is not via its syntax but by understanding the computation model and the abstract machine it is based on. For imperative languages this is rather simple, and so we can jump right in and muddle our way to some sort of understanding. With functional languages it is much harder (you need to know the logic of functions), and it is quite impossible with logic languages (you need to know predicate logic). Thus we need to first focus on the underlying mathematical concepts for these categories of languages.
The Robert Kowalski paper Predicate Logic as a Programming Language you list above is the Rosetta stone of logic languages and an absolute must-read for everybody. It builds everything up from the foundations using implication (in disjunctive form), clause, clausal sentence, semantics, Horn clauses and computation (i.e. resolution derivation); all absolutely essential to understanding! This is the "enlightenment scroll" of Prolog.
>> but if it did, it would come up with the same result
Meaning either changing the condition or the order of the clauses. How do you expect Prolog to proceed to `slow_computation` when you have declared a statement (X > 5) that is always false before it?
> "How do you expect Prolog to proceed to `slow_computation` when you have declared a statement (X > 5) that is always false before it"
I know it doesn't, but there's no reason why it can't. In a C-like language it's common to do short-circuit Boolean logic evaluation like:
A && B && C
and if the first AND fails, the second is not tested. But if the language/implementation doesn't have that short-circuit optimisation, both tests are run and the outcome doesn't change. The short-circuit eval isn't the opposite of the full eval. And yes, this is nitpicking the term "polar opposite of", but that's the relevant bit about whether something is clever or brute - if you go into every door, that's brute. If you try every door and some are locked, that's still brute. If you see some doors have snow up to them and you skip the ones with no footprints, that's completely different.

% "the boy eats the apple"
eats(boy, apple).
This is being taken advantage of in Prolog code generation using LLMs. In the Quantum Prolog example, the LLM is also instructed not to generate search strategies/algorithms but just the planning domain representation and action clauses for changing those domain state clauses, which is natural enough in vanilla Prolog.

The results are quite a bit more powerful, close to end user problems, and upward in the food chain compared to the usual LLM coding tasks for Python and JavaScript, such as boilerplate code generation and similarly idiosyncratic problems.
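To make that concrete, here's a minimal sketch of what "domain state as clauses, actions that change those clauses" can look like in vanilla Prolog (the at/2 and move/2 names are mine for illustration, not from the Quantum Prolog example):

:- dynamic at/2.

% domain state as clauses
at(robot, room1).

% an action clause: applying it changes the domain state clauses
move(From, To) :-
    at(robot, From),
    retract(at(robot, From)),
    assertz(at(robot, To)).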
> "has convenient built-in recursive-decent parsing with backtracking built-in into the language semantics, but also has bottom-up parsing facilities for defining operator precedence parsers. That's why it's very convenient for building DSLs"
which I agree with, for humans. What I am arguing is that LLMs don't have the same notion of "convenient". Them dumping hundreds of lines of convoluted 'unreadable' Python (or C or Go or anything) to implement "half of Common Lisp" or "half of a Prolog engine" for a single task is fine, they don't have to read it, and it gets the same result. What would be different is if it got a significantly better result, which I would find interesting but haven't seen a good reason why it would.
Also, that you push Python and JavaScript makes me think you don't know many languages. Those are terrible languages to try to graft to anything. Just because you only know those 2 languages doesn't make them good choices for something like this. Learn a real language, Physicist.
I didn't push them.
> Those are terrible languages to try to graft to anything.
Web browsers, Blender, LibreOffice and Excel all use those languages for embedded scripting. They're fine.
> Just because you only know those 2 languages doesn't make them good choices for something like this.
You misunderstood my claim and are refuting something different. I said there is more training data for LLMs to use to generate Python and JavaScript, than Prolog.
By grafting the LLM into Prolog and not the other way around?
There would be MCP bindings to said server, which would be accessible upon request. The LLM would provide a message, it could even formulate Prolog statements per a structured prompt, and then await the result, and then continue.
When you add in the constraint solving extensions (CLP(Z) and CLP(B) and so on) it becomes even more powerful, since you can essentially mix vanilla Prolog code with solver tools.
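For example, here's the classic SEND+MORE=MONEY puzzle as a sketch using SWI-Prolog's library(clpfd): the solver propagates the constraints and label/1 only searches whatever is left.

:- use_module(library(clpfd)).

puzzle([S,E,N,D] + [M,O,R,E] = [M,O,N,E,Y]) :-
    Vars = [S,E,N,D,M,O,R,Y],
    Vars ins 0..9,
    all_distinct(Vars),
    S #\= 0, M #\= 0,
    S*1000 + E*100 + N*10 + D + M*1000 + O*100 + R*10 + E
        #= M*10000 + O*1000 + N*100 + E*10 + Y,
    label(Vars).

?- puzzle([S,E,N,D] + [M,O,R,E] = [M,O,N,E,Y]).
S = 9, E = 5, N = 6, D = 7, M = 1, O = 0, R = 8, Y = 2.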
Now, with that in mind, I'd like to understand how you and the OP reconcile the ability to carry out a formal proof with the inability to do reasoning. How is it not reasoning, if you're doing a proof? If a proof is not reasoning, then what is?
Yes, maybe the Prolog way means concise code where it's easier for a human to tell whether the code is a correct expression of the intent, but an LLM won't look at it like that. Whatever the formalism brings, it wasn't enough to make Prolog the language of every parser task in the last 50 years. Therefore it isn't of any particular interest or benefit, except academic.
> both acceptor and generator
Also academically interesting but practically useless due to the combinatorial explosion of "all possible valid grammars" after the utterly basic "aaaaabbbbbbbbbbbb" examples.
> "how you and the OP reconcile the ability to carry out a formal proof with the inability to do reasoning. How is it not reasoning, if you're doing a proof? If a proof is not reasoning, then what is?"
If drawing a painting is art, is it art if a computer pulls up a picture of a painting and shows it on screen? No. If a human coded the proof into a computer, the human is reasoning, the computer isn't. If the computer comes up with the proof, the computer is reasoning. Otherwise you're in a situation where dominoes falling over is "doing reasoning" because it can be expressed formally as a chain of connected events where the last one only falls if the whole chain is built properly, and that's absurd.
>> I'm not claiming that reason is incorrect, I'm handwaving it away as irrelevant and academic.
That's not a great way to have a discussion.
That is exactly what "formal logic programming" is all about. The machine is coming up with the proof for your query based on the facts/rules given by you. Therefore it is a form of reasoning.
Reasoning (cognitive thinking) is expressed as Arguments (verbal/written premises-to-conclusions), a subset of which are called Proofs (step-by-step valid arguments). Using formalization techniques we have just pushed some of those proof derivations to a machine.
I pointed this out in my other comment here https://news.ycombinator.com/item?id=45911177 with some relevant links/papers/books.
See also Logical Formalizations of Commonsense Reasoning: A Survey (from the Journal of Artificial Intelligence Research) - https://jair.org/index.php/jair/article/view/11076
Prolog: 1, 2, 3, 4, 5 ...
You and me instantly: 10^8000
I recommend this book: https://www.amazon.com/Secrets-Mental-Math-Mathemagicians-Ca...
THAT’S THE POINT!
7 * 101 = 707, then 707 * 10 = 7,070
And computers don't brute-force multiplication either, so I'm not sure how this is relevant to the comment above?
Just note: human pattern matching is not Haskell/Erlang/ML pattern matching. It doesn't go [1] through all possible matches of every possible combination of all available criteria
[1] If it does, it's the most powerful computing device imaginable.
There are hundreds of trillions of synapses in the brain, and much of what they do (IANANS) could reasonably be described as pattern matching: mostly sitting idle waiting for patterns. (Since dendritic trees perform a lot of computation (for example, combining inputs at each branch), if you want to count the number of pattern matchers in the branch you can't just count neurons. A neuron can recognise more than one pattern.)
So yes, thanks to its insanely parallel architecture, the brain is also an insanely brute force pattern matcher, constantly matching against who knows how many trillions of previously seen patterns. (BTW IMHO this is why LLMs work so well)
(I do recognise the gap in my argument: are all those neurons actually receiving inputs to match against, or are they 'gated'? But we're really just arguing about semantics of applying "brute force", a CS term, to a neural architecture, where it has no definition.)
Well, my brain perhaps. Not sure about the rest of y'all.
(a) Try adding another 100 or 1000 interlocking propositions to your problem. It will find solutions or tell you one doesn't exist. (b) You can verify the solutions yourself. You don't get that with imperative descriptions of problems. (c) Good luck sandboxing Python or JavaScript with the threat of prompt injection still unsolved.
Prolog isn't "thinking". Not about anything, not about your problem, your code, its implementation, or any background knowledge. Prolog cannot reason that your problem is isomorphic to another problem with a known solution. It cannot come up with an expression transform that hasn't been hard-coded into the interpreter which would reduce the amount of work involved in getting to a solution. It cannot look at your code, reason about it, and make a logical leap over some of the code without executing it (in a way that hasn't been hard-coded into it by the programmer/implementer). It cannot reason that your problem would be better solved with SLG resolution (tabling) instead of SLD resolution (depth first search). The point of my example being pseudo-Python was to make it clear that plain Prolog (meaning no constraint solver, no metaprogramming), is not reasoning. It's no more reasoning than that Python loop is reasoning.
If you ask me to find the largest Prime number between 1 and 1000, I might think to skip even numbers, I might think to search down from 1000 instead of up from 1. I might not come up with a good strategy but I will reason about the problem. Prolog will not. You code what it will do, and it will slavishly do what you coded. If you code counting 1-1000 it will do that. If you code Sieve of Eratosthenes it will do that instead.
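And if you code tabling, it will do that too - but you have to write the directive yourself. A sketch, in SWI-Prolog syntax:

% a cyclic graph: the left-recursive path/2 below loops forever
% under plain SLD resolution, but terminates under SLG (tabling)
edge(a, b).
edge(b, a).

:- table path/2.

path(X, Y) :- edge(X, Y).
path(X, Y) :- path(X, Z), edge(Z, Y).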
Python and Prolog are based upon completely different kinds of math. The only thing they share is that they are both Turing complete. But being Turing complete isn't a strong or complete mathematical definition of a programming language. This is especially true for Prolog which is very different from other languages, especially Python. You shouldn't even think of Prolog as a programming language, think of it as a type of logic system (or solver).
Firstly, we must set one thing straight: Prolog definitionally does reasoning. Formal reasoning. This isn't debatable, it's a simple fact. It implements resolution (a computationally friendly inference rule over computationally-friendly logical clauses) that's sound and refutation complete, and made practical through unification. Your example is not even remotely close to how Prolog actually works, and excludes much of the extra-logical aspects that Prolog implements. Stripping it of any of this effectively changes the language beyond recognition.
> Plain Prolog's way of solving reasoning problems is effectively:
No. There is no cognate to what you wrote anywhere in how Prolog works. What you have here doesn't even qualify as a forward chaining system, though that's what it's closest to given it's somewhat how top-down systems work with their ruleset. For it to even approach a weaker forward chaining system like CLIPS, that would have to be a list of rules which require arbitrary computation and may mutate the list of rules it's operating on. A simple iteration over a list testing for conditions doesn't even remotely cut it, and again that's still not Prolog even if we switch to a top-down approach by enabling tabling.
> You hard code some options
A Prolog knowledgebase is not hardcoded.
> write a logical condition with placeholders
A Horn clause is not a "logical condition", and those "placeholders" are just normal variables.
> and Prolog brute-forces every option in every placeholder.
Absolutely not. It traverses a graph proving things, and when it cannot prove something it backtracks and tries a different route, or otherwise fails. This is of course without getting into impure Prolog, or the extra-logical aspects it implements. It's a fundamentally different foundation of computation which is entirely geared towards formal reasoning.
> And that might have been nice compared to Pascal in 1975, it's not so different to modern garbage collected high level scripting languages.
It is extremely different, and the only reason you believe this is because you don't understand Prolog in the slightest, as indicated by the unsoundness of essentially everything you wrote. Prolog is as different from something like Javascript as a neural network with memory is.
> "A Prolog knowledgebase is not hardcoded."
No, it can be asserted and retracted, or consult a SQL database or something, but it's only going to search the knowledge the LLM told it to. In that sense there is no benefit for an LLM to emit Prolog over Python, since it could emit the facts/rules/test cases/test conditions in any format it likes; it doesn't have any attraction to concise, clean, clear, expressive output.
> "those "placeholders" are just normal variables"
Yes, just normal variables - and not something magical or special that Prolog has that other languages don't have.
> "Absolutely not. It traverses a graph proving things,"
Yes, though, it traverses the code tree by depth first walk. If the tree has no infinite left-recursion coded in it, that is a brute force walk. It proves things by ordinary programmatic tests that exist in other languages - value equality, structure equality, membership, expression evaluation, expression comparison, user code execution - not by intuition, logical leaps, analogy, flashes of insight. That is, not particularly more useful than other languages which an LLM could emit.
> "Your example is not even remotely close to how Prolog actually works"
> "There is no cognate to what you wrote anywhere in how Prolog works"
> "It is extremely different"
Well:
parent(timmy, sarah).
person(brian).
person(anna).
person(sarah).
person(john).
?- person(X), writeln(X), parent(timmy, X).
brian
anna
sarah
X = sarah
That's a loop over the people, filling in the variable X. Prolog is not looking at Ancestry.com to find who Timmy's parents are. It's not saying "ooh you have a SQLite database called family_tree I can look at". That it's doing it by a different computational foundation doesn't seem relevant when that's used to give it the same abilities.

My point is that Prolog is "just" a programming language, and not the magic that a lot of people feel like it is, and therefore is not going to add great new abilities to LLMs that haven't been discovered because of Prolog's obscurity. If adding code to an LLM would help, adding Python to it would help. If that's not true, that would be interesting - someone should make that case with details.
> "and the only reason you believe this is because you don't understand Prolog in the slightest"
This thread would be more interesting to everybody if you and hunterpayne would stop fantasizing about me, and instead explain why Prolog's fundamentally different foundation makes it a particularly good language for LLMs to emit to test their other output - given that they can emit virtually endless quantities of any language, custom writing any amount of task-specific code on the fly.
You say:
>> Yes, though, it traverses the code tree by depth first walk.
Here's what I suggest: try to think what, exactly, is the data structure searched by Depth First Search during Prolog's execution.
You'll find that this structure is what we call an SLD-tree. That's a tree where the root is a Horn goal that begins the proof (i.e. the thing we want to dis-prove, since we're doing a proof by refutation); every other node is a new goal derived during the proof; every branch is a Resolution step between one goal and one definite program clause from a Prolog program; and every leaf of a finite branch is either the empty clause, signalling the success of the proof by refutation, or a non-empty goal that cannot be further reduced, which signals the failure of the proof. So that's basically a proof tree, and the search is ... a proof.
So Prolog is not just searching a list to find an element, say. It's searching a proof tree to find a proof. It just so happens that searching a proof tree to find a proof corresponds to the execution of a program. But while you can use a search to carry out a proof, not every search is a proof. You have to get your ducks in a row the right way around otherwise, yeah, all you have is a search. This is not magick, it's just ... computer science.
It should go without saying that you can do the same thing with Python, or with javascript, or with any other Turing-complete language, but then you'd basically have to re-invent Prolog, and implement it in that other language; an ad-hoc, informally specified, bug-ridden and slow implementation of half of Prolog, most likely.
This is all without examining whether you can fix LLMs' lack of reasoning by funneling their output through a Prolog interpreter. I personally don't think that's a great idea. Let's see, what was that soundbite... "intelligence is shifting the test part of generate-test into the generate part" [1]. That's clearly not what pushing LLM output into a Prolog interpreter achieves. Clearly, if good, old-fashioned symbolic AI has to be combined with statistical language modelling, that has to happen much earlier in the statistical language modelling process. Not when it's already done and dusted and we have a language model; which is only statistical. Like putting the bubbles in the soda before you serve the drink, not after, the logic has to go into the language modelling before the modelling is done, not after. Otherwise there's no way I can see that the logic can control the modelling. Then all you have is generate-and-test, and it's meh as usual. Although note that much recent work on carrying out mathematical proofs with LLMs does exactly that, e.g. like DeepMind's AlphaProof. Generate-and-test works, it's just dumb and inefficient and you can only really make it work if you have the same resources as DeepMind and equivalent.
_____________
[1] Marvin Minsky via Rao Kambhampati and students: https://arxiv.org/html/2504.09762v1
The way to look at this is first to pin down what we mean when we say Human Commonsense Reasoning (https://en.wikipedia.org/wiki/Commonsense_reasoning). Obviously this is quite nebulous and cannot be defined precisely but OG AI researchers have a done a lot to identify and formalize subsets of Human Reasoning so that it can be automated by languages/machines.
See the section Successes in automated commonsense reasoning in the above wikipedia page - https://en.wikipedia.org/wiki/Commonsense_reasoning#Successe...
Prolog implements a language to logically interpret only within a formalized subset of human reasoning mentioned above. Now note that all our scientific advances have come from our ability to formalize, and thus automate, what was previously only heuristics. Thus if I were to move more of real-world heuristics (which is what a lot of human reasoning consists of) into some formal model, then Prolog (or, say, LLMs) can be made to better reason about it.
See the paper Commonsense Reasoning in Prolog for some approaches - https://dl.acm.org/doi/10.1145/322917.322939
Note however that the paper beautifully states at the end:
Prolog itself is all form and no content and contains no knowledge. All the tasks, such as choosing a vocabulary of symbols to represent concepts and formulating appropriate sentences to represent knowledge, are left to the users and are obviously domain-dependent. ... For each particular application, it will be necessary to provide some domain-dependent information to guide the program writing. This is true for any formal languages. Knowledge is power. Any formalism provides us with no help in identifying the right concepts and knowledge in the first place.
So Real-World Knowledge encoded into a formalism can be reasoned about by Prolog. LLMs claim to do the same on unstructured/non-formalized data which is untenable. A machine cannot do "magic" but can only interpret formalized/structured data according to some rules. Note that the set of rules can be dynamically increased by ML but ultimately they are just rules which interact with one another in unpredictable ways. Now you can see where Prolog might be useful with LLMs. You can impose structure on the view of the World seen by the LLM and also force it to confine itself only to the reasoning it can do within this world-view by asking it to do predominantly Prolog-like reasoning but you don't turn the LLM into just a Prolog interpreter. We don't know how it interacts with other heuristics/formal reasoning parts (eg. reinforcement learning) of LLMs but does seem to give better predictable and more correct output. This can then be iterated upon to get a final acceptable result.
PS: You might find the book Thinking and Deciding by Jonathan Baron useful for background knowledge - https://www.cambridge.org/highereducation/books/thinking-and...
Could you expand on what the point is? That the author's opinion, without much justification, is that this is not reasoning?
Imagine a medical doctor or a lawyer. At the end of the day, their entire reasoning process can be abstracted into some probabilistic logic program which they synthesize on-the-fly using prior knowledge, access to their domain-specific literature, and observed case evidence.
There is a growing body of publications exploring various aspects of synthesis, e.g. references included in [1] are a good starting point.
[1] https://proceedings.neurips.cc/paper_files/paper/2024/file/8...
I believe the ontology was indeed implemented in Prolog but I forget the architecture details.
______________
[1] https://en.wikipedia.org/wiki/Frame_(artificial_intelligence...
There are definitely people researching ideas here. For my own part, I've been doing a lot of work with Jason[1], a very Prolog like logic language / agent environment with an eye towards how to integrate that with LLMs (and "other").
Nothing specific / exciting to share yet, but just thought I'd point out that there are people out there who see potential value in this sort of thing and are investigating it.
1. web devs are scared of it.
2. not enough training data?
I do remember having to wrestle to get prolog to do what I wanted but I haven't written any in ~10 years.
Generally speaking, all the languages they know are pretty similar to each other. Bolting on lambdas isn't the same as doing pure FP. Also, anytime a problem comes up where you would actually need a weird language based upon different math, those problems will be assigned to some other kind of developer (probably one with a really strong CS background).
(Of course this is an overgeneralization, since obviously, there are web developers, who do still remember how to do things in HTML, CSS and, of course JS.)
Think of it this way. In Python and JavaScript you write code, and to test if it's correct you write unit test cases.
A Prolog program is basically a bunch of test cases/unit test cases; you write them, and then tell the Prolog compiler: 'write code that passes these test cases'.
That is, you are writing the program specification, or tests that if pass would represent solution to the problem. The job of the compiler to write the code that passes these test cases.
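A minimal sketch of that idea with the standard append/3: the same "specification" both checks a given answer and searches for one.

% as a test: does [1,2] ++ [3] equal [1,2,3]?
?- append([1,2], [3], [1,2,3]).
true.

% as a solver: which splits satisfy the same spec?
?- append(X, Y, [1,2,3]).
X = [], Y = [1,2,3] ;
X = [1], Y = [2,3] ;
X = [1,2], Y = [3] ;
X = [1,2,3], Y = [].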
This is a tokenization issue, not an LLM issue.
I still think that Prolog should be mandatory for every programmer. It opens up the mind in such a logical way... Love it.
Unfortunately, I never found an opportunity in my 11 years since then to use it in my professional practice. Or maybe I just missed the opportunities?????
tag(TypeOfTag, ParentFunction, Line).
Type of tag indicating things like an unnecessary function call, unidiomatic conditional, etc.
I then used the REPL to pull things apart, wrote some manual notes, and then consulted my complete knowledgebase to create an action plan. Pretty classical expert system stuff. Originally I was expecting the bug fixing effort to take a couple of months. 10 days of Prolog code + 2 days of Prolog interaction + 3 days of sepples weedwacking and adjusting what remained in the plugboard.
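To give a flavour, here's a made-up sketch of the kind of query that knowledgebase supports (hotspot/2 and the threshold are my invention, not the original code):

% rank parent functions by how many issue tags they accumulated
hotspot(Function, Count) :-
    setof(F, T^L^tag(T, F, L), Functions),
    member(Function, Functions),
    aggregate_all(count, tag(_, Function, _), Count),
    Count > 5.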
But some parts, like the cut operator, are something I've copied several times over for various things. A couple of prototype parser generators for example - allowing backtracking, but using a cut to indicate when backtracking is an error can be quite helpful.
Elmore Leonard, on writing. But he might as well have been talking about the cut operator.
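Something like this, as a sketch (a toy command parser in DCG notation; once the leading keyword matches, the cut commits to that rule, so a failure past it means malformed input rather than an invitation to try the next rule):

command(greet(Name)) --> [hello], !, [Name].
command(add(X, Y))   --> [add], !, [X], [Y].

?- phrase(command(C), [hello, world]).
C = greet(world).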
At uni I had assignments where we were simply not allowed to use it.
I don't think I ever learned how it can be useful other than feeding the mind.
It's a mind-bending language and if you want to experience the feeling of learning programming from the beginning again this would be it
But I think I had the most difficulty designing the interface between the logic code and Dart. I ended up with a way to add "Dart-defined relations", where you provide relations backed dynamically by your ECS or database. State stays in imperative land, rules stay in logic land.
Testing on Queens8, SWI is about 10,000 times faster than my implementation. It's a work of art! But it doesn't have the ease of use in my game dev context as a simple Dart library does.
Do you have a different use case? I would be open to sharing it on a project- or time-limited basis in exchange for bug reports and feature requests.
Another way of getting stuff done would be to use another programming language with its standard library (with regex, networking, json, ...) and embed or call Prolog code for the pure logic stuff.
Never again :D
I believe your case (and that of many other students) is that you couldn't abstract yourself away from imperative programming (Python) into logic programming (Prolog).
PS Prolog is a Horn clause solver. You characterizing it as a query language for a graph database, well it doesn't put you in the best light. It makes it seem like you don't understand important foundational CS math concepts.
I'm using SQL to do SQL things. And I'm sure when I somehow encounter the 1% of problems that prolog is the right fit for I'd be delighted to use it. However doing general algorithms in Prolog is as misguided as in SQL.
I'm not. I'm pointing out that saying a Horn clause interpreter is a graph query language indicates a fundamental misunderstanding on your part. Prolog handles anything you want to say in formal logic very well (at the cost of not doing anything else well).
SQL on the other hand uses a completely different mathematical framework (relational algebra and set theory). This allows really effective optimization and query planning on top of a DB kernel.
A graph DB query language on the other hand should be based upon graph theory. Which is another completely different mathematical model. I haven't been impressed by the work in this area. I find these languages are too often dialects of SQL instead of a completely different thing based upon the correct mathematical model.
PS I used to write DBs. Discretion is the better part of valor here.
let prologBlob = new ProLog()
prologBlob.add("a => b").add("b => c")
prologBlob.query("a == c?") === true
(not exactly that, but hopefully you get the gist)

There's so much stuff regarding constraints, access control, relationship queries that could be expressed "simply" in prolog, and being able to extract out those interior bits for further use in your more traditional programming language would be really helpful! (...at least in my imagination ;-)
pl.consult("some-facts.pl")
pl.assertz("new(fact)")
while pl.Query(...).nextSolution():
    print(X.value)
...will definitely keep it in my back pocket!

https://pypi.org/project/janus-swi/
https://www.swi-prolog.org/pldoc/man?section=janus-call-prol...
https://play.flix.dev/?q=PQgECUFMBcFcCcB2BnUBDUBjA9gG15JtAJb...
is an embedded Datalog DB and query in a general-purpose programming language.
More examples on https://flix.dev/
Ironically, the most common way I have seen people do this is use an embedded LISP interpreter, in which a small PROLOG is easily implemented.
https://www.metalevel.at/lisprolog/ suggests Lisprolog (Here are some embedded LISPs: ECL, PicoLisp, tulisp)
SWI-Prolog can also be linked against C/C++ code: https://stackoverflow.com/questions/65118493/is-there-any-re... https://sourceforge.net/p/gprolog/code/ci/457f7b447c2b9e90a0...
Racklog is an embedded PROLOG for Racket (Scheme): https://docs.racket-lang.org/racklog/
uKanren is conceptually small and simple, here's a Ruby implementation: https://github.com/jsl/ruby_ukanren
Which is separate from the actual types in the code.
Which is separate from the deployment section of the docs.
I see this sentiment a lot lately. A sense of missed nostalgia.
What happened?
In 20 years, will people reminisce about JavaScript frameworks and reminisce how this was an ideal world??
A side one is that the LISP ecology in the 80s was hostile to "working well with others" and wanted to have their entire ecosystem in their own image files. (which, btw, is one of the same reasons I'm wary of Rust cough)
Really, it's only become open once more with the rise of WASM, systemic efficiency of computers, and open source tools finally being pretty solid.
Had never heard of it before, and this is first I'm hearing of it since.
Also had other cool old shit, like CIB copies of Borland Turbo Pascal 6.0, old Maxis games, Windows 3.1
Learn Prolog Now - https://news.ycombinator.com/item?id=9246897 - March 2015 (72 comments)
Learn Prolog now - https://news.ycombinator.com/item?id=1976127 - Dec 2010 (31 comments)
No.
But the true power is unlocked once the underlying libraries are implemented in a way that surpasses the performance that a human can achieve.
Since implementation details are hidden, caches and parallelism can be added without the programmer noticing anything else than a performance increase.
This is why SQL has received a boost the last decade with massively parallel implementations such as BigQuery, Trino and to some extent DuckDB. And what about adding a CUDA backend?
But all this comes at a cost and needs to be planned so it is only used when needed.
Professionals write Prolog by focusing on the predicates and relations and leaving the execution flow to the interpreter. They also use the Constraint Logic Programming extensions (like clpfd) which use smart, external algorithms to solve problems instead of relying on Prolog's relatively "dumb" brute-force search, which is what typically leads to the "exploding brain" effect in complex code.
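A small illustration of that style, again with SWI-Prolog's library(clpfd): the constraints prune the domains before label/1 does any search at all, and the same relations work in every direction.

?- use_module(library(clpfd)).
?- X + Y #= 10, X #> Y, [X,Y] ins 0..9, label([X,Y]).
X = 6, Y = 4 ;
X = 7, Y = 3 ;
...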
--- Worth mentioning here is that I wrote Prolog all on my own in 1979. On top of Nokolisp of course. There was no other functioning Prolog at that time I knew about.
Thereafter I have often planned "Infinity-Prolog" which can solve impossible problems with lazy evaluation.
I just learned from @grok that this Constraint Logic is basically what I was aiming at.
Advanced Turbo prolog - https://archive.org/details/advancedturbopro0000schi/mode/2u...
Prolog programming for artificial intelligence - https://archive.org/details/prologprogrammin0000brat_l1m9/mo...
In the end though, it mostly just feels enough of a separate universe to any other language or ecosystem I'm using for projects that there's a clear threshold for bringing it in.
If there was a really strong prolog implementation with a great community and ecosystem around, in say Python or Go, that would be killer. I know there are some implementations, but the ones I've looked into seem to be either not very full-blown in their Prolog support, or have close to non-existent usage.
In Prolog, anything that can't be inferred from the knowledge base is false. If nothing about "playsAirGuitar(mia)" is implied by the knowledge base, it's false. All the facts are assumed to be given; therefore, if something isn't given, it must be false.
Predicate logic is the opposite: if I can't infer anything about "playsAirGuitar(mia)" from my axioms, it might be true or false. Its truth value is unknown. It's true in some models of the axioms, and false in others. The statement is independent of the axioms.
Deductive logic assumes an open universe, Prolog a closed universe.
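Concretely, a sketch: given only this one fact, Prolog's "false" for mia means "not provable", and negation as failure (\+) then succeeds.

playsAirGuitar(jody).

?- playsAirGuitar(mia).
false.

?- \+ playsAirGuitar(mia).
true.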
Prolog’s Closed-World Assumption: A Journey Through Time - https://medium.com/@kenichisasagawa/prologs-closed-world-ass...
Is Prolog really based on the closed-world assumption? - https://stackoverflow.com/questions/65014705/is-prolog-reall...
I think there should be room for three values there: true, unprovable, false, where false things are also unprovable. I wonder if Prolog has false, defined as a "yes" for the opposite.
I don't think so, because in this case both x and not-x could be "no", but I think in Prolog, if x is "no", not-x is "yes", even if neither is known to be true. It's not a three-valued logic that doesn't adhere to the law of the excluded middle.
"Yes" is not "true" but rather "provably true". And "no" is not "false" but rather "not provably true".
Third sensible value in this framework (which I think Prolog doesn't have) would be "false" meaning "it's provably false" ("the opposite of it is provably true").
To be frank I think Prolog in newer implementations completely abandoned this nuance and just call states "true" and "false" instead of "yes" and "no".
“The little prover” is a fantastic book for that. The whole series is.
In fact, in recent years people have started contributing again and are rediscovering the merits.
You can have a module written in the `#lang racket` language (i.e., regular Racket) and then a separate module written in `#lang datalog` and the two can talk to each other!
Prolog has many implementations and you don't have the same wealth of libraries, but yes, it's Turing complete and not of the "Turing tarpit" variety, you could reasonably write entire applications in SWI-Prolog.
I think lua is the much better language for a wide variety of reasons (Most of the good Python libraries are just wrappers around C libraries, which is necessary because Python's FFI is really substandard), but I wouldn't reach for python or lua if I'm expecting to write more than 1000 lines of code. They both scale horribly.
PS The big companies that actually make the LLMs don't use Python (anymore). It's a lousy language for ML/AI. It's designed to script Linux GUIs and automate tasks. It started off as a Perl replacement after all. And this isn't a slight on the folks who write Python itself. It is a problem for all the folks who insist on slamming it into all sorts of places it isn't well suited, because they won't learn any CS.
Its ease of use and deployment give it a lot more staying power.
The syntax is also pretty nice.
[0] Modulo that Python et al almost certainly have order(s) of magnitude more external libraries etc.
Don't threaten me with a good time
Earlier, Prolog was used in AI/Expert Systems domains. Interestingly it was also used to model Requirements/Structured Analysis/Structured Design and in Prototyping. These usages seem interesting to me since there might be a way to use these techniques today with LLMs to have them generate "correct" code/answers.
For Prolog and LLMs see - https://news.ycombinator.com/item?id=45712934
Some old papers/books that i dug up and seem relevant;
Prototyping analysis, structured analysis, Prolog and prototypes - https://dl.acm.org/doi/10.1145/57216.57230
Prolog and Natural Language Analysis by Fernando C. N. Pereira and Stuart M. Shieber (free digital edition) - http://www.mtome.com/Publications/PNLA/pnla.html
The Application of Prolog to Structured Design - https://www.researchgate.net/publication/220281904_The_Appli...
SWI Prolog (specifically, see [2] again) is a high level interpreted language implemented in C, with an FFI to use libraries written in C[1], shipping with a standard library for HTTP, threading, ODBC, desktop GUI, and so on. In that sense it's very close to Python. You can do everyday ordinary things with it, like compute stuff, take input and output, serve HTML pages, process data. It starts up quickly, and is decently performant within its peers of high level GC languages - not v8 fast but not classic Java sluggish.
In other senses, it's not. The normal Algol-derivative things you are used to (arithmetic, text, loops) are clunky and weird. It's got the same problem as other declarative languages - writing what you want is not as easy as it seemed like it was going to be, and performance involves contorting your code into forms that the interpreter/compiler is good with.
It's got the problems of functional languages - everything must be recursion. Having to pass the whole world state in and out of things. Immutable variables and datastructures are not great for performance. Not great for naming either, temporary variable names all over.
It's got some features I've never seen in other languages - the way the constraint logic engine just works with normal variables is cool. Code-is-data-is-code is cool. Code/data is metaprogrammable in a LISP macro sort of way. New operators are just another predicate. Declarative Grammars are pretty unique.
The way the interpreter will try to find any valid path through your code - the thing which makes it so great for "write a little code, find a solution" - makes it tough to debug why things aren't working. And hard to name things, code doesn't do things it describes the relation of states to each other. That's hard to name on its own, but it's worse when you have to pass the world state and the temporary state through a load of recursive calls and try to name that clearly, too.
This is fun:
countdown(0) :-
    write("finished!").
countdown(X) :-
    writeln(X),
    countdown(X-1).
It's a recursive countdown. There are no deliberate typos in it, but it won't work. The reason why is subtle: that code is doing something you can't do as easily in Python. It's passing the Prolog source code expression X-1 into the recursive call, not the result of evaluating X-1 at runtime. That's how easy metaprogramming and code-generation is! That's why it's a fun language! That's also how easy it is to trip over "the basics" you expect from other languages.

It's full of legacy, even more than Python is. It has a global state - the Prolog database - but it's shunned. It has two or three different ways of thinking about strings, and it has atoms. ISO Prolog doesn't have modules, but different implementations of Prolog do have different implementations of modules. Literals for hashtables are contentious (see [2] again). Same for object orientation, standard library predicates, and more.
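Going back to the countdown above, a minimal fix for contrast: force arithmetic evaluation with is/2 (and guard the second clause so it can't run past zero).

countdown(0) :-
    writeln("finished!").
countdown(X) :-
    X > 0,
    writeln(X),
    X1 is X - 1,    % is/2 evaluates X-1 now, instead of passing the term X-1
    countdown(X1).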
The same toplevel also runs from 'node'.
You can create a logical equivalent of the cut operator in Fortran if you wanted to, but there's no native mechanism or operator to rely on. The languages possess the same computing "power"; the difference is not in what they can compute, which is your claim with "there are things you can't express in Fortran but you can in Prolog" (utter nonsense). Anything you can get a Prolog program to do, you can get a Fortran program to do (and vice versa).
In isolation, no you can't. You could implement a Prolog interpreter in Fortran, however, and if you did that, you would be able to write a cut operator, because then you are interacting directly with Prolog's machinery. Part of the definition of the cut operator involves changing how code around it behaves. You can't do this with Fortran (or other languages) normally. Then there is the entire concept of backtracking, which isn't native in any other language (that I know of).
You could probably make a very poor cut operator in a language with an Any/Object type and casting, but why would you. You are not wrong about the math. But you are ignoring the absurd amount of code you would have to write to do it. It's a bit hand-wavy to say that because you can implement Prolog in a language, that language is just as powerful. That is mathematically correct, but in practice it really isn't.
-- from "The Valley of Fear" by Arthur Conan Doyle.
"Sometimes, when you introduce Prolog in an organization, people will dismiss the language because they have never heard of anyone who uses it. Yet, a third of all airline tickets is handled by systems that run SICStus Prolog. NASA uses SICStus Prolog for a voice-controlled system onboard the International Space Station. Windows NT used an embedded Prolog interpreter for network configuration. New Zealand's dominant stock broking system is written in Prolog and CHR. Prolog is used to reason about business grants in Austria."
Some other notable real projects using Prolog are TerminusDB, the PLWM tiling window manager, GeneXus (which is a kind of a low-code platform that generated software from your requirements before LLMs were a thing), the TextRazor scriptable text-mining API. I think this should give you a good idea of what "Prolog-shaped" problems look like in the real world.
In the words of a colleague responsible for said reports it 'eliminated the need for 50+ people to fill timesheets, saves 15 min x 50 people x 52 weeks per year'
It has been (and still is) in use for 10+years already. I'd say 90% of the current team members don't even know the team used to have to "punch a clock" or fill timesheets way back.
And then there are declarative languages like Prolog.
are you saying you can’t comprehend prolog programs?