We (TypeScript) used to do this for Visual Studio prior to tmLanguage. It was nice because we didn't have to write a second parser. Our parser was already error-tolerant and incremental, and syntax highlighting just involved descending into the syntax tree's tokens. So there was no room for divergence bugs in parsers, and there was also no need to figure out how to encode oddities and ambiguity-breaking logic in limited formats like tmLanguage.
This all predated TSServer (which predated LSP, though that's coming in TypeScript 7). The latency for syntax highlighting over JSON was too much, and other editors often didn't make syntax highlighting available outside of tmLanguage anyway. Eventually semantic highlighting became a thing, which is more latency-tolerant, and overlays colors on top of a syntactic highlighter in VS Code.
The other issue with this approach was that we still needed a dedicated thread just for fast syntax highlighting. That thread was a separate instance of the JS language service without anything shared, so that was a decent amount of memory overhead just for syntax highlighting.
> Language servers are powerful because they can hook into the language’s runtime and compiler toolchain to get semantically correct answers to user queries. For example, suppose you have two versions of a pop function, one imported from a stack library, and another from a heap library. If you use a tool like the dumb-jump package in Emacs and you use it to jump to the definition for a call to pop, it might get confused as to where to go because it’s not sure what module is in scope at the point. A language server, on the other hand, should have access to this information and would not get confused.
You are correct that a language server will generally provide correct navigation/autocomplete, but a language server doesn’t necessarily need to hook into an existing compiler: a language server might be a latency-sensitive re-implementation of an existing compiler toolchain (rust-analyzer is the one I’m most familiar with, but the recent crop of new language servers tend to take this direction if the language’s compiler isn’t query-oriented).
> It is possible to use the language server for syntax highlighting. I am not aware of any particularly strong reasons why one would want to (or not want to) do this.
Since I spend a lot of time writing Rust, I’ll use Rust as an example: you can highlight a binding differently if it’s mutable, or style an enum and a struct differently. It’s one of those small things that makes a big impact once you get used to it: editors without semantic syntax highlighting (as it is called in the LSP specification) feel naked to me.
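A tiny, made-up Rust file showing the kinds of distinctions that are invisible at the purely syntactic level:

```rust
// Every name here is just an "identifier" to a syntactic highlighter; a
// language server can style each one by what it actually *is*.
struct Meters(f64); // struct name: one style

enum Shape { // enum name: can be styled differently from a struct
    Circle { radius: Meters },
    Square { side: Meters },
}

fn area(shape: &Shape) -> f64 {
    let scale = 1.0; // immutable binding
    let mut total = 0.0; // mutable binding: rust-analyzer can underline/bold it
    total += match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius.0 * radius.0,
        Shape::Square { side } => side.0 * side.0,
    };
    total * scale
}

fn main() {
    println!("{}", area(&Shape::Circle { radius: Meters(2.0) }));
}
```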
Looking at this, I realized how long it's been since I saw a new IDE feature that really made me more productive at understanding code. The last one I can remember is parameter inlay hints. It's a bummer: both the JetBrains IDEs and VS Code seem to focus only on AI features I don't want, to the detriment of everything else.
Wow! That is an incredibly good reason. Thank you very much for telling me something I didn’t know. :)
UPDATE: I've added a paragraph talking about what rust-analyzer can do. Thank you again!
It's surprisingly useful to know whether you're working with an entity that you made.
Then there are languages like Rust that are like, welp, we already use the fastest language, but compilation is still slow, so they have to resort to solutions like rust-analyzer.
It's not really a bad thing. IDEs want results ASAP, so a solution should focus on latency; query-based compilers can compile just enough of the source to answer a specific query, so they're a good fit.
Compiling a binary means compiling everything, though, so "compiling just the smallest amount of source for a query" isn't really a goal there; instead you want to optimise for throughput, where stuff like batching is a win.
These aren't language-specific improvements; they're recognition that the two tasks are related but have different goals.
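A hedged sketch of what "query-based" means in practice (the names and the caching policy here are invented; real systems like salsa, which rust-analyzer builds on, also track dependencies so an edit invalidates only the affected queries):

```rust
use std::collections::HashMap;

// Toy query-based "compiler": each query memoizes its result, so an IDE
// request touches only the inputs it actually needs, not the whole project.
struct Db {
    source: HashMap<String, String>,     // file name -> text
    parse_cache: HashMap<String, usize>, // stand-in for a cached parse tree
}

impl Db {
    fn parse(&mut self, file: &str) -> usize {
        if let Some(&cached) = self.parse_cache.get(file) {
            return cached; // repeated queries don't re-parse
        }
        let tree = self.source[file].len(); // pretend this is real parsing
        self.parse_cache.insert(file.to_string(), tree);
        tree
    }

    // An IDE query pulls on just the one file it needs.
    fn hover_info(&mut self, file: &str) -> String {
        format!("{} bytes of syntax", self.parse(file))
    }
}

fn main() {
    let mut db = Db {
        source: HashMap::from([("main.rs".into(), "fn main() {}".into())]),
        parse_cache: HashMap::new(),
    };
    println!("{}", db.hover_info("main.rs")); // parses once
    println!("{}", db.hover_info("main.rs")); // served from the cache
}
```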
The difference between regular parsers and tree-sitter is that regular parsers start eating tokens from the start of the file and try to assemble an AST from that; the AST is built from the top down.
Tree-sitter works differently: it can pick up tokens from an arbitrary point and assemble them into AST nodes, then tries to extend the AST until the whole file is parsed.
This method supports incremental edits (you can throw away the AST for the modified part and re-parse just that), but the problem is that most languages are designed to be unambiguous when parsed left to right, and parsing them from an arbitrary point might involve some retries and guesswork.
Also, unlike modern languages like Go, which are designed to be parseable without any semantic analysis, a lot of older languages don't have this property: notably, C/C++ needs a symbol table just to parse. In such cases, tree-sitter has to guess, and it might guess wrong.
As for what you can and can't do with an AST: you can tell if something is a function call, a variable reference, or any other piece of syntax. But if you write something like `x = 2;`, tree-sitter has no idea what `x` is. Is it a float or an int? Is it a local, a class variable, or a global? You can tell this from the symbol table the compiler uses to resolve symbols, but tree-sitter can't do that for you.
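You can see this by dumping what tree-sitter actually produces. A sketch using the Rust bindings (exact crate APIs shift between versions, so treat the signatures as approximate):

```rust
// Assumed Cargo.toml deps: tree-sitter and tree-sitter-c (versions must match).
fn main() {
    let mut parser = tree_sitter::Parser::new();
    parser
        .set_language(&tree_sitter_c::language())
        .expect("grammar/library version mismatch");
    let tree = parser.parse("x = 2;", None).unwrap();
    // Prints purely syntactic structure: an expression_statement holding an
    // assignment_expression over an identifier and a number_literal.
    // Nothing here says whether `x` is an int, a float, or even declared.
    println!("{}", tree.root_node().to_sexp());
}
```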
Hmm, the strong reason could be latency and layout stability. Tree-sitter parses on the main thread (or a close worker) typically in sub-ms timeframes, ensuring that syntax coloring is synchronous with keystrokes. LSP semantic tokens are asynchronous by design. If you rely solely on LSP for highlighting, you introduce a flash of unstyled content or color-shifting artifacts every time you type, because the round-trip to the server (even a local one) and the subsequent re-tokenization takes longer than the frame budget.
The ideal split could be something like: tree-sitter provides the high-speed lexical coloring (keywords, punctuation, basic structure) instantly, and LSP paints the semantic modifiers (interfaces vs classes, mutable vs const) asynchronously, say 200ms later. Relying on LSP for the base layer makes the editor feel sluggish.
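In sketch form, assuming a hypothetical editor loop (the 200ms is just a stand-in for the server round-trip):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    // "Language server": slow, off-thread, eventually sends richer tokens.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(200)); // simulated round-trip
        tx.send("semantic modifiers (mutable, interface, const, ...)").unwrap();
    });

    // "Editor": paints the cheap syntactic layer in the same frame...
    println!("paint base layer: keywords/punctuation via tree-sitter");

    // ...then overlays the semantic colors whenever they arrive.
    if let Ok(layer) = rx.recv() {
        println!("overlay: {layer}");
    }
}
```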
Tree-sitter has okay error recovery, and that, along with speed (as you mentioned) and its flexible query language, makes it a winner both for quickly iterating on a working parser and, obviously, for integration into an actual editor.
Oh, and some language servers use tree-sitter to parse.
One of the designers/architects of 'Roslyn' here, the semantic analysis engine that powers the C#/VB compilers, VS IDE experiences, and our LSP server.
Note: for Roslyn, we aim for microsecond (not millisecond) parsing. Even for very large files, where the initial parse may take milliseconds, we have an incremental parser design (https://github.com/dotnet/roslyn/blob/main/docs/compilers/De...) that makes 99.99+% of edits happen in microseconds while reusing 99.99+% of syntax nodes, and that also produces an independent, immutable tree (thus ensuring no threading concerns when sharing these trees out to concurrent consumers).
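The node-reuse idea can be sketched in a few lines (hypothetical, nothing like Roslyn's actual data structures): because nodes are immutable, two tree versions can safely share everything outside the edited region.

```rust
use std::sync::Arc;

enum Node {
    Token(String),
    Branch(Vec<Arc<Node>>),
}

fn main() {
    let signature = Arc::new(Node::Token("fn main()".into()));
    let body_v1 = Arc::new(Node::Token("{ 1 }".into()));
    let tree_v1 = Node::Branch(vec![signature.clone(), body_v1]);

    // Edit inside the body: only that node is rebuilt; the rest is reused
    // by bumping a refcount rather than re-parsing.
    let body_v2 = Arc::new(Node::Token("{ 2 }".into()));
    let tree_v2 = Node::Branch(vec![signature.clone(), body_v2]);

    if let (Node::Branch(a), Node::Branch(b)) = (&tree_v1, &tree_v2) {
        // Same allocation shared by both trees; safe since nodes never mutate.
        assert!(Arc::ptr_eq(&a[0], &b[0]));
    }
}
```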
> you introduce a flash of unstyled content or color-shifting artifacts every time you type, because the round-trip to the server (even a local one) and the subsequent re-tokenization takes longer than the frame budget.
This would indicate a serious problem somewhere.
It's also no different from any modern UI stack. A modern UI stack would never want external code coming in that could block it, so all potentially unbounded processing work happens off the UI thread, ensuring that that thread is always responsive.
Note that "because the round-trip to the server (even a local one)" is no different from round-tripping to a processing thread. Indeed, in Visual Studio that is how it works as we have no need to run our server in a separate process space. Instead, the LSP server itself for roslyn simply runs in-process in VS as a normal library. No different than any other component that might have previously been doing this work.
> Relying on LSP for the base layer makes the editor feel sluggish.
It really should not. Note: this does take some amount of smart work. For example, in Roslyn's classification system we have a cascading set of classifying threads: one that classifies lexically, one for syntax, one for semantics, and finally one for embedded languages (imagine embedded regex/JSON, or even C# nested in C#). And, of course, these embedded languages have cascading classification as well :D
Note that this concept is used in other places in LSP as well. For example, our diagnostics server computes compiler syntax diagnostics, compiler semantic diagnostics, and 3rd-party analyzer diagnostics separately.
This approach has several benefits. First, we can scale up with the capabilities of the machine: if there are free cores, we can put them to work computing less relevant data concurrently. Second, as results for some operation are computed, they can be displayed to the user without having to wait for the rest to finish. Being fine-grained means the UI can appear crisp and responsive, while slower operations take longer but eventually appear.
For example, compiler syntax diagnostics generally take microseconds, while 3rd-party analyzer diagnostics might take seconds. No point in stalling the former while waiting for the latter to run. LSP makes multiplexing this stuff easy.
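Generic sketch of the cascading idea (not Roslyn's code; stage costs are invented): each stage publishes as soon as it finishes, so fast results are never stalled behind slow ones.

```rust
use std::{sync::mpsc, thread, time::Duration};

fn main() {
    let (tx, rx) = mpsc::channel();

    for (stage, cost_ms) in [
        ("lexical", 0u64),
        ("syntactic", 1),
        ("semantic", 40),
        ("3rd-party analyzers", 2000),
    ] {
        let tx = tx.clone();
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(cost_ms)); // pretend work
            tx.send(stage).unwrap();
        });
    }
    drop(tx); // the receive loop ends once every stage has reported

    // The UI applies each layer as it lands instead of waiting for all four.
    for stage in rx {
        println!("apply {stage} results");
    }
}
```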
So when we just have AI write it, it means we've avoided the thinking part, and so the written article will be much less useful to the reader because there's no actual distillation of thought.
Using voice-to-article is a little better, and I do find that talking out a thought helps me see its problems, but writing it out seems to do better.
There's also the problem that while it's easy to detect AI writing, it's hard to tell the difference between someone who thought it out by talking and had AI write it versus someone who did little thinking and still had AI write it. So as soon as you smell the whiff of AI writing, the reasonable expectation is that there's less distillation of thought.
If we know the text is hand-authored, then we have a signal that at least one person believed the content was important enough to put meaningful effort into creating it. That's a sign it might be worth reading.
If it's LLM-authored, then it might still be useful, or it might be complete garbage. It's hard to tell because we don't know if even the "author" was willing to invest anything into it.
Anyway, I wrote a little more about that here: https://lambdaland.org/posts/2025-08-04_artifical_inanity/
Intent matters a ton when reading or writing something.
I use tree-sitter for developing a custom programming language. You still need an extra step to get from the CST to an AST, but the overall DevEx is much quicker than hand-rolling the parser.
Could you elaborate on what this involves? I'm also looking at using tree-sitter as a parser for a new language, possibly to support multiple syntaxes. I'm thinking of converting its parse trees to a common schema; that would be the target language.
I guess I don't quite get the difference between a concrete and abstract syntax tree. Is it just that the former includes information that's irrelevant to the semantics of the language, like whitespace?
An example: in a CST, `1 + 0x1` might be represented differently than `1 + 1`, but they could be equivalent in the AST. The same could be true for syntax sugar: `let [x,y] = arr;` and `let x = arr[0]; let y = arr[1];` could be the same after AST normalization.
You can see why having just the AST might not be enough for syntax highlighting.
As a side project I've been working on a simple programming language, where I use tree-sitter for the CST, but first normalize it to an AST before I do semantic analysis such as verifying references.
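A hedged sketch of that normalization step (types and names invented): the CST keeps the text as written, and lowering erases surface details like numeric radix.

```rust
enum Cst<'a> {
    IntLiteral { text: &'a str }, // "1" and "0x1" stay distinct here
    Add(Box<Cst<'a>>, Box<Cst<'a>>),
}

enum Ast {
    Int(i64), // just the value: the radix is gone
    Add(Box<Ast>, Box<Ast>),
}

fn lower(cst: &Cst) -> Ast {
    match cst {
        Cst::IntLiteral { text } => {
            let value = if let Some(hex) = text.strip_prefix("0x") {
                i64::from_str_radix(hex, 16).unwrap()
            } else {
                text.parse().unwrap()
            };
            Ast::Int(value)
        }
        Cst::Add(lhs, rhs) => Ast::Add(Box::new(lower(lhs)), Box::new(lower(rhs))),
    }
}

fn main() {
    // `1 + 0x1` lowers to the same AST as `1 + 1`.
    let cst = Cst::Add(
        Box::new(Cst::IntLiteral { text: "1" }),
        Box::new(Cst::IntLiteral { text: "0x1" }),
    );
    let _ast = lower(&cst); // Add(Int(1), Int(1))
}
```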
An AST is just a CST minus range info, with the lexical info simplified/generalised (in most cases).
> pacman -Ssq tree-sitter
tree-sitter
tree-sitter-bash
tree-sitter-c
tree-sitter-cli
tree-sitter-javascript
tree-sitter-lua
tree-sitter-markdown
tree-sitter-python
tree-sitter-query
tree-sitter-rust
tree-sitter-vim
tree-sitter-vimdoc
Where's R, YAML, Golang, and several others?
For others, this is a suboptimal answer, but I've played with generating grammars with the latest LLMs and they are surprisingly good at doing this (in a few shots).
That being said, if you’re doing something more serious than syntax highlighting or shipping it in a product, you’ll want to spend more time on it.
awk bash bibtex blueprint c c-sharp clojure cmake commonlisp cpp css dart dockerfile elixir glsl gleam go gomod heex html janet java javascript json julia kotlin latex lua magik make markdown nix nu org perl proto python r ruby rust scala sql surface toml tsx typescript typst verilog vhdl vue wast wat wgsl yaml
[1]: https://github.com/tree-sitter-grammars/tree-sitter-yaml
Since it comes from `tree-sitter-grammars/tree-sitter-yaml`, it should be quick to integrate the official repo.
But I really want the semantic highlighting from a language server, such as highlighting constants or macros specially, and Emacs (among other editors) makes it trivial to blend the strengths of both together.
The original post conflates some concepts worth separating. LSP and language servers operate at an IDE/Editor feature level, whereas tree-sitter is a particular technological choice for parsing text and producing a syntax tree. They serve different purposes but can work together.
What does a language server actually do? LSP defines features like the following (a sketch of one request on the wire follows the list):
1. Finding references (`textDocument/references`)
2. Go-to-definition (`textDocument/definition`)
3. Syntax highlighting (`textDocument/semanticTokens/...`)
4. Code completion, diagnostics, refactorings
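For a concrete feel, here's roughly what one go-to-definition exchange looks like on the wire (the shapes follow the LSP spec; built with the serde_json crate purely for illustration):

```rust
use serde_json::json;

fn main() {
    // Client -> server: "where is the symbol at line 10, column 4 defined?"
    let request = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "textDocument/definition",
        "params": {
            "textDocument": { "uri": "file:///src/main.rs" },
            "position": { "line": 10, "character": 4 } // zero-based
        }
    });

    // Server -> client: a Location (the spec also allows a list of them).
    let response = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "uri": "file:///src/lib.rs",
            "range": {
                "start": { "line": 3, "character": 7 },
                "end": { "line": 3, "character": 12 }
            }
        }
    });

    println!("{request}\n{response}");
}
```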
A language server for language X could use tree-sitter internally to implement these features. But it can use whatever technologies it wants. LSP is protocol-level; tree-sitter is an implementation detail.
The article talks about tree-sitter avoiding the problem of "maintaining two parsers" (one for the compiler, one for the editor). This misunderstands how production compiler/IDE systems actually work. In Roslyn, we don't have two parsers. We have one parser that powers both the compiler and the IDE. Same code, same behavior, same error recovery. This works better, not worse. You want your IDE to understand code exactly the way the compiler does, not approximately.
The article highlights tree-sitter being "error-tolerant" and "incremental" as key advantages. These are real concerns. If you're starting from scratch with no existing language infrastructure, tree-sitter's error tolerance is valuable. But this isn't unique to tree-sitter. Production compiler parsers are already extremely error-tolerant because they have to be. People are typing invalid code 99% of the time in an editor.
Roslyn was designed from day one for IDE scenarios. We do incremental parsing (https://github.com/dotnet/roslyn/blob/main/docs/compilers/De...), but more importantly, we do incremental semantic analysis. When you change a file, we recompute semantic information for just the parts that changed, not the entire project. Tree-sitter gives you incremental parsing. That's good. But if you want rich IDE features, you need incremental semantics too.
The article suggests language servers are inherently "heavy" while tree-sitter is "lightweight." This isn't quite right. An LSP server is as heavy or light as you make it. If all you need is parsing and there's no existing language library, fine, use tree-sitter and build a minimal LSP server on top. But if you want to do more, LSP is designed for that. The protocol supports everything from basic syntax highlighting to complex refactorings.
Now, as to syntax highlighting. Despite the name, it isn't just syntactic in modern IDEs. In C#, we call this "classification," and it's powered by the full semantic model. A reference to a symbol is classified by what that symbol is: local, parameter, field, property, class, struct, type parameter, method, etc. Symbol attributes affect presentation. Static members are italicized, unused variables are faded, overwritten values are underlined. We classify based on runtime behavior: `async` methods, `const` fields, extension methods.
This requires deep semantic understanding. Binding symbols, resolving types, understanding scope and lifetime. Tree-sitter gives you a parse tree. That's it. It's excellent at what it does, but it's fundamentally a syntactic tool.
Example: in C#, `var x = GetValue();` is syntactically ambiguous. Is `var` a keyword or a type name? Only semantic analysis can tell you definitively. Tree-sitter would have to guess or mark it generically.
Tree-sitter is definitely a great technology though. Want to add basic syntax highlighting for a new language to your editor? Tree-sitter makes this trivial. Need structural editing or code folding? Perfect use case. However, for rich IDE experiences, the kind where clicking on a variable highlights all its uses, or where hovering shows documentation, or where renaming a method updates all call sites across your codebase, you need semantic analysis. That's a fundamentally different problem than parsing.
Tree-sitter definitely lowers the barrier to supporting new languages in editors. But it's not a replacement for language servers or semantic analysis engines. They're complementary technologies. For languages with mature compilers and semantic engines (C#, TypeScript, Rust, etc.), using the real compiler infrastructure for IDE features makes sense. For cases with simpler tooling needs, tree-sitter is an excellent foundation to build on.
- I got a hint of language servers and tree-sitter thanks to this wonderfully written post, but it is still missing a lot of details, like what the protocol actually looks like, or what a standard language server or tree-sitter implementation looks like
- what are the other building blocks?
Let me be blunt: any article posted here should provide more information, or more in-depth analysis, than Wikipedia. Since I'm not a compiler person, I might be too harsh in suggesting that the article does not provide more in-depth analysis than the Wikipedia article (it is definitely shorter) -- I apologize if that's the case.
Most of the time they rely on their own hand-rolled recursive descent parser. Writing these isn't necessarily hard, but it is time-consuming and tedious, especially if you're parsing a large language like C++.
Parser generators like yacc, bison, chumsky, ANTLR, etc. can generate a parser for you given a grammar. However, these parsers usually don't have the best performance or error-reporting characteristics because they are auto-generated. A recursive descent parser is usually faster, and because you can customize syntax error messages, it is easier for a language server to use one to provide good diagnostics.
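For a feel of the hand-rolled style: one function per grammar rule, each consuming tokens and calling the rules below it (toy grammar, no real error recovery):

```rust
// Grammar: expr := term ('+' term)* ; term := NUMBER
struct Parser<'a> {
    tokens: &'a [&'a str],
    pos: usize,
}

impl<'a> Parser<'a> {
    fn expr(&mut self) -> i64 {
        let mut value = self.term();
        while self.peek() == Some("+") {
            self.pos += 1; // consume '+'
            value += self.term();
        }
        value
    }

    fn term(&mut self) -> i64 {
        let tok = self.peek().expect("expected a number"); // a real parser
        self.pos += 1;                                     // would recover here
        tok.parse().expect("expected a number")
    }

    fn peek(&self) -> Option<&'a str> {
        self.tokens.get(self.pos).copied()
    }
}

fn main() {
    let mut p = Parser { tokens: &["1", "+", "2", "+", "3"], pos: 0 };
    println!("{}", p.expr()); // 6
}
```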
Tree-sitter is also a parser generator, but it has better error-tolerance properties (not quite as good as hand-written, but generally better than prior implementations). Additionally, it's incremental, meaning it can reuse prior parses to more efficiently create a new AST. Most hand-written parsers are not incremental but are usually still fast enough to be usable in language servers.
To use tree-sitter you define a grammar in JavaScript, which tree-sitter uses to generate a parser in C that you can then use as a dynamic or static library in your application.
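A minimal, made-up grammar to show the shape of it (this is the one place the DSL forces JavaScript; `tree-sitter generate` turns it into C):

```javascript
// grammar.js -- hypothetical toy language: lines like `x = 42;`
module.exports = grammar({
  name: "toy",
  rules: {
    // The first rule is the root of every parse tree.
    source_file: $ => repeat($.assignment),
    assignment: $ => seq($.identifier, "=", $.number, ";"),
    identifier: $ => /[a-zA-Z_]\w*/,
    number: $ => /\d+/,
  },
});
```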
In your case, this is useful because you can compile down those C libraries to WASM which can run right in the browser and will usually be faster than pure JS (the one catch is serialization overhead between JS and WASM). The problem is that you still need to implement all the language analysis features on top.
A good overview of different parsing techniques: https://tratt.net/laurie/blog/2020/which_parsing_approach.ht...
LSP spec: https://microsoft.github.io/language-server-protocol/overvie...
VSCode's guide on LSP features: https://code.visualstudio.com/api/language-extensions/progra...
Tutorial on creating hand-rolled error-tolerant (but NOT incremental) recursive descent parsers: https://matklad.github.io/2023/05/21/resilient-ll-parsing-tu...
Tree-sitter book: https://tree-sitter.github.io/tree-sitter/
Any tips for keeping the grammar sizes under control? I'm distributing a CLI tool that needs to support several languages, and I can see the grammars gradually bloating the binary size.
I could build some clever thing where language packs are opt-in and distributed as WASM, maybe. But that could be complex.