SBCL is a Common Lisp compiler written in Common Lisp that can also target RISC-V.
Can it run on the MCU mentioned in the post? Somehow I doubt that.

Can SBCL even target MCU boards like the pico?

Second sentence from TFA:

> You can run the compiler on the RISC-V core of a Raspberry Pi Pico 2 (or another RP2350-based board)

What article are you referring to? (Specifically, the parent comment asked about SBCL, Steel Bank Common Lisp, running on the Pico 2, not about uLisp.)
No more comments before coffee for me.
  • rwmj · 2 months ago
As I understand it, this compiles down to assembly instructions. What then assembles it to machine code? The reason I'm asking is I wanted to find out if the compiler/assembler supports compressed instructions (which are supported by the RP RISC-V core).

Edit: Yes it does support the compressed extension, although the page calls them "compact" instructions.

In the article there is a link to an earlier post with a RISC-V assembler (I think written by the same author), which generates the actual machine code.
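
In case it helps, here's a rough idea of what handling a compressed instruction involves at the encoding level. This is my own Common Lisp sketch, not code from the uLisp assembler:

  ;; Sketch only (not the uLisp assembler): encode C.ADDI rd, imm (CI format).
  ;; The 6-bit immediate is split across bit 12 and bits 6:2 of the 16-bit word.
  (defun c-addi (rd imm)
    (let ((u (ldb (byte 6 0) imm)))         ; 6-bit two's-complement immediate
      (logior (ash #b000 13)                ; funct3 = 000
              (ash (ldb (byte 1 5) u) 12)   ; imm[5]   -> bit 12
              (ash rd 7)                    ; rd/rs1   -> bits 11:7
              (ash (ldb (byte 5 0) u) 2)    ; imm[4:0] -> bits 6:2
              #b01)))                       ; op = quadrant C1

  ;; (c-addi 2 -16) => #x1141, the familiar compressed "addi sp, sp, -16".
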
Thanks - corrected "compact" to "compressed".
There is something about RISC-V that really inspires lots of hackers, and it's not really a technical thing AFAICT.
It is cool as heck that truly open hardware might actually win in our lifetimes. (ARM was an interesting start, but too much licensing).
For me, it's the fact that it is a truly open standard, with no licensing entanglements. It has the potential to be a durable ecosystem, worth investing in.
> There is something about RISC-V that really inspires lots of hackers

"Not Arm" :)

Simplicity: it's a modern MOS 6502. Base RISC-V has even fewer instructions than the 6502.
Also it has fewer registers (32 vs. 256 in the zero page) and fewer addressing modes.
But the 6502's "registers" are much smaller and you can do much less with them. You can't really sensibly compare the two approaches so superficially.
  • snvzz · 2 months ago
Can they really be called registers, when they're bytes in DRAM?

It really is just a convenient short addressing mode.

The 6502 has actual registers, A/X/Y and the specialized S/P/PC.

> Can they really be called registers, when they're bytes in DRAM?

Yes they can, because that's just an implementation detail.

Registers are nothing more than a conveniently short address for frequently-accessed working storage. Sometimes they are in their own address space (which in modern use usually doesn't have indirect/computed addressing, but can), but sometimes they are in the same address space as RAM e.g. in AVR the first 32 bytes of RAM are the registers (which might or might not be implemented in the same technology). Some early / small AVRs didn't have any other RAM. The same is true of PIC and 8051. And then there is the TMS9900 where the only on-chip registers were the PC and a pointer to where in RAM the working registers were stored.

It seems entirely appropriate to refer to the 6502's Zero Page as "registers" given that 1) it barely has any others, and 2) the base+offset addressing mode that is fundamental to modern software exists only using two zero page bytes as the base. You would otherwise be reduced to using self-modifying code for any access via a pointer.

If the 6502 ISA had not become obsolete for other reasons -- the desire for more than 8 bit ALUs and 16 bit addresses -- it is entirely likely that as CPUs became faster than RAM and more transistors were able to be put in the CPU then future 6502s would have brought Zero Page on-chip.

> It seems entirely appropriate to refer to the 6502's Zero Page as "registers" given that 1) it barely has any others,

Enough to write any program, mind.

> and 2) the base+offset addressing mode that is fundamental to modern software exists only using two zero page bytes as the base.

Correction: base indirect + offset.

It's not an implementation detail if it has to go through the system bus.
Going through the system bus IS an implementation detail.

You could build a 6502-compatible CPU with an (extra [1]) 256 byte on-chip register file, and treat, for example, `0x1265` as simply a 16 bit instruction `ADC A,R18`, or `0x0791` as an x86-ish `MOV [R7+Y],A`.

All binary programs would run just as they do on the 1975 6502, just a lot faster.

[1] in the original 6502, the registers aren't in a register file in the modern sense, they're implemented with flip flops and all are accessible simultaneously (with wired-OR on to a bus in some cases if the decode ROM selected several at the same time)
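
To make the reinterpretation concrete, here's a toy Common Lisp sketch (mine, covering just the two opcodes from the example above) that reads the same two 6502 bytes as register operations:

  ;; Toy sketch: zero-page opcodes from the example above, read as register ops.
  (defun decode-as-register-op (opcode operand)
    (case opcode
      (#x65 (format nil "ADC A, R~d" operand))      ; ADC $zp     -> ADC A,Rn
      (#x91 (format nil "MOV [R~d+Y], A" operand))  ; STA ($zp),Y -> MOV [Rn+Y],A
      (t "not in this toy table")))

  ;; (decode-as-register-op #x65 #x12) => "ADC A, R18"
  ;; (decode-as-register-op #x91 #x07) => "MOV [R7+Y], A"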

Or off the CPU regardless, mind.
  • snvzz · 2 months ago
I'd call it a scratch page or something of the sort.

The main issue in my mind is that an actual set of registers inside the chip does already exist.

Yet it's the same number of register bytes (counting only the 6502's zero page)... 32x8 = 256 bytes...
31x4 = 124 bytes :-) The Zero register doesn't have to physically exist, and we're talking about 32 bit CPUs e.g. Pi Pico 2 here, right.

Ok, 128 bytes if you add in the PC.

Except for RV32E -- as seen in the very popular $0.10 CH32V003 -- which has 15x4 = 60 bytes of GPRs, plus the PC.

Plus usually a few CSRs on practical CPUs, though Zicsr is an extension so you don't have to have it.

The same could be said of the ARM Cortex-M0+.
The Cortex-M0's Thumb-1 is a really unpleasant instruction set compared to ARM, Thumb-2, RISC-V, or ARM64.
Though no worse than 16 or 32 bit x86 (without FPU), and probably better because the lower 8 registers are general-purpose.

Also you can get something useful from the "spare" five registers r8-r12 as they support MOV, ADD and CMP with any other register, plus BX. Sadly you're on your own with PUSH/POP except for PUSH LR / POP PC.

Thumb-1 (or ARMv6-M) is fairly similar to the RISC-V C extension. It's overall a bit more powerful because it has more opcodes available and because RVC dedicates some opcodes to floating point. RVC only lets you do MV and ADD on all 32 (or 16 in RV32E) registers, not CMP (not that RISC-V has CMP anyway). Plus, RVC lets you load/store any register into the stack frame. Thumb-1 r8-r14 need to be copied to/from r0-r7 to load or store them.

But on the other hand, RVC is never present without the full-size 4 byte instructions, even on the $0.10 CH32V003, making that a bit more pleasant than the similar price Cortex M0 Puya PY32F002.

My initial experience with Thumb-1 was like stepping on a series of rakes. Can't use ADD? Why not? Oh, it turns out you have to use ADDS. Wait, why am I getting an error when I try to use ADDS? Turns out that inside an ITTE (etc.) block, you can't use ADDS; you have to use ADD. And the various other irregular restrictions on what you can express are similarly unpredictable. Maybe my gripe isn't really with Thumb-1 but with GAS, but even when you learn the restrictions, it still takes extra mental effort to program under them. I did have some similar experiences with 8086 code (it took me a certain amount of trial and error to learn which registers I could use as base registers and index registers, as I recall) but never 80386 code, where all of its registers are just as general-purpose as on Thumb-1, unless you're looking for sizecoding hacks to get your demo down under 64 bytes or whatever.

I agree that RVC is similar in theory, but being able to mix 4-byte instructions into your RVC code largely eliminates the stepping-on-rakes problem, even on Graham Smecher's redoubtable Minimax which Jecel Assumpção mentioned. I still prefer ARM assembly over RISC-V, but both definitely have their merits.

If you have ITTE (etc.) then you're not on Thumb-1 (e.g. ARM7TDMI) or ARMv6-M (Cortex M0+), you're on Thumb-2.

> but being able to mix 4-byte instructions into your RVC code largely eliminates the stepping-on-rakes problem

Absolutely, which is why I pointed out that no one (at least no one commercial) has ever implemented RVC alone, not even on the 10c CH32V003.

Oh, you're right, of course. I misremembered that rake. I stepped on some others I can't remember now, though.

I wouldn't be surprised to see commercial implementations of Minimax. It seems like it would have a much better cost/benefit ratio than SeRV for some applications.

  • jecel · 2 months ago
It is better to say RVC is almost never present without the full-size 4 byte instructions, since we have one counterexample:

https://github.com/gsmecher/minimax

This is an experimental rather than practical design that only directly implements the compressed instructions in hardware and then implements the normal RV32I instructions in "microcode" written using the compressed instructions.

Minimax is a super cool design! I think it's not really a counterexample, because it does implement the uncompressed instructions, just more slowly.
The LUT counts do look competitive, until you realise that this doesn't include the cost of the microcode.

Probably fine on FPGA where there's lots of almost free BRAM, but on an ASIC where you'd need to use SRAM or mask ROM, or if you used LUTRAM, it would look very different.

Plus, the speed penalty for the microcoded instructions is huge. Perhaps not as huge as SeRV :-)

That sounds reasonable, yeah. Presumably you'd write your inner loops purely in RVC instructions; in the situations where you'd use SeRV, you wouldn't be using it for your computational bottlenecks, which you'd build special-purpose hardware for, but just to sort of orchestrate a sequence of steps. But Minimax seems like it could really reduce the amount of stuff you had to design special-purpose hardware for.
For me, part of it is also the beauty of the ISA. I think it is just really well thought out with its extensions and namespacing for custom ISAs.
It's a bunch of things; many of the reasons are actually technical. It's very simple to compile to RISC-V instructions.
The offsets for J{AL} and Bcc are a little tricky, though only half a dozen lines of code to sort out.
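
For flavour, here's roughly what makes the J/JAL offset tricky: the 21-bit offset is scattered across the instruction word. A standalone sketch, not the article's assembler code:

  ;; Sketch only: encode JAL rd, offset (byte offset, must be even).
  (defun jal-encode (rd offset)
    (let ((imm (ldb (byte 21 0) offset)))    ; low 21 bits, two's complement
      (logior (ash (ldb (byte 1 20) imm) 31) ; imm[20]    -> bit 31
              (ash (ldb (byte 10 1) imm) 21) ; imm[10:1]  -> bits 30:21
              (ash (ldb (byte 1 11) imm) 20) ; imm[11]    -> bit 20
              (ash (ldb (byte 8 12) imm) 12) ; imm[19:12] -> bits 19:12
              (ash rd 7)                     ; rd         -> bits 11:7
              #x6f)))                        ; JAL opcode

  ;; (jal-encode 1 8) => #x008000EF, i.e. "jal ra, 8"
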
I got a RP2350 "Feather"[1] from Adafruit[2]. Amazing little thing, with lots of stuff built-in. The lipoly charge port is super useful and Just Works, and the STEMMA QT connector means no soldering or breadboards for simple projects. My main half-baked idea for this is to control a CPU usage monitor[3], but I also want to make some better lights for my Lego SHIELD Helicarrier, and maybe add some movement too.

And now you're telling me I can use Lisp on this? It would be interesting to see how streamlined the development process is for each one of uLisp, CircuitPython, MicroPython, and Arduino/C.

[1] https://www.adafruit.com/product/6000

[2] https://www.adafruit.com/new <-- one of my favourite places to window-shop :)

[3] Yeah I'm rambling but my end goal is to drive an LED matrix that ends up looking like btop's CPU meter. Why not just show btop on a separate small screen? That is a very good question to which I have no answer.

Rust also runs on picos and Esp32s, if that’s your jam.
ulisp is an incredible achievement and has brought me a lot of joy.

There is something very fun about writing lisp for an Arduino nano, and trying to golf your intentions into ~300 characters :)

That's neat but I don't know why you'd minimize characters rather than ROM size for a microcontroller.
"very fun" ;)
I don't think it's yet complete enough to compile itself; though I haven't looked at the assembler code, I'm pretty sure it requires bitwise operations the compiler can't compile yet. Also, the compiler itself requires things like null, symbolp, eq, and atom, which it also doesn't implement yet. Without those I'm not sure that it's fair to describe its input language as Lisp, though it does support car and cdr.

But it's still super cool. A really great thing about Lisp for purposes like this is that you don't get hung up on syntax and parsing, which is the most salient part of writing a compiler but not the most important.

Things like atom and symbolp are functions in the runtime. A Lisp compiler only has to handle special forms and function calls. If we see the compiler source code using a special operator that it doesn't handle, either directly or via macro expansion, then we know it's not yet self-hosting.
Normally I would assume you were right, but the list of things supported in the compiler includes these items:

List functions: car, cdr

Arithmetic functions: +, -, *, /, mod, 1+, 1-

Arithmetic comparisons: =, <, <=, >, >=, /=

So I think the compiler may not be able to compile calls to arbitrary functions? Maybe I should read the code.

Handling specific functions is optional. If the compiler can compile a function call, it can compile a call to the + or car functions, which then have to be present in the run-time.

Functions like these are obvious targets for special recognition and inlining. Arithmetic code won't be as fast as it could be if every + has to be a call to a function, but it will work.

A compiled Lisp implementation can be bootstrapped to the point where the definition of car in the library looks like:

  (defun car (x) (car x))
And similarly for some other functions. Then there are only two places in the system that know how to actually extract the car field: the compiler source code, and the corresponding compiler executable needed for bootstrapping.

In that case if you remove the car handling from the compiler then the system's self-hosting and bootstrapping ability breaks.

Of course! That's what I did last time I wrote a Lisp compiler, but this one seems a little more limited. The page explains:

> Finally, comp-funcall compiles code for function calls to the built-in functions, or a recursive call to the main function: (...)

(Emphasis mine.)

And none of his example functions calls any function other than the builtin ones (which, indeed, his compiler does inline) and itself. But it isn't obvious where the limitation comes from; the subroutine call is just a jal instruction to an assembler label, which you would think would work just as well to call another function as to make a recursive call.

Maybe the limitation is that the assembler only assembles a single function at a time, and he doesn't have a link-editing stage or a symbol table implicitly or explicitly shared between assembler calls. In that case it would be fairly simple to extend the compiler to support more general calls, as you reasonably but apparently incorrectly assumed this one already does.
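
For concreteness, here's the shape of function the page's subset does handle (defun, if, arithmetic, comparisons, and a recursive self-call); whether replacing the recursive call with a call to some other user-defined function also works is exactly the question above:

  ;; Stays within the documented subset: if, <, +, - and a call to itself.
  (defun fib (n)
    (if (< n 3)
        1
        (+ (fib (- n 1)) (fib (- n 2)))))

  ;; Compiled with the entry point from the posted source: (compiler 'fib)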

Looking a bit at the assembler http://www.ulisp.com/list?31OE it looks like it only supports jumps to previously defined labels? In $jal and offset I don't see anything that resembles adding a relocation to a list of relocations for a label so it can be backpatched later. But I also don't see how it gets the numerical value for a label that it subtracts *pc* from in offset. In the compiler itself http://www.ulisp.com/list?4Y4Q it seems to be consing up lists of assembly instructions that eventually get evaled, which seems like a kind of janky way to invoke your assembler but whatever, but I don't see where the binding of label names to addresses happens. I can't find anything resembling a symbol table.

I think I'd make a few other changes, though. I'd add closure support and some kind of type tagging; right now it depends on knowing the types at compile time, so it's kind of more like a Forth or C compiler with Lisp syntax. And I'd indirect the calls through a runtime-mutable symbol table so you could redefine a function without having to rewrite all the calls to the old function in existing code.
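
By "type tagging" I mean something like the usual low-bit scheme. A minimal sketch of one common approach, not anything the article implements:

  ;; One common scheme: low bit 0 = fixnum, low bit 1 = pointer (heap objects
  ;; are word-aligned, so the bit is free). Fixnums just shift left by one.
  (defun tag-fixnum (n)      (ash n 1))
  (defun tagged-fixnum-p (v) (evenp v))
  (defun untag-fixnum (v)    (ash v -1))
  (defun tag-pointer (addr)  (logior addr 1))
  (defun untag-pointer (v)   (logand v -2))   ; clear the tag bit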

That's what I did last time I wrote a Lisp compiler, anyway; maybe it's not a good tradeoff on today's hardware anymore, since people presumably still only interactively load new definitions a few times a minute at most, but CPUs have gone from a MIPS to a hundred BIPS. So making all your function calls much slower by frustrating the CPU's branch predictor with a PLT in order to speed up relinking after an edit might no longer be a good tradeoff, even if it is what glibc does—glibc doesn't have FASLs.

How are you doing it these days?

The assembler is two-pass and the labels are simply local variables in the defcode form. They are assigned the value of the program counter in the first pass, and the assembler instructions are evaluated in the second pass. I got the idea from the assembler in the Acorn Atom, if anyone remembers that.
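
As a generic, self-contained illustration of the two-pass idea (not the actual uLisp code, where the labels really are local variables bound in the defcode form and the instruction forms are evaluated):

  ;; Symbols in LINES are labels; pass 1 records their addresses,
  ;; pass 2 replaces label operands with pc-relative offsets.
  (defun assemble-sketch (lines)
    (let ((labels '()) (pc 0))
      (dolist (line lines)                   ; pass 1
        (if (symbolp line)
            (push (cons line pc) labels)
            (incf pc 4)))
      (setf pc 0)
      (loop for line in lines                ; pass 2
            unless (symbolp line)
              collect (prog1
                          (mapcar (lambda (op)
                                    (let ((hit (and (symbolp op) (assoc op labels))))
                                      (if hit (- (cdr hit) pc) op)))
                                  line)
                        (incf pc 4)))))

  ;; (assemble-sketch '(($beqz a0 lab1) ($j lab2) lab1 ($li a0 0) lab2 ($ret)))
  ;; => (($BEQZ A0 8) ($J 8) ($LI A0 0) ($RET))
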
I see! So in fact this compiler can compile arbitrary calls from one function to another, even though the web page says it can't?
  • lispm · 2 months ago
> So I think the compiler may not be able to compile calls to arbitrary functions?

A Lisp compiler will by default compile a call to ANY function as a "jump subroutine" (or as a non-returning jump in the case of a tail call) machine code call. The function then has to be present at runtime (in the runtime) and the code will call it at runtime. Lisp code by default also calls a global function through its symbol's cell -> late binding.

"supported by the compiler" here probably means that the compiler can generate inline code for these functions in various cases. Thus if the call is to the function 1+ and it knows that the argument is an integer number (and, possibly, the result also has to be an integer number), then it will not use a subroutine call via the global function, but will inline the call to an integer addition machine instruction. Obviously this then defeats late binding.

If a Lisp (for a tiny machine) only has fixnum integers as its single numeric data type, then the thing is simple -> every 1+ will get an integer argument and thus one can inline it. Inlining only makes sense for tiny computers (which uLisp was developed for) if the inlined function code isn't too long and doesn't use too much of the precious memory for the machine code. Inlining OTOH will likely have a positive effect in reducing execution time and a negative effect by increasing compilation time.

In languages like Common Lisp or Scheme, which have several numeric types (fixnums, bignums, floats, ratios, complex, ...) this gets more complicated. Common Lisp compilers typically use optional type declarations and type inference to determine what inlined code to generate.
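
For example (standard Common Lisp, nothing uLisp-specific), a declaration like this is what lets a full CL compiler prove the fixnum case and open-code the addition:

  (defun inc (x)
    (declare (type fixnum x)
             (optimize (speed 3) (safety 0)))
    (the fixnum (1+ x)))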

What you say about Lisp compilers in general is correct; probably the uLisp author would learn some things from it.

With respect to uLisp itself, however, maybe you should read the code too.
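
(Concretely: for 1+ and 1-, comp-funcall in the posted source emits the code for the argument, leaving it in a0, followed by an inline add, roughly

  '(($addi 'a0 'a0 1)   ; inlined increment
    ($ret))             ; the $ret only in tail position

rather than a jal to a runtime function.)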

  • dang · 2 months ago
Edit: It's a pity we missed http://www.ulisp.com/show?4W2I. It was posted (https://news.ycombinator.com/item?id=41190553) but didn't get attention. We'd have put it in the SCP for sure (https://news.ycombinator.com/item?id=26998308) if we had seen it.

---

Related. Others?

uLisp: Lisp for Microcontrollers - https://news.ycombinator.com/item?id=41681705 - Sept 2024 (1 comment)

An ARM Assembler Written in Lisp - https://news.ycombinator.com/item?id=36646277 - July 2023 (31 comments)

uLisp wireless message display with a Pi Pico W - https://news.ycombinator.com/item?id=32722475 - Sept 2022 (6 comments)

Visible Lisp Computer: embedded real-time display of Lisp workspace using uLisp - https://news.ycombinator.com/item?id=30612770 - March 2022 (7 comments)

uLisp on the Raspberry Pi Pico - https://news.ycombinator.com/item?id=29970231 - Jan 2022 (14 comments)

uLisp - https://news.ycombinator.com/item?id=27036317 - May 2021 (87 comments)

Lisp Badge: A single-board computer that you can program in uLisp - https://news.ycombinator.com/item?id=23729970 - July 2020 (25 comments)

A new RISC-V version of uLisp - https://news.ycombinator.com/item?id=22640980 - March 2020 (35 comments)

uLisp – ARM Assembler in Lisp - https://news.ycombinator.com/item?id=22117241 - Jan 2020 (49 comments)

Ray tracing with uLisp - https://news.ycombinator.com/item?id=20565559 - July 2019 (10 comments)

uLisp: Lisp for microcontrollers - https://news.ycombinator.com/item?id=18882335 - Jan 2019 (16 comments)

GPS mapping application in uLisp - https://news.ycombinator.com/item?id=18466566 - Nov 2018 (4 comments)

Tiny Lisp Computer 2 - https://news.ycombinator.com/item?id=16347048 - Feb 2018 (2 comments)

uLisp – Lisp for the Arduino - https://news.ycombinator.com/item?id=11777662 - May 2016 (33 comments)

  • pjmlp · 2 months ago
I see Lisp compilers and upvote. :)

Great work.

I see people who upvote Lisp compilers and upvote.
I see people who see people who upvote people who upvote Lisp compilers and I upvote.

Thank goodness for tail call optimization.

  • snvzz · 2 months ago
I see Lisp and upvote.

I see RISC and upvote.

This is all very neat and all but could anyone please explain to me how this thing handles forward label resolution e.g. in the "if" construct? I think I know how it does that but I am very likely to be wrong.
The assembler is two-pass.
  • anthk · 2 months ago
I'd love a cheap $100 netbook with speed close to the specs of an Intel Atom N270 or similar.

Not everyone needs a 16GB machine to compile huge current C++ projects.

Oh, interesting.

I was going to say nothing in the RISC-V world is comparable to the N100 yet, at any price, but it looks like the N100 is anywhere from 10 to 20 times faster than the N270.

Geekbench 6 doesn't have any N270 results but it has a couple of Atom 230 results, and other sources indicate those two are very similar.

So, ok, on Geekbench a single core of the JH7110 comes in a bit faster than the Atom 230. And it's got four of them. You can get a Milk-V Mars CM with a 1.5 GHz JH7110 with 2 GB RAM for $34. You should be able to build a decent little netbook around that for $100. It's compatible with the Raspberry Pi 4 CM, so if there is a suitable netbook enclosure for the Pi 4 CM then it should work.

Otherwise I think the ClockworkPi DevTerm R-01 would be the closest that actually exist at the moment. The single 1.0 GHz C906 core is a bit slower and the price is unfortunately $239.

But the MuseBook for $299 is much much better.

  • snvzz · 2 months ago
>Not everyone needs a 16GB machine to compile huge current C++ projects.

These days 64GB is barely sufficient for that.

We really need to switch to mold linker.

  • anthk · 2 months ago
You can always reduce the number of jobs from make/ninja to 4...

But, yes, you are right; the issue lies in linking. LibTD requires 2GB as a minimum, but linking takes ages, at least with GCC; Clang requires far less RAM.

  • pjmlp · 2 months ago
Or Rust, Android,....
Vulkan shaders have entered the chat...
Here's a copy in case the website goes down.

; Lisp compiler to RISC-V Assembler - Version 1 - 11th October 2024 ; #| Language definition: Defining variables and functions: defun, setq Symbols: nil, t List functions: car, cdr Arithmetic functions: +, -, *, /, mod, 1+, 1- Arithmetic comparisons: =, <, <=, >, >=, /= Conditionals: if, and, or |# ; Compile a lisp function (defun compiler (name) (if (eq (car (eval name)) 'lambda) (eval (comp (cons 'defun (cons name (cdr (eval name)))))) (error "Not a Lisp function"))) ; The main compile routine - returns compiled code for x, prefixed by type :integer or :boolean ; Leaves result in a0 (defun comp (x &optional env tail) (cond ((null x) (type-code :boolean '(($li 'a0 0)))) ((eq x t) (type-code :boolean '(($li 'a0 1)))) ((symbolp x) (comp-symbol x env)) ((atom x) (type-code :integer (list (list '$li ''a0 x)))) (t (let ((fn (first x)) (args (rest x))) (case fn (defun (setq *label-num* 0) (setq env (mapcar #'(lambda (x y) (cons x y)) (second args) *locals*)) (comp-defun (first args) (second args) (cddr args) env)) (progn (comp-progn args env tail)) (if (comp-if (first args) (second args) (third args) env tail)) (setq (comp-setq args env tail)) (t (comp-funcall fn args env tail))))))) ; Utilities (defun push-regs (&rest regs) (let ((n -4)) (append (list (list '$addi ''sp ''sp (* -4 (length regs)))) (mapcar #'(lambda (reg) (list '$sw (list 'quote reg) (incf n 4) ''(sp))) regs)))) (defun pop-regs (&rest regs) (let ((n (* 4 (length regs)))) (append (mapcar #'(lambda (reg) (list '$lw (list 'quote reg) (decf n 4) ''(sp))) regs) (list (list '$addi ''sp ''sp (* 4 (length regs))))))) ; Like mapcon but not destructive (defun mappend (fn lst) (apply #'append (mapcar fn lst))) ; The type is prefixed onto the list of assembler code instructions (defun type-code (type code) (cons type code)) (defun code-type (type-code) (car type-code)) (defun code (type-code) (cdr type-code)) (defun checktype (fn type check) (unless (or (null type) (null check) (eq type check)) (error "Argument to '~a' must be ~a not ~a" fn check type))) ; Allocate registers - s0, s1, and a0 to a5 give compact instructions (defvar *params* '(a0 a1 a2 a3)) (defvar *locals* '(a4 a5 s0 s1 a6 a7 s2 s3 s4 s5 s6 s7 s8 s9 s10 s11)) (defvar used-params nil) ; Generate a label (defvar label-num 0) (defun gen-label () (read-from-string (format nil "lab~d" (incf *label-num*)))) ; Subfunctions (defun comp-symbol (x env) (let ((reg (cdr (assoc x env)))) (type-code nil (list (list '$mv ''a0 (list 'quote reg)))))) (defun comp-setq (args env tail) (let ((value (comp (second args) env tail)) (reg (cdr (assoc (first args) env)))) (type-code (code-type value) (append (code value) (list (list '$mv (list 'quote reg) ''a0)))))) (defun comp-defun (name args body env) (setq used-params (subseq *locals* 0 (length args))) (append (list 'defcode name args) (list name) (apply #'append (mapcar #'(lambda (x y) (list (list '$mv (list 'quote x) (list 'quote y)))) used-params params)) (code (comp-progn body env t)))) (defun comp-progn (exps env tail) (let* ((len (1- (length exps))) (nlast (subseq exps 0 len)) (last1 (nth len exps)) (start (mappend #'(lambda (x) (append (code (comp x env t)))) nlast)) (end (comp last1 env tail))) (type-code (code-type end) (append start (code end))))) (defun comp-if (pred then else env tail) (let ((lab1 (gen-label)) (lab2 (gen-label)) (test (comp pred env nil))) (checktype 'if (car test) :boolean) (type-code :integer (append (code test) (list (list '$beqz ''a0 lab1)) (code (comp then env t)) (list (list '$j lab2) lab1) (code (comp else env 
tail)) (list lab2) (when tail '(($ret))))))) (defun $sgt (rd rs1 rs2) ($slt rd rs2 rs1)) (defun comp-funcall (f args env tail) (let ((test (assoc f '((< . $slt) (> . $sgt)))) (teste (assoc f '((= . $seqz) (/= . $snez)))) (testn (assoc f '((>= . $slt) (<= . $sgt)))) (logical (assoc f '((and . $and) (or . $or)))) (arith1 (assoc f '((1+ . 1) (1- . -1)))) (arith (assoc f '((+ . $add) (- . $sub) (* . $mul) (/ . $div) (mod . $rem))))) (cond ((or test teste testn) (type-code :boolean (append (comp-args f args 2 :integer env) (pop-regs 'a1) (cond (test (list (list (cdr test) ''a0 ''a1 ''a0))) (teste (list '($sub 'a0 'a1 'a0) (list (cdr teste) ''a0 ''a0))) (testn (list (list (cdr testn) ''a0 ''a1 ''a0) '($xori 'a0 'a0 1)))) (when tail '(($ret)))))) (logical (type-code :boolean (append (comp-args f args 2 :boolean env) (pop-regs 'a1) (list (list (cdr logical) ''a0 ''a0 ''a1)) (when tail '(($ret)))))) (arith1 (type-code :integer (append (comp-args f args 1 :integer env) (list (list '$addi ''a0 ''a0 (cdr arith1))) (when tail '(($ret)))))) (arith (type-code :integer (append (comp-args f args 2 :integer env) (pop-regs 'a1) (list (list (cdr arith) ''a0 ''a1 ''a0)) (when tail '(($ret)))))) ((member f '(car cdr)) (type-code :integer (append (comp-args f args 1 :integer env) (if (eq f 'cdr) (list '($lw 'a0 4 '(a0))) (list '($lw 'a0 0 '(a0)) '($lw 'a0 4 '(a0)))) (when tail '(($ret)))))) (t ; function call (type-code :integer (append (comp-args f args nil :integer env) (when (> (length args) 1) (append (list (list '$mv (list 'quote (nth (1- (length args)) params)) ''a0)) (apply #'pop-regs (subseq params 0 (1- (length args)))))) (cond (tail (list (list '$j f))) (t (append (apply #'push-regs (cons 'ra (reverse used-params))) (list (list '$jal f)) (apply 'pop-regs (append used-params (list 'ra)))))))))))) (defun comp-args (fn args n type env) (unless (or (null n) (= (length args) n)) (error "Incorrect number of arguments to '~a'" fn)) (let ((n (length args))) (mappend #'(lambda (y) (let ((c (comp y env nil))) (decf n) (checktype fn type (code-type c)) (if (zerop n) (code c) (append (code c) (push-regs 'a0))))) args)))

Wait it fits in a comment? What sort of magic is this?
  • snvzz · 2 months ago
The beautiful simplicity of Lisp and RISC-V.

They similarly share further magic in their inevitability.

Sorry, I'd posted the link to the code in the wrong format - corrected now. Would you like to delete that copy?
It is easier to read but that defeats the point of this comment.
I think the idea was to delete and replace.
Gretchen! Stop trying to make Lisp happen. It’s not going to happen.