LLVM
GCC is adding cool new languages too!
They just recently added COBOL and Modula-2. Algol 68 is coming in GCC 16.
cool new languages
COBOL
I guess I should have put a /s but I thought it was pretty obvious. The 68 in Algol 68 is 1968. COBOL is from 1959. Modula-2 is from 1977.
That was exactly my point: all the hot new languages are built with LLVM, while the “new” language options on GCC are languages from the '50s, '60s, and '70s.
I am not even exaggerating. That is just what the projects look like right now.
It's new to gcc!
BEGIN BEGIN Wow, Modula 2! END; I remember Modula 2. END.
Honestly, now that I can see the "business productivity" through-line from COBOL, to BASIC, and most recently, Python, I should probably just learn COBOL.
Great optimisation, awwwful compile times.
New kid on the block, roc, has it right by splitting application code from "platform"/framework code, precompiling and optimising the platform, then using their fast surgical linker to sew the app code to the platform code.
Platforms are things like a CLI program or a web server, that kind of thing. Platforms provide an interface of domain-specific IO primitives and handle all IO and memory management, and they also specify what functions the app code must supply to complete the program.
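Very roughly, the shape of that split is something like this sketch (written in Go rather than Roc, with made-up names, just to illustrate the idea, not Roc's actual API):

```go
// Sketch of the platform/app split (Go, not Roc; all names made up).
// The platform owns IO and drives the program; the app only supplies
// the functions the platform asks for.
package main

import "fmt"

// The platform's side of the contract: domain-specific IO primitives
// it hands to the app.
type PlatformIO struct {
	ReadLine func() string
	Print    func(string)
}

// The app's side of the contract: what it must supply so the platform
// can complete the program.
type App struct {
	Respond func(input string) string
}

// The "platform": think of it as a precompiled, pre-optimised CLI harness.
// It owns all IO and simply drives the app code it was linked against.
func runCLIPlatform(app App) {
	io := PlatformIO{
		ReadLine: func() string { var s string; fmt.Scanln(&s); return s },
		Print:    func(s string) { fmt.Println(s) },
	}
	io.Print("say something:")
	io.Print(app.Respond(io.ReadLine()))
}

// The "app": pure logic, no IO of its own.
func main() {
	runCLIPlatform(App{
		Respond: func(input string) string { return "you said: " + input },
	})
}
```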
It's pretty cool, and they're getting efficiency in the ballpark of systems programming languages like C and Rust, but with none of the footguns of manual memory management, no garbage collection pauses, but yet also no evil stepparent style borrow checker to be beaten by. They pay a lot of attention to preventing cache misses and branch prediction failures, which is how they get away with reference counting and still being fast.
A note of caution: I might sound like I know about it, but I know almost nothing.
That sounds pretty great. My impression is that relatively little code actually runs that often.
but with none of the footguns of manual memory management, no garbage collection pauses, but yet also no evil stepparent style borrow checker to be beaten by.
That part sounds implausible, though. What kind of memory management are they doing?
Yeah, I think Go's compiler is so fast partially because it doesn't use LLVM
TinyGo isn’t that much slower and it uses LLVM
That would work!
Isn't Zig working on their own backend?
Also, pretty excited about the cranelift project.
Yes, and it’s now default for x86_64
I'll make my own LLVM, with blackjack and hookers.
That's like... its purpose. Compilers always have a frontend and a backend. Even when the compiler is built entirely from scratch (like Java's or Go's), it is split between frontend and backend; that's just how they are made.
So it makes sense to invest in just a few highly advanced backends (LLVM, GCC, MSVC) and then just build frontends for those. Most projects choose LLVM because, unlike the others, it was purpose-built to be a common ground, but it's not a rule. For example, there is an in-development Rust frontend for GCC.
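To make the split concrete, here's a toy sketch (in Go, with an invented three-line "IR", nothing to do with LLVM's real APIs): the frontend lowers the source to IR, and any backend that understands that IR can do codegen, which is exactly why one good backend can serve many frontends.

```go
// Toy frontend/backend split (Go; invented IR, not LLVM's).
package main

import "fmt"

// Shared IR: the common ground both halves agree on.
type Instr struct {
	Op  string // "push" or "add"
	Arg int
}

// Frontend: turns "source" (here, just a list of numbers to sum) into IR.
func frontend(nums []int) []Instr {
	var ir []Instr
	for i, n := range nums {
		ir = append(ir, Instr{Op: "push", Arg: n})
		if i > 0 {
			ir = append(ir, Instr{Op: "add"})
		}
	}
	return ir
}

// One possible backend: emits fake "assembly" from the IR. A different
// backend could target another machine without touching the frontend.
func backend(ir []Instr) []string {
	var asm []string
	for _, ins := range ir {
		switch ins.Op {
		case "push":
			asm = append(asm, fmt.Sprintf("PUSH %d", ins.Arg))
		case "add":
			asm = append(asm, "ADD")
		}
	}
	return asm
}

func main() {
	for _, line := range backend(frontend([]int{1, 2, 3})) {
		fmt.Println(line)
	}
}
```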
Can confirm, even the little training compiler we made at Uni for a subset of Java (Javali) had a backend and frontend.
I can't imagine trying to spit out machine code while parsing the input without an intermediate AST stage. It was complicated enough with the proper split.
I can imagine.
I have built single pass compilers that do everything in one shot without an AST. You are not going to get great error messages or optimization though.
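For anyone curious what "single pass, no AST" means in practice: the parser emits instructions the moment it recognises something, so there is never a tree to walk. A toy sketch (Go, made-up stack-machine instruction set):

```go
// Toy single-pass compiler: parses "1+2+3" and emits stack-machine
// instructions directly while parsing, with no AST in between.
package main

import (
	"fmt"
	"strings"
)

type compiler struct {
	src string
	pos int
	out []string // emitted "instructions"
}

func (c *compiler) emit(ins string) { c.out = append(c.out, ins) }

// number := digit+  -- emit a PUSH as soon as the literal is parsed.
func (c *compiler) number() {
	start := c.pos
	for c.pos < len(c.src) && c.src[c.pos] >= '0' && c.src[c.pos] <= '9' {
		c.pos++
	}
	c.emit("PUSH " + c.src[start:c.pos])
}

// expr := number ('+' number)*  -- emit ADD right after each operand.
func (c *compiler) expr() {
	c.number()
	for c.pos < len(c.src) && c.src[c.pos] == '+' {
		c.pos++
		c.number()
		c.emit("ADD")
	}
}

func main() {
	c := &compiler{src: "1+2+3"}
	c.expr()
	fmt.Println(strings.Join(c.out, "\n"))
}
```

Notice there's nowhere to hang good error recovery or to reorder code for optimization, which is exactly the trade-off mentioned above.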