FOSS driven by hackers is about increasing and maintaining support (old and new hardware, languages, etc.), while FOSS influenced by corporate needs is about standardizing around 'blessed' platforms, as is happening in Linux distributions with the adoption of Rust (architectures unsupported by Rust lose support).
Rust's target tier support policies aren't based on "corporate needs". They're based, primarily, on having people willing to do the work to support the target on an ongoing basis, and provide the logistics needed to make sure it works.
The main difference, I would say, is that many projects essentially provide the equivalent of Rust's "tier 3" ("the code is there, it might even work") without documenting it as such.
Both sets of constraints produce patterns and gaps. UX and documentation are commonly cited gaps for volunteer programming efforts, for example.
But I think it's true that corporate funding has its own gaps and other distinctive tendencies.
Algol 68 isn’t any more useful than obsolete Rust, however.
But we are carefully adding many GNU extensions to the language, as was explicitly allowed by the Revised Report:
[RR page 52]
"[...] a superlanguage of ALGOL 68 might be defined by additions to
the syntax, semantics or standard-prelude, so as to improve
efficiency or to permit the solution of problems not readily
amenable to ALGOL 68."
The resulting language, which we call GNU Algol 68, is a strict super-language of Algol 68. You can find the extensions currently implemented by GCC listed at https://algol68-lang.org/
It was an AWK-like task, but I decided up front it was too much trouble to do in AWK, as I needed to build a graph of data structures from the input data.
In part the program had an AWK-like pattern matching and processing section, which wasn't too awkward. I found having to use REFs more trouble than dealing with pointers, in part due to the forms of auto-dereferencing the language uses; but that was expected.
The real problem though was that I ended up needing something like a map / hash-table, and I concluded it was too much trouble to write from scratch.
So in the end I switched the program to be written in Go.
That then suggests a few things to me:
- it should have an extension library (prelude) offering some form of hash table.
- it would be useful to add syntax for explicit pointers (PTR keyword) which are not automatically dereferenced when used.
- maybe have either something like the Go (or Zig) style syntax for selecting a member of a pointed-to struct (a.b), or maybe a Zig-like explicit dereference (ptr.*).
Those latter pointer suggestions are because I found the "field OF struct" form too verbose, and especially confusing when juggling REFs which may or may not get auto-dereferenced.

However I think there's the retro-computing, and other hobby niches, that align with your hacker view. And certainly there's a bunch of corp enthusiasm for standardizing shiny things.
A number of years ago I worked on a POWER9 GPU cluster. This was quite painful - Python had started moving to wheels, and most projects had started building them automatically in CI pipelines, but pretty much none of them supported even ARM, let alone POWER9. So you were on your own for pretty much anything that wasn't NumPy. The reason for this, of course, is just that there was little demand, and as a result even fewer people willing to support it.
Most people don't care about our history, only what is shiny.
It is sad!
For people who aren't familiar with the language, pretty much all modern languages are descended from Algol 60 or Algol 68. C descends from Algol 60, so pretty much every popular modern language derives from Algol in some way [1].
Sure, its ideas spawned many of today's languages, but wasn't that because at the time nobody could afford to actually implement the spec? So we ended up with a ton of "Algol, buts" (like Algol, but it can actually be implemented and run on real hardware).
https://academic.oup.com/comjnl/article-abstract/22/2/114/42...
https://en.wikipedia.org/wiki/Burroughs_Large_Systems
There is a large systems emulator that runs in a browser. I did not get any Algol written, but I did have way too much fun going through the boot sequence.
The Burroughs large systems architecture didn't really protect you from yourself; system security/integrity depended on only letting code from vetted compilers run (only a compiler could make a code file, and only a privileged person could make a program a compiler). So the Algol 60 compiler made code that was safe, while ESPOL could make code that wasn't and could do things a normal user couldn't - you kept the ESPOL compiler somewhere safe, away from the students ....
(there was a well known hole in this whole thing involving mag tapes)
Structs come from PL/I, not from ALGOL 68, together with the postfix operators "." and "->". The term "pointer" also comes from PL/I, the corresponding term in ALGOL 68 was "reference". The prefix operator "*" is a mistake peculiar to C, acknowledged later by the C language designers, it should have been a postfix operator, like in Euler and Pascal.
Examples of things that come from ALGOL 68 are unions (unfortunately C unions lack most useful features of the ALGOL 68 unions, which are implicitly tagged unions) and the combined operation-assignment operators, e.g. "+=" or "*=".
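A rough C sketch of the difference (the type and field names are invented for illustration): an ALGOL 68 union carries its tag implicitly, while in C the programmer has to add and check one by hand:

  /* A hand-rolled tagged union in C; ALGOL 68 tracks the tag implicitly. */
  enum kind { K_INT, K_REAL };

  struct value {
      enum kind tag;                  /* maintained manually by the programmer */
      union { int i; double r; } u;   /* the bare C union carries no tag */
  };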
The Bourne shell scripting language, inherited by ksh, bash, zsh etc., also has many features taken from ALGOL 68.
The explicit "malloc" and "free" also come from PL/I. ALGOL 68 is normally implemented with a garbage collector.
The original structs were pretty bad too - field names had their own address space and could sort of be used with any pointer, which sort of allowed you to make tacky unions. We didn't get a real type system until the late 80s.
In ALGOL 68, assignment was written ":=", therefore the operation-with-assignment operators were like "+:=".
The initial syntax of C (e.g. "=+" instead of "+=") was indeed weird, and it was caused by the way the original parser in the first C compiler happened to be written and rewritten; the later form of the assignment operators was closer to their source in ALGOL 68.
I never liked the Pascal-style Pointer^, as the postfix starts to get visually cumbersome with more than one layer of Indirection^^. Especially when combined with other postfix Operators^^.AndMethods. Or even just Operator^ := Assignment.
I also think it's the natural inverse of the "address-of" prefix operator. So we have "take the address of this value" and "look through the address to retrieve the value."
You can apply the "*" operator as many times as you want, but applying "address-of" twice is meaningless.
Moreover, in complex expressions it is common to mix the indirection operator with array indexing and with structure member selection, and all these 3 postfix operators can appear an unlimited number of times in an expression.
Writing such addressing expressions in C is extremely cumbersome, because they require a great number of parenthesis levels, and it is still difficult to see the order in which the operators are applied.
With a postfix indirection operator no parentheses are needed and all addressing operators are executed in the order in which they are written.
So it is beyond reasonable doubt that a prefix "*" is a mistake.
The only reason why they have chosen "*" as prefix in C, which they later regretted, was because it seemed easier to define the expressions "*++p" and "*p++" to have the desired order of evaluation.
There is no other use case where a prefix "*" simplifies anything. For the postfix and prefix increment and decrement it would have been possible to find other ways to avoid parentheses, and even if they had been used with parentheses, that would still have been simpler than having to mix "*" with array indexing and with structure member selection. Moreover, the use of "++" and "--" with pointers was only a workaround for a dumb compiler, which could not determine by itself whether it should access an array using indices or pointers. Normally there should be no need to expose such an implementation detail in a high-level language; the compiler should choose the addressing modes that are optimal for the target CPU, not the programmer. On some CPUs, including the Intel/AMD CPUs, accessing arrays by incrementing pointers, like in old C programs, is usually worse than accessing the arrays through indices, because on such CPUs the loop counter can be reused as an index register, regardless of the order in which the array is accessed, including for accessing multiple arrays, avoiding the use of extra registers and reducing the number of executed instructions.
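For concreteness, a sketch of the two styles being contrasted (the function names are invented for the example): the index form leaves the choice of addressing mode to the compiler, while the pointer-bumping form spells the access pattern out by hand:

  #include <stddef.h>

  /* Index-based access: the compiler is free to pick the addressing mode. */
  long sum_by_index(const int *a, size_t n)
  {
      long s = 0;
      for (size_t i = 0; i < n; i++)
          s += a[i];
      return s;
  }

  /* Old-style pointer bumping: the traversal is hard-coded by the programmer. */
  long sum_by_pointer(const int *a, size_t n)
  {
      long s = 0;
      for (const int *end = a + n; a != end; a++)
          s += *a;
      return s;
  }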
With a postfix "*", the operator "->" would have been superfluous. It has been added to C only to avoid some of the most frequent cases when a prefix "*" leads to ugly syntax.
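To make that concrete, a small C sketch (the struct names are invented for illustration): with the prefix "*", member selection needs explicit grouping because "." binds tighter, and "->" is essentially shorthand for exactly that grouping:

  #include <stdio.h>

  struct point { int x, y; };
  struct shape { struct point *origin; };

  int main(void)
  {
      struct point p = { 3, 4 };
      struct shape s = { &p };
      struct shape *sp = &s;

      /* Prefix "*" forces the parentheses to be written by hand: */
      int a = (*(*sp).origin).x;

      /* "->" hides the same grouping: */
      int b = sp->origin->x;

      printf("%d %d\n", a, b);   /* prints: 3 3 */
      return 0;
  }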
This is due to the nature of lvalue and rvalue expressions. You can only get an object where * is meaningful twice if you've applied & meaningfully twice before.
  int a = 42;
  int *b = &a;
  int **c = &b;
I've applied & twice. I merely had to negotiate with the language instead of the parser to do so.

> and all these 3 postfix operators can appear an unlimited number of times in an expression.
In those cases the operator is immediately followed by a non-operator token. I cannot meaningfully write a[][1], or b..field.
> The only reason why they have chosen "*" as prefix in C, which they later regretted, was because it seemed easier to define the expressions "*++p" and "*p++" to have the desired order of evaluation.
It not only seems easier, it is easier. What you sacrifice is complications in defining function pointers. One is far more common than the other. I think they got it right.
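The complication in question is how declarations read once function pointers get involved; a small sketch (these declarations are invented for illustration):

  /* A pointer to a function taking an int and returning an int: */
  int (*handler)(int);

  /* A function returning a pointer to a function that takes an int and
     returns a pointer to an array of 5 ints -- it reads inside-out: */
  int (*(*get_table(void))(int))[5];

  /* The same thing unwound with typedefs, one step at a time: */
  typedef int row_t[5];
  typedef row_t *(*row_fn)(int);
  row_fn get_table2(void);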
> With a postfix "*", the operator "->" would have been superfluous.
Precisely the reason I dislike the Pascal**.Style. Go offers a better mechanism anyways. Just use "." and let the language work out what that means based on types.
I'm offering a subjective point of view. I don't like the way that looks or reads or mentally parses. I'm much happier to occasionally struggle with function pointers.
**c is valid but &&b makes no sense.
You could argue this is inconsistent or confusing. It is certainly useful though.
Incidentally, C99 lets you do something similar with compound literal syntax; this is a valid expression:
  &(int **){&b}

There has been no shortage of speculation, much of it needlessly elaborate. The reality, however, appears far simpler – the prefix pointer notation had already been present in B and its predecessor, BCPL[0]. It was not invented anew, merely borrowed – or, more accurately, inherited.
The common lore often attributes this syntactic feature to the influence of the PDP-11 ISA. That claim, whilst not entirely baseless, is at best a partial truth. The PDP-11 did support post-increment and pre-decrement indirect address manipulation – but notably lacked their symmetrical complements: pre-increment and post-decrement addressing modes[1]. In other words, it exhibited asymmetry – a gap that undermines the argument for direct PDP-11 ISA inheritance, i.e.
  MOV (Rn)+, Rm
  MOV @(Rn)+, Rm
  MOV -(Rn), Rm
  MOV @-(Rn), Rm

existed, but not

  MOV +(Rn), Rm
  MOV @+(Rn), Rm
  MOV (Rn)-, Rm
  MOV @(Rn)-, Rm
[0] https://www.thinkage.ca/gcos/expl/b/manu/manu.html#Section6_...

[1] The PDP-11 ISA allocates 3 bits for the addressing mode (register / Rn, indirect register / (Rn), auto post-increment indirect / (Rn)+, auto post-increment deferred / @(Rn)+, auto pre-decrement indirect / -(Rn), auto pre-decrement deferred / @-(Rn), index / idx(Rn) and index deferred / @idx(Rn)), and whether it was actually «let's choose these eight modes» or «we also wanted pre-increment and post-decrement but ran out of bits» is a matter of historical debate.
The prefix "*" WAS NOT inherited from BCPL, it was purely a B invention due to Ken Thompson.
In BCPL, "*" was actually a postfix operator that was used for array indexing. It was not the operator for indirection.
In CPL, the predecessor of BCPL, there was no indirection operator, because indirection through a pointer was implicit, based on the type of the variable. Instead of an indirection operator, there were different kinds of assignment operators, to enable the assignment of a value to the pointer, instead of assigning to the variable pointed by the pointer, which was the default meaning.
BCPL made many changes to the syntax of CPL, mainly out of the necessity of adapting the language to the impoverished character set available on American computers, which lacked many of the characters that had been available in Europe before IBM and a few other US vendors succeeded in replacing the local vendors, thus also imposing the EBCDIC and later the ASCII character sets.
Several of the changes done between BCPL and B had the same kind of reason, i.e. they were needed to transition the language from an older character set to the then new ASCII character set. For instance the use of braces as block delimiters was prompted by their addition into ASCII, as they were not available in the previous character set.
The link that you have provided to a manual of the B language is not useful for historical discussions, as the manual is for a modernized version of B, which contains some features back-ported from C.
There is a manual of the B language dated 1972-01-07, which predates the C language, and which can be found on the Web. Even that version might have already included some changes from the original B language of 1969.
The BCPL manual[0] explains the «monadic !» operator (section 2.11.3) as:
2.11.3 MONADIC !
The value of a monadic ! expression is the value of the storage cell whose address is the operand of the !. Thus @!E = !@E = E, (providing E is an expression of the class described in 2.11.2).
Examples.
!X := Y Stores the value of Y into the storage cell whose address is the value of X.
P := !P Stores the value of the cell whose address is the value of P, as the new value of P.
The array indexing used the «V ! idx» syntax (section 2.13, «Vector application»).

So, the ! was a prefix operator for pointers, and it was an infix operator for array indexing.
In Richards' account of BCPL's evolution, he noted that on early hardware the exclamation mark was not easily available, and, therefore, he used a composite *( (i.e. a digraph):
«The star in *( was chosen because it was available … and it seemed appropriate for subscription since it was used as the indirection operator in the FAP assembly language on CTSS. Later, when the exclamation mark became available, *( was replaced by !( and exclamation mark became both a dyadic and monadic indirection operator».
So, in all likelihood, !X := Y became *(X := Y, eventually becoming *X = Y (in B and C) whilst retaining the exact and original semantics of the !.

[0] https://rabbit.eng.miami.edu/info/bcpl_reference_manual.pdf
The use of the character "!" in BCPL is much later than the development of the B language from BCPL, in 1969.
The asterisk had 3 uses in BCPL, as the multiplication operator, as a marker for the opening bracket in array indexing, to compensate for the lack of different kinds of brackets for function evaluation and for array indexing, and as the escape character in character strings. For the last use the asterisk has been replaced by the backslash in C.
There was indeed a prefix indirection operator in BCPL, but it did not use any special character, because the available character set did not have any unused characters.
The BCPL parser was separate from the lexer, and it was possible for the end users to modify the lexer, in order to assign any locally available characters to the syntactic tokens.
So if a user had appropriate characters, they could have been assigned to indirection and address-of, but otherwise they were just written RV and LV, for right-hand-side value and left-hand-side value.
It is not known whether Ken Thompson had modified the BCPL lexer for his PDP computer, to use some special characters for operators like RV and LV.
In any case, he could not have used asterisk for indirection, because that would have conflicted with its other uses.
The use of asterisk for indirection in B became possible only after Ken Thompson had made many other changes and simplifications in comparison with BCPL, removing any parsing conflicts.
You are right that BCPL already had prefix operators for indirection and address-of, which was different from how this had been handled in CPL, but Martin Richards did not seem to have any reason for this choice and in BCPL this was a less obvious mistake, because it did not have structures.
On the other hand, Ken Thompson did want to have "*" as prefix, after introducing his increment and decrement operators, in order to need no parentheses for pre- and post-incrementation or decrementation of pointers, in the context where postfix operators were defined as having higher precedence than prefix.
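A small illustration in modern C (not B) of what that precedence choice buys, using a trivial array made up for the example:

  #include <stdio.h>

  int main(void)
  {
      int a[3] = { 10, 20, 30 };
      int *p = a;

      int x = *p++;   /* parses as *(p++): x = 10, then p moves to a[1] */
      int y = *++p;   /* parses as *(++p): p moves to a[2], so y = 30   */

      printf("%d %d\n", x, y);   /* prints: 10 30 */
      return 0;
  }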
Also in his case this was not yet an obvious mistake, because he had no structures and the programs written in B at that time did not use any complex data structures that would need correspondingly complex addressing expressions.
Only years later it became apparent that this was a bad choice, while the earlier choice of N. Wirth in Euler (January 1966; the first high-level language that handled pointers explicitly, with indirection and address-of operators) had been the right one. The high-level languages that had "references" before 1966 (the term "pointer" has been introduced in IBM PL/I, in July 1966), e.g. CPL and FORTRAN IV, handled them only implicitly.
Decades later, complex data structures became common, while the manual optimization of explicitly incrementing/decrementing pointers for addressing arrays became a way of writing inefficient programs, which prevents the compiler from correctly optimizing the array accesses for the target CPU.
So the choice of Ken Thompson can be justified in its context from 1969, but in hindsight it has definitely been a very bad choice.
However, to be entirely candid, I have submitted two references and a direct quotation throughout the discourse in support of the position – each of which has been summarily dismissed with an appeal to some ostensibly «older, truer origin», presented without citation, without substantiation, and, most tellingly, without the rigour such a claim demands.
It is important to recall that during the formative years of programming language development, there were no formal standards, no governing design committees. Each compiled copy of a language – often passed around on a tape and locally altered, sometimes severely – became its own dialect, occasionally diverging to the point of incompatibility with its progenitor.
Therefore, may I ask that you provide specific and credible sources – ones that not only support your historical assertion, but also clarify the particular lineage, or flavour, of the language in question? Intellectual honesty demands no less – and rhetorical flourish is no substitute for evidence.
On the other hand, I have provided all the information that is needed for anyone to find those documents through a Web search, in a few seconds.
I have the quoted documents, but it is not helpful to know from where they were downloaded a long time ago, because, unfortunately, the Internet URLs are not stable. So for links, I just have to search them again, like anyone else.
These documents can be found in many places.
For instance, searching "b language manual 1972" finds as the first link:
https://www.nokia.com/bell-labs/about/dennis-m-ritchie/kbman...
Searching "martin richards bcpl 1967" finds as the first link:
https://www.nokia.com/bell-labs/about/dennis-m-ritchie/bcpl....
Additional searching for CPL and BCPL language documents finds
https://archives.bodleian.ox.ac.uk/repositories/2/archival_o...
where there are a lot of early documents about the languages BCPL and CPL.
Searching for "Wirth Euler language 1966" finds the 2-part paper
https://dl.acm.org/doi/10.1145/365153.365162
https://dl.acm.org/doi/10.1145/365170.365202
There exists an earlier internal report about Euler from April 1965 at Stanford, before the publication of the language in CACM, where both indirection and address-of were prefix, like later in BCPL. However, before the publication in January 1966, indirection was changed to a postfix operator, a choice that was retained in the later languages of Wirth.
http://i.stanford.edu/pub/cstr/reports/cs/tr/65/20/CS-TR-65-...
The early IBM PL/I manuals are available at
http://bitsavers.org/pdf/ibm/360/pli/
Searching for "algol 68 reports" will find a lot of documents.
And so on, everything can be searched and found immediately.
some·object-pin-pin-pin-transform is not harder to parse, nor to interpret as a human, than (***some_object)->transform().
C also had a reserved keyword, «entry», which had never been used before eventually being relinquished from its keyword status when the standardisation of C began.
Aside from historical interest, why are you excited for it?
Admittedly, every language I really enjoy and get along with is one of those languages that produced little compared to the likes of C (APL, Tcl/Tk, Forth), and as a hobbyist I have no real stake in the game.
While I doubt I'll do any major development in it, I'll definitely have a play with it, just to revisit old memories and remind myself of its many innovations.
Ironically, among GCC frontends, Algol 68 and Modula-2 are getting more contributions than Go, which seems stuck at version 1.18, in a situation similar to gcj.
Either way, today is for Algol's celebration.
These were fairly popular for a while and supported advanced features like multiprocessing. The demand for exercising the full range of capabilities was kind of niche, but an "amateur" like myself could make a few bucks if you knew ALGOL.
I used to have the grey manual for the Burroughs variant - I'll have to poke around to see if it's in the attic somewhere.
  begin
    real procedure A(k, x1, x2, x3, x4, x5);
      value k; integer k;
      real x1, x2, x3, x4, x5;
    begin
      real procedure B;
      begin k := k - 1;
        B := A := A(k, B, x1, x2, x3, x4)
      end;
      if k ≤ 0 then A := x4 + x5 else B
    end;
    outreal(1, A(10, 1, -1, -1, 1, 0))
  end
The whole "return by assigning to the function name" is one of my least favorite features of Pascal, which I suppose got it from Algol 60. Where I'm confused though is: what is the initial value of B in the call to A(k, B, x1, x2, x3, x4)? I'm guessing the pass-by-name semantics are coming into play, but I still can't figure out how to untie this knot.

  #include <functional>
  #include <iostream>

  using cf = std::function<int()>;

  int A(int k, cf x1, cf x2, cf x3, cf x4, cf x5)
  {
      int Aval;
      cf B = [&]()
      {
          int Bval;
          --k;
          Bval = Aval = A(k, B, x1, x2, x3, x4);
          return Bval;
      };
      if (k <= 0) Aval = x4() + x5(); else B();
      return Aval;
  }

  cf I(int n) { return [=](){ return n; }; }

  int main()
  {
      for (int n=0; n<10; ++n)
          std::cout << A(n, I(1), I(-1), I(-1), I(1), I(0)) << ", ";
      std::cout << std::endl;
  }
So in the expression `A(k, B, x1, x2, x3, x4)`, the `B` there is not called, it simply refers to the local variable `B` (inside the function `A`), that was captured by the lambda (by reference): the same B variable that is currently being assigned.

That said, it really stands out to me that the two latest GCC languages are COBOL and Algol 68 while LLVM gets Swift and Zig.
And Rust and Julia come from LLVM as well of course.
GCC has some rules for adding frontends and keeping them in the main compiler, instead of in additional branches; e.g. GNU Pascal never got added.
So if a frontend isn't worth its maintenance effort, the GCC steering committee will eventually discuss it.
https://github.com/search?q=algol68&type=repositories
Without knowing what your interests/motivations and background are, it is hard to make good recommendations, but if you didn't know about Rosetta Code or GitHub, I figured I should start with that.
Godcc is a command-line interface for Compiler Explorer written in Algol 68.
Many have been digitized over the years across Bitsavers, ACM/SIGPLAN, IEEE, or university departments.
It also heavily influenced languages like ESPOL, NEWP, PL/I and its variants.