[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3531.txt
Add a namespacing macro and you have a whole generics system, unlike the one in TFA.
So, it might add more value to have the C std add an `#include "file.c" name1=val1 name2=val2` preprocessor syntax, where name1 and name2 would go on a "stack" and be popped after processing the file. This would let you build "generic modules" of types/functions/whatever with manual instantiation, which kind of fits with C (manual management of memory, bounds checking, etc.), but with preprocessor-assisted "macro scoping" for nested generics. Perhaps an idea to play with in your slimcc fork?
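For reference, the pattern that syntax would sugar already works today with #define/#include/#undef. A minimal sketch of a manually instantiated "generic module" (the file and macro names vec_impl.h, V_TYPE, V_NAME are made up for illustration):

    /* vec_impl.h: a generic module, parameterized by V_TYPE/V_NAME,
       which the includer must define beforehand and undef afterward --
       exactly the push/pop the proposed syntax would automate. */
    #define GLUE2(a, b) a##_##b
    #define GLUE(a, b)  GLUE2(a, b)

    typedef struct { V_TYPE *val; int size, cap; } GLUE(V_NAME, t);

    static void GLUE(V_NAME, push)(GLUE(V_NAME, t) *self, V_TYPE x)
    {
        if (self->size == self->cap) {
            self->cap = self->cap ? 2 * self->cap : 1;
            self->val = realloc(self->val, sizeof *self->val * self->cap);
        }
        self->val[self->size++] = x;
    }
    #undef GLUE
    #undef GLUE2

    /* user.c: manual instantiation */
    #include <stdlib.h>

    #define V_TYPE int
    #define V_NAME ivec
    #include "vec_impl.h"
    #undef V_TYPE
    #undef V_NAME

    int main(void)
    {
        ivec_t v = {0};
        ivec_push(&v, 42);
        return v.val[0] == 42 ? 0 : 1;
    }

With the proposed extension, the instantiation block would collapse to something like `#include "vec_impl.h" V_TYPE=int V_NAME=ivec`.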
I guess ctags-type tools would need updating for the new possible definition locations. Mostly, someone needs to decide on a separator syntax for "in-line" cases like `name1(..)=expansion1 name2(..)=expansion2`. Compilers have accepted `cc -Dname(..)=expansion` or equivalents since the dawn of the language, but they get the idea of argument separation for free from the OS: argv, shell quoting, the Windows command-line APIs, and so on.
Anyway, it might make sense to first get experience with a slimcc/tinycc/gcc/clang cpp++ extension. ;-) Personally, these days I mostly just use Nim as a better C.
Note that the newer versions are basically a "C++ without classes" kind of thing.
Second to that I'd say the appeal is just watching something you've known for a long time grow slowly and steadily.
Templates are the main thing C++ has over C. It's trivial to circumvent or escape the things you don't like about C++, like new and delete (a personal obstacle), and write nice, modern C++ with templates.
C's _Generic can help, but ultimately, in my opinion, the need for templating is a good reason to go from C to C++.
    #include <stdlib.h>
    #include <stdio.h>

    /* A generic growable array: every vec(T) is a distinct struct type. */
    #define vec(T) struct { T* val; int size; int cap; }

    /* Append x, doubling the capacity as needed; do/while(0) makes the
       macro statement-safe, and realloc failure is left unchecked for
       brevity. */
    #define vec_push(self, x) do { \
        if((self).size == (self).cap) { \
            (self).cap = (self).cap == 0 ? 1 : 2 * (self).cap; \
            (self).val = realloc((self).val, sizeof(*(self).val) * (self).cap); \
        } \
        (self).val[(self).size++] = (x); \
    } while(0)

    /* Visit each element through the pointer `at`; C23's auto infers
       its type from the element type. */
    #define vec_for(self, at, ...) \
        for(int i = 0; i < (self).size; i++) { \
            auto at = &(self).val[i]; \
            __VA_ARGS__ \
        }
    typedef vec(char) string;

    /* Append a C string, first dropping the previous NUL terminator so
       repeated pushes concatenate. */
    void string_push(string* self, char* chars)
    {
        if(self->size > 0)
        {
            self->size -= 1; /* back up over the old '\0' */
        }
        while(*chars)
        {
            vec_push(*self, *chars++);
        }
        vec_push(*self, '\0');
    }
    int main()
    {
        vec(int) a = {};
        vec_push(a, 1);
        vec_push(a, 2);
        vec_push(a, 3);
        vec_for(a, at, {
            printf("%d\n", *at);
        });

        vec(double) b = {};
        vec_push(b, 1.0);
        vec_push(b, 2.0);
        vec_push(b, 3.0);
        vec_for(b, at, {
            printf("%f\n", *at);
        });

        string c = {};
        string_push(&c, "this is a test");
        string_push(&c, " ");
        string_push(&c, "for c23");
        printf("%s\n", c.val);
    }
Never mix unsigned and signed operands. Prefer signed. If you need to convert an operand, see (2).
https://nullprogram.com/blog/2024/05/24/

You cannot even check the sign of a signed size to detect an overflow, because signed overflow is undefined!
The remaining argument, from what I can tell, is that comparisons between signed and unsigned sizes are bug-prone. There is, however, a dedicated warning that resolves this instantly.
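For anyone who hasn't hit it, a minimal sketch of the bug class that warning (GCC and Clang call it -Wsign-compare) catches, assuming the usual platforms where size_t is at least as wide as int:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *s = "hi";
        /* -1 converts to size_t (becoming SIZE_MAX), the condition is
           false, and the loop silently never runs. */
        for (int i = -1; i < strlen(s); i++)
            printf("%c\n", s[i]);
        return 0;
    }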
It makes sense that you should be able to assign a pointer to a size. If the size is signed, this cannot be done, since half the range is lost to the sign bit.
Given this, I can't understand the justification. I'm currently using unsigned sizes. If you have anything contradicting this, please comment :^)
IMO, this is a better approach than using signed types for indexing, but AFAIK, it's not included in GCC/glibc or gnulib. It's an optional extension and you're supposed to define `__STDC_WANT_LIB_EXT1__` to use it.
I don't know if any compiler actually supports it. It came from Microsoft and was submitted for standardization, but ISO made some changes from Microsoft's own implementation.
https://www.open-std.org/JTC1/SC22/WG14/www/docs/n1173.pdf#p...
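If you want to experiment, the opt-in dance looks like this (an implementation that actually ships Annex K advertises it by defining __STDC_LIB_EXT1__):

    /* Request the Annex K declarations before any includes. */
    #define __STDC_WANT_LIB_EXT1__ 1
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
    #ifdef __STDC_LIB_EXT1__
        char dst[16];
        if (strcpy_s(dst, sizeof dst, "hello") == 0)
            puts(dst);
    #else
        puts("Annex K not available here");
    #endif
        return 0;
    }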
Unsigned types in C have modular arithmetic; I think they should be used only when that is what you need, or maybe when you absolutely need the full range.
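Concretely, the distinction the standard draws (unsigned wraps, signed overflow is UB):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned u = UINT_MAX;
        u += 1;                 /* well-defined: wraps to 0, mod 2^N */
        printf("%u\n", u);      /* prints 0 */

        /* int s = INT_MAX; s += 1;   <- undefined behavior, by contrast */
        return 0;
    }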
    int somearray[10];
    new_ptr = somearray + signed_value;

or

    element = somearray[signed_value];
This seems almost criminal to how my brain does logic/C code.
The only thing I could think of is this:

    somearray += 11; somearray[-1] // index set to somearray[10] ??

If I'd see my CPU execute that, I'd want it to please stop. I'd want my compiler to shout at me like a little child, and be mean until I do better.
-Wall -Wextra -Wpedantic <-- that should flag, I think, any of these weird practices.
As you stated though, I'd be keen to learn why I am wrong!
Arrays aren't the best example, since they are inherently about linear, scalar offsets, but you might see a negative offset from the start of a (decayed) array in the implementation of an allocator with clobber canaries before and after the data.
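Something like this toy sketch (names made up; alignment of the user pointer is glossed over):

    #include <stdlib.h>

    #define CANARY 0xDEADBEEFu

    /* Plant a canary just before the block handed to the caller. */
    static void *canary_alloc(size_t n)
    {
        unsigned *p = malloc(sizeof *p + n);
        if (!p) return NULL;
        p[0] = CANARY;
        return p + 1;
    }

    /* Look backwards from the user pointer: a negative index, but one
       that stays inside the same allocation, so it's well-defined. */
    static int canary_intact(const void *user)
    {
        const unsigned *p = user;
        return p[-1] == CANARY;
    }

    int main(void)
    {
        char *buf = canary_alloc(8);
        return buf && canary_intact(buf) ? 0 : 1;
    }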
1. You're certain your added value is negative.
2. You're checking for underflow after the computation, which you shouldn't do.
The article was interesting.
Why?
By the definition of ptrdiff_t, ISTM the size of any object allocated by malloc cannot be out of bounds of ptrdiff_t, so I'm not sure how you can have a useful size_t that uses the sign bit?
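The usual chain of reasoning, for what it's worth:

    #include <stddef.h>

    /* Subtracting pointers into the same object yields ptrdiff_t; if
       an object could exceed PTRDIFF_MAX bytes, end - begin would
       overflow, which is undefined. So allocators cap objects at
       PTRDIFF_MAX, and the top bit of a size_t object size goes
       unused. */
    ptrdiff_t length(char *begin, char *end)
    {
        return end - begin;
    }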
"struct Goose { float weight; }" and "struct Beaver { float weight; }" would remain incompatible, as would "struct { float weight; }" and "struct { float weight; }" (since they're declared without tags.)