fun(a,b): return a+b
and if the a + b is doable, it will be done. You don't need to specify the list of types for which this works. The difference between this and plain duck typing is that you can also specify interfaces (or traits) saying that a type is quackable, so fun(a <? implements Quackable>): a.quack()
is reusable. What is the difference between this and a simple interface implementation? It took me some time to find the narrowest possible example:

class <T has trait Number> Complex(a T, b T):
    Complex<T> operator+(Complex<T> other): return new Complex(this.a + other.a, this.b + other.b)
    Complex<T> operator-(Complex<T> other): return new Complex(this.a - other.a, this.b - other.b)
    Complex<T> operator*(Complex<T> other):
        return new Complex(this.a * other.a - this.b * other.b, this.a * other.b + this.b * other.a)
The generic makes this code reusable: if you can create a new type, say a vector, that supports +, -, and *, you get complex algebra over those vectors for free.

That's why C++ libraries with generic interfaces need to have their full implementations in the header, or else provide explicit instantiations for the known supported types. If you compile the library first and the project later, there's no way of knowing at library-compile-time which instantiations of the generic will later be needed; the types might not even exist yet at that point.
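Roughly the same idea in C++20, as a sketch (the Number concept and member names are my own stand-ins for the pseudocode above): the whole template body has to sit in the header, and the commented-out explicit instantiation at the end is what you'd add to bake a known type into a precompiled library.

    // complex.hpp -- the whole implementation must be visible to users,
    // because the compiler can only instantiate it once T is known.
    #include <concepts>

    // Stand-in for "has trait Number": anything you can +, -, * with itself.
    template <typename T>
    concept Number = requires(T x, T y) {
        { x + y } -> std::convertible_to<T>;
        { x - y } -> std::convertible_to<T>;
        { x * y } -> std::convertible_to<T>;
    };

    template <Number T>
    struct Complex {
        T a, b;
        Complex operator+(const Complex& o) const { return {a + o.a, b + o.b}; }
        Complex operator-(const Complex& o) const { return {a - o.a, b - o.b}; }
        Complex operator*(const Complex& o) const {
            return {a * o.a - b * o.b, a * o.b + b * o.a};
        }
    };

    // If you ship a precompiled library instead, you have to list the
    // supported combinations up front, e.g.:
    // template struct Complex<double>;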
One of my pet hates is fellow developers who call an implementation 'generic' when, if you peek inside, there are just if-statements that (at best) cover the already-known input types.
Usually I point to Generics as an example of what "generic" actually means: You peek inside List<T>, it doesn't know about your type, it does the right thing anyway.
Your list_contains function should be able to just do an == comparison regardless of whether the elements are ints or strings.
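A minimal C++ sketch of what I mean (list_contains is just taken from the sentence above; the details are mine):

    #include <string>
    #include <vector>

    // One definition; it never inspects what T is, it only needs operator==.
    template <typename T>
    bool list_contains(const std::vector<T>& xs, const T& x) {
        for (const auto& e : xs) {
            if (e == x) return true;
        }
        return false;
    }

    // Works unchanged for ints, strings, or anything else with ==:
    //   list_contains(std::vector<int>{1, 2, 3}, 2);
    //   list_contains(std::vector<std::string>{"a", "b"}, std::string{"b"});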
This is effectively no different than adding a parameter to one of your non-"generic" functions and just swapping behaviour based on that?
Why not just use a weakly typed language and add type checking where needed?
It seems strange to put in so much effort for type checking, only to throw it overboard by implementing something that ignores type.
They are not.
They apply to a well-defined range of input-type combinations, instead of a single specific one, and produce a well-defined output type for each such combination.
> It seems strange to put in so much effort for type checking, only to throw it overboard by implementing something that ignores type.
Generics do not ignore types. That's kind of the whole point.
Generic functions do not ignore types. An `inThere::a list -> a -> bool` very much enforces that the element searched for has the same type as the elements of the list passed in. With a sufficiently powerful type system, this allows for statically checked code that's not much less flexible than dynamically checked code.
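A rough C++ analogue of that signature, just to make the point concrete (the names are mine):

    #include <string>
    #include <vector>

    template <typename T>
    bool inThere(const std::vector<T>& xs, const T& x);

    // inThere(std::vector<int>{1, 2, 3}, std::string{"3"});
    //   ^ rejected at compile time: T is deduced as int from the list and as
    //     std::string from the element, so the call simply does not type-check.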
Observing current developments in Python, but also in Rust, gives me the impression that dynamically typed languages were more a reaction to the very weak type systems that languages like C or Java provided back in the day. A lot of Python code has very concrete or rather simple generic types, for example: Protocols, Unions, first-class functions and type parameters handle a lot. The tools to express these types better existed in e.g. Caml or Haskell, but weren't mainstream yet.
This is different from ‘parametric polymorphism’, which is what people call generics.
Given that, this isn’t that different from C generics (https://en.cppreference.com/w/c/language/generic.html), and people call that generics, too.
Having said that, even ignoring that this requires all implementations to be in a single source file (yes, you could probably use m4 or #include or whatever it's called in this language), I do not find this syntax elegant.
Also, one thing that it doesn’t seem to support is generating compiler errors when calling a function with a type that isn’t supported.
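For comparison, this is what a constrained C++20 template gives you (hypothetical example, my own names): the constraint is part of the interface, so an unsupported argument type is an error at the call site.

    #include <concepts>
    #include <string>

    template <typename T>
        requires std::integral<T> || std::floating_point<T>
    T twice(T x) { return x + x; }

    // twice(3);              // fine
    // twice(std::string{});  // compile error: constraints not satisfied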
Its implementation has the same issues as generics in Zig, which are also not parametric.
It's ok to explore other points in the design space, but the language designer should be aware of what they're doing and the tradeoffs involved. In the case of ad hoc (non-parametric) polymorphism, there is a lot of work on type classes to draw on.
IMO, https://en.wikipedia.org/wiki/Generic_programming is more appropriate. It talks of “data types to-be-specified-later”, something that this and C's generics lack. That's one of the reasons that I wrote “I _somewhat_ disagree”.
Also, I don’t see how one would define “act in the same way”. A function that fully acts in the same way regardless of the types of its arguments cannot do much with its arguments.
For example, a function “/” doesn’t act in exactly the same way on floats and integers in many languages (5.0/2.0 may return 2.5 while 5/2 returns 2; if you say it should return 2.5 instead, you have a function from T×T to T for floats but a function from T×T to U for ints; why would you call that “act in the same way”?), and “+” may or may not wrap around depending on the actual type, etc.
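A small C++ sketch of that point, assuming the usual integer-vs-float division rules (halve is a made-up name):

    #include <iostream>

    // One generic definition, yet the observable behaviour depends on T:
    template <typename T>
    T halve(T x) { return x / T(2); }

    int main() {
        std::cout << halve(5.0) << "\n";  // prints 2.5
        std::cout << halve(5)   << "\n";  // prints 2 (integer division truncates)
    }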
“Generics” should mean that the compiler or interpreter will generate new code paths for a function or structure based on usage in the calling code.
If I call tragmorgify(int), tragmorgify(float), or tragmorgify(CustomNumberType), the expectation is that tragmorgify(T: IsANumber) tragmorgifies things that are number-like in the same way.
For a compiled language this usually means monomorphization, i.e. generating a function for each occurring tuple of argument types. For an interpreted language it usually means duck typing.
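A sketch in C++ terms, keeping the made-up tragmorgify/IsANumber names from above (the concept definition is my own):

    #include <concepts>

    // Hypothetical constraint standing in for "IsANumber".
    template <typename T>
    concept IsANumber = std::integral<T> || std::floating_point<T>;

    // One recipe, written once.
    template <IsANumber T>
    T tragmorgify(T x) { return x * x + T(1); }

    // tragmorgify(3)     -> the compiler emits tragmorgify<int>
    // tragmorgify(3.0f)  -> the compiler emits tragmorgify<float>
    // Each distinct argument type gets its own monomorphized instantiation.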
This is not a bad language feature per se, but it's also not what engineers want from generics. I would never write code like your example. The pattern of explicit type-checking is itself a well-known code smell.
There is no good use case for adding 2.0 to a float input but 1 to an integer input. That makes your function, which should advertise a contract about what it does, a liar ;)
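To be explicit about the pattern I mean, here is a hypothetical sketch of that kind of "generic" (C++ if constexpr used purely for illustration):

    #include <type_traits>

    // The anti-pattern: one name, but the body secretly branches on the
    // type and applies a different rule to each.
    template <typename T>
    T bump(T x) {
        if constexpr (std::is_floating_point_v<T>) {
            return x + T(2);   // floats get +2.0
        } else {
            return x + T(1);   // everything else gets +1
        }
    }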