Srsly, there are three kinds of languages as far as I can tell: those that give their numerical types a specific size, those that let numbers be any size, and Go+C.
Cause if C does it, it has to be a good idea, right?
I mean... Correct me if this is wrong, I am sleepy. But I can't think of any other languages where your most common number type varies in size as the machine word size changes.
If you have programs that talk over a network, it's just begging for trouble.
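(To make the word-size point concrete, here's a small sketch in Rust, chosen just because it has both kinds of type sitting side by side: the pointer-sized usize tracks the target, while the explicitly-sized types never change.)

```rust
use std::mem::size_of;

fn main() {
    // usize follows the machine word: 8 bytes on a 64-bit target,
    // 4 on a 32-bit one, 2 on some embedded targets.
    println!("usize: {} bytes", size_of::<usize>());

    // The explicitly-sized types are the same everywhere.
    println!("i32:   {} bytes", size_of::<i32>()); // always 4
    println!("i64:   {} bytes", size_of::<i64>()); // always 8
}
```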
@icefox F* does something interesting: integers are “mathematical” integers (so, arbitrary-precision arithmetic), and you can define more precise types, like type u128 = n: nat { 0 <= n < 2¹²⁸ }, and so on.
The compiler then maps those to the best arithmetic implementation, IIRC.
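(For anyone who hasn't seen F*: the closest everyday analogue is roughly a bounded newtype whose constructor enforces the range — a hypothetical sketch in Rust below, with made-up names. The part that doesn't translate is exactly the interesting part: F* proves the bound statically and picks the representation for you, whereas here the refinement becomes a runtime check.)

```rust
// Hypothetical sketch: Rust has no refinement types, so the bound
// `0 <= n < 2^40` becomes a runtime check in a constructor instead of
// a fact the type system knows.
#[derive(Debug, Clone, Copy, PartialEq)]
struct U40(u64);

impl U40 {
    const MAX: u64 = (1 << 40) - 1;

    fn new(n: u64) -> Option<U40> {
        if n <= Self::MAX { Some(U40(n)) } else { None }
    }

    fn checked_add(self, other: U40) -> Option<U40> {
        // The sum of two 40-bit values always fits in a u64,
        // but it may fall outside the refined range.
        U40::new(self.0 + other.0)
    }
}
```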
@kellerfuchs actually that's quite nice cause it gets you transparent bignums, a la Lisps. Sorta wish that was more common, but alas, you need transparent memory allocation to make it actually work.
Hmmmm...
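(What "not transparent" looks like in Rust, assuming the third-party num-bigint crate for the arbitrary-precision side: you have to opt into the big type by name, it never kicks in on its own.)

```rust
use num_bigint::BigUint; // assumption: the num-bigint crate is available

fn main() {
    // A u64 factorial silently goes wrong once the product passes u64::MAX
    // (that happens at 21!).
    let mut small: u64 = 1;
    for i in 1..=25u64 {
        small = small.wrapping_mul(i);
    }

    // The bignum just keeps growing -- but only because we asked for it.
    let mut big = BigUint::from(1u32);
    for i in 1..=25u32 {
        big *= BigUint::from(i);
    }

    println!("u64 (wrapped): {small}");
    println!("BigUint:       {big}");
}
```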
@icefox Yeah. F* is in a bit of a strange situation there, because it's a fairly ordinary functional language (ignoring the ridiculously-powerful type system)... up until you discover there's a subset of the language (called Low*) where memory is explicit, and you interact with it through an (abstract) model.
Once you've proved correctness of what you're doing in that abstract model (no use-after-free, no out-of-bounds accesses, ...), the Low* compiler turns it into runtime-free C code.
@icefox So there is this weird tension between having transparent memory, and having opt-in very-explicit memory.
@kellerfuchs That sounds really cool. I should check it out.
@icefox in my imaginary "C, but better," there would be an explicit difference between "native"-sized ints and explicitly-sized ints. Both are useful in different cases.
@impiaaa yeah, native-sized ints are for array indices and pointers. Explicit ints are for everything else.
Rust does this and it is actually kinda wonderful.
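(For reference, a sketch of what that split looks like in practice in Rust: usize for indexing into memory, explicit widths for values that have a meaning of their own.)

```rust
// usize for indexing (tied to the machine), u32 for the value itself
// (tied to the data's meaning, identical on every platform).
fn checksum(bytes: &[u8]) -> u32 {
    let mut sum: u32 = 0;
    for i in 0..bytes.len() {   // len() and i are usize
        sum = sum.wrapping_add(bytes[i] as u32);
    }
    sum
}

fn main() {
    println!("{}", checksum(&[1, 2, 3, 4])); // 10, everywhere
}
```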
@icefox For me it would be more: explicit sizes for net/disk/other comms and anywhere the precision/size is important, and native everywhere else, where I don't care. As long as there's the option. I've considered making a few #defines so I could have this in C already, but library interoperability would make it confusing.
@impiaaa Yeah, except you always care, because numerical overflows/underflows are almost always unchecked and almost never what you want.
"Don't care" is i32, or f64, or i64 if you want. Because then you don't *have* to care. It will never act differently from one platform to another, and how it will act is always exactly how you want it to act. You will never get fucked over by it unexpectedly.
@impiaaa "This isn't going to be a problem" just means it's not a problem until someone else touches it and does something unexpected, or it means the size invariants are enforced by something other than the compiler (loading a known file format, for example).