But I am nonetheless puzzled. Isn't this a case of constant folding, which I would expect to be done at the largest precision available (following the principle of least surprise)? Suppose I wrote:
d = 100 + 110
and I did that because the numbers (100 and 110) each had some specific meaning. Would that provoke overflow because each fits in an int8 but the sum does not? And what if I had defined those numbers as constants: how wide is a constant?
I like Rolf’s answer, but does that involve two additions at runtime?
I looked for, but couldn’t find, information about casting and about what the definition of a number literal is. Something like this page would be useful:
You need to make sure to define ‘d’ prior to its use. Otherwise you are leaving it up to the compiler to define it, and you don’t know what that will look like (which is what is happening now: it appears the base unit of the compiler is 32 bits, which means you should explicitly cast any number that is not 32 bits wide if you want to guarantee the answer).