Double vs Single

Ok… back in the day… there were DOUBLE, SINGLE and INTEGER (16-bit),
and to a point it made sense, since memory and disk storage were slow and limited in size,
so a SINGLE took up less space at the cost of precision.

An Integer took up 2 bytes, a Single took up 4, and a Double took up 8 bytes.
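For illustration, those three sizes are easy to confirm with Python's `struct` module, using the standard-size format codes (`<h` = 16-bit integer, `<f` = 32-bit single, `<d` = 64-bit double):

```python
import struct

# Byte sizes of the classic BASIC numeric types.
# The '<' prefix forces standard (unpadded) sizes.
print(struct.calcsize('<h'))  # 2 — INTEGER
print(struct.calcsize('<f'))  # 4 — SINGLE
print(struct.calcsize('<d'))  # 8 — DOUBLE
```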

But I was wondering why it persists today (such as in XOJO)… Memory and disk storage are no longer an issue.

This came up when I was designing the datatype structure for my latest upgraded RetroBasic interpreter.
A modern DOUBLE doesn't seem to fit the specifications of a 1985 double, and I'd have to "totally" fake the way singles were dealt with.
As it is, I'm going with a 32-bit Integer instead of a 16-bit one, so it will be "more" than the GW-BASIC I am using as a reference.

Good point, what is your suggestion?

No suggestion… it was a question

I suspect it persists solely for directly calling (using Declare) specific OS APIs that still rely on Single for some legacy reason.

Sorry, I guess I only just understood the situation now, damn Google Translate. Haha.

When using OpenGLSurface and some older OpenGL code, all three types (Double, Single, and Integer (16-bit)) are still used. As Dave mentioned, it would be easier from a programming point of view to lower the number of numerical types. I am glad the many options exist, because some legacy code still needs these numerical types in 2016.

Edit: @Dave S your hat looks great!

Oh man… You shoulda seen that chart I made for the xDev magazine, with all the different number types that are available today…

Yeah: legacy files that save and load Singles in binary format…
They took up less disk space.
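That space/precision trade-off is easy to demonstrate with Python's `struct` module: the same value written as a Single is half the size on disk, but reading it back only preserves about 7 significant digits:

```python
import struct

pi = 3.141592653589793

single_bytes = struct.pack('<f', pi)   # 4 bytes on disk
double_bytes = struct.pack('<d', pi)   # 8 bytes on disk

print(len(single_bytes), len(double_bytes))        # 4 8
print(struct.unpack('<f', single_bytes)[0])        # ~3.1415927 (precision lost)
print(struct.unpack('<d', double_bytes)[0])        # 3.141592653589793
```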