I can't seem to convert a Double to a UInt32 when compiling for 64-bit; the same code works perfectly when compiled for 32-bit. I know Doubles aren't the best place to keep integer values because of the loss of precision, but many built-in functions such as Val() return them, so you're forced to work with them sometimes, at least internally. The simplest example of the problem is this:
Dim d As Double = 4294967181
Dim i As UInt32 = d
MsgBox(Str(i))
The maximum value a UInt32 can hold is 4,294,967,295, so the number assigned above fits and should be displayed properly. In a 32-bit build it is displayed properly. When the same code is compiled for 64-bit I get 2147483648 instead.
Is this how it's supposed to work? If so, can someone explain why it does that and how I can do a proper conversion? Or is this a bug, as I suspect?
As a workaround, it seems that if I convert the Double to a UInt64 first and then the UInt64 to a UInt32, I get the proper value. But this still doesn't seem right: it broke code in my app that was working properly while debugging and before I released a 64-bit version. I now have to go check a lot of my bitwise math for large numbers and bit-banging, as there are probably other things broken that I haven't noticed yet.
Yeah, that is messed up. I just confirmed that OS X 32-bit works fine,
but OS X 64-bit returns a very strange result:
it should return 11111111111111111111111110001101
but instead returned 11001101000000000000000000000000
Smaller numbers work fine, but above a certain point the results get very strange. It seems that numbers that don't use the 32nd bit of the UInt32 work fine, but as soon as you roll over into that bit the numbers come out wrong. For example, this works:
Dim d As Double = &b01111111111111111111111111111111
Dim k As UInt32 = d
but this does not:
Dim d As Double = &b10000000000000000000000000000000
Dim k As UInt32 = d
The UInt64 code path looks quite different, so it would be interesting to know why the code generator treats the target as a signed integer here. Another workaround would be to cast to a 64-bit integer first.