I wrote the following routine to convert a UInteger into a string in any base from 2 (binary) to 16 (hexadecimal). Recently I discovered that the values are being converted to type Double during the calculation: input values greater than 2^53 are being rounded, giving incorrect results. Is there a way to force all intermediate calculations to be done as UIntegers?

Public Function Str_B(n As UInteger, b As UInteger, nDigits As Integer) As String
  ' Converts the input number into a string in the specified base b (2..16),
  ' left-padded with zeroes if nDigits > 0
  Dim digits(), temp As String
  Dim ntemp As UInteger
  If n = 0 Then
    If nDigits = 0 Then Return "0" ' avoid returning an empty string for n = 0
    Return Right("0000000000000000000000000000000000000000000000000000000000000000", nDigits)
  ElseIf b > 1 And b < 17 Then
    ntemp = n
    temp = ""
    digits = Array("0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "A", "B", "C", "D", "E", "F")
    While ntemp > 0
      temp = digits(ntemp Mod b) + temp
      ntemp = ntemp \ b ' This appears to be the problem line
    Wend
    If nDigits = 0 Then
      Return temp
    Else
      Return Right("0000000000000000000000000000000000000000000000000000000000000000" + temp, nDigits)
    End If
  Else
    Return "<Error>"
  End If
End Function
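
For what it's worth, here is roughly how I'm exercising it (an illustrative sketch; the hex literal is just an all-ones 64-bit test value):

Var big As UInteger = &hFFFFFFFFFFFFFFFF ' 2^64 - 1, all 64 bits set
Var s As String = Str_B(big, 2, 64)
' I expect sixty-four "1"s back, but the low-order bits come out as "0",
' as if the intermediate arithmetic were being done in a Double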

Thanks. I'll give that a try. However, I'm fairly certain that it must have been doing a conversion to Double for the intermediate calculation, because the function result was correct for values up to 2^53, which corresponds to the 53-bit mantissa of a Double. I'll have another look though; maybe I was wrong.
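
To illustrate what I mean about the mantissa (a quick sketch with literals around the 53-bit boundary):

Var exact As Double = 9007199254740992    ' 2^53, exactly representable as a Double
Var offByOne As Double = 9007199254740993 ' 2^53 + 1 cannot be represented; it rounds back to 2^53
' (exact = offByOne) evaluates to True, because a Double mantissa holds only 53 bits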

Xojo needs to have a compiler directive to force the variable type in a calculation. If we had such a pragma, it would solve a lot of these problems.

Yeah - unexpected numeric value conversions pop up in frustrated posts every now and again in this forum. The compiler engineers made some ill-advised choices at some point in the deep past and we're still paying for it for compatibility's sake; a compiler directive is an excellent idea, allowing modern code to work more efficiently and intuitively.

Hi Derk.
I'm aware of CType, but I don't see how it would be of any use in this case. All of my variables are already UIntegers, and I don't see how CType would prevent its result from being converted to whatever Xojo decides to convert it to when it applies the integer divide operation.

I presume you can use CType(myUint64_1 / myUint64_2, UInt64) to force such a division? @William_Yu may be able to explain what's best for your case. I don't know the actual internals of CType or of the division types; it all comes down to how the compiler handles this.
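
Roughly like this (untested sketch with placeholder names; I don't know whether it actually changes which division the compiler emits):

Var a As UInt64 = &hFFFFFFFFFFFFFFFF
Var b As UInt64 = 16
Var q As UInt64 = CType(a \ b, UInt64) ' cast the result of the integer divide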

It turned out that there was nothing wrong with my function. The problem was with how I was creating the value that I was sending to it. I was creating the UInteger value like this:

Var n As UInteger = Val("&b" + myInput)

where myInput was a string of ones and zeros. Clearly it was being converted to a Double as an intermediate step (Val returns a Double), which caused input values greater than 2^53 to be rounded to 53 bits, with the least significant bits set to zero. Changing the input to:

Var n As UInteger = UInteger.FromBinary(myInput)

sets n to the correct value, and the Str_B() function gives the correct output for numbers up to 64 bits.
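
Side by side, for anyone who finds this later (a sketch; bits is assumed to be a 64-character string of ones and zeros):

Var bits As String = "1111111111111111111111111111111111111111111111111111111111111111"
Var viaVal As UInteger = Val("&b" + bits)         ' goes through Double: bits above 53 are lost
Var direct As UInteger = UInteger.FromBinary(bits) ' parses directly, all 64 bits intact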

The thought of "what about his input values" crossed my mind, because I never was able to get your original code to fail, but I figured I just wasn't pushing it in the right way to trigger the issue.