Some tests:
157573952589676412928 //works and is bigger than the other number
147573952589676412928 //works
147573952589676012928 //doesn't work -> 147573952589676019712
Weird that putting a 0 there instead of a 4 gives a wrong double.
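The rounding isn't random. A Double has a 53-bit mantissa, and 147573952589676412928 is exactly 2^67, which is why that one survives. Integers just below 2^67 land between doubles that are 2^14 = 16384 apart, so they snap to the nearest representable value. A quick sketch in Python (used here only for illustration; Xojo's Double is the same IEEE 754 format):

```python
# 147573952589676412928 is exactly 2^67, so it survives the round trip.
# 147573952589676012928 is 2^67 - 400000; doubles just below 2^67 are
# spaced 2^14 = 16384 apart, so it snaps to the nearest multiple:
# 2^67 - 24 * 16384 = 147573952589676019712.
exact = 147573952589676012928
as_double = int(float("147573952589676012928"))
print(as_double)          # 147573952589676019712
print(as_double - exact)  # 6784 (the rounding error: 400000 - 24*16384)
```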
@LouisDesjardins @Robert D created math plugins that handle very large numbers. I would look here to find the right one. At first glance, at least one of Robert's plugins should work. In particular, his free fp Plugin seems to be right for the job.
Yes, I mentioned this in the initial post. I use his plugins already, but wanted to know about pure Xojo behaviour.
This should help with the large numbers:
https://www.rapidtables.com/math/algebra/logarithm/Logarithm_Rules.html
@Beatrix W This should help with the large numbers:
https://www.rapidtables.com/math/algebra/logarithm/Logarithm_Rules.html
Sorry to be dense, but I don't quite follow. Using logs will help with accuracy?
@Beatrix W Grin.. instead of doing x/y you do
logb(x) - logb(y)
and then you apply expb to the result. This keeps the numbers much smaller. In the days before pocket calculators, you had logarithm tables for this type of calculation.
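For what it's worth, the trick looks like this in Python (illustrative only; logb/expb here are just math.log and math.exp, and the final accuracy is still bounded by the double result):

```python
import math

n = 156348578434374084375
d = 147573952589676412928

# log(n) and log(d) are each only about 48.5, so the subtraction
# never overflows, unlike forming n/d in 64-bit integers.
ratio = math.exp(math.log(n) - math.log(d))
print(ratio)  # roughly 1.0594591775223042, to about double precision
```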
Well, I know that. But I don't think this increases accuracy in the calculation, so I don't see a reason to use it.
Have you looked at Bob Delaney’s plugin?
@LangueRodriguez Sigh ... What do you think they used to send a rocket to the moon, a super-duper math intensive processor (that was not available at the time)?
Seriously, how is this relevant? Logs are lookup tables. You're trying to tell me that using logs is going to increase the accuracy of calculations in Xojo? Show me?
You guys seem to be jumping into the conversation without bothering to see what the topic is, grinning and sighing and talking to me like I never completed 8th grade math. Funny stuff!
I see how one response to the initial post could be "use logs". Sure. But is that any more accurate than using doubles?
@Greg OLone Have you looked at Bob Delaney’s plugin?
Yes, thanks, I mention it in the initial post, and also answered someone else here already who missed that too.
@Aaron H I'd like to read in an enormous fraction like this one from a text file as a string ...
156348578434374084375/147573952589676412928... and get a float from it. If I try to parse this as two integers and do the math, there is an overflow and I get a NaN or division-by-zero error. I know it's possible to use libraries like Bob Delaney's fp plugin. My question is whether there is any way to do it in pure Xojo. Someone told me they can do this in C, and I told them I don't believe them, because in order to get a value out of the string, the following has to happen:
- convert the string to numbers
- do the math
... and doing the math simply fails. What am I missing?
I'd love to see the C code they claim to use. I expect that it's using some vendor specific extension that supports 128 bit integers since the values given exceed the range of a 64 bit integer (signed or unsigned). That would roughly be the equivalent of using a plugin like Bob Delaney's which would let you do infinite precision far exceeding what could be represented in a 128 bit integer.
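As a point of comparison, Python's integers are arbitrary precision, so the exact quotient (and its correctly rounded double) is easy to verify there. This is only a cross-check of what such C or plugin code would produce, not a Xojo solution:

```python
from fractions import Fraction

n = 156348578434374084375
d = 147573952589676412928

# True division of Python ints is correctly rounded to a double,
# even when the operands exceed 64 bits.
x = n / d
print(x)  # roughly 1.0594591775223042

# Fraction keeps the exact rational value if more digits are needed later.
f = Fraction(n, d)
print(float(f) == x)  # True
```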
If I am right, the result of this calculation should be 1.0594591775223041863783555831258, according to the calculator built into Windows.
In the eighties of the previous century, Omikron Basic on the Atari ST had a 10-byte floating-point datatype that allows 18 to 19 digits of accuracy, and Delphi also has a 10-byte float. I guess that a 12-byte floating-point datatype would suffice for this calculation, but I don't know a programming language that supports one natively.
@Aaron H Seriously, how is this relevant? Logs are lookup tables. You're trying to tell me that using logs is going to increase the accuracy of calculations in Xojo? Show me?
So you are probably right. Using lookup tables will not give greater precision than the real answer. But the Xojo code you are using doesn't either. I don't have the time right now to go through it, but I know that when I was working with 8 and 16 bit MCUs and I needed to do bigger integer math (read 32-bit), I had to use logs and lookup tables. So is it perfect, no. Is it better than the built-in support, probably. Didn't mean to upset you, just thought you discarded the initial notion without even thinking about it. But you are probably right, that is not what you were looking for.
Using Bob Delaney's decimal plugin with scale at 31 I get the same result:
1.0594591775223041863783555831258
Decimal place 32 shows as 0 if I set the scale to 32; I think that's because the default precision is 32. If I set the precision to 40, then I get 39 decimal places:
1.059459177522304186378355583125765448926
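The same result can be cross-checked in Python's decimal module (shown only as an independent check; getcontext().prec plays the role of DecSetPrecision here):

```python
from decimal import Decimal, getcontext

getcontext().prec = 40  # analogous to DecSetPrecision(40)
n = Decimal("156348578434374084375")
d = Decimal("147573952589676412928")
print(n / d)  # 40 significant digits, i.e. 39 decimal places
```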
I will start liking every post that mentions the Atari ST :) It brings back good memories.
@LangueRodriguez when I was working with 8 and 16 bit MCUs and I needed to do bigger integer math (read 32-bit), I had to use logs and lookup tables.
No worries. That's a useful trick for such a situation, which is really not so different from the problem here, so I can see why it came to mind as a good strategy to get around data type / memory limitations.
@Alberto D;Poo Using Bob Delaney's decimal plugin with scale at 31 I get the same result:
1.0594591775223041863783555831258
Decimal place 32 shows as 0 if I set the scale to 32; I think that's because the default precision is 32. If I set the precision to 40, then I get 39 decimal places:
1.059459177522304186378355583125765448926
Could you please post the code you used to get that? Because I also use the decimal plugin, but when I tried this I also got a NaN and a DecDivideByZero exception (!). In my software I'm using DecSetPrecision(36) in order to accurately represent 128-bit integers.
@Aaron H Could you please post the code you used to get that? Because I also use the decimal plugin, but when I tried this I also got a NaN and a DecDivideByZero exception (!). In my software I'm using DecSetPrecision(36) in order to accurately represent 128-bit integers.
Here is the code:
Dim n, d, x As Decimal
DecSetPrecision(40)
DecSetScale(39)
n = New Decimal( "156348578434374084375" )
d = New Decimal( "147573952589676412928" )
x = n / d
MsgBox Str(x)
@Alberto D;Poo
Dim n, d, x As Decimal
DecSetPrecision(40)
DecSetScale(39)
n = New Decimal( "156348578434374084375" )
d = New Decimal( "147573952589676412928" )
x = n / d
MsgBox Str(x)
D'oh! In my test I was assigning the Decimals from integer literals (slaps forehead). Thanks.
If your final result is going to be type double, then the precision is limited to about 16 significant digits. Knowing that, you can simply truncate the numerator and denominator of your fraction to about 17 digits (to allow for round off) which will fit in int64s, then divide these two values.
Function BigFracToDouble(bigFrac As String) As Double
  dim nd() As String = split(ReplaceAll(bigFrac," ",""),"/")
  dim nTrunc As String = left(nd(0),17)
  dim dTrunc As String = left(nd(1),17)
  dim pwr As Integer = (len(nd(0))-len(nTrunc))-(len(nd(1))-len(dTrunc))
  return nTrunc.Val/dTrunc.Val*10^pwr
End Function
This produces a result of 1.059459177522304e+0 which agrees with the BigFloat result to the precision limit of type double.
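For anyone wanting a quick sanity check of the truncation approach outside Xojo, here is a line-for-line transliteration into Python (purely illustrative; the names mirror the Xojo version):

```python
def big_frac_to_double(big_frac: str) -> float:
    """Truncate both sides of the fraction to 17 significant digits
    (so they fit comfortably in 64-bit integers), divide, then rescale
    by the difference in digits dropped from each side."""
    n, d = big_frac.replace(" ", "").split("/")
    n_trunc, d_trunc = n[:17], d[:17]
    pwr = (len(n) - len(n_trunc)) - (len(d) - len(d_trunc))
    return int(n_trunc) / int(d_trunc) * 10 ** pwr

print(big_frac_to_double("156348578434374084375/147573952589676412928"))
# roughly 1.0594591775223042, agreeing with the exact result to double precision
```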