huge fractions from string to number

[quote=405399:@Dave S]those may “appear” to work… but the answers are probably wrong.

I would propose that

dim s as string= "156348578434374084375" 
if s<>str(val(s)) then msgbox "Double is not accurate"

your examples fail, as they all go to SciNot format (1.5eXX)[/quote]

I agree the answers are wrong. But they should be as close as possible, and not the result of overflow errors, right? That’s my main concern: something that is way off because of an overflow, which shouldn’t happen with doubles.

overflow isn’t an issue here… just precision…

dim x as double
dim n, d as double

// appears to work
n = val( "156348578434374084375" )
d = val( "147573952589676412928" )
x = n / d
MsgBox str(x)

// also appears to work
n = val( "1563485" )
d = val( "1475739" )
x = n / d
MsgBox str(x)

both give the same answer… even after losing 14 digits

Some tests:

157573952589676412928 // works, and is bigger than the other number
147573952589676412928 // works
147573952589676012928 // doesn't work -> 147573952589676019712

Weird that putting a 0 there instead of a 4 gives a wrong double. (It makes sense on closer inspection: 147573952589676412928 is exactly 2^67, so it is representable, and doubles near 2^67 are spaced 2^(67-52) = 32768 apart. Only multiples of 32768 in that range survive the round trip; 147573952589676012928 isn't one, so it rounds to the nearest representable value, 147573952589676019712.)
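For what it's worth, the round-trip behaviour can be reproduced outside Xojo: Python's float is the same IEEE 754 double, so a quick sketch (Python here purely for illustration) shows the identical rounding:

```python
# Doubles near 2^67 are spaced 2^(67-52) = 32768 apart, so only
# multiples of 32768 in that range survive an int -> double -> int trip.
exact = 147573952589676412928   # 2**67, exactly representable
off   = 147573952589676012928   # not a multiple of 32768

print(int(float(exact)))   # 147573952589676412928 (round-trips)
print(int(float(off)))     # 147573952589676019712 (nearest representable)
```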

@Robert Delaney created math plugins that handle very large numbers. I would look here to find the right one. At first glance, at least one of Robert’s plugins should work. In particular, his free fp Plugin seems to be right for the job.

Yes, I mentioned this in the initial post. I use his plugins already, but wanted to know about pure Xojo behaviour.

This should help with the large numbers:

https://www.rapidtables.com/math/algebra/logarithm/Logarithm_Rules.html

[quote=405413:@Beatrix Willius]This should help with the large numbers:

https://www.rapidtables.com/math/algebra/logarithm/Logarithm_Rules.html[/quote]

Sorry to be dense, but I don’t quite follow. Using logs will help with accuracy?

Grin… instead of doing x/y you do

logb(x) - logb(y)

and then you expb on the result. This makes the numbers much smaller. In the times before pocket calculators you had logarithm tables for this type of calculation.

[quote=405422:@Beatrix Willius]Grin… instead of doing x/y you do

logb(x) - logb(y)

and then you expb on the result. This makes the numbers much smaller. In the times before pocket calculators you had logarithm tables for this type of calculation.[/quote]

Well, I know that. But I don’t think this increases accuracy in the calculation, so I don’t see a reason to use it.
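A quick sanity check of the log route (sketched in Python, whose float is the same IEEE 754 double) suggests it gains nothing here: the operands are already rounded before the logs are taken, and log/exp add their own rounding error on top.

```python
import math

# Both operands are rounded to doubles before any log is taken,
# so the log trick cannot recover the lost digits.
n = float("156348578434374084375")
d = float("147573952589676412928")

direct   = n / d
via_logs = math.exp(math.log(n) - math.log(d))

print(direct)    # ~1.0594591775223042
print(via_logs)  # agrees with the direct quotient to ~14 digits at best
```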

Have you looked at Bob Delaney’s plugin?

http://delaneyrm.com/fpPlugin.html

Sigh … What do you think they used to send a rocket to the moon, a super-duper math intensive processor (that was not available at the time)?

Seriously, how is this relevant? Logs are lookup tables. You’re trying to tell me that using logs is going to increase the accuracy of calculations in Xojo? Show me?

You guys seem to be jumping into the conversation without bothering to see what the topic is, grinning and sighing and talking to me like I never completed 8th grade math. Funny stuff!

I see how one response to the initial post could be “use logs”. Sure. But is that any more accurate than using doubles?

[quote=405425:@Greg O’Lone]Have you looked at Bob Delaney’s plugin?

http://delaneyrm.com/fpPlugin.html[/quote]

Yes, thanks, I mention it in the initial post, and also answered someone else here already who missed that too.

[quote=405371:@Aaron Hunt]I’d like to read in an enormous fraction like this one from a text file as a string …

156348578434374084375/147573952589676412928

… and get a float from it. If I try to parse this as two integers and do the math, there is an overflow and I get a NaN or division by zero error. I know it’s possible to use libraries like Bob Delaney’s fp plugin. My question is whether there is there any way to do it in pure Xojo. Someone told me they can do this in C, and I told them I don’t believe them, because in order to get a value out of the string, the following has to happen:

  1. convert the string to numbers
  2. do the math

… and doing the math simply fails. What am I missing?[/quote]

I’d love to see the C code they claim to use. I expect it’s using some vendor-specific extension that supports 128-bit integers, since the values given exceed the range of a 64-bit integer (signed or unsigned). That would be roughly the equivalent of using a plugin like Bob Delaney’s, which lets you do arbitrary-precision arithmetic, far exceeding what can be represented even in a 128-bit integer.
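For comparison, here is roughly what that looks like in a language with built-in arbitrary-precision integers; this is a Python sketch for illustration, not the C code in question:

```python
from fractions import Fraction

n = 156348578434374084375   # 21 digits: exceeds unsigned 64-bit range
d = 147573952589676412928

assert n > 2**64 - 1        # too big for any 64-bit integer type

# Exact rational arithmetic throughout, with a single correctly
# rounded conversion to double only at the very end.
x = float(Fraction(n, d))
print(x)   # ~1.0594591775223042
```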

If I am right, the result of this calculation should be 1.0594591775223041863783555831258, according to the calculator built into Windows.

Back in the eighties, Omikron Basic on the Atari ST had a 10-byte floating-point datatype that allowed 18 to 19 digits of accuracy, and Delphi also has a 10-byte float. I guess a 12-byte floating-point datatype would suffice for this calculation, but I don’t know a programming language that supports one natively.
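The figure above is easy to double-check with any arbitrary-precision library; for instance, Python's standard decimal module (shown here purely as a cross-check) gives the same 32 significant digits:

```python
from decimal import Decimal, getcontext

getcontext().prec = 32   # 32 significant digits

n = Decimal("156348578434374084375")
d = Decimal("147573952589676412928")

print(n / d)   # 1.0594591775223041863783555831258
```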

So you are probably right: using lookup tables will not give greater precision than the real answer. But the Xojo code you are using doesn’t either. I don’t have time right now to go through it, but I know that when I was working with 8- and 16-bit MCUs and needed to do bigger integer math (read: 32-bit), I had to use logs and lookup tables. So is it perfect? No. Is it better than the built-in support? Probably. Didn’t mean to upset you; I just thought you had discarded the initial notion without even thinking about it. But you are probably right that it is not what you were looking for.

Using Bob Delaney’s decimal plugin with scale at 31 I get the same result:
1.0594591775223041863783555831258
Decimal place 32 shows as 0 if I set the scale to 32, I think because the default precision is 32. If I set the precision to 40, then I get 39 decimal places:
1.059459177522304186378355583125765448926

I will start liking every post that mentions the Atari ST :slight_smile: those bring back good memories.

No worries. That’s a useful trick for such a situation, which is really not so different from the problem here, so I can see why it came to mind as a good strategy to get around data type / memory limitations.

[quote=405463:@Alberto De Poo]Using Bob Delaney’s decimal plugin with scale at 31 I get the same result:
1.0594591775223041863783555831258
decimal place 32 show as 0 if I set scale to 32. I think because the default precision is 32. If I set the precision to 40, then I get 39 decimal places:
1.059459177522304186378355583125765448926[/quote]

Could you please post the code you used to get that? Because I also use the decimal plugin, but when I tried this I too got a NaN and a DecDivideByZero exception (!). In my software I’m using DecSetPrecision(36) in order to accurately represent 128-bit integers.

Here is the code:

Dim n, d, x As Decimal
DecSetPrecision(40)
DecSetScale(39)
n = New Decimal( "156348578434374084375" )
d = New Decimal( "147573952589676412928" )
x = n / d
MsgBox Str(x)
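For anyone without the plugin, a rough Python equivalent of that snippet (using the standard decimal module in place of Bob Delaney's Decimal class, as an illustration only) reproduces the same 39 decimal places up to rounding of the last digit:

```python
from decimal import Decimal, getcontext

getcontext().prec = 40   # roughly DecSetPrecision(40)

n = Decimal("156348578434374084375")
d = Decimal("147573952589676412928")
x = n / d
print(x)   # 39 decimal places, matching the plugin's output
           # up to rounding of the final digit
```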