Change limits for BigNumberMBS class?

BigNumberMBS is our implementation of a 320-bit floating point number.
Have you tried it?

It has a 64-bit exponent and a 256-bit fraction.

Compare that to the 11-bit exponent and 52-bit fraction of a normal double value.

This gives a precision of about 75 decimal digits.
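The digit count follows directly from the fraction width: each bit of fraction contributes log10(2) ≈ 0.301 decimal digits. A quick sanity check in Python (just a back-of-the-envelope sketch, not the plugin's own code; the theoretical maximum for 256 bits is ~77 digits, a little above the ~75 quoted once implementation details are accounted for):

```python
import math

def decimal_digits(fraction_bits):
    # each binary digit of fraction carries log10(2) ~ 0.301 decimal digits
    return fraction_bits * math.log10(2)

print(round(decimal_digits(52)))   # normal double fraction -> 16
print(round(decimal_digits(256)))  # BigNumberMBS fraction  -> 77
```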

Not sure why we picked those lengths originally, but I’d like to let everyone know that we could change them.

If there is a need, we could

  • increase those limits above
  • make a new class with higher limits
  • somehow make it work dynamically.

Anyone in need of more precision?

The next pre-release version will have 30 new methods adding trigonometry functions.



We changed it to 640 bits for now. And added 30+ methods. Enjoy!

Hi Christian.

Does changing it to 640 bits affect performance?

We have some performance-critical code that currently uses Bob Delaney’s plugin. Since he isn’t planning to support Apple Silicon, we were going to switch to BigNumberMBS, but we are concerned that it might now be slower.

Well, first, I may help Bob Delaney port the plugin to Apple Silicon.

Second, please run your benchmarks and let us know how this affects you.
I was thinking about making two classes, so you can choose which you plan to use.

Hi Christian

There were two other reasons why I was going to switch. Bob’s plugin had a small memory leak that would build up over time and BigNumberMBS was faster in some areas.

I’ll run some tests in the next few days to see whether performance has dropped.

I have been thinking about this issue and looking at how other languages, specifically Python, address this need. A good big number class is based on a mantissa and an exponent that are both unlimited. Both rest on a high-speed arbitrary-precision integer with very fast multiplies, including FFT multiplication when the precision is high enough and divide-and-conquer methods for medium precision. Addition is usually accelerated via some well-placed assembly using add-with-carry instructions. An example of this done right is the mpz integer capability in the gmp library.
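To illustrate the divide-and-conquer idea mentioned above, here is a minimal Karatsuba multiply in Python (purely a teaching sketch; real libraries like gmp switch between schoolbook, Karatsuba, Toom-Cook, and FFT methods depending on operand size, with heavily tuned thresholds):

```python
def karatsuba(x, y):
    # divide-and-conquer multiply of non-negative integers:
    # 3 recursive multiplies instead of 4 for the two halves
    if x.bit_length() <= 32 or y.bit_length() <= 32:
        return x * y                     # small enough: schoolbook
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)
    z2 = karatsuba(hi_x, hi_y)
    z0 = karatsuba(lo_x, lo_y)
    z1 = karatsuba(hi_x + lo_x, hi_y + lo_y) - z2 - z0
    return (z2 << (2 * n)) + (z1 << n) + z0
```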

Once you have a high speed integer capability, creating the floating point functions is relatively easy.
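To make that point concrete: a big float is just an integer mantissa paired with a binary exponent, so multiplication reduces entirely to integer work. A hypothetical minimal sketch in Python (the tuple layout and `fmul` name are illustrative, not the BigNumberMBS design):

```python
# value = mantissa * 2**exponent; the mantissa is an unbounded integer
def fmul(a, b):
    (ma, ea), (mb, eb) = a, b
    # multiply mantissas, add exponents -- all the heavy lifting is in
    # the integer multiply, which is where Karatsuba/FFT methods pay off
    return (ma * mb, ea + eb)

x = (3, -1)          # 3 * 2**-1 = 1.5
y = (5, -2)          # 5 * 2**-2 = 1.25
print(fmul(x, y))    # -> (15, -3), i.e. 15 * 2**-3 = 1.875
```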

I have been using a library for Python called mpmath; it is outstanding and a good example of doing it right. It has been in development since at least 2002, with much of the work funded by Google through Google Summer of Code grants. With mpmath I have been able to compute bernoulli(6000000) in 4.2 hours (the numerator has 33274145 digits!) on my 2013 MacBook Pro. BTW, making it multiprocessing was easy.

In any event, this capability could be developed for Xojo with a good C-based plugin for big integers and the floating point code written in Xojo itself. Now that Xojo has a pretty good compiler, it should not be necessary for all of the code to be in a plugin. If Python can deliver a fast capability, then Xojo, being compiled, surely can.

So in my opinion, why limit the size of the mantissa and exponent? Just my two cents.

Bill J.

You may look at the plugins from Bob Delaney.

Way back when, I worked with Bob and compared my results with his. His calculator was limited to 40000 digits, if I recall correctly, and speed suffered greatly at that precision. I also checked with him, and he had not implemented FFT multiplication, which is absolutely necessary for high precision.


I have tested your new BigNumberMBS class and it is very fast. I have also tested all but two of the trig functions, and they work beautifully. Nice job. Adding the ability to select two or more levels of precision would be a welcome addition, provided it can be done with a variable accessible at run time.

Is BigNumberMBS in v20.0 much faster for you than the one in 21.1pr?

Because making different sizes only makes sense if the small one is faster than the big one.

There is a small difference, but not enough to cause any issues except for a few of my applications. But there are times when speed is critical, or more digits of precision is critical, and it would be nice to be able to select one or the other. I would also like to see the option to double the precision to over 400 digits of accuracy, which is necessary for implementing some of my algorithms and functions.

I can put it on the wish list.
Not sure how to do it efficiently.
If we use two sizes, the plugin may double in size if the compiler duplicates all the code for the new type.

Then would it be feasible to simply increase the precision of the current plugin to provide 400 digits or so of precision? Then I think the vast majority of users would have enough precision for their needs. Your plugin is about 8 times faster for the same precision than the excellent fp plugin of Bob Delaney, which up until BigNumberMBS was the only show in town.

Double the range again?
A 256-bit exponent + 1024-bit fraction?

Sure, that is easy.

Alternative could be to make multiple BigNumber classes, one for each size type.

BigNumber256MBS, BigNumber512MBS and BigNumber1024MBS and three plugin segments.

Well, simply doubling the size now seems like an optimal short-term solution because your plugin is so fast. Ultimately, having multiple sizes would be great, but would it be possible to have a variable that selects the size at runtime? That is, call all the options BigNumberMBS and use a variable such as BigNumberOptionMBS = 1, 2, 3, etc. to select 256, 512, 1024, …

Well, there are several scenarios possible.

One is to have different classes in different plugin segments, so you can pick which one you use.

The other is to have it all in one plugin, where you select the size with a global property, and internally we create objects of the right type.
A problem may be assigning a value of one type to another. With the property approach, it may be that you can only switch while no objects exist, so there is nothing to convert. Like setting the level of precision before you calculate.
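For comparison, Python's standard-library decimal module takes exactly this "global property" approach: precision is a context setting read at computation time, and values that already exist keep their digits. A short sketch of how such a runtime switch behaves in practice:

```python
from decimal import Decimal, getcontext

getcontext().prec = 80            # set precision before calculating
a = Decimal(1) / Decimal(3)       # computed with 80 significant digits

getcontext().prec = 400           # raise precision for later work
b = Decimal(1) / Decimal(3)       # computed with 400 significant digits

# existing values are untouched; only new computations use the new setting
print(len(str(a)) - 2, len(str(b)) - 2)   # -> 80 400
```

Note that nothing needs converting when the setting changes, because the precision is captured at the moment a value is computed, which matches the "set the level of precision before you calculate" idea above.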