What kind of Integer, and does it matter!?

http://documentation.xojo.com/index.php/Integer

As you all know, there are different kinds of Integer to choose from when coding.

But… if you KNOW “i” will never be more than 10, do you use “DIM i as Int8” ?

Do you care what kind of Integer you use, and does it really matter…!?
In a giant project, that may be the case… But then, where do you draw the line!?

I usually use Integer, unless there is a very good reason to do otherwise.

For example, when working with binary data stored in MemoryBlocks, you sometimes need to read and write data as Int16 or Int64 values, but generally I find it easier just to stick to Integer where possible.
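Something like this, for instance (a minimal sketch; the offsets and values are made up purely for illustration):

   Dim mb As New MemoryBlock(16)
   mb.LittleEndian = True

   mb.Int16Value(0) = 1234        // writes exactly 2 bytes at offset 0
   mb.Int64Value(2) = 123456789   // writes exactly 8 bytes at offset 2

   // once the values are out of the MemoryBlock, plain Integer is usually fine
   Dim i As Integer = mb.Int16Value(0)
   Dim big As Int64 = mb.Int64Value(2)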

For now, the only time I use anything other than INTEGER to represent a non-floating-point value is when communicating with an external process (DLL, Declare, etc.) and a specific size is a requirement of the protocol.

In certain binary processes where big numbers are required with single-bit precision, I will use INT64 or UINT64 (depending on circumstance)…

An important statement to note from the LR is that Integer maps to the platform's native size, meaning it may be INT32 or INT64, which could be important if you use INTEGER for MemoryBlock, DLL or Declare protocols.

[quote=141237:@Jakob Krabbe]But… if you KNOW “i” will never be more than 10, do you use “DIM i as Int8” ?

Do you care what kind of Integer you use, and does it really matter…!?
In a giant project, that may be the case… But then, where do you draw the line!?[/quote]
I’d say that 99% of the time you just want to use Integer. Generally, the size-specific Integer types are for dealing with outside APIs that require a specific size. The one exception would be Int64/UInt64, which you might want to use for things that could possibly get very large, such as database primary key values.
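For example (a trivial sketch; the literal is just a value that would not fit in an Int32):

   Dim rowID As Int64 = 5000000000   // a primary key value too large for Int32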

Thank you!
I will do Integer “all in” in the future! Eventually, my databases will become quite big… but we’re not there yet!
I think it’s messy to use different kinds… and if it doesn’t matter, then using one kind is fine with me.

Does choosing UInt32 give you more efficiency than a regular Int32?

Thanks

No… and there are two types, “UInt” and “Int”, because they behave differently. “UInt” cannot represent a negative number, but it can hold 2x the maximum value of “Int”, while “Int” can be negative but has a smaller maximum positive value… so each has its place.
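To make the difference concrete, here is a quick sketch (the literals are simply the 32-bit limits):

   Dim signed As Int32
   Dim unsigned As UInt32

   signed = 2147483647       // maximum Int32 value (2^31 - 1)
   unsigned = 4294967295     // maximum UInt32 value (2^32 - 1), roughly 2x as large

   signed = -1               // fine, Int32 can hold negative values
   unsigned = 0              // zero is the smallest value a UInt32 can ever hold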

When we get the 64-bit framework, “Integer” will be Int64, right? Would it not be a good habit to start explicitly using Int32 in Declare parameters, as part of the 1% of the time when it matters?

Thanks Dave. I was just wondering why people are using Int when they could use UInt32. I suppose as long as it holds the value you require, it does not matter.

UInt32 is unsigned, so when you want a value that can NEVER be negative it might be appropriate

But it has also led to fun bugs in people's code because they forget a UInt CAN NEVER BE NEGATIVE :stuck_out_tongue:
Things like “Why does the following code result in an infinite loop?” :stuck_out_tongue:
(and yes, this really has occurred)

   Dim counter As UInt32 = 1
   While counter > -1            // a UInt32 can never be negative, so this condition never becomes false
          counter = counter - 1  // subtracting past 0 wraps around to the maximum UInt32 value
   Wend

So counter starts as 1
Decrements to 0
Then the next loop iteration subtracts 1 more, and now the UInt32 is … what?
The MAXIMUM VALUE a UInt32 can hold, because it can NEVER be negative :slight_smile:
and the loop goes on and on and repeats this FOREVER
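The same wrap-around can be shown in two lines (the value in the comment assumes 32-bit wrap-around):

   Dim u As UInt32 = 0
   u = u - 1   // u is now 4294967295, the maximum UInt32, not -1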

If you deal with external protocols (DLL, Declare etc) then I would think that to be a good idea, as your code would then be able to run (discounting other possible issues) on either a 32-bit or 64-bit platform

And INT32, not UINT32… and then only when the protocol is also 32-bit… You may need to change the external item to 64-bit, depending on what it is.
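As a sketch of what that looks like in practice (a macOS example, assuming the usual getpid declare; pid_t is 32 bits, so the return type is spelled out as Int32 rather than Integer):

   #If TargetMacOS Then
     Declare Function getpid Lib "System" () As Int32
     Dim pid As Int32 = getpid()
   #EndIf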

While kinda off topic… this can already be an issue with ObjC/Swift, as iPhones/iPads are either 32-bit or 64-bit today depending on the device and/or level of iOS involved. And Swift at least works like Xojo in that Int = Int32 or Int64… and UInt = UInt32 or UInt64, so it is best there to declare the exact size of integer you want/need in some cases.

If I recall correctly way back when Aaron said most of the special integer types were intended for declares only
A bug meant that they were available for general use
And we live with that

99.9% of the time Integer is all a person needs in Xojo, but for declares & supporting other protocols you may need them
So in a way it's handy they are available when needed - but usually you can ignore them

[quote=141335:@Norman Palardy]If I recall correctly way back when Aaron said most of the special integer types were intended for declares only
A bug meant that they were available for general use
And we live with that

99.9% of the time Integer is all a person needs in Xojo, but for declares & supporting other protocols you may need them
So in a way it's handy they are available when needed - but usually you can ignore them[/quote]
So UInt is not intended for general use? ‘A bug meant that they were available for general use’
What does this mean? Is there a bug in Xojo? Should I continue to use the special int types?

Thanks

A bug in the IDE and compiler about 8 or 9 years ago meant that types intended just for declares were available everywhere
We’re not changing that

Use them, but you do need to understand them to use them properly - like that loop I posted, where people reported a bug about certain data types causing infinite loops. The lack of understanding about what UInt32 meant was the cause - not the data type itself.

[quote=141340:@Norman Palardy]A bug in the IDE and compiler about 8 or 9 years ago meant that types intended just for declares were available everywhere
We’re not changing that

Use them, but you do need to understand them to use them properly - like that loop I posted, where people reported a bug about certain data types causing infinite loops. The lack of understanding about what UInt32 meant was the cause - not the data type itself.[/quote]
And this lack of understanding is simply because of people trying to go below 0?

Thanks

Why was the bug not removed? Was it simply because so many people used these special data types?

Thanks

[quote=141342:@Oliver Scott-Brown]And this lack of understanding is simply because of people trying to go below 0?
[/quote]
It's unsigned.
It CANNOT go below zero - EVER
Every possible bit pattern represents a valid non-negative integer, so trying to count & “go below zero” CANNOT happen
So a loop like the one I wrote will NEVER end because the counter cannot get < 0 :stuck_out_tongue:

As far as I know the bug was not realized until sometime after release
But this predates my arrival as an employee by a lot, so I’m not 100% sure about the whys and wherefores

[quote=141347:@Norman Palardy]It's unsigned.
It CANNOT go below zero - EVER
Every possible bit pattern represents a valid non-negative integer, so trying to count & “go below zero” CANNOT happen
So a loop like the one I wrote will NEVER end because the counter cannot get < 0 :stuck_out_tongue:

As far as I know the bug was not realized until sometime after release
But this predates my arrival as an employee by a lot, so I’m not 100% sure about the whys and wherefores[/quote]
Okay, thanks. I understand it cannot go below 0, but I was referring to the behaviour if you attempt that (which I was unsure of). But thanks, everything is clear now.

The behavior is as I said in https://forum.xojo.com/17039-what-kind-of-integer-does-it-matter/p1#p141325

I thought that was the behaviour but I wasn’t sure enough to mention it.

Thanks anyway. As I said, everything is clear for me now.