Something funky about UInt32 comparisons

Before I file a feedback report, I’d like a sanity check please. Consider this code

Var Value As UInt32 = 4294967295
If Value <= 0 Then
  MessageBox("Value is less than or equal to 0")
Else
  MessageBox("Value is greater than 0")
End If

On 64-bit Mac and Windows, the message box says “Value is greater than 0.” On 32-bit Mac and Windows, the message box says “Value is less than or equal to 0.”

Changing the code to

Var Value As UInt32 = 4294967295
Var Zero As UInt32 = 0
If Value <= Zero Then
  MessageBox("Value is less than or equal to 0")
Else
  MessageBox("Value is greater than 0")
End If

So the comparison between UInt32 and Integer seems to be the issue, since Integer will equal Int32 in 32-bit builds and Int64 in 64-bit builds. Right?

Is this worth filing a bug report about? Or is this just one of those things the compiler won’t be able to handle?

That seems like a bug to me. I would not have guessed that comparing a UInt32 to 0 would be a problem. Also, I can confirm that I see this on 32-bit Win10 using 2020r1 (Windows executable compiled on my Mac).

  1. What does the doc say about the width of an integer constant on 32 and 64 bit builds?

  2. You don’t report what your second example does.

  3. I’m guessing that an integer constant is the same width as the build, and as you are not using an Integer anywhere, can you report what the doc says about comparing integers of unequal width? I’m guessing that the shorter one is zero-extended so both have the same width, and then a comparison is done between two now-equal-width integers.

You’re right. I did forget to mention that the second block compares as expected, because there is no casting necessary.

I’m pretty sure that Value is being cast to Integer to match the 0 constant. That’s not exactly a bug, but it’s definitely unexpected. I would have expected the left side to remain unaffected and the right side to be cast to UInt32.

Not being a compiler engineer, I’m not sure why it happens this way, but it does.

What does the doc say about what cast does? Your second example will definitely do a 32-bit unsigned compare as that is the type of both arguments. If what the compiler does is as I surmised in my previous post, you can’t complain if that behaviour is already nicely documented.

If OTOH it isn’t, then you can kick Xojo’s ■■■■ until it is, and we’ll all stand on the touch-line throwing bread rolls at the participants.

That’s why I’m asking. I’m not able to find documentation about what should happen in this instance. It’s definitely not intuitive one way or another.

Aren’t 32-bit Mac and 32-bit Windows still using their old compiler? That could explain the difference.

Personally, I would say it’s a bug no matter how you look at it.

Edit - I read your case again; it’s not a compiler difference, I guess.

It sort of shows things we are missing from the language, like the ability to prefix or suffix constant values to ensure the correct integer type.


You’re saying the doc doesn’t explain how an integer is promoted to a wider type? Doc bug if that’s the case; ask for it to be documented.

It was already reported 10 years ago. @Geoff_Perlman this is what I mean when I say case ranking isn’t as important as actually doing something about the bugs. It’s been verified for 10 years. I was working for you at the time. We had Joe Ranieri around who could have done something about this. I’m sorry to be so harsh, but this is where the disenfranchisement comes from. This is arithmetic, the thing computers were invented for, and this bug languishes for 10 years? I know this fell through the cracks, because I wasn’t aware of it while working at Xojo. I have a feeling you weren’t aware of it either. And that’s my point. It was verified, and then what? There’s a disruption in the bug reporting process, and that’s what needs to be addressed.

And I know I’m being rougher than normal. I feel so strongly about this because we failed this report. I hope something can be learned from this.

Edit: Forgot to add the case link: feedback://showreport?report_id=11935


Hey, what do you know, I got a notification when you updated the report because at some point in the past I favorited it, so it must have bitten me in the past too. Huh.

Thom, can you take a look at case feedback://showreport?report_id=2218

I’m not at a level to fully comprehend what was said there, but if I understand correctly it’s some problem with signed and unsigned comparisons.

Because it’s working as expected on 64-bit, maybe something changed that did not get updated for 32-bit?

Edit: thanks, it is more clear now.

I have looked at it. I don’t think I can explain it better than Aaron. But I’ll try.

&hFFFFFFFF as a UInt32 is 4294967295
&hFFFFFFFF as an Int32 is -1

It’s the same bytes. What matters is how they are interpreted. This is similar to text encodings.

In pseudocode, what we think should be happening is If UInt32(&hFFFFFFFF) > UInt32(&h00000000) Then. What is actually happening is If Int32(&hFFFFFFFF) > Int32(&h00000000) Then. The constant (0) is an Integer. On 32-bit builds, that is Int32. On 64-bit builds that is Int64. It works on 64-bit systems because of the larger signed integer.


I documented it in “Spot the error” in xDev 13.5, page 13 (Sep/Oct 2015):

I was going over my code, cleaning it up and preparing it for Xojo 64-bit, paying particularly close attention to integers and potential overflow errors. I had thought I was done with overflow errors, but they find new ways to annoy me. Have a look at the following code and guess which ones evaluate to true and which ones result in false (and no peeking at the solution!):

Dim n1 As Integer = 3123456789 
MsgBox Str(n1 >= 0)

Dim n2 As UInt32 = 3123456789 
MsgBox Str(n2 >= 0)

Dim n3 As UInt64 = 3123456789 
MsgBox Str(n3 >= 0)

Dim n4 As Int64 = 3123456789 + 8 
MsgBox Str(n4 >= 0)

Dim n5 As Int64 = 2147483640 + 8
MsgBox Str(n5 >= 0)

Dim n6 As Int64 = 3000 * 1000 * 1000 * 1.2
MsgBox Str(n6 >= 0)

Dim n7 As Int64 = 3000 * 1000 * 1.2 * 1000 
MsgBox Str(n7 >= 0)

Dim n8 As Variant = 3123456789
MsgBox Str(n8 >= 0)

I’ll give you the first two solutions:

n1 >= 0 is false. n1 contains the wrong value.

The first one is easy. After all, we are talking about overflow errors here, so I wasn’t expecting to catch you out with a simple Integer overflow. If you did get it wrong, then read what I wrote about overflow errors in the last issue (hint: Integer is the same as Int32 and can only hold values between -2,147,483,648 and 2,147,483,647).

n2 >= 0 is false. n2 contains the correct value.

Now this one is surprising at first sight as 3123456789 is well within the defined UInt32 range of 0 to 4294967295 (and my thanks to Norman for the explanation). But what the compiler does whenever it gets any binary operator (+, -, =, <, etc) is to first compute a common type between the operands. And for backwards compatibility reasons, the common type between signed and unsigned is signed. So for the comparison, the compiler will assume the number is a signed integer, and trying to squeeze 3123456789 into a signed integer results in an integer overflow. To deal with this you would need to specify a type by casting like this:

MsgBox Str(n2 >= UInt32(0))

This now treats both n2 and 0 as an UInt32 and therefore evaluates to true… as it should.

Now this might not be a bona fide integer overflow error, so maybe I should call it a conversion or comparison overflow error instead.

Care to try the rest?


My fear/expectation is that this will move from verified to not a bug, for the same reasoning as 2218. Though interestingly, Norman says this is “100% a bug” despite having closed 2218.

Honestly, what’s happening behind the scenes doesn’t really matter to me. I’d like to understand why this is happening, but the fact of the matter is that any user would expect the simple comparison in my original post to correctly detect that Value is greater than 0.


But what a computer expects is not necessarily the same. 0 is by default an Integer and not a UInteger, so a comparison between an Integer and a UInteger casts both operands to Integer.

If you want to treat 0 as an UInt then you need to cast it to one: UInt32(0)

The road to hell is plastered with assumptions - and the one here is that the computer can inherently discover whether a 0 is meant to be Integer or UInteger. It can’t.

You might as well complain “Why can’t I put 6,289,632,731,324 into a 32-bit Integer???” or “Why is 0.1 not precise in the debugger?”

You COULD alternatively cast such a comparison to use UIntegers … and have fun comparing negative numbers with UInt …

Not a bug. Nature of the beast.

Basic rule: beware of type conversions.

My recommendation: use constants zeroInt and zeroUInt to keep you on your toes …

Or, as suggested in the Feedback report, this problem was solved ages ago and Xojo could do that.

But, I’m not a compiler engineer. I don’t know if it’ll actually work.

What I know is this comparison shouldn’t be failing. I can understand why it is. I understand how to work around it. I understand that it’s not simple. It’s still wrong.


Thanks Thom for finding this.


Sorry, but this isn’t true in other languages.
I don’t remember having seen documentation about the type conversion/promotion rules used by Xojo’s compiler.

There are a LOT of things that aren’t true in other languages … :roll_eyes: