Microseconds differs between Carbon and Cocoa

I could not find any post about this, so I thought I would add it to make folks aware. Earlier today there was a post on the NUG digest about someone having problems putting Microseconds values into strings and then getting the same number back when reconverting the string to a double. It turns out that in Carbon, Microseconds returns whole numbers with no fractional portion. In Cocoa, the doubles returned by Microseconds are not whole numbers; they have a fractional component.

To verify this I wrote a little program that put 25 Microseconds values into a listbox, placing them in the rows using the Format command with a format string of “###.000”. Running this under Carbon, all of the values ended in .000, indicating they were all whole numbers. Running it again under Cocoa, all of the values had digits in the three places to the right of the decimal point, indicating that the values are not whole numbers.

So, be aware of this, just in case. Either that or always use the Floor function on the value returned from Microseconds to get the value back to a whole number when running under Cocoa.
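
Here is a minimal sketch of the kind of test described above, assuming a window with a ListBox named ListBox1 (the control name is just for illustration):

```
' Put 25 Microseconds readings into a listbox with three decimal places.
' Under Carbon every row ends in .000; under Cocoa the rows generally do not.
For i As Integer = 1 To 25
  Dim usecs As Double = Microseconds
  ListBox1.AddRow(Format(usecs, "###.000"))
Next

' Workaround if whole numbers are required under Cocoa:
Dim wholeUsecs As Double = Floor(Microseconds)
```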

Harrie Westphal, thanks for sharing!

Fascinating. The fractional part goes up to 17 digits - far beyond anything the processor’s clock frequency could account for.

it’s a floating point number…

What is the logic of having a fractional number of microseconds?

If that number is meant to represent something, one would imagine it relates to the speed of the processor. In theory, with a 3 GHz processor, one processor tick is about 0.00000000033 seconds, roughly 3/10,000ths of a microsecond. That would seem to be the extreme limit of possible temporal resolution.
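
As a quick arithmetic check (just a sketch of the calculation, nothing framework-specific):

```
' One tick of a 3 GHz clock, expressed in microseconds
Dim tickSeconds As Double = 1.0 / 3.0e9              ' about 3.33e-10 s
Dim tickMicroseconds As Double = tickSeconds * 1.0e6 ' about 0.000333, i.e. ~3/10,000 of a microsecond
```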

That floating point number is simply ridiculous. It is a bug.

Not a bug.
Prior to the introduction of things like UInt64, Int64, etc., there were two choices:
Integer - which is 32-bit
Double - which can represent a much wider range
Since Microseconds can exceed a 32-bit integer fairly rapidly, a double is/was used and has been used ever since.
Changing it to a UInt64 or Int64 breaks existing code for no real reason.
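
For scale, a quick back-of-the-envelope sketch (plain arithmetic, not framework code) of how soon a signed 32-bit microsecond counter would overflow:

```
' A signed 32-bit integer tops out at 2^31 - 1 microseconds of uptime:
Dim maxInt32 As Double = 2147483647.0
Dim secondsUntilOverflow As Double = maxInt32 / 1000000.0        ' about 2,147 seconds
Dim minutesUntilOverflow As Double = secondsUntilOverflow / 60.0 ' about 35.8 minutes
```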

[quote=124808:@Norman Palardy]Not a bug.
Prior to the introduction of things like UInt64, Int64, etc., there were two choices:
Integer - which is 32-bit
Double - which can represent a much wider range
Since Microseconds can exceed a 32-bit integer fairly rapidly, a double is/was used and has been used ever since.
Changing it to a UInt64 or Int64 breaks existing code for no real reason.[/quote]

Sure. But why keep the fractional part?

because it’s there?

Because it’s a double.
The fact that Carbon returned a double with all 0’s for the fractional part and Cocoa doesn’t suggests that relying on a floating point value ALWAYS not having a fractional part is/was a bad assumption.

If that’s what was expected, then a suitable format cures it.
Not doing so leaves you open to surprises - like the start of this thread indicates.

[quote=124820:@Norman Palardy]Because it’s a double.
The fact that Carbon returned a double with all 0’s for the fractional part and Cocoa doesn’t suggests that relying on a floating point value ALWAYS not having a fractional part is/was a bad assumption.

If that’s what was expected, then a suitable format cures it.
Not doing so leaves you open to surprises - like the start of this thread indicates.[/quote]

My original point was that if a fractional part is returned, it should represent something. From what I can figure, 17 digits after the decimal point cannot represent any valid time resolution. I have no issue with getting a result in fractions of a microsecond; I have an issue with getting a fantasy result. It should be possible to truncate the number of digits to return a true measurement of time, should that precision seem excessive. If what appears after the decimal point is purely random and unreliable, then the Carbon Microseconds function seems more sensible than having to dump the fraction afterwards.

I will not lose sleep over it, though :wink:

Microseconds may be calculated differently depending on the platform.
Suppose the actual available clock measured only “seconds since boot”.
Calculating microseconds from this could result in a fractional value.

If getting fantasy results is an issue then the jumps forward and backward in time for DST ought to really bother you, since it’s possible to have 2AM twice when DST goes out of effect, or NOT have 2AM at all on nights when it goes into effect :stuck_out_tongue:

At least in this regard microseconds is strictly increasing up to the point it overflows the double (some time way in the distant future)

[quote=124830:@Norman Palardy]Microseconds may be calculated differently depending on the platform.
Suppose the actual available clock measured only “seconds since boot”.
Calculating microseconds from this could result in a fractional value.

If getting fantasy results is an issue then the jumps forward and backward in time for DST ought to really bother you, since it’s possible to have 2AM twice when DST goes out of effect, or NOT have 2AM at all on nights when it goes into effect :stuck_out_tongue:

At least in this regard microseconds is strictly increasing up to the point it overflows the double (some time way in the distant future)[/quote]

Norman, you are trying to confuse me with administrative time, which has nothing to do with Microseconds and is a political decision.

I may be old-fashioned, but I feel a figure is a figure is a figure. If it represents nothing, better to truncate the result to what Microseconds is supposed to report - not thousandths of a microsecond, or millionths of it. It is not a matter of what constitutes a double or not; it is what the reported value represents.

Once again, I do not intend to drag this discussion on until dawn. If you feel differently, fine. The world will not come to an end.

It does what it claims
Reports microseconds - and it happens to use a double for that purpose
It states nothing about “whole microseconds” which is what truncating the fractional portion would imply
In fact the docs make no statement about whole or not whole microseconds

If anyone doesn’t expect or want to deal with the fractional portion, then certainly:

  1. assign it to a UInt64 and that will truncate the fraction
  2. save it as a string without the fraction, using a format without decimal specifiers

All I’m encouraging is that no one should assume that it does / does not include the fractional microseconds.
It IS a floating point value, and it may.
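
A minimal sketch of both workarounds (Microseconds, Format, and UInt64 are standard Xojo; the variable names are just for illustration):

```
Dim raw As Double = Microseconds      ' may carry a fractional part under Cocoa

' 1. Assigning to a UInt64 truncates the fraction
Dim wholeMicros As UInt64 = raw

' 2. A format string with no decimal specifiers keeps the fraction out of the string
'    (note that Format may round rather than truncate)
Dim s As String = Format(raw, "#")
```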

Whatever is calculating the microseconds value may very well compute hundredths or thousandths of a microsecond; but once that fractional portion goes into a floating point value, it does what floating point does: it gives a very close approximation of the fractional portion. That is just the way floating point works. My only reason for starting this post was to make folks aware that Cocoa returns the Microseconds value with a fractional portion, and that could possibly lead to problems if code was written assuming it always returns whole numbers. I can readily understand that it now does this, as processors are well beyond microsecond speeds.
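
A classic illustration of that approximation behaviour (nothing to do with Microseconds specifically, just how doubles work):

```
' Doubles only approximate most decimal fractions:
Dim a As Double = 0.1 + 0.2
If a <> 0.3 Then
  ' this branch is taken: a is actually about 0.30000000000000004
End If
```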

[quote=124830:@Norman Palardy]Microseconds may be calculated differently depending on the platform.
Suppose the actual available clock measured only “seconds since boot”.
Calculating microseconds from this could result in a fractional value.
[/quote]

Really? I can’t see how. “seconds since boot” would yield a maximum resolution of 1 sec = 1e6 us, so how would you ever get anywhere close to < 1us (and therefore fractional us) resolution from this?

If Microseconds is supposed to return (whole) microseconds, then why the fraction (and where did it come from) if the OS is reading an integer (likely a UInt64) from a hardware timer? So I agree with Michel: it’s incorrect. But I too will sleep well tonight.

P.

Just trying to wrap my head around in what possible app, constrained by the Xojo framework, this would make a difference.

[quote=124855:@Norman Palardy]It does what it claims
Reports microseconds - and it happens to use a double for that purpose
[/quote]

So if I time a block of code and get m1 = 1000.0 and m2 = 1005.7, did my code take 5.7 or 5 us to execute? Does Microseconds have a resolution better than 1 us, as the FP result would imply? If the hardware timer runs at the system clock speed then in theory it could have roughly 300 ps resolution. Does it?

P.
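
For reference, the usual timing pattern looks something like this (DoSomething is a hypothetical method standing in for the code being timed):

```
Dim m1 As Double = Microseconds
DoSomething                         ' hypothetical code being timed
Dim m2 As Double = Microseconds
Dim elapsedUs As Double = m2 - m1   ' elapsed microseconds; may include a fraction under Cocoa
```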

A little searching and I find that Carbon has a function ‘Microseconds’ that returns an integer type, but it was deprecated in 10.8, and the other, apparently modern, timing functions work in nanoseconds. So maybe Carbon was getting that integer Microseconds value, hence no fractional part, but Cocoa is getting nanoseconds and scaling them directly.

That would explain the switch to an FP result: backward compatibility with Microseconds, yet yielding ns resolution. Now that makes sense to me.

[quote=124869:@Peter Stys]Really? I can’t see how. “seconds since boot” would yield a maximum resolution of 1 sec = 1e6 us, so how would you ever get anywhere close to < 1us (and therefore fractional us) resolution from this?

If Microseconds is supposed to return (whole) microseconds, then why the fraction (and where did it come from) if the OS is reading an integer (likely a UInt64) from a hardware timer? So I agree with Michel: it’s incorrect. But I too will sleep well tonight.

P.[/quote]
Nothing anywhere says it “returns (whole) microseconds”.
I went and had a peek in the framework code, and we do convert from nanoseconds to microseconds - hence fractional microseconds is correct.
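
A hedged illustration of why a nanosecond source produces fractional microseconds (the tick value below is made up; the point is simply that dividing by 1,000 leaves a remainder):

```
Dim nanos As UInt64 = 1234567891234    ' hypothetical nanosecond count from the OS
Dim micros As Double = nanos / 1000.0  ' 1234567891.234 -> fractional microseconds
```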