Microseconds differs between Carbon and Cocoa

[quote=124888:@Norman Palardy]Nothing anywhere says it "returns (whole) microseconds"
I went & had a peek in the framework code & we do convert from nanoseconds to microseconds - hence fractional microseconds is correct[/quote]

Now we know the reason for the mystery: they are microseconds with a resolution of one nanosecond :slight_smile:

On the contrary. It is absolutely necessary for the Date function :wink:

I don’t follow. How would a fraction of a microsecond make a difference in a Date? My point was that the kind of applications that require microsecond precision probably aren’t practical to create using Xojo. The framework is just too bloated. That may change with LLVM, but I still don’t see why microseconds with no fraction vs. microseconds with a fractional component makes any appreciable difference to any Xojo program.

It was a joke… I picked Date because it makes no difference to know it to the nanosecond. Even on New Year’s Eve :wink:

IMO this difference is the result of an evolutionary development from early on.
From the release notes:
5.5.4fc2 [Fix] [Lnx]
Microseconds: Now returns precision in microseconds instead of whatever incorrect precision it was before. However, it does not return the amount of time that has passed since the user’s computer was started.

2007r2 [Fix] [Win]
Microseconds: Microseconds now has microsecond resolution on Windows (instead of millisecond resolution). This also fixes possible rollover issues due to 32-bit integer limitations.

Microseconds is convenient for measuring execution time, and indeed having that precision on Windows was necessary. Given faster processors, it made sense to pursue that evolution down to the nanosecond.
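For example, a minimal timing sketch in Xojo (SomeWorkToMeasure is just a placeholder name for whatever you want to time):

[code]
' Minimal sketch: timing a block of code with Microseconds.
Dim startTime As Double = Microseconds

SomeWorkToMeasure() ' placeholder for the code being measured

Dim elapsedMicros As Double = Microseconds - startTime
System.DebugLog("Elapsed: " + Format(elapsedMicros, "-#,##0.0") + " µs (" _
  + Format(elapsedMicros / 1000.0, "-#,##0.000") + " ms)")
[/code]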

Now, optimising code in nanoseconds looks like cutting grass on a golf course with a pair of scissors.


Worse, at that level of precision, the length of the connections makes a difference. The speed of light is about 30 centimeters per nanosecond: roughly 12 inches (11.81). It is more complex and slower for electricity, since propagation depends on the frequency of the signal and the conductivity: https://en.wikipedia.org/wiki/Speed_of_electricity

Beam me up, Scotty :wink:

Don’t forget to come back! :wink:

There is nothing mysterious about it. The choice of unit has nothing to do with precision. I can state a given length as 1234 mm, 123.4 cm, or 1.234 m – three different units, but still it is the same length expressed with the same precision. Why a time stated in (fractions of) microseconds couldn’t be precise to the nanosecond is beyond me.

What a pontificating, contemptuous and unnecessary remark.


Firstly, who the heck cares about nanosecond resolution!

Secondly, does a 2-3 GHz processor really provide nanosecond resolution? I don’t know what factors the timing resolution of the system (OS + hardware) depends on, but nanosecond resolution would imply that for every 2-3 executed instructions the system is able to provide a distinct time value… maybe it can, I don’t know.
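Out of curiosity, one way to get a feel for this from Xojo (just an experiment, not a proper measurement of the OS clock) would be to sample Microseconds in a tight loop and look at the smallest non-zero step between consecutive readings:

[code]
' Rough experiment: estimate the apparent resolution of Microseconds by
' looking for the smallest non-zero difference between consecutive samples.
Dim smallestStep As Double = 1.0e9 ' sentinel value, in microseconds
Dim a, b As Double

For i As Integer = 1 To 100000
  a = Microseconds
  b = Microseconds
  If b - a > 0 And b - a < smallestStep Then
    smallestStep = b - a
  End If
Next

' On OS X this may come out as a fraction of a microsecond; elsewhere it may not.
System.DebugLog("Smallest observed step: " + Format(smallestStep, "-0.000000") + " µs")
[/code]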

In my opinion, a function giving such a result is (conceptually) wrong (I am not saying this is a bug or against what the documentation states). The result should be rounded to the actual resolution of the system (OS). I know this could cause some code to break if the resolution ever changes… but that has already happened anyway (as explained in this thread).
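For what it’s worth, a tiny sketch of that idea (RoundToResolution is a hypothetical helper name, and the caller would still have to know the platform’s real resolution, which is exactly the information missing from the docs):

[code]
' Hypothetical helper: snap a Microseconds reading to a given resolution,
' e.g. 1.0 for whole microseconds or 1000.0 for millisecond resolution.
Function RoundToResolution(valueMicros As Double, resolutionMicros As Double) As Double
  Return Round(valueMicros / resolutionMicros) * resolutionMicros
End Function
[/code]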

In any case, maybe a note in the documentation explaining the current behavior for each platform could be a good idea.

Julen

On OS X this is what we use as the underlying function:
https://developer.apple.com/library/mac/qa/qa1398/_index.html
Its resolution is nanoseconds - at least that’s what they claim, so that’s what we use.
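For anyone curious, here is a hedged sketch of roughly what the QA1398 approach looks like when called directly from Xojo via declares on OS X - this is only an illustration, not the framework’s actual code, and the variable names are mine:

[code]
' Illustration only (OS X): convert mach_absolute_time ticks to microseconds.
Declare Function mach_absolute_time Lib "System" () As UInt64
Declare Function mach_timebase_info Lib "System" (info As Ptr) As Int32

Dim mb As New MemoryBlock(8) ' mach_timebase_info_data_t: numer and denom, two UInt32s
Call mach_timebase_info(mb)
Dim numer As UInt32 = mb.UInt32Value(0)
Dim denom As UInt32 = mb.UInt32Value(4)

' ticks -> nanoseconds -> microseconds; the divide by 1000 is where
' the fractional microseconds come from.
Dim nanos As Double = mach_absolute_time() * (numer / denom)
System.DebugLog("Microseconds with fraction: " + Str(nanos / 1000.0))
[/code]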

Some do, some don’t; it doesn’t matter. The Microseconds function returns a Double value in microseconds, without any promises as to its resolution. As Andre Kuiper pointed out, on some operating systems the resolution used to be milliseconds rather than microseconds, and now under OS X it can actually be nanoseconds, but that isn’t in the documentation and probably shouldn’t be relied on.

The documentation should be clear about the behavior of that function on each supported platform, I think. I checked the corresponding page and, based on it, I wouldn’t expect different behavior depending on the OS, although admittedly the contrary is not stated either.

What’s wrong with
“Returns the number of microseconds (1,000,000th of a second) that have passed since the user’s computer was started.”
There seems to be an assumption that this means WHOLE microseconds only.
Not true - it’s the number of microseconds, and that number may or may not be accurate to some finer resolution on each platform.

If you want only whole microseconds then strip the fractional portion off on all platforms and safely ignore it.
Assign it to a UInt64 and you’re fine, and can ignore the fact that there might have been some useful fractional portion.
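Something along those lines, as a small sketch (the variable names are just for illustration):

[code]
' Keep only whole microseconds; behaves the same on every platform,
' whether or not the value carries a fractional part.
Dim raw As Double = Microseconds
Dim wholeMicros As UInt64 = Floor(raw) ' explicitly drop the fraction

System.DebugLog("Raw value:          " + Str(raw))
System.DebugLog("Whole microseconds: " + Str(wholeMicros))
[/code]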