[quote=124888:@Norman Palardy]Nothing anywhere says it "returns (whole) microseconds"
I went & had a peek in the framework code & we do convert from nanoseconds to microseconds - hence fractional microseconds is correct[/quote]
Now we know the reason for the mystery: they are microseconds with a resolution of one nanosecond.
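Just to illustrate (this is not the actual framework code, and nanoTicks is a made-up stand-in for whatever nanosecond counter the framework reads), the conversion Norman describes produces exactly that kind of value:
[code]
// Illustration only: a hypothetical nanosecond tick count converted to microseconds.
Dim nanoTicks As UInt64 = 1234567           // e.g. 1,234,567 ns since some reference point
Dim micros As Double = nanoTicks / 1000.0   // 1234.567: microseconds with a fractional part
[/code]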
On the contrary. It is absolutely necessary for the Date function
I don't follow. How would a fraction of a microsecond make a difference in a Date? My point was that the kind of applications that require microsecond precision probably aren't practical to create using Xojo. The framework is just too bloaty. That may change with LLVM, but I still don't see why a microsecond with no fraction vs. a microsecond with a fractional component makes any appreciable difference to any Xojo program.
IMO this difference is caused by an evolutionary development from early on:
From the release notes:
5.5.4fc2 [Fix] [Lnx]
Microseconds: Now returns precision in microseconds instead of whatever incorrect precision it was before. However, it does not return the amount of time that has passed since the user's computer was started.
2007r2 [Fix] [Win]
Microseconds: Microseconds now has microsecond resolution on Windows (instead of millisecond resolution). This also fixes possible rollover issues due to 32-bit integer limitations.
Microseconds is convenient for measuring execution time, and having that precision on Windows was indeed necessary. Given faster processors, it made sense for that evolution to continue down to the nanosecond.
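For example, the usual timing pattern, just as a sketch with the measured code left as a placeholder:
[code]
Dim startTime As Double = Microseconds

// ... the code being measured ...

Dim elapsedMicros As Double = Microseconds - startTime
System.DebugLog("Elapsed: " + Str(elapsedMicros) + " microseconds")
[/code]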
Now, optimising code in nanoseconds looks like cutting grass on a golf course with a pair of scissors...
Worse, at that level of precision, the length of connections makes a difference. The speed of light is about 30 centimeters per nanosecond: roughly 12 inches (11.81 in). For electricity it is more complex and slower, as it depends on the frequency of the signal and the conductivity: https://en.wikipedia.org/wiki/Speed_of_electricity
There is nothing mysterious about it. The choice of unit has nothing to do with precision. I can state a given length as 1234 mm, 123.4 cm, or 1.234 m: three different units, but still it is the same length expressed with the same precision. Why a time stated in (fractions of) microseconds couldn't be precise to the nanosecond is beyond me.
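The same holds for a time value; a sketch with made-up numbers:
[code]
// The same instant expressed in three units; the precision does not change.
Dim us As Double = 1234.567        // made-up reading in microseconds
Dim ns As Double = us * 1000.0     // 1234567 nanoseconds
Dim ms As Double = us / 1000.0     // 1.234567 milliseconds
[/code]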
Firstly, who the heck cares about nanosecond resolution!
Secondly, does a 2-3 GHz processor really provide nanosecond resolution? I don't know what factors the timing resolution of the system (OS + hardware) depends on, but nanosecond resolution would imply that the system is able to provide a fresh time value for every 2-3 executed instructions... maybe, I don't know.
In my opinion, a function giving such a result is (conceptually) wrong (I am not saying this is a bug or against what the documentation states). The result should be rounded to the actual resolution of the system (OS). I know this could cause some code to break if the resolution ever changes... but it has already happened anyway (as explained in this thread).
In any case, maybe a note in the documentation explaining the current behavior for each platform could be a good idea.
Some do, some don't, it doesn't matter. The Microseconds function returns a Double value in microseconds, without any promises as to its resolution. As Andre Kuiper pointed out, for some operating systems the resolution used to be milliseconds rather than microseconds and now under OS X it can actually be nanoseconds, but that doesn't enter the documentation and probably shouldn't be relied on.
The documentation should be clear about the behavior of that function on each supported platform, I think. I checked the corresponding page and, based on it, I wouldn't expect different behavior depending on the OS, although admittedly the contrary is not stated either.
What's wrong with "Returns the number of microseconds (1,000,000th of a second) that have passed since the user's computer was started"?
There seems to be some assumption that this means WHOLE microseconds only.
Not true - it's the number of microseconds - and that may or may not be accurate to some finer level on each platform.
If you want only whole microseconds then strip the fractional portion off on all platforms & safely ignore it.
Assign it to a UInt64 and you're fine and can ignore the fact there might have been some useful fractional portion.
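As a sketch of that suggestion (Floor is the explicit way to drop the fraction; the UInt64 assignment is the shortcut mentioned above):
[code]
// Explicitly drop the fractional part of the reading.
Dim wholeMicros As Double = Floor(Microseconds)

// Or simply assign to a UInt64, as suggested above, and ignore the fraction.
Dim us As UInt64 = Microseconds
[/code]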