I am wondering whether the Xojo equivalents for shifting (in C: >> / <<) are adequate in speed, or are they just slow wrapper functions?
Don’t ask, test.
It would be nice to know if those are inlined.
Bitwise.ShiftLeft/Bitwise.ShiftRight are function calls and have the same overhead as any function call. However, you can get the same effect by multiplying or dividing by 2^bitsToShift, and that should be faster. Just be sure to use integer division with “\” instead of “/”.
Pre-populating an array with the powers of 2 should be faster still, but as Maximilian said, best to test.
There is a feedback request to create an operator for bit shifts.
I believe all modern processors have a machine instruction for a shift. It would be nice if the LLVM implementation included this capability.
A comment that one of the engineers made to me once: There is no reason that Bitwise.ShiftLeft/ShiftRight can’t be treated like an operator by the compiler. In other words, an “ordinary” operator may not be required, and existing code would immediately benefit.
This was presented as a musing, not a plan, just so nobody thinks this is in the works. I have no idea if it is.
An operator isn’t needed for performance because an optimization pass could be written that recognizes calls to the Bitwise module and lowers them into the appropriate code. The only possible complication is the fact that Bitwise functions work in terms of 64-bit integers, which can involve sign/zero extension and truncation. The optimized code would have to behave the same.
What a thorough answer…
Well, I was trying to replace the abs() function, but I couldn’t find big differences.
I did not use Bitwise.ShiftXX, as it wouldn’t make sense here: abs() is a function call, Bitwise is a function call, and it made
the code less readable (for the abs() replacement, so to speak). But interesting discussion.
If Xojo didn’t stick so much to its BASIC syntax, it would make sense to add << and >> operators, but that would
open the door to other C syntax like x++, x--, and so on, I guess.
This might not really fit into this discussion, but not long ago I implemented a bucket sort to sort arrays of objects by their properties. I wrote many different versions of it, some parts with recursion, then replacing recursion with a stack. I played with separate compare functions for each data type, then inlined those compare functions to see if that was faster. The interesting thing was that removing the function calls and inlining the code was almost not measurable (and this means removing tens of thousands of function calls when sorting arrays of several tens of thousands of objects).
I’ve seen that Bitwise.XOR and plain old XOR are not the same speed, so this is definitely something you’d want to test.
Also, remember that shifts which don’t overflow are equivalent to multiplication (for left shifts) or integer division (for right shifts) by a power of 2, e.g.
X << 1 = X * 2
X << 2 = X * 4
X << 3 = X * 8
...
X >> 1 = X \ 2
X >> 2 = X \ 4
X >> 3 = X \ 8
...