Compiler optimization & overflow checking

Here’s quite an interesting paper by Felix von Leitner, aka Fefe, a German member of the Chaos Computer Club and political blogger.
That said, most of the assembly code in it is Greek to me. The interesting point he stresses is that you shouldn’t worry about overflow checks making your code slower. I don’t know how far the Xojo compiler is with this, but using LLVM for iOS will surely pave the way for general use of LLVM in the near future.

In general, RAM access is still much slower than internal computation, and modern CPUs (supported by optimizing compilers) execute instructions out of order, meaning there’s a lot of idle time while the CPU waits for another piece of RAM to load. This idle time is, as he demonstrates, more than enough to perform addition overflow checks without any slowdown.

For multiplication overflow checks there is still some overhead, but some compilers offer intrinsics like __builtin_mul_overflow which can be checked without performance loss. Nice to know, I think.

The paper:

and his blog entry (in German):

Unless you disable bounds checking with #Pragma, Xojo does more checks than needed.
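For reference, the relevant directives look like this (placed at the top of a method; they apply only to that method):

```xojo
Sub HotMethod() ' illustrative name
  #Pragma DisableBoundsChecking   ' skip array bounds checks in this method
  #Pragma NilObjectChecking False ' skip nil object checks in this method
  ' ... performance-critical code ...
End Sub
```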

In particular, I remember one engineer once noticing that about 30% of the IDE binary is nil object checking.

… and the Xojo compiler is not really highly optimizing, as far as I know.
Probably rather something to remember for when the compiler transition has taken place – though no doubt disabling checks today gives a nice speed boost to many methods.

I personally wouldn’t disable bounds checking or nil object checking unless you’ve profiled your application, proven that the checks are a hot spot, and are prepared to do the bounds checks yourself.
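A sketch of that pattern in Xojo (method and variable names are my own illustration): disable the runtime check only inside the proven hot spot, and make the indices safe by construction instead.

```xojo
Function SumValues(data() As Integer) As Integer
  ' Only after profiling showed the runtime checks dominate here:
  #Pragma DisableBoundsChecking
  Dim total As Integer
  ' The loop bound itself guarantees every index is valid,
  ' so no per-access runtime check is needed.
  For i As Integer = 0 To UBound(data)
    total = total + data(i)
  Next
  Return total
End Function
```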

As for integer overflow, the Xojo language does not throw exceptions if overflow or underflow occurs. I wish it did, but it would be a massive change to the language and would break an unknown number of projects.

Thanks, Joe. Yes, I have found that in most cases optimizations work very well with bounds checking enabled. And I feel safer with the checks on.