These two nightly tester runs:

http://llvm.org/nightlytest/test.php?machine=36&night=1063
http://llvm.org/nightlytest/test.php?machine=23&night=1064

capture performance changes that were due to running llvm-gcc at -O0 instead of -O2. The basic impact is that we're now running 'gccas' once instead of twice on the input programs (far more realistic).

Some of these programs got slower, e.g. fldry. This turns out to be a phase-ordering issue. If these can be tracked down and improved, that would be good.

More important, however, are the programs that sped up when the optimizer run was disabled. This implies that running the optimizer twice on those programs actually *slowed them down*, which is clearly bad.

-Chris
This is the patch in question:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060925/038122.html
I'll give this a crack after I get back from vacation on Tuesday.
I'm having a problem reproducing this. Building with -O0 and -O2, this is what I get:

$ time ./fibo.O0.llc
701408733

real    0m11.558s
user    0m10.503s
sys     0m0.067s

$ time ./fibo.O2.llc
701408733

real    0m10.617s
user    0m9.828s
sys     0m0.060s

This is on a PPC system. I get similar results on an x86 system.
This doesn't appear to be happening anymore. The fibo execution time with -O0 and with -O2 is pretty much the same:

[Gaz:Shootout-C++] time ./Output/fibo.llc    # With -O2
701408733

real    0m10.129s
user    0m9.935s
sys     0m0.034s

[Gaz:Shootout-C++] time ./fibo.O0.llc        # With -O0
701408733

real    0m10.125s
user    0m9.935s
sys     0m0.034s

[Gaz:Shootout-C++] time ./Output/fibo.native
701408733

real    0m10.752s
user    0m10.339s
sys     0m0.041s
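For anyone else trying to reproduce the comparison, a small timing harness like the following can help smooth out scheduler noise by taking the best of several runs. This is just a sketch: the best_time name is made up here, and the fibo.O0.llc / fibo.O2.llc paths from the report are assumptions about your local build layout.

```shell
#!/bin/bash
# Run a binary several times and report the best wall-clock time in
# milliseconds; the minimum of a few runs is less noisy than one run.
best_time() {
    local bin=$1 runs=${2:-3} best=
    local start end elapsed
    for _ in $(seq "$runs"); do
        start=$(date +%s%N)          # nanoseconds (GNU date)
        "$bin" > /dev/null
        end=$(date +%s%N)
        elapsed=$(( (end - start) / 1000000 ))
        if [ -z "$best" ] || [ "$elapsed" -lt "$best" ]; then
            best=$elapsed
        fi
    done
    echo "$bin: ${best}ms (best of $runs)"
}

# Example usage against the two builds discussed above (paths assumed):
# best_time ./fibo.O0.llc
# best_time ./fibo.O2.llc
```

Note that `date +%s%N` relies on GNU date; on other systems, `time` output as shown in the thread is the portable fallback.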