Example:

    void f(int v) {
      switch (v) {
      case 10: a(); break;
      case 20: b(); break;
      case 30: c(); break;
      case 40: d(); break;
      case 50: e(); break;
      case 60: g(); break;
      default: h();
      }
    }

In lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp, this correctly calls handleBTSplitSwitchCase to create a binary decision tree for the switch statement. However, the heuristic for selecting the pivot element tries to maximize the sum of the densities of the left and right halves. That makes sense as a way to split at "holes", which leads to increased use of jump tables later. But when the density is uniformly distributed, as in this example, the heuristic always picks a single element on the left or on the right, because the density of a single element is 1 and thus far greater than any other density. As a result, the "binary tree" degenerates into a linear check of all values. To solve this, we could either
a) compute beforehand that no subrange will qualify for a jump table, or
b) precompute the ideal ranges for jump tables. This might even find jump tables in cases where the current heuristic misses them.
Your example can easily be transformed into a dense jump table by multiplying the value by the inverse of 15 (0xeeeeeeef) and rotating right by 2; in that case the first table entry is unused. In early 2014 I had prepared a patch handling this special case and also applying perfect hashing, but it was mostly ignored. Sigh. See also the work of eeckstein, http://llvm.org/viewvc/llvm-project?view=revision&revision=222121.
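For illustration only (not the patch itself), the general multiply-by-modular-inverse trick looks like this. The sketch maps the exact multiples of 10 from the original example onto a dense 1..6 range by multiplying with the inverse of the odd factor 5 (0xcccccccd mod 2^32) and rotating right by 1 to undo the factor 2; the constants quoted in the comment above (inverse of 15, rotate by 2) may assume a different stride. divExactBy10 is a hypothetical helper name.

```cpp
#include <cstdint>

// Divide an exact multiple of 10 by 10 without a division instruction:
// multiply by the modular inverse of 5 (0xcccccccd satisfies
// 5 * 0xcccccccd == 1 mod 2^32), then rotate right by 1 to undo the
// remaining factor of 2.
uint32_t divExactBy10(uint32_t X) {
  uint32_t T = X * 0xcccccccdu;     // multiply by inverse of 5 (mod 2^32)
  return (T >> 1) | (T << 31);      // rotate right by 1
}
```

A handy property for switch lowering: values that are not multiples of 10 map far outside the small dense range (e.g. odd inputs end up with the top bit set after the rotate), so the usual jump-table bounds check rejects them and falls through to the default case.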
(In reply to comment #1)
> Your example can easily be transformed to a dense jump table by modifying
> the value by a multiplication with the inverse of 15 (0xeeeeeeef) and
> rotating right by 2. In this case the first entry is not used.
> Early 2014 I had prepared a patch doing this special case but also applied
> perfect hashing early 2014 but was mostly ignored. Sigh.

IIRC, the problem was that the patch was huge, not that anyone was against the idea of using perfect hashing for switches.

As you say, Daniel's specific example can be handled by dividing the case values by 10, which would be a cool thing to be able to do, but it doesn't fix the general issue of selecting a bad pivot.

Daniel, I like the sound of the b) approach, i.e. chunking together the cases that can be used in jump tables first, and then building the binary tree afterwards.
(In reply to comment #2)
> (In reply to comment #1)
> Daniel, I like the sound of the b) approach, i.e. chunking together cases
> that can be used in jump tables first, and then doing the binary tree after.

+1. We'd just estimate the density required for a jump table and cluster the case values into intervals that meet it.
This was committed in r235560, with follow-ups in r235608 and r235729. See the code review at http://reviews.llvm.org/D8649 for numbers and such.
*** Bug 18347 has been marked as a duplicate of this bug. ***