With WAH compression, FastBit can answer one-dimensional range queries with optimal computational complexity. More importantly, in head-to-head comparisons, FastBit not only answers one-dimensional queries faster than similar schemes, but also answers multi-dimensional queries much faster.
Using bitmap indices, range queries can be answered with bitwise logical operations. Since bitwise logical operations are very well supported by computer hardware, queries involving a relatively small number of bitmaps can be answered efficiently even with uncompressed bitmaps. In most scientific applications, however, the number of bitmaps in a bitmap index is large, say more than 1000, which makes the uncompressed bitmaps very large. Our compression scheme addresses this size problem by reducing the index to a size below that of a typical B-tree implementation. In addition, because our compression scheme supports fast bitwise logical operations directly on the compressed bitmaps, the time to answer a one-dimensional range query is a linear function of the number of hits. This is theoretically optimal: any system answering a query must at least list all the hits, and therefore has the same theoretical time complexity as our approach. This property is remarkable, as only a few of the most efficient indexing schemes, including the B+-tree and the B*-tree, share it. Two further factors make FastBit even more remarkable.
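To make the idea concrete, here is a minimal sketch (not FastBit's actual implementation, and without WAH compression) of how a range query maps onto bitwise OR operations. It builds one bitmap per distinct value, using a Python integer as an uncompressed bit vector; all function names are illustrative.

```python
# Illustrative sketch of a bitmap index: one bitmap per distinct value,
# with a range query answered by OR-ing the bitmaps whose values fall
# in the query range. Python ints serve as uncompressed bit vectors.

def build_bitmap_index(values):
    """Map each distinct value to a bitmap; bit r is set if row r holds the value."""
    index = {}
    for row, v in enumerate(values):
        index[v] = index.get(v, 0) | (1 << row)
    return index

def range_query(index, lo, hi):
    """Bitmap of rows whose value lies in [lo, hi]."""
    result = 0
    for v, bitmap in index.items():
        if lo <= v <= hi:
            result |= bitmap        # one bitwise OR per bitmap in the range
    return result

def rows(bitmap):
    """Positions of the set bits, i.e. the row numbers of the hits."""
    out, pos = [], 0
    while bitmap:
        if bitmap & 1:
            out.append(pos)
        bitmap >>= 1
        pos += 1
    return out

data = [3, 1, 4, 1, 5, 9, 2, 6]
idx = build_bitmap_index(data)
print(rows(range_query(idx, 2, 5)))  # rows holding a value in [2, 5]
```

The sketch also shows why a large number of distinct values is a problem for uncompressed bitmaps: each distinct value costs a full-length bit vector, which is what WAH compression in FastBit avoids while still permitting the bitwise operations above.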
Early on, we had only empirical evidence that FastBit is faster than other compressed bitmap indices (see CIKM 2001 and SSDBM 2002). Based on a hint from Professor Ding-zhu Du, we were able to conduct a thorough analysis of FastBit (see VLDB 2004 and LBNL-49626, to appear in ACM Transactions on Database Systems; here is a preview). Overall, compressed bitmap indices are as effective for on-line analytical processing as B-trees are for transactional processing.