org.apache.lucene.codecs.lucene50

Class Lucene50StoredFieldsFormat



  • public final class Lucene50StoredFieldsFormat
    extends StoredFieldsFormat
    Lucene 5.0 stored fields format.

    Principle

    This StoredFieldsFormat compresses blocks of documents in order to improve the compression ratio compared to document-level compression. It uses the LZ4 compression algorithm by default in 16KB blocks, which is fast to compress and very fast to decompress. Although the default compression mode (BEST_SPEED) focuses more on speed than on compression ratio, it should provide good compression ratios for redundant inputs (such as log files, HTML or plain text). For higher compression, you can choose BEST_COMPRESSION, which uses the DEFLATE algorithm with 60KB blocks for a better ratio at the expense of slower compression and decompression. These two options can be configured like this:

       // the default: for high performance
       indexWriterConfig.setCodec(new Lucene54Codec(Mode.BEST_SPEED));
       // instead, for higher compression (but slower):
       // indexWriterConfig.setCodec(new Lucene54Codec(Mode.BEST_COMPRESSION));
     
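    Lucene's LZ4 codec is internal, but the speed-versus-ratio tradeoff itself can be demonstrated with the JDK's java.util.zip.Deflater, whose BEST_SPEED and BEST_COMPRESSION levels mirror the two modes above. A minimal sketch (the class and helper names are illustrative, not Lucene API):

```java
import java.util.zip.Deflater;

public class CompressionTradeoff {
    // Compress with java.util.zip.Deflater at the given level and return
    // the compressed size in bytes.
    static int compressedSize(byte[] input, int level) {
        Deflater deflater = new Deflater(level);
        deflater.setInput(input);
        deflater.finish();
        byte[] out = new byte[input.length * 2 + 64];
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(out);
        }
        deflater.end();
        return total;
    }

    public static void main(String[] args) {
        // Redundant input, similar to log files or HTML.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append("GET /index.html HTTP/1.1 200 1024\n");
        }
        byte[] docs = sb.toString().getBytes();

        int fast = compressedSize(docs, Deflater.BEST_SPEED);        // level 1
        int best = compressedSize(docs, Deflater.BEST_COMPRESSION);  // level 9
        System.out.println("original: " + docs.length
            + ", BEST_SPEED: " + fast + ", BEST_COMPRESSION: " + best);
    }
}
```

    On redundant data both levels shrink the input dramatically; BEST_COMPRESSION is usually smaller still, at the cost of more CPU time, which is exactly the tradeoff the two stored-fields modes expose.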

    File formats

    Stored fields are represented by two files:

    1. A fields data file (extension .fdt). This file stores a compact representation of documents in compressed blocks of 16KB or more. When writing a segment, documents are appended to an in-memory byte[] buffer. When its size reaches 16KB or more, some metadata about the documents is flushed to disk, immediately followed by a compressed representation of the buffer using the LZ4 compression format.

      Here is a more detailed description of the field data file format:

      • FieldData (.fdt) --> <Header>, PackedIntsVersion, <Chunk>^ChunkCount, ChunkCount, DirtyChunkCount, Footer
      • Header --> IndexHeader
      • PackedIntsVersion --> PackedInts.VERSION_CURRENT as a VInt
      • ChunkCount is not known in advance; it is the number of chunks necessary to store all documents of the segment
      • Chunk --> DocBase, ChunkDocs, DocFieldCounts, DocLengths, <CompressedDocs>
      • DocBase --> the ID of the first document of the chunk as a VInt
      • ChunkDocs --> the number of documents in the chunk as a VInt
      • DocFieldCounts --> the number of stored fields of every document in the chunk, encoded as follows:
        • if chunkDocs=1, the unique value is encoded as a VInt
        • else read a VInt (let's call it bitsRequired)
          • if bitsRequired is 0 then all values are equal, and the common value is the following VInt
          • else bitsRequired is the number of bits required to store any value, and values are stored in a packed array where every value is stored on exactly bitsRequired bits
      • DocLengths --> the lengths of all documents in the chunk, encoded with the same method as DocFieldCounts
      • CompressedDocs --> a compressed representation of <Docs> using the LZ4 compression format
      • Docs --> <Doc>^ChunkDocs
      • Doc --> <FieldNumAndType, Value>^DocFieldCount
      • FieldNumAndType --> a VLong, whose 3 least-significant bits are Type and whose remaining bits are FieldNum
      • Type -->
        • 0: Value is String
        • 1: Value is BinaryValue
        • 2: Value is Int
        • 3: Value is Float
        • 4: Value is Long
        • 5: Value is Double
        • 6, 7: unused
      • FieldNum --> an ID of the field
      • Value --> String | BinaryValue | Int | Float | Long | Double depending on Type
      • BinaryValue --> ValueLength <Byte>^ValueLength
      • ChunkCount --> the number of chunks in this file
      • DirtyChunkCount --> the number of prematurely flushed chunks in this file
      • Footer --> CodecFooter
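      The three-way encoding of DocFieldCounts (and DocLengths) can be sketched as follows; writeVInt and the byte-aligned bit packer below are simplified stand-ins for Lucene's DataOutput and PackedInts, not the actual classes:

```java
import java.io.ByteArrayOutputStream;

public class DocFieldCountsEncoder {
    // Simplified VInt: 7 bits per byte, high bit set on all but the last
    // byte (the same wire format as Lucene's DataOutput.writeVInt).
    static void writeVInt(ByteArrayOutputStream out, int value) {
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
    }

    // Encode the stored-field counts of every document in a chunk.
    static byte[] encode(int[] counts) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        if (counts.length == 1) {
            // chunkDocs == 1: the unique value is written as a VInt
            writeVInt(out, counts[0]);
            return out.toByteArray();
        }
        boolean allEqual = true;
        int max = 0;
        for (int c : counts) {
            if (c != counts[0]) allEqual = false;
            max = Math.max(max, c);
        }
        if (allEqual) {
            // bitsRequired == 0 signals that all values are equal;
            // the common value follows as a VInt
            writeVInt(out, 0);
            writeVInt(out, counts[0]);
        } else {
            // otherwise write bitsRequired, then every value on exactly
            // bitsRequired bits (a byte-aligned sketch of PackedInts)
            int bitsRequired = 32 - Integer.numberOfLeadingZeros(max);
            writeVInt(out, bitsRequired);
            long buffer = 0;
            int bitsInBuffer = 0;
            for (int c : counts) {
                buffer = (buffer << bitsRequired) | c;
                bitsInBuffer += bitsRequired;
                while (bitsInBuffer >= 8) {
                    out.write((int) (buffer >>> (bitsInBuffer - 8)) & 0xFF);
                    bitsInBuffer -= 8;
                }
            }
            if (bitsInBuffer > 0) { // pad the last byte with zeros
                out.write((int) (buffer << (8 - bitsInBuffer)) & 0xFF);
            }
        }
        return out.toByteArray();
    }
}
```

      A chunk where every document stores the same number of fields thus costs only two VInts, while heterogeneous chunks pay one VInt plus bitsRequired bits per document.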

      Notes

      • If documents are larger than 16KB then chunks will likely contain only one document. However, documents can never spread across several chunks (all fields of a single document are in the same chunk).
      • When at least one document in a chunk is large enough that the chunk exceeds 32KB, the chunk is actually compressed into several LZ4 blocks of 16KB. This allows StoredFieldVisitors that are only interested in the first fields of a document to avoid decompressing the whole document: for a 10MB document, only 16KB needs to be decompressed.
      • Given that the original lengths are written in the metadata of the chunk, the decompressor can leverage this information to stop decoding as soon as enough data has been decompressed.
      • In case documents are incompressible, CompressedDocs will be less than 0.5% larger than Docs.
    2. A fields index file (extension .fdx). This file stores an index into the fields data file, making it possible to locate the chunk that contains a given document without scanning the data file.
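    The FieldNumAndType layout described above is just a shift and a mask over a VLong's raw value. A minimal sketch (the pack/unpack helpers are illustrative, not Lucene API):

```java
public class FieldNumAndType {
    // Type constants from the table above.
    static final int STRING = 0, BINARY = 1, INT = 2,
                     FLOAT = 3, LONG = 4, DOUBLE = 5;

    // The 3 least-significant bits carry the type;
    // the remaining bits carry the field number.
    static long pack(long fieldNum, int type) {
        return (fieldNum << 3) | type;
    }

    static long fieldNum(long packed) {
        return packed >>> 3;
    }

    static int type(long packed) {
        return (int) (packed & 0x7);
    }
}
```

    For example, field number 42 holding a Double packs to (42 << 3) | 5 = 341, and unpacking recovers both components.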

    Known limitations

    This StoredFieldsFormat does not support individual documents larger than (2^31 - 2^14) bytes, i.e. 2,147,467,264 bytes, slightly less than 2 GB.