Java Code Examples for org.apache.flink.util.MathUtils#log2strict()
The following examples show how to use org.apache.flink.util.MathUtils#log2strict(), drawn from the flink and Flink-CEPplus projects.
log2strict returns the base-2 logarithm of its argument and throws an ArithmeticException when the argument is zero or not a power of two, which is why the call sites below pair it with power-of-two checks and bit-mask arithmetic.
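Before the examples, a minimal sketch of the contract they all rely on: log2strict returns the exact base-2 logarithm of a positive power of two and fails fast on anything else. The demo class below reimplements that behavior for illustration only (the class name and exception message are ours, not Flink's); real code should call MathUtils.log2strict directly.

public final class Log2StrictDemo {

    // Illustrative reimplementation of the log2strict contract; not the Flink source.
    static int log2strict(int value) {
        if (value == 0 || (value & (value - 1)) != 0) {
            // MathUtils.log2strict likewise rejects zero and non-powers of two.
            throw new ArithmeticException("Not a power of two: " + value);
        }
        // For a power of two, the exponent is the index of the single set bit.
        return 31 - Integer.numberOfLeadingZeros(value);
    }

    public static void main(String[] args) {
        System.out.println(log2strict(32 * 1024)); // 15 -- a typical 32 KB page size
        System.out.println(log2strict(1));         // 0
        // log2strict(48) would throw ArithmeticException
    }
}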
Example 1
Source File: InPlaceMutableHashTable.java From flink with Apache License 2.0
private void allocateBucketSegments(int numBucketSegments) {
    if (numBucketSegments < 1) {
        throw new RuntimeException("Bug in InPlaceMutableHashTable");
    }

    bucketSegments = new MemorySegment[numBucketSegments];
    for (int i = 0; i < bucketSegments.length; i++) {
        bucketSegments[i] = forcedAllocateSegment();
        // Init all pointers in all buckets to END_OF_LIST
        for (int j = 0; j < numBucketsPerSegment; j++) {
            bucketSegments[i].putLong(j << bucketSizeBits, END_OF_LIST);
        }
    }

    numBuckets = numBucketSegments * numBucketsPerSegment;
    numBucketsMask = (1 << MathUtils.log2strict(numBuckets)) - 1;
}
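The numBucketsMask computed on the last line is the classic trick for replacing a modulo with a bitwise AND: since numBuckets is a power of two, (1 << log2strict(numBuckets)) - 1 equals numBuckets - 1, and hash & mask selects a bucket exactly like hash % numBuckets does for non-negative hashes. A small self-contained check of that identity (variable names are illustrative):

import org.apache.flink.util.MathUtils;

public class BucketMaskDemo {
    public static void main(String[] args) {
        int numBuckets = 1024; // must be a power of two, or log2strict throws
        int mask = (1 << MathUtils.log2strict(numBuckets)) - 1; // 1023 == numBuckets - 1
        int hash = 123_456_789;
        // For non-negative hashes, AND with the mask equals modulo by the bucket count.
        System.out.println((hash & mask) == (hash % numBuckets)); // true
    }
}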
Example 2
Source File: BinaryHashPartition.java From flink with Apache License 2.0
/**
 * Creates a new partition, initially in memory, with one buffer for the build side. The
 * partition is initialized to expect record insertions for the build side.
 *
 * @param partitionNumber The number of the partition.
 * @param recursionLevel The recursion level - zero for partitions from the initial build,
 *                       <i>n + 1</i> for partitions that are created from spilled partition
 *                       with recursion level <i>n</i>.
 * @param initialBuffer The initial buffer for this partition.
 */
BinaryHashPartition(BinaryHashBucketArea bucketArea, BinaryRowSerializer buildSideAccessors,
        BinaryRowSerializer probeSideAccessors, int partitionNumber, int recursionLevel,
        MemorySegment initialBuffer, MemorySegmentPool memPool, int segmentSize,
        boolean compressionEnable, BlockCompressionFactory compressionCodecFactory,
        int compressionBlockSize) {
    super(0);
    this.bucketArea = bucketArea;
    this.buildSideSerializer = buildSideAccessors;
    this.probeSideSerializer = probeSideAccessors;
    this.partitionNumber = partitionNumber;
    this.recursionLevel = recursionLevel;
    this.memorySegmentSize = segmentSize;
    this.segmentSizeBits = MathUtils.log2strict(segmentSize);
    this.compressionEnable = compressionEnable;
    this.compressionCodecFactory = compressionCodecFactory;
    this.compressionBlockSize = compressionBlockSize;
    this.buildSideWriteBuffer = new BuildSideBuffer(initialBuffer, memPool);
    this.memPool = memPool;
}
Example 3
Source File: BinaryHashPartition.java From flink with Apache License 2.0
/**
 * Constructor creating a partition from a spilled partition file that could be read in one
 * because it was known to completely fit into memory.
 *
 * @param buildSideAccessors The data type accessors for the build side data-type.
 * @param probeSideAccessors The data type accessors for the probe side data-type.
 * @param partitionNumber The number of the partition.
 * @param recursionLevel The recursion level of the partition.
 * @param buffers The memory segments holding the records.
 * @param buildSideRecordCounter The number of records in the buffers.
 * @param segmentSize The size of the memory segments.
 */
BinaryHashPartition(BinaryHashBucketArea area, BinaryRowSerializer buildSideAccessors,
        BinaryRowSerializer probeSideAccessors, int partitionNumber, int recursionLevel,
        List<MemorySegment> buffers, long buildSideRecordCounter, int segmentSize,
        int lastSegmentLimit) {
    super(0);
    this.buildSideSerializer = buildSideAccessors;
    this.probeSideSerializer = probeSideAccessors;
    this.partitionNumber = partitionNumber;
    this.recursionLevel = recursionLevel;
    this.memorySegmentSize = segmentSize;
    this.segmentSizeBits = MathUtils.log2strict(segmentSize);
    this.finalBufferLimit = lastSegmentLimit;
    this.partitionBuffers = buffers.toArray(new MemorySegment[buffers.size()]);
    this.buildSideRecordCounter = buildSideRecordCounter;
    this.bucketArea = area;
}
Example 4
Source File: HashPartition.java From flink with Apache License 2.0
/**
 * Creates a new partition, initially in memory, with one buffer for the build side. The partition is
 * initialized to expect record insertions for the build side.
 *
 * @param partitionNumber The number of the partition.
 * @param recursionLevel The recursion level - zero for partitions from the initial build, <i>n + 1</i> for
 *                       partitions that are created from spilled partition with recursion level <i>n</i>.
 * @param initialBuffer The initial buffer for this partition.
 */
HashPartition(TypeSerializer<BT> buildSideAccessors, TypeSerializer<PT> probeSideAccessors,
        int partitionNumber, int recursionLevel, MemorySegment initialBuffer,
        MemorySegmentSource memSource, int segmentSize) {
    super(0);
    this.buildSideSerializer = buildSideAccessors;
    this.probeSideSerializer = probeSideAccessors;
    this.partitionNumber = partitionNumber;
    this.recursionLevel = recursionLevel;
    this.memorySegmentSize = segmentSize;
    this.segmentSizeBits = MathUtils.log2strict(segmentSize);
    this.overflowSegments = new MemorySegment[2];
    this.numOverflowSegments = 0;
    this.nextOverflowBucket = 0;
    this.buildSideWriteBuffer = new BuildSideBuffer(initialBuffer, memSource);
}
Example 5
Source File: LongHashPartition.java From flink with Apache License 2.0
/**
 * Entrance 3: dense mode for just data search (bucket in LongHybridHashTable of dense mode).
 */
LongHashPartition(
        LongHybridHashTable longTable,
        BinaryRowDataSerializer buildSideSerializer,
        MemorySegment[] partitionBuffers) {
    super(0);
    this.longTable = longTable;
    this.buildSideSerializer = buildSideSerializer;
    this.buildReuseRow = buildSideSerializer.createInstance();
    this.segmentSize = longTable.pageSize();
    Preconditions.checkArgument(segmentSize % 16 == 0);
    this.partitionBuffers = partitionBuffers;
    this.segmentSizeBits = MathUtils.log2strict(segmentSize);
    this.segmentSizeMask = segmentSize - 1;
    this.finalBufferLimit = segmentSize;
    this.iterator = new MatchIterator();
}
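The segmentSizeBits / segmentSizeMask pair initialized here is what lets the partition address a list of fixed-size segments as one flat byte range: shifting a global pointer right by segmentSizeBits yields the segment index, and masking it yields the offset inside that segment. A standalone sketch of that arithmetic (the constants are illustrative):

public class SegmentAddressDemo {
    public static void main(String[] args) {
        int segmentSize = 32 * 1024;     // must be a power of two
        int segmentSizeBits = 15;        // what MathUtils.log2strict(segmentSize) would return
        int segmentSizeMask = segmentSize - 1;

        long pointer = 100_000L;                          // global offset into the segment list
        long segmentIndex = pointer >>> segmentSizeBits;  // 100000 / 32768 = 3
        long offsetInSegment = pointer & segmentSizeMask; // 100000 % 32768 = 1696
        System.out.println(segmentIndex + " / " + offsetInSegment); // 3 / 1696
    }
}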
Example 6
Source File: KeyMap.java From flink with Apache License 2.0
/**
 * Creates a new table with a capacity tailored to the given expected number of elements.
 *
 * @param expectedNumberOfElements The number of elements to tailor the capacity to.
 */
public KeyMap(int expectedNumberOfElements) {
    if (expectedNumberOfElements < 0) {
        throw new IllegalArgumentException("Invalid capacity: " + expectedNumberOfElements);
    }

    // round up to the next power of two
    // guard against too small capacity and integer overflows
    int capacity = Integer.highestOneBit(expectedNumberOfElements) << 1;
    capacity = capacity >= 0 ? Math.max(MIN_CAPACITY, capacity) : MAX_CAPACITY;

    // this also acts as a sanity check
    log2size = MathUtils.log2strict(capacity);
    shift = FULL_BIT_RANGE - log2size;
    table = allocateTable(capacity);
    rehashThreshold = getRehashThreshold(capacity);
}
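The capacity computation above rounds the expected element count up to a power of two precisely so that the later log2strict call cannot fail: Integer.highestOneBit isolates the top set bit, and the left shift doubles it. A quick illustration of the rounding (the MIN_CAPACITY/MAX_CAPACITY guards are omitted here):

public class CapacityRoundingDemo {
    static int roundUpToPowerOfTwo(int expected) {
        // highestOneBit(100) == 64; doubling it gives 128, the next power of two.
        return Integer.highestOneBit(expected) << 1;
    }

    public static void main(String[] args) {
        System.out.println(roundUpToPowerOfTwo(100)); // 128
        System.out.println(roundUpToPowerOfTwo(128)); // 256 -- exact powers of two get doubled too
        // For inputs near Integer.MAX_VALUE the shift overflows to a negative value,
        // which is why KeyMap falls back to MAX_CAPACITY when capacity < 0.
    }
}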
Example 7
Source File: KeyMap.java From Flink-CEPplus with Apache License 2.0
/**
 * Creates a new table with a capacity tailored to the given expected number of elements.
 *
 * @param expectedNumberOfElements The number of elements to tailor the capacity to.
 */
public KeyMap(int expectedNumberOfElements) {
    if (expectedNumberOfElements < 0) {
        throw new IllegalArgumentException("Invalid capacity: " + expectedNumberOfElements);
    }

    // round up to the next power of two
    // guard against too small capacity and integer overflows
    int capacity = Integer.highestOneBit(expectedNumberOfElements) << 1;
    capacity = capacity >= 0 ? Math.max(MIN_CAPACITY, capacity) : MAX_CAPACITY;

    // this also acts as a sanity check
    log2size = MathUtils.log2strict(capacity);
    shift = FULL_BIT_RANGE - log2size;
    table = allocateTable(capacity);
    rehashThreshold = getRehashThreshold(capacity);
}
Example 8
Source File: InPlaceMutableHashTable.java From Flink-CEPplus with Apache License 2.0
public RecordArea(int segmentSize) {
    int segmentSizeBits = MathUtils.log2strict(segmentSize);

    if ((segmentSize & (segmentSize - 1)) != 0) {
        throw new IllegalArgumentException("Segment size must be a power of 2!");
    }

    this.segmentSizeBits = segmentSizeBits;
    this.segmentSizeMask = segmentSize - 1;

    outView = new RecordAreaOutputView(segmentSize);
    try {
        addSegment();
    } catch (EOFException ex) {
        throw new RuntimeException("Bug in InPlaceMutableHashTable: we should have caught it earlier " +
            "that we don't have enough segments.");
    }
    inView = new RandomAccessInputView(segments, segmentSize);
}
Example 9
Source File: HashPartition.java From flink with Apache License 2.0
private BuildSideBuffer(MemorySegment initialSegment, MemorySegmentSource memSource) {
    super(initialSegment, initialSegment.size(), 0);

    this.targetList = new ArrayList<MemorySegment>();
    this.memSource = memSource;
    this.sizeBits = MathUtils.log2strict(initialSegment.size());
}
Example 10
Source File: HashPartition.java From Flink-CEPplus with Apache License 2.0
private BuildSideBuffer(MemorySegment initialSegment, MemorySegmentSource memSource) {
    super(initialSegment, initialSegment.size(), 0);

    this.targetList = new ArrayList<MemorySegment>();
    this.memSource = memSource;
    this.sizeBits = MathUtils.log2strict(initialSegment.size());
}
Example 11
Source File: BinaryHashPartition.java From flink with Apache License 2.0
private BuildSideBuffer(MemorySegment initialSegment, MemorySegmentSource memSource) {
    super(initialSegment, initialSegment.size(), 0);
    this.memSource = memSource;
    this.sizeBits = MathUtils.log2strict(initialSegment.size());
    this.targetList = new ArrayList<>();
    this.buildStageSegments = new ArrayList<>();
    this.buildStageSegments.add(initialSegment);
    this.buildStageInputView = new RandomAccessInputView(
            buildStageSegments, initialSegment.size());
}
Example 12
Source File: InPlaceMutableHashTable.java From flink with Apache License 2.0
public InPlaceMutableHashTable(TypeSerializer<T> serializer, TypeComparator<T> comparator, List<MemorySegment> memory) {
    super(serializer, comparator);
    this.numAllMemorySegments = memory.size();
    this.freeMemorySegments = new ArrayList<>(memory);

    // some sanity checks first
    if (freeMemorySegments.size() < MIN_NUM_MEMORY_SEGMENTS) {
        throw new IllegalArgumentException("Too few memory segments provided. InPlaceMutableHashTable needs at least " +
            MIN_NUM_MEMORY_SEGMENTS + " memory segments.");
    }

    // Get the size of the first memory segment and record it. All further buffers must have the same size.
    // the size must also be a power of 2
    segmentSize = freeMemorySegments.get(0).size();
    if ((segmentSize & segmentSize - 1) != 0) {
        throw new IllegalArgumentException("Hash Table requires buffers whose size is a power of 2.");
    }

    this.numBucketsPerSegment = segmentSize / bucketSize;
    this.numBucketsPerSegmentBits = MathUtils.log2strict(this.numBucketsPerSegment);
    this.numBucketsPerSegmentMask = (1 << this.numBucketsPerSegmentBits) - 1;

    recordArea = new RecordArea(segmentSize);

    stagingSegments = new ArrayList<>();
    stagingSegments.add(forcedAllocateSegment());
    stagingSegmentsInView = new RandomAccessInputView(stagingSegments, segmentSize);
    stagingSegmentsOutView = new StagingOutputView(stagingSegments, segmentSize);

    prober = new HashTableProber<>(buildSideComparator, new SameTypePairComparator<>(buildSideComparator));

    enableResize = buildSideSerializer.getLength() == -1;
}
Example 13
Source File: InPlaceMutableHashTable.java From flink with Apache License 2.0
public StagingOutputView(ArrayList<MemorySegment> segments, int segmentSize) {
    super(segmentSize, 0);
    this.segmentSizeBits = MathUtils.log2strict(segmentSize);
    this.segments = segments;
}
Example 14
Source File: RandomAccessOutputView.java From Flink-CEPplus with Apache License 2.0
public RandomAccessOutputView(MemorySegment[] segments, int segmentSize) {
    this(segments, segmentSize, MathUtils.log2strict(segmentSize));
}
Example 15
Source File: BinaryHashTable.java From flink with Apache License 2.0
public BinaryHashTable(
        Configuration conf,
        Object owner,
        AbstractRowSerializer buildSideSerializer,
        AbstractRowSerializer probeSideSerializer,
        Projection<BaseRow, BinaryRow> buildSideProjection,
        Projection<BaseRow, BinaryRow> probeSideProjection,
        MemoryManager memManager,
        long reservedMemorySize,
        long preferredMemorySize,
        long perRequestMemorySize,
        IOManager ioManager,
        int avgRecordLen,
        long buildRowCount,
        boolean useBloomFilters,
        HashJoinType type,
        JoinCondition condFunc,
        boolean reverseJoin,
        boolean[] filterNulls,
        boolean tryDistinctBuildRow) {
    super(conf, owner, memManager, reservedMemorySize, preferredMemorySize, perRequestMemorySize,
            ioManager, avgRecordLen, buildRowCount, !type.buildLeftSemiOrAnti() && tryDistinctBuildRow);

    // assign the members
    this.originBuildSideSerializer = buildSideSerializer;
    this.binaryBuildSideSerializer = new BinaryRowSerializer(buildSideSerializer.getArity());
    this.reuseBuildRow = binaryBuildSideSerializer.createInstance();
    this.originProbeSideSerializer = probeSideSerializer;
    this.binaryProbeSideSerializer = new BinaryRowSerializer(originProbeSideSerializer.getArity());

    this.buildSideProjection = buildSideProjection;
    this.probeSideProjection = probeSideProjection;
    this.useBloomFilters = useBloomFilters;
    this.type = type;
    this.condFunc = condFunc;
    this.reverseJoin = reverseJoin;
    this.nullFilterKeys = NullAwareJoinHelper.getNullFilterKeys(filterNulls);
    this.nullSafe = nullFilterKeys.length == 0;
    this.filterAllNulls = nullFilterKeys.length == filterNulls.length;

    this.bucketsPerSegment = this.segmentSize >> BinaryHashBucketArea.BUCKET_SIZE_BITS;
    checkArgument(bucketsPerSegment != 0,
            "Hash Table requires buffers of at least " + BinaryHashBucketArea.BUCKET_SIZE + " bytes.");
    this.bucketsPerSegmentMask = bucketsPerSegment - 1;
    this.bucketsPerSegmentBits = MathUtils.log2strict(bucketsPerSegment);

    this.partitionsBeingBuilt = new ArrayList<>();
    this.partitionsPending = new ArrayList<>();

    createPartitions(initPartitionFanOut, 0);
}
Example 16
Source File: LookupBucketIterator.java From flink with Apache License 2.0
LookupBucketIterator(BinaryHashTable table) {
    this.table = table;
    this.reuse = table.binaryBuildSideSerializer.createInstance();
    this.segmentSizeBits = MathUtils.log2strict(table.pageSize());
    this.segmentSizeMask = table.pageSize() - 1;
}
Example 17
Source File: CompactingHashTable.java From flink with Apache License 2.0
public CompactingHashTable(TypeSerializer<T> buildSideSerializer,
                            TypeComparator<T> buildSideComparator,
                            List<MemorySegment> memorySegments,
                            int avgRecordLen) {
    super(buildSideSerializer, buildSideComparator);

    // some sanity checks first
    if (memorySegments == null) {
        throw new NullPointerException();
    }
    if (memorySegments.size() < MIN_NUM_MEMORY_SEGMENTS) {
        throw new IllegalArgumentException("Too few memory segments provided. Hash Table needs at least " +
            MIN_NUM_MEMORY_SEGMENTS + " memory segments.");
    }

    this.availableMemory = (memorySegments instanceof ArrayList) ?
            (ArrayList<MemorySegment>) memorySegments :
            new ArrayList<MemorySegment>(memorySegments);

    this.avgRecordLen = buildSideSerializer.getLength() > 0 ?
            buildSideSerializer.getLength() : avgRecordLen;

    // check the size of the first buffer and record it. all further buffers must have the same size.
    // the size must also be a power of 2
    this.segmentSize = memorySegments.get(0).size();
    if ((this.segmentSize & this.segmentSize - 1) != 0) {
        throw new IllegalArgumentException("Hash Table requires buffers whose size is a power of 2.");
    }
    this.pageSizeInBits = MathUtils.log2strict(this.segmentSize);

    int bucketsPerSegment = this.segmentSize >> NUM_INTRA_BUCKET_BITS;
    if (bucketsPerSegment == 0) {
        throw new IllegalArgumentException("Hash Table requires buffers of at least " + HASH_BUCKET_SIZE + " bytes.");
    }
    this.bucketsPerSegmentMask = bucketsPerSegment - 1;
    this.bucketsPerSegmentBits = MathUtils.log2strict(bucketsPerSegment);

    this.partitions = new ArrayList<InMemoryPartition<T>>();

    // so far no partition has any MemorySegments
}
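Note that log2strict is applied twice here: to the page size and to the derived bucketsPerSegment. The second call is safe because a power of two shifted right by a constant is still a power of two, as long as it stays non-zero, which the preceding check guarantees. A sketch of that derivation; the 7-bit (128-byte) bucket size is an assumed value for illustration, not necessarily the real HASH_BUCKET_SIZE:

import org.apache.flink.util.MathUtils;

public class BucketsPerSegmentDemo {
    public static void main(String[] args) {
        int segmentSize = 32 * 1024;  // power of two
        int numIntraBucketBits = 7;   // assumed: 128-byte hash buckets
        int bucketsPerSegment = segmentSize >> numIntraBucketBits; // 256, still a power of two
        System.out.println(MathUtils.log2strict(segmentSize));       // 15
        System.out.println(MathUtils.log2strict(bucketsPerSegment)); // 8
    }
}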
Example 18
Source File: CompactingHashTable.java From Flink-CEPplus with Apache License 2.0
public CompactingHashTable(TypeSerializer<T> buildSideSerializer,
                            TypeComparator<T> buildSideComparator,
                            List<MemorySegment> memorySegments,
                            int avgRecordLen) {
    super(buildSideSerializer, buildSideComparator);

    // some sanity checks first
    if (memorySegments == null) {
        throw new NullPointerException();
    }
    if (memorySegments.size() < MIN_NUM_MEMORY_SEGMENTS) {
        throw new IllegalArgumentException("Too few memory segments provided. Hash Table needs at least " +
            MIN_NUM_MEMORY_SEGMENTS + " memory segments.");
    }

    this.availableMemory = (memorySegments instanceof ArrayList) ?
            (ArrayList<MemorySegment>) memorySegments :
            new ArrayList<MemorySegment>(memorySegments);

    this.avgRecordLen = buildSideSerializer.getLength() > 0 ?
            buildSideSerializer.getLength() : avgRecordLen;

    // check the size of the first buffer and record it. all further buffers must have the same size.
    // the size must also be a power of 2
    this.segmentSize = memorySegments.get(0).size();
    if ((this.segmentSize & this.segmentSize - 1) != 0) {
        throw new IllegalArgumentException("Hash Table requires buffers whose size is a power of 2.");
    }
    this.pageSizeInBits = MathUtils.log2strict(this.segmentSize);

    int bucketsPerSegment = this.segmentSize >> NUM_INTRA_BUCKET_BITS;
    if (bucketsPerSegment == 0) {
        throw new IllegalArgumentException("Hash Table requires buffers of at least " + HASH_BUCKET_SIZE + " bytes.");
    }
    this.bucketsPerSegmentMask = bucketsPerSegment - 1;
    this.bucketsPerSegmentBits = MathUtils.log2strict(bucketsPerSegment);

    this.partitions = new ArrayList<InMemoryPartition<T>>();

    // so far no partition has any MemorySegments
}