/*
 * Copyright (c) 2005, 2012, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 */

// All sizes are in HeapWords.

// These fields may not be updated below, so make sure they're clear.

// Determine the number of destination regions for the partial object.
// One destination region.  The destination falls on a region boundary, thus
// the first word of the partial object will be the first word copied to the
// destination region.
// Two destination regions.  When copied, the partial object will cross a
// destination region boundary, so a word somewhere within the partial
// object will be the first word copied to the second destination region.

const char* space_names[] = { "perm", "old ", "eden", "from", "to " };

tty->print_cr("------ ---------- ---------- ---------- ----------");
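The one- vs two-destination-region distinction above reduces to checking whether the copied partial object crosses a region boundary. A minimal sketch, assuming a region size and helper names that are illustrative stand-ins (not HotSpot's actual types):

```cpp
#include <cassert>
#include <cstddef>

// Assumed region size in HeapWords, for illustration only.
const size_t RegionSizeWords = 512;

size_t region_index(size_t addr_words) { return addr_words / RegionSizeWords; }

// One destination region if the copied partial object lies within a single
// region; two if it crosses a destination region boundary.
size_t destination_count(size_t dest_words, size_t partial_obj_size) {
  assert(partial_obj_size > 0);
  const size_t last_word = dest_words + partial_obj_size - 1;
  return region_index(dest_words) == region_index(last_word) ? 1 : 2;
}
```

For example, an object copied to a destination that falls exactly on a region boundary and fits within one region has a single destination region.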
// Print (and count) the full regions at the beginning of the space.
// Print the 'reclaimed ratio' for regions while there is something live in
// the region or to the right of it.  The remaining regions are empty (and
// uninteresting), and computing the ratio will result in division by 0.
// Any remaining regions are empty.  Print one more if there is one.
#endif  // #ifndef PRODUCT

assert(/* elided */, "region start not aligned");
assert(/* elided */, "region size not a multiple of RegionSize");

// Release memory reserved in the space.

// Middle regions--completely spanned by this object.
// Update live_obj_size so the region appears completely full.

// Find the point at which a space can be split and, if necessary, record the
// split point.
//
// If the current src region (which overflowed the destination space) doesn't
// have a partial object, the split point is at the beginning of the current src
// region (an "easy" split, no extra bookkeeping required).
//
// If the current src region has a partial object, the split point is in the
// region where that partial object starts (call it the split_region).  If
// split_region has a partial object, then the split point is just after that
// partial object (a "hard" split where we have to record the split data and
// zero the partial_obj_size field).  With a "hard" split, we know that the
// partial_obj ends within split_region because the partial object that caused
// the overflow starts in split_region.  If split_region doesn't have a partial
// obj, then the split is at the beginning of split_region (another "easy"
// split).
assert(/* elided */, "region should not fit into target space");

// The split point is just after the partial object (if any) in the
// src_region that contains the start of the object that overflowed the
// destination space.
//
// Find the start of the "overflow" object and set split_region to the
// region containing it.

// Clear the source_region field of all destination regions whose first word
// came from data after the split point (a non-null source_region field
// implies a region must be filled).
//
// An alternative to the simple loop below:  clear during post_compact(),
// which uses memcpy instead of individual stores, and is easy to
// parallelize.  (The downside is that it clears the entire RegionData
// object as opposed to just one field.)
//
// post_compact() would have to clear the summary data up to the highest
// address that was written during the summary phase, which would be
//   max(top, max(new_top, clear_top))
// where clear_top is a new field in SpaceInfo.  Would have to set clear_top
// as well.

// Set split_destination and partial_obj_size to reflect the split region.
// The split is recorded only if a partial object extends onto the region.

// Set up the continuation addresses.
// The destination must be set even if the region has no data.

// If cur_region does not fit entirely into the target space, find a point
// at which the source space can be 'split' so that part is copied to the
// target space and the rest is copied elsewhere.

// Compute the destination_count for cur_region, and if necessary, update
// source_region for a destination region.  The source_region field is
// updated if cur_region is the first (left-most) region to be copied to a
// destination region.
//
// The destination_count calculation is a bit subtle.  A region that has
// data that compacts into itself does not count itself as a destination.
// This maintains the invariant that a zero count means the region is
// available and can be claimed and then filled.

// The current region has been split:  the partial object will be copied
// to one destination space and the remaining data will be copied to
// another destination space.  Adjust the initial destination_count and,
// if necessary, set the source_region field if the partial object will
// cross a destination region boundary.

// Initially assume that the destination regions will be the same and
// adjust the value below if necessary.  Under this assumption, if
// cur_region == dest_region_2, then cur_region will be compacted
// completely into itself.

// Destination regions differ; adjust destination_count.
// Data from cur_region will be copied to the start of dest_region_2.
// Data from cur_region will be copied to the start of the destination
// region.

// Region covering the object.

// If the entire Region is live, the new location is region->destination + the
// offset of the object within the Region.
//
// Run some performance tests to determine if this special case pays off.  It
// is worth it for pointers into the dense prefix.  If the optimization to
// avoid pointer updates in regions that only point to the dense prefix is
// ever implemented, this should be revisited.
//
// Otherwise, the new location is region->destination + block offset + the
// number of live words in the Block that are (a) to the left of addr and (b)
// due to objects that start in the Block.

// Fill in the block table if necessary.  This is unsynchronized, so multiple
// threads may fill the block table for a region (harmless, since it is
// idempotent).
                       true,    // atomic_discovery
                       false);  // write barrier for next field updates

// Initialize static fields in ParCompactionManager.

// Was the old gen allocated successfully?
  "garbage collection for the requested " SIZE_FORMAT "KB heap.",
  "garbage collection for the requested " SIZE_FORMAT "KB heap.",

// Simple class for storing info about the heap at the start of GC, to be used
// after GC.

// At this point, top is the value before GC, new_top() is the value that will
// be set at the end of GC.  The marking bitmap is cleared to top; nothing
// should be marked above top.  The summary data is cleared to the larger of
// top & new_top.

// Clear the data used to 'split' regions.

// Update the from & to space pointers in space_info, since they are swapped
// at each young gen gc.  Do the update unconditionally (even though a
// promotion failure does not swap spaces) because an unknown number of minor
// collections will have swapped the spaces an unknown number of times.

// Increment the invocation count.
// We need to track unique mark sweep invocations as well.
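The "new location" rule described above (region destination + block offset + live words in the block to the left of the address) can be sketched with simplified stand-in types; the names, the block size, and the table layout below are assumptions for illustration, not HotSpot's actual data structures:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Assumed block size in HeapWords, for illustration only.
const size_t BlockSizeWords = 128;

struct RegionInfo {
  size_t destination;                   // first word the region compacts to
  std::vector<size_t> block_live_left;  // per block: live words in the region
                                        // to the left of the first object
                                        // starting in that block
};

// live_in_block_left_of: live words in addr's block that precede addr and
// belong to objects starting in the block (derived from the mark bitmap in
// the real collector).
size_t new_location(const RegionInfo& r, size_t region_offset_words,
                    size_t live_in_block_left_of) {
  const size_t block = region_offset_words / BlockSizeWords;
  return r.destination + r.block_live_left[block] + live_in_block_left_of;
}
```

The block table turns the per-object lookup into one table read plus a short bitmap scan within a single block.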
// Discard invalid handles created during verification // Verify object start arrays // Have worker threads release resources the next time they run a task. // Clear the marking bitmap, summary data and split info. // Update top(). Must be done after clearing the bitmap and summary data. // Update heap occupancy information which is used as input to the soft ref // clearing policy at the next gc. // Update time of last GC // Skip full regions at the beginning of the space--they are necessarily part // XXX - Use binary search? // Found the region that has the correct amount of deadwood to the left. // This typically occurs after crossing a fairly sparse set of regions, so // iterate backwards over those sparse regions, looking for the region // that has the lowest density of live objects 'to the right.' #
endif // #ifndef PRODUCT// Return a fraction indicating how much of the generation can be treated as // "dead wood" (i.e., not reclaimed). The function uses a normal distribution // based on the density of live objects in the generation to determine a limit, // which is then adjusted so the return value is min_percent when the density is // The following table shows some return values for a different values of the // standard deviation (ParallelOldDeadWoodLimiterStdDev); the mean is 0.5 and // fraction allowed as dead wood // ----------------------------------------------------------------- // density std_dev=70 std_dev=75 std_dev=80 std_dev=85 std_dev=90 std_dev=95 // ------- ---------- ---------- ---------- ---------- ---------- ---------- // 0.00000 0.01000000 0.01000000 0.01000000 0.01000000 0.01000000 0.01000000 // 0.05000 0.03193096 0.02836880 0.02550828 0.02319280 0.02130337 0.01974941 // 0.10000 0.05247504 0.04547452 0.03988045 0.03537016 0.03170171 0.02869272 // 0.15000 0.07135702 0.06111390 0.05296419 0.04641639 0.04110601 0.03676066 // 0.20000 0.08831616 0.07509618 0.06461766 0.05622444 0.04943437 0.04388975 // 0.25000 0.10311208 0.08724696 0.07471205 0.06469760 0.05661313 0.05002313 // 0.30000 0.11553050 0.09741183 0.08313394 0.07175114 0.06257797 0.05511132 // 0.35000 0.12538832 0.10545958 0.08978741 0.07731366 0.06727491 0.05911289 // 0.40000 0.13253818 0.11128511 0.09459590 0.08132834 0.07066107 0.06199500 // 0.45000 0.13687208 0.11481163 0.09750361 0.08375387 0.07270534 0.06373386 // 0.50000 0.13832410 0.11599237 0.09847664 0.08456518 0.07338887 0.06431510 // 0.55000 0.13687208 0.11481163 0.09750361 0.08375387 0.07270534 0.06373386 // 0.60000 0.13253818 0.11128511 0.09459590 0.08132834 0.07066107 0.06199500 // 0.65000 0.12538832 0.10545958 0.08978741 0.07731366 0.06727491 0.05911289 // 0.70000 0.11553050 0.09741183 0.08313394 0.07175114 0.06257797 0.05511132 // 0.75000 0.10311208 0.08724696 0.07471205 0.06469760 0.05661313 0.05002313 // 0.80000 
0.08831616 0.07509618 0.06461766 0.05622444 0.04943437 0.04388975 // 0.85000 0.07135702 0.06111390 0.05296419 0.04641639 0.04110601 0.03676066 // 0.90000 0.05247504 0.04547452 0.03988045 0.03537016 0.03170171 0.02869272 // 0.95000 0.03193096 0.02836880 0.02550828 0.02319280 0.02130337 0.01974941 // 1.00000 0.01000000 0.01000000 0.01000000 0.01000000 0.01000000 0.01000000 // The raw limit is the value of the normal distribution at x = density. // Adjust the raw limit so it becomes the minimum when the density is 1. // First subtract the adjustment value (which is simply the precomputed value // normal_distribution(1.0)); this yields a value of 0 when the density is 1. // Then add the minimum value, so the minimum is returned when the density is // 1. Finally, prevent negative values, which occur when the mean is not 0.5. // Equivalent to (left + right) / 2, but does not overflow. // Equivalent to (left + right) / 2, but does not overflow. // The result is valid during the summary phase, after the initial summarization // of each space into itself, and before final summarization. // Return the address of the end of the dense prefix, a.k.a. the start of the // compacted region. The address is always on a region boundary. // Completely full regions at the left are skipped, since no compaction can // occur in those regions. Then the maximum amount of dead wood to allow is // computed, based on the density (amount live / capacity) of the generation; // the region with approximately that amount of dead space to the left is // identified as the limit region. Regions between the last completely full // region and the limit region are scanned and the one that has the best // (maximum) reclaimed_ratio() is selected. // The value was chosen to provoke splitting a young gen space; use it. // Skip full regions at the beginning of the space--they are necessarily part "region must have dead space");
// The gc number is saved whenever a maximum compaction is done, and used to // determine when the maximum compaction interval has expired. This avoids // successive max compactions for different reasons. // Locate the region with the desired amount of dead space to the left. // Scan from the first region with dead space to the limit region and find the // one with the best (largest) reclaimed ratio. // Something to consider: if the region with the best ratio is 'close to' the // first region w/free space, choose the first region with free space // ("first-free"). The first-free region is usually near the start of the // heap, which means we are copying most of the heap already, so copy a bit // more to get complete compaction. // Find the source and destination start addresses. // The start (the original top() value) is aligned to a region boundary so // the associated region does not have a destination. Compute the // destination from the previous region. // Filling the entire space. // Update the summary data. // The loop didn't completely fill to t (top); adjust top downward. // Choose the space to split; need at least 2 regions live (or fillable). // Fill from top() to end() w/live objects of mixed sizes. // Manipulate the old gen so that it has room for about half of the live data // in the target young gen space (live_words / 2). // Fill space above top() and set the dense prefix so everything survives. // Find a dense prefix that makes the right amount of space available. #
endif // #ifndef PRODUCT#
endif // #ifndef PRODUCT // Only enough dead space is filled so that any remaining dead space to the // left is larger than the minimum filler object. (The remainder is filled // The size of the dead space to the right of the boundary is not a // concern, since compaction will be able to use whatever space is // Here '||' is the boundary, 'x' represents a don't care bit and a box // surrounds the space to be filled with an object. // In the 32-bit VM, each bit represents two 32-bit words: // a) beg_bits: ... x x x | 0 | || 0 x x ... // end_bits: ... x x x | 0 | || 0 x x ... // In the 64-bit VM, each bit represents one 64-bit word: // b) beg_bits: ... x x x | 0 || 0 | x x ... // end_bits: ... x x 1 | 0 || 0 | x x ... // c) beg_bits: ... x x | 0 0 | || 0 x x ... // end_bits: ... x 1 | 0 0 | || 0 x x ... // d) beg_bits: ... x | 0 0 0 | || 0 x x ... // end_bits: ... 1 | 0 0 0 | || 0 x x ... // e) beg_bits: ... 0 0 | 0 0 | || 0 x x ... // end_bits: ... 0 0 | 0 0 | || 0 x x ... // Initially assume case a, c or e will apply. "should have been reset in summarize_spaces_quick()");
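The dead-wood limiter tabulated earlier can be reproduced with an ordinary normal-distribution pdf centered at density 0.5, shifted so the value at density 1.0 equals min_percent. The sketch below is an illustrative reconstruction from the comments, not HotSpot's exact code; the function names are assumed:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Normal-distribution probability density function.
double normal_pdf(double x, double mean, double std_dev) {
  const double pi = std::acos(-1.0);
  const double d = (x - mean) / std_dev;
  return std::exp(-0.5 * d * d) / (std_dev * std::sqrt(2.0 * pi));
}

// Fraction of the generation allowed as dead wood, per the comments above:
// raw limit at x = density, minus the value at density 1, plus the minimum,
// clamped to be non-negative.  std_dev_percent mirrors the
// ParallelOldDeadWoodLimiterStdDev flag (e.g. 80 for std_dev = 0.80).
double dead_wood_limiter(double density, double min_percent,
                         double std_dev_percent) {
  const double std_dev = std_dev_percent / 100.0;
  const double raw_limit = normal_pdf(density, 0.5, std_dev);
  const double adjustment = normal_pdf(1.0, 0.5, std_dev);  // zero at density 1
  const double min = min_percent / 100.0;
  return std::max(raw_limit - adjustment + min, 0.0);
}
```

With std_dev = 80 and min_percent = 1%, this reproduces the table's column (e.g. 0.09847664 at density 0.5, 0.01 at density 0 or 1).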
#endif  // #ifndef PRODUCT

// Recompute the summary data, taking into account the dense prefix.  If
// every last byte will be reclaimed, then the existing summary data which
// compacts everything can be left in place.

// If dead space crosses the dense prefix boundary, it is (at least
// partially) filled with a dummy object, marked live and added to the
// summary data.  This simplifies the copy/update phase and must be done
// before the final locations of objects are determined, to prevent
// leaving a fragment of dead space that is too small to fill.

// Compute the destination of each Region, and thus each object.
#endif  // #ifndef PRODUCT

// Quick summarization of each space into itself, to see how much is live.
tty->print_cr("summary_phase:  after summarizing each space to self");
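The binary searches noted earlier carry the comment "Equivalent to (left + right) / 2, but does not overflow." A minimal sketch of that idiom: adding first can wrap when `left + right` exceeds the type's maximum, while biasing from `left` cannot.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Overflow-free midpoint: equivalent to (left + right) / 2 for left <= right,
// but the intermediate (right - left) never exceeds the representable range.
size_t midpoint(size_t left, size_t right) {
  assert(left <= right);
  return left + (right - left) / 2;
}
```

Near SIZE_MAX, the naive form `(left + right) / 2` would wrap and return a tiny value.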
// The amount of live data that will end up in old space (assuming it fits).

// XXX - should also try to expand
#endif  // #ifndef PRODUCT

// Permanent and Old generations.

// Summarize the remaining spaces in the young gen.  The initial target space
// is the old gen.  If a space does not fit entirely into the target, then the
// remainder is compacted into the space itself and that space becomes the new
// target.

// All the live data will fit.

// Reset the new_top value for the space.

// Attempt to fit part of the source space into the target space.
assert(!done, "space should not fit into old gen");

// The source space becomes the new target, so the remainder is compacted
// within the space itself.
assert(done, "space must fit when compacted into itself");

tty->print_cr("summary_phase:  after final summarization");
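The target-switching loop described above can be sketched as follows. This is an illustrative simplification, not HotSpot's code: sizes are plain word counts, and each space's live data either fits entirely in the current target or is split between the target and the space itself, which then becomes the new target.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Space { size_t live_words; size_t capacity_words; };

// Returns, for each space, the index of the space it compacts into
// (-1 meaning "the old gen", which is the initial target).
std::vector<int> summarize(const std::vector<Space>& spaces,
                           size_t old_gen_free) {
  std::vector<int> target_of(spaces.size());
  int target = -1;               // -1: old gen is the current target
  size_t free_words = old_gen_free;
  for (size_t i = 0; i < spaces.size(); ++i) {
    if (spaces[i].live_words <= free_words) {
      target_of[i] = target;     // all the live data will fit
      free_words -= spaces[i].live_words;
    } else {
      // Part of the space fits into the target; the remainder is compacted
      // into the space itself, which becomes the new target.
      const size_t remainder = spaces[i].live_words - free_words;
      target_of[i] = static_cast<int>(i);
      target = static_cast<int>(i);
      free_words = spaces[i].capacity_words - remainder;
    }
  }
  return target_of;
}
```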
// This method should contain all heap-specific policy for invoking a full
// collection.  invoke_no_policy() will only attempt to compact the heap; it
// will do nothing further.  If we need to bail out for policy reasons, scavenge
// before full gc, or any other specialized behavior, it needs to be added here.
//
// Note that this method should only be called from the vm_thread while at a
// safepoint.
//
// Note that the all_soft_refs_clear flag in the collector policy
// may be true because this method can be called without intervening
// activity.  For example when the heap space is tight and full measures
// are being taken to free space.
assert(/* elided */, "should be in vm thread");

// This method contains no policy.  You should probably
// be calling invoke() instead.

// The scope of casr should end after code that can change
// CollectorPolicy::_should_clear_all_soft_refs.

// Save information needed to minimize mangling.

// Make sure data structures are sane, make the heap parsable, and do other
// miscellaneous bookkeeping.

// Get the compaction manager reserved for the VM thread.

// Place after pre_compact() where the number of invocations is incremented.

// Set the number of GC threads to be used in this collection.

// Let the size policy know we're starting.

// When collecting the permanent generation methodOops may be moving,
// so we either have to flush all bcp data or convert it into bci.
#endif  // #ifndef PRODUCT

// adjust_roots() updates Universe::_intArrayKlassObj which is
// needed by the compaction for filling holes in the dense prefix.

// Does the perm gen always have to be done serially because
// klasses are used in the update of an object?

// Reset the mark bitmap, summary data, and do other bookkeeping.  Must be
// done before resizing.

// Let the size policy know we're done.
  " perm_gen_capacity: %d ",

// Don't check if the size_policy is ready here.  Let
// the size_policy check that internally.

// Calculate optimal free space amounts.
assert(/* elided */, "Sizes of space in young gen are out-of-bounds");

// Don't resize the young generation at a major collection.  A
// desired young generation size may have been calculated but
// resizing the young generation complicates the code because the
// resizing of the old generation may have moved the boundary
// between the young generation and the old generation.  Let the
// young generation resizing happen at the minor collections.

// We collected the perm gen, so we'll resize it here.

// No GC timestamp here.  This is after GC so it would be confusing.

// Print perm gen last (print_heap_change() excludes the perm gen).

// Track memory usage and detect low memory.
// Discard invalid handles created during verification // Re-verify object start arrays // Both generations must be completely committed. // Figure out how much to take from eden. Include the average amount promoted // in the total; otherwise the next young gen GC will simply bail out to a return false;
// Must leave some space in eden. return false;
// Respect young gen minimum size. // Fill the unused part of the old gen. return false;
// If the old gen cannot be filled, must give up. // Take the live data from eden and set both top and end in the old gen to // eden top. (Need to set end because reset_after_change() mangles the region // from end to virtual_space->high() in debug builds). // Update the object start array for the filler object and the data from eden. // Could update the promoted average here, but it is not typically updated at // full GCs and the value to use is unclear. Something like // cur_promoted_avg + absorb_size / number_of_scavenges_since_last_full_gc. "shouldn't return NULL");
// Recursively traverse all live objects and mark them // We scan the thread roots in parallel // Process reference objects found during marking // Follow system dictionary roots and unload classes. // Follow code cache roots. // revisit_klass_stack is used in follow_weak_klass_links(). // Revisit memoized MDO's and clear any unmarked weak refs // Visit interned string tables and delete unmarked oops // Clean up unreferenced symbols in symbol table. // This should be moved to the shared markSweep code! // Adjust the pointers to reflect the new locations // Now adjust pointers in remaining weak roots. (All of which should // have been cleared if they pointed to non-surviving objects.) // Global (weak) JNI handles // Roots were visited so references into the young gen in roots // may have been scanned. Process them also. // Should the reference processor have a span that excludes // Find the threads that are active // Set the region stacks variables to "no" region stack values // so that they will be recognized and needing a region stack // in the stealing tasks if they do not get one by executing // Find all regions that are available (can be filled immediately) and // distribute them to the thread stacks. The iteration is done in reverse // order (high to low) so the regions will be removed in ascending order. // A region index which corresponds to the tasks created above. // "which" must be 0 <= which < task_count // Assign regions to tasks in round-robin fashion. "Inconsistent number of workers");
// Iterate over all the spaces adding tasks for updating // regions in the dense prefix. Assume that 1 gc thread // will work on opening the gaps and the remaining gc threads // will work on the dense prefix. // There is no dense prefix for this space. // The dense prefix is before this region. "The region after the dense prefix should always be ready to fill");
// Is there dense prefix work? // How many regions of the dense prefix should be given to // Don't over partition. This assumes that // PAR_OLD_DENSE_PREFIX_OVER_PARTITIONING is a small integer value // so there are not many regions to process. // Give each thread at least 1 region. // region_index_end is not processed // This gets any part of the dense prefix that did not // Once a thread has drained it's stack, it should try to steal regions from // Write a histogram of the number of times the block table was filled for a // Verify that all regions have been processed before the deferred updates. // Note that perm_space_id is skipped; this type of verification is not // valid until the perm gen is compacted by regions. // Update the deferred objects, if any. Any compaction manager can be used. // All Regions between space bottom() to new_top() should be marked as filled // and all Regions between new_top() and top() should be available (i.e., // should have been emptied). // All klasses on the revisit stack are marked at this point. // Update and follow all subklass, sibling and implementor links. // Check all the stacks here even if not all the workers are active. // There is no accounting which indicates which stacks might have // contents to be followed. // All strongly reachable oops have been marked at this point; // we can visit and clear any weak references from MDO's which // we memoized during the strong marking phase. if (l > 0 && l -
1 !=
index) {
"should be moved to forwarded location");
tty->
print_cr(
"Requires RecordMarkSweepCompaction to be enabled");
tty->
print_cr(
"No compaction information gathered yet");
#
endif //VALIDATE_MARK_SWEEP// Update interior oops in the ranges of regions [beg_region, end_region). // Claim the regions to avoid triggering an assert when they are marked as // Find the first live object or block of dead space that *starts* in this // range of regions. If a partial object crosses onto the region, skip it; // it will be marked for 'deferred update' when the object head is // processed. If dead space crosses onto the region, it is also skipped; it // will be filled when the prior region is processed. If neither of those // apply, the first word in the region is the start of a live object or dead // A live object or block of dead space starts in this range of Regions. // Create closures and iterate. // Mark the regions as filled. // Return the SpaceId for the space containing addr. If addr is not in the // heap, last_space_id is returned. In debug mode it expects the address to be // in the heap and asserts such. assert(
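The SpaceId lookup just described is a linear scan over the spaces; in debug builds a miss asserts, and otherwise a sentinel id is returned. A sketch with stand-in types (the range layout and names are assumptions, not HotSpot's):

```cpp
#include <cassert>
#include <cstddef>

struct SpaceRange { size_t bottom, top; };  // half-open [bottom, top) in words

// Return the index of the space containing addr; num_spaces plays the role
// of last_space_id when addr is not in the heap.
size_t space_id(const SpaceRange* spaces, size_t num_spaces, size_t addr) {
  for (size_t i = 0; i < num_spaces; ++i) {
    if (spaces[i].bottom <= addr && addr < spaces[i].top) {
      return i;
    }
  }
  assert(false && "no space contains the addr");  // debug builds assert
  return num_spaces;  // treated as last_space_id: addr is not in the heap
}
```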
false,
"no space contains the addr");
// Skip over count live words starting from beg, and return the address of the // next live word. Unless marked, the word corresponding to beg is assumed to // be dead. Callers must either ensure beg does not correspond to the middle of // an object, or account for those live words in some other way. Callers must // also ensure that there are enough live words in the range [beg, end) to skip. // Skipping the desired number of words landed just past the end of an object. // Find the start of the next object. // The partial object ending at the split point contains the first word to // be copied to dest_addr. // Return the first live word in the source region. // Must skip some live data. // All the live words to skip are part of the partial object. // Find the first live word past the partial object. // Skip over the partial object (if any). // Skip over live words due to objects that start in the region. "src_space_id does not match beg_addr");
assert(/* elided */, "src_space_id does not match beg_addr");
assert(/* elided */, "src_space_id does not match end_addr");

// Regions up to new_top() are enqueued if they become available.

// Skip empty regions (if any) up to the top of the space.

// The next source region is in the current space.  Update src_region_idx
// and the source address to match src_region_ptr.

// Switch to a new source space and find the first non-empty region.

// Iterate over the spaces that do not compact into themselves.
assert(/* elided */, "first live obj in the space must match the destination");
assert(/* elided */, "a space cannot begin with a partial obj");
assert(false, "no source region was found");

// Get the items needed to construct the closure.

// Get the source region and related info.

// Adjust src_region_idx to prepare for decrementing destination counts (the
// destination count is not decremented when a region is copied to itself).

// The first source word is in the middle of an object; copy the remainder
// of the object or as much as will fit.  The fact that pointer updates were
// deferred will be noted when the object header is processed.

// The partial object was copied from more than one source region.

// Move to the next source region, possibly switching spaces as well.  All
// args except end_addr may be modified.

// The last obj that starts in the source region does not end in the
// region.

// The end was found; the entire object will fit.

// The end was not found; the object will not fit.

// The last object did not fit.  Note that interior oop updates were
// deferred, then copy enough of the object to fill the region.

// Move to the next source region, possibly switching spaces as well.  All
// args except end_addr may be modified.

// Fill in the block table elements for the specified region.  Each block
// table element holds the number of live words in the region that are to the
// left of the first object that starts in the block.  Thus only blocks in
// which an object starts need to be filled.
//
// The algorithm scans the section of the bitmap that corresponds to the
// region, keeping a running total of the live words.  When an object start is
// found, if it's the first to start in the block that contains it, the
// current total is written to the block table element.
return;
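The block-table fill algorithm described above keeps a running total of live words and records it the first time an object starts in each block. A sketch with objects given as (start, size) pairs in region-relative words; HotSpot derives these from the mark bitmap, and the block size here is assumed for illustration:

```cpp
#include <cstddef>
#include <vector>

const size_t BlockWords = 4;  // assumed block size in words

struct Obj { size_t start, size; };

// Build the block table for one region: table[b] = live words in the region
// to the left of the first object starting in block b.  Only blocks in which
// an object starts are filled; the rest stay at their default (0 here).
std::vector<size_t> fill_block_table(const std::vector<Obj>& objs,
                                     size_t region_words) {
  std::vector<size_t> table(region_words / BlockWords, 0);
  size_t live_total = 0;             // running total of live words seen so far
  size_t prev_block = table.size();  // sentinel: no block filled yet
  for (const Obj& o : objs) {        // objs must be sorted by start
    const size_t block = o.start / BlockWords;
    if (block != prev_block) {
      table[block] = live_total;     // first object starting in this block
      prev_block = block;
    }
    live_total += o.size;
  }
  return table;
}
```

Because filling depends only on the (immutable) object layout, two threads filling the same region's table write identical values, which is why the unsynchronized fill mentioned earlier is harmless.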
// No objects start in this region.

// Ensure the first loop iteration decides that the block has changed.

// The destination of the first live object that starts in the region is one
// past the end of the partial object entering the region (if any).
assert(/* elided */, "live objects skipped because closure is full");

// We need a monotonically non-decreasing time in ms but
// os::javaTimeMillis() does not guarantee monotonicity.
// XXX See note in genCollectedHeap::millis_since_last_gc().

// We need a monotonically non-decreasing time in ms but
// os::javaTimeMillis() does not guarantee monotonicity.

// This test is necessary; if omitted, the pointer updates to a partial object
// that crosses the dense prefix boundary could be overwritten.

// The start_array must be updated even if the object is not moving.

// Updates the references in the object to their new values.

// Prepare for compaction.  This method is executed once
// (i.e., by a single thread) before compaction.

// Save the updated location of the intArrayKlassObj for
// filling holes in the dense prefix.