/*
 * Copyright (c) 2001, 2013, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

// Different defaults for different number of GC threads
// They were chosen by running GCOld and SPECjbb on debris with different
// numbers of GC threads and choosing them based on the results

static double rs_length_diff_defaults[] = {
  0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
};

static double cost_per_card_ms_defaults[] = {
  0.01, 0.005, 0.005, 0.003, 0.003, 0.002, 0.002, 0.0015
};

static double young_cards_per_entry_ratio_defaults[] = {
  1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
};

static double cost_per_entry_ms_defaults[] = {
  0.015, 0.01, 0.01, 0.008, 0.008, 0.0055, 0.0055, 0.005
};

static double cost_per_byte_ms_defaults[] = {
  0.00006, 0.00003, 0.00003, 0.000015, 0.000015, 0.00001, 0.00001, 0.000009
};

// these should be pretty consistent
static double constant_other_time_ms_defaults[] = {
  5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0
};

static double young_other_cost_per_region_ms_defaults[] = {
  0.3, 0.2, 0.2, 0.15, 0.15, 0.12, 0.12, 0.1
};

static double non_young_other_cost_per_region_ms_defaults[] = {
  1.0, 0.7, 0.7, 0.5, 0.5, 0.42, 0.42,
  0.30
};

// Incremental CSet attributes

#ifdef _MSC_VER // the use of 'this' below gets a warning, make it go away

// add here any more surv rate groups

// Set up the region size and associated fields. Given that the
// policy is created before the heap, we have to set this up here,
// so it's done as soon as possible.

// Currently, we only use a single switch for all the heuristics.
// Given that we don't currently have a verboseness level
// parameter, we'll hardcode this to high. This can be easily
// changed in the future.

// Below, we might need to calculate the pause time target based on
// the pause interval. When we do so we are going to give G1 maximum
// flexibility and allow it to do pauses when it needs to. So, we'll
// arrange for the pause interval to be pause time target + 1 to
// ensure that a) the pause time target is maximized with respect to
// the pause interval and b) we maintain the invariant that pause
// time target < pause interval. If the user does not want this
// maximum flexibility, they will have to set the pause interval
// explicitly.

// First make sure that, if either parameter is set, its value is
// reasonable.

// Then, if the pause time target parameter was not set, set it to
// the default value.

// The default pause time target in G1 is 200ms

// We do not allow the pause interval to be set without the
// pause time target
            "without setting MaxGCPauseMillis");

// Then, if the interval parameter was not set, set it according to
// the pause time target (this will also deal with the case when the
// pause time target is the default value).

// Finally, make sure that the two parameters are consistent.
            "MaxGCPauseMillis (%u) should be less than "
            "GCPauseIntervalMillis (%u)",
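The flag reconciliation described in the comments above can be sketched as follows. This is an illustrative mirror of the policy, not the HotSpot code itself; the function name and the use of 0 for "flag not set" are assumptions made for the example. It derives a missing interval as target + 1 and checks the target < interval invariant.

```cpp
#include <cassert>
#include <cstdint>

// "The default pause time target in G1 is 200ms" (from the comments above).
const uint32_t kDefaultPauseTargetMs = 200;

struct PauseParams { uint32_t target_ms; uint32_t interval_ms; };

// Hypothetical reconciliation: 0 stands for "flag not set on the command line".
PauseParams reconcile_pause_params(uint32_t target_ms, uint32_t interval_ms) {
  if (target_ms == 0) {
    target_ms = kDefaultPauseTargetMs;   // pause time target not set: default
  }
  if (interval_ms == 0) {
    interval_ms = target_ms + 1;         // interval not set: target + 1
  }
  // Finally, the two parameters must be consistent.
  assert(target_ms < interval_ms &&
         "MaxGCPauseMillis should be less than GCPauseIntervalMillis");
  return PauseParams{target_ms, interval_ms};
}
```

With both flags unset this yields the 200/201 pairing that gives G1 its maximum flexibility while preserving the invariant.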
// Put an artificial ceiling on this so that it's not set to a silly value.
    warning("G1ConfidencePercent is set to a value that is too large, "

// start conservatively (around 50ms is about right)

// _max_survivor_regions will be calculated by
// update_young_list_target_length() during initialization.

            "we should have set it to a default value set_g1_gc_flags() "

// Put an artificial ceiling on this so that it's not set to a silly value.
    warning("G1ReservePercent is set to a value that is too large, "

// This will be set when the heap is expanded
// for the first time during initialization.

    warning("-XX:NewSize and -XX:MaxNewSize override -XX:NewRatio");
// Do nothing. Values set on the command line, don't update them at runtime.

// Set aside an initial future to_space.

// We may immediately start allocating regions and placing them on the
// collection set list. Initialize the per-collection set info

// Create the jstat counters for the policy.

// end condition 1: not enough space for the young regions
// end condition 2: prediction is over the target pause time
// end condition 3: out-of-space (conservatively!)

// re-calculate the necessary reserve
// We use ceiling so that if reserve_regions_d is > 0.0 (but
// smaller than 1.0) we'll get 1.

// otherwise we don't have enough info to make the prediction

// make sure we don't go below any user-defined minimum bound

// Here, we might want to also take into account any additional
// constraints (i.e., user-defined minimum bound). Currently, we
// effectively don't set this bound.

// if it's set to the default value (-1), we should predict it;
// otherwise, use the given value.

// Calculate the absolute and desired min bounds.

// This is how many young regions we already have (currently: the survivors).

// This is the absolute minimum young length, which ensures that we
// can allocate one eden region in the worst-case.

// Calculate the absolute and desired max bounds.

// We will try our best not to "eat" into the reserve.

// Don't calculate anything and let the code below bound it to
// the desired_min_length, i.e., do the next GC as soon as
// possible to maximize how many old regions we can add to it.

// The user asked for a fixed young gen so we'll fix the young gen
// whether the next GC is young or mixed.

// Make sure we don't go over the desired max length, nor under the
// desired min length. In case they clash, desired_min_length wins,
// which is why that test is second.
       "we should be able to allocate at least one eden region");
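The bounding step described above (cap at the max first, then apply the min so that desired_min_length wins on a clash) can be sketched as:

```cpp
#include <algorithm>
#include <cassert>

// Sketch of the clamp above: the min bound is applied second, so it wins
// if the two bounds clash. Function and parameter names are illustrative.
unsigned bound_young_length(unsigned young_length,
                            unsigned desired_min,
                            unsigned desired_max) {
  young_length = std::min(young_length, desired_max); // cap at the max...
  young_length = std::max(young_length, desired_min); // ...then min wins
  return young_length;
}
```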
// In case some edge-condition makes the desired max length too small...

// We'll adjust min_young_length and max_young_length not to include
// the already allocated young regions (i.e., so they reflect the
// min and max eden regions we'll allocate). The base_min_length
// will be reflected in the predictions by the
// survivor_regions_evac_time prediction.

// Here, we will make sure that the shortest young length that
// makes sense fits within the target pause time.

// The shortest young length will fit into the target pause time;
// we'll now check whether the absolute maximum number of young
// regions will fit in the target pause time. If not, we'll do
// a binary search between min_young_length and max_young_length.

// The maximum young length will fit into the target pause time.
// We are done so set min young length to the maximum length (as
// the result is assumed to be returned in min_young_length).

// The maximum possible number of young regions will not fit within
// the target pause time so we'll search for the optimal
// length. The loop invariants are:
//
//   min_young_length < max_young_length
//   min_young_length is known to fit into the target pause time
//   max_young_length is known not to fit into the target pause time
//
// Going into the loop we know the above hold as we've just
// checked them. Every time around the loop we check whether
// the middle value between min_young_length and
// max_young_length fits into the target pause time. If it
// does, it becomes the new min. If it doesn't, it becomes
// the new max. This way we maintain the loop invariants.

// The result is min_young_length which, according to the
// loop invariants, should fit within the target pause time.

// These are the post-conditions of the binary search above:
       "otherwise we should have discovered that max_young_length "
       "fits into the pause target and not done the binary search");
       "min_young_length, the result of the binary search, should "
       "fit into the pause target");
       "min_young_length, the result of the binary search, should be "
       "optimal, so no larger length should fit into the pause target");
// Even the minimum length doesn't fit into the pause time
// target, return it as the result nevertheless.

// add 10% to avoid having to recalculate often

// This method controls how a collector handles one or more
// of its generations being fully allocated.

// also call verify_young_ages on any additional surv rate groups

// Release the future to-space so that it is available for compaction into.

// Consider this like a collection pause for the purposes of allocation
// transitions and make sure we start with young GCs after the Full GC.

// also call this on any additional surv rate groups

// Reset survivors SurvRateGroup.

// We only need to do this here as the policy will only be applied
// to the GC we're about to start, so there is no point in calculating
// this every time we calculate / recalculate the target young length.

// do that for any other surv rate groups

       "request concurrent cycle initiation",
       "do not request concurrent cycle initiation",

// Anything below that is considered to be zero
       "otherwise, the subtraction below does not make sense");

// do that for any other surv rate groups too

// Note: this might have already been set, if during the last
// pause we decided to start a cycle but at the beginning of
// this pause we decided to postpone it. That's OK.

// this is where we update the allocation rate of the application

// This usually happens due to the timer not having the required
// granularity. Some Linuxes are the usual culprits.
// We'll just set it to something (arbitrarily) small.

// We maintain the invariant that all objects allocated by mutator
// threads will be allocated out of eden regions. So, we can use
// the eden region number allocated since the previous GC to
// calculate the application's allocation rate. The only exception
// to that is humongous objects that are allocated separately. But
// given that humongous object allocations do not really affect
// either the pause's duration or when the next pause will take
// place we can safely ignore them here.

// Dump info to allow post-facto debugging

// In debug mode, terminate the JVM if the user wants to debug at this point.

// Clip ratio between 0.0 and 1.0, and continue. This will be fixed in
// CR 6902692 by redoing the manner in which the ratio is incrementally computed.

// This is supposed to be the "last young GC" before we start
// doing mixed GCs. Here we decide whether to start mixed GCs or not.
       "do not start mixed GCs")) {

// This is a mixed GC. Here we decide whether to continue doing
// more mixed GCs or not.
       "do not continue mixed GCs")) {
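The allocation-rate update described above (eden regions allocated since the last GC, divided by elapsed time, with a guard for timers that report a degenerate interval) can be sketched like this. The function name and the exact fallback constant are assumptions; the comments only say "something (arbitrarily) small".

```cpp
#include <cassert>

// All mutator allocation goes into eden regions, so regions allocated
// since the previous GC over elapsed time gives the application's rate.
// Humongous allocations are ignored, per the comments above.
double update_alloc_rate(unsigned eden_regions_allocated, double elapsed_sec) {
  const double kMinElapsedSec = 1e-6; // timer-granularity guard (arbitrary)
  if (elapsed_sec < kMinElapsedSec) {
    elapsed_sec = kMinElapsedSec;     // some Linuxes are the usual culprits
  }
  return (double) eden_regions_allocated / elapsed_sec; // regions per second
}
```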
// do that for any other surv rate groups

// This is defensive. For a while _max_rs_lengths could get
// smaller than _recorded_rs_lengths which was causing
// rs_length_diff to get very large and mess up the RSet length
// predictions. The reason was unsafe concurrent updates to the
// _inc_cset_recorded_rs_lengths field which the code below guards
// against (see CR 7118202). This bug has now been fixed (see CR
// 7119027). However, I'm still worried that
// _inc_cset_recorded_rs_lengths might still end up somewhat
// inaccurate. The concurrent refinement thread calculates an
// RSet's length concurrently with other CR threads updating it
// which might cause it to calculate the length incorrectly (if,
// say, it's in mid-coarsening). So I'll leave in the defensive
// conditional below just in case.

// Note that _mmu_tracker->max_gc_time() returns the time in seconds.

g = (int)(g * dec_k);
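The `g = (int)(g * dec_k);` statement above is a multiplicative decay with truncation; as the following comment notes, reaching 0 is acceptable and means mutator-only processing. A minimal sketch (the wrapper function name is made up for illustration):

```cpp
#include <cassert>

// Scale g down by dec_k (a factor < 1.0) and truncate toward zero.
int decay_refinement_threads(int g, double dec_k) {
  g = (int)(g * dec_k);  // can become 0, that's OK
  return g;
}
```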
// Can become 0, that's OK. That would mean a mutator-only processing.

// Change the refinement threads params

// Change the barrier params

// Predicting the number of cards is based on which type of GC

// The prediction of the "other" time for this region is based
// upon the region type and NOT the GC type.

// We will double the existing space, or take
// G1ExpandByPercentOfAvailable % of the available expansion
// space, whichever is smaller, bounded below by a minimum
// expansion (unless that's all that's left.)

// add this call for any other surv rate groups

// for debugging, bit of a hack...

// We use ceiling so that if expansion_region_num_d is > 0.0 (but
// less than 1.0) we'll get 1.

// Calculates survivor space parameters.

// We use ceiling so that if max_survivor_regions_d is > 0.0 (but
// smaller than 1.0) we'll get 1.

       "request concurrent cycle initiation",
       "do not request concurrent cycle initiation",
// We are about to decide on whether this pause will be an
// initial-mark pause.

// First, during_initial_mark_pause() should not be already set. We
// will set it here if we have to. However, it should be cleared by
// the end of the pause (it's only set for the duration of an
// initial-mark pause).

// We had noticed on a previous pause that the heap occupancy has
// gone over the initiating threshold and we should start a
// concurrent marking cycle. So we might initiate one.

// The concurrent marking thread is not "during a cycle", i.e.,
// it has completed the last one. So we can go ahead and
// initiate a new cycle.

// We do not allow mixed GCs during marking.

// And we can now clear initiate_conc_mark_if_possible() as
// we've already acted on it.

       "initiate concurrent cycle",

// The concurrent marking thread is still finishing up the
// previous cycle. If we start one right now the two cycles
// overlap. In particular, the concurrent marking thread might
// be in the process of clearing the next marking bitmap (which
// we will use for the next cycle if we start one). Starting a
// cycle now will be bad given that parts of the marking
// information might get cleared by the marking thread. And we
// cannot wait for the marking thread to finish the cycle as it
// periodically yields while clearing the next marking bitmap
// and, if it's in a yield point, it's waiting for us to
// finish. So, at this point we will not start a cycle and we'll
// let the concurrent marking thread complete the last one.

       "do not initiate concurrent cycle",
// We only include humongous regions in collection
// sets when concurrent mark shows that their contained object is
// unreachable.

// Do we have any marking information for this region?

// We will skip any region that's currently used as an old GC
// alloc region (we should not consider those for collection
// before we fill them up).

// Do we have any marking information for this region?

// We will skip any region that's currently used as an old GC
// alloc region (we should not consider those for collection
// before we fill them up).

// Back to zero for the claim value.

// The use of MinChunkSize = 8 in the original code
// causes some assertion failures when the total number of
// regions is less than 8. The code here tries to fix that.
// Should the original code also be fixed?
       "The active gc workers should be greater than 0");

// In a product build do something reasonable to avoid a crash.

// Add the heap region at the head of the non-incremental collection set

// Initialize the per-collection-set information

// The two "main" fields, _inc_cset_recorded_rs_lengths and
// _inc_cset_predicted_elapsed_time_ms, are updated by the thread
// that adds a new region to the CSet. Further updates by the
// concurrent refinement thread that samples the young RSet lengths
// are accumulated in the *_diffs fields. Here we add the diffs to
// the "main" fields.

// This is defensive. The diff should in theory always be positive
// as RSets can only grow between GCs. However, given that we
// sample their size concurrently with other threads updating them
// it's possible that we might get the wrong size back, which
// could make the calculations somewhat inaccurate.

// This routine is used when:
// * adding survivor regions to the incremental cset at the end of an
//   evacuation pause,
// * adding the current allocation region to the incremental cset
//   when it is retired, and
// * updating existing policy information for a region in the
//   incremental cset via young list RSet sampling.
// Therefore this routine may be called at a safepoint by the
// VM thread, or in-between safepoints by mutator threads (when
// retiring the current allocation region) or a concurrent
// refine thread (RSet sampling).

// Cache the values we have added to the aggregated information
// in the heap region in case we have to remove this region from
// the incremental collection set, or it is updated by the

// Update the CSet information that is dependent on the new RS length
       "should not be at a safepoint");
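The diff-accumulation scheme described above (concurrent samplers write to separate `*_diffs` fields to avoid atomics, and the diffs are folded into the "main" fields at the start of a GC, with a defensive guard since RSet lengths should only grow) can be sketched as follows. The struct and field names are simplified stand-ins for `_inc_cset_recorded_rs_lengths` and its diff field.

```cpp
#include <cassert>
#include <cstddef>

struct IncCSetStats {
  size_t    recorded_rs_lengths;       // the "main" field
  ptrdiff_t recorded_rs_lengths_diffs; // accumulated by concurrent sampling

  // Fold the diffs in at the start of a GC. Defensive: a diff that would
  // drive the total negative is ignored (a bogus concurrent sample).
  void apply_diffs_at_gc_start() {
    ptrdiff_t total =
        (ptrdiff_t) recorded_rs_lengths + recorded_rs_lengths_diffs;
    if (total >= 0) {
      recorded_rs_lengths = (size_t) total;
    }
    recorded_rs_lengths_diffs = 0;
  }
};
```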
// We could have updated _inc_cset_recorded_rs_lengths and
// _inc_cset_predicted_elapsed_time_ms directly but we'd need to do
// that atomically, as this code is executed by a concurrent
// refinement thread, potentially concurrently with a mutator thread
// allocating a new region and also updating the same fields. To
// avoid the atomic operations we accumulate these updates on two
// separate fields (*_diffs) and we'll just add them to the "main"
// fields at the start of a GC.

// information in the heap region here (before the region gets added
// to the collection set). An individual heap region's cached values
// are calculated, aggregated with the policy collection set info,
// and cached in the heap region here (initially) and (subsequently)
// by the Young List sampling code.

// Add the region at the RHS of the incremental cset

// We should only ever be appending survivors at the end of a pause

// Now add the region at the right hand side

// Add the region to the LHS of the incremental cset

// Survivors should be added to the RHS at the end of a pause

// Add the region at the left hand side

// Returns the given amount of reclaimable bytes (that represents
// the amount of reclaimable space still to be collected) as a
// percentage of the current heap capacity.

// Is the amount of uncollected reclaimable space above G1HeapWastePercent?

// The min old CSet region bound is based on the maximum desired
// number of mixed GCs after a cycle. I.e., even if some old regions
// look expensive, we should add them to the CSet anyway to make
// sure we go through the available old regions in no more than the
// maximum desired number of mixed GCs.
//
// The calculation is based on the number of marked regions we added
// to the CSet chooser in the first place, not how many remain, so
// that the result is the same during all mixed GCs that follow a cycle.

// The max old CSet region bound is based on the threshold expressed
// as a percentage of the heap size. I.e., it should bound the
// number of old regions added to the CSet irrespective of how many

// The young list is laid out so that the survivor regions from the
// previous pause are appended to the RHS of the young list, i.e.
//   [Newly Young Regions ++ Survivors from last pause].

// Clear the fields that point to the survivor list - they are all young now.

       "add young regions to CSet",
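The two old-region CSet bounds described above can be sketched with the two rules the comments give: the minimum spreads the regions marked for collection over the desired number of mixed GCs (a ceiling division, so the count is consumed in no more than that many GCs), and the maximum caps each CSet at a percentage of the heap. The flag names echo the real G1MixedGCCountTarget and G1OldCSetRegionThresholdPercent switches, but the arithmetic here is an illustration, not the exact HotSpot code.

```cpp
#include <cassert>

// Min bound: marked regions spread over the desired number of mixed GCs,
// based on the count added to the CSet chooser, not how many remain.
unsigned min_old_cset_length(unsigned marked_regions,
                             unsigned gc_count_target) {
  return (marked_regions + gc_count_target - 1) / gc_count_target; // ceil
}

// Max bound: a percentage of the heap size, irrespective of availability.
unsigned max_old_cset_length(unsigned heap_regions,
                             unsigned threshold_percent) {
  unsigned bound = heap_regions * threshold_percent / 100;
  return bound > 0 ? bound : 1;
}
```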
// The number of recorded young regions is the incremental
// collection set's current size

// Set the start of the non-young choice time.

// Added maximum number of old regions to the CSet.
       "finish adding old regions to CSet",

// Stop adding regions if the remaining reclaimable space is
// not above G1HeapWastePercent.

// We've added enough old regions that the amount of uncollected
// reclaimable space is at or below the waste threshold. Stop
// adding old regions to the CSet.
       "finish adding old regions to CSet",

// Too expensive for the current CSet.

// We have added the minimum number of old regions to the CSet,
// we are done with this CSet.
       "finish adding old regions to CSet",

// We'll add it anyway given that we haven't reached the
// minimum number of old regions.

// In the non-auto-tuning case, we'll finish adding regions
// to the CSet if we reach the minimum.
       "finish adding old regions to CSet",

// We will add this region to the CSet.
       "finish adding old regions to CSet",

// We print the information once here at the end, predicated on
// whether we added any apparently expensive regions or not, to
// avoid generating output per region.
       "added expensive regions to CSet",
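The stopping rule used above can be sketched from the two comments that define it: express the uncollected reclaimable space as a percentage of the current heap capacity, and stop adding old regions once that percentage is no longer above the waste threshold (G1HeapWastePercent). Function names are illustrative.

```cpp
#include <cassert>

// Reclaimable bytes as a percentage of current heap capacity.
double reclaimable_percent(double reclaimable_bytes, double capacity_bytes) {
  return reclaimable_bytes * 100.0 / capacity_bytes;
}

// Keep adding old regions only while the uncollected reclaimable
// space is strictly above the waste threshold.
bool continue_adding_old_regions(double reclaimable_bytes,
                                 double capacity_bytes,
                                 double heap_waste_percent) {
  return reclaimable_percent(reclaimable_bytes, capacity_bytes)
         > heap_waste_percent;
}
```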