Lines Matching refs:to

16  * 2 along with this work; if not, write to the Free Software Foundation,
75 // The token-passing protocol gives priority to the VM thread. The
82 // thread to interfere, it obtains the CMS token.
84 // If either thread tries to get the token while the other has
89 // for long periods of time as the CMS thread continues to hog
97 // Two important conditions that we have to satisfy:
113 // Unfortunately, I couldn't come up with a good abstraction to factor and
115 // further below. That's something we should try to do. Also, the proof
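
The hits at 75-113 are from the long comment describing the token-passing protocol between the VM thread and the CMS thread, in which the VM thread gets priority. A minimal sketch of that style of priority handshake, using std::mutex/std::condition_variable rather than HotSpot's Mutex/Monitor classes; all names here are illustrative, not from the source:

    #include <condition_variable>
    #include <mutex>

    class PriorityToken {
      std::mutex _m;
      std::condition_variable _cv;
      bool _held = false;        // token currently held by either thread?
      bool _vm_waiting = false;  // VM thread has announced it wants the token

    public:
      // VM thread: announce intent first, so the CMS thread stops
      // re-acquiring and hogging the token, then wait for release.
      void acquire_vm() {
        std::unique_lock<std::mutex> l(_m);
        _vm_waiting = true;
        _cv.wait(l, [this] { return !_held; });
        _held = true;
        _vm_waiting = false;
      }

      // CMS thread: defer to a waiting VM thread instead of hogging.
      void acquire_cms() {
        std::unique_lock<std::mutex> l(_m);
        _cv.wait(l, [this] { return !_held && !_vm_waiting; });
        _held = true;
      }

      void release() {
        { std::lock_guard<std::mutex> l(_m); _held = false; }
        _cv.notify_all();
      }
    };
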
133 "Incorrect argument to constructor");
164 // Wrapper class to temporarily disable icms during a foreground cms collection.
180 // This struct contains per-thread things necessary to support parallel
225 // offsets match. The ability to tell free chunks from objects
360 // Initialize the alphas to the bootstrap value of 100.
398 // Start a cms collection if there isn't enough space to promote
406 // Apply a further correction factor which tries to adjust
418 // Add 1 in case the consumption rate goes to zero.
424 // Compare the duration of the cms collection to the
427 // to the start of the cms sweep (less than the total
434 // We add "gc0_period" to the "work" calculation
436 // end of a scavenge, so we need to conservatively
438 // in the query so as to avoid concurrent mode failures
439 // due to starting the collection just a wee bit too
443 // If a concurrent mode failure occurred recently, we want to be
458 // amount of change to prevent wild oscillation.
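
The hits between 398 and 458 belong to the pacing heuristic that decides when to start a concurrent cycle: estimate how long until the generation fills (note the "+ 1" guard against a zero consumption rate at 418) and compare that against the expected collection duration, padded so the cycle doesn't start "a wee bit too" late and cause a concurrent mode failure. A hedged reconstruction of the shape of that test; the names and safety factor are assumptions, not the collector's exact code:

    // Sketch: start a CMS cycle when the time expected to exhaust free
    // space no longer comfortably exceeds the expected cycle duration.
    bool should_start_cms(double free_words,
                          double consumption_rate_wps,    // promotion rate, words/sec
                          double expected_cms_duration_s, // cms_duration() analogue
                          double safety_factor) {         // e.g. 1.1 (assumed)
      // "Add 1 in case the consumption rate goes to zero" (hit at 418).
      double time_until_full_s = free_words / (consumption_rate_wps + 1.0);
      return expected_cms_duration_s * safety_factor >= time_until_full_s;
    }
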
571 // Adjust my span to cover old (cms) gen and perm gen
614 // (MUT, marking bit map etc.) to cover both generations subject to
617 // First check that _permGen is adjacent to _cmsGen and above it.
626 // For use by dirty card to oop closures.
634 warning("Failed to allocate CMS Bit Map");
645 warning("Failed to allocate CMS Marking Stack");
649 warning("Failed to allocate CMS Revisit Stack");
736 "CMS Thread should refer to this gen");
766 warning("Failed to allocate survivor plab/chunk array");
784 warning("Failed to allocate survivor plab array");
814 // If class unloading is disabled we want to include all classes in the root set.
881 // part of the space ends in a free block we should add that to
927 // compaction is expected to be a rare event with
943 // If incremental collection failed, we just want to expand
944 // to the limit.
1022 // allowing the object to be blackened (and its references scanned)
1027 // to be safely navigable by block_start().
1049 // allowing the object to be blackened (and its references scanned)
1057 // 1. need to mark the object as live so it isn't collected
1058 // 2. need to mark the 2nd bit to indicate the object may be uninitialized
1059 // 3. need to mark the end of the object so marking, precleaning or sweeping
1089 // We don't need to mark the object as uninitialized (as
1092 // time the marking, precleaning or sweeping get to look at it.
1094 // where we need to ensure that concurrent readers of the
1095 // block offset table are able to safely navigate a block that
1096 // is in flux from being free to being allocated (and in
1108 // marked, we need to dirty the entire array, not just its head.
1110 // The [par_]mark_range() method expects mr.end() below to
1111 // be aligned to the granularity of a bit's representation
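
Hits 1022-1111 concern promoting objects whose headers are not yet installed: the block must be marked live, flagged as possibly uninitialized, and given an end mark so marking, precleaning, and sweeping can step over it (the three steps listed at 1057-1059, i.e. the Printezis convention of two consecutive 1 bits). A toy bitmap sketch of that encoding; the class and method names are invented for illustration:

    #include <cstddef>
    #include <vector>

    // Toy one-bit-per-word marking bitmap; not HotSpot's CMSBitMap.
    class ToyBitMap {
      std::vector<bool> _bits;
    public:
      explicit ToyBitMap(std::size_t words) : _bits(words, false) {}
      void set(std::size_t i)       { _bits[i] = true; }
      bool get(std::size_t i) const { return _bits[i]; }

      // 1. mark the start so the block is treated as live;
      // 2. mark start+1 to flag "may be uninitialized";
      // 3. mark the last word so scanners can step over the block without
      //    reading its not-yet-installed header.
      void mark_uninitialized_block(std::size_t start, std::size_t size_in_words) {
        // Needs size >= 3 for the encoding to be unambiguous (compare the
        // "size >= 3 ... Printezis marks" asserts in the hits at 6462/6475).
        set(start);
        set(start + 1);
        set(start + size_in_words - 1);
      }

      // Reader side: recover an uninitialized block's size from its marks.
      std::size_t uninitialized_block_size(std::size_t start) const {
        std::size_t end = start + 2;
        while (!get(end)) ++end;  // scan forward to the end mark
        return end - start + 1;
      }
    };
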
1151 // The duty_cycle is a percentage between 0 and 100; convert to words and
1161 // The limits may be adjusted (shifted to the right) by
1162 // CMSIncrementalOffset, to allow the application more mutator time after a
1196 // Any changes here should try to maintain the invariant
1204 // A start_limit equal to end() means the duty cycle is 0, so treat that as a
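
Hits 1151-1204 are from the incremental-mode (iCMS) duty-cycle arithmetic: the duty cycle is a percentage of eden during which the incremental collector may run, converted to a word range, with a start equal to eden's end meaning a 0% duty cycle. A simplified sketch of that conversion, ignoring the CMSIncrementalOffset shift mentioned at 1161-1162; names are illustrative:

    #include <cstddef>

    // The incremental collector may run while eden's allocation pointer
    // lies in [start_words, stop_words); 0% duty -> start == eden end
    // (compare the hit at 1204).
    struct DutyRange { std::size_t start_words, stop_words; };

    DutyRange duty_cycle_to_words(std::size_t eden_words, unsigned duty_cycle) {
      std::size_t span = eden_words * duty_cycle / 100;  // percentage -> words
      return { eden_words - span, eden_words };
    }
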
1267 // delegate to underlying space.
1282 // Since there's currently no next generation, we don't try to promote
1285 "is made to pass on a possibly failing "
1286 "promotion to next generation");
1327 // [Perm Gen objects need to be "parsable" before they can be navigated]
1337 // [Perm Gen comment above continues to hold]
1351 // so readers will either need to come back later or stall until
1353 // allocation, P-bits, when available, may be used to determine the
1356 // Things to support parallel young-gen collection.
1396 // Otherwise, copy the object. Here we must be careful to insert the
1417 // to delay the transition from uninitialized to full object
1430 // We should now be able to calculate the right size for this object
1524 // If the estimated time to complete a cms collection (cms_duration())
1533 // We want to conservatively collect somewhat early in order
1534 // to try and "bootstrap" our CMS/promotion statistics;
1552 // XXX We need to make sure that the gen expansion
1562 // this is not likely to be productive in practice because it's probably too
1566 "You may want to check the correctness of the following");
1600 // We want to start a new collection cycle if any of the following
1604 // . we recently needed to expand this space and have not, since that
1606 // . the underlying space believes that it may be a good idea to initiate
1609 // going to fail, or there is believed to be excessive fragmentation in
1671 // But I am not placing that assert here to allow future
1678 // Need the free list locks for the call to free() in compute_new_size()
1719 // The foreground and background collectors need to coordinate in order
1720 // to make sure that they do not mutually interfere with CMS collections.
1722 // the foreground collector may need to take over (preempt) and
1728 // can be passed to the foreground collector.
1750 // _foregroundGCIsActive - Set to true by the foreground collector when
1753 // _foregroundGCShouldWait - Set to true by the background collector
1756 // CGC_lock - monitor used to protect access to the above variables
1757 // and to notify the foreground and background collectors.
1763 // waits on the CGC_lock for _foregroundGCShouldWait to be false
1765 // are released so as not to block the background collector
1777 // waits on _CGC_lock for _foregroundGCIsActive to become false
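
The run of hits from 1719 through 1791 documents how the foreground and background collectors coordinate through _foregroundGCIsActive, _foregroundGCShouldWait, and the CGC_lock monitor. A condensed sketch of that two-flag handshake, with std::mutex/std::condition_variable standing in for HotSpot's monitor; this is a schematic, not the collector's code:

    #include <condition_variable>
    #include <mutex>

    struct CmsCoordination {
      std::mutex cgc_lock;                     // stands in for CGC_lock
      std::condition_variable cv;
      bool foreground_gc_is_active = false;    // set by foreground collector
      bool foreground_gc_should_wait = false;  // set by background collector

      // Foreground collector: announce itself, then wait until the
      // background collector reaches a point where takeover is safe.
      void foreground_acquire() {
        std::unique_lock<std::mutex> l(cgc_lock);
        foreground_gc_is_active = true;
        cv.wait(l, [this] { return !foreground_gc_should_wait; });
      }

      void foreground_release() {
        { std::lock_guard<std::mutex> l(cgc_lock);
          foreground_gc_is_active = false; }
        cv.notify_all();
      }

      // Background collector: before an uninterruptible stretch of a
      // phase, claim the baton; afterwards release it and wake a possibly
      // waiting foreground collector.
      void background_begin_phase() {
        std::unique_lock<std::mutex> l(cgc_lock);
        cv.wait(l, [this] { return !foreground_gc_is_active; });
        foreground_gc_should_wait = true;
      }

      void background_end_phase() {
        { std::lock_guard<std::mutex> l(cgc_lock);
          foreground_gc_should_wait = false; }
        cv.notify_all();
      }
    };
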
1791 "shouldn't try to acquire control from self!");
1801 // Signal to a possibly ongoing concurrent collection that
1802 // we want to do a foreground collection.
1810 // do yields to improve the granularity of the collection.
1812 // We need to lock the Free list lock for the space that we are
1820 // We are going to be waiting for action for the CMS thread;
1837 // not know to give priority to VM thread? Actually, I think
1856 // Check if we need to do a compaction, or if not, whether
1857 // we need to start the mark-sweep from scratch.
1900 young_gen->to()->capacity() -
1932 // A work method used by foreground collection to determine
1935 // NOTE: the intent is to make UseCMSCompactAtFullCollection
1948 "You may want to check the correctness of the following");
1949 // Inform cms gen if this was due to partial collection failing.
1950 // The CMS gen may use this fact to determine its expansion policy.
1953 "Should have been noticed, reacted to and cleared");
1963 // We are about to do a last ditch collection attempt
1964 // so it would normally make sense to do a compaction
1965 // to reclaim as much space as possible.
1969 // we'd have to start over, or so little has been done
1971 // appears to be the sensible choice in either case.
1974 // We have been asked to clear all soft refs, but not to
1977 // past that phase, we'll need to redo the refs discovery phase and
1981 // phase, we'll choose to just redo the mark-sweep
1986 _collectorState = Resetting; // skip to reset to start new cycle
1994 // A work method used by the foreground collector to do
2008 "collections passed to foreground collector", _full_gcs_since_conc_gc);
2016 // Temporarily widen the span of the weak reference processing to
2046 // Note that we do not use this sample to update the _inter_sweep_estimate.
2105 // A work method used by the foreground collector to do
2111 gclog_or_tty->print_cr("Pass concurrent collection to foreground "
2123 // was in progress and has now finished. No need to do it
2162 // A utility class that is used by the CMS collector to
2164 // usual obligation to wait for the background collector to
2171 assert(_c->_foregroundGCShouldWait, "Else should not need to call");
2173 // allow a potentially blocked foreground collector to proceed
2191 // foreground collector. There was originally an attempt to share
2193 // collector but the if-then-else required made it cleaner to have
2215 // Reset the expansion cause, now that we are about to begin
2219 // Decide if we want to enable class unloading as part of the
2224 // Signal that we are about to start a collection
2239 // background collector to finish the phase and change state atomically.
2251 // except while it is waiting for the background collector to yield.
2254 // if the background collector is about to start a phase
2270 // Check if the FG collector wants us to yield.
2273 // We yielded to a foreground GC, nothing more to be
2275 assert(_foregroundGCShouldWait == false, "We set it to false in "
2284 // The background collector can run but check to see if the
2286 // background collector was waiting to get the CGC_lock
2307 // since the background collector may have yielded to the
2359 "to Resizing must be done under the free_list_lock");
2373 // Don't move the call to compute_new_size() down
2398 // calls to here because a preempted background collection
2399 // has its state set to "Resetting".
2492 // Snapshot the soft reference policy to be used in this collection cycle.
2504 init_mark_was_synchronous = true; // fact to be exploited in re-mark
2545 // is done separately; nothing to be done in this state.
2596 // background collectors decides whether to
2601 // The background collector yields to the
2632 // Because of the need to lock the free lists and other structures in
2633 // the collector, common to all the generations that the collector is
2635 // delegate to their collector. It may have been simpler had the
2636 // current infrastructure allowed one to call a prologue on a
2638 // prologue delegate to the collector, which delegates back
2639 // some "local" work to a worker method in the individual generations
2641 // work common to all generations it's responsible for. A similar
2642 // comment applies to the gc_epilogue()'s.
2643 // The role of the variable _between_prologue_and_epilogue is to
2692 // Delegate to CMScollector which knows how to coordinate between
2699 // Not to be called directly by any other entity (for instance,
2731 // if linear allocation blocks need to be appropriately marked to allow the
2732 // the blocks to be parsable. We also check here whether we need to nudge the
2733 // CMS collector thread to start a new cycle (if it's not already active).
2762 // update_counters() that allows the utilization to be passed as a
2763 // parameter, avoiding multiple calls to used().
2921 // GC must already have cleared any refs that need to be cleared,
2922 // and traced those that need to be marked; moreover,
2923 // the marking done here is not going to interfere in any
2935 gch->ensure_parsability(false); // fill TLABs, but no need to retire them
2949 // presumably, a mutation to A failed to be picked up by preclean/remark?
3057 // delegate to CMS space
3083 // Not currently implemented; need to do the following. -- ysr.
3089 // I think we probably ought not to be required to support these
3090 // iterations at any arbitrary point; I think there ought to be some
3091 // call to enable/disable allocation profiling in a generation/space,
3092 // and the iterator ought to return the objects allocated in the
3095 // require some extra data structure to support this, we only pay the
3157 // Fix the linear allocation blocks to look like free blocks.
3198 // merely consolidate assertion checks that appear to occur together frequently.
3210 // Decide if we want to enable class unloading as part of the
3221 // calls to this method, it should have idempotent results. Moreover,
3222 // its results should be monotonically increasing (i.e. going from 0 to 1,
3223 // but not 1 to 0) between successive calls between which the heap was
3259 return; // Nothing else needs to be done at this time
3269 // CMSBitMap::sizeInBits() is used to determine if it's allocated).
3272 warning("Failed to allocate permanent generation verification CMS Bit Map;\n"
3284 // Include symbols, strings and code cache elements to prevent their resurrection.
3290 // Exclude symbols, strings and code cache elements from root scanning to
3330 // to CardGeneration and share it...
3342 // by shouldConcurrentCollect() when making decisions on whether to start
3368 // A competing par_promote might beat us to the expansion space,
3396 // A competing allocation might beat us to the expansion space,
3438 gclog_or_tty->print_cr("Expanding %s from %ldK by %ldK to %ldK",
3451 DEBUG_ONLY(if (!success) warning("grow to reserved failed");)
3485 _wallclock.stop(); // to record time
3557 // which recognizes if we are a CMS generation, and doesn't try to turn on
3578 // younger generations to keep floating garbage to a minimum.
3579 // XXX: we won't do this for now -- it's an optimization to be done later.
3605 // remark step, so it's important to catch all the nmethod oops
3607 // The final 'true' flag to gen_process_strong_roots will ensure this.
3615 gch->ensure_parsability(false); // fill TLABs, but no need to retire them
3645 // to be used to limit the extent of sweep in each generation.
3654 // we might be tempted to assert that:
3685 } else { // We failed and a foreground collection wants to take over
3689 gclog_or_tty->print_cr("bailing out to foreground collection");
3716 // . if oop is to the right of the current scan pointer,
3718 // . else (oop is to left of current scan pointer)
3722 // Note that when we do a marking step we need to hold the
3728 // we need to make sure that there is no such interference
3759 // "n_threads" is the number of threads to be terminated.
3762 // "yield" indicates whether we need the gang as a whole to yield.
3789 char _pad_front[64]; // padding to ...
3872 // because we want terminating threads to yield only if the task
3896 // nothing to do
3898 // then nothing to do
3901 // . local work-queue overflow causes stuff to be pushed on
3910 // . try to steal from other threads if GOS is empty
3922 assert(work_queue(worker_id)->size() == 0, "Expected to be empty");
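
Hits 3896-3922 outline the discipline of each parallel marking worker: drain the local work queue, spill to the global overflow stack (GOS) on local overflow, refill from it when possible, and only then try to steal from other workers. A schematic, single-threaded rendering of that loop; the queue types are stand-ins and all of the real code's synchronization is elided:

    #include <cstddef>
    #include <deque>
    #include <vector>

    // "oop" is just an opaque pointer here; ToyQueue stands in for
    // HotSpot's OopTaskQueue.
    using oop = void*;

    struct ToyQueue {
      std::deque<oop> d;
      bool pop(oop& o) {
        if (d.empty()) return false;
        o = d.back(); d.pop_back(); return true;
      }
    };

    void do_work(std::size_t me, std::vector<ToyQueue>& queues,
                 std::deque<oop>& global_overflow) {
      oop obj;
      for (;;) {
        // 1. Drain the local work queue completely.
        while (queues[me].pop(obj)) { (void)obj; /* scan obj, push its refs */ }
        // 2. Refill from the global overflow stack if it has anything.
        if (!global_overflow.empty()) {
          queues[me].d.push_back(global_overflow.front());
          global_overflow.pop_front();
          continue;
        }
        // 3. GOS empty: try to steal from some other worker's queue.
        bool stole = false;
        for (std::size_t v = 0; v < queues.size() && !stole; ++v) {
          if (v != me && queues[v].pop(obj)) {
            queues[me].d.push_back(obj);
            stole = true;
          }
        }
        if (!stole) break;  // 4. nothing anywhere: offer to terminate
      }
    }
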
3982 // using (not yet available) block-read and -write interfaces to the
3993 // Grab up to 1/4 the size of the work queue
4008 // We allow that there may be no tasks to do here because
4015 // Align down to a card boundary for the start of 0th task
4025 // compute the chunk that it corresponds to:
4029 // note that we need to do the global finger bump
4031 // the task corresponding to that region will be
4038 // There are null tasks here corresponding to chunks
4046 // For the 0th task, we'll not need to compute a block_start.
4055 // We want to skip the first object because
4056 // the protocol is to scan any object in its entirety
4061 // so we do not try to navigate uninitialized objects.
4064 // Printezis bits to avoid waiting for allocated
4065 // objects to become initialized/parsable.
4081 // the last argument to the constructor indicates whether the
4090 } // else nothing to do for this task
4091 } // else nothing to do for this task
4093 // We'd be tempted to assert here that since there are no
4094 // more tasks left to claim in this space, the global_finger
4139 // been published), so we do not need to check for
4148 // If we manage to "claim" the object, by being the
4149 // first thread to mark it, then we push it on our
4201 // We need to do this under a mutex to prevent other
4248 // We should probably use a constructor/destructor idiom to
4249 // do this unlock/lock or modify the MutexUnlocker class to
4262 // not to get a chance to wake up and take the bitmap lock between
4264 // should_yield() flag is on, let's sleep for a bit to give the
4265 // other thread a chance to wake up. The limit imposed on the number
4266 // of iterations is defensive, to avoid any unforeseen circumstances
4268 // (coordinator_yield()) method that was observed to cause the
4276 // We really need to reconsider the synchronization between the GC
4316 // from the number we requested above, do we need to do anything different
4317 // below? In particular, maybe we need to subclass the SequentialSubTasksDone
4340 // occurred; we need to do a fresh marking iteration from the
4345 // slow forward progress. It may be best to bail out and
4354 // Adjust the task to restart from _restart_addr
4381 // the last argument to iterate indicates whether the iteration
4385 // occurred; we need to do a fresh iteration from the
4391 // slow forward progress. It may be best to bail out and
4398 return false; // indicating failure to complete marking
4452 // past the next scavenge in an effort to
4460 // loop below to deal with cases where allocation
4465 // One, admittedly dumb, strategy is to give up
4467 // or after a certain maximum time. We want to make
4481 gclog_or_tty->print(" CMS: abort preclean due to loops ");
4487 gclog_or_tty->print(" CMS: abort preclean due to time ");
4494 // Sleep for some time, waiting for work to accumulate
4515 // Respond to an Eden sampling opportunity
4531 // We'd like to check that what we just sampled is an oop-start address;
4566 // to remove any reference objects with strongly-reachable
4578 // We don't want this step to interfere with a young
4579 // collection because we don't want to take CPU
4582 // Note that we don't need to protect ourselves from
4594 // The following will yield to allow foreground
4595 // collection to proceed promptly. XXX YSR:
4626 dng->to()->object_iterate_careful(&sss_cl);
4632 // CAUTION: The following closure has persistent state that may need to
4689 // . For the cards corresponding to the set bits, we scan the
4699 // other are quite benign. However, for efficiency it makes sense to keep
4701 // dirty card info to the modUnionTable. We therefore also use the
4702 // CGC_lock to protect the reading of the card table and the mod union
4705 // needs to be done carefully -- we should not try to scan
4710 // the free_list_lock and bitmap lock to do a full marking, then
4717 // further below are largely identical; if you need to modify
4728 // but it is difficult to turn the checking off just around
4729 // the yield points. It is simpler to selectively turn
4739 // It might also be fine to just use the committed part of the
4803 // the bits corresponding to the partially-scanned or unscanned
4814 // might need bitMapLock in order to read P-bits.
4830 // below are largely identical; if you need to modify
4835 // strategy: it's similar to precleanModUnionTable above, in that
4888 // uninitialized object. Redirty the bits corresponding to the
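
The hits from 4689 through 4888 describe precleaning: walk the mod-union table / card table for dirty cards, reset each card before scanning it, and rescan the objects on it; a racing mutator store simply re-dirties the card for a later pass or for the final remark. A stripped-down, single-threaded loop over a byte-per-card table; the constants and the scan callback are invented for illustration:

    #include <cstddef>
    #include <cstdint>

    constexpr std::size_t kCardSize = 512;  // bytes per card (typical, assumed)
    enum : std::uint8_t { kClean = 0, kDirty = 1 };

    // One precleaning pass: reset each dirty card *before* scanning it,
    // so a concurrent store is not lost -- it just re-dirties the card.
    std::size_t preclean_cards(std::uint8_t* cards, std::size_t n_cards,
                               char* heap_base,
                               void (*scan)(char* start, char* end)) {
      std::size_t cleaned = 0;
      for (std::size_t i = 0; i < n_cards; ++i) {
        if (cards[i] == kDirty) {
          cards[i] = kClean;
          char* start = heap_base + i * kCardSize;
          scan(start, start + kCardSize);  // rescan objects on this card
          ++cleaned;
        }
      }
      return cleaned;
    }
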
4932 // Temporarily set flag to false, GCH->do_collection will
4933 // expect it to be false and set to true
4999 gch->ensure_parsability(false); // fill TLAB's, but no need to retire them
5008 // mutators, it is possible for some reachable objects not to have been
5009 // scanned. For instance, an only reference to an object A was
5011 // A would be collected. Such updates to references in marked objects
5013 // dirtied since the first checkpoint in this GC cycle and prior to
5027 // The initial mark was stop-world, so there's no rescanning to
5028 // do; go straight on to the next step below.
5047 // remedial measures, where possible, so as to try and avoid
5122 // A value of 0 passed to n_workers will cause the number of
5123 // workers to be taken from the active workers in the work gang.
5159 // work_queue(i) is passed to the closure
5161 // also is passed to do_dirty_card_rescan_tasks() and to
5162 // do_work_steal() to select the i-th task_queue.
5179 // the critical path; thus, it's best to start off that
5186 ContiguousSpace* to_space = dng->to();
5221 "if we didn't scan the code cache, we have to be ready to drop nmethods with expired weak oops");
5235 // "worker_id" is passed to select the task_queue for "worker_id"
5265 // . compute region boundaries corresponding to task claimed
5315 // . compute region boundaries corresponding to task claimed
5317 // . apply rescanclosure to dirty mut bits for that region
5325 // CAUTION: This closure has state that persists across calls to
5337 // paradigm, the use of that persistent state will have to be
5339 // bug 4756801 work on which should examine this code to make
5340 // sure that the changes there do not run counter to the
5384 // table. Since we have been careful to partition at Card and MUT-word
5419 // only affects the number of attempts made to get work from the
5426 // not yet ready to go stealing work from others.
5427 // We'd like to assert(work_q->size() != 0, ...)
5434 // Verify that we have no work before we resort to stealing
5436 // Try to steal from other queues that have work
5443 // Loop around, finish this work, and try to steal some more
5513 // and increment _cursor[min_tid] prior to the next round i.
5564 // need to finish in order to be done).
5578 SequentialSubTasksDone* pst = dng->to()->par_seq_tasks();
5581 // need to finish in order to be done).
5594 // need to finish in order to be done).
5606 // Choose to use the number of GC workers most recently set
5608 // to ParallelGCThreads.
5631 // process_strong_roots (which currently doesn't know how to
5638 // is deferred to the future.
5649 // in the multi-threaded case, but we special-case n=1 here to get
5699 // to rescan the marked objects on the dirty cards in the modUnionTable.
5757 "if we didn't scan the code cache, we have to be ready to drop nmethods with expired weak oops");
5874 // only affects the number of attempts made to get work from the
5881 // not yet ready to go stealing work from others.
5882 // We'd like to assert(work_q->size() != 0, ...)
5888 // Verify that we have no work before we resort to stealing
5890 // Try to steal from other queues that have work
5897 // Loop around, finish this work, and try to steal some more
5965 // been set to a reasonable value. If it has not been set,
6092 // in the perm_gen_verify_bit_map. In order to do that we traverse
6131 // We need all the free list locks to make the abstract state
6132 // transition from Sweeping to Resetting. See detailed note
6138 // input to soft ref clearing policy at the next gc.
6150 // input to soft ref clearing policy at the next gc.
6163 // We need to use a monotonically non-decreasing time in ms
6175 // globally to the mutators.
6179 // from the Sweeping state to the Resizing state must be done
6180 // under the freelistLock (as is the check for whether to
6181 // allocate-live and whether to dirty the mod-union table).
6182 assert(_collectorState == Resizing, "Change of collector state to"
6187 // thus inviting a younger gen collection to promote into
6197 // CMSGen merely delegating to it.
6204 // The dictionary appears to be empty. In this case
6205 // try to coalesce at the end of the heap.
6258 gclog_or_tty->print_cr("to %d ", _debug_collection_type);
6265 // checking the mark bit map to see if the bits corresponding
6266 // to specific blocks are marked or not. Blocks that are
6270 // We need to ensure that the sweeper synchronizes with allocators
6279 // Note that we need to hold the freelistLock if we use
6281 // a mutator (or promotion) causes block contents to change
6284 // young generation GC's can't occur (they'll usually need to
6293 "Should possess CMS token to sweep");
6308 // We need to free-up/coalesce garbage/blocks from a
6339 // Clear the mark bitmap (no grey objects to start with)
6441 // thread should not be blocked if it wants to terminate
6442 // the CMS thread and yet continue to run the VM for a while
6462 assert(size >= 3, "Necessary for Printezis marks to work");
6475 assert(size >= 3, "Necessary for Printezis marks to work");
6524 // Later on we'll try to be more parsimonious with swap.
6625 // leaf lock. For printing we need to take a further lock
6626 // which has lower rank. We need to recalibrate the two
6627 // lock-ranks involved in order to be able to print the
6628 // messages below. (Or defer the printing to the caller.
6642 // Do not give up existing stack until we have managed to
6657 // Failed to double capacity, continue;
6659 gclog_or_tty->print(" (benign) Failed to expand marking stack from "SIZE_FORMAT"K to "
6667 // XXX: there seems to be a lot of code duplication here;
6670 // This closure is used to mark refs into the CMS generation in
6672 // assumes that we do not need to re-mark dirty cards; if the CMS
6753 // This closure is used to mark refs into the CMS generation at the
6754 // second (final) checkpoint, and to scan and transitively follow
6772 // stack by applying this closure to the oops in the oops popped
6775 assert(res, "Should have space to push on empty stack");
6785 // check if it's time to yield
6866 // This closure is used to mark refs into the CMS generation at the
6867 // second (final) checkpoint, and to scan and transitively follow
6882 // It is possible for several threads to be
6883 // trying to "claim" this object concurrently;
6886 // to the work queue (or overflow list).
6889 // queue to an appropriate length by applying this closure to
6903 // This closure is used to rescan the marked objects on the dirty cards
6912 // check if it's time to yield
6915 // and we have been asked to abort this ongoing preclean cycle.
6922 // change by the VM outside a safepoint. Don't try to
6926 // Signal precleaning to redirty the card since
6936 // to dirty cards only.
6941 // to scan object in its entirety.
6949 assert(size >= 3, "Necessary for Printezis marks to work");
6973 // An uninitialized object, skip to the next card, since
6974 // we may not be able to read its P-bits yet.
6977 // An object not (yet) reached by marking: we merely need to
6978 // compute its size so as to go look at the next block.
7023 // This (single-threaded) closure is used to preclean the oops in
7042 // marking stack before returning. This is to satisfy
7044 // good idea to abort immediately and complete the marking
7055 // check if it's time to yield
7094 // This closure is used to rescan the marked objects on the dirty cards
7172 // Should revisit to see if this should be restructured for
7192 // the _threshold so that we'll come back to scan this object
7200 // Bump _threshold to end_card_addr; note that
7203 // to the right.
7223 // so as to avoid monopolizing the locks involved.
7226 // We should probably use a constructor/destructor idiom to
7227 // do this unlock/lock or modify the MutexUnlocker class to
7257 assert(_bitMap->isMarked(ptr), "expected bit to be set");
7259 "should drain stack to limit stack usage");
7260 // convert ptr to an oop preparatory to scanning
7266 // advance the finger to right end of this object
7269 // On large heaps, it may take us some time to get through
7285 // of cards to be cleared in MUT (or precleaned in card table).
7286 // The set of cards to be cleared is all those that overlap
7318 assert(new_oop->is_oop(true), "Oops! expected to pop an oop");
7351 // Should revisit to see if this should be restructured for
7379 assert(_bit_map->isMarked(ptr), "expected bit to be set");
7383 "should drain stack to limit stack usage");
7384 // convert ptr to an oop preparatory to scanning
7390 // advance the finger to right end of this object
7393 // On large heaps, it may take us some time to get through
7408 // of cards to be cleared in MUT (or precleaned in card table).
7409 // The set of cards to be cleared is all those that overlap
7456 assert(new_oop->is_oop(true), "Oops! expected to pop an oop");
7464 // Yield in response to a request from VM Thread or
7496 // Should revisit to see if this should be restructured for
7507 "should drain stack to limit stack usage");
7508 // convert addr to an oop preparatory to scanning
7512 // advance the finger to right end of this object
7521 assert(new_oop->is_oop(), "Oops! expected to pop an oop");
7576 // anything including and to the right of _finger
7643 // We need to do this under a mutex to prevent other
7663 // sampled, this bit in the bit map; we'll need to
7664 // use the marking stack to scan this oop's oops.
7682 // anything including and to the right of _finger
7702 // -- if someone else marked it, nothing to do
7703 // -- if target oop is above global finger nothing to do
7705 // then nothing to do
7713 // sampled, this bit in the bit map; we'll need to
7714 // use the marking stack to scan this oop's oops.
7800 // we need to dirty all of the cards that the object spans,
7801 // since the rescan of object arrays will be limited to the
7818 // During the remark phase, we need to remember this oop
7848 // this oop may point to an already visited object that is
7854 // stack, and the mark word possibly restored to the prototypical
7855 // value, by the time we get to examine this failing assert in
7857 // to hold.
7865 // If we manage to "claim" the object, by being the
7866 // first thread to mark it, then we push it on our
7880 _collector->_par_pmc_remark_ovflw++; // imprecise OK: no need to CAS
7939 "mr should be aligned to start at a card boundary");
7940 // We'd like to assert:
8001 // you may need to review this code to see if it needs to be
8065 // it is possible for direct allocation in this generation to happen
8070 // it is sweeping. Thus blocks that are determined to be free are
8076 // however, to a further complication -- objects may have been allocated
8079 // in order to skip over it. To deal with this case, we use a technique
8080 // (due to Printezis) to encode such uninitialized block sizes in the
8085 // of these "unused" bits to represent uninitialized blocks -- the bit
8086 // corresponding to the start of the uninitialized object and the next
8088 // started with the two consecutive 1 bits to indicate its potentially
8098 // may have caused us to coalesce the block ending at the address _limit
8099 // with a newly expanded chunk (this happens when _limit was set to the
8102 if (addr >= _limit) { // we have swept up to or past the limit: finish up
8107 // coalesced chunk to the appropriate free list.
8172 // split birth - a free chunk is being added to its free list because
8176 // coal birth - a free chunk is being added to its free list because
8180 // These statistics are used to determine the desired number of free
8181 // chunks of a given size. The desired number is chosen to be relative
8182 // to the end of a CMS sweep. The desired number at the end of a sweep
8188 // where the interval is from the end of one sweep to the end of the
8194 // is being considered for coalescing will be referred to as the
8197 // When making a decision on whether to coalesce a right-hand chunk with
8204 // When making a decision about whether to split a chunk, the desired count
8205 // vs. the current count of the candidate to be split is also considered.
8209 // to a free list which may be overpopulated.
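
Hits 8172-8209 summarize the split/coalesce bookkeeping: per-size-class birth and death counts feed a desired chunk count at the end of each sweep, and coalescing or splitting decisions compare the current count of a size class against that demand. A bare-bones version of the statistics and the comparisons; the real collector's demand estimate is smoothed across sweep intervals and considerably more involved:

    #include <cstddef>

    // Per-size-class free chunk statistics, in the spirit of the hits above.
    struct FreeListStats {
      std::size_t coal_births = 0, coal_deaths = 0;    // via coalescing
      std::size_t split_births = 0, split_deaths = 0;  // via splitting
      std::size_t desired = 0;  // demand expected by the end of next sweep
      std::size_t count = 0;    // chunks currently on this size's free list

      // Coalesce a candidate right-hand chunk away only if its size class
      // is already overpopulated relative to demand; otherwise leave it.
      bool should_coalesce_away() const { return count > desired; }

      // Likewise, prefer to split chunks of sizes that exceed their
      // demand, so splitting does not starve a size class still in need.
      bool ok_to_split() const { return count > desired; }
    };
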
8239 // it doesn't make sense to remove this chunk from the free lists
8242 if ((HeapWord*)nextChunk < _sp->end() && // There is another free chunk to the right ...
8245 // nothing to do
8249 // No need to remove it if it will just be put
8268 // will be returned to the free lists in its entirety - all
8284 // below), we unconditionally flush, without needing to do
8288 // Code path common to both original and adaptive free lists.
8293 // we kicked some butt; time to pick up the garbage
8297 // else, nothing to do, just continue
8303 // Add it to a free list or let it possibly be coalesced into
8327 // will be returned to the free lists in its entirety - all
8357 // left hand chunk to the free lists.
8363 // This object is live: we'd normally expect this to be
8364 // an oop, and like to assert the following:
8371 // Determine the size from the bit map, rather than trying to
8412 assert(size >= 3, "Necessary for Printezis marks to work");
8468 // If the chunk is in a free range and either we decided to coalesce above
8514 // flush it along with any free range we may be holding on to. Note that
8520 // free block to be coalesced with the newly expanded portion,
8522 // for the sweeper to step over and examine.
8531 assert(eob == _limit || fc->is_free(), "Only a free chunk should allow us to cross over the limit");
8550 "A zero sized chunk cannot be added to the free lists.");
8559 gclog_or_tty->print_cr(" -- add free block 0x%x (%d) to free lists",
8562 // A new free range is going to be starting. The current
8563 // free range has not been added to the free lists yet or
8573 gclog_or_tty->print_cr("Already in free list: nothing to flush");
8580 // so as to avoid monopolizing the locks involved.
8583 // to the appropriate freelist. After yielding, the next
8585 // free blocks. If the next free block is adjacent to the
8586 // chunk just flushed, they will need to wait for the next
8587 // sweep to be coalesced.
8593 // We should probably use a constructor/destructor idiom to
8594 // do this unlock/lock or modify the MutexUnlocker class to
8680 // In the case of object arrays, we need to dirty all of
8707 // The work queues are private to each closure (thread),
8714 // may be concurrently getting here; the first one to
8803 // the max number to take from overflow list at a time
8843 // Much of the following code is similar in shape and spirit to the
8850 // It's OK to call this multi-threaded; the worst thing
8898 // remainder. If other threads try to take objects from
8900 // some time to see if data becomes available. If (and
8903 // to the global list, we will walk down our local list
8904 // to find its end and append the global list to
8906 // prove to be expensive (quadratic in the amount of traffic)
8912 // copy of the object to thread the list via its klass word.
8914 // the code below, please check the ParNew version to see if
8916 // CR 6797058 has been filed to consolidate the common code.
8929 // set to ParallelGCThreads.
8933 // sleeping between attempts to get the list.
8937 // Nothing left to take
8944 // If the list was found to be empty, or we spun long
8948 // to a non-BUSY state in the future.
8950 // Nothing to take or waited long enough
8965 // is nothing to return to the global list.
8972 // Chop off the suffix and return it to the global list.
8978 // able to place back the suffix without incurring the cost
8993 // to do a splice. Find tail of suffix so we can prepend suffix to global
9008 // ... and try to place spliced list back on overflow_list ...
9045 // Multi-threaded; use CAS to prepend to overflow list
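
The run from 8843 through 9045 covers the lock-free global overflow list: takers install a BUSY sentinel while chopping off a prefix, and pushers prepend with a CAS (hit 9045). The push side, sketched with std::atomic; the real code threads the list through object header words, whereas here the next pointer is explicit:

    #include <atomic>

    struct Node { Node* next; /* payload elided */ };

    std::atomic<Node*> overflow_list{nullptr};

    // Multi-threaded push: standard CAS-prepend. On CAS failure, head is
    // reloaded and the new node's next pointer is re-linked before retrying.
    void push_overflow(Node* n) {
      Node* head = overflow_list.load(std::memory_order_relaxed);
      do {
        n->next = head;
      } while (!overflow_list.compare_exchange_weak(
                   head, n,
                   std::memory_order_release,
                   std::memory_order_relaxed));
    }
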
9070 // to do (for now) is to exit with an error. However, that may
9072 // able to recover without much harm. For such cases, we
9075 // the caller may be able to recover from a failure; code in
9076 // the VM can then be changed, incrementally, to deal with such
9102 // be trying to push it on the overflow list; see
9109 // We should be able to do this multi-threaded,
9113 // not very easy to completely overlap this with
9116 // expect the preserved oop stack (set) to be small,
9117 // it's probably fine to do this single-threaded.
9230 // If incremental collection failed, we just want to expand
9231 // to the limit.
9276 // No room to shrink
9278 gclog_or_tty->print_cr("No room to shrink: old_end "
9317 // Have to remove the chunk from the dictionary because it is changing
9351 gclog_or_tty->print_cr("Shrinking %s from %ldK by %ldK to %ldK",
9364 // Transfer some number of overflown objects to usual marking