collectedHeap.cpp revision 1027
/*
 * Copyright 2001-2009 Sun Microsystems, Inc. All Rights Reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa Clara,
 * CA 95054 USA or visit www.sun.com if you need additional information or
 * have any questions.
 */

# include "incls/_precompiled.incl"

// Memory state functions.

  // create the gc cause jvmstat counters

         "Found badHeapWordValue in post-allocation check");
         "Found non badHeapWordValue in pre-allocation check");
  // How to choose between a pending exception and a potential
  // OutOfMemoryError? Don't allow pending exceptions.
  // This is a VM policy failure, so how do we exhaustively test it?
         "shouldn't be allocating with pending exception");
"Allocation done by thread for which allocation is blocked " "by No_Allocation_Verifier!");
  // Allocation of an oop can always invoke a safepoint,
  // hence, the true argument.

  // Retain tlab and allocate object in shared space if
  // the amount free in the tlab is too large to discard.

  // Discard tlab and allocate a new one.
  // To minimize fragmentation, the last TLAB may be smaller than the rest.

  // Allocate a new TLAB...

  // ...and clear just the allocated object.

  // Verify that the storage points to a parsable object in heap.
         "Else should have been filtered in defer_store_barrier()");
"Mismatch: multiple objects?");
  // "Clear" the deferred_card_mark field

// Helper for ReduceInitialCardMarks. For performance,
// compiled code may elide card-marks for initializing stores
// to a newly allocated object along the fast-path. We
// compensate for such elided card-marks as follows:
// (a) Generational, non-concurrent collectors, such as
//     GenCollectedHeap(ParNew,DefNew,Tenured) and
//     ParallelScavengeHeap(ParallelGC, ParallelOldGC)
//     need the card-mark if and only if the region is
//     in the old gen, and do not care if the card-mark
//     succeeds or precedes the initializing stores themselves,
//     so long as the card-mark is completed before the next
//     scavenge. For all these cases, we can do a card mark
//     at the point at which we do a slow path allocation
//     in the old gen. For uniformity, however, we end
//     up using the same scheme (see below) for all three
//     cases (deferring the card-mark appropriately).
// (b) GenCollectedHeap(ConcurrentMarkSweepGeneration) requires
//     in addition that the card-mark for an old gen allocated
//     object strictly follow any associated initializing stores.
//     In these cases, the memRegion remembered below is
//     used to card-mark the entire region either just before the next
//     slow-path allocation by this thread or just before the next scavenge or
//     CMS-associated safepoint, whichever of these events happens first.
//     (The implicit assumption is that the object has been fully
//     initialized by this point, a fact that we assert when doing the
//     card-mark.)
// (c) G1CollectedHeap(G1) uses two kinds of write barriers. When a
//     G1 concurrent marking is in progress, an SATB (pre-write-)barrier
//     is used to remember the pre-value of any store. Initializing
//     stores will not need this barrier, so we need not worry about
//     compensating for the missing pre-barrier here.
//     Turning now
//     to the post-barrier, we note that G1 needs a RS update barrier
//     which simply enqueues a (sequence of) dirty cards which may
//     optionally be refined by the concurrent update threads. Note
//     that this barrier need only be applied to a non-young write,
//     but, like in CMS, because of the presence of concurrent refinement
//     (much like CMS' precleaning), must strictly follow the oop-store.
//     Thus, using the same protocol for maintaining the intended
//     invariants turns out, serendipitously, to be the same for all
//     three collectors.
// For each future collector, this should be reexamined with
// that specific collector in mind.

  // If a previous card-mark was deferred, flush it now.

  // The deferred_card_mark region should be empty
  // following the flush above.

  // Remember info for the newly deferred store barrier

  // Set the length first for concurrent GC.

// A single array can fill ~8G, so multiple objects are needed only in 64-bit.
// First fill with arrays, ensuring that any remaining space is big enough to
// fill. The remainder is filled with a single object.

  guarantee(false,
            "thread-local allocation buffers not supported");
  // See note in ensure_parsability() below.
         "should only fill tlabs at safepoint");
  // The main thread starts allocating via a TLAB even before it
  // has added itself to the threads list at vm boot-up.
         "Attempt to fill tlabs before main thread has been added"
         " to threads list is doomed to failure!");
  // The second disjunct in the assertion below makes a concession
  // for the start-up verification done while the VM is being
  // created. Callers should be careful to ensure that mutators
  // aren't going to interfere -- for instance, this is permissible
  // if we are still single-threaded and have either not yet
  // started allocating (nothing much to verify) or we have
  // started allocating but are now a full-fledged JavaThread
  // (and have thus made our TLABs available for filling).
         "Should only be called at a safepoint or at start-up"
         " otherwise concurrent mutator activity may make heap "
         "should only accumulate statistics on tlabs at safepoint");
"should only resize tlabs at safepoint");
    // We are doing a "major" collection and a heap dump before
    // major collection has been requested.