// collectedHeap.cpp revision 2845
/*
 * Copyright (c) 2001, 2011, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA.
 */

// Memory state functions.

  // create the gc cause jvmstat counters

  // Used for ReduceInitialCardMarks (when COMPILER2 is used);
  // otherwise remains unused.

         "Found badHeapWordValue in post-allocation check");
         "Found non badHeapWordValue in pre-allocation check");
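The two assertion messages above belong to the zap-value heap checks: with memory-initialization checking and heap-area zapping enabled, unallocated heap words hold a sentinel "bad" value, so a post-allocation check requires that newly returned storage no longer contains the sentinel, and a pre-allocation check requires that the space about to be handed out still contains nothing else. A standalone sketch of that invariant (the constant `bad_heap_word_val` and the helper names are illustrative stand-ins, not HotSpot's):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative stand-in for HotSpot's zap pattern written into
// unused heap words when zapping is enabled.
constexpr intptr_t bad_heap_word_val = 0xBAADBABE;

// Post-allocation check: freshly returned storage must NOT still
// contain the zap value (it should have been initialized).
bool all_words_differ_from_zap(const intptr_t* addr, size_t size) {
  for (size_t slot = 0; slot < size; ++slot) {
    if (addr[slot] == bad_heap_word_val) return false;  // would assert in the VM
  }
  return true;
}

// Pre-allocation check: space about to be handed out must still be
// entirely zapped, or something wrote into unallocated memory.
bool all_words_equal_zap(const intptr_t* addr, size_t size) {
  for (size_t slot = 0; slot < size; ++slot) {
    if (addr[slot] != bad_heap_word_val) return false;
  }
  return true;
}
```

In the VM these loops run under `assert`, so they cost nothing in product builds; the sketch returns a `bool` instead so the two directions of the check are easy to exercise.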
  // How to choose between a pending exception and a potential
  // OutOfMemoryError? Don't allow pending exceptions.
  // This is a VM policy failure, so how do we exhaustively test it?
         "shouldn't be allocating with pending exception");
         "Allocation done by thread for which allocation is blocked "
         "by No_Allocation_Verifier!");
  // Allocation of an oop can always invoke a safepoint,
  // hence, the true argument.

  // Retain tlab and allocate object in shared space if
  // the amount free in the tlab is too large to discard.

  // Discard tlab and allocate a new one.
  // To minimize fragmentation, the last TLAB may be smaller than the rest.

  // Allocate a new TLAB...

  // ...and zap just allocated object.
  // Skip mangling the space corresponding to the object header to
  // ensure that the returned space is not considered parsable by
  // any concurrent GC thread.

  // Verify that the storage points to a parsable object in heap
         "Else should have been filtered in new_store_pre_barrier()");
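The TLAB comments above describe a two-way slow-path decision: keep the current TLAB and satisfy this one allocation directly from the shared space when the TLAB's free tail is too large to throw away, otherwise discard the TLAB and allocate a fresh one. A hedged sketch of just that decision (the threshold name `refill_waste_limit` mirrors HotSpot's per-TLAB waste limit, but the types and units here are illustrative):

```cpp
#include <cassert>
#include <cstddef>

// Outcome of the TLAB slow path for a single allocation request.
enum class TlabAction { AllocateInSharedSpace, DiscardAndRefill };

// Sketch of the retain-vs-discard policy: discarding a TLAB wastes
// its remaining free words, so a TLAB with lots of room left is
// retained and the request goes straight to the shared space.
TlabAction slow_path_action(size_t tlab_free_words, size_t refill_waste_limit) {
  if (tlab_free_words > refill_waste_limit) {
    // Too much free room to abandon: keep the TLAB for future
    // fast-path allocations, serve this object from shared space.
    return TlabAction::AllocateInSharedSpace;
  }
  // Cheap to abandon: discard this TLAB and allocate a new one.
  return TlabAction::DiscardAndRefill;
}
```

In HotSpot the waste limit adapts over time, which is also why the comments note that the last TLAB may deliberately be smaller than the rest: shrinking the final buffer limits fragmentation near the end of an eden or generation.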
         "Mismatch: multiple objects?");
  // "Clear" the deferred_card_mark field

// Helper for ReduceInitialCardMarks. For performance,
// compiled code may elide card-marks for initializing stores
// to a newly allocated object along the fast-path. We
// compensate for such elided card-marks as follows:
// (a) Generational, non-concurrent collectors, such as
//     GenCollectedHeap(ParNew,DefNew,Tenured) and
//     ParallelScavengeHeap(ParallelGC, ParallelOldGC)
//     need the card-mark if and only if the region is
//     in the old gen, and do not care if the card-mark
//     succeeds or precedes the initializing stores themselves,
//     so long as the card-mark is completed before the next
//     scavenge. For all these cases, we can do a card mark
//     at the point at which we do a slow path allocation
//     in the old gen, i.e. in this call.
// (b) GenCollectedHeap(ConcurrentMarkSweepGeneration) requires
//     in addition that the card-mark for an old gen allocated
//     object strictly follow any associated initializing stores.
//     In these cases, the memRegion remembered below is
//     used to card-mark the entire region either just before the next
//     slow-path allocation by this thread or just before the next scavenge or
//     CMS-associated safepoint, whichever of these events happens first.
//     (The implicit assumption is that the object has been fully
//     initialized by this point, a fact that we assert when doing the
//     card-mark.)
// (c) G1CollectedHeap(G1) uses two kinds of write barriers. When a
//     G1 concurrent marking is in progress an SATB (pre-write-)barrier
//     is used to remember the pre-value of any store. Initializing
//     stores will not need this barrier, so we need not worry about
//     compensating for the missing pre-barrier here.
//     Turning now to the post-barrier, we note that G1 needs a RS update
//     barrier which simply enqueues a (sequence of) dirty cards which may
//     optionally be refined by the concurrent update threads. Note
//     that this barrier need only be applied to a non-young write,
//     but, like in CMS, because of the presence of concurrent refinement
//     (much like CMS' precleaning), must strictly follow the oop-store.
//     Thus, the protocol for maintaining the intended invariants turns
//     out, serendipitously, to be the same for both G1 and CMS.
//
// For any future collector, this code should be reexamined with
// that specific collector in mind, and the documentation above suitably
// extended and updated.

  // If a previous card-mark was deferred, flush it now.

  // The deferred_card_mark region should be empty
  // following the flush above.

  // Set the length first for concurrent GC.

// A single array can fill ~8G, so multiple objects are needed only in 64-bit.

// First fill with arrays, ensuring that any remaining space is big enough to
// fill. The remainder is filled with a single object.

  guarantee(false, "thread-local allocation buffers not supported");
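The region-filling comments above ("first fill with arrays, ensuring that any remaining space is big enough to fill; the remainder is filled with a single object") amount to a small planning loop: carve the dead region into maximum-size filler arrays, but never take a bite that would leave a remainder too small to plug with an object of its own. A standalone sketch under assumed limits (`max_filler_words` and `min_fill_words` are illustrative parameters, not HotSpot's actual constants):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Plan how to fill `words` of free space with filler objects:
// arrays of at most `max_filler_words`, with the guarantee that the
// final remainder is never smaller than `min_fill_words`, so it can
// always be covered by one last filler object.
std::vector<size_t> plan_fill(size_t words, size_t max_filler_words,
                              size_t min_fill_words) {
  std::vector<size_t> pieces;
  while (words > max_filler_words) {
    size_t cur = max_filler_words;
    // Shrink this filler if taking the maximum would leave a
    // remainder too small to fill on its own.
    if (words - cur < min_fill_words) {
      cur = words - min_fill_words;
    }
    pieces.push_back(cur);
    words -= cur;
  }
  pieces.push_back(words);  // the remainder: a single final object
  return pieces;
}
```

This also explains the "~8G" remark above: since one filler int-array can cover roughly 8G of heap, the multi-piece branch of the loop can only be reached on 64-bit heaps.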
  // The second disjunct in the assertion below makes a concession
  // for the start-up verification done while the VM is being
  // created. Callers be careful that you know that mutators
  // aren't going to interfere -- for instance, this is permissible
  // if we are still single-threaded and have either not yet
  // started allocating (nothing much to verify) or we have
  // started allocating but are now a full-fledged JavaThread
  // (and have thus made our TLAB's available for filling).
         "Should only be called at a safepoint or at start-up"
         " otherwise concurrent mutator activity may make heap "
         " unparsable again");

  // The main thread starts allocating via a TLAB even before it
  // has added itself to the threads list at vm boot-up.
         "Attempt to fill tlabs before main thread has been added"
         " to threads list is doomed to failure!");
  // The deferred store barriers must all have been flushed to the
  // card-table (or other remembered set structure) before GC starts
  // processing the card-table (or other remembered set).

         "should only accumulate statistics on tlabs at safepoint");
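The flush-before-GC rule above pairs with the deferred card-mark protocol described earlier: the slow path records a MemRegion for a freshly allocated old-gen object instead of marking its cards immediately, and the mark is applied later, once the initializing stores are known complete and before the card-table is processed. A toy model of that defer/flush cycle (all types here are simplified stand-ins for HotSpot's `MemRegion`, card table, and per-thread `deferred_card_mark` field):

```cpp
#include <cassert>
#include <cstddef>

// A half-open word range [start, start + word_size); empty means
// no card-mark is currently deferred.
struct MemRegion {
  size_t start = 0;
  size_t word_size = 0;
  bool is_empty() const { return word_size == 0; }
};

// A tiny card table: one dirty bit per 64-word card.
struct CardTable {
  static constexpr size_t card_size_words = 64;
  bool dirty[16] = {false};
  void mark_region(const MemRegion& mr) {
    for (size_t w = mr.start; w < mr.start + mr.word_size; w += card_size_words)
      dirty[w / card_size_words] = true;
    // Also mark the card holding the last word of the region.
    dirty[(mr.start + mr.word_size - 1) / card_size_words] = true;
  }
};

struct ThreadLocalState {
  MemRegion deferred_card_mark;  // empty => nothing deferred

  // Called at slow-path allocation: remember the region, mark later.
  void defer_card_mark(MemRegion mr) { deferred_card_mark = mr; }

  // Called just before the next slow-path allocation, scavenge, or
  // collector-associated safepoint, whichever happens first.
  void flush_deferred_card_mark(CardTable& ct) {
    if (!deferred_card_mark.is_empty()) {
      ct.mark_region(deferred_card_mark);
      deferred_card_mark = MemRegion();  // "clear" the field
    }
  }
};
```

The point of the detour through `deferred_card_mark` is ordering: for collectors with concurrent refinement, the dirty-card write must strictly follow the initializing stores, which a mark taken at allocation time could not guarantee.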
         "should only resize tlabs at safepoint");
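The safepoint-only assertions above (filling, statistics accumulation, resizing) all belong to the step that makes the heap parsable before GC: at a safepoint, the unused tail of each thread's TLAB is plugged with a filler object so a linear heap walk can step over it, and the TLAB is optionally retired. A minimal sketch, assuming invented `Tlab` fields rather than HotSpot's real layout:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified TLAB: allocations happen in [start, top), and the
// unparsable free tail is [top, end).
struct Tlab {
  size_t top = 0;
  size_t end = 0;
  bool filled_tail = false;  // stands in for writing a filler object
  size_t free_words() const { return end - top; }

  void make_parsable(bool retire) {
    if (free_words() > 0) {
      filled_tail = true;  // plug the tail so a heap walk can parse it
    }
    if (retire) {
      top = end;  // TLAB can no longer satisfy fast-path allocations
    }
  }
};

// In HotSpot the analogous loop runs at a safepoint over all
// JavaThreads (or at single-threaded start-up, per the assertion text).
void ensure_parsability(std::vector<Tlab>& tlabs, bool retire_tlabs) {
  for (Tlab& t : tlabs) {
    t.make_parsable(retire_tlabs);
  }
}
```

Retiring (rather than merely filling) is what GCs that move or reclaim eden need, and it is also the natural moment to accumulate the per-TLAB statistics and apply resizing, which is why both are asserted to happen only at a safepoint.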
  // We are doing a "major" collection and a heap dump before
  // major collection has been requested.

  // notify jvmti and dtrace