space.cpp revision 2454
/*
 * Copyright (c) 1997, 2010, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

  // An arrayOop is starting on the dirty card - since we do exact
  // store checks for objArrays we are done.

  // Otherwise, it is possible that the object starting on the dirty
  // card spans the entire card, and that the store happened on a
  // later card. Figure out where the object ends.
  // Use the block_size() method of the space over which
  // the iteration is being done. That space (e.g. CMS) may have
  // specific requirements on object sizes which will
  // be reflected in the block_size() method.

  // 1. Blocks may or may not be objects.
  // 2. Even when a block_is_obj(), it may not entirely
  //    occupy the block if the block quantum is larger than
  //    the object size.

  // We can and should try to optimize by calling the non-MemRegion
  // version of oop_iterate() for all but the extremal objects
  // (for which we need to call the MemRegion version of
  // oop_iterate()). To be done post-beta XXX

  // As in the case of contiguous space above, we'd like to
  // just use the value returned by oop_iterate to increment the
  // current pointer; unfortunately, that won't work in CMS because
  // we'd need an interface change (it seems) to have the space
  // "adjust the object size" (for instance pad it up to its
  // block alignment or minimum block size restrictions). XXX

// We get called with "mr" representing the dirty region
// that we want to process. Because of imprecise marking,
// we may need to extend the incoming "mr" to the right,
// and scan more. However, because we may already have
// scanned some of that extended region, we may need to
// trim its right end back some so we do not scan what
// we (or another worker thread) may already have scanned.

  // Some collectors need to do special things whenever their dirty
  // cards are processed. For instance, CMS must remember mutator updates
  // (i.e. dirty cards) so as to re-scan mutated objects.
  // Such work can be piggy-backed here on dirty card scanning, so as to make
  // it slightly more efficient than doing a complete non-destructive pre-scan
  // of the card table.

  // (elided assert: "Only ones we deal with for now.")
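The extend-then-trim protocol described above can be modeled outside the VM. This is a purely illustrative sketch: `Region`, `clip_dirty_region`, `object_end`, and `min_done` are invented stand-ins for HotSpot's `MemRegion`, the straddling-object lookup, and the `_min_done` field, with addresses modeled as plain integers.

```cpp
#include <cstddef>
#include <algorithm>

// Model of a half-open address interval, standing in for HotSpot's MemRegion.
struct Region {
    std::size_t start;
    std::size_t end;                        // half-open: [start, end)
    std::size_t size() const { return end > start ? end - start : 0; }
};

// Hypothetical model of do_MemRegion's clipping: first extend the dirty
// region right so that the object straddling its last card is fully covered
// (imprecise marking), then trim the right end back to 'min_done', the
// lowest address already scanned by this or another worker thread.
Region clip_dirty_region(Region mr,
                         std::size_t object_end,   // end of straddling object
                         std::size_t min_done) {   // already-scanned boundary
    // Extend to the right so the whole straddling object is covered.
    mr.end = std::max(mr.end, object_end);
    // Trim back anything that has already been scanned.
    mr.end = std::min(mr.end, min_done);
    if (mr.end < mr.start) mr.end = mr.start;      // region may become empty
    return mr;
}
```

The two clamps are independent, so a region can be extended and then trimmed to a smaller span than it started with, or trimmed away entirely.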
  // Given what we think is the top of the memory region and
  // the start of the object at the top, get the actual
  // value of the top.

  // If the previous call did some part of this region, don't redo.

  // Top may have been reset, and in fact may be below bottom,
  // e.g. the dirty card region is entirely in a now free object
  // -- something that could happen with a concurrent sweeper.

  // Walk the region if it is not empty; otherwise there is nothing to do.

  // An idempotent closure might be applied in any order, so we don't
  // record a _min_done for it.
  // (elided assert: "Don't update _min_done for idempotent cl")
  // An arrayOop is starting on the dirty card - since we do exact
  // store checks for objArrays we are done.

  // Otherwise, it is possible that the object starting on the dirty
  // card spans the entire card, and that the store happened on a
  // later card. Figure out where the object ends.
  // (elided assert: "Block size and object size mismatch")
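"Figure out where the object ends" amounts to walking the space's block boundaries until we pass the address on the dirty card. A minimal model, assuming a toy heap of contiguous blocks; `ToyHeap` and `block_end_containing` are hypothetical names, not HotSpot's `Space::block_start()`/`block_size()` API:

```cpp
#include <cstddef>
#include <vector>

// Toy heap model: block start addresses in ascending order, with the last
// entry acting as the heap top. An object starting on a dirty card may end
// many cards later; its end is simply the start of the next block.
struct ToyHeap {
    std::vector<std::size_t> block_starts;   // ascending; last = heap top

    // End address of the block containing 'addr'.
    std::size_t block_end_containing(std::size_t addr) const {
        for (std::size_t i = 0; i + 1 < block_starts.size(); ++i) {
            if (addr >= block_starts[i] && addr < block_starts[i + 1]) {
                return block_starts[i + 1];
            }
        }
        return block_starts.back();          // addr is in the last block
    }
};
```

In the real code the space's `block_size()` is consulted instead of a table, because spaces like CMS impose their own block quantum, as the comment above notes.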
  // Note that this assumption won't hold if we have a concurrent
  // collector in this space, which may have freed up objects after
  // they were dirtied and before the stop-the-world GC that is
  // examining cards here.

  // We have a boundary outside of which we don't want to look
  // at objects, so create a filtering closure around the
  // oop closure before walking the region.

  // No boundary, simply walk the heap with the oop closure.

// We must replicate this so that the static type of "FilteringClosure"
// (see above) is apparent at the oop_iterate calls.
    /* Bottom lies entirely below top, so we can call the */ \
    /* non-memRegion version of oop_iterate below.        */ \
    /* Last object. */ \

// (There are only two of these, rather than N, because the split is due
// only to the introduction of the FilteringClosure, a local part of the
// impl of this abstraction.)
  // (elided assert: "invalid space boundaries")
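The filtering-closure idea above can be sketched as a wrapper that drops oops at or beyond the boundary before delegating to the wrapped closure. The types here are toy stand-ins (addresses as `size_t`), not HotSpot's `FilteringClosure`/`OopClosure` hierarchy:

```cpp
#include <cstddef>
#include <vector>

// Toy oop closure: records every address it is applied to.
struct ToyClosure {
    std::vector<std::size_t> seen;
    virtual void do_oop(std::size_t p) { seen.push_back(p); }
    virtual ~ToyClosure() = default;
};

// Wrapper that only forwards oops strictly below a boundary, modeling the
// "filtering closure around the oop closure" described above.
struct FilteringToyClosure : ToyClosure {
    std::size_t boundary;
    ToyClosure* inner;
    FilteringToyClosure(std::size_t b, ToyClosure* c) : boundary(b), inner(c) {}
    void do_oop(std::size_t p) override {
        if (p < boundary) inner->do_oop(p);  // ignore oops past the boundary
    }
};
```

The "replicate this so that the static type ... is apparent" remark is about devirtualization: when the closure's concrete type is visible at the `oop_iterate` call site, the compiler can inline `do_oop` instead of dispatching virtually.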
  // Space should not advertise an increase in size
  // until after the underlying offset table has been enlarged.

// Mangle only the unused space that has not previously
// been mangled and that has not been allocated since being
// mangled.
  // Although this method uses SpaceMangler::mangle_region() which
  // is not specific to a space, when the ContiguousSpace version
  // is called, it is always with regard to a space and this
  // bounds checking is appropriate.

  // First check if we should switch compaction space
  // switch to next compaction space

  // store the forwarding pointer into the mark word
  // if the object isn't moving we can just set the mark to the default
  // mark and handle it specially later on.

  // we need to update the offset table so that the beginnings of objects can be
  // found during scavenge. Note that we are updating the offset table based on
  // where the object will be once the compaction phase finishes.

  // Recall that we required "q == compaction_top".

// Faster object search.

  // adjust all the interior pointers to point at the new locations of objects
  // Used by MarkSweep::mark_sweep_phase3()

  // First check to see if there is any work to be done.
  return;   // Nothing to do.

  // point all the oops to the new location

  // q is not a live object, but we're not in a compactible space,
  // so we don't have live ranges.

  // Check first if there is any work to do.
  return;   // Nothing to do.
  // (elided assert: "top should be start of unallocated block, if it exists")
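The forwarding step described above ("store the forwarding pointer into the mark word") can be sketched as the first pass of sliding compaction: live objects keep their order and are assigned new addresses packed from the bottom of the space. `ToyObj` and `compute_forwarding` are invented for illustration; the real code encodes the forwardee in the object's mark word and also updates the block offset table as it goes.

```cpp
#include <cstddef>
#include <vector>

// Toy object: a size, a liveness bit, and a slot for the forwarding address.
struct ToyObj {
    std::size_t size;
    bool        live;
    std::size_t forwardee;   // new start address; valid only if live
};

// Assign forwarding addresses to live objects in address order, packing them
// from 'bottom'. Returns the new compaction top. Dead objects get no
// forwardee and their space is reclaimed implicitly.
std::size_t compute_forwarding(std::vector<ToyObj>& objs, std::size_t bottom) {
    std::size_t compact_top = bottom;
    for (ToyObj& o : objs) {
        if (o.live) {
            o.forwardee = compact_top;   // "store the forwarding pointer"
            compact_top += o.size;
        }
    }
    return compact_top;
}
```

Later phases then adjust interior pointers to the forwardees (the analogue of `MarkSweep::mark_sweep_phase3()` mentioned above) and finally slide the objects into place.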
  // We use MemRegion(bottom(), end()) rather than used_region() below
  // because the two are not necessarily equal for some kinds of
  // spaces, in particular, certain kinds of free list spaces.
  // We could use the more complicated but more precise:
  //   MemRegion(used_region().start(), round_to(used_region().end(), CardSize))
  // but the slight imprecision seems acceptable in the assertion check.
  // (elided assert: "Should be within used space")
  // This assert will not work when we go from cms space to perm
  // space, and use same closure. Easy fix deferred for later. XXX YSR
  // assert(prev == NULL || contains(prev), "Should be within space");

  // The previous invocation may have pushed "prev" beyond the
  // last allocated block, yet there may still be blocks
  // in this region due to a particular coalescing policy.
  // Relax the assertion so that the case where the unallocated
  // block is maintained and "prev" is beyond the unallocated
  // block does not cause the assertion to fire.
  // (elided assert: "Should be within (closed) used space")
  // See the comment in the more general method above in case you
  // happen to use this method.
  // (elided assert: "Should be within (closed) used space")
  // Could call objects iterate, but this is easier.

  // Handle first object specially.

  // If "obj_addr" is not greater than top, then the
  // entire object "obj" is within the region.

  // "obj" extends beyond end of region

// For a contiguous space object_iterate() and safe_object_iterate()
// are the same.
  return p;   // failed at p

// Very general, slow implementation.

// This version requires locking.
  // In G1 there are places where a GC worker can allocate into a
  // region using this serial allocation code without being prone to a
  // race with other GC workers (we ensure that no other GC worker can
  // access the same region at the same time). So the assert below is
  // too strong in the case of G1.

// This version is lock-free.
  // result can be one of two:
  //   the old top value: the exchange succeeded
  //   otherwise: the new value of the top is returned.

  // allocate temporary type array decreasing free size with factor 'factor'
  // if space is full, return
  // allocate uninitialized int array
  // (elided assert: "size for smallest fake object doesn't match")
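The lock-free version described above ("result can be one of two ...") is a classic compare-and-swap bump-pointer loop. This sketch uses `std::atomic` in place of HotSpot's `Atomic::cmpxchg_ptr`; `ToySpace` and its word-index "addresses" are assumptions made for the example, not the real `ContiguousSpace` interface.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

// Toy contiguous space: [0, end) words, with an atomically bumped top.
struct ToySpace {
    std::atomic<std::size_t> top{0};
    std::size_t              end{0};

    // Lock-free allocation. Returns the old top (start of the new block)
    // on success, or SIZE_MAX if the space is exhausted. A failed exchange
    // just means another thread allocated first, so we retry with the
    // refreshed top value.
    std::size_t par_allocate(std::size_t word_size) {
        for (;;) {
            std::size_t old_top = top.load();
            if (old_top + word_size > end) return SIZE_MAX;   // space full
            std::size_t new_top = old_top + word_size;
            // result can be one of two:
            //   old_top unchanged: the exchange succeeded
            //   otherwise: another thread advanced top; loop and retry
            if (top.compare_exchange_weak(old_top, new_top)) {
                return old_top;
            }
        }
    }
};
```

`compare_exchange_weak` is appropriate here because the call sits in a retry loop anyway, so a spurious failure costs only one extra iteration.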
  // The invariant is that top() should be read before end() because
  // top() can't be greater than end(), so if an update of _soft_end
  // occurs between 'end_val = end();' and 'top_val = top();', top()
  // can also grow up to the new end() and the condition
  // 'top_val > end_val' becomes true. To ensure the loading order,
  // OrderAccess::loadload() is required after the top() read.

  // result can be one of two:
  //   the old top value: the exchange succeeded
  //   otherwise: the new value of the top is returned.

  // For a sampling of objects in the space, find it using the
  // block offset table.
  // (elided assert: "check offset computation")
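The top()-before-end() ordering invariant can be sketched with acquire loads standing in for `OrderAccess::loadload()`. `ToyEden` and `free_words` are invented names; the point is only the read order and the clamp that keeps a concurrently shrunken `soft_end` from yielding a bogus (underflowed) free size.

```cpp
#include <atomic>
#include <cstddef>

// Toy stand-in for the ordered read of top and end described above.
struct ToyEden {
    std::atomic<std::size_t> top{0};
    std::atomic<std::size_t> soft_end{0};

    // Read top first, then end. If end were read first, a concurrent update
    // of soft_end could let top grow past the stale end value, and a naive
    // 'end - top' would underflow. The acquire load after top() plays the
    // role of the loadload barrier; the final clamp guards the stale case.
    std::size_t free_words() const {
        std::size_t top_val = top.load(std::memory_order_acquire);  // loadload
        std::size_t end_val = soft_end.load(std::memory_order_acquire);
        return end_val > top_val ? end_val - top_val : 0;
    }
};
```

A single-threaded test cannot exercise the race itself, but it can confirm the clamp behaves as intended when top has overtaken a stale soft end.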