/*
 * Copyright (c) 2001, 2012, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 */

  // arrayOopDesc::header_size depends on command line initialization.

// If the minimum object size is greater than MinObjAlignment, we can
// end up with a shard at the end of the buffer that's smaller than
// the smallest object.  We can't allow that because the buffer must
// look like it's full of objects when we retire it, so we make
// sure we have enough space for a filler int array object.

  // If the buffer had been retained, shorten the previous filler object.

  // Wasted space book-keeping, otherwise (normally) done in invalidate()

  // Is there wasted space we'd like to retain for the next GC?

// Compute desired plab size and latch result for later
// use.  This should be called once at the end of parallel
// scavenge; it clears the sensor accumulators.
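A minimal sketch of the reserve described above. If the smallest heap object is larger than the alignment unit, retiring the buffer could leave a shard too small to cover with any object, so room for a minimal int-array filler is held back. The constants and names here are illustrative assumptions, not HotSpot's actual values.

```cpp
#include <cstddef>

// Assumed values: the alignment unit and the smallest filler int array
// (header plus length word); HotSpot derives these at startup.
const size_t kMinObjAlignmentWords = 1;
const size_t kMinFillerArrayWords  = 3;

// Reserve enough words at the end of the buffer that any trailing shard
// can always be described by a filler object when the buffer is retired.
size_t alignment_reserve(void) {
  return (kMinFillerArrayWords > kMinObjAlignmentWords) ? kMinFillerArrayWords
                                                        : 0;
}

// The usable end of a buffer [bottom, hard_end) is pulled back by the reserve.
size_t usable_end_words(size_t hard_end_words) {
  return hard_end_words - alignment_reserve();
}
```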
  // Take historical weighted average
  // Clip from above and below, and align to object boundary

  // Now clear the accumulators for next round:
  // note this needs to be fixed in the case where we
  // are retaining across scavenges. FIX ME !!! XXX

          "_retained: %c _retained_filler: [%p,%p)\n",
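The sizing step above can be sketched as follows: blend the previous desired size with this scavenge's observed allocation, clip the result into a legal range, and round up to the object alignment. The weight and the min/max bounds are hypothetical stand-ins for HotSpot's tuned values, not the real constants.

```cpp
#include <algorithm>
#include <cstddef>

// Illustrative stand-ins for MinObjAlignment and the PLAB size bounds.
const size_t kObjAlignmentWords = 8;
const size_t kMinPlabWords      = 256;
const size_t kMaxPlabWords      = 65536;

// Historical weighted average of the desired PLAB size, clipped and aligned.
size_t compute_desired_plab_size(size_t prev_desired_words,
                                 size_t allocated_this_gc_words,
                                 double weight /* e.g. 0.75 */) {
  // Take historical weighted average.
  double avg = weight * (double)prev_desired_words +
               (1.0 - weight) * (double)allocated_this_gc_words;
  size_t desired = (size_t)avg;
  // Clip from above and below.
  desired = std::min(std::max(desired, kMinPlabWords), kMaxPlabWords);
  // Align to object boundary (round up).
  desired = (desired + kObjAlignmentWords - 1) & ~(kObjAlignmentWords - 1);
  return desired;
}
```

After this computation the per-scavenge sensor accumulators would be cleared, so each GC contributes exactly once to the running average.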
// The buffer comes with its own BOT, with a shared (obviously) underlying
// BlockOffsetSharedArray.  We manipulate this BOT in the normal way
// as we would for any contiguous space.  However, on occasion we
// need to do some buffer surgery at the extremities before we
// start using the body of the buffer for allocations.  Such surgery
// (as explained elsewhere) is to prevent allocation on a card that
// is in the process of being walked concurrently by another GC thread.
// When such surgery happens at a point that is far removed (to the
// right of the current allocation point, top), we use the "contig"
// parameter below to directly manipulate the shared array without
// modifying the _next_threshold state in the BOT.

         "or else _true_end should be equal to _hard_end");
  // This may back us up beyond the previous threshold, so reset.

  // We're about to make the retained_filler into a block.

  // Reset _hard_end to _true_end (and update _end)

  // Now any old _retained_filler is cut back to size, the free part is
  // filled with a filler object, and top is past the header of that
  // object.

  // If the lab does not start on a card boundary, we don't want to
  // allocate onto that card, since that might lead to concurrent
  // allocation and card scanning, which we don't support.  So we fill
  // the first card with a garbage object.
  // Ensure enough room to fill with the smallest block
  // If the end is already in the first card, don't go beyond it!
  // Or if the remainder is too small for a filler object, gobble it up.

  // If the lab does not end on a card boundary, we don't want to
  // allocate onto that card, since that might lead to concurrent
  // allocation and card scanning, which we don't support.  So we fill
  // the last card with a garbage object.
  // Ensure enough room to fill with the smallest block
  // If the top is already in the last card, don't go back beyond it!
  // Or if the remainder is too small for a filler object, gobble it up.

  // At this point:
  // 1) we had a filler object from the original top to hard_end.
  // 2) We've filled in any partial cards at the front and back.
  // Now we can reset the _bt to do allocation in the given area.

  // If there's no space left, don't retain.

  // There may be other reasons for queries into the middle of the
  // filler object.  When such queries are done in parallel with
  // allocation, bad things can happen, if the query involves object
  // iteration.  So we ensure that such queries do not involve object
  // iteration, by putting another filler object on the boundaries of
  // such queries.
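The "fill the first card" decision above can be sketched as pure address arithmetic. This is a hedged illustration, not HotSpot's code: the card size and minimum filler size are assumed constants, and addresses are plain integers rather than HeapWords.

```cpp
#include <cstdint>

// Illustrative stand-ins for CardTableModRefBS::card_size and the
// smallest filler object; the real values come from the VM.
const uintptr_t kCardBytes      = 512;
const uintptr_t kMinFillerBytes = 16;

// Given a lab [bottom, end), return the address up to which the first
// (partial) card must be covered with a garbage object, mirroring the
// rules in the comments above.
uintptr_t first_card_fill_limit(uintptr_t bottom, uintptr_t end) {
  uintptr_t card_start     = bottom & ~(kCardBytes - 1);
  uintptr_t first_boundary = (bottom + kCardBytes - 1) & ~(kCardBytes - 1);
  if (bottom == card_start) {
    return bottom;            // lab starts on a card boundary: nothing to fill
  }
  if (first_boundary >= end) {
    return end;               // the end is already in the first card: don't go beyond it
  }
  if (end - first_boundary < kMinFillerBytes) {
    return end;               // remainder too small for a filler object: gobble it up
  }
  return first_boundary;      // fill [bottom, first_boundary) with garbage
}
```

The mirror-image computation for the last card walks left from `end` to the previous card boundary with the same two exceptions.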
  // One such is the object spanning a parallel card chunk boundary.

  // "chunk_boundary" is the address of the first chunk boundary less

         "Consequence of last card handling above.");
         "Consequence of last card handling above.");
  // Now reset the initial filler chunk so it doesn't overlap with
  // the one(s) inserted above.