// g1RemSet.cpp revision 890
/*
 * Copyright 2001-2009 Sun Microsystems, Inc.  All Rights Reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa Clara,
 * CA 95054 USA or visit www.sun.com if you need additional information or
 * have any questions.
 */
#include "incls/_precompiled.incl"
// Set the "from" region in the closure.

// First find the used area.

// The closure is not idempotent.  We shouldn't look at objects
// allocated during the GC.

// If we didn't return above, then
//   _try_claimed || r->claim_iter()
// is true: either we're supposed to work on claimed-but-not-complete
// regions, or we successfully claimed the region.

// If the card is dirty, then we will scan it during updateRS.

// We did some useful work in the previous iteration.
// Decrease the distance.

// Previous iteration resulted in a claim failure.
// Increase the distance.

// Set all cards back to clean.

// We want the parallel threads to start their scanning at
// different collection set regions to avoid contention.
// If we have:
//   n collection set regions
//   p threads
// then thread t will start at region t * floor(n / p).

// Apply the appropriate closure to all remaining log entries.

// Now there should be no dirty cards.
// XXX This isn't true any more: keeping cards of young regions
// marked dirty broke it.  Need some reasonable fix.
// Fit it into a histo bin.

// *p was in the collection set when p was pushed on "_new_refs", but
// another thread may have processed this location from an RS, so it
// might not point into the CS any longer.  If so, it's obviously been
// processed, and we don't need to do anything further.
// If "p" has already been processed concurrently, this is
// harmless.

// Make this into a command-line flag...

// The two flags below were introduced temporarily to serialize
// the updating and scanning of remembered sets.  There are some
// race conditions when these two operations are done in parallel
// and they are causing failures.  When we resolve said race
// conditions, we'll revert back to parallel remembered set
// updating and scanning.  See CRs 6677707 and 6677708.

// Set all cards back to clean.

// Restore remembered sets for the regions pointing into
// the collection set.

// Construct the region representing the card.
// And find the region containing it.

// We must complete this write before we do any of the reads below.

// And process it, being careful of unallocated portions of TLAB's.

// If stop_point is non-null, then we encountered an unallocated region
// (perhaps the unfilled portion of a TLAB.)  For now, we'll dirty the
// card and re-enqueue: if we put off the card until a GC pause, then the
// unallocated portion will be filled in.  Alternatively, we might try
// the full complexity of the technique used in "regular" precleaning.

// The card might have gotten re-dirtied and re-enqueued while we
// worked.  (In fact, it's pretty likely.)

// If the card is no longer dirty, nothing to do.

// Construct the region representing the card.
// And find the region containing it.
return;
// Not in the G1 heap (might be in perm, for example.)

// Why do we have to check here whether a card is on a young region,
// given that we dirty young regions and, as a result, the
// post-barrier is supposed to filter them out and never to enqueue
// them?  When we allocate a new region as the "allocation region" we
// actually dirty its cards after we release the lock, since card
// dirtying while holding the lock was a performance bottleneck.  So,
// as a result, it is possible for other threads to actually
// allocate objects in the region (after they acquire the lock)
// before all the cards on the region are dirtied.  This is unlikely,
// and it doesn't happen often, but it can happen.  So, the extra
// check below filters out those cards.

// While we are processing RSet buffers during the collection, we
// actually don't want to scan any cards on the collection set,
// since we don't want to update remembered sets with entries that
// point into the collection set, given that live objects from the
// collection set are about to move and such entries will be stale
// very soon.  This change also deals with a reliability issue which
// involves scanning a card in the collection set and coming across
// an array that was being chunked and looking malformed.  Note,
// however, that if evacuation fails, we have to scan any objects
// that were not moved and create any missing entries.

// Should we defer processing the card?
//
// Previously the result from the insert_cache call would be
// either card_ptr (implying that card_ptr was currently "cold"),
// null (meaning we had inserted the card ptr into the "hot"
// cache, which had some headroom), or a "hot" card ptr
// extracted from the "hot" cache.
// Now that the _card_counts cache in the ConcurrentG1Refine
// instance is an evicting hash table, the result we get back
// could be from evicting the card ptr in an already occupied
// bucket (in which case we have replaced the card ptr in the
// bucket with card_ptr and "defer" is set to false).  To avoid
// having a data structure (updates to which would need a lock)
// to hold these unprocessed dirty cards, we need to immediately
// process card_ptr.  The actions needed to be taken on return
// from cache_insert are summarized in the following table:
//
// res      defer   action
// --------------------------------------------------------------
// null     false   card evicted from _card_counts & replaced with
//                  card_ptr; evicted ptr added to hot cache.
//                  No need to process res; immediately process card_ptr.
//
// null     true    card not evicted from _card_counts; card_ptr added
//                  to hot cache.  Nothing further to do.
//
// non-null false   card evicted from _card_counts & replaced with
//                  card_ptr; evicted ptr is currently "cold" or
//                  caused an eviction from the hot cache.
//                  Immediately process res; process card_ptr.
//
// non-null true    card not evicted from _card_counts; card_ptr is
//                  currently cold, or caused an eviction from hot
//                  cache.
//                  Immediately process res; no need to process card_ptr.

// Process card pointer we get back from the hot card cache.