/*
 * Copyright (c) 2006, 2012, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

// This method should do nothing.
// It can be called on a numa space during a full compaction.

// This method should do nothing.
// It can be called on a numa space during a full compaction.

// This method should do nothing because numa spaces are not mangled.

// This method should do nothing.

// This method should do nothing.

// This method should do nothing.

// There may be unallocated holes in the middle chunks
// that should be filled with dead objects to ensure parseability.
if (s->top() < top()) { // For all spaces preceding the one containing top()

// If the object header crossed a small page boundary, we mark the area
// as invalid, rounding it to a page_size().

// This case can occur after the topology of the system has
// changed. Threads can change their location; the new home
// group will be determined during the first allocation
// attempt. For now we can safely assume that all spaces
// have equal size because the whole space will be reinitialized.

assert(false, "There should be at least one locality group");
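The hole-filling idea above (fill the unallocated tail of every chunk below the space's top with dead objects so a linear walk never hits an unparseable gap) can be sketched as a standalone toy. All names here (Chunk, ensure_parsability_model, filler_tag) are illustrative, not HotSpot identifiers:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy model of a space split into per-lgrp chunks; not HotSpot code.
struct Chunk {
  size_t top;  // index of the first free word in this chunk
  size_t end;  // one past the last word of this chunk
};

// Fill the unallocated tail [top, end) of every chunk that lies below the
// global top with a "dead object" tag, so a linear walk stays parseable.
void ensure_parsability_model(std::vector<int>& words,
                              std::vector<Chunk>& chunks,
                              size_t global_top, int filler_tag) {
  for (Chunk& c : chunks) {
    if (c.top < global_top) {        // only chunks preceding the global top
      for (size_t w = c.top; w < c.end; w++) {
        words[w] = filler_tag;       // dead filler: skipped during the walk
      }
      c.top = c.end;                 // the chunk now has no free tail
    }
  }
}
```

The real code additionally sizes each filler as a proper heap object (hence the "minimum size filler object" comment later in this file); the toy only marks the words.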
// That's the normal case, where we know the locality group of the thread.

// Please see the comments for tlab_capacity().

assert(false, "There should be at least one locality group");
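The fallback described above (when the thread's locality group is not yet known, assume all lgrp spaces are equally sized) amounts to reporting an equal share of the total capacity. A hypothetical stand-in, not the real tlab_capacity():

```cpp
#include <cassert>
#include <cstddef>

// Illustrative fallback: report an equal share of the total capacity,
// assuming all lgrp spaces are the same size.
size_t tlab_capacity_fallback(size_t capacity_in_bytes, int lgrp_count) {
  if (lgrp_count <= 0) {
    // Mirrors: assert(false, "There should be at least one locality group");
    return 0;
  }
  return capacity_in_bytes / lgrp_count;
}
```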
assert(false, "There should be at least one locality group");
// Check if the NUMA topology has changed. Add and remove spaces if needed.
// The update can be forced by setting the force parameter equal to true.

// Check if the topology has changed.

// Add new spaces for the new nodes.

// Remove spaces for the removed nodes.

// Bias the region towards the first-touching lgrp. Set the right page sizes.

// First we tell the OS which page size we want in the given range. The underlying
// large page can be broken down if we require small pages.

// Then we uncommit the pages in the range.

// Free all pages in the region.

// Update the space layout. Perform adaptation.

// If the topology has changed, make all chunks zero-sized
// and clear the alloc-rate statistics.
// In the future we may want to handle this more gracefully in order
// to avoid the reallocation of the pages as much as possible.

// A NUMA space is never mangled.

// A NUMA space is never mangled.

// Scan pages. Free pages that have a smaller size or the wrong placement.

// Accumulate statistics about the allocation rate of each lgrp.

// Get the current size of a chunk.
// This function computes the size of the chunk based on the
// difference between chunk ends. This allows it to work correctly in
// case the whole space is resized and during the process of adaptive
// chunk resizing.

// Return the default chunk size by equally dividing the space.
// page_size() aligned.

// Produce a new chunk size. page_size() aligned.
// This function is expected to be called on a sequence of i's from 0 to
// lgrp_spaces()->length().
for (int j = 0; j < i; j++) {
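The "divide the space equally, page_size() aligned" rule above can be modeled with a small helper. The name and the round-down choice are assumptions for illustration, not the real default_chunk_size():

```cpp
#include <cassert>
#include <cstddef>

// Illustrative only: split a space of space_bytes equally among lgrp_count
// chunks, rounding each chunk down to a multiple of page_size.
size_t default_chunk_size_model(size_t space_bytes, int lgrp_count,
                                size_t page_size) {
  if (lgrp_count <= 0) return 0;
  size_t chunk = space_bytes / lgrp_count;
  return chunk - (chunk % page_size);  // keep the chunk page-aligned
}
```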
// The resulting upper bound should not exceed the available
// amount of memory (pages_available * page_size()).

// Return the bottom_region and the top_region. Align them to a page_size() boundary.
// |------------------new_region---------------------------------|
// |----bottom_region--|---intersection---|------top_region------|

// Try to coalesce small pages into a large one.

// Try to coalesce small pages into a large one.

// Try to merge the invalid region with the bottom or top region by decreasing
// the intersection area. Return the invalid_region aligned to the page_size()
// boundary if it's inside the intersection. Return a non-empty invalid_region
// if it lies inside the intersection (also page-aligned).
// |------------------new_region---------------------------------|
// |----------------|-------invalid---|--------------------------|
// |----bottom_region--|---intersection---|------top_region------|

// That's the only case in which we have to make an additional bias_region() call.

// Must always clear the space.

// Compute chunk sizes.

// Try small pages if the chunk size is too small.

// Handle space resize.

// If the page size got smaller we have to change
// the page size preference for the whole space.

// Check if the space layout has changed significantly.
// This happens when the space has been resized so that either the head or tail
// chunk became less than a page.

// No adaptation. Divide the space equally.

// Fast adaptation. If no space resize rate is set, resize
// the chunks instantly.

// Slow adaptation. Resize the chunks, moving no more than
// NUMASpaceResizeRate bytes per collection.

if (i == 0) {
  // Bottom chunk
} else {
  // Top chunk
}

// The general case:
// |---------------------|--invalid---|--------------------------|
// |------------------new_region---------------------------------|
// |----bottom_region--|---intersection---|------top_region------|
// |----old_region----|
// The intersection part has all its pages in place; we don't need to migrate them.
// Pages for the top and bottom parts should be freed and then reallocated.

// The invalid region is a range of memory that could possibly have
// been allocated on the other node. That's relevant only on Solaris, where
// there is no static memory binding.

// If this is a system with a first-touch policy, then it's enough
// to free the pages.

// In a system with static binding we have to change the bias whenever
// we reshape the heap.

// Clear the space (set top = bottom) but never mangle.

// Set the top of the whole space.
// Mark the holes in chunks below top() as invalid.

// Check if setting the chunk's top to a given value would create a hole less than
// a minimal object; assuming that's not the last chunk, in which case we don't care.

// Add a minimum-size filler object; it will cross the chunk boundary.

// Restart the loop from the same chunk, since the value has moved
// forward.

// Never mangle NUMA spaces, because the mangling would
// bind the memory to a possibly unwanted lgroup.

// Linux supports static memory binding; therefore most of the
// logic dealing with possible invalid page allocation is effectively
// disabled. Besides, there is no notion of a home node in Linux: a
// thread is allowed to migrate freely, although the scheduler is rather
// reluctant to move threads between nodes. We check for the current
// node on every allocation. And with a high probability a thread stays on
// the same node for some time, allowing local access to recently allocated
// objects.

// It is possible that a new CPU has been hotplugged and
// we haven't reshaped the space accordingly.

if (top() < s->top()) { // Keep _top updated.

// Make the page allocation happen here if there is no static binding.

// This version is lock-free.

// It is possible that a new CPU has been hotplugged and
// we haven't reshaped the space accordingly.

// We were the last to allocate and created a fragment less than
// a minimal object.

// Make the page allocation happen here if there is no static binding.

// This can be called after setting an arbitrary value to the space's top,
// so an object can cross the chunk boundary. We ensure the parsability
// of the space and just walk the objects in linear fashion.

// Scan pages and gather stats about page placement and size.

// Scan page_count pages and verify whether they have the right size and right
// placement. If invalid pages are found, they are freed in the hope that
// subsequent reallocation will be more successful.
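The "lock-free" allocation path mentioned above is a compare-and-swap bump of the chunk's top pointer. A minimal model follows, using byte offsets instead of HeapWord pointers and omitting the fragment handling and lgrp selection of the real cas_allocate():

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Toy space whose top is bumped with CAS; not HotSpot's MutableSpace.
struct ToySpace {
  std::atomic<size_t> top{0};
  size_t end{0};
};

// Returns the start offset of the new block, or SIZE_MAX if out of space.
size_t cas_allocate_model(ToySpace& s, size_t words) {
  size_t old_top = s.top.load();
  for (;;) {
    size_t new_top = old_top + words;
    if (new_top > s.end) return SIZE_MAX;    // would overflow the chunk
    // On failure, old_top is reloaded with the current top and we retry.
    if (s.top.compare_exchange_weak(old_top, new_top)) {
      return old_top;                        // we own [old_top, new_top)
    }
  }
}
```

The retry loop is what makes the path lock-free: a failed CAS means another thread allocated concurrently, and the loop simply tries again from the updated top.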