mach_vm_dep.c revision beb1bda06ff6d616abec61793bf882d58a50ec04
/*
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License, Version 1.0 only
 * (the "License").  You may not use this file except in compliance
 * with the License.
 *
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file.  If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 */

/*
 * Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
 * Use is subject to license terms.
 */

/*	Copyright (c) 1984, 1986, 1987, 1988, 1989 AT&T	*/
/*	All Rights Reserved	*/

/*
 * Portions of this source code were derived from Berkeley 4.3 BSD
 * under license from the Regents of the University of California.
 */

#pragma ident	"%Z%%M% %I% %E% SMI"

/*
 * UNIX machine dependent virtual memory support.
 */

/*
 * The sun4u hardware mapping sizes which will always be supported are
 * 8K, 64K, 512K and 4M.  If sun4u based machines need to support other
 * page sizes, platform or cpu specific routines need to modify the value.
 * The base pagesize (p_szc == 0) must always be supported by the hardware.
 */

/*
 * use_text_pgsz64k, use_initdata_pgsz64k and use_text_pgsz4m can be set
 * in platform or CPU specific code, but the user can change the default
 * via /etc/system.
 *
 * The disable_text_largepages and disable_initdata_largepages bitmasks are
 * set in platform or CPU specific code to disable page sizes that should
 * not be used.  These variables normally shouldn't be changed via
 * /etc/system.  A particular page size for text or initialized data will
 * be used by default if the corresponding use_* variable is set to 1 AND
 * this page size is not disabled in the corresponding disable_* bitmask
 * variable.
 */

/*
 * Minimum segment size tunables: a segment must be at least this large
 * before 64K or 4M large pages are used to map it.
 */

/*
 * map_addr_proc() is the routine called when the system is to
 * choose an address for the user.  We will pick an address
 * range which is just below the current stack limit.  The
 * algorithm used for cache consistency on machines with virtual
 * address caches is such that offset 0 in the vnode is always
 * on a shm_alignment'ed aligned address.  Unfortunately, this
 * means that vnodes which are demand paged will not be mapped
 * cache consistently with the executable images.  When the
 * cache alignment for a given object is inconsistent, the
 * lower level code must manage the translations so that this
 * is not seen here (at the cost of efficiency, of course).
 *
 * On input it is a hint from the user to be used in a completely
 * machine dependent fashion.
 */
/*
 * For MAP_ALIGN, addrp contains the minimal alignment.
 *
 * On output it is NULL if no address can be found in the current
 * process's address space, or else an address that is currently
 * not mapped for len bytes with a page of red zone on either side.
 * If vacalign is true, then the selected address will obey the alignment
 * constraints of a vac machine based on the given off value.
 */

/*
 * This happens when a program wants to map something in
 * a range that's accessible to a program in a smaller
 * address space.  For example, a 64-bit program might
 * be calling mmap32(2) to guarantee that the returned
 * address is below 4Gbytes.
 */

/*
 * Redzone for each side of the request.  This is done to leave
 * one page unmapped between segments.  This is not required, but
 * it's useful for the user because if their program strays across
 * a segment boundary, it will catch a fault immediately, making
 * debugging a little easier.
 */

/*
 * If the request is larger than the size of a particular
 * mmu level, then we use that level to map the request.
 * But this requires that both the virtual and the physical
 * addresses be aligned with respect to that level, so we
 * do the virtual bit of nastiness here.
 *
 * For 32-bit processes, only those which have specified
 * MAP_ALIGN or an addr will be aligned on a page size > 4MB.  Otherwise
 * we can potentially waste up to 256MB of the 4G process address
 * space just for alignment.
 */

/*
 * Align virtual addresses on a 64K boundary to ensure
 * that ELF shared libraries are mapped with the appropriate
 * alignment constraints by the run-time linker.
 */

/* 64-bit processes require 1024K alignment of ELF shared libraries. */

/*
 * Look for a large enough hole starting below the stack limit.
 * After finding it, use the upper part.  Addition of PAGESIZE is
 * for the redzone as described above.
 */

/*
 * Round address DOWN to the alignment amount,
 * add the offset, and if this address is less
 * than the original address, add alignment amount.
 */

/*
 * Platforms with smaller or larger TLBs may wish to change this.
 */
/*
 * Most sun4u platforms can hold 1024 8K entries by default, and most
 * processes are observed to be < 6MB on these machines, so we decide to
 * move up here to give ourselves some wiggle room for other, smaller
 * segments.
 */

/*
 * Number of pages in 1 GB.  Don't enable automatic large pages if we
 * have fewer than this many pages.
 */

/*
 * Suggest a page size to be used to map a segment of type maptype and
 * length len.  Returns a page size (not a size code).
 * If remap is non-NULL, fill in a value suggesting whether or not to
 * remap this segment.
 */

/*
 * For non-Panther systems, the following code sets the [D]ISM
 * pagesize to 4M if either of the DTLBs happens to be
 * programmed to a different large pagesize.
 * The Panther code might hit this case as well,
 * if and only if the addr is not aligned to >= 4M.
 */

/*
 * Platform-dependent page scrub call.
 * For now, we rely on the fact that pagezero() will
 */

/*
 * Platform specific large pages for kernel heap support.
 */

/* not applicable to sun4u */