Lines Matching defs:from

51 * its children lgroups.  Thus, the lgroup hierarchy from a given leaf lgroup
52 * to the root lgroup shows the hardware resources from closest to farthest
53 * from the leaf lgroup such that each successive ancestor lgroup contains
54 * the next nearest resources at the next level of locality from the previous.
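
The ordering described in lines 51-54 amounts to a walk up the parent chain. A
minimal sketch, assuming the lgrp_parent back-pointer carried by the lgroup
structure; the loop itself is illustrative, not code from this file:

	lgrp_t	*lgrp;

	/*
	 * Visit hardware resources in order of increasing distance
	 * from a leaf lgroup by following parent links to the root.
	 */
	for (lgrp = leaf; lgrp != NULL; lgrp = lgrp->lgrp_parent) {
		/* lgrp's resources are the nearest not yet visited */
	}
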
62 * allocation is lgroup aware too, so memory will be allocated from the current
105 * The lgrp_kstat_data array of named kstats is used to extract data from
106 * lgrp_stats and present it to the kstat framework. It is protected from parallel
704 * Called to add lgrp info into cpu structure from cpu_add_unit;
774 * topology now that we know how far it is from other leaf
855 * Allocate from end when hint not set yet because no lgroups
861 * Start looking for next open slot from hint and leave hint
963 * Initialize kstat data. Called from lgrp initialization code.
1062 * Remove this lgroup from its lgroup CPU resources and remove
1063 * lgroup from lgroup topology if it doesn't have any more
1076 * This lgroup isn't empty, so just remove it from CPU
1131 * and update them from scratch since they may have completely
1141 * from each lgroup in its lgroup memory resource set
1181 * is moved from one board to another. The "from" and "to" arguments specify the
1188 * the lgroup topology which is changing as memory moves from one lgroup to
1189 * another. It removes the mnode from the source lgroup and re-inserts it in the
1197 * only two boards (mnodes), lgrp_mem_fini() removes the only mnode from the
1203 * happen with cpu_lock held which prevents lgrp_mem_init() from re-inserting
1208 * lgrp_mem_fini() does not remove the last mnode from lgrp_root->lgrp_mnodes,
1215 lgrp_mem_rename(int mnode, lgrp_handle_t from, lgrp_handle_t to)
1218 * Remove the memory from the source node and add it to the destination
1221 lgrp_mem_fini(mnode, from, B_TRUE);
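
Lines 1215-1221 show only the lines of lgrp_mem_rename() that match "from";
read together with the comment block above them, the whole routine plausibly
reduces to a fini/init pair. A sketch under that assumption (the
lgrp_mem_init() call is inferred and does not appear in the listing):

	void
	lgrp_mem_rename(int mnode, lgrp_handle_t from, lgrp_handle_t to)
	{
		/*
		 * Remove the memory from the source node and add it
		 * to the destination node.
		 */
		lgrp_mem_fini(mnode, from, B_TRUE);
		lgrp_mem_init(mnode, to, B_TRUE);
	}
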
1260 * This routine may be called from a context where we already
1271 * lgrp_mem_fini() refuses to remove the last mnode from the root, so we
1310 * topology now that we know how far it is from other leaf
1348 * Add memory node to lgroup and remove lgroup from ones that need
1404 * This routine may be called from a context where we already
1418 * Delete memory node from lgroups which contain it
1432 * Avoid removing the last mnode from the root in the DR
1440 * Remove memory node from lgroup.
1469 * Remove this lgroup from lgroup topology if it does not contain any
1488 * Remove lgroup from memory resources of any lgroups that
1634 * "cpu", and its lpl from going away across a call to this function.
1913 * Delete resource lpl_leaf from rset of lpl_target, assuming it's there.
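
Line 1913 documents deleting lpl_leaf from lpl_target's resource set. A hedged
sketch of such a deletion, assuming the lpl_rset array and lpl_nrset count used
by the lpl code; the file's own bookkeeping may differ:

	int	i;

	/* find lpl_leaf in the target's rset */
	for (i = 0; i < lpl_target->lpl_nrset; i++) {
		if (lpl_target->lpl_rset[i] == lpl_leaf)
			break;
	}
	if (i == lpl_target->lpl_nrset)
		return;		/* not present, nothing to delete */

	/* shift the remaining entries down over the deleted slot */
	for (; i < lpl_target->lpl_nrset - 1; i++)
		lpl_target->lpl_rset[i] = lpl_target->lpl_rset[i + 1];
	lpl_target->lpl_nrset--;
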
2396 * This routine is clever enough that it can correctly add resources from the
2493 * remove an lpl from the hierarchy of resources, clearing its state when
2511 * Don't attempt to remove from lgrps that aren't there, that
2512 * don't contain our leaf, or from the leaf itself. (We do that
2633 * remove a cpu from a partition in terms of lgrp load avg bookkeeping
2644 * from the per-cpu lpl list.
2650 * cpu partition in question as no longer containing resources from the lgrp of
2687 /* unlink cpu from lists of cpus in lpl */
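
The unlink at line 2687 is a circular doubly-linked-list removal. A minimal
sketch, assuming the cpu_next_lpl/cpu_prev_lpl links and the lpl_cpus list head
implied by the surrounding code (a one-element list additionally needs the head
cleared once the cpu count drops to zero):

	/* splice cp out of the circular list of cpus in its lpl */
	cp->cpu_prev_lpl->cpu_next_lpl = cp->cpu_next_lpl;
	cp->cpu_next_lpl->cpu_prev_lpl = cp->cpu_prev_lpl;

	/* if cp was the list head, advance the head past it */
	if (lpl->lpl_cpus == cp)
		lpl->lpl_cpus = cp->cpu_next_lpl;
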
2708 * We rely on the fact that this routine is called from the clock thread
2736 /* ASSERT (called from clock level) */
2812 * lpl_topo_bootstrap is only called once from cpupart_initialize_default() to
2821 * 1) Copies all fields from lpl_bootstrap to the target.
2859 * Copy all fields from lpl, except for the rset,
2924 * from until it is properly initialized.
3006 * partitions changing out from under us and assumes that the given thread is
3067 * NOTE: Assumes that thread is protected from going away and its
3252 * doesn't get removed from t's partition
3255 * with cpus paused (such as from cpu_offline).
3292 * to account for it being moved from its old lgroup.
3438 * Return lgroup memory allocation policy given advice from madvise(3C)
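
Line 3438 refers to translating madvise(3C) access advice into an lgroup memory
allocation policy. A sketch of that mapping, assuming the MADV_ACCESS_* advice
values and LGRP_MEM_POLICY_* constants from the Solaris/illumos headers; the
helper name and the default fallback are illustrative:

	lgrp_mem_policy_t
	madv_to_policy(uchar_t advice)	/* hypothetical helper */
	{
		switch (advice) {
		case MADV_ACCESS_LWP:	/* next touched by one thread */
			return (LGRP_MEM_POLICY_NEXT);
		case MADV_ACCESS_MANY:	/* touched by many threads */
			return (LGRP_MEM_POLICY_RANDOM);
		default:
			return (LGRP_MEM_POLICY_DEFAULT);
		}
	}
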
3570 * Get policy segment tree from anon_map or vnode and use specified
3701 * allocate from the root.
3704 * the current CPU from going away before lgrp is found.
3845 * offset from home lgroup to choose for
3846 * next lgroup to allocate memory from
4160 * Get policy info from anon_map
4170 * Get policy info from vnode
4219 * Need to maintain hold on writer's lock to keep tree from
4220 * changing out from under us
4377 * Return the best memnode from which to allocate memory given