Lines Matching defs:large
213 int pc_free_pages; /* frees into large page free list */
214 int pc_destroy_pages; /* large page destroys */
318 * /etc/system tunable to control large page allocation heuristic.
321 * for large page allocation requests. If a large page is not readily
323 * to create a large page, potentially moving smaller pages around to coalesce
325 * Default value of LPAP_DEFAULT will go to remote freelists if large pages
329 LPAP_DEFAULT, /* default large page allocation policy */
330 LPAP_LOCAL /* local large page allocation policy */
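The enum values above suggest a simple two-policy switch. A minimal compilable sketch of how such a tunable could gate the remote-freelist fallback is below; the enum values come from the listing, but the variable name `lpg_alloc_prefer` and the helper `try_remote_freelists()` are assumptions for illustration, not the kernel's exact code.

```c
#include <assert.h>

/*
 * Large page allocation policy, per the listing above:
 * LPAP_DEFAULT may fall back to remote freelists when no local
 * large page is readily available; LPAP_LOCAL will not.
 */
enum lpap {
	LPAP_DEFAULT,	/* default large page allocation policy */
	LPAP_LOCAL	/* local large page allocation policy */
};

/* Assumed name for the /etc/system tunable holding the policy. */
static enum lpap lpg_alloc_prefer = LPAP_DEFAULT;

/* Hypothetical helper: may the allocator consult remote freelists? */
static int
try_remote_freelists(void)
{
	return (lpg_alloc_prefer == LPAP_DEFAULT);
}
```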
506 * Evenly spread out the PCF counters for large free pages
562 pgcnt_t large = page_get_pagecnt(szc);
618 * Simple case: System doesn't support large pages.
628 * the root page until we have a full large page.
630 if (!IS_P2ALIGNED(pnum, large)) {
633 * If not in a large page,
643 * Link a constituent page into the large page.
649 * When large page is fully formed, free it.
651 if (++cnt == large) {
663 * in a different large page.
665 ASSERT(IS_P2ALIGNED(pnum, large));
670 * a large page, just free the small page.
672 if (num < large) {
679 * Otherwise start a new large page.
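Lines 618-679 describe the free-path loop: skip leading pages until the pfn is aligned to a large page root, link constituent pages one at a time, and free a large page whenever the count reaches `large` (the constituent count from `page_get_pagecnt()`). A standalone sketch of just that counting logic, not the kernel code itself, assuming `large` is a power of two:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Count how many full large pages a run of 'num' small pages starting
 * at pfn 'pnum' can be coalesced into, mirroring the loop described
 * above: pages before the first aligned root (and any trailing
 * remainder) would be freed as small pages instead.
 */
static size_t
count_full_large_pages(size_t pnum, size_t num, size_t large)
{
	size_t full = 0, cnt = 0;

	/* If not in a large page, advance to the next aligned root. */
	while (num > 0 && !((pnum & (large - 1)) == 0)) {
		pnum++;
		num--;
	}
	while (num > 0) {
		if (++cnt == large) {	/* large page fully formed */
			full++;
			cnt = 0;
		}
		pnum++;
		num--;
	}
	return (full);
}
```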
1160 "large page identity doesn't match");
1979 * Used for large page support. It will attempt to allocate
1980 * a large page(s) off the freelist.
1998 * Check if system heavily prefers local large pages over remote
2081 * fill it in with all the constituent pages from the large page. But
2118 * Get a single large page off of the freelists, and set it up for use.
2279 * Try to see whether request is too large to *ever* be
2491 /* large pages should not end up here */
2585 * One or more constituent pages of this large page has been marked
2586 * toxic. Simply demote the large page to PAGESIZE pages and let
2588 * large page free routines (page_free_pages() and page_destroy_pages()).
2636 "or no vnode large page %p", (void *)pp);
2877 * If pp is part of a large page, only the given constituent page is reclaimed
2878 * and the large page it belonged to will be demoted. This can only happen
3014 * page_list_sub will handle the case where pp is a large page.
3064 "large page %p", (void *)pp);
3234 * CacheFS may call page_rename for a large NFS page
3236 * by applications. Demote this large page before
3238 * large pages left lying around.
3294 * If an existing page is a large page, then demote
3295 * it to ensure that no "partial" large pages are
3349 /* for now large pages should not end up here */
4119 * Variant of page_addclaim(), where ppa[] contains the pages of a single large
4130 * Only need to take the page struct lock on the large page root.
4167 * Variant of page_subclaim(), where ppa[] contains the pages of a single large
4178 * Only need to take the page struct lock on the large page root.
4322 * on large memory systems.
4645 * large page. The caller is responsible for passing in a locked
4646 * pp. If pp is a large page, then it succeeds in locking all the
4653 * pages of a large page pp belongs to can't change. To achieve this we
4658 * outside of this large page (i.e. pp belonged to a larger large page) is
4661 * locked a constituent page outside of pp's current large page.
4802 * We must lock all members of this large page or we cannot
5126 * Given a constituent page, try to demote the large page on the freelist.
5177 * Given a constituent page, try to demote the large page.
5281 * within a large page since it will break other code that relies on p_szc
5282 * being the same for all page_t's of a large page). Anonymous pages should
5285 * kernel large pages are demoted or freed the entire large page at a time
5287 * have to be able to demote a large page (i.e. decrease all constituent pages
5289 * we can easily deal with anonymous page demotion the entire large page at a
5291 * the entire large page region with actual demotion only done when pages are
5296 * page_destroy() (we also allow only part of the large page to be SOFTLOCKed
5297 * and therefore pageout should be able to demote a large page by EXCL locking
5325 * hat_page_demote() removes all large mappings that map pp and then decreases
5326 * p_szc starting from the last constituent page of the large page. By working
5327 * from the tail of a large page in pfn decreasing order allows one looking at
5333 * We are guaranteed that all constituent pages of pp's large page belong to
5337 * large mappings to pp even though we don't lock any constituent page except
5394 * Align address and length to (potentially large) page boundary
5402 * Do one (large) page at a time
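Lines 5394 and 5402 describe the usual walk: round the address down and the length up to the (potentially large) page boundary, then iterate one page at a time. A sketch under the common illumos power-of-two macro definitions (`P2ALIGN`, `P2ROUNDUP`); the function name `count_large_pages` is illustrative only:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Power-of-two alignment macros as commonly defined in sysmacros.h. */
#define	P2ALIGN(x, a)	((x) & -(a))
#define	P2ROUNDUP(x, a)	(-(-(x) & -(a)))

/*
 * Align [addr, addr + len) to pgsz boundaries and count how many
 * (large) page steps the one-page-at-a-time loop would take.
 * pgsz must be a power of two.
 */
static size_t
count_large_pages(uintptr_t addr, size_t len, size_t pgsz)
{
	uintptr_t ea = P2ROUNDUP(addr + len, (uintptr_t)pgsz);
	size_t count = 0;

	addr = P2ALIGN(addr, (uintptr_t)pgsz);
	for (; addr < ea; addr += pgsz)		/* one (large) page at a time */
		count++;
	return (count);
}
```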
5479 * Lock constituent pages if this is large page
5526 * large page for migration and unload the mappings of
5528 * large page
5573 * Assume that root page of large page is marked for
5579 * note we don't want to relocate an entire large page if
5670 * unlink constituent pages of a large page.
5904 /* check for wraparound - possible if n is large */
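The wraparound check on line 5904 guards unsigned addition: when a large `n` is added to a starting value, the sum can overflow and wrap below the start. A minimal sketch of that guard (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Return nonzero if start + n wraps around the unsigned range.
 * Unsigned overflow is well defined in C, so the sum being smaller
 * than the start is a reliable wraparound test.
 */
static int
range_wraps(uint64_t start, uint64_t n)
{
	return (start + n < start);
}
```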
6169 * flag to prevent recursion while dealing with large pages.
6595 * Check to see if the page we have is too large.