Lines Matching refs:page_t

54  * (allocating page_t's if necessary), and release them into the system.
56 * the hypervisor, saving the page_t's for later use.
70 * For holding spare page_t structures - keep a singly-linked list.
72 * (pagenum >= mfn_count) page_t's. Valid page_t's should be inserted
73 * at the front, and invalid page_t's at the back. Removal should
77 static page_t *bln_spare_list_front, *bln_spare_list_back;
114 * Add the page_t structure to our spare list.
117 balloon_page_add(page_t *pp)
149 * Return a page_t structure from our spare list, or NULL if none are available.
151 static page_t *
154 page_t *pp;
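The fragments above describe the spare-list discipline: a singly-linked list where valid page_t's (pagenum < mfn_count) go at the front, invalid ones at the back, and removal is always from the front. A minimal sketch in plain C of that discipline follows; the stripped-down `page_t` (just a pfn and a next pointer), the `p_pagenum`/`p_next` field names, and the `mfn_count` value are illustrative stand-ins, not the kernel's actual definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel's page_t and mfn_count. */
typedef struct page {
	unsigned long	p_pagenum;	/* pfn; "invalid" if >= mfn_count */
	struct page	*p_next;	/* singly-linked list linkage */
} page_t;

static unsigned long mfn_count = 100;
static page_t *bln_spare_list_front, *bln_spare_list_back;

/* Insert valid page_t's at the front, invalid page_t's at the back. */
static void
balloon_page_add(page_t *pp)
{
	pp->p_next = NULL;
	if (bln_spare_list_front == NULL) {
		bln_spare_list_front = bln_spare_list_back = pp;
	} else if (pp->p_pagenum < mfn_count) {
		pp->p_next = bln_spare_list_front;	/* valid: front */
		bln_spare_list_front = pp;
	} else {
		bln_spare_list_back->p_next = pp;	/* invalid: back */
		bln_spare_list_back = pp;
	}
}

/* Removal is always from the front; returns NULL if none are available. */
static page_t *
balloon_page_retrieve(void)
{
	page_t *pp = bln_spare_list_front;

	if (pp != NULL) {
		bln_spare_list_front = pp->p_next;
		if (bln_spare_list_front == NULL)
			bln_spare_list_back = NULL;
	}
	return (pp);
}
```

Keeping valid page_t's at the front means a retrieval (always from the front) hands back a usable structure whenever one exists, while invalid ones accumulate at the back and are only drained once the valid supply is exhausted.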
188 page_t pages[1];
193 * two parts here. page_t's are handled separately, so they are not included.
198 * We want to add memory, but have no spare page_t structures. Use some of
199 * our new memory for the page_t structures.
209 page_t *page_array;
220 * that will be required to hold page_t structures for all new pages.
224 (PAGESIZE + sizeof (page_t)));
227 * Given the number of page_t structures we need, is there also
231 if ((metapgs << PAGESHIFT) < (totalpgs * sizeof (page_t) +
245 * Figure out the number of page_t structures that can fit in metapgs
247 * This will cause us to initialize more page_t structures than we
251 num_pages = (metasz - MEM_STRUCT_SIZE) / sizeof (page_t);
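The fragments above sketch the bootstrapping arithmetic when new memory must supply its own page_t structures: each usable page effectively costs PAGESIZE bytes of data plus sizeof (page_t) of metadata, and the metadata pages, once set aside, hold a fixed header followed by as many page_t's as fit. A small sketch of that arithmetic is below; PAGESIZE, MEM_STRUCT_SIZE, the page_t size, and the exact rounding are illustrative assumptions, not the driver's literal constants.

```c
#include <assert.h>

#define PAGESIZE	4096UL
#define MEM_STRUCT_SIZE	64UL		/* assumed fixed header size */

typedef struct page { char pad[120]; } page_t;	/* stand-in for page_t */

/*
 * Of totalpgs newly acquired pages, the data pages are those whose
 * combined data + metadata cost fits; the remainder are set aside to
 * hold page_t structures (cf. the PAGESIZE + sizeof (page_t) divisor).
 */
static unsigned long
meta_pages(unsigned long totalpgs)
{
	return (totalpgs -
	    (totalpgs * PAGESIZE) / (PAGESIZE + sizeof (page_t)));
}

/*
 * page_t structures that fit in metasz bytes after the fixed header;
 * this may initialize more page_t's than strictly needed, which is
 * fine - the spares are kept for future memory increases.
 */
static unsigned long
pages_that_fit(unsigned long metasz)
{
	return ((metasz - MEM_STRUCT_SIZE) / sizeof (page_t));
}
```

For example, with these stand-in sizes, 100 new pages would reserve 3 of themselves for metadata, and a single 4096-byte metadata page holds 33 of the 120-byte page_t stand-ins after the 64-byte header.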
258 * space of all valid pfns contiguous. This means we create page_t
265 * Get a VA for the pages that will hold page_t and other structures.
267 * the page_t structures following.
307 * For the rest of the pages, initialize the page_t struct and
382 page_t *pp;
383 page_t *new_list_front, *new_list_back;
462 * we had page_t structures. i contains the number of pages
464 * means we somehow lost page_t's from our local list.
503 page_t *pp;
937 balloon_replace_pages(uint_t nextents, page_t **pp, uint_t addr_bits,