GMMR0.cpp revision 2f07e540c8781a499ff1746ea3f6dd883bba33de
 * - Shared  - Readonly page that is used by one or more VMs and treated
 * - Free    - Not used by anyone.
 *
 * For the page replacement operations (sharing, defragmenting and freeing)
 * to be somewhat efficient, private pages need to be associated with a
 * particular page in a particular VM.
 *
 * Tracking the usage of shared pages is impractical and expensive, so we'll
 * settle for a reference counting system instead.
 *
 * Free pages will be chained on LIFOs.
 *
 * On 64-bit systems we will use a 64-bit bitfield per page, while on 32-bit
 * systems a 32-bit bitfield will have to suffice because of address space
 * limitations. The GMMPAGE structure shows the details.
 *
 *
 * @section sec_gmm_alloc_strat     Page Allocation Strategy
 *
 * The strategy for allocating pages has to take fragmentation and shared
 * pages into account, or we may end up with 2000 chunks with only
 * a few pages in each. The fragmentation wrt shared pages is that unlike
 * private pages they cannot easily be reallocated. Private pages can be
 * reallocated by a defragmentation thread in the same manner that sharing
 * is done.
 *
 * The first approach is to manage the free pages in two sets depending on
 * whether they are mainly for the allocation of shared or private pages.
 * In the initial implementation there will be almost no possibility for
 * mixing shared and private pages in the same chunk (only if we're really
 * stressed on memory), but when we implement forking of VMs and have to
 * deal with lots of COW pages it'll start getting kind of interesting.
 *
 * The sets are lists of chunks with approximately the same number of
 * free pages. Say the chunk size is 1MB, meaning 256 pages, and a set
 * consists of 16 lists. So, the first list will contain the chunks with
 * 1-7 free pages, the second covers 8-15, and so on. The chunks will be
 * moved between the lists as pages are freed up or allocated.
 *
 *
 * @section sec_gmm_costs           Costs
 *
 * The per page cost in kernel space is 32-bit plus whatever RTR0MEMOBJ
 * entails. In addition there is the chunk cost of approximately
 * (sizeof(RTR0MEMOBJ) + sizeof(CHUNK)) / 2^CHUNK_SHIFT bytes per page.
 *
 * On Windows the per page RTR0MEMOBJ cost is 32-bit on 32-bit windows
 * and 64-bit on 64-bit windows (a PFN_NUMBER in the MDL). So, 64-bit per page.
 * The cost on Linux is identical, but here it's because of sizeof(struct page *).
 *
 *
 * @section sec_gmm_legacy          Legacy Mode for Non-Tier-1 Platforms
 *
 * In legacy mode the page source is locked user pages and not
 * RTR0MemObjAllocPhysNC; this means that a page can only be allocated
 * by the VM that locked it. We will make no attempt at implementing
 * page sharing on these systems, just do enough to make it all work.
 *
 *
 * @subsection sub_gmm_locking      Serializing
 *
 * One simple fast mutex will be employed in the initial implementation, not
 * two as mentioned in @ref subsec_pgmPhys_Serializing.
 *
 * @see subsec_pgmPhys_Serializing
 *
 *
 * @section sec_gmm_overcommit      Memory Over-Commitment Management
 *
 * The GVM will have to do the system wide memory over-commitment
 * management. My current ideas are:
 *      - Per VM oc policy that indicates how much to initially commit
 *        to it and what to do in an out-of-memory situation.
 *      - Prevent overtaxing the host.
 *
 * There are some challenges here, the main ones are configurability and
 * security. Should we for instance permit anyone to request 100% memory
 * commitment? Who should be allowed to do runtime adjustments of the
 * config? And how to prevent these settings from being lost when the last
 * VM process exits? The solution is probably to have an optional root
 * daemon that will keep VMMR0.r0 in memory and enable the security measures.
 *
 * This will not be implemented this week. :-)
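/*
 * As an illustration of the free set organisation described under "Page
 * Allocation Strategy" above, the following is a minimal sketch of how a
 * chunk's free page count could be mapped to a list index.  The constants
 * and the function are made up for the example (they follow the 1-7 / 8-15
 * ranges given above, i.e. a bucket width of 8) and are not the actual GMM
 * defines.
 */
#define SKETCH_CHUNK_NUM_PAGES  256     /* 1MB chunk, 4KB pages */
#define SKETCH_FREE_SET_SHIFT   3       /* bucket width 8: 1-7, 8-15, ... */
#define SKETCH_FREE_SET_LISTS   (SKETCH_CHUNK_NUM_PAGES >> SKETCH_FREE_SET_SHIFT)

/** Maps a chunk's free page count to the free set list it should be linked on
 * (illustrative only). */
static unsigned sketchFreeSetIndex(unsigned cFreePages)
{
    unsigned iList = cFreePages >> SKETCH_FREE_SET_SHIFT;
    return iList < SKETCH_FREE_SET_LISTS ? iList : SKETCH_FREE_SET_LISTS - 1;
}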
/*******************************************************************************
*******************************************************************************/

/*******************************************************************************
*   Structures and Typedefs                                                    *
*******************************************************************************/
/** Pointer to set of free chunks. */
/** Pointer to a GMM allocation chunk. */

/**
 * The per-page tracking structure employed by the GMM.
 *
 * On 32-bit hosts some trickery is necessary to compress all
 * the information into 32-bits. When the fSharedFree member is set,
 * the 30th bit decides whether it's a free page or not.
 *
 * Because of the different layout on 32-bit and 64-bit hosts, macros
 * are used to get and set some of the data.
 */
/* The 64-bit host layout: */
/** Unsigned integer view. */
/** The view of a private page. */
/** The guest page frame number. (Max addressable: 2 ^ 44 - 16) */
/** The GVM handle. (64K VMs) */
/** The view of a shared page. */
/** The reference count. */
/** Reserved. Checksum or something? Two hGVMs for forking? */
/** The view of a free page. */
/** The index of the next page in the free list. */
/** Reserved. Checksum or something? */

/* The 32-bit host layout: */
/** Unsigned integer view. */
/** The view of a private page. */
/** The guest page frame number. (Max addressable: 2 ^ 36) */
/** The GVM handle. (127 VMs) */
/** The top page state bit, MBZ. */
/** The view of a shared page. */
/** The reference count. */
/** The view of a free page. */
/** The index of the next page in the free list. */

/** Pointer to a GMMPAGE. */

/** @name The Page States.
 */
/** A private page - alternative value used on the 32-bit implementation.
 * This will never be used on 64-bit hosts. */

/** @def GMM_PAGE_IS_PRIVATE
 * @returns true if private, false if not.
 * @param   pPage       The GMM page.
 */
/** @def GMM_PAGE_IS_SHARED
 * @returns true if shared, false if not.
 * @param   pPage       The GMM page.
 */
/** @def GMM_PAGE_IS_FREE
 * @returns true if free, false if not.
 * @param   pPage       The GMM page.
 */

/** @def GMM_PAGE_PFN_END
 * The end of the valid guest pfn range, {0..GMM_PAGE_PFN_END-1}.
 * @remark Some of the values outside the range have special meaning, see related \#defines.
 */
/** @def GMM_PAGE_PFN_UNSHAREABLE
 * Indicates that this page isn't used for normal guest memory and thus isn't shareable.
 */
/** The end of the valid guest physical address as it applies to GMM pages.
 * This must reflect the constraints imposed by the RTGCPHYS type and
 * the guest page frame number used internally in GMMPAGE. */
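/*
 * To make the GMMPAGE field comments above more concrete, here is an
 * illustrative sketch of what the 64-bit layout could look like.  The field
 * widths and names are assumptions for the example, not the actual GMMPAGE
 * definition.
 */
#include <stdint.h>

typedef union SKETCHGMMPAGE
{
    /** Unsigned integer view. */
    uint64_t u;

    /** The view of a private page. */
    struct
    {
        uint64_t pfn         : 32;  /* guest page frame number (2^32 * 4K = 2^44 addressable) */
        uint64_t hGVM        : 16;  /* owning VM handle (64K VMs) */
        uint64_t u14Reserved : 14;
        uint64_t u2State     : 2;   /* page state */
    } Private;

    /** The view of a shared page. */
    struct
    {
        uint64_t cRefs       : 32;  /* reference count */
        uint64_t u30Reserved : 30;  /* checksum? two hGVMs for forking? */
        uint64_t u2State     : 2;
    } Shared;

    /** The view of a free page. */
    struct
    {
        uint64_t iNext       : 16;  /* index of the next page in the free list */
        uint64_t u46Reserved : 46;
        uint64_t u2State     : 2;
    } Free;
} SKETCHGMMPAGE;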
/**
 * A GMM allocation chunk ring-3 mapping record.
 *
 * This should really be associated with a session and not a VM, but
 * it's simpler to associate it with a VM and clean it up with the VM object.
 */
/** The mapping object. */
/** The VM owning the mapping. */
/** Pointer to a GMM allocation chunk mapping. */

/**
 * A GMM allocation chunk.
 */
/** The Key is the chunk ID. */
/** Either from RTR0MemObjAllocPhysNC or RTR0MemObjLockUser depending on
 * what the host can dish up with. */
/** Pointer to the next chunk in the free list. */
/** Pointer to the previous chunk in the free list. */
/** Pointer to the free set this chunk belongs to. NULL for
 * chunks with no free pages. */
/** Pointer to an array of mappings. */
/** The number of mappings. */
/** The head of the list of free pages. UINT16_MAX is the NIL value. */
/** The number of free pages. */
/** The GVM handle of the VM that first allocated pages from this chunk, this
 * is used as a preference when there are several chunks to choose from.
 * When in legacy mode this isn't a preference any longer. */
/** The number of private pages. */
/** The number of shared pages. */
/** Reserved for later. */

/**
 * An allocation chunk TLB entry.
 */
/** Pointer to the chunk. */
/** Pointer to an allocation chunk TLB entry. */

/** The number of entries in the allocation chunk TLB. */
/** Gets the TLB entry index for the given Chunk ID. */

/**
 * An allocation chunk TLB.
 */
/** Pointer to an allocation chunk TLB. */

/** The number of lists in a set. */
/** The GMMCHUNK::cFree shift count. */
/** The GMMCHUNK::cFree mask for use when considering relinking a chunk. */
/** The number of free pages in the set. */

/** Magic / eye catcher. GMM_MAGIC */
/** The fast mutex protecting the GMM.
 * More fine grained locking can be implemented later if necessary. */
/** The private free set. */
/** The shared free set. */
/** The maximum number of pages we're allowed to allocate.
 * @gcfgm   32-bit GMM/PctPages     Relative to the number of host pages. */
/** The number of pages that have been reserved.
 * The deal is that cReservedPages - cOverCommittedPages <= cMaxPages. */
/** The number of pages that we have over-committed in reservations. */
/** The number of actually allocated (committed if you like) pages. */
/** The number of pages that are shared. A subset of cAllocatedPages. */
/** The number of shared pages that have been left behind by
 * VMs not doing proper cleanups. */
/** The number of allocation chunks.
 * (The number of pages we've allocated from the host can be derived from this.) */
/** The number of currently ballooned pages. */
/** The legacy mode indicator.
 * This is determined at initialization time. */
/** The number of registered VMs. */
/** The previously allocated Chunk ID.
 * Used as a hint to avoid scanning the whole bitmap. */
/** Chunk ID allocation bitmap.
 * Bits of allocated IDs are set, free ones are cleared.
 * The NIL id (0) is marked allocated. */

/** Pointer to the GMM instance. */
/** The value of GMM::u32Magic (Katsuhiro Otomo). */

/*******************************************************************************
*******************************************************************************/
/** Pointer to the GMM instance data. */

/** Macro for obtaining and validating the g_pGMM pointer.
 * On failure it will return from the invoking function with the specified return value.
 * @param   pGMM    The name of the pGMM variable.
 * @param   rc      The return value on failure. Use VERR_INTERNAL_ERROR for
 */
/** Macro for obtaining and validating the g_pGMM pointer, void function variant.
 * On failure it will return from the invoking function.
 * @param   pGMM    The name of the pGMM variable.
 */

/*******************************************************************************
*******************************************************************************/

/**
 * Initializes the GMM component.
 *
 * This is called when the VMMR0.r0 module is loaded and protected by the
 *
 * @returns VBox status code.
 */
/* Allocate the instance data and the lock(s). */
/* Check and see if RTR0MemObjAllocPhysNC works. */
SUPR0Printf("GMMR0Init: RTR0MemObjAllocPhysNC(,64K,Any) -> %d!\n", rc);
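/*
 * The comments above only note that GMMR0Init checks whether
 * RTR0MemObjAllocPhysNC works.  Below is a hedged sketch of what such a probe
 * could look like; the fLegacyMode handling and the error paths are
 * assumptions for illustration, not the actual GMMR0Init code.
 */
RTR0MEMOBJ MemObj;
int rc = RTR0MemObjAllocPhysNC(&MemObj, _64K, NIL_RTHCPHYS);
if (RT_SUCCESS(rc))
{
    pGMM->fLegacyMode = false;          /* the host can hand out physical pages */
    rc = RTR0MemObjFree(MemObj, true /* fFreeMappings */);
    AssertRC(rc);
}
else if (rc == VERR_NOT_SUPPORTED)
    pGMM->fLegacyMode = true;           /* fall back to locking user pages */
else
    SUPR0Printf("GMMR0Init: RTR0MemObjAllocPhysNC(,64K,Any) -> %d!\n", rc);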
/**
 * Terminates the GMM component.
 */
/*
 * Take care / be paranoid...
 */
/*
 * Undo what init did and free any resources we've acquired.
 */
/* Destroy the fundamentals. */
/* free any chunks still hanging around. */
/* finally the instance data itself. */

/**
 * RTAvlU32Destroy callback.
 *
 * @param   pNode   The node to destroy.
 * @param   pvGMM   The GMM handle.
 */
SUPR0Printf(
"GMMR0Term: %p/%#x: cFree=%d cPrivate=%d cShared=%d cMappings=%d\n",
pChunk,
SUPR0Printf(
"GMMR0Term: %p/%#x: RTRMemObjFree(%p,true) -> %d (cMappings=%d)\n",
pChunk,
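/*
 * A hedged sketch of what the RTAvlU32Destroy callback above might do per
 * chunk when the GMM is torn down: complain about leaks and free the backing
 * memory object.  The structure fields and the exact clean-up steps are
 * assumptions for illustration.
 */
static DECLCALLBACK(int) sketchTermDestroyChunk(PAVLU32NODECORE pNode, void *pvGMM)
{
    PGMMCHUNK pChunk = (PGMMCHUNK)pNode;

    /* Complain about anything that should have been cleaned up already. */
    if (pChunk->cFree != (GMM_CHUNK_SIZE >> PAGE_SHIFT))
        SUPR0Printf("GMMR0Term: %p/%#x: cFree=%d cPrivate=%d cShared=%d cMappings=%d\n",
                    pChunk, pChunk->Core.Key, pChunk->cFree, pChunk->cPrivate,
                    pChunk->cShared, pChunk->cMappings);

    /* Free the backing memory object, including any lingering mappings. */
    int rc = RTR0MemObjFree(pChunk->MemObj, true /* fFreeMappings */);
    if (RT_FAILURE(rc))
        SUPR0Printf("GMMR0Term: %p/%#x: RTR0MemObjFree(%p,true) -> %d (cMappings=%d)\n",
                    pChunk, pChunk->Core.Key, pChunk->MemObj, rc, pChunk->cMappings);
    pChunk->MemObj = NIL_RTR0MEMOBJ;

    RTMemFree(pChunk->paMappings);
    RTMemFree(pChunk);
    NOREF(pvGMM);
    return 0;
}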
/**
 * Initializes the per-VM data for the GMM.
 *
 * This is called from within the GVMM lock (from GVMMR0CreateVM)
 * and should only initialize the data members so GMMR0CleanupVM
 * can deal with them. We reserve no memory or anything here,
 * that's done later in GMMR0InitVM.
 *
 * @param   pGVM    Pointer to the Global VM structure.
 */

/**
 * Cleans up when a VM is terminating.
 *
 * @param   pGVM    Pointer to the Global VM structure.
 */
/*
 * The policy is 'INVALID' until the initial reservation
 * request has been serviced.
 */
/*
 * If it's the last VM around, we can skip walking all the chunks looking
 * for the pages owned by this VM and instead flush the whole shebang.
 *
 * This takes care of the eventuality that a VM has left shared page
 * references behind (shouldn't happen of course, but you never know).
 */
#if 0 /* disabled so it won't hide bugs. */
/*
 * Walk the entire pool looking for pages that belong to this VM
 * and left over mappings. (This'll only catch private pages, shared
 * pages will be 'left behind'.)
 */
/* account for shared pages that weren't freed. */
/*
 * Update the over-commitment management statistics.
 */
/** @todo Update GMM->cOverCommittedPages */
LogFlow(("GMMR0CleanupVM: returns\n"));
/**
 * RTAvlU32DoWithAll callback.
 *
 * @param   pNode   The node to search.
 * @param   pvGVM   Pointer to the shared VM structure.
 */
/*
 * Look for pages belonging to the VM.
 * (Perform some internal checks while we're scanning.)
 */
/*
 * The reason for not using gmmR0FreePrivatePage here is that we
 * must *not* cause the chunk to be freed from under us - we're in
 */
SUPR0Printf(
"gmmR0CleanupVMScanChunk: Chunk %p/%#x has bogus stats - free=%d/%d private=%d/%d shared=%d/%d\n",
/*
 * Look for the mapping belonging to the terminating VM.
 */
SUPR0Printf(
"gmmR0CleanupVMScanChunk: %p/%#x: mapping #%x: RTRMemObjFree(%p,false) -> %d \n",
/*
 * If not in legacy mode, we should reset the hGVM field
 * if it has our handle in it.
 */
SUPR0Printf(
"gmmR0CleanupVMScanChunk: %p/%#x: cFree=%#x - it should be 0 in legacy mode!\n",
/**
 * RTAvlU32Destroy callback for GMMR0CleanupVM.
 *
 * @param   pNode   The node (allocation chunk) to destroy.
 * @param   pvGVM   Pointer to the shared VM structure.
 */
SUPR0Printf(
"gmmR0CleanupVMDestroyChunk: %p/%#x: mapping #%x: pGVM=%p exepcted %p\n",
pChunk,
SUPR0Printf(
"gmmR0CleanupVMDestroyChunk: %p/%#x: mapping #%x: RTRMemObjFree(%p,false) -> %d \n",
pChunk,
SUPR0Printf(
"gmmR0CleanupVMDestroyChunk: %p/%#x: RTRMemObjFree(%p,true) -> %d (cMappings=%d)\n",
pChunk,
/**
 * The initial resource reservations.
 *
 * This will make memory reservations according to policy and priority. If there aren't
 * sufficient resources available to sustain the VM this function will fail and all
 * future allocation requests will fail as well.
 *
 * These are just the initial reservations made very early during the VM creation
 * process and will be adjusted later in the GMMR0UpdateReservation call after the
 * ring-3 init has completed.
 *
 * @returns VBox status code.
 * @retval  VERR_GMM_NOT_SUFFICENT_MEMORY
 * @param   pVM             Pointer to the shared VM structure.
 * @param   cBasePages      The number of pages that may be allocated for the base RAM and ROMs.
 *                          This does not include MMIO2 and similar.
 * @param   cShadowPages    The number of pages that may be allocated for shadow paging structures.
 * @param   cFixedPages     The number of pages that may be allocated for fixed objects like the
 *                          hyper heap, MMIO2 and similar.
 * @param   enmPolicy       The OC policy to use on this VM.
 * @param   enmPriority     The priority in an out-of-memory situation.
 * @thread  The creator thread / EMT.
 */
LogFlow((
"GMMR0InitialReservation: pVM=%p cBasePages=%#llx cShadowPages=%#x cFixedPages=%#x enmPolicy=%d enmPriority=%d\n",
/*
 * Validate, get basics and take the semaphore.
 */
/*
 * Check if we can accommodate this.
 */
LogFlow((
"GMMR0InitialReservation: returns %Rrc\n",
rc));
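/*
 * The "check if we can accommodate this" step above boils down to reservation
 * accounting.  A minimal sketch under assumed field names (cReservedPages,
 * cOverCommittedPages and cMaxPages follow the instance data comments earlier;
 * the per-VM Reserved struct is an assumption):
 */
uint64_t cNewReserved = pGMM->cReservedPages + cBasePages + cShadowPages + cFixedPages;
if (cNewReserved - pGMM->cOverCommittedPages <= pGMM->cMaxPages)
{
    /* Record the per-VM reservation (the policy and priority would be stored here too). */
    pGVM->gmm.s.Reserved.cBasePages   = cBasePages;
    pGVM->gmm.s.Reserved.cShadowPages = cShadowPages;
    pGVM->gmm.s.Reserved.cFixedPages  = cFixedPages;
    pGMM->cReservedPages              = cNewReserved;
    rc = VINF_SUCCESS;
}
else
    rc = VERR_GMM_NOT_SUFFICENT_MEMORY;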
/**
 * VMMR0 request wrapper for GMMR0InitialReservation.
 *
 * @returns see GMMR0InitialReservation.
 * @param   pVM     Pointer to the shared VM structure.
 * @param   pReq    The request packet.
 */
/*
 * Validate input and pass it on.
 */

/**
 * This updates the memory reservation with the additional MMIO2 and ROM pages.
 *
 * @returns VBox status code.
 * @retval  VERR_GMM_NOT_SUFFICENT_MEMORY
 * @param   pVM             Pointer to the shared VM structure.
 * @param   cBasePages      The number of pages that may be allocated for the base RAM and ROMs.
 *                          This does not include MMIO2 and similar.
 * @param   cShadowPages    The number of pages that may be allocated for shadow paging structures.
 * @param   cFixedPages     The number of pages that may be allocated for fixed objects like the
 *                          hyper heap, MMIO2 and similar.
 * @param   enmPolicy       The OC policy to use on this VM.
 * @param   enmPriority     The priority in an out-of-memory situation.
 */
LogFlow((
"GMMR0UpdateReservation: pVM=%p cBasePages=%#llx cShadowPages=%#x cFixedPages=%#x\n",
/*
 * Validate, get basics and take the semaphore.
 */
/*
 * Check if we can accommodate this.
 */
LogFlow((
"GMMR0UpdateReservation: returns %Rrc\n",
rc));
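/*
 * Both request wrappers follow the same shape: check the packet, then call
 * the real function.  A sketch using the initial reservation as the example;
 * the request structure layout shown here is an assumption derived from the
 * GMMR0InitialReservation parameters above.
 */
GMMR0DECL(int) sketchInitialReservationReq(PVM pVM, PGMMINITIALRESERVATIONREQ pReq)
{
    /* Validate input and pass it on. */
    AssertPtrReturn(pVM, VERR_INVALID_POINTER);
    AssertPtrReturn(pReq, VERR_INVALID_POINTER);
    AssertMsgReturn(pReq->Hdr.cbReq == sizeof(*pReq),
                    ("%#x != %#x\n", pReq->Hdr.cbReq, sizeof(*pReq)),
                    VERR_INVALID_PARAMETER);

    return GMMR0InitialReservation(pVM, pReq->cBasePages, pReq->cShadowPages,
                                   pReq->cFixedPages, pReq->enmPolicy, pReq->enmPriority);
}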
/**
 * VMMR0 request wrapper for GMMR0UpdateReservation.
 *
 * @returns see GMMR0UpdateReservation.
 * @param   pVM     Pointer to the shared VM structure.
 * @param   pReq    The request packet.
 */
/*
 * Validate input and pass it on.
 */

/**
 * Looks up a chunk in the tree and fills in the TLB entry for it.
 *
 * This is not expected to fail and will bitch if it does.
 *
 * @returns Pointer to the allocation chunk, NULL if not found.
 * @param   pGMM    Pointer to the GMM instance.
 * @param   idChunk The ID of the chunk to find.
 * @param   pTlbe   Pointer to the TLB entry.
 */

/**
 * Finds an allocation chunk.
 *
 * This is not expected to fail and will bitch if it does.
 *
 * @returns Pointer to the allocation chunk, NULL if not found.
 * @param   pGMM    Pointer to the GMM instance.
 * @param   idChunk The ID of the chunk to find.
 */
/*
 * Do a TLB lookup, branch if not in the TLB.
 */

/**
 * Finds a page.
 *
 * This is not expected to fail and will bitch if it does.
 *
 * @returns Pointer to the page, NULL if not found.
 * @param   pGMM    Pointer to the GMM instance.
 * @param   idPage  The ID of the page to find.
 */

/**
 * Unlinks the chunk from the free list it's currently on (if any).
 *
 * @param   pChunk  The allocation chunk.
 */

/**
 * Links the chunk onto the appropriate free list in the specified free set.
 *
 * If no free entries, it's not linked into any list.
 *
 * @param   pChunk  The allocation chunk.
 * @param   pSet    The free set.
 */

/**
 * Frees a Chunk ID.
 *
 * @param   pGMM    Pointer to the GMM instance.
 * @param   idChunk The Chunk ID to free.
 */

/**
 * Allocates a new Chunk ID.
 *
 * @param   pGMM    Pointer to the GMM instance.
 */
/*
 * Try the next sequential one.
 */
#if 0 /* test the fallback first */
/*
 * Scan sequentially from the last one.
 */
/*
 * Ok, scan from the start.
 * We're not racing anyone, so there is no need to expect failures or have restart loops.
 */

/**
 * Registers a new chunk of memory.
 *
 * This is called by both gmmR0AllocateOneChunk and GMMR0SeedChunk.
 *
 * @returns VBox status code.
 * @param   pGMM    Pointer to the GMM instance.
 * @param   pSet    Pointer to the set.
 * @param   MemObj  The memory object for the chunk.
 * @param   hGVM    The hGVM value. (Only used by GMMR0SeedChunk.)
 */
/*
 * Allocate a Chunk ID and insert it into the tree.
 * It doesn't cost anything to be careful here.
 */

/**
 * Allocate one new chunk and add it to the specified free set.
 *
 * @returns VBox status code.
 * @param   pGMM    Pointer to the GMM instance.
 * @param   pSet    Pointer to the set.
 */

/**
 * Attempts to allocate more pages until the requested amount is met.
 *
 * @returns VBox status code.
 * @param   pGMM    Pointer to the GMM instance data.
 * @param   pSet    Pointer to the free set to grow.
 * @param   cPages  The number of pages needed.
 */
/*
 * Try steal free chunks from the other set first. (Only take 100% free chunks.)
 */
/*
 * If we need still more pages, allocate new chunks.
 */

/**
 * Worker for gmmR0AllocatePages.
 *
 * @param   pGMM        Pointer to the GMM instance data.
 * @param   hGVM        The GVM handle of the VM requesting memory.
 * @param   pChunk      The chunk to allocate it from.
 * @param   pPageDesc   The page descriptor.
 */
/* update the chunk stats. */
/* unlink the first free page. */
/* make the page private. */
/* update the page descriptor. */

/**
 * Common worker for GMMR0AllocateHandyPages and GMMR0AllocatePages.
 *
 * @returns VBox status code:
 * @param   pGMM        Pointer to the GMM instance data.
 * @param   pGVM        Pointer to the shared VM structure.
 * @param   cPages      The number of pages to allocate.
 * @param   paPages     Pointer to the page descriptors.
 *                      See GMMPAGEDESC for details on what is expected on input.
 * @param   enmAccount  The account to charge.
 */
/*
 * Check allocation limits.
 */
Log((
"gmmR0AllocatePages: Reserved=%#llx Allocated+Requested=%#llx+%#x!\n",
Log((
"gmmR0AllocatePages: Reserved=%#llx Allocated+Requested=%#llx+%#x!\n",
Log((
"gmmR0AllocatePages: Reserved=%#llx Allocated+Requested=%#llx+%#x!\n",
/*
 * Check if we need to allocate more memory or not. In legacy mode this is
 * a bit extra work but it's easier to do it upfront than bailing out later.
 */
/* first round, pick from chunks with an affinity to the VM. */
/* second round, take all free pages in this list. */
/*
 * Check if we've reached some threshold and should kick one or two VMs and tell
 * them to inflate their balloons a bit more... later.
 */

/**
 * Updates the previous allocations and allocates more pages.
 *
 * The handy pages are always taken from the 'base' memory account.
 *
 * @returns VBox status code:
 * @param   pVM             Pointer to the shared VM structure.
 * @param   cPagesToUpdate  The number of pages to update (starting from the head).
 * @param   cPagesToAlloc   The number of pages to allocate (starting from the head).
 * @param   paPages         The array of page descriptors.
 *                          See GMMPAGEDESC for details on what is expected on input.
 */
LogFlow((
"GMMR0AllocateHandyPages: pVM=%p cPagesToUpdate=%#x cPagesToAlloc=%#x paPages=%p\n",
/*
 * Validate, get basics and take the semaphore.
 * (This is a relatively busy path, so make predictions where possible.)
 */
/*|| paPages[iPage].idPage == NIL_GMM_PAGEID*/,
/*|| paPages[iPage].idSharedPage == NIL_GMM_PAGEID*/,
/* No allocations before the initial reservation has been made! */
/*
 * Stop on the first error.
 */
/* else: NIL_RTHCPHYS nothing */
Log((
"GMMR0AllocateHandyPages: #%#x/%#x: Not owner! hGVM=%#x hSelf=%#x\n",
/*
 * Join paths with GMMR0AllocatePages for the allocation.
 */
LogFlow((
"GMMR0AllocateHandyPages: returns %Rrc\n",
rc));
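/*
 * A sketch of the two allocation rounds mentioned in the gmmR0AllocatePages
 * comments above: first prefer chunks with an affinity to the requesting VM,
 * then take free pages from any chunk in the list.  Types, fields and the
 * helper are assumptions.
 */
static uint32_t sketchAllocateFromList(SKETCHGMMCHUNK *pList, uint16_t hGVM,
                                       uint32_t cPagesLeft, SKETCHGMMPAGEDESC *paPages)
{
    SKETCHGMMCHUNK *pChunk;
    SKETCHGMMCHUNK *pNext;

    /* first round, pick from chunks with an affinity to the VM. */
    for (pChunk = pList; pChunk && cPagesLeft; pChunk = pNext)
    {
        pNext = pChunk->pFreeNext; /* the chunk may get relinked while allocating */
        if (pChunk->hGVM == hGVM)
            cPagesLeft = sketchAllocatePagesFromChunk(pChunk, hGVM, cPagesLeft, paPages);
    }

    /* second round, take all free pages in this list. */
    for (pChunk = pList; pChunk && cPagesLeft; pChunk = pNext)
    {
        pNext = pChunk->pFreeNext;
        cPagesLeft = sketchAllocatePagesFromChunk(pChunk, hGVM, cPagesLeft, paPages);
    }
    return cPagesLeft;
}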
/**
 * Allocate one or more pages.
 *
 * This is typically used for ROMs and MMIO2 (VRAM) during VM creation.
 *
 * @returns VBox status code:
 * @param   pVM         Pointer to the shared VM structure.
 * @param   cPages      The number of pages to allocate.
 * @param   paPages     Pointer to the page descriptors.
 *                      See GMMPAGEDESC for details on what is expected on input.
 * @param   enmAccount  The account to charge.
 */
/*
 * Validate, get basics and take the semaphore.
 */
/* No allocations before the initial reservation has been made! */
LogFlow((
"GMMR0UpdateReservation: returns %Rrc\n",
rc));
/**
 * VMMR0 request wrapper for GMMR0AllocatePages.
 *
 * @returns see GMMR0AllocatePages.
 * @param   pVM     Pointer to the shared VM structure.
 * @param   pReq    The request packet.
 */
/*
 * Validate input and pass it on.
 */

/**
 * Frees a chunk, giving it back to the host OS.
 *
 * @param   pGMM    Pointer to the GMM instance.
 * @param   pChunk  The chunk to free.
 */
/*
 * If there are current mappings of the chunk, then request the
 * VMs to unmap them. Reposition the chunk in the free list so
 * it won't be a likely candidate for allocations.
 */
/** @todo R0 -> VM request */
/*
 * Try free the memory object.
 */
/*
 * Unlink it from everywhere.
 */
/*
 * Free the Chunk ID and struct.
 */

/**
 * The caller does all the statistic decrementing, we do all the incrementing.
 *
 * @param   pGMM    Pointer to the GMM instance data.
 * @param   pChunk  Pointer to the chunk this page belongs to.
 * @param   pPage   Pointer to the page.
 */
/*
 * Put the page on the free list and relink the chunk if necessary.
 */
/*
 * If the chunk becomes empty, consider giving memory back to the host OS.
 *
 * The current strategy is to try give it back if there are other chunks
 * in this free list, meaning if there are at least 240 free pages in this
 * category. Note that since there are probably mappings of the chunk,
 * it won't be freed up instantly, which probably screws up this logic.
 */

/**
 * Frees a shared page, the page is known to exist and be valid and such.
 *
 * @param   pGMM    Pointer to the GMM instance.
 * @param   idPage  The Page ID.
 * @param   pPage   The page structure.
 */

/**
 * Frees a private page, the page is known to exist and be valid and such.
 *
 * @param   pGMM    Pointer to the GMM instance.
 * @param   idPage  The Page ID.
 * @param   pPage   The page structure.
 */

/**
 * Common worker for GMMR0FreePages and GMMR0BalloonedPages.
 *
 * @returns VBox status code:
 * @param   pGMM        Pointer to the GMM instance data.
 * @param   pGVM        Pointer to the shared VM structure.
 * @param   cPages      The number of pages to free.
 * @param   paPages     Pointer to the page descriptors.
 * @param   enmAccount  The account this relates to.
 */
/*
 * Check that the request isn't impossible wrt the account status.
 */
/*
 * Walk the descriptors and free the pages.
 *
 * Statistics (except the account) are being updated as we go along,
 * unlike the alloc code. Also, stop on the first error.
 */
Log((
"gmmR0AllocatePages: #%#x/%#x: not owner! hGVM=%#x hSelf=%#x\n",
iPage,
idPage,
Log((
"gmmR0AllocatePages: #%#x/%#x: already free!\n",
iPage,
idPage));
/*
 * Any threshold stuff to be done here?
 */

/**
 * Free one or more pages.
 *
 * This is typically used at reset time or power off.
 *
 * @returns VBox status code:
 * @param   pVM         Pointer to the shared VM structure.
 * @param   cPages      The number of pages to free.
 * @param   paPages     Pointer to the page descriptors containing the Page IDs for each page.
 * @param   enmAccount  The account this relates to.
 */
/*
 * Validate input and get the basics.
 */
/*|| paPages[iPage].idPage == NIL_GMM_PAGEID*/,
/*
 * Take the semaphore and call the worker function.
 */
LogFlow((
"GMMR0FreePages: returns %Rrc\n",
rc));
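/*
 * A sketch of the page freeing bookkeeping described in the worker comments
 * earlier: put the page on the chunk's free list, relink the chunk, and
 * consider handing a completely free chunk back to the host.  All names and
 * helpers are assumptions.
 */
static void sketchFreePageWorker(SKETCHGMM *pGMM, SKETCHGMMCHUNK *pChunk,
                                 uint32_t idPage, SKETCHGMMPAGE *pPage)
{
    /* Put the page on the free list and update the chunk statistics. */
    pPage->u          = 0;
    pPage->Free.iNext = pChunk->iFreeHead;
    pChunk->iFreeHead = (uint16_t)(idPage & (SKETCH_CHUNK_NUM_PAGES - 1));
    pChunk->cFree++;

    /* Relink the chunk onto the free set list matching its new free count. */
    sketchUnlinkChunk(pChunk);
    sketchLinkChunk(pChunk, pChunk->cShared ? &pGMM->Shared : &pGMM->Private);

    /* If the chunk is now completely free and the list holds other chunks
       (i.e. roughly 240+ free pages in this category), give it back to the
       host OS; existing mappings may delay the actual release. */
    if (   pChunk->cFree == SKETCH_CHUNK_NUM_PAGES
        && (pChunk->pFreeNext || pChunk->pFreePrev))
        sketchFreeChunk(pGMM, pChunk);
}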
/**
 * VMMR0 request wrapper for GMMR0FreePages.
 *
 * @returns see GMMR0FreePages.
 * @param   pVM     Pointer to the shared VM structure.
 * @param   pReq    The request packet.
 */
/*
 * Validate input and pass it on.
 */

/**
 * Report back on a memory ballooning request.
 *
 * The request may or may not have been initiated by the GMM. If it was initiated
 * by the GMM it is important that this function is called even if no pages was
 *
 * Since the whole purpose of ballooning is to free up guest RAM pages, this API
 * may also be given a set of related pages to be freed. These pages are assumed
 * to be on the base account.
 *
 * @returns VBox status code:
 * @param   pVM             Pointer to the shared VM structure.
 * @param   cBalloonedPages The number of pages that were ballooned.
 * @param   cPagesToFree    The number of pages to be freed.
 * @param   paPages         Pointer to the page descriptors for the pages that are to be freed.
 * @param   fCompleted      Indicates whether the ballooning request was completed (true) or
 *                          if there are more pages to come (false). If the ballooning was not
 *                          triggered by the GMM, don't set this.
 */
LogFlow((
"GMMR0BalloonedPages: pVM=%p cBalloonedPages=%#x cPagestoFree=%#x paPages=%p enmAccount=%d fCompleted=%RTbool\n",
/*
 * Validate input and get the basics.
 */
/*|| paPages[iPage].idPage == NIL_GMM_PAGEID*/,
/*
 * Take the semaphore and do some more validations.
 */
/*
 * Record the ballooned memory.
 */
Log((
"GMMR0BalloonedPages: +%#x - Global=%#llx; / VM: Total=%#llx Req=%#llx Actual=%#llx (completed)\n",
cBalloonedPages,
/*
 * Anything we need to do here now when the request has been completed?
 */
Log((
"GMMR0BalloonedPages: +%#x - Global=%#llx / VM: Total=%#llx Req=%#llx Actual=%#llx (pending)\n",
cBalloonedPages,
Log((
"GMMR0BalloonedPages: +%#x - Global=%#llx / VM: Total=%#llx (user)\n",
LogFlow((
"GMMR0BalloonedPages: returns %Rrc\n",
rc));
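/*
 * A sketch of the balloon accounting behind the log lines above; the global
 * and per-VM counter names are assumptions (the pending-case logging is
 * omitted for brevity).
 */
pGMM->cBalloonedPages       += cBalloonedPages;     /* global */
pGVM->gmm.s.cBalloonedPages += cBalloonedPages;     /* per-VM total */
if (pGVM->gmm.s.cReqBalloonedPages)
{
    /* The request was initiated by the GMM; track progress against it. */
    pGVM->gmm.s.cReqActuallyBalloonedPages += cBalloonedPages;
    if (fCompleted)
    {
        Log(("GMMR0BalloonedPages: +%#x - Global=%#llx; / VM: Total=%#llx Req=%#llx Actual=%#llx (completed)\n",
             cBalloonedPages, pGMM->cBalloonedPages, pGVM->gmm.s.cBalloonedPages,
             pGVM->gmm.s.cReqBalloonedPages, pGVM->gmm.s.cReqActuallyBalloonedPages));
        pGVM->gmm.s.cReqBalloonedPages         = 0;
        pGVM->gmm.s.cReqActuallyBalloonedPages = 0;
    }
}
else
    Log(("GMMR0BalloonedPages: +%#x - Global=%#llx / VM: Total=%#llx (user)\n",
         cBalloonedPages, pGMM->cBalloonedPages, pGVM->gmm.s.cBalloonedPages));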
/**
 * VMMR0 request wrapper for GMMR0BalloonedPages.
 *
 * @returns see GMMR0BalloonedPages.
 * @param   pVM     Pointer to the shared VM structure.
 * @param   pReq    The request packet.
 */
/*
 * Validate input and pass it on.
 */

/**
 * Report balloon deflating.
 *
 * @returns VBox status code:
 * @param   pVM     Pointer to the shared VM structure.
 * @param   cPages  The number of pages that were let out of the balloon.
 */
/*
 * Validate input and get the basics.
 */
/*
 * Take the semaphore and do some more validations.
 */
Log((
"GMMR0BalloonedPages: -%#x - Global=%#llx / VM: Total=%#llx Req=%#llx\n",
cPages,
/*
 * Anything we need to do here now when the request has been completed?
 */
Log((
"GMMR0BalloonedPages: -%#x - Global=%#llx / VM: Total=%#llx\n",
cPages,
LogFlow((
"GMMR0BalloonedPages: returns %Rrc\n",
rc));
/**
 * Unmaps a chunk previously mapped into the address space of the current process.
 *
 * @returns VBox status code.
 * @param   pGMM    Pointer to the GMM instance data.
 * @param   pGVM    Pointer to the Global VM structure.
 * @param   pChunk  Pointer to the chunk to be unmapped.
 */
/*
 * Find the mapping and try unmapping it.
 */

/**
 * Maps a chunk into the user address space of the current process.
 *
 * @returns VBox status code.
 * @param   pGMM    Pointer to the GMM instance data.
 * @param   pGVM    Pointer to the Global VM structure.
 * @param   pChunk  Pointer to the chunk to be mapped.
 * @param   ppvR3   Where to store the ring-3 address of the mapping.
 *                  In the VERR_GMM_CHUNK_ALREADY_MAPPED case, this will
 *                  contain the address of the existing mapping.
 */
/*
 * Check to see if the chunk is already mapped.
 */
/* reallocate the array? */

/**
 * Map a chunk and/or unmap another chunk.
 *
 * The mapping and unmapping applies to the current process.
 *
 * This API does two things because it saves a kernel call per mapping when
 * the ring-3 mapping cache is full.
 *
 * @returns VBox status code.
 * @param   idChunkMap      The chunk to map. NIL_GMM_CHUNKID if nothing to map.
 * @param   idChunkUnmap    The chunk to unmap. NIL_GMM_CHUNKID if nothing to unmap.
 * @param   ppvR3           Where to store the address of the mapped chunk. NULL is ok if nothing to map.
 */
LogFlow((
"GMMR0MapUnmapChunk: pVM=%p idChunkMap=%#x idChunkUnmap=%#x ppvR3=%p\n",
/*
 * Validate input and get the basics.
 */
Log((
"GMMR0MapUnmapChunk: legacy mode!\n"));
/*
 * Take the semaphore and do the work.
 *
 * The unmapping is done last since it's easier to undo a mapping than
 * undoing an unmapping. The ring-3 mapping cache cannot be so big
 * that it pushes the user virtual address space to within a chunk of
 * its limits, so, no problem here.
 */
LogFlow((
"GMMR0MapUnmapChunk: returns %Rrc\n",
rc));
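/*
 * A sketch of the chunk mapping step: map the chunk's memory object into the
 * calling process and remember the mapping so it can be found and undone
 * later.  The mapping record bookkeeping follows the paMappings/cMappings
 * comments earlier but is otherwise an assumption, and the array growth
 * ("reallocate the array?") is skipped here.
 */
RTR0MEMOBJ MapObj;
int rc = RTR0MemObjMapUser(&MapObj, pChunk->MemObj, (RTR3PTR)-1, 0,
                           RTMEM_PROT_READ | RTMEM_PROT_WRITE, RTR0ProcHandleSelf());
if (RT_SUCCESS(rc))
{
    unsigned iMapping = pChunk->cMappings++;
    pChunk->paMappings[iMapping].MapObj = MapObj;
    pChunk->paMappings[iMapping].pGVM   = pGVM;
    *ppvR3 = RTR0MemObjAddressR3(MapObj);
}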
/**
 * VMMR0 request wrapper for GMMR0MapUnmapChunk.
 *
 * @returns see GMMR0MapUnmapChunk.
 * @param   pVM     Pointer to the shared VM structure.
 * @param   pReq    The request packet.
 */
/*
 * Validate input and pass it on.
 */

/**
 * Legacy mode API for supplying pages.
 *
 * The specified user address points to an allocation chunk sized block that
 * will be locked down and used by the GMM when the VM asks for pages.
 *
 * @returns VBox status code.
 * @param   pvR3    Pointer to the chunk size memory block to lock down.
 */
/*
 * Validate input and get the basics.
 */
Log((
"GMMR0MapUnmapChunk: not in legacy mode!\n"));
/*
 * Lock the memory before taking the semaphore.
 */
/*
 * Take the semaphore and add a new chunk with our hGVM.
 */
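/*
 * A sketch of the legacy mode seeding flow described above: lock down the
 * user block and register it as a chunk owned by this VM.  GMM_CHUNK_SIZE
 * stands for the chunk size (1MB per the header comments), the helper name
 * and the set field are assumptions, and the RTR0MemObjLockUser signature
 * shown (with the access flags parameter) is the newer IPRT one and may
 * differ for this revision.
 */
RTR0MEMOBJ MemObj;
int rc = RTR0MemObjLockUser(&MemObj, pvR3, GMM_CHUNK_SIZE,
                            RTMEM_PROT_READ | RTMEM_PROT_WRITE, RTR0ProcHandleSelf());
if (RT_SUCCESS(rc))
{
    rc = gmmR0RegisterChunk(pGMM, &pGMM->Private, MemObj, pGVM->hSelf);
    if (RT_FAILURE(rc))
        RTR0MemObjFree(MemObj, false /* fFreeMappings */);
}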