PGMAll.cpp revision d2c6b2e8826a5ef34170fef0c72c3fc7c5c1b46a
/** @file
 * PGM - Page Manager and Monitor - All context code.
 */

/*
 * Copyright (C) 2006-2007 Oracle Corporation
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 */

/*******************************************************************************
*******************************************************************************/

/*******************************************************************************
*******************************************************************************/

/**
 * State structure for PGM_GST_NAME(HandlerVirtualUpdate) that's
 * passed to PGM_GST_NAME(VirtHandlerUpdateOne) during enumeration.
 */
    /** The CR4 register value. */

/*******************************************************************************
*******************************************************************************/

/* Guest - protected mode */
/* Guest - 32-bit mode */

/* Guest - protected mode */
/* Guest - 32-bit mode */

/* Guest - protected mode (only used for AMD-V nested paging in 64-bit mode) */
#endif /* VBOX_WITH_64_BITS_GUESTS */

/*
 * Shadow - Nested paging mode
 */
/* Guest - protected mode */
/* Guest - 32-bit mode */
#endif /* VBOX_WITH_64_BITS_GUESTS */

/* Guest - protected mode */
/* Guest - 32-bit mode */
#endif /* VBOX_WITH_64_BITS_GUESTS */

/**
 * @returns VBox status code (appropriate for trap handling and GC return).
 * @param   pVCpu       VMCPU handle.
 * @param   uErr        The trap error code.
 * @param   pRegFrame   Trap register frame.
 * @param   pvFault     The fault address.
 */
#endif /* VBOX_WITH_STATISTICS */

    /* Note: hack alert for difficult to reproduce problem. */
    Log(("WARNING: Unexpected VERR_PAGE_TABLE_NOT_PRESENT (%d) for page fault at %RGv error code %x (rip=%RGv)\n",
         rc, pvFault, uErr, pRegFrame->rip));
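The uErr value logged above is the x86 page-fault error code pushed by the CPU. Its bit layout is architectural (Intel SDM, not VirtualBox-specific); as a minimal sketch, a decoder for log readers — the `PF_ERR_*` names and `DecodePfErrorCode` are illustrative stand-ins, not VirtualBox's `X86_TRAP_PF_*` macros:

```cpp
#include <cstdint>
#include <string>

// Architectural x86 page-fault error code bits; names here are
// illustrative, not the X86_TRAP_PF_* macros used by VirtualBox.
static const uint32_t PF_ERR_P    = 0x01; // 0 = not-present page, 1 = protection violation
static const uint32_t PF_ERR_RW   = 0x02; // 1 = write access
static const uint32_t PF_ERR_US   = 0x04; // 1 = user-mode access
static const uint32_t PF_ERR_RSVD = 0x08; // 1 = reserved bit set in a paging entry
static const uint32_t PF_ERR_ID   = 0x10; // 1 = instruction fetch

// Render the error code the way a log reader would want to see it.
std::string DecodePfErrorCode(uint32_t uErr)
{
    std::string s;
    s += (uErr & PF_ERR_P)  ? "prot-violation" : "not-present";
    s += (uErr & PF_ERR_RW) ? " write" : " read";
    s += (uErr & PF_ERR_US) ? " user"  : " supervisor";
    if (uErr & PF_ERR_RSVD) s += " rsvd";
    if (uErr & PF_ERR_ID)   s += " instr-fetch";
    return s;
}
```

For example, an error code of 0x02 corresponds to a supervisor write to a not-present page, the classic shape of a shadow-paging-induced fault.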
    /* Some kind of inconsistency in the SMP case; it's safe to just execute
       the instruction again; not sure about single VCPU VMs though. */

/**
 * Typically used to sync commonly used pages before entering raw mode.
 *
 * @returns VBox status code suitable for scheduling.
 * @retval  VINF_SUCCESS on success.
 * @retval  VINF_PGM_SYNC_CR3 if we're out of shadow pages or something like that.
 * @param   pVCpu       VMCPU handle.
 * @param   GCPtrPage   Page to invalidate.
 */

/**
 * Gets the mapping corresponding to the specified address (if any).
 *
 * @returns Pointer to the mapping.
 * @param   pVM     The virtual machine.
 * @param   GCPtr   The guest context pointer.
 */

/**
 * Verifies a range of pages for read or write access.
 *
 * Only checks the guest's page tables.
 *
 * @returns VBox status code.
 * @param   pVCpu   VMCPU handle.
 * @param   Addr    Guest virtual address to check.
 * @param   cbSize  Access size.
 *
 * @remarks Currently not in use.
 */
        Log(("PGMIsValidAccess: access violation for %RGv rc=%d\n", Addr, rc));
    /*
     * Check if the access would cause a page fault.
     *
     * Note that hypervisor page directories are not present in the guest's
     * tables, so this check
     */

/**
 * Verifies a range of pages for read or write access.
 *
 * Supports handling of pages marked for dirty bit tracking and CSAM.
 *
 * @returns VBox status code.
 * @param   pVCpu   VMCPU handle.
 * @param   Addr    Guest virtual address to check.
 * @param   cbSize  Access size.
 */
        Log(("PGMVerifyAccess: access violation for %RGv rc=%d\n", Addr, rc));
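The "would cause a page fault" check described above boils down to testing the architectural PTE bits. A minimal sketch of that predicate, assuming CR0.WP=1 so supervisor writes to read-only pages also fault; the `PTE_*` constants and `WouldPageFault` are illustrative stand-ins for VirtualBox's `X86_PTE_*` macros and the real PGM logic:

```cpp
#include <cstdint>

// Architectural PTE bits (stand-ins for VirtualBox's X86_PTE_* macros).
static const uint64_t PTE_P  = 0x01; // present
static const uint64_t PTE_RW = 0x02; // writable
static const uint64_t PTE_US = 0x04; // user accessible

// Would an access with the given requirements fault on this PTE?
// Assumes CR0.WP=1, i.e. the RW bit is enforced for supervisor writes too.
bool WouldPageFault(uint64_t fPte, bool fWrite, bool fUser)
{
    if (!(fPte & PTE_P))
        return true;                 // not present -> #PF
    if (fWrite && !(fPte & PTE_RW))
        return true;                 // write to read-only page
    if (fUser && !(fPte & PTE_US))
        return true;                 // user access to supervisor page
    return false;
}
```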
    /*
     * Check if the access would cause a page fault.
     *
     * Note that hypervisor page directories are not present in the guest's
     * tables, so this check
     */

    /*
     * Next step is to verify if we protected this page for dirty bit
     * tracking or for CSAM scanning.
     */

    /*
     * Page is not present in our page tables.
     */
#if 0 /* def VBOX_STRICT; triggers too often now */
    /*
     * This check is a bit paranoid, but useful.
     */
    /* Note! This will assert when writing to monitored pages (a bit annoying actually). */
    AssertMsgFailed(("Unexpected access violation for %RGv! rc=%Rrc write=%d user=%d\n",

    /* Don't recursively call PGMVerifyAccess as we might run out of stack. */

/**
 * Emulation of the invlpg instruction (HC only actually).
 *
 * @returns Strict VBox status code, special care required.
 * @retval  VINF_PGM_SYNC_CR3 - handled.
 * @retval  VINF_EM_RAW_EMULATE_INSTR - not handled (RC only).
 * @retval  VERR_REM_FLUSHED_PAGES_OVERFLOW - not handled.
 *
 * @param   pVCpu       VMCPU handle.
 * @param   GCPtrPage   Page to invalidate.
 *
 * @remark  ASSUMES the page table entry or page directory is valid. Fairly
 *          safe, but there could be edge cases!
 *
 * @todo    Flush page or page directory only if necessary!
 */
    /*
     * Notify the recompiler so it can record this instruction.
     */

    /*
     * Check for conflicts and pending CR3 monitoring updates.
     */
        LogFlow(("PGMGCInvalidatePage: Conflict!\n"));

        LogFlow(("PGMGCInvalidatePage: PGM_SYNC_MONITOR_CR3 -> reinterpret instruction in R3\n"));
    /*
     * Call paging mode specific worker.
     */

    /*
     * Check if we have a pending update of the CR3 monitoring.
     */

    /*
     * Inform CSAM about the flush.
     *
     * Note: This is to check if monitored pages have been changed; when we
     *       implement callbacks for virtual handlers, this is no longer required.
     */

    /* Ignore all irrelevant error codes. */

/**
 * Executes an instruction using the interpreter.
 *
 * @returns VBox status code (appropriate for trap handling and GC return).
 * @param   pVCpu       VMCPU handle.
 * @param   pRegFrame   Register frame.
 * @param   pvFault     Fault address.
 */

/**
 * Gets effective page information (from the VMM page directory).
 *
 * @param   pVCpu       VMCPU handle.
 * @param   GCPtr       Guest Context virtual address of the page.
 * @param   pfFlags     Where to store the flags. These are X86_PTE_*.
 * @param   pHCPhys     Where to store the HC physical address of the page.
 *
 * @remark  You should use PGMMapGetPage() for pages in a mapping.
 */

/**
 * Modify page flags for a range of pages in the shadow context.
 *
 * The existing flags are ANDed with the fMask and ORed with the fFlags.
 *
 * @returns VBox status code.
 * @param   pVCpu       VMCPU handle.
 * @param   GCPtr       Virtual address of the first page in the range.
 * @param   fFlags      The OR mask - page flags X86_PTE_*, excluding the
 *                      page mask of course.
 * @param   fMask       The AND mask - page flags X86_PTE_*.
 *                      Be very CAREFUL when ~'ing constants which could be 32-bit!
 * @param   fOpFlags    A combination of the PGM_MK_PG_XXX flags.
 *
 * @remark  You must use PGMMapModifyPage() for pages in a mapping.
 */

/**
 * Changing the page flags for a single page in the shadow page tables so as to
 *
 * @returns VBox status code.
 * @param   pVCpu       VMCPU handle.
 * @param   GCPtr       Virtual address of the first page in the range.
 * @param   fOpFlags    A combination of the PGM_MK_PG_XXX flags.
 */

/**
 * Changing the page flags for a single page in the shadow page tables so as to
 *
 * The call must know with 101% certainty that the guest page tables maps this
 * as writable too. This function will deal with shared, zero and write
 * monitored pages.
 *
 * @returns VBox status code.
 * @param   pVCpu       VMCPU handle.
 * @param   GCPtr       Virtual address of the first page in the range.
 * @param   fMmio2      Set if it is an MMIO2 page.
 * @param   fOpFlags    A combination of the PGM_MK_PG_XXX flags.
 */

/**
 * Changing the page flags for a single page in the shadow page tables so as to
 *
 * @returns VBox status code.
 * @param   pVCpu       VMCPU handle.
 * @param   GCPtr       Virtual address of the first page in the range.
 * @param   fOpFlags    A combination of the PGM_MK_PG_XXX flags.
 */

/**
 * Gets the shadow page directory for the specified address, PAE.
 *
 * @returns Pointer to the shadow PD.
 * @param   pVCpu       The VMCPU handle.
 * @param   GCPtr       The address.
 * @param   uGstPdpe    Guest PDPT entry. Valid.
 * @param   ppPD        Receives address of page directory.
 */
    /* Allocate page directory if not present. */

        /* PD not present; guest must reload CR3 to change it.
         * No need to monitor anything in this case. */

        /* Create a reference back to the PDPT by using the index in its shadow page. */

    /* The PD was cached or created; hook it up now. */

    /*
     * In 32-bit PAE mode we *must* invalidate the TLB when changing a
     * PDPT entry; the CPU fetches them only during cr3 load, so any
     * non-present PDPT will continue to cause page faults.
     */

/**
 * Gets the pointer to the shadow page directory entry for an address, PAE.
 *
 * @returns Pointer to the PDE.
 * @param   pVCpu       The current CPU.
 * @param   GCPtr       The address.
 * @param   ppShwPde    Receives the address of the pgm pool page for the
 *                      shadow page directory.
 */
    /* Fetch the pgm pool shadow descriptor. */

/**
 * Syncs the SHADOW page directory pointer for the specified address.
 *
 * Allocates backing pages in case the PDPT or PML4 entry is missing.
 *
 * The caller is responsible for making sure the guest has a valid PD before
 *
 * @param   pVCpu       VMCPU handle.
 * @param   GCPtr       The address.
 * @param   uGstPml4e   Guest PML4 entry (valid).
 * @param   uGstPdpe    Guest PDPT entry (valid).
 * @param   ppPD        Receives address of page directory.
 */
    /* Allocate page directory pointer table if not present. */

        /* Create a reference back to the PDPT by using the index in its shadow page. */

    /* The PDPT was cached or created; hook it up now. */

    /* Allocate page directory if not present. */

        /* Create a reference back to the PDPT by using the index in its shadow page. */

    /* The PD was cached or created; hook it up now. */

/**
 * Gets the SHADOW page directory pointer for the specified address (long mode).
 *
 * @param   pVCpu       VMCPU handle.
 * @param   GCPtr       The address.
 * @param   ppPdpt      Receives address of pdpt.
 * @param   ppPD        Receives address of page directory.
 */

/**
 * Syncs the SHADOW EPT page directory pointer for the specified address.
 * Allocates backing pages in case the PDPT or PML4 entry is missing.
 *
 * @param   pVCpu       VMCPU handle.
 * @param   GCPtr       The address.
 * @param   ppPdpt      Receives address of pdpt.
 * @param   ppPD        Receives address of page directory.
 */
    /* Allocate page directory pointer table if not present. */

    /* The PDPT was cached or created; hook it up now and fill with the default value. */

    /* Allocate page directory if not present. */

    /* The PD was cached or created; hook it up now and fill with the default value. */

/**
 * Synchronizes a range of nested page table entries.
 *
 * The caller must own the PGM lock.
 *
 * @param   pVCpu               The current CPU.
 * @param   GCPhys              Where to start.
 * @param   cPages              How many pages which entries should be synced.
 * @param   enmShwPagingMode    The shadow paging mode (PGMMODE_EPT for VT-x,
 *                              host paging mode for AMD-V).
 */

/**
 * Gets effective Guest OS page information.
 *
 * When GCPtr is in a big page, the function will return as if it was a normal
 * 4KB page. If the need for distinguishing between big and normal page becomes
 * necessary at a later point, a PGMGstGetPage() will be created for that
 * purpose.
 *
 * @param   pVCpu       The current CPU.
 * @param   GCPtr       Guest Context virtual address of the page.
 * @param   pfFlags     Where to store the flags. These are X86_PTE_*, even
 *                      for big pages.
 * @param   pGCPhys     Where to store the GC physical address of the page.
 *                      This is page aligned. The fact that the
 */

/**
 * Checks if the page is present.
 *
 * @returns true if the page is present.
 * @returns false if the page is not present.
 * @param   pVCpu   VMCPU handle.
 * @param   GCPtr   Address within the page.
 */

/**
 * Sets (replaces) the page flags for a range of pages in the guest's tables.
 *
 * @param   pVCpu   VMCPU handle.
 * @param   GCPtr   The address of the first page.
 * @param   cb      The size of the range in bytes.
 * @param   fFlags  Page flags X86_PTE_*, excluding the page mask of course.
 */

/**
 * Modify page flags for a range of pages in the guest's tables.
 *
 * The existing flags are ANDed with the fMask and ORed with the fFlags.
 *
 * @returns VBox status code.
 * @param   pVCpu   VMCPU handle.
 * @param   GCPtr   Virtual address of the first page in the range.
 * @param   cb      Size (in bytes) of the range to apply the modification to.
 * @param   fFlags  The OR mask - page flags X86_PTE_*, excluding the page mask of course.
 * @param   fMask   The AND mask - page flags X86_PTE_*, excluding the page mask of course.
 *                  Be very CAREFUL when ~'ing constants which could be 32-bit!
 */

/**
 * Performs the lazy mapping of the 32-bit guest PD.
 *
 * @returns VBox status code.
 * @param   pVCpu   The current CPU.
 * @param   ppPd    Where to return the pointer to the mapping. This is
 */

/**
 * Performs the lazy mapping of the PAE guest PDPT.
 *
 * @returns VBox status code.
 * @param   pVCpu   The current CPU.
 * @param   ppPdpt  Where to return the pointer to the mapping. This is
 */

/**
 * Performs the lazy mapping / updating of a PAE guest PD.
 *
 * @returns Pointer to the mapping.
 * @returns VBox status code.
 * @param   pVCpu   The current CPU.
 * @param   iPdpt   Which PD entry to map (0..3).
 * @param   ppPd    Where to return the pointer to the mapping. This is
 */
        /* Invalid page or some failure, invalidate the entry. */
#endif /* !VBOX_WITH_2X_4GB_ADDR_SPACE_IN_R0 */

/**
 * Performs the lazy mapping of the AMD64 guest PML4.
 *
 * @returns VBox status code.
 * @param   pVCpu   The current CPU.
 * @param   ppPml4  Where to return the pointer to the mapping. This will
 */

/**
 * Gets the PAE PDPEs values cached by the CPU.
 *
 * @returns VBox status code.
 * @param   pVCpu   The virtual CPU.
 * @param   paPdpes Where to return the four PDPEs. The array
 *                  pointed to must have 4 entries.
 */

/**
 * Sets the PAE PDPEs values cached by the CPU.
 *
 * @remarks This must be called *AFTER* PGMUpdateCR3.
 *
 * @returns VBox status code.
 * @param   pVCpu   The virtual CPU.
 * @param   paPdpes The four PDPE values. The array pointed to
 *                  must have exactly 4 entries.
 */
        /* Force lazy remapping if it changed in any way. */

/**
 * Gets the current CR3 register value for the shadow memory context.
 * @param   pVCpu   VMCPU handle.
 */

/**
 * Gets the current CR3 register value for the nested memory context.
 * @param   pVCpu   VMCPU handle.
 */

/**
 * Gets the current CR3 register value for the HC intermediate memory context.
 * @param   pVM     The VM handle.
 */

/**
 * Gets the current CR3 register value for the RC intermediate memory context.
 * @param   pVM     The VM handle.
 * @param   pVCpu   VMCPU handle.
 */
            return 0;   /* not relevant */

/**
 * Gets the CR3 register value for the 32-Bit intermediate memory context.
 * @param   pVM     The VM handle.
 */

/**
 * Gets the CR3 register value for the PAE intermediate memory context.
 * @param   pVM     The VM handle.
 */

/**
 * Gets the CR3 register value for the AMD64 intermediate memory context.
 * @param   pVM     The VM handle.
 */

/**
 * Performs and schedules necessary updates following a CR3 load or reload.
 *
 * This will normally involve mapping the guest PD or nPDPT.
 *
 * @returns VBox status code.
 * @retval  VINF_PGM_SYNC_CR3 if monitoring requires a CR3 sync. This can
 *          safely be ignored and overridden since the FF will be set too then.
 * @param   pVCpu   VMCPU handle.
 * @param   cr3     The new cr3.
 * @param   fGlobal Indicates whether this is a global flush or not.
 */
    /*
     * Always flag the necessary updates; necessary for hardware acceleration.
     */
    /** @todo optimize this, it shouldn't always be necessary. */

    /*
     * Remap the CR3 content and adjust the monitoring if CR3 was actually changed.
     */

    /*
     * Check if we have a pending update of the CR3 monitoring.
     */

/**
 * Performs and schedules necessary updates following a CR3 load or reload when
 * using nested or extended paging.
 *
 * This API is an alternative to PGMFlushTLB that avoids actually flushing the
 * TLB and triggering a SyncCR3.
 *
 * This will normally involve mapping the guest PD or nPDPT.
 *
 * @returns VBox status code.
 * @retval  (If applied when not in nested mode: VINF_PGM_SYNC_CR3 if monitoring
 *          requires a CR3 sync. This can safely be ignored and overridden since
 *          the FF will be set too then.)
 * @param   pVCpu   VMCPU handle.
 * @param   cr3     The new cr3.
 */
    /* We assume we're only called in nested paging mode. */

    /*
     * Remap the CR3 content and adjust the monitoring if CR3 was actually changed.
     */
        AssertRCSuccess(rc); /* Assumes VINF_PGM_SYNC_CR3 doesn't apply to nested paging. */
        /** @todo this isn't true for the mac, but we need hw to test/fix this. */

/**
 * Synchronize the paging structures.
 *
 * This function is called in response to the VM_FF_PGM_SYNC_CR3 and
 * VM_FF_PGM_SYNC_CR3_NONGLOBAL. Those two force action flags are set
 * in several places, most importantly whenever the CR3 is loaded.
 *
 * @returns VBox status code.
 * @param   pVCpu   VMCPU handle.
 * @param   cr0     Guest context CR0 register.
 * @param   cr3     Guest context CR3 register.
 * @param   cr4     Guest context CR4 register.
 * @param   fGlobal Including global page directories or not.
 */
    /*
     * The pool may have pending stuff and even require a return to ring-3 to
     *
     * We might be called when we shouldn't.
     *
     * The mode switching will ensure that the PD is resynced after every mode
     * switch. So, if we find ourselves here when in protected or real mode we
     * can safely disable the FF and return immediately.
     */

    /* If global pages are not supported, then all flushes are global. */

    /*
     * Check if we need to finish an aborted MapCR3 call (see PGMFlushTLB).
     * This should be done before SyncCR3.
     */

    /* Make sure we check for pending pgm pool syncs as we clear VMCPU_FF_PGM_SYNC_CR3 later on! */
        Log(("PGMSyncCR3: pending pgm pool sync after MapCR3!\n"));
    /*
     * Let the 'Bth' function do the work and we'll just keep track of the flags.
     */

    /* Go back to ring 3 if a pgm pool sync is again pending. */

    /*
     * Check if we have a pending update of the CR3 monitoring.
     */

    /*
     * Now flush the CR3 (guest context).
     */

/**
 * Called whenever CR0 or CR4 changes in a way which may affect the paging mode.
 *
 * @returns VBox status code, with the following informational code for
 * @retval  VINF_SUCCESS if there was no change, or it was successfully dealt with.
 * @retval  VINF_PGM_CHANGE_MODE if we're in RC or R0 and the mode changes.
 * @retval  VINF_EM_SUSPEND or VINF_EM_OFF on a fatal runtime error. (R3 only)
 *
 * @param   pVCpu   VMCPU handle.
 * @param   cr0     The new cr0.
 * @param   cr4     The new cr4.
 * @param   efer    The new extended feature enable register.
 */
    /*
     * Calc the new guest mode.
     */
        LogFlow(("PGMChangeMode: returns VINF_PGM_CHANGE_MODE.\n"));
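The "calc the new guest mode" step above is a pure function of CR0, CR4 and EFER. A hedged sketch under architectural rules (CR0.PE/PG, CR4.PAE, EFER.LME are real x86 control bits; the `GuestMode` enum and `CalcGuestMode` are local stand-ins, not VirtualBox's `PGMMODE_*` values or its actual worker):

```cpp
#include <cstdint>

// Illustrative paging-mode calculation; enum names are local stand-ins.
enum GuestMode { Mode_Real, Mode_Protected, Mode_32Bit, Mode_PAE, Mode_AMD64 };

// Architectural control register bits.
static const uint64_t CR0_PE   = 0x00000001; // protected mode enable
static const uint64_t CR0_PG   = 0x80000000; // paging enable
static const uint64_t CR4_PAE  = 0x00000020; // physical address extension
static const uint64_t EFER_LME = 0x00000100; // long mode enable

GuestMode CalcGuestMode(uint64_t cr0, uint64_t cr4, uint64_t efer)
{
    if (!(cr0 & CR0_PE))
        return Mode_Real;       // paging requires protected mode
    if (!(cr0 & CR0_PG))
        return Mode_Protected;  // protected mode without paging
    if (efer & EFER_LME)
        return Mode_AMD64;      // PG + LME -> long mode active (LMA)
    if (cr4 & CR4_PAE)
        return Mode_PAE;
    return Mode_32Bit;
}
```

A mode change detected this way is what makes PGMChangeMode return VINF_PGM_CHANGE_MODE in RC/R0, deferring the actual switch to ring-3.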
/**
 * Gets the current guest paging mode.
 *
 * @returns The current paging mode.
 * @param   pVCpu   VMCPU handle.
 */

/**
 * Gets the current shadow paging mode.
 *
 * @returns The current paging mode.
 * @param   pVCpu   VMCPU handle.
 */

/**
 * Gets the current host paging mode.
 *
 * @returns The current paging mode.
 * @param   pVM     The VM handle.
 */

/**
 * @returns read-only name string.
 * @param   enmMode     The mode whose name is desired.
 */
        default:
            return "unknown mode value";
/**
 * Notification from CPUM that the EFER.NXE bit has changed.
 *
 * @param   pVCpu   The virtual CPU for which EFER changed.
 * @param   fNxe    The new NXE state.
 */
    /** @todo VMCPU_ASSERT_EMT_OR_NOT_RUNNING(pVCpu); */
    Log(("PGMNotifyNxeChanged: fNxe=%RTbool\n", fNxe));
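The reason this notification matters to PGM is architectural: with EFER.NXE set, bit 63 of a PAE/long-mode paging entry is the no-execute flag; with NXE clear, that bit is reserved and must be zero. A simplified model of the must-be-zero mask bookkeeping that the `fGst*Mbz*Mask` fields mentioned below perform (`UpdatePteMbzMask` is a hypothetical helper, not the actual PGM code):

```cpp
#include <cstdint>

static const uint64_t X86_PTE_BIT63 = UINT64_C(1) << 63;

// Recompute a PAE PTE must-be-zero mask when EFER.NXE changes: with NXE set,
// bit 63 is the NX flag and is a legal bit; with NXE clear it is reserved and
// setting it makes the entry invalid. Simplified model, not the real fields.
uint64_t UpdatePteMbzMask(uint64_t fMbzMask, bool fNxe)
{
    if (fNxe)
        return fMbzMask & ~X86_PTE_BIT63;  // NX bit now legal
    return fMbzMask | X86_PTE_BIT63;       // NX bit reserved again
}
```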
        /*pVCpu->pgm.s.fGst32BitMbzBigPdeMask - N/A */
        /*pVCpu->pgm.s.fGstPaeMbzPdpeMask - N/A */

        /*pVCpu->pgm.s.fGst32BitMbzBigPdeMask - N/A */
        /*pVCpu->pgm.s.fGstPaeMbzPdpeMask - N/A */

/**
 * Check if any pgm pool pages are marked dirty (not monitored).
 *
 * @param   pVM     The VM to operate on.
 */

/**
 * Check if this VCPU currently owns the PGM lock.
 *
 * @param   pVM     The VM to operate on.
 */

/**
 * Enable or disable large page usage.
 *
 * @returns VBox status code.
 * @param   pVM             The VM to operate on.
 * @param   fUseLargePages  Use/not use large pages.
 */

/**
 * @returns VBox status code.
 * @param   pVM     The VM to operate on.
 */

/**
 * @returns VBox status code.
 * @param   pVM     The VM to operate on.
 */

/**
 * Common worker for pgmRZDynMapGCPageOffInlined and pgmRZDynMapGCPageV2Inlined.
 *
 * @returns VBox status code.
 * @param   pVM     The VM handle.
 * @param   pVCpu   The current CPU.
 * @param   GCPhys  The guest physical address of the page to map. The
 *                  offset bits are not ignored.
 * @param   ppv     Where to return the address corresponding to @a GCPhys.
 */
    /*
     * Convert it to a writable page and pass it on to the dynamic mapper.
     */
#endif /* IN_RC || VBOX_WITH_2X_4GB_ADDR_SPACE_IN_R0 */

/** Format handler for PGMPAGE.
 * @copydoc FNRTSTRFORMATTYPE */
        /* The single char state stuff. */
        static const char s_achPageTypes[8][4] = { "INV", "RAM", "MI2", "M2A", "SHA", "ROM", "MIO", "BAD" };
        static const char s_achRefs[4]         = { '-', 'U', '!', 'L' };
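The lookup tables above drive the PGMPAGE format handler: a page-type index selects a three-letter code and a reference-tracking state selects a single character. A minimal self-contained sketch of that table-driven formatting (`PageTypeToString` is an illustrative helper, not the actual FNRTSTRFORMATTYPE callback, which also prints addresses and flags):

```cpp
#include <cstring>

// Same tables as in the format handler above; out-of-range indexes are
// clamped to "INV" rather than read past the array.
static const char s_achPageTypes[8][4] = { "INV", "RAM", "MI2", "M2A", "SHA", "ROM", "MIO", "BAD" };
static const char s_achRefs[4]         = { '-', 'U', '!', 'L' };

const char *PageTypeToString(unsigned uType)
{
    return uType < 8 ? s_achPageTypes[uType] : "INV";
}

char RefStateToChar(unsigned uRefState)
{
    return uRefState < 4 ? s_achRefs[uRefState] : '?';
}
```

Keeping the strings exactly three characters wide (plus the NUL) is what makes the `[8][4]` array layout work and keeps the formatted page dumps column-aligned.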
/** Format handler for PGMRAMRANGE.
 * @copydoc FNRTSTRFORMATTYPE */
#endif /* !IN_R0 || LOG_ENABLED */

/**
 * Registers the global string format types.
 *
 * This should be called at module load time or in some other manner that
 * ensures that it's called exactly one time.
 *
 * @returns IPRT status code on RTStrFormatTypeRegister failure.
 */
        /* in case of cleanup failure in ring-0 */

/**
 * Deregisters the global string format types.
 *
 * This should be called at module unload time or in some other manner that
 * ensures that it's called exactly one time.
 */

/**
 * Asserts that there are no mapping conflicts.
 *
 * @returns Number of conflicts.
 * @param   pVM     The VM Handle.
 */
    /* Only applies to raw mode -> 1 VCPU */

    /*
     * Check for mapping conflicts.
     */
    /** @todo This is slow and should be optimized, but since it's just
     *        assertions I don't care now. */

/**
 * Asserts that everything related to the guest CR3 is correctly shadowed.
 *
 * This will call PGMAssertNoMappingConflicts() and PGMAssertHandlerAndFlagsInSync(),
 * and assert the correctness of the guest CR3 mapping before asserting that the
 * shadow page tables are in sync with the guest page tables.
 *
 * @returns Number of conflicts.
 * @param   pVM     The VM Handle.
 * @param   pVCpu   VMCPU handle.
 * @param   cr3     The current guest CR3 register value.
 * @param   cr4     The current guest CR4 register value.
 */