PDMAsyncCompletionFileInternal.h revision dc0a54940789f994c84390cb4a9f03da0b492285
/** @todo: Revise the caching of tasks. We have currently four caches:
 *  Per endpoint task cache
 *  Per endpoint task segment cache
 *  Per class task segment cache
 *  We could use the RT heap for this probably, or extend MMR3Heap (which uses
 *  RTMemAlloc instead of managing larger blocks) to have this global for the
 *  whole VM.
 */

/** Enable for delay injection from the debugger. */

/*
 * A few forward declarations.
 */
/** Pointer to a request segment. */
/** Pointer to the endpoint class data. */
/** Pointer to a cache LRU list. */
/** Pointer to the global cache structure. */
/** Pointer to a task segment. */

/** An endpoint is added to the manager. */
/** An endpoint is removed from the manager. */
/** An endpoint is about to be closed. */
/** The manager is requested to terminate. */
/** The manager is requested to suspend. */
/** The manager is requested to resume. */

/** Simple aka failsafe. */
/** Async I/O with host cache enabled. */
/** Pointer to an I/O manager type. */

/*
 * States of the I/O manager.
 */
/** Normal running state, accepting new requests. */
/** Fault state - not accepting new tasks for endpoints but waiting for
 *  remaining ones to finish. */
/** Suspending state - not accepting new tasks for endpoints but waiting
 *  for remaining ones to finish. */
/** Shutdown state - not accepting new tasks for endpoints but waiting
 *  for remaining ones to finish. */
/** The I/O manager waits for all active requests to complete and doesn't queue
 *  new ones because it needs to grow to handle more requests. */

/*
 * State of an async I/O manager.
 */
/** Next Aio manager in the list. */
/** Previous Aio manager in the list. */
/** Current state of the manager. */
/** Event semaphore the manager sleeps on when waiting for new requests. */
/** Flag whether the thread waits in the event semaphore. */
/** The async I/O context for this manager. */
/** Flag whether the I/O manager was woken up. */
/** List of endpoints assigned to this manager. */
/** Number of endpoints assigned to the manager. */
/** Number of requests currently active. */
/** Maximum number of active requests. */
/** Pointer to an array of free async I/O request handles. */
/** Index of the next free entry in the cache. */
/** Size of the array. */
/** Memory cache for file range locks. */
/** Number of milliseconds to wait until the bandwidth is refreshed for at least
 *  one endpoint and it is possible to process more requests. */
/** Critical section protecting the blocking event handling. */
/** Event semaphore for blocking external events.
 *  The caller waits on it until the async I/O manager
 *  has finished processing the event. */
/** Flag whether a blocking event is pending and needs
 *  processing by the I/O manager. */
/** Blocking event type. */
/** Add endpoint event. */
/** The endpoint to be added. */
/** Remove endpoint event. */
/** The endpoint to be removed. */
/** Close endpoint event. */
/** The endpoint to be closed. */

/** Pointer to an async I/O manager state. */
/** Pointer to an async I/O manager state pointer. */

/*
 * A file access range lock.
 */
/** AVL node in the locked range tree of the endpoint. */
/** How many tasks have locked this range. */
/** Flag whether this is a read or a write lock. */
/** List of tasks which are waiting for the range to get unlocked. */
/** List of tasks which are waiting for the range to get unlocked. */

/*
 * Backend type for the endpoint.
 */
/** Buffered (i.e. host cache enabled). */
/** Pointer to a backend type. */

/*
 * Global data for the file endpoint class.
 */
/** Override I/O manager type - set to SIMPLE after failure. */
/** Default backend type for the endpoint. */
/** Pointer to the head of the async I/O managers. */
/** Number of async I/O managers currently running. */
/** Maximum number of segments to cache per endpoint. */
/** Maximum number of simultaneous outstanding requests. */
/** Bitmask for checking the alignment of a buffer. */
/** Flag whether the out-of-resources warning was printed already. */
/** Pointer to the endpoint class data. */
/** The invalid event type. */
/** A task is about to be canceled. */

/*
 * States of the endpoint.
 */
/** Normal running state, accepting new requests. */
/** The endpoint is about to be closed - not accepting new tasks for endpoints
 *  but waiting for remaining ones to finish. */
/** Removing from current I/O manager state - not processing new tasks for
 *  endpoints but waiting for remaining ones to finish. */
/** The current endpoint will be migrated to another I/O manager. */

/*
 * Data for the file endpoint.
 */
/** Current state of the endpoint. */
/** The backend to use for this endpoint. */
/** Async I/O manager this endpoint is assigned to. */
/** Flags for opening the file. */
/** Real size of the file. Only updated if */
/** List of new tasks. */
/** Head of the small cache for allocated task segments for exclusive
 *  use by this endpoint. */
/** Tail of the small cache for allocated task segments for exclusive
 *  use by this endpoint. */
/** Number of elements in the cache. */
/** Flag whether a flush request is currently active. */
/** Time spent in a read. */
/** Time spent in a write. */
/** Event semaphore for blocking external events.
 *  The caller waits on it until the async I/O manager
 *  has finished processing the event. */
/** Flag whether caching is enabled for this file. */
/** Flag whether the file was opened read-only. */
/** Flag whether the host supports the async flush API. */
/** Status code to inject for the next complete read. */
/** Status code to inject for the next complete write. */
/** The current task which gets delayed. */
/** Timestamp when the delay expires. */
/** Flag whether a blocking event is pending and needs
 *  processing by the I/O manager. */
/** Blocking event type. */
/** Additional data needed for the event types. */
/** Cancelation event. */
/** The task to cancel. */

/* Data for exclusive use by the assigned async I/O manager. */
/** Pointer to the next endpoint assigned to the manager. */
/** Pointer to the previous endpoint assigned to the manager. */
/** List of pending requests (not submitted due to usage restrictions
 *  or a pending flush request). */
/** Tail of pending requests. */
/** Tree of currently locked ranges.
 *  If a write task is enqueued the range gets locked and any other
 *  task writing to that range has to wait until the task completes. */
/** Number of requests currently being processed for this endpoint
 *  (excluding flush requests). */
/** Number of requests processed during the last second. */
/** Current number of processed requests for the current update period. */
/** Flag whether the endpoint is about to be moved to another manager. */
/** Destination I/O manager. */
/** Pointer to the endpoint class data. */

/** Request completion function. */
/** Pointer to a request completion function. */

/** Pointer to the range lock we are waiting for. */
/** Next task in the list. (Depending on the state.) */
/** When non-zero the segment uses a bounce buffer because the provided buffer
 *  doesn't meet host requirements. */
/** Pointer to the used bounce buffer if any. */
/** Start offset in the bounce buffer to copy from. */
/** Flag whether this is a prefetch request. */
/** Already prepared native I/O request.
 *  Used if the request is prepared already but
 *  was not queued because the host has not enough */
/** Completion function to call on completion. */
/** Number of bytes to transfer until this task completes. */
/** Flag whether the task completed. */