i915_gem.c revision 1450
/*
 * Copyright (c) 2006, 2013, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2009, 2013, Intel Corporation.
 * All Rights Reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *    Eric Anholt <eric@anholt.net>
 */

/* As we do not have an associated fence register, we will force
 * a tiling change if we ever need to acquire one.
 */

/* Only wait 10 seconds for the gpu reset to complete to avoid hanging
 * userspace. If it takes that long, something really bad is going on and
 * we should simply try to bail out and fail as gracefully as possible.
 */
DRM_ERROR("Timed out waiting for the gpu reset to complete\n");
/* fix me mutex_lock_interruptible */

/* GEM with user mode setting was never supported on ilk and later. */

/* Allocate the new object */

/* drop reference from allocate - handle holds it now */

/* have to work out size/pitch and return them */

/*
 * Creates a new mm object and returns a handle to it.
 */

/* Use the unswizzled path if this page isn't affected. */
DRM_ERROR("slow_shmem_bit17_copy unswizzled path failed, ret = %d", ret);
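/*
 * Hedged sketch of the "work out size/pitch and return them" step for a
 * dumb-buffer style create. The 64-byte pitch alignment and page-sized
 * rounding are illustrative assumptions, not values taken from this file.
 */
#include <stdint.h>

#define EXAMPLE_ALIGN(x, a)	(((x) + ((uint64_t)(a) - 1)) & ~((uint64_t)(a) - 1))

static void
example_size_pitch(uint32_t width, uint32_t height, uint32_t bpp,
    uint32_t *pitch, uint64_t *size)
{
	uint32_t cpp = (bpp + 7) / 8;			/* bytes per pixel */

	*pitch = (uint32_t)EXAMPLE_ALIGN((uint64_t)width * cpp, 64);
	*size  = EXAMPLE_ALIGN((uint64_t)*pitch * height, 4096);	/* page */
}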
/* Copy the data, XORing A6 with A17 (1). The user already knows he's
 * XORing with the other bits (A9 for Y, A9 and A10 for X)
 */

/* If we're not in the cpu read domain, set ourselves into the gtt
 * read domain and manually flush cachelines (if required). This
 * optimizes for the case when the gpu will dirty the data
 * anyway again before the next pread happens.
 */

/* Operation in this page
 *
 * shmem_page_index = page number within shmem file
 * shmem_page_offset = offset within page in shmem file
 * data_page_index = page number in get_user_pages return
 * data_page_offset = offset within data_page_index page
 * page_length = bytes to copy for this page
 */

/*
 * Reads data from the object referenced by handle.
 * On error, the contents of *data are undefined.
 */

/* Bounds check source. */

/* prime objects have no backing filp to GEM pread/pwrite */

/* Pin the user pages containing the data. We can't fault while
 * holding the struct mutex, and all of the pwrite implementations
 * want to hold it while dereferencing the user data.
 */

/* If we're not in the cpu write domain, set ourselves into the gtt
 * write domain and manually flush cachelines (if required). This
 * optimizes for the case when the gpu will use the data
 * right away and we therefore have to clflush anyway.
 */

/* Same trick applies for invalidating partially written cachelines before
 */

/* Operation in this page
 *
 * shmem_page_index = page number within shmem file
 * shmem_page_offset = offset within page in shmem file
 * data_page_index = page number in get_user_pages return
 * data_page_offset = offset within data_page_index page
 * page_length = bytes to copy for this page
 */

/*
 * Writes data to the object referenced by handle.
 * On error, the contents of the buffer that were to be modified are undefined.
 */

/* Bounds check destination. */

/* prime objects have no backing filp to GEM pread/pwrite */

/* We can only do the GTT pwrite on untiled buffers, as otherwise
 * it would end up going through the fenced access, and we'll get
 * different detiling behavior between reading and writing.
 * pread/pwrite currently are reading and writing from the CPU
 * perspective, requiring manual detiling by the client.
 */

/* Note that the gtt paths might fail with non-page-backed user
 * pointers (e.g. gtt mappings when moving data between
 * textures). Fallback to the shmem path in that case.
 */

/* Flushing cursor object */

/* Non-interruptible callers can't handle -EAGAIN, hence return
 * -EIO unconditionally for these.
 */

/* Recovery complete, but the reset failed ... */

/*
 * Compare seqno against outstanding lazy request. Emit a request if they are
 */

/*
 * __wait_seqno - wait until execution of seqno has finished
 * @ring: the ring expected to report seqno
 * @reset_counter: reset sequence associated with the given seqno
 * @interruptible: do an interruptible wait (normally yes)
 * @timeout: in - how long to wait (NULL forever); out - how much time remaining
 *
 * Note: It is of utmost importance that the passed in seqno and reset_counter
 * values have been read by the caller in an smp safe manner. Where read-side
 * locks are involved, it is sufficient to read the reset_counter before
 * unlocking the lock that protects the seqno. For lockless tricks, the
 * reset_counter _must_ be read before, and an appropriate smp_rmb must be
 * inserted.
 *
 * Returns 0 if the seqno was found within the allotted time. Else returns the
 * errno with remaining time filled in timeout argument.
 */

/* busy check is faster than cv wait on gen6+ */

/* Frequently reading the CS register may cause the GEN7 platform to hang,
 * but it's crucial for the missed IRQ issue.
 * So the first wait busy-checks the seqno; the second wait forces correct
 * ordering between the irq and seqno writes, then checks again.
 */

/* We need to check whether any gpu reset happened in between
 * the caller grabbing the seqno and now ...
 */

/* ... but upgrade the -EAGAIN to an -EIO if the gpu is truly
 */
DRM_ERROR("%s returns %d (awaiting %d at %d, next %d)\n",
/*
 * Waits for a sequence number to be signaled, and cleans up the
 * request and object lists appropriately for that event.
 */

/* Manually manage the write flush as we may have not yet
 * Note that the last_write_seqno is always the earlier of
 * the two (read/write) seqno, so if we have successfully waited,
 * we know we have passed the last write.
 */

/*
 * Ensures that all rendering to the object has completed and the object is
 * safe to unbind from the GTT or access from the CPU.
 */

/* A nonblocking variant of the above wait. This is a highly dangerous routine
 * as the object state may change during this call.
 */

/*
 * Called when user space prepares to use an object with the CPU, either
 * through the mmap ioctl's mapping or a GTT mapping.
 */

/* Only handle setting domains to types used by the CPU. */

/* Having something in the write domain implies it's in the read
 * domain, and only that read domain. Enforce that in the request.
 */

/* Try to flush the object off the GPU without holding the lock.
 * We will repeat the flush holding the lock in the normal manner
 * to catch cases where we are gazumped.
 */

/* Silently promote "you're not bound, there was nothing to do"
 * to success, since the client was just asking us to
 * make sure everything was done.
 */

/*
 * Called when user space has done writes to this buffer
 */

/* Pinned buffers may be scanout, so flush the cache */

/*
 * Maps the contents of an object, returning the address it is mapped
 *
 * While the mapping holds a reference on the contents of the object, it
 * doesn't imply a ref on the object itself.
 */

/* prime objects have no backing filp to GEM mmap */

/* Now bind it into the GTT if needed */

/* Access to snoopable pages through the GTT is incoherent. */

/* Finally, remap it using the new GTT offset */

/*
 * i915_gem_create_mmap_offset - create a fake mmap offset for an object
 *
 * GEM memory mapping works by handing back to userspace a fake mmap offset
 * it can use in a subsequent mmap(2) call. The DRM core code then looks
 * up the object based on the offset and sets up the various memory mapping
 *
 * This routine allocates and attaches a fake offset for @obj.
 */

/* user_token is the fake offset
 * which is created in drm_map_handle at alloc time
 */

/*
 * i915_gem_release_mmap - remove physical page mappings
 *
 * Preserve the reservation of the mmapping with the DRM core code, but
 * relinquish ownership of the pages back to the system.
 *
 * It is vital that we remove the page mapping if we have mapped a tiled
 * object through the GTT and then lose the fence register due to
 * resource pressure. Similarly if the object has been moved out of the
 * aperture, then pages mapped into userspace must be revoked. Removing the
 * mapping will then trigger a page fault on the next user access, allowing
 * fixup by i915_gem_fault().
 */

/* Previous chips need a power-of-two fence region when tiling */

/*
 * i915_gem_get_gtt_alignment - return required GTT alignment for an object
 *
 * Return the required GTT alignment for an object, taking into account
 * potential fence register mapping if needed.
 */

/* Minimum alignment is 4k (GTT page size), but might be greater
 * if a fence register is needed for the object.
 *
 * Previous chips need to be aligned to the size of the smallest
 * fence register that can contain the object.
 */

/*
 * i915_gem_mmap_gtt_ioctl - prepare an object for GTT mmap'ing
 * @data: GTT mapping ioctl data
 * @file_priv: GEM object info
 *
 * Simply returns the fake offset to userspace so it can mmap it.
 * The mmap call will end up in drm_gem_mmap(), which will set things
 * up so we can get faults in the handler above.
 *
 * The fault handler will take care of binding the object into the GTT
 * (since it may have been evicted to make room for something), allocating
 * a fence register, and mapping the appropriate aperture address into
 */

/* In the event of a disaster, abandon all caches and
 */

/* Ensure that the associated pages are gathered from the backing storage
 * and pinned into our object. i915_gem_object_get_pages() may be called
 * multiple times before they are released by a single call to
 * i915_gem_object_put_pages() - once the pages are no longer referenced
 * either as a result of memory pressure (reaping pages under the shrinker)
 * or as the object is itself released.
 */

/* Keep the seqno relative to the current ring */

/* Add a reference if we're newly entering the active list. */

/* Move from whatever list we were on to the tail of execution. */

/* Bump MRU to take account of the delayed flush */

/* Carefully retire all requests without writing to the rings */

/* Finally reset hw state */

/* HWS page needs to be set less than what we
 */

/* Carefully set the last_seqno value so that wrap
 */

/* reserve 0 for non-seqno */

/* Emit any outstanding flushes - execbuf can fail to emit the flush
 * after having emitted the batchbuffer command. Hence we need to fix
 * things up similar to emitting the lazy request. The difference here
 * is that the flush _must_ happen before the next request, no matter
 */

/* Record the position of the start of the request so that
 * should we detect the updated seqno part-way through the
 * GPU processing the request, we never over-estimate the
 */

/* Whilst this request exists, batch_obj will be on the
 * active_list, and so will hold the active reference. Only when this
 * request is retired will the batch_obj be moved onto the
 * inactive_list and lose its active reference. Hence we do not need
 * to explicitly hold another reference here.
 */

/* change to delay HZ and then run work (not inserted into a Linux workqueue) */
DRM_DEBUG("i915_gem: schedule_delayed_work");
/* There is a possibility that the unmasked head address,
 * pointing inside the ring, matches the batch_obj address range.
 * However, this is extremely unlikely.
 */

/* Innocent until proven guilty */
DRM_ERROR("%s hung %s bo (0x%x ctx %d) at 0x%x\n",
    inside ? "inside" : "flushing",
/* If contexts are disabled or this is the default context, use
 */

/* Commit delayed tiling changes if we have an object still
 * attached to the fence, otherwise just clear the fence.
 */

/* Move everything out of the GPU domains to ensure we do any
 * necessary invalidation upon reuse.
 */

/*
 * This function clears the request list as sequence numbers are passed.
 */

/* We know the GPU must have read the request to have
 * sent us the seqno + interrupt, so use the position
 * of the tail of the request to update the last known position
 */

/* Move any buffers on the active list that are no longer referenced
 */

/* Come back later if the device is busy... */

/* Send a periodic flush down the ring so we don't hold onto GEM
 */
DRM_DEBUG("i915_gem: schedule_delayed_work");
* Ensures that an object will eventually get non-busy by flushing any required * write domains, emitting any outstanding lazy request and retiring and * i915_gem_wait_ioctl - implements DRM_IOCTL_I915_GEM_WAIT * @DRM_IOCTL_ARGS: standard ioctl arguments * Returns 0 if successful, else an error is returned with the remaining time in * -ETIME: object is still busy after timeout * -ERESTARTSYS: signal interrupted the wait * -ENONENT: object doesn't exist * Also possible, but rare: * -ENODEV: Internal IRQ fail * -E?: The add request failed * The wait ioctl with a timeout of 0 reimplements the busy ioctl. With any * non-zero timeout parameter the wait ioctl will wait for the given number of * nanoseconds on an object becoming unbusy. Since the wait itself does so * without holding struct_mutex the object may become re-busied before this * function completes. A similar but shorter * race condition exists in the busy /* Need to make sure the object gets inactive eventually. */ /* Do this after OLR check to make sure we make forward progress polling * on this IOCTL with a 0 timeout (like busy ioctl) * i915_gem_object_sync - sync an object to a ring. * @obj: object which may be in use on another ring. * @to: ring we wish to use the object on. May be NULL. * This code is meant to abstract object synchronization with the GPU. * Calling with NULL implies synchronizing the object with the CPU * rather than a particular GPU ring. * Returns 0 if successful, else propagates up the lower layer error. /* We use last_read_seqno because sync_to() * might have just caused seqno wrap under /* Force a pagefault for domain tracking on next user access */ * Unbinds an object from the GTT aperture. /* Continue on if we fail due to EIO, the GPU is hung so we * should be safe and we need to cleanup or else we might * cause memory corruption through use-after-free. /* release the fence reg _after_ flushing */ /* Avoid an unnecessary call to unbind on rebind. */ /* Flush everything onto the inactive list. */ /* To w/a incoherency with non-atomic 64-bit register updates, * we split the 64-bit update into two 32-bit writes. In order * for a partial fence not to be evaluated between writes, we * precede the update with write to turn off the fence register, * and only enable the fence as the last step. * For extra levels of paranoia, we make sure each step lands * before applying the next step. DRM_ERROR(
"object 0x%08x [fenceable? %d] not 1M or pot-size (0x%08x) aligned\n",
/* Note: pitch better be a power of two tile widths */
DRM_ERROR("object 0x%08x not 512K or pot-size 0x%08x aligned\n",
/* Ensure that all CPU reads are completed before installing a fence
 * and all writes before removing the fence.
 */
DRM_ERROR("bogus fence setup with stride: 0x%x, tiling mode: %i\n",
/* And similarly be paranoid that no direct access to this region
 * is reordered to before the fence is installed.
 */

/* First try to find a free reg */

/* None available, try to steal one or wait for a user to finish */

/*
 * i915_gem_object_get_fence_reg - set up a fence reg for an object
 * @obj: object to map through a fence reg
 *
 * When mapping objects through the GTT, userspace wants to be able to write
 * to them without having to worry about swizzling if the object is tiled.
 * This function walks the fence regs looking for a free one for @obj,
 * stealing one if it can't find any.
 *
 * It then sets up the reg based on the object's properties: address, pitch
 *
 * For an untiled surface, this removes any existing fence.
 */

/* Have we updated the tiling parameters upon the object and so
 * will need to serialise the write to the associated fence register?
 */

/* Just update our place in the LRU if our fence is getting reused. */

/* On non-LLC machines we have to be careful when putting differing
 * types of snoopable memory together to avoid the prefetcher
 * crossing memory domains and dying.
 */
DRM_ERROR("object found on GTT list with no space reserved\n");
DRM_ERROR("object reserved space [%08lx, %08lx] with wrong color, cache_level=%x, color=%lx\n",
DRM_ERROR("invalid GTT space found at [%08lx, %08lx] - color=%x\n",
/*
 * Finds free space in the GTT aperture and binds the object there.
 */

/* If the object is bigger than the entire aperture, reject it early
 * before evicting everything in a vain attempt to find space.
 */
DRM_ERROR("Attempting to bind an object larger than the aperture: object=%zd > %s aperture=%zu\n",
/* If we don't have a page list set up, then we're not pinned
 * to GPU, and we can ignore the cache flush because it'll happen
 */

/* Stolen memory is always coherent with the GPU as it is explicitly
 * marked as wc by the system, or the system is cache-coherent.
 */

/* If the GPU is snooping the contents of the CPU cache,
 * we do not need to manually clear the CPU cache lines. However,
 * the caches are only snooped when the render cache is
 * and flushes when moving into and out of the RENDER domain, correct
 * snooping behaviour occurs naturally as the result of our domain
 */

/** Flushes the GTT write domain for the object if it's dirty. */

/* No actual flushing is required for the GTT write domain. Writes
 * to it immediately go to main memory as far as we know, so there's
 * no chipset flush. It also doesn't land in render cache.
 *
 * However, we do have to enforce the order so that all writes through
 * the GTT land before any writes to the device, such as updates to
 */

/** Flushes the CPU write domain for the object if it's dirty. */

/*
 * Moves a single object to the GTT read, and possibly write domain.
 *
 * This function returns when the move is complete, including waiting on
 */

/* Not valid to be called on unbound objects. */

/* Serialise direct access to this object with the barriers for
 * coherent writes from the GPU, by effectively invalidating the
 * GTT domain upon first access.
 */

/* It should now be out of any other write domains, and we can update
 * the domain values for our changes.
 */

/* GPU reset can handle this error */
// BUG_ON((obj->base.write_domain & ~I915_GEM_DOMAIN_GTT) != 0);

/* And bump the LRU for this access */
DRM_DEBUG("can not change the cache level of pinned objects\n");
/* Before SandyBridge, you could not use tiling or fence
 * registers with snooped memory, so relinquish any fences
 * currently pointing to our region in the aperture.
 */

/* If we're coming from LLC cached, then we haven't
 * actually been tracking whether the data is in the
 * CPU cache or not, since we only allow one bit set
 * in obj->write_domain and have been skipping the clflushes.
 * Just set it to the CPU cache for now.
 */

/*
 * Prepare buffer for display plane (scanout, cursors, etc).
 * Can be called from an uninterruptible phase (modesetting) and allows
 * any flushes to be pipelined (for pageflips).
 */

/* The display engine is not coherent with the LLC cache on gen6. As
 * a result, we make sure that the pinning that is about to occur is
 * done with uncached PTEs. This is the lowest common denominator for all
 *
 * However for gen6+, we could do better by using the GFDT bit instead
 * of uncaching, which would allow us to flush all the LLC-cached data
 * with that bit in the PTE to main memory with just one PIPE_CONTROL.
 */

/* As the user may map the buffer once pinned in the display plane
 * (e.g. libkms for the bootup splash), we have to ensure that we
 * always use map_and_fenceable for all scanout buffers.
 */

/* It should now be out of any other write domains, and we can update
 * the domain values for our changes.
 */

/* Ensure that we invalidate the GPU's caches and TLBs. */

/*
 * Moves a single object to the CPU read, and possibly write domain.
 *
 * This function returns when the move is complete, including waiting on
 */

/* Flush the CPU cache if it's still invalid. */

/* It should now be out of any other write domains, and we can update
 * the domain values for our changes.
 */

/* If we're writing through the CPU, then the GPU read domains will
 * need to be invalidated at next use.
 */

/*
 * Throttle our rendering by waiting until the ring has completed our requests
 * emitted over 20 msec ago.
 *
 * Note that if we were to use the current jiffies each time around the loop,
 * we wouldn't escape the function with any frames outstanding if the time to
 * render a frame was over 20ms.
 *
 * This should get us reasonable parallelism between CPU and GPU but also
 * relatively low latency when blocking on a particular request to finish.
 */
DRM_INFO("bo is already pinned with incorrect alignment:"
    " offset=%x, req.alignment=%x, req.map_and_fenceable=%d,"
    " obj->map_and_fenceable=%d\n",
DRM_ERROR("Already pinned in i915_gem_pin_ioctl(): %d\n",
/* XXX - flush the CPU caches for pinned objects
 * as the X server doesn't manage domains yet
 */
DRM_ERROR("Not pinned by caller in i915_gem_pin_ioctl(): %d\n",
/* Count all active objects as busy, even if they are currently not used
 * by the gpu. Users of this interface expect objects to eventually
 * become non-busy without any further actions, therefore emit any
 * necessary flushes here.
 */

/* Don't enable buffer catch */

/* Avoid an unnecessary call to unbind on the first bind. */

/* On Gen6, we can have the GPU use the LLC (the CPU
 * cache) for about a 10% performance improvement
 * compared to uncached. Graphics requests other than
 * display scanout are coherent with the CPU in
 * accessing this cache. This means in this mode we
 * don't need to clflush on the CPU side, and on the
 * GPU side we only need to flush internal caches to
 * get data visible to the CPU.
 *
 * However, we maintain the display planes as UC, and so
 * need to rebind when first used as such.
 */
DRM_ERROR("i915_gem_init_object is not supported, BUG!");
/* Stolen objects don't hold a ref, but do hold pin count. Fix that up
 */
// if (obj->base.import_attach)
//	drm_prime_gem_destroy(&obj->base, NULL);

/* Under UMS, be paranoid and evict. */

/* Hack! Don't let anybody do execbuf while we don't control the chip.
 * We need to replace this with a semaphore, or something.
 * And not confound mm.suspended!
 */

/* Cancel the retire work handler, wait for it to finish if running
 */
DRM_DEBUG("0x%x was already programmed to %x\n",
/* Make sure all the writes land before disabling dop clock gating */

/*
 * XXX: There was some w/a described somewhere suggesting loading
 */
DRM_INFO("PPGTT enable failed. This is not fatal, but unexpected\n");
/* VLVA0 (potential hack), BIOS isn't actually waking us */

/* save original fb GTT */

/*
 * Some BIOSes fail to initialise the GTT, which will cause DMA faults when
 * the IOMMU is enabled. We need to clear the whole GTT.
 */

/* workaround: prealloc fb buffer, make sure the start address is 0 */

/* copy old content to fb buffer */

/* Flush everything out, we'll be doing GTT only from now on */

/* Allow hardware batchbuffers unless told otherwise, but not for KMS. */
DRM_ERROR("Reenabling wedged hardware, good luck\n");
/* On GEN3 we really need to make sure the ARB C3 LP bit is set */

/* Old X drivers will take 0-2 for front, back, depth buffers */

/* Initialize fence registers to zero */

/*
 * Create a physically contiguous memory object for this object
 * e.g. for cursor + overlay regs
 */

/* create a new object */

/* i915_gpu_idle() generates a warning message, so just ignore the return value */

/* Clean up our request list when the client is going away, so that
 * later retire_requests won't dereference our soon-to-be-gone
 */