Lines Matching defs:to
9 * Permission is hereby granted, free of charge, to any person obtaining a
11 * to deal in the Software without restriction, including without limitation
12 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
13 * and/or sell copies of the Software, and to permit persons to whom the
14 * Software is furnished to do so, subject to the following conditions:
64 * a tiling change if we ever need to acquire one.
95 * Only wait 10 seconds for the gpu reset to complete to avoid hanging
97 * we should simply try to bail out and fail as gracefully as possible.
100 DRM_ERROR("Timed out waiting for the gpu reset to complete\n");
228 /* have to work out size/pitch and return them */
242 * Creates a new mm object and returns a handle to it.
359 * page_length = bytes to copy for this page
427 /* prime objects have no backing filp to GEM pread/pwrite
498 * want to hold it while dereferencing the user data.
508 * right away and we therefore have to clflush anyway. */
543 * page_length = bytes to copy for this page
583 * Writes data to the object referenced by handle.
585 * On error, the contents of the buffer that were to be modified are undefined.
615 /* prime objects have no backing filp to GEM pread/pwrite
637 * textures). Fallback to the shmem path in that case. */
694 * @ring: the ring expected to report seqno
698 * @timeout: in - how long to wait (NULL forever); out - how much time remaining
702 * locks are involved, it is sufficient to read the reset_counter before
762 /* We need to check whether any gpu reset happened in between
767 * ... but upgrade the -EAGAIN to an -EIO if the gpu is truly
792 * Waits for a sequence number to be signaled, and cleans up the
839 * Ensures that all rendering to the object has completed and the object is
840 * safe to unbind from the GTT or access from the CPU.
901 * Called when user space prepares to use an object with the CPU, either
914 /* Only handle setting domains to types used by the CPU. */
937 /* Try to flush the object off the GPU without holding the lock.
939 * to catch cases where we are gazumped.
948 /* Silently promote "you're not bound, there was nothing to do"
949 * to success, since the client was just asking us to
966 * Called when user space has done writes to this buffer
1020 /* prime objects have no backing filp to GEM mmap
1060 /* Access to snoopable pages through the GTT is incoherent. */
1095 * to overload the umem_cookie. This *can* cause race conditions where
1096 * released memory can have bad cookie values. By default, we set it to
1105 * GEM memory mapping works by handing back to userspace a fake mmap offset
1120 * as it causes a drm object to have two different memory allocations
1121 * (not to mention some ugly overloading of the umem_cookie). But maybe
1122 * this is something to fix with the VMA code in the next driver.
1127 DRM_ERROR("failed to alloc kernel memory");
1149 * relinquish ownership of the pages back to the system.
1152 * object through the GTT and then lose the fence register due to
1209 * @obj: object to check
1227 * Previous chips need to be aligned to the size of the smallest
1280 * Simply returns the fake offset to userspace so it can mmap it.
1285 * (since it may have been evicted to make room for something), allocating
1356 DRM_ERROR("Failed to allocate page list. size = %ld", np * sizeof(caddr_t));
1372 * multiple times before they are released by a single call to
1407 /* Keep the seqno relative to the current ring */
1418 /* Move from whatever list we were on to the tail of execution. */
1426 /* Bump MRU to take account of the delayed flush */
1434 TRACE_GEM_OBJ_HISTORY(obj, "to active");
1459 TRACE_GEM_OBJ_HISTORY(obj, "to inactive");
1472 /* Carefully retire all requests without writing to the rings */
1499 /* HWS page needs to be set less than what we
1500 * will inject to ring
1548 * Emit any outstanding flushes - execbuf can fail to emit the flush
1549 * after having emitted the batchbuffer command. Hence we need to fix
1550 * things up similar to emitting the lazy request. The difference here
1587 * to explicitly hold another reference here.
1617 /* delay HZ ticks and then run the work item (not inserted into a Linux workqueue) */
1788 * attached to the fence, otherwise just clear the fence.
1809 /* Move everything out of the GPU domains to ensure we do any
1847 /* We know the GPU must have read the request to have
1849 * of tail of the request to update the last known position
1858 * by the ringbuffer to the flushing/inactive lists as appropriate.
2009 /* Need to make sure the object gets inactive eventually. */
2022 /* Do this after OLR check to make sure we make forward progress polling
2047 * i915_gem_object_sync - sync an object to a ring.
2050 * @to: ring we wish to use the object on. May be NULL.
2052 * This code is meant to abstract object synchronization with the GPU.
2060 struct intel_ring_buffer *to)
2066 if (from == NULL || to == from)
2069 if (to == NULL || !i915_semaphore_is_enabled(obj->base.dev))
2072 idx = intel_ring_sync_index(from, to);
2082 ret = to->sync_to(to, from, seqno);
2127 /* Continue on if we fail due to EIO, the GPU is hung so we
2128 * should be safe and we need to cleanup or else we might
2150 /* Avoid an unnecessary call to unbind on rebind. */
2200 * for a partial fence not to be evaluated between writes, we
2201 * precede the update with write to turn off the fence register,
2339 /* And similarly be paranoid that no direct access to this region
2340 * is reordered to before the fence is installed.
2417 /* First try to find a free reg */
2431 /* None available, try to steal one or wait for a user to finish */
2444 * @obj: object to map through a fence reg
2446 * When mapping objects through the GTT, userspace wants to be able to write
2447 * to them without having to worry about swizzling if the object is tiled.
2466 * will need to serialise the write to the associated fence register?
2510 /* On non-LLC machines we have to be careful when putting differing
2511 * types of snoopable memory together to avoid the prefetcher
2614 * before evicting everything in a vain attempt to find space.
2617 DRM_ERROR("Attempting to bind an object larger than the aperture: object=%zd > %s aperture=%zu\n",
2691 * to GPU, and we can ignore the cache flush because it'll happen
2705 * we do not need to manually clear the CPU cache lines. However,
2707 * flushed/invalidated. As we always have to emit invalidations
2727 * to it immediately go to main memory as far as we know, so there's
2730 * However, we do have to enforce the order so that all writes through
2731 * the GTT land before any writes to the device, such as updates to
2754 * Moves a single object to the GTT read, and possibly write domain.
2757 * flushes to occur.
2765 /* Not valid to be called on unbound objects. */
2778 /* Serialise direct access to this object with the barriers for
2834 * currently pointing to our region in the aperture.
2856 * Just set it to the CPU cache for now.
2933 * any flushes to be pipelined (for pageflips).
2951 * a result, we make sure that the pinning that is about to occur is
2956 * of uncaching, which would allow us to flush all the LLC-cached data
2957 * with that bit in the PTE to main memory with just one PIPE_CONTROL.
2964 * (e.g. libkms for the bootup splash), we have to ensure that we
3003 * Moves a single object to the CPU read, and possibly write domain.
3006 * flushes to occur.
3040 * need to be invalidated at next use.
3053 * Note that if we were to use the current jiffies each time around the loop,
3054 * we wouldn't escape the function with any frames outstanding if the time to
3058 * relatively low latency when blocking on a particular request to finish.
3259 * by the gpu. Users of this interface expect objects to eventually
3306 /* Avoid an unnecessary call to unbind on the first bind. */
3334 DRM_ERROR("failed to init gem object");
3347 * compared to uncached. Graphics requests other than
3350 * don't need to clflush on the CPU side, and on the
3351 * GPU side we only need to flush internal caches to
3352 * get data visible to the CPU.
3355 * need to rebind when first used as such.
3439 * We need to replace this with a semaphore, or something.
3450 /* Cancel the retire work handler, wait for it to finish if running
3477 DRM_DEBUG("0x%x was already programmed to %x\n",
3635 * Some BIOSes fail to initialise the GTT, which will cause DMA faults when
3636 * the IOMMU is enabled. We need to clear the whole GTT.
3643 DRM_ERROR("failed to allocate framebuffer");
3649 /* copy old content to fb buffer */
3655 DRM_ERROR("failed to pin fb ret %d", ret);
3752 DRM_ERROR("failed to idle hardware: %d\n", ret);
3782 /* On GEN3 we really need to make sure the ARB C3 LP bit is set */
3801 /* Initialize fence registers to zero */
3927 DRM_ERROR("failed to init phys object %d size: %lu\n", id, obj->base.size);
3932 /* bind to the object */
3939 DRM_ERROR("failed to get page list\n");
3995 * later retire_requests won't dereference our soon-to-be-gone