Lines Matching defs:to

9  * Permission is hereby granted, free of charge, to any person obtaining a
11 * to deal in the Software without restriction, including without limitation
12 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
13 * and/or sell copies of the Software, and to permit persons to whom the
14 * Software is furnished to do so, subject to the following conditions:
66 * a tiling change if we ever need to acquire one.
97 * Only wait 10 seconds for the gpu reset to complete to avoid hanging
99 * we should simply try to bail out and fail as gracefully as possible.
102 DRM_ERROR("Timed out waiting for the gpu reset to complete\n");
230 /* have to work out size/pitch and return them */
244 * Creates a new mm object and returns a handle to it.
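The create path documented above (line 244) is reached from userspace through the standard i915 GEM create ioctl. A minimal sketch, assuming libdrm headers and an already-open DRM fd; the helper name is illustrative, not taken from this file:

	#include <stdint.h>
	#include <string.h>
	#include <xf86drm.h>
	#include <drm/i915_drm.h>

	/* Hypothetical helper: create a GEM object and return its handle. */
	static int
	gem_create(int fd, uint64_t size, uint32_t *handle)
	{
		struct drm_i915_gem_create create;

		(void) memset(&create, 0, sizeof (create));
		create.size = size;		/* the kernel rounds this up to page size */

		if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create) != 0)
			return (-1);		/* errno carries the failure reason */

		*handle = create.handle;	/* handle to the new mm object */
		return (0);
	}
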
361 * page_length = bytes to copy for this page
429 /* prime objects have no backing filp to GEM pread/pwrite
500 * want to hold it while dereferencing the user data.
510 * right away and we therefore have to clflush anyway. */
545 * page_length = bytes to copy for this page
585 * Writes data to the object referenced by handle.
587 * On error, the contents of the buffer that were to be modified are undefined.
617 /* prime objects have no backing filp to GEM pread/pwrite
639 * textures). Fallback to the shmem path in that case. */
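The pwrite path above (lines 585-639) has a direct userspace counterpart. A minimal sketch under the same assumptions as the create example; the helper name is illustrative:

	#include <stdint.h>
	#include <string.h>
	#include <xf86drm.h>
	#include <drm/i915_drm.h>

	/* Hypothetical helper: write a range of bytes into a GEM object. */
	static int
	gem_pwrite(int fd, uint32_t handle, uint64_t offset, const void *data,
	    uint64_t size)
	{
		struct drm_i915_gem_pwrite pw;

		(void) memset(&pw, 0, sizeof (pw));
		pw.handle = handle;
		pw.offset = offset;			/* byte offset into the object */
		pw.size = size;
		pw.data_ptr = (uintptr_t)data;		/* user pointer to source bytes */

		return (drmIoctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &pw));
	}
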
696 * @ring: the ring expected to report seqno
700 * @timeout: in - how long to wait (NULL forever); out - how much time remaining
704 * locks are involved, it is sufficient to read the reset_counter before
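The in/out timeout contract described above is the same one the GEM wait ioctl exposes to userspace. A hedged sketch (assumes an open DRM fd and a buffer handle; a negative timeout means wait forever; the helper name is illustrative):

	#include <stdint.h>
	#include <string.h>
	#include <xf86drm.h>
	#include <drm/i915_drm.h>

	/* Hypothetical helper: wait on a buffer, in/out timeout in nanoseconds. */
	static int
	gem_wait(int fd, uint32_t handle, int64_t *timeout_ns)
	{
		struct drm_i915_gem_wait wait;

		(void) memset(&wait, 0, sizeof (wait));
		wait.bo_handle = handle;
		wait.timeout_ns = (timeout_ns != NULL) ? *timeout_ns : -1;	/* < 0: forever */

		if (drmIoctl(fd, DRM_IOCTL_I915_GEM_WAIT, &wait) != 0)
			return (-1);

		if (timeout_ns != NULL)
			*timeout_ns = wait.timeout_ns;	/* time remaining */
		return (0);
	}
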
764 /* We need to check whether any gpu reset happened in between
769 /* ... but upgrade the -EAGAIN to an -EIO if the gpu is truly
794 * Waits for a sequence number to be signaled, and cleans up the
841 * Ensures that all rendering to the object has completed and the object is
842 * safe to unbind from the GTT or access from the CPU.
903 * Called when user space prepares to use an object with the CPU, either
916 /* Only handle setting domains to types used by the CPU. */
939 /* Try to flush the object off the GPU without holding the lock.
941 * to catch cases where we are gazumped.
950 /* Silently promote "you're not bound, there was nothing to do"
951 * to success, since the client was just asking us to
968 * Called when user space has done writes to this buffer
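The two comments above (lines 903 and 968) describe the two halves of CPU access from userspace: set_domain before touching the pages, sw_finish once the writes are done. A minimal sketch of that pairing, with illustrative helper names:

	#include <stdint.h>
	#include <string.h>
	#include <xf86drm.h>
	#include <drm/i915_drm.h>

	/* Hypothetical helpers: bracket CPU access to a GEM object. */
	static int
	gem_cpu_prepare(int fd, uint32_t handle, int writing)
	{
		struct drm_i915_gem_set_domain sd;

		(void) memset(&sd, 0, sizeof (sd));
		sd.handle = handle;
		sd.read_domains = I915_GEM_DOMAIN_CPU;
		sd.write_domain = writing ? I915_GEM_DOMAIN_CPU : 0;

		return (drmIoctl(fd, DRM_IOCTL_I915_GEM_SET_DOMAIN, &sd));
	}

	static int
	gem_cpu_finish(int fd, uint32_t handle)
	{
		struct drm_i915_gem_sw_finish sf;

		(void) memset(&sf, 0, sizeof (sf));
		sf.handle = handle;

		return (drmIoctl(fd, DRM_IOCTL_I915_GEM_SW_FINISH, &sf));
	}
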
1022 /* prime objects have no backing filp to GEM mmap
1062 /* Access to snoopable pages through the GTT is incoherent. */
1099 * GEM memory mapping works by handing back to userspace a fake mmap offset
1115 DRM_ERROR("failed to alloc kernel memory");
1136 * relinquish ownership of the pages back to the system.
1139 * object through the GTT and then lose the fence register due to
1196 * @obj: object to check
1214 * Previous chips need to be aligned to the size of the smallest
1267 * Simply returns the fake offset to userspace so it can mmap it.
1272 * (since it may have been evicted to make room for something), allocating
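The fake-offset scheme described above (lines 1099 and 1267) is consumed from userspace by asking the kernel for the offset and handing it to mmap() on the DRM fd. A minimal sketch, assuming an object created as in the earlier example:

	#include <stdint.h>
	#include <string.h>
	#include <sys/types.h>
	#include <sys/mman.h>
	#include <xf86drm.h>
	#include <drm/i915_drm.h>

	/* Hypothetical helper: map a GEM object through the GTT aperture. */
	static void *
	gem_mmap_gtt(int fd, uint32_t handle, size_t size)
	{
		struct drm_i915_gem_mmap_gtt arg;
		void *ptr;

		(void) memset(&arg, 0, sizeof (arg));
		arg.handle = handle;

		/* The kernel allocates (or reuses) the fake mmap offset. */
		if (drmIoctl(fd, DRM_IOCTL_I915_GEM_MMAP_GTT, &arg) != 0)
			return (NULL);

		/* Faults on this mapping are served through the GTT aperture. */
		ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd,
		    (off_t)arg.offset);
		return (ptr == MAP_FAILED ? NULL : ptr);
	}
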
1337 DRM_ERROR("Failed to allocate page list. size = %ld", np * sizeof(caddr_t));
1353 * multiple times before they are released by a single call to
1388 /* Keep the seqno relative to the current ring */
1399 /* Move from whatever list we were on to the tail of execution. */
1407 /* Bump MRU to take account of the delayed flush */
1415 TRACE_GEM_OBJ_HISTORY(obj, "to active");
1440 TRACE_GEM_OBJ_HISTORY(obj, "to inactive");
1453 /* Carefully retire all requests without writing to the rings */
1480 /* HWS page seqno needs to be set to a value less than what we
1481 * will inject into the ring
1529 * Emit any outstanding flushes - execbuf can fail to emit the flush
1530 * after having emitted the batchbuffer command. Hence we need to fix
1531 * things up similar to emitting the lazy request. The difference here
1568 * to explicitly hold another reference here.
1598 /* delay by HZ and then run the work directly (not inserted into a Linux-style workqueue) */
1769 * attached to the fence, otherwise just clear the fence.
1790 /* Move everything out of the GPU domains to ensure we do any
1828 /* We know the GPU must have read the request to have
1830 * of tail of the request to update the last known position
1839 * by the ringbuffer to the flushing/inactive lists as appropriate.
1990 /* Need to make sure the object gets inactive eventually. */
2003 /* Do this after OLR check to make sure we make forward progress polling
2028 * i915_gem_object_sync - sync an object to a ring.
2031 * @to: ring we wish to use the object on. May be NULL.
2033 * This code is meant to abstract object synchronization with the GPU.
2041 struct intel_ring_buffer *to)
2047 if (from == NULL || to == from)
2050 if (to == NULL || !i915_semaphore_is_enabled(obj->base.dev))
2053 idx = intel_ring_sync_index(from, to);
2063 ret = to->sync_to(to, from, seqno);
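A simplified, standalone model of the decision flow quoted above from i915_gem_object_sync: no-op when the rings match, CPU wait when semaphores are unavailable, otherwise a semaphore wait emitted on the destination ring. The struct and helper names here are illustrative stand-ins, not the driver's types:

	#include <stddef.h>

	/* Illustrative stand-in types; not the driver's definitions. */
	struct ring {
		int (*sync_to)(struct ring *to, struct ring *from, unsigned int seqno);
	};

	struct gem_obj {
		struct ring *last_ring;		/* ring that last rendered to the object */
		unsigned int last_seqno;	/* seqno of that rendering */
		int (*wait_rendering)(struct gem_obj *obj);	/* CPU-side wait */
	};

	static int
	object_sync(struct gem_obj *obj, struct ring *to, int semaphores_enabled)
	{
		struct ring *from = obj->last_ring;

		if (from == NULL || to == from)
			return (0);			/* nothing to synchronize */

		if (to == NULL || !semaphores_enabled)
			return (obj->wait_rendering(obj));	/* fall back to a CPU wait */

		/* Ask the destination ring to wait for the source ring's seqno. */
		return (to->sync_to(to, from, obj->last_seqno));
	}
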
2108 /* Continue on if we fail due to EIO, the GPU is hung so we
2109 * should be safe and we need to cleanup or else we might
2131 /* Avoid an unnecessary call to unbind on rebind. */
2181 * for a partial fence not to be evaluated between writes, we
2182 * precede the update with write to turn off the fence register,
2320 /* And similarly be paranoid that no direct access to this region
2321 * is reordered to before the fence is installed.
2398 /* First try to find a free reg */
2412 /* None available, try to steal one or wait for a user to finish */
2425 * @obj: object to map through a fence reg
2427 * When mapping objects through the GTT, userspace wants to be able to write
2428 * to them without having to worry about swizzling if the object is tiled.
2447 * will need to serialise the write to the associated fence register?
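Userspace opts into the tiling/fence behaviour described above with the set_tiling ioctl. A hedged sketch declaring an object X-tiled (the stride must match the object's row pitch; the helper name is illustrative):

	#include <stdint.h>
	#include <string.h>
	#include <xf86drm.h>
	#include <drm/i915_drm.h>

	/* Hypothetical helper: declare an object X-tiled with the given stride. */
	static int
	gem_set_tiling_x(int fd, uint32_t handle, uint32_t stride)
	{
		struct drm_i915_gem_set_tiling st;

		(void) memset(&st, 0, sizeof (st));
		st.handle = handle;
		st.tiling_mode = I915_TILING_X;
		st.stride = stride;		/* must match the object's row pitch */

		return (drmIoctl(fd, DRM_IOCTL_I915_GEM_SET_TILING, &st));
	}
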
2491 /* On non-LLC machines we have to be careful when putting differing
2492 * types of snoopable memory together to avoid the prefetcher
2595 * before evicting everything in a vain attempt to find space.
2598 DRM_ERROR("Attempting to bind an object larger than the aperture: object=%zd > %s aperture=%zu\n",
2672 * to GPU, and we can ignore the cache flush because it'll happen
2686 * we do not need to manually clear the CPU cache lines. However,
2688 * flushed/invalidated. As we always have to emit invalidations
2708 * to it immediately go to main memory as far as we know, so there's
2711 * However, we do have to enforce the order so that all writes through
2712 * the GTT land before any writes to the device, such as updates to
2735 * Moves a single object to the GTT read, and possibly write domain.
2738 * flushes to occur.
2746 /* Not valid to be called on unbound objects. */
2759 /* Serialise direct access to this object with the barriers for
2815 * currently pointing to our region in the aperture.
2837 * Just set it to the CPU cache for now.
2914 * any flushes to be pipelined (for pageflips).
2932 * a result, we make sure that the pinning that is about to occur is
2937 * of uncaching, which would allow us to flush all the LLC-cached data
2938 * with that bit in the PTE to main memory with just one PIPE_CONTROL.
2945 * (e.g. libkms for the bootup splash), we have to ensure that we
2984 * Moves a single object to the CPU read, and possibly write domain.
2987 * flushes to occur.
3021 * need to be invalidated at next use.
3034 * Note that if we were to use the current jiffies each time around the loop,
3035 * we wouldn't escape the function with any frames outstanding if the time to
3039 * relatively low latency when blocking on a particular request to finish.
3240 * by the gpu. Users of this interface expect objects to eventually
3287 /* Avoid an unnecessary call to unbind on the first bind. */
3315 DRM_ERROR("failed to init gem object");
3328 * compared to uncached. Graphics requests other than
3331 * don't need to clflush on the CPU side, and on the
3332 * GPU side we only need to flush internal caches to
3333 * get data visible to the CPU.
3336 * need to rebind when first used as such.
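The LLC-versus-uncached trade-off discussed above is exposed to userspace through the set_caching ioctl. A minimal sketch, assuming the same DRM fd and handle as in the earlier examples:

	#include <stdint.h>
	#include <string.h>
	#include <xf86drm.h>
	#include <drm/i915_drm.h>

	/* Hypothetical helper: set an object's caching mode (NONE or CACHED). */
	static int
	gem_set_caching(int fd, uint32_t handle, uint32_t caching)
	{
		struct drm_i915_gem_caching arg;

		(void) memset(&arg, 0, sizeof (arg));
		arg.handle = handle;
		arg.caching = caching;	/* I915_CACHING_NONE or I915_CACHING_CACHED */

		return (drmIoctl(fd, DRM_IOCTL_I915_GEM_SET_CACHING, &arg));
	}
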
3420 * We need to replace this with a semaphore, or something.
3431 /* Cancel the retire work handler, wait for it to finish if running
3458 DRM_DEBUG("0x%x was already programmed to %x\n",
3616 * Some BIOSes fail to initialise the GTT, which will cause DMA faults when
3617 * the IOMMU is enabled. We need to clear the whole GTT.
3624 DRM_ERROR("failed to allocate framebuffer");
3630 /* copy old content to fb buffer */
3636 DRM_ERROR("failed to pin fb ret %d", ret);
3733 DRM_ERROR("failed to idle hardware: %d\n", ret);
3763 /* On GEN3 we really need to make sure the ARB C3 LP bit is set */
3782 /* Initialize fence registers to zero */
3908 DRM_ERROR("failed to init phys object %d size: %lu\n", id, obj->base.size);
3913 /* bind to the object */
3920 DRM_ERROR("failed to get page list\n");
3976 * later retire_requests won't dereference our soon-to-be-gone