Lines Matching defs:cyclic

54  *  The cyclic subsystem has been designed to take advantage of chip
60 * The cyclic subsystem is a low-level kernel subsystem designed to provide
62 * with existing terms, we dub such an interval timer a "cyclic"). Cyclics
64 * optionally bound to a CPU or a CPU partition. A cyclic's CPU or CPU
65 * partition binding may be changed dynamically; the cyclic will be "juggled"
66 * to a CPU which satisfies the new binding. Alternatively, a cyclic may
72 * The cyclic subsystem has interfaces with the kernel at-large, with other
74 * resume subsystem) and with the platform (the cyclic backend). Each
78 * The following diagram displays the cyclic subsystem's interfaces to
80 * the large arrow indicating the cyclic subsystem's consumer interface.
110 * cyclic_add() <-- Creates a cyclic
111 * cyclic_add_omni() <-- Creates an omnipresent cyclic
112 * cyclic_remove() <-- Removes a cyclic
113 * cyclic_bind() <-- Change a cyclic's CPU or partition binding
114 * cyclic_reprogram() <-- Reprogram a cyclic's expiration
119 * cyclic_offline() <-- Offlines cyclic operation on a CPU
123 * cyclic_suspend() <-- Suspends the cyclic subsystem on all CPUs
124 * cyclic_resume() <-- Resumes the cyclic subsystem on all CPUs
128 * cyclic_init() <-- Initializes the cyclic subsystem
139 * The cyclic subsystem is designed to minimize interference between cyclics
140 * on different CPUs. Thus, all of the cyclic subsystem's data structures
143 * Each cyc_cpu has a power-of-two sized array of cyclic structures (the
151 * heap is keyed by cyclic expiration time, with parents expiring earlier
157 * compares the root cyclic's expiration time to the current time. If the
159 * cyclic. Upon return from cyclic_expire(), the cyclic's new expiration time
162 * examines the (potentially changed) root cyclic, repeating the
164 * cyclic has an expiration time in the future. This expiration time
167 * shortly after the root cyclic's expiration time.
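As a concrete illustration of the expiry processing described in these fragments, here is a simplified sketch of the root-of-heap walk. The field names (cyp_heap, cyp_cyclics, cy_expire, cy_interval) and the backend reprogram call follow the fragments in this listing; cyclic_downheap() and cyc_index_t are assumed from the file's conventions, and reprogram detection and late-cyclic catch-up are omitted.

    for (;;) {
            cyc_index_t ndx = cpu->cyp_heap[0];          /* root of the heap */
            cyclic_t *cyclic = &cpu->cyp_cyclics[ndx];
            hrtime_t exp = cyclic->cy_expire;

            if (exp > now)                               /* now == gethrtime() at interrupt time */
                    break;                               /* root expires in the future; stop */

            cyclic_expire(cpu, ndx, cyclic);             /* call or enqueue the handler */
            cyclic->cy_expire = exp + cyclic->cy_interval;
            cyclic_downheap(cpu, 0);                     /* sift the root back down */
    }

    be->cyb_reprogram(be->cyb_arg, cpu->cyp_cyclics[cpu->cyp_heap[0]].cy_expire);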
267 * the cyclic at cyp_cyclics[cyp_heap[number_of_elements]], incrementing
285 * To insert into this heap, we would just need to fill in the cyclic at
293 * because the cyclic does not keep a backpointer into the heap. This makes
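A correspondingly small sketch of insertion, assuming cyp_nelems as the element-count field and cyclic_upheap() as the sift-up helper (both assumed; only cyp_heap and cyp_cyclics appear in the fragments above):

    cyc_index_t nelems = cpu->cyp_nelems;
    cyc_index_t ndx = cpu->cyp_heap[nelems];     /* a free slot already lives here */

    cpu->cyp_cyclics[ndx].cy_expire = expire;    /* fill in the new cyclic */
    cpu->cyp_nelems = nelems + 1;
    cyclic_upheap(cpu, nelems);                  /* sift the new leaf upward */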
300 * CY_HIGH_LEVEL to expire a cyclic. Cyclic subsystem consumers are
301 * guaranteed that for an arbitrary time t in the future, their cyclic
303 * there must be a one-to-one mapping between a cyclic's expiration at
312 * CY_HIGH_LEVEL but greater than the level of a cyclic for a period of
313 * time longer than twice the cyclic's interval, the cyclic will be expired
317 * number of times a cyclic has been expired and the number of times it's
318 * been handled in a "pending count" (the cy_pend field of the cyclic
320 * expired cyclic and posts a soft interrupt at the desired level. In the
321 * cyclic subsystem's soft interrupt handler, cyclic_softint(), we repeatedly
322 * call the cyclic handler and decrement cy_pend until we have decremented
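The pend-count handshake described above, reduced to a sketch. The increment and softint post mirror the cyclic_expire() fragments later in this listing; the drain loop is simplified here, and the real decrement is a compare-and-swap loop, sketched further below.

    /* cyclic_expire(), at CY_HIGH_LEVEL: note the expiration, post a softint */
    if (cyclic->cy_pend++ == 0)
            be->cyb_softint(be->cyb_arg, cyclic->cy_level);  /* also enqueues the index */

    /* cyclic_softint(), at the cyclic's own level: drain the pend count */
    while (cyclic->cy_pend != 0) {
            (*cyclic->cy_handler)(cyclic->cy_arg);
            atomic_dec_32(&cyclic->cy_pend);     /* simplified; see the CAS loop below */
    }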
335 * The producer (cyclic_expire() running at CY_HIGH_LEVEL) enqueues a cyclic
336 * by storing the cyclic's index to cypc_buf[cypc_prodndx] and incrementing
338 * CY_LOCK_LEVEL or CY_LOW_LEVEL) dequeues a cyclic by loading from
343 * enqueues a cyclic if its cy_pend was zero (if the cyclic's cy_pend is
345 * cyclic_softint() only consumes a cyclic after it has decremented the
404 * When cyclic_softint() discovers a cyclic in the producer/consumer buffer,
405 * it calls the cyclic's handler and attempts to atomically decrement the
411 * - If the cy_pend was decremented to 0, the cyclic has been consumed;
416 * to be done on the cyclic; cyclic_softint() calls the cyclic handler
426 * having cyclic_expire() only enqueue the specified cyclic if its
427 * cy_pend count is zero; this assures that each cyclic is enqueued at
431 * cyclic. In part to obey this constraint, cyclic_softint() calls the
432 * cyclic handler before decrementing cy_pend.
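A sketch of the per-level producer/consumer buffer described above. cypc_buf and cypc_prodndx appear in the fragments; the consumer index and mask names (cypc_consndx, sizemask) are assumed, and the cy_pend interplay is elided.

    /* producer, CY_HIGH_LEVEL (cyclic_expire()), only when cy_pend was 0: */
    pc->cypc_buf[pc->cypc_prodndx++ & sizemask] = ndx;

    /* consumer, the cyclic's own level (cyclic_softint()): */
    while (pc->cypc_consndx != pc->cypc_prodndx) {
            cyc_index_t ndx = pc->cypc_buf[pc->cypc_consndx & sizemask];

            /* call the handler and drive cy_pend to zero for cyclic ndx */

            pc->cypc_consndx++;                  /* consume only after cy_pend reached 0 */
    }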
444 * on the CPU being resized, but should not affect cyclic operation on other
511 * the cyclic subsystem: after cyclic_remove() returns, the cyclic handler
514 * Here is the procedure for cyclic removal:
519 * 3. The current expiration time for the removed cyclic is recorded.
520 * 4. If the cy_pend count on the removed cyclic is non-zero, it
522 * 5. The cyclic is removed from the heap
529 * The cy_pend count is decremented in cyclic_softint() after the cyclic
534 * until cyclic_softint() has finished calling the cyclic handler. To let
535 * cyclic_softint() know that this cyclic has been removed, we zero the
538 * caught during a resize (see "Resizing", above) or that the cyclic has been
540 * cyclic handler cyp_rpend - 1 times, and posts on cyp_modify_wait.
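Step 4 of the procedure, as it appears in the removal x-call fragments later in this listing, reduces to handing the undelivered count to the soft interrupt handler:

    if (cyclic->cy_pend != 0) {
            cpu->cyp_rpend = cyclic->cy_pend;    /* softint will call the handler */
            cyclic->cy_pend = 0;                 /*   cyp_rpend - 1 more times    */
    }
    cyclic->cy_flags = CYF_FREE;                 /* the slot may now be reused */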
544 * At first glance, cyclic juggling seems to be a difficult problem. The
545 * subsystem must guarantee that a cyclic doesn't execute simultaneously on
546 * different CPUs, while also assuring that a cyclic fires exactly once
549 * multiple CPUs. Therefore, to juggle a cyclic, we remove it from its
551 * in "Removing", above). We then add the cyclic to the new CPU, explicitly
553 * leverages the existing cyclic expiry processing, which will compensate
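The essential trick, visible in the removal x-call fragments further down (around source line 1786), is that the time handed to the new CPU is the absolute expiration captured at removal, not a fresh interval measured from "now":

    when.cyt_when = cyclic->cy_expire;           /* absolute expiration, preserved */
    when.cyt_interval = cyclic->cy_interval;
    /* remove from the source CPU, then add on the destination using "when" */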
558 * Normally, after a cyclic fires, its next expiration is computed from
559 * the current time and the cyclic interval. But there are situations when
561 * is using the cyclic. cyclic_reprogram() allows this to be done. This,
562 * unlike the other kernel at-large cyclic API functions, is permitted to
563 * be called from the cyclic handler. This is because it does not use the
566 * When cyclic_reprogram() is called for an omni-cyclic, the operation is
567 * applied to the omni-cyclic's component on the current CPU.
569 * If a high-level cyclic handler reprograms its own cyclic, then
570 * cyclic_fire() detects that and does not recompute the cyclic's next
571 * expiration. However, for a lock-level or a low-level cyclic, the
572 * actual cyclic handler will execute at the lower PIL only after
576 * expiration to CY_INFINITY. This effectively moves the cyclic to the
579 * "one-shot" timers in the context of the cyclic subsystem without using
582 * Here is the procedure for cyclic reprogramming:
585 * that houses the cyclic.
587 * 3. The cyclic is located in the cyclic heap. The search for this is
590 * 4. The cyclic expiration is set and the cyclic is moved to its
593 * 5. If the cyclic move modified the root of the heap, the backend is
597 * the serialization used has to be efficient. As with all other cyclic
599 * during reprogramming, the cyclic must not be juggled (regular cyclic)
600 * or stopped (omni-cyclic). The implementation defines a per-cyclic
604 * an omni-cyclic is reprogrammed on different CPUs frequently.
607 * the responsibility of the user of the reprogrammable cyclic to make sure
608 * that the cyclic is not removed via cyclic_remove() during reprogramming.
610 * some sort of synchronization for its cyclic-related activities. This
611 * little caveat exists because the cyclic ID is not really an ID. It is
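A usage sketch of cyclic_reprogram() as a one-shot timer, per the CY_INFINITY discussion above. The handler name and the 50ms delay are hypothetical, and holding cpu_lock around cyclic_add() reflects the subsystem's usual caller's-context rule rather than anything shown in these fragments.

    cyc_handler_t hdlr;
    cyc_time_t when;
    cyclic_id_t id;

    hdlr.cyh_func = oneshot_handler;             /* hypothetical handler */
    hdlr.cyh_arg = NULL;
    hdlr.cyh_level = CY_LOW_LEVEL;

    when.cyt_when = CY_INFINITY;                 /* created disarmed */
    when.cyt_interval = CY_INFINITY;

    mutex_enter(&cpu_lock);
    id = cyclic_add(&hdlr, &when);
    mutex_exit(&cpu_lock);

    /* arm: fire once, 50ms from now */
    (void) cyclic_reprogram(id, gethrtime() + 50 * (NANOSEC / MILLISEC));

    /* inside oneshot_handler(), disarm again -- legal from the handler itself */
    (void) cyclic_reprogram(id, CY_INFINITY);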
668 panic("too many cyclic coverage points");
852 cyclic_expire(cyc_cpu_t *cpu, cyc_index_t ndx, cyclic_t *cyclic)
855 cyc_level_t level = cyclic->cy_level;
858 * If this is a CY_HIGH_LEVEL cyclic, just call the handler; we don't
862 cyc_func_t handler = cyclic->cy_handler;
863 void *arg = cyclic->cy_arg;
866 DTRACE_PROBE1(cyclic__start, cyclic_t *, cyclic);
870 DTRACE_PROBE1(cyclic__end, cyclic_t *, cyclic);
881 if (cyclic->cy_pend++ == 0) {
886 * We need to enqueue this cyclic in the soft buffer.
888 CYC_TRACE(cpu, CY_HIGH_LEVEL, "expire-enq", cyclic,
900 if (cyclic->cy_pend == 0) {
901 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "expire-wrap", cyclic);
902 cyclic->cy_pend = UINT32_MAX;
905 CYC_TRACE(cpu, CY_HIGH_LEVEL, "expire-bump", cyclic, 0);
908 be->cyb_softint(be->cyb_arg, cyclic->cy_level);
916 * cyclic_fire() is the cyclic subsystem's CY_HIGH_LEVEL interrupt handler.
917 * Called by the cyclic backend.
928 * of the cyclic subsystem does not rely on the timeliness of the backend.
946 cyclic_t *cyclic, *cyclics = cpu->cyp_cyclics;
965 cyclic = &cyclics[ndx];
967 ASSERT(!(cyclic->cy_flags & CYF_FREE));
969 CYC_TRACE(cpu, CY_HIGH_LEVEL, "fire-check", cyclic,
970 cyclic->cy_expire);
972 if ((exp = cyclic->cy_expire) > now)
975 cyclic_expire(cpu, ndx, cyclic);
978 * If the handler reprogrammed the cyclic, then don't
983 if (exp != cyclic->cy_expire) {
985 * If a hi level cyclic reprograms itself,
993 if (cyclic->cy_interval == CY_INFINITY)
996 exp += cyclic->cy_interval;
999 * If this cyclic will be set to next expire in the distant
1002 * a) This is the first firing of a cyclic which had
1005 * b) We are tragically late for a cyclic -- most likely
1019 hrtime_t interval = cyclic->cy_interval;
1027 cyclic->cy_expire = exp;
1032 * Now we have a cyclic in the root slot which isn't in the past;
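To make the "tragically late" case above concrete, one way to snap the next expiration to the first interval boundary after the current time (illustrative arithmetic; hypothetical numbers in the comment):

    if (exp < now)
            exp += ((now - exp) / interval + 1) * interval;

    /*
     * Example: interval = 10ms, stale exp = 100ms, now = 137ms.
     * (137 - 100) / 10 + 1 == 4, so exp becomes 140ms -- the first
     * interval boundary strictly after "now".
     */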
1039 cyclic_remove_pend(cyc_cpu_t *cpu, cyc_level_t level, cyclic_t *cyclic)
1041 cyc_func_t handler = cyclic->cy_handler;
1042 void *arg = cyclic->cy_arg;
1045 ASSERT(cyclic->cy_flags & CYF_FREE);
1046 ASSERT(cyclic->cy_pend == 0);
1050 CYC_TRACE(cpu, level, "remove-rpend", cyclic, cpu->cyp_rpend);
1058 DTRACE_PROBE1(cyclic__start, cyclic_t *, cyclic);
1062 DTRACE_PROBE1(cyclic__end, cyclic_t *, cyclic);
1077 * cyclic_softint() is the cyclic subsystem's CY_LOCK_LEVEL and CY_LOW_LEVEL
1078 * soft interrupt handler. Called by the cyclic backend.
1101 * at the mercy of its cyclic handlers. Because cyclic handlers may block
1117 * cpu_lock or any lock acquired by any cyclic handler or held across
1149 cyclic_t *cyclic = &cyclics[buf[consmasked]];
1150 cyc_func_t handler = cyclic->cy_handler;
1151 void *arg = cyclic->cy_arg;
1154 CYC_TRACE(cpu, level, "consuming", consndx, cyclic);
1157 * We have found this cyclic in the pcbuffer. We know that
1171 * to call the cyclic rpend times. We will take into
1179 DTRACE_PROBE1(cyclic__start, cyclic_t *, cyclic);
1183 DTRACE_PROBE1(cyclic__end, cyclic_t *, cyclic);
1186 pend = cyclic->cy_pend;
1192 * This cyclic has been removed while
1195 * found this cyclic in the pcbuffer).
1202 cyclic_remove_pend(cpu, level, cyclic);
1216 cyclic = &cyclics[buf[consmasked]];
1217 ASSERT(cyclic->cy_handler == handler);
1218 ASSERT(cyclic->cy_arg == arg);
1223 atomic_cas_32(&cyclic->cy_pend, pend, npend)) !=
1230 * pend count on this cyclic. In this
1238 * (c) The cyclic has been removed by an
1241 * CYS_REMOVING, and the cyclic will be
1254 (cyclic->cy_flags & CYF_FREE)))));
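Boiled down, the fragments above implement a handler-then-decrement loop: the handler always runs before cy_pend is decremented, and the decrement must be a compare-and-swap because cyclic_expire() may be incrementing cy_pend from CY_HIGH_LEVEL concurrently. A simplified sketch, with removal and resize handling omitted:

    uint32_t pend;

    do {
            (*handler)(arg);                     /* call the handler first */

            for (;;) {
                    pend = cyclic->cy_pend;
                    if (pend == 0)
                            break;               /* removed while we were in the handler */
                    if (atomic_cas_32(&cyclic->cy_pend, pend, pend - 1) == pend) {
                            pend--;              /* our decrement took effect */
                            break;
                    }
                    /* lost a race with cyclic_expire(); reload and retry */
            }
    } while (pend > 0);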
1354 * to CY_HIGH_LEVEL. This CPU already has a new heap, cyclic array,
1611 * (a) We have a partition-bound cyclic, and there is no CPU in
1617 * (b) We have a partition-unbound cyclic, in which case there
1657 cyclic_t *cyclic;
1678 cyclic = &cpu->cyp_cyclics[ndx];
1680 ASSERT(cyclic->cy_flags == CYF_FREE);
1681 cyclic->cy_interval = when->cyt_interval;
1688 cyclic->cy_expire = (gethrtime() / cyclic->cy_interval + 1) *
1689 cyclic->cy_interval;
1691 cyclic->cy_expire = when->cyt_when;
1694 cyclic->cy_handler = hdlr->cyh_func;
1695 cyclic->cy_arg = hdlr->cyh_arg;
1696 cyclic->cy_level = hdlr->cyh_level;
1697 cyclic->cy_flags = arg->cyx_flags;
1700 hrtime_t exp = cyclic->cy_expire;
1702 CYC_TRACE(cpu, CY_HIGH_LEVEL, "add-reprog", cyclic, exp);
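For clarity, a worked example of the cyt_when == 0 expiration computed just above (hypothetical clock reading):

    /*
     * gethrtime() == 12,345,678,901 ns, cyt_interval == 1,000,000,000 ns:
     *
     *      cy_expire = (12,345,678,901 / 1,000,000,000 + 1) * 1,000,000,000
     *                = 13 * 1,000,000,000
     *                = 13,000,000,000 ns
     *
     * i.e. the first firing lands on the next whole multiple of the
     * interval since boot, keeping subsequent firings phase-aligned.
     */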
1741 * actually add our cyclic.
1764 cyclic_t *cyclic;
1778 cyclic = &cpu->cyp_cyclics[ndx];
1781 * Grab the current expiration time. If this cyclic is being
1783 * will be used when the cyclic is added to the new CPU.
1786 arg->cyx_when->cyt_when = cyclic->cy_expire;
1787 arg->cyx_when->cyt_interval = cyclic->cy_interval;
1790 if (cyclic->cy_pend != 0) {
1792 * The pend is non-zero; this cyclic is currently being
1797 * that we have zeroed out pend, and will call the cyclic
1799 * softint has completed calling the cyclic handler.
1806 ASSERT(cyclic->cy_level != CY_HIGH_LEVEL);
1807 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "remove-pend", cyclic->cy_pend);
1808 cpu->cyp_rpend = cyclic->cy_pend;
1809 cyclic->cy_pend = 0;
1818 cyclic->cy_flags = CYF_FREE;
1826 panic("attempt to remove non-existent cyclic");
1883 cyclic = &cpu->cyp_cyclics[heap[0]];
1888 be->cyb_reprogram(bar, cyclic->cy_expire);
1898 cyclic_t *cyclic = &cpu->cyp_cyclics[ndx];
1899 cyc_level_t level = cyclic->cy_level;
1917 * If the cyclic we removed wasn't at CY_HIGH_LEVEL, then we need to
1919 * for all pending cyclic handlers to run.
1928 * remove this cyclic; put the CPU back in the CYS_ONLINE
1950 * If cyclic_reprogram() is called on the same CPU as the cyclic's CPU, then
1952 * an X-call to the cyclic's CPU.
1962 cyclic_t *cyclic;
1985 panic("attempt to reprogram non-existent cyclic");
1987 cyclic = &cpu->cyp_cyclics[ndx];
1988 oexpire = cyclic->cy_expire;
1989 cyclic->cy_expire = expire;
2005 cyclic = &cpu->cyp_cyclics[heap[0]];
2006 be->cyb_reprogram(bar, cyclic->cy_expire);
2038 * cyclic_juggle_one_to() should only be called when the source cyclic
2049 cyclic_t *cyclic;
2058 cyclic = &src->cyp_cyclics[ndx];
2060 flags = cyclic->cy_flags;
2063 hdlr.cyh_func = cyclic->cy_handler;
2064 hdlr.cyh_level = cyclic->cy_level;
2065 hdlr.cyh_arg = cyclic->cy_arg;
2070 * expansion before removing the cyclic. This is to prevent us
2071 * from blocking while a system-critical cyclic (notably, the clock
2072 * cyclic) isn't on a CPU.
2081 * Prevent a reprogram of this cyclic while we are relocating it.
2088 * Remove the cyclic from the source. As mentioned above, we cannot
2089 * block during this operation; if we cannot remove the cyclic
2094 * cyclic handler is blocked on a resource held by a thread which we
2114 if (delay > (cyclic->cy_interval >> 1))
2115 delay = cyclic->cy_interval >> 1;
2118 * Drop the RW lock to avoid a deadlock with the cyclic
2127 * Now add the cyclic to the destination. This won't block; we
2129 * CPU before removing the cyclic from the source CPU.
2136 * Now that we have successfully relocated the cyclic, allow
2147 cyclic_t *cyclic = &cpu->cyp_cyclics[ndx];
2155 ASSERT(!(cyclic->cy_flags & CYF_FREE));
2157 if ((dest = cyclic_pick_cpu(part, c, c, cyclic->cy_flags)) == NULL) {
2159 * Bad news: this cyclic can't be juggled.
2176 cyclic_t *cyclic = &cpu->cyp_cyclics[idp->cyi_ndx];
2181 ASSERT(!(cyclic->cy_flags & CYF_FREE));
2182 ASSERT(cyclic->cy_flags & CYF_CPU_BOUND);
2184 cyclic->cy_flags &= ~CYF_CPU_BOUND;
2196 (!res && (cyclic->cy_flags & CYF_PART_BOUND)));
2206 cyclic_t *cyclic = &cpu->cyp_cyclics[idp->cyi_ndx];
2216 ASSERT(!(cyclic->cy_flags & CYF_FREE));
2217 ASSERT(!(cyclic->cy_flags & CYF_CPU_BOUND));
2219 dest = cyclic_pick_cpu(part, d, NULL, cyclic->cy_flags | CYF_CPU_BOUND);
2223 cyclic = &dest->cyp_cyclics[idp->cyi_ndx];
2226 cyclic->cy_flags |= CYF_CPU_BOUND;
2246 * If we're on a CPU which has interrupts disabled (and if this cyclic
2326 * cyclic subsystem for this CPU is prepared to field interrupts.
2467 cyclic_t *cyclic = &cpu->cyp_cyclics[cpu->cyp_heap[0]];
2468 hrtime_t exp = cyclic->cy_expire;
2470 CYC_TRACE(cpu, CY_HIGH_LEVEL, "resume-reprog", cyclic, exp);
2531 * Prevent a reprogram of this cyclic while we are removing it.
2544 * CPU -- the definition of an omnipresent cyclic is that it runs
2556 * Remove the cyclic from the source. We cannot block during this
2558 * by the cyclic handler via cyclic_reprogram().
2560 * If we cannot remove the cyclic without waiting, we spin for a time,
2564 * succeed -- even if the cyclic handler is blocked on a resource
2587 * Drop the RW lock to avoid a deadlock with the cyclic
2596 * Now that we have successfully removed the cyclic, allow the omni
2597 * cyclic to be reprogrammed on other CPUs.
2602 * The cyclic has been removed from this CPU; time to call the
2622 * associated with the cyclic. If and only if this field is NULL, the
2623 * cyc_id_t is an omnipresent cyclic. Note that cyi_omni_list may be
2624 * NULL for an omnipresent cyclic while the cyclic is being created
2650 * cyclic_add() will create an unbound cyclic with the specified handler and
2651 * interval. The cyclic will run on a CPU which both has interrupts enabled
2660 * void *cyh_arg <-- Argument to cyclic handler
2680 * is set to 0, the cyclic will start to fire when cyt_interval next
2684 * _not_ explicitly supported by the cyclic subsystem (cyclic_add() will
2689 * For an arbitrary time t in the future, the cyclic handler is guaranteed
2693 * the cyclic handler may be called a finite number of times with an
2696 * The cyclic subsystem will not enforce any lower bound on the interval;
2702 * The cyclic handler is guaranteed to be single threaded, even while the
2703 * cyclic is being juggled between CPUs (see cyclic_juggle(), below).
2704 * That is, a given cyclic handler will never be executed simultaneously
2717 * apply. A cyclic may be added even in the presence of CPUs that have
2718 * not been configured with respect to the cyclic subsystem, but only
2719 * configured CPUs will be eligible to run the new cyclic.
2727 * A cyclic handler may not grab ANY locks held by the caller of any of
2729 * these functions may require blocking on cyclic handler completion.
2730 * Moreover, cyclic handlers may not make any call back into the cyclic
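A minimal consumer sketch pulled together from the argument descriptions above. The handler name and the one-second interval are hypothetical, and taking cpu_lock around cyclic_add()/cyclic_remove() is an assumption based on the subsystem's usual caller's-context requirements rather than on the fragments shown.

    static void
    my_tick(void *arg)
    {
            /* runs single-threaded at CY_LOW_LEVEL, roughly once per second */
    }

    /* in some initialization path: */
    cyc_handler_t hdlr;
    cyc_time_t when;
    cyclic_id_t id;

    hdlr.cyh_func = my_tick;
    hdlr.cyh_arg = NULL;
    hdlr.cyh_level = CY_LOW_LEVEL;

    when.cyt_when = 0;                           /* start on the next interval boundary */
    when.cyt_interval = NANOSEC;                 /* fire once per second */

    mutex_enter(&cpu_lock);
    id = cyclic_add(&hdlr, &when);
    mutex_exit(&cpu_lock);

    /* later, tear it down; this may block on handler completion */
    mutex_enter(&cpu_lock);
    cyclic_remove(id);
    mutex_exit(&cpu_lock);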
2752 * cyclic_add_omni() will create an omnipresent cyclic with the specified
2777 * The omni cyclic online handler is always called _before_ the omni
2778 * cyclic begins to fire on the specified CPU. As the above argument
2782 * allows the omni cyclic to have maximum flexibility; different CPUs may
2796 * by cyclic handlers. However, omni cyclic online handlers may _not_
2797 * call back into the cyclic subsystem, and should be generally careful
2807 * void * <-- CPU's cyclic argument (that is, value
2811 * The omni cyclic offline handler is always called _after_ the omni
2812 * cyclic has ceased firing on the specified CPU. Its purpose is to
2813 * allow cleanup of any resources dynamically allocated in the omni cyclic
2856 * this cyclic.
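A companion sketch for cyclic_add_omni(). The cyc_omni_handler_t structure, its cyo_online/cyo_offline/cyo_arg fields, and the handler signatures are assumed here (they do not appear in these fragments); the per-CPU handler is hypothetical.

    static void
    my_omni_online(void *arg, cpu_t *c, cyc_handler_t *hdlr, cyc_time_t *when)
    {
            hdlr->cyh_func = my_percpu_tick;     /* hypothetical per-CPU handler */
            hdlr->cyh_arg = c;                   /* becomes this CPU's cyclic argument */
            hdlr->cyh_level = CY_LOW_LEVEL;
            when->cyt_when = 0;
            when->cyt_interval = NANOSEC;
    }

    static void
    my_omni_offline(void *arg, cpu_t *c, void *carg)
    {
            /* carg is the cyh_arg chosen by the online handler; clean up here */
    }

    /* in some initialization path: */
    cyc_omni_handler_t omni;
    cyclic_id_t id;

    omni.cyo_online = my_omni_online;
    omni.cyo_offline = my_omni_offline;
    omni.cyo_arg = NULL;

    mutex_enter(&cpu_lock);
    id = cyclic_add_omni(&omni);
    mutex_exit(&cpu_lock);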
2869 * cyclic_remove() will remove the specified cyclic from the system.
2877 * removed cyclic handler has completed execution (this is the same
2879 * need to block, waiting for the removed cyclic to complete execution.
2881 * held across cyclic_remove() that also may be acquired by a cyclic
2892 * grabbed by any cyclic handler. See "Arguments and notes", above.
2932 * of a cyclic.
2941 * cyclic. If the specified cyclic is bound to a CPU other than the one
2942 * specified, it will be unbound from its bound CPU. Unbinding the cyclic
2944 * CPU is non-NULL, the cyclic will be subsequently rebound to the specified
2952 * attempts to bind a cyclic to an offline CPU, the cyclic subsystem will
2956 * specified cyclic. If the specified cyclic is bound to a CPU partition
2958 * partition. Unbinding the cyclic from its CPU partition may cause it
2960 * non-NULL, the cyclic will be subsequently rebound to the specified CPU
2964 * partition contains a CPU. If it does not, the cyclic subsystem will
2975 * cyclic subsystem will panic.
2978 * been configured with respect to the cyclic subsystem. Generally, this
2993 * grabbed by any cyclic handler.
3009 panic("attempt to change binding of omnipresent cyclic");
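A binding sketch per the description above. "id" is a cyclic created earlier with cyclic_add(); the three-argument cyclic_bind(id, cpu, partition) form and the global cpu[] array are assumed, as is the target CPU being online and configured with respect to the cyclic subsystem.

    mutex_enter(&cpu_lock);
    cyclic_bind(id, cpu[0], NULL);       /* bind to CPU 0; no partition binding */
    mutex_exit(&cpu_lock);

    mutex_enter(&cpu_lock);
    cyclic_bind(id, NULL, NULL);         /* unbind; the cyclic may now be juggled */
    mutex_exit(&cpu_lock);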
3062 * Prevent the cyclic from moving or disappearing while we reprogram.
3071 * For an omni cyclic, we reprogram the cyclic corresponding
3105 * Allow the cyclic to be moved or removed.
3137 * cyclic backend.
3144 * It is assumed that cyclic_mp_init() is called some time after cyclic
3182 * and there exists a P_ONLINE CPU in the partition. The cyclic subsystem
3183 * assures that a cyclic will never fire late or spuriously, even while
3196 * grabbed by any cyclic handler. While cyclic_juggle() _may_ be called
3199 * Failure to do so could result in an assertion failure in the cyclic
3213 * We'll go through each cyclic on the CPU, attempting to juggle
3236 * cyclic_offline() offlines the cyclic subsystem on the specified CPU.
3247 * and the cyclic subsystem on the CPU was successfully offlined.
3248 * cyclic_offline returns 0 if some cyclics remain, blocking the cyclic
3253 * on cyclic juggling.
3277 * cyclic firing on this CPU.
3369 * into the cyclic subsystem, no lock may be held which is also grabbed
3370 * by any cyclic handler.
3377 cyclic_t *cyclic;
3409 cyclic = &cpu->cyp_cyclics[idp->cyi_ndx];
3411 if (cyclic->cy_flags & CYF_CPU_BOUND)
3415 * We know that this cyclic is bound to its processor set
3419 ASSERT(cyclic->cy_flags & CYF_PART_BOUND);
3442 * a partition-bound cyclic which is CPU-bound to the specified CPU,
3463 * returns failure. As with other calls into the cyclic subsystem, no lock
3464 * may be held which is also grabbed by any cyclic handler.
3471 cyclic_t *cyclic, *cyclics = cpu->cyp_cyclics;
3486 cyclic = &cyclics[idp->cyi_ndx];
3488 if (!(cyclic->cy_flags & CYF_PART_BOUND))
3491 dest = cyclic_pick_cpu(part, c, c, cyclic->cy_flags);
3495 * We can't juggle this cyclic; we need to return
3514 * cyclic_suspend() suspends all cyclic activity throughout the cyclic
3521 * cyclic_suspend() takes no arguments. Each CPU with an active cyclic
3527 * cyclic handlers from being called after cyclic_suspend() returns: if a
3529 * of cyclic_suspend(), cyclic handlers at its level may continue to be
3549 * The cyclic subsystem must be configured on every valid CPU;
3553 * cyclic entry points, cyclic_suspend() may be called with locks held
3554 * which are also acquired by CY_LOCK_LEVEL or CY_LOW_LEVEL cyclic
3582 * cyclic_resume() resumes all cyclic activity throughout the cyclic
3587 * cyclic_resume() takes no arguments. Each CPU with an active cyclic
3602 * The cyclic subsystem must be configured on every valid CPU;
3606 * cyclic entry points, cyclic_resume() may be called with locks held which
3607 * are also acquired by CY_LOCK_LEVEL or CY_LOW_LEVEL cyclic handlers.
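Finally, a sketch of the suspend/resume pairing, e.g. around a system suspend path. Holding cpu_lock is an assumption about the caller's context, not something stated in the fragments above.

    mutex_enter(&cpu_lock);
    cyclic_suspend();            /* no new firing on any CPU (pending low/lock-level
                                  * handlers may still run, per the notes above) */
    mutex_exit(&cpu_lock);

    /* save machine state / sleep */

    mutex_enter(&cpu_lock);
    cyclic_resume();             /* cyclic firing resumes on every CPU */
    mutex_exit(&cpu_lock);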