Lines Matching defs:be

40 * absolute time. As a result, these parts cannot typically be reprogrammed
48 * present a time-based interrupt source which can be reprogrammed arbitrarily
63 * can be specified to fire at high, lock or low interrupt level, and may be
65 * partition binding may be changed dynamically; the cyclic will be "juggled"
67 * be specified to be "omnipresent", denoting firing on all online CPUs.
146 * the array will be doubled. The array will never shrink. Cyclics are
165 * (guaranteed to be the earliest in the heap) is then communicated to the
197 * The heap array could be:
236 * then all siblings are guaranteed to be on the same cache line. Thus, the
257 * Heaps must always be full, balanced trees. Heap management must therefore
269 * of decrementing the number of elements, swapping the to-be-deleted element
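The matched lines above come from the comment describing the expiration heap: an array of cyclic indices maintained as a full, balanced binary heap that is doubled when it fills, never shrinks, and handles deletion by swapping the doomed element with the last element and re-heapifying. For reference, a minimal sketch of textbook heap index arithmetic is below; the macro names are illustrative (not the cyclic.c macros), and the real layout is additionally arranged so that siblings share a cache line.

    /*
     * Illustrative 0-based binary-heap index arithmetic (root at index 0).
     * These are not the cyclic.c macros; the actual implementation lays
     * the heap out so that sibling pairs land on the same cache line.
     */
    #define HEAP_PARENT(ndx)    (((ndx) - 1) / 2)
    #define HEAP_LEFT(ndx)      (2 * (ndx) + 1)
    #define HEAP_RIGHT(ndx)     (2 * (ndx) + 2)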
303 * there must be a one-to-one mapping between a cyclic's expiration at
313 * time longer than twice the cyclic's interval, the cyclic will be expired
314 * twice before it can be handled.
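These lines describe the one-to-one mapping between expirations and handler invocations: when handling lags, each missed expiration is charged as one pending invocation and the expiration time is advanced by exactly one interval, so a cyclic that goes unhandled for more than twice its interval is expired twice before it runs. A small illustrative model of that bookkeeping (an assumption-laden sketch, not the cyclic.c code):

    #include <sys/types.h>
    #include <sys/time.h>

    /*
     * Illustrative model only: each expiration found to be in the past
     * owes one handler invocation; the expiration advances by exactly
     * one interval so the cyclic stays phase-aligned with its cyt_when.
     */
    static uint32_t
    charge_expirations(hrtime_t *expirep, hrtime_t interval, hrtime_t now)
    {
            uint32_t pend = 0;

            while (*expirep <= now) {
                    pend++;                 /* one invocation owed */
                    *expirep += interval;
            }

            return (pend);
    }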
328 * level, cyclic_softint() must be able to quickly determine which cyclics
374 * this producer/consumer buffer; it would be enqueued in the CY_LOCK_LEVEL
395 * and cyclic_softint() code paths to be lock-free.
416 * to be done on the cyclic; cyclic_softint() calls the cyclic handler
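The lines here refer to the per-level producer/consumer buffers: cyclic_fire(), running at CY_HIGH_LEVEL, enqueues expired cyclic indices; cyclic_softint(), running at CY_LOCK_LEVEL or CY_LOW_LEVEL on the same CPU, dequeues them and calls the handlers, with both paths lock-free. A minimal single-producer/single-consumer ring in that spirit follows; the type and field names are illustrative, not the cyclic.c definitions, and the ring is assumed to be sized so the producer can never lap the consumer.

    #include <sys/types.h>

    /*
     * Illustrative SPSC ring: the producer (high-level interrupt) and the
     * consumer (its soft level) run on the same CPU, each index has
     * exactly one writer, and the ring is a power of two at least as
     * large as the number of cyclics, so no locking or overflow check is
     * needed.
     */
    typedef struct pcbuf {
            uint32_t pc_prodndx;    /* advanced only by the producer */
            uint32_t pc_consndx;    /* advanced only by the consumer */
            uint32_t pc_sizemask;   /* ring size (power of two) minus one */
            uint32_t *pc_buf;       /* ring of cyclic indices */
    } pcbuf_t;

    static void
    pc_produce(pcbuf_t *pc, uint32_t cyclic_ndx)
    {
            pc->pc_buf[pc->pc_prodndx++ & pc->pc_sizemask] = cyclic_ndx;
    }

    static int
    pc_consume(pcbuf_t *pc, uint32_t *ndxp)
    {
            if (pc->pc_consndx == pc->pc_prodndx)
                    return (0);     /* nothing pending */

            *ndxp = pc->pc_buf[pc->pc_consndx++ & pc->pc_sizemask];
            return (1);
    }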
443 * with concurrent resizes. Resizes should be rare; they may induce jitter
445 * CPUs. Pending cyclics may not be dropped during a resize operation.
447 * Three key cyc_cpu data structures need to be resized: the cyclics array,
482 * soft interrupt will be generated for the remaining level.
489 * consumer buffers) can be freed.
506 * Cyclic removals should be rare. To simplify the implementation (and to
512 * has returned and will never again be called.
544 * At first glance, cyclic juggling seems to be a difficult problem. The
560 * the next expiration needs to be reprogrammed by the kernel subsystem that
561 * is using the cyclic. cyclic_reprogram() allows this to be done. This,
563 * be called from the cyclic handler. This is because it does not use the
574 * cyclics can be specified with a special interval of CY_INFINITY (INT64_MAX).
589 * would be located closer to the bottom than the top.
596 * Reprogramming can be a frequent event (see the callout subsystem). So,
597 * the serialization used has to be efficient. As with all other cyclic
599 * during reprogramming, the cyclic must not be juggled (regular cyclic)
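The lines above cover cyclic_reprogram(): it is the one entry point that may be called from the cyclic handler itself (it does not use cpu_lock), and together with the special CY_INFINITY interval it is the documented way to build one-shot timers. A hedged consumer sketch, assuming a cyclic id ("oneshot_id", an illustrative name) that was created earlier by cyclic_add() with an interval of CY_INFINITY:

    #include <sys/cyclic.h>
    #include <sys/time.h>

    static cyclic_id_t oneshot_id;  /* created elsewhere with CY_INFINITY */

    /*
     * Because cyclic_reprogram() may be called from handler context, a
     * one-shot handler can re-arm itself for the next firing. The return
     * value (whether the reprogram took effect) is ignored here for
     * brevity.
     */
    static void
    oneshot_fire(void *arg)
    {
            /* ... do the one-shot work ... */

            (void) cyclic_reprogram(oneshot_id, gethrtime() + NANOSEC);
    }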
854 cyc_backend_t *be = cpu->cyp_backend;
878 * be atomic (the high interrupt level assures that it will appear
898 * UINT32_MAX. Yes, cyclics can be lost in this case.
908 be->cyb_softint(be->cyb_arg, cyclic->cy_level);
924 * cyclic_fire() may be called spuriously without ill effect. Optimal
938 * cyclic_fire() must be called from CY_HIGH_LEVEL interrupt context.
944 cyc_backend_t *be = cpu->cyp_backend;
947 void *arg = be->cyb_arg;
981 * be used to create one-shot timers.
999 * If this cyclic will be set to next expire in the distant
1008 * In either case, we set the new expiration time to be the
1012 * We arbitrarily define "distant" to be one second (one second
1035 be->cyb_reprogram(arg, exp);
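These lines describe the "distant expiration" heuristic used when reprogramming the backend: if the earliest expiration is more than one second away (for example, a CY_INFINITY one-shot that has not yet been armed), the interrupt source is programmed for one second from now rather than the far-off time. A sketch of that clamp under the stated one-second definition of "distant"; the function is illustrative, not the cyclic.c code:

    #include <sys/time.h>

    /*
     * Illustrative clamp: never ask the backend to fire more than one
     * second out, even if the earliest cyclic expires much later.
     */
    static hrtime_t
    clamp_expire(hrtime_t exp, hrtime_t now)
    {
            if (exp - now > NANOSEC)        /* "distant" == more than 1 sec */
                    exp = now + NANOSEC;

            return (exp);
    }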
1085 * be one of CY_LOCK_LEVEL or CY_LOW_LEVEL.
1105 * cyclic_softint() may be called spuriously without ill effect.
1113 * The caller must be executing in soft interrupt context at either
1196 * There must be a non-zero rpend for
1197 * this CPU, and there must be a remove
1240 * pend will be 0, the cyp_state will be
1241 * CYS_REMOVING, and the cyclic will be
1330 cyc_backend_t *be = cpu->cyp_backend;
1333 be->cyb_softint(be->cyb_arg, level - 1);
1343 cyc_backend_t *be = cpu->cyp_backend;
1344 cyb_arg_t bar = be->cyb_arg;
1359 cookie = be->cyb_set_level(bar, CY_HIGH_LEVEL);
1383 * Set up the free list, and set all of the new cyclics to be CYF_FREE.
1429 be->cyb_softint(bar, CY_HIGH_LEVEL - 1);
1430 be->cyb_restore_level(bar, cookie);
1444 cyc_backend_t *be = cpu->cyp_backend;
1497 be->cyb_xcall(be->cyb_arg, cpu->cyp_cpu,
1544 * If CYF_CPU_BOUND is set in flags, the specified CPU must be non-NULL.
1545 * If CYF_PART_BOUND is set in flags, the specified partition must be non-NULL.
1547 * be in the specified partition.
1614 * If not, the avoid CPU must be the only non-CYS_OFFLINE
1618 * must only be one CPU CPU_ENABLE'd, and it must be the one
1653 cyc_backend_t *be = cpu->cyp_backend;
1656 cyb_arg_t bar = be->cyb_arg;
1661 cookie = be->cyb_set_level(bar, CY_HIGH_LEVEL);
1674 be->cyb_enable(bar);
1708 be->cyb_reprogram(bar, exp);
1710 be->cyb_restore_level(bar, cookie);
1719 cyc_backend_t *be = cpu->cyp_backend;
1720 cyb_arg_t bar = be->cyb_arg;
1739 * By now, we know that we're going to be able to successfully
1748 be->cyb_xcall(bar, cpu->cyp_cpu, (cyc_func_t)cyclic_add_xcall, &arg);
1759 cyc_backend_t *be = cpu->cyp_backend;
1760 cyb_arg_t bar = be->cyb_arg;
1771 cookie = be->cyb_set_level(bar, CY_HIGH_LEVEL);
1783 * will be used when the cyclic is added to the new CPU.
1793 * executed (or will be executed shortly). If the caller
1836 be->cyb_disable(bar);
1888 be->cyb_reprogram(bar, cyclic->cy_expire);
1890 be->cyb_restore_level(bar, cookie);
1896 cyc_backend_t *be = cpu->cyp_backend;
1913 be->cyb_xcall(be->cyb_arg, cpu->cyp_cpu,
1957 cyc_backend_t *be = cpu->cyp_backend;
1958 cyb_arg_t bar = be->cyb_arg;
1966 cookie = be->cyb_set_level(bar, CY_HIGH_LEVEL);
2006 be->cyb_reprogram(bar, cyclic->cy_expire);
2009 be->cyb_restore_level(bar, cookie);
2022 cyc_backend_t *be = cpu->cyp_backend;
2033 be->cyb_xcall(be->cyb_arg, cpu->cyp_cpu,
2038 * cyclic_juggle_one_to() should only be called when the source cyclic
2039 * can be juggled and the destination CPU is known to be able to accept
2137 * it to be reprogrammed.
2159 * Bad news: this cyclic can't be juggled.
2324 * On platforms where stray interrupts may be taken during startup,
2337 cyc_backend_t *be = cpu->cyp_backend;
2338 cyb_arg_t bar = be->cyb_arg;
2350 be->cyb_unconfigure(bar);
2351 kmem_free(be, sizeof (cyc_backend_t));
2420 cyc_backend_t *be = cpu->cyp_backend;
2422 cyb_arg_t bar = be->cyb_arg;
2424 cookie = be->cyb_set_level(bar, CY_HIGH_LEVEL);
2431 * elements (cpu_lock assures that no one else may be attempting
2436 be->cyb_disable(bar);
2442 be->cyb_suspend(bar);
2443 be->cyb_restore_level(bar, cookie);
2450 cyc_backend_t *be = cpu->cyp_backend;
2452 cyb_arg_t bar = be->cyb_arg;
2455 cookie = be->cyb_set_level(bar, CY_HIGH_LEVEL);
2460 be->cyb_resume(bar);
2472 be->cyb_enable(bar);
2473 be->cyb_reprogram(bar, exp);
2480 be->cyb_restore_level(bar, cookie);
2557 * operation because we are holding the cyi_lock which can be held
2597 * cyclic to be reprogrammed on other CPUs.
2623 * cyc_id_t is an omnipresent cyclic. Note that cyi_omni_list may be
2661 * cyc_level_t cyh_level <-- Level at which to fire; must be one of
2664 * Note that cyh_level is _not_ an ipl or spl; it must be one of the
2683 * The cyt_interval field _must_ be filled in by the caller; one-shots are
2687 * cyt_when + cyt_interval <= INT64_MAX. Neither field may be negative.
2691 * be true even if interrupts have been disabled for periods greater than
2693 * the cyclic handler may be called a finite number of times with an
2702 * The cyclic handler is guaranteed to be single threaded, even while the
2704 * That is, a given cyclic handler will never be executed simultaneously
2709 * cyclic_add() returns a cyclic_id_t, which is guaranteed to be a value
2714 * cpu_lock must be held by the caller, and the caller must not be in
2716 * memory allocation, so the usual rules (e.g. p_lock cannot be held)
2717 * apply. A cyclic may be added even in the presence of CPUs that have
2719 * configured CPUs will be eligible to run the new cyclic.
2723 * Cyclic handlers will be executed in the interrupt context corresponding
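The lines above summarize the cyclic_add() contract: cyh_level must be one of CY_HIGH_LEVEL, CY_LOCK_LEVEL or CY_LOW_LEVEL; cyt_interval must be filled in; the handler is guaranteed single-threaded; and cpu_lock must be held by a caller not in interrupt context. A minimal consumer sketch under those rules; the "foo" names and the 10 millisecond period are illustrative:

    #include <sys/cyclic.h>
    #include <sys/cpuvar.h>
    #include <sys/mutex.h>
    #include <sys/time.h>

    static cyclic_id_t foo_cyclic = CYC_NONE;

    static void
    foo_tick(void *arg)
    {
            /* runs single-threaded in CY_LOW_LEVEL soft interrupt context */
    }

    static void
    foo_start(void)
    {
            cyc_handler_t hdlr;
            cyc_time_t when;

            hdlr.cyh_func = foo_tick;
            hdlr.cyh_arg = NULL;
            hdlr.cyh_level = CY_LOW_LEVEL;

            when.cyt_interval = 10 * (NANOSEC / MILLISEC);          /* 10ms */
            when.cyt_when = gethrtime() + when.cyt_interval;        /* first firing */

            mutex_enter(&cpu_lock);         /* cpu_lock must be held */
            foo_cyclic = cyclic_add(&hdlr, &when);
            mutex_exit(&cpu_lock);
    }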
2763 * void *cyo_arg <-- Argument to be passed to on/offline handlers
2771 * cpu_t * <-- Pointer to CPU about to be onlined
2772 * cyc_handler_t * <-- Pointer to cyc_handler_t; must be filled in
2774 * cyc_time_t * <-- Pointer to cyc_time_t; must be filled in by
2786 * (b) be explicitly in or out of phase with one another
2797 * call back into the cyclic subsystem, and should be generally careful
2806 * cpu_t * <-- Pointer to CPU about to be offlined
2817 * The offline handler is optional; it may be NULL.
2821 * cyclic_add_omni() returns a cyclic_id_t, which is guaranteed to be a
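These lines describe cyclic_add_omni() and the cyc_omni_handler_t contract: the online handler is invoked for each online (and subsequently onlined) CPU and must fill in that CPU's cyc_handler_t and cyc_time_t; the optional offline handler tears down per-CPU state. A hedged sketch; the "bar" names and the one-second per-CPU interval are illustrative:

    #include <sys/cyclic.h>
    #include <sys/cpuvar.h>
    #include <sys/time.h>

    static cyclic_id_t bar_id = CYC_NONE;

    static void
    bar_percpu_tick(void *arg)
    {
            /* arg is the cpu_t * established by bar_online() below */
    }

    static void
    bar_online(void *arg, cpu_t *c, cyc_handler_t *hdlr, cyc_time_t *when)
    {
            hdlr->cyh_func = bar_percpu_tick;
            hdlr->cyh_arg = c;
            hdlr->cyh_level = CY_LOW_LEVEL;

            when->cyt_when = gethrtime();
            when->cyt_interval = NANOSEC;   /* once per second on each CPU */
    }

    static void
    bar_offline(void *arg, cpu_t *c, void *oarg)
    {
            /* release any per-CPU state associated with oarg */
    }

    static void
    bar_start(void)
    {
            cyc_omni_handler_t omni;

            omni.cyo_online = bar_online;
            omni.cyo_offline = bar_offline;         /* may be NULL */
            omni.cyo_arg = NULL;

            mutex_enter(&cpu_lock);
            bar_id = cyclic_add_omni(&omni);
            mutex_exit(&cpu_lock);
    }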
2880 * This leads to an important constraint on the caller: no lock may be
2881 * held across cyclic_remove() that also may be acquired by a cyclic
2890 * cpu_lock must be held by the caller, and the caller must not be in
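The constraint stated here is the important one for teardown: cyclic_remove() waits for any pending handler invocations to finish, so no lock that the handler acquires may be held across the call, and cpu_lock must be held. A sketch matching the foo_start() example above:

    static void
    foo_stop(void)
    {
            /*
             * Nothing that foo_tick() acquires may be held here:
             * cyclic_remove() blocks until pending invocations complete.
             */
            mutex_enter(&cpu_lock);
            cyclic_remove(foo_cyclic);
            mutex_exit(&cpu_lock);

            foo_cyclic = CYC_NONE;
    }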
2937 * cyclic_bind() may _not_ be called on a cyclic_id returned from
2942 * specified, it will be unbound from its bound CPU. Unbinding the cyclic
2943 * from its CPU may cause it to be juggled to another CPU. If the specified
2944 * CPU is non-NULL, the cyclic will be subsequently rebound to the specified
2948 * only cyclics not bound to the CPU can be juggled away; CPU-bound cyclics
2950 * cannot be offlined (attempts to offline the CPU will return EBUSY).
2951 * Likewise, cyclics may not be bound to an offline CPU; if the caller
2957 * other than the one specified, it will be unbound from its bound
2959 * to be juggled to another CPU. If the specified CPU partition is
2960 * non-NULL, the cyclic will be subsequently rebound to the specified CPU
2965 * panic. A CPU partition with bound cyclics cannot be destroyed (attempts
2968 * bound to the CPU's partition (but not bound to the CPU) will be juggled
2980 * which this may not be true are during MP boot (i.e. after cyclic_init()
2982 * reconfiguration; cyclic_bind() should only be called with great care
2991 * cpu_lock must be held by the caller, and the caller must not be in
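These lines cover cyclic_bind(): it may not be used on an id returned by cyclic_add_omni(); a NULL CPU (or partition) unbinds the cyclic, possibly juggling it elsewhere, while a non-NULL one rebinds it; and cpu_lock must be held. A small sketch, reusing the illustrative foo_cyclic id from the cyclic_add() example:

    static void
    foo_bind(cpu_t *cp)
    {
            mutex_enter(&cpu_lock);                 /* required for cyclic_bind() */
            cyclic_bind(foo_cyclic, cp, NULL);      /* bind to cp; no partition binding */
            mutex_exit(&cpu_lock);
    }

    static void
    foo_unbind(void)
    {
            mutex_enter(&cpu_lock);
            cyclic_bind(foo_cyclic, NULL, NULL);    /* may juggle to another CPU */
            mutex_exit(&cpu_lock);
    }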
3105 * Allow the cyclic to be moved or removed.
3121 cyclic_init(cyc_backend_t *be, hrtime_t resolution)
3125 CYC_PTRACE("init", be, resolution);
3130 * be done before the CPU can be configured.
3132 bcopy(be, &cyclic_backend, sizeof (cyc_backend_t));
3174 * specified CPU; all remaining cyclics on the CPU will either be CPU-
3180 * should be juggled. CPU-bound cyclics are never juggled; partition-bound
3189 * be juggled away from the CPU, and zero if one or more cyclics could
3190 * not be juggled away.
3194 * cpu_lock must be held by the caller, and the caller must not be in
3196 * grabbed by any cyclic handler. While cyclic_juggle() _may_ be called
3197 * in any context satisfying these constraints, it _must_ be called
3249 * offline operation. All remaining cyclics on the CPU will either be
3257 * The only caller of cyclic_offline() should be the processor management
3284 * We cannot possibly be offlining the last CPU; cyi_omni_list
3285 * must be non-NULL.
3308 * cyclic_online() returns, the specified CPU will be eligible to execute
3317 * cyclic_online() should only be called by the processor management
3318 * subsystem; cpu_lock must be held.
3367 * cyclic_move_in() should _only_ be called immediately after a CPU has
3369 * into the cyclic subsystem, no lock may be held which is also grabbed
3416 * (otherwise, it would not be on a CPU with interrupts
3457 * cyclic_move_out() should _only_ be called immediately before a CPU has
3464 * may be held which is also grabbed by any cyclic handler.
3515 * subsystem. It should be called only by subsystems which are attempting
3529 * of cyclic_suspend(), cyclic handlers at its level may continue to be
3540 * That is, every timestamp obtained before cyclic_suspend() will be less
3549 * The cyclic subsystem must be configured on every valid CPU;
3550 * cyclic_suspend() may not be called during boot or during dynamic
3551 * reconfiguration. Additionally, cpu_lock must be held, and the caller
3552 * cannot be in high-level interrupt context. However, unlike most other
3553 * cyclic entry points, cyclic_suspend() may be called with locks held
3563 cyc_backend_t *be;
3571 be = cpu->cyp_backend;
3574 be->cyb_xcall(be->cyb_arg, c,
3583 * subsystem. It should be called only by system-suspending subsystems.
3593 * That is, every timestamp obtained before cyclic_suspend() will be less
3602 * The cyclic subsystem must be configured on every valid CPU;
3603 * cyclic_resume() may not be called during boot or during dynamic
3604 * reconfiguration. Additionally, cpu_lock must be held, and the caller
3605 * cannot be in high-level interrupt context. However, unlike most other
3606 * cyclic entry points, cyclic_resume() may be called with locks held which
3615 cyc_backend_t *be;
3624 be = cpu->cyp_backend;
3627 be->cyb_xcall(be->cyb_arg, c,
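The final group of lines describes cyclic_suspend() and cyclic_resume(), intended for system-suspending subsystems: both require cpu_lock, neither may be called from high-level interrupt context, and, unusually for cyclic entry points, they may be called with locks held that cyclic handlers also acquire. A sketch of the balanced pairing; the "pm_" name is illustrative:

    #include <sys/cyclic.h>
    #include <sys/cpuvar.h>

    static void
    pm_quiesce_and_restore_cyclics(void)
    {
            mutex_enter(&cpu_lock);

            cyclic_suspend();       /* no new firings from the backend source */

            /* ... save timer hardware state, suspend, resume, restore ... */

            cyclic_resume();

            mutex_exit(&cpu_lock);
    }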