Lines Matching refs:the

4  * The contents of this file are subject to the terms of the
5 * Common Development and Distribution License (the "License").
6 * You may not use this file except in compliance with the License.
8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
10 * See the License for the specific language governing permissions
11 * and limitations under the License.
14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 * If applicable, add the following below this CDDL HEADER, with the
67 ! Grab the first (list head) intr_vec_t off the intr_head[pil]
69 ! intr_head[pil] to next intr_vec_t on the list and clear softint
75 sll %g4, CPTRSHIFT, %g5 ! %g5 = offset to the pil entry
126 ! clear the iv_pending flag for this interrupt request
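
The matched comments around lines 67-126 describe pulling the list-head intr_vec_t off intr_head[pil] and clearing its iv_pending flag. A minimal user-space sketch of that dequeue step is below; the structure layout, the iv_pil_next link, and the intr_head/intr_tail arrays are simplified assumptions patterned on the comments, not the exact kernel definitions.

#include <stddef.h>

#define PIL_LEVELS 16                 /* assumed number of PILs */

typedef struct intr_vec {
    struct intr_vec *iv_pil_next;     /* next request pending at this PIL (assumed name) */
    unsigned int     iv_pending;      /* nonzero while queued */
    unsigned int     iv_pil;          /* priority level of this request */
} intr_vec_t;

/* Simplified per-CPU softint lists, one head/tail pair per PIL. */
intr_vec_t *intr_head[PIL_LEVELS];
intr_vec_t *intr_tail[PIL_LEVELS];

/*
 * Take the list-head intr_vec_t off intr_head[pil], advance the head to
 * the next request on the list, and clear the pending flag, mirroring
 * the steps the assembly comments describe.
 */
intr_vec_t *
softint_dequeue(unsigned int pil)
{
    intr_vec_t *iv = intr_head[pil];

    if (iv == NULL)
        return (NULL);

    intr_head[pil] = iv->iv_pil_next;     /* advance the list head */
    if (intr_head[pil] == NULL)
        intr_tail[pil] = NULL;            /* list is now empty */

    iv->iv_pil_next = NULL;
    iv->iv_pending = 0;                   /* clear the iv_pending flag */
    return (iv);
}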
171 * SERVE_INTR_PRE is called once, just before the first invocation
191 * After calling SERVE_INTR, the caller must check if os3 is set. If
195 * Before calling SERVE_INTR_NEXT, the caller may perform accounting
197 * handler. However, the values of ls1 and os3 *must* be preserved and
202 * ls1 - the pil just processed
203 * ls2 - the pointer to intr_vec_t (iv) just processed
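
Lines 171-203 spell out the calling contract for the SERVE_INTR_PRE / SERVE_INTR / SERVE_INTR_NEXT assembler macros: set up once, serve a request, check the "more pending" output, do any accounting without clobbering ls1/os3, and loop. The toy C model below only exercises that loop shape; serve_state_t and the stub functions are hypothetical stand-ins for the macro register outputs, not kernel interfaces.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the state the macros hand back in registers. */
typedef struct serve_state {
    unsigned int pil;      /* ls1: the pil just processed */
    int          iv;       /* ls2: stands in for the intr_vec_t just processed */
    bool         more;     /* os3: set when another request is pending */
    int          pending;  /* toy queue: number of requests still queued */
} serve_state_t;

static void serve_intr_pre(serve_state_t *st)  { (void)st; /* one-time setup */ }
static void serve_intr(serve_state_t *st)      { st->iv = st->pending--; st->more = (st->pending > 0); }
static void serve_intr_next(serve_state_t *st) { (void)st; /* latch the next request */ }

int
main(void)
{
    serve_state_t st = { .pil = 9, .pending = 3 };

    /*
     * The caller's obligation from the comments: SERVE_INTR_PRE once,
     * then after every SERVE_INTR check os3 ("more"); accounting may run
     * before SERVE_INTR_NEXT, but ls1 and os3 must be preserved across it.
     */
    serve_intr_pre(&st);
    for (;;) {
        serve_intr(&st);
        printf("served request %d at pil %u\n", st.iv, st.pil);
        if (!st.more)
            break;                   /* no more requests at this pil */
        serve_intr_next(&st);
    }
    return (0);
}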
390 ! Only account for the time slice if the starting timestamp is non-zero.
395 ! will account for the interrupted thread's time slice, but
397 ! for the time slice, we want to "atomically" load the thread's
398 ! starting timestamp, calculate the interval with %tick, and zero
400 ! To do this, we do a casx on the t_intr_start field, and store 0 to it.
401 ! If it has changed since we loaded it above, we need to re-compute the
404 ! and the %tick val in %o4 had become stale.
408 ! If %l2 == %o3, our casx was successful. If not, the starting timestamp
409 ! changed between loading it (after label 0b) and computing the
420 ! We now know that a valid interval for the interrupted interrupt
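
The comments around lines 390-420 describe claiming the interrupted thread's time slice "atomically": load t_intr_start, compute the interval against %tick, store 0 back with casx, and recompute if the field changed in between. A user-space sketch of the same retry pattern, with C11 atomics standing in for casx and a monotonic clock standing in for %tick, might look like:

#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

/* Stand-in for reading %tick; any monotonic counter works for the sketch. */
uint64_t
read_tick(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ((uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec);
}

/*
 * Claim the interval since *t_intr_start and zero the field, exactly once,
 * even if an interrupt updates t_intr_start between our load and our store.
 * Mirrors the casx retry described in the comments: if the compare-and-swap
 * fails, the loaded timestamp (and the tick value) are stale, so recompute.
 */
uint64_t
claim_interval(_Atomic uint64_t *t_intr_start)
{
    uint64_t start, now;

    do {
        start = atomic_load(t_intr_start);
        if (start == 0)
            return (0);            /* nothing to account for */
        now = read_tick();
    } while (!atomic_compare_exchange_strong(t_intr_start, &start, 0));

    return (now - start);          /* a valid interval for the interrupted thread */
}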
447 ! for each level on the CPU.
449 ! Note that the code in kcpc_overflow_intr -relies- on the ordering
450 ! of events here -- in particular that t->t_lwp of the interrupt thread
451 ! is set to the pinned thread *before* curthread is changed.
482 ! Consider the new thread part of the same LWP so that
483 ! window overflow code can find the PCB.
488 ! Threads on the interrupt thread free list could have state already
490 ! Could eliminate the next two instructions with a little work.
496 ! Set the new thread as the current one.
497 ! Set interrupted thread's T_SP because if it is the idle thread,
524 ! the timestamp, try again.
537 ! Tracing is enabled - write the trace entry.
549 ! call the handler
561 ! Note: %l1 is the pil level we're processing, but we may have a
588 ! The general outline of what the code here does is:
589 ! 1. load t_intr_start, %tick, and calculate the delta
596 ! is to load t_intr_start and the last is to use casx to store the new
615 ! means a high-level interrupt can arrive and update the same stats
647 ! we've crossed the threshold and we should unpin the pinned threads
648 ! by preempt()ing ourselves, which will bubble up the t_intr chain
649 ! until hitting the non-interrupt thread, which will then in turn
650 ! preempt itself, allowing the interrupt processing to resume. Finally,
651 ! the scheduler takes over and picks the next thread to run.
653 ! If our CPU is quiesced, we cannot preempt because the idle thread
654 ! won't ever re-enter the scheduler, and the interrupt will be forever
659 ! This ensures we enter the scheduler if a higher-priority thread
671 cmp %o5, INTRCNT_LIMIT ! have we hit the limit?
676 ! We've reached the limit. Set cpu_intrcnt and cpu_kprunrun, and do
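
Lines 647-676 describe the guard against interrupt handling starving the pinned threads: once the per-CPU count crosses INTRCNT_LIMIT, request a kernel preemption so the t_intr chain unwinds, unless the CPU is quiesced, in which case its idle thread would never re-enter the scheduler. A simplified sketch of that check follows; the toy_cpu_t layout and the INTRCNT_LIMIT value are assumptions, only the names cpu_intrcnt and cpu_kprunrun come from the listing.

#include <stdbool.h>
#include <stdint.h>

#define INTRCNT_LIMIT 16          /* assumed threshold, not the kernel's value */

/* Toy per-CPU state; the real cpu_t has many more fields. */
typedef struct toy_cpu {
    uint8_t cpu_intrcnt;          /* consecutive interrupts handled without unpinning */
    uint8_t cpu_kprunrun;         /* set to force a kernel preemption */
    bool    cpu_quiesced;         /* quiesced CPUs never re-enter the scheduler */
} toy_cpu_t;

/*
 * Called on the interrupt-return path.  If we have been in interrupt
 * context too long, ask for a preemption so the pinned thread chain can
 * unwind and the scheduler can pick the next thread to run.  A quiesced
 * CPU must not do this: its idle thread will never call the scheduler,
 * so the preemption request would leave the interrupt blocked forever.
 */
void
maybe_request_preempt(toy_cpu_t *cp)
{
    if (cp->cpu_quiesced)
        return;

    if (++cp->cpu_intrcnt >= INTRCNT_LIMIT) {
        cp->cpu_intrcnt = 0;
        cp->cpu_kprunrun = 1;     /* bubble up the t_intr chain via preempt() */
    }
}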
742 ! then the interrupt was never blocked and the return is fairly
749 ! link the thread back onto the interrupt thread pool
755 ! set the thread state to free so kernel debuggers don't see it
760 ! Switch back to the interrupted thread and return
779 ! the timestamp, try again.
783 ! If the thread being restarted isn't pinning anyone, and no interrupts
800 ! an interrupt thread stack, but the interrupted process is no longer
801 ! there. This means the interrupt must have blocked.
804 ! on the CPU's free list and resume the idle thread which will dispatch
805 ! the next thread to run.
807 ! All traps below DISP_LEVEL are disabled here, but the mondo interrupt
838 ! Put thread back on the interrupt thread list.
842 ! Set the CPU's base SPL level.
867 ! set the thread state to free so kernel debuggers don't see it
872 ! Put thread on either the interrupt pool or the free pool and
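
Lines 742-872 cover the interrupt thread's exit paths: if it never blocked, it is linked back onto the CPU's interrupt thread pool and marked free so kernel debuggers skip it; if it blocked, the CPU instead resumes its idle thread to dispatch the next runnable thread. A sketch of the "return to the pool" step, with assumed type and state encodings, is:

#define TS_FREE 0                      /* assumed encoding of the "free" state */

typedef struct toy_thread {
    struct toy_thread *t_link;         /* next thread on the free list */
    int                t_state;
} toy_thread_t;

typedef struct toy_cpu {
    toy_thread_t *cpu_intr_thread;     /* head of the interrupt thread pool */
    int           cpu_base_spl;
} toy_cpu_t;

/*
 * Put a finished interrupt thread back on the CPU's interrupt thread pool
 * and mark it free so kernel debuggers don't report it as a live thread.
 * The real code also drops the CPU's PIL back to its base SPL here.
 */
void
intr_thread_release(toy_cpu_t *cp, toy_thread_t *it)
{
    it->t_link = cp->cpu_intr_thread;  /* push onto the pool's free list */
    cp->cpu_intr_thread = it;
    it->t_state = TS_FREE;
}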
900 * Handle an interrupt in the current thread
989 ! compute the interval it ran for, and update its cumulative counter.
997 ! Use cpu_intr_actv to find the cpu_pil_high_start[] offset for the
1000 ! at one level below the current PIL. Since %o5 contains the active
1008 ! ASSERT(%l1 != 0) (we didn't shift the bit off the right edge)
1059 ! We need to find the CPU offset of the cumulative counter. We start
1080 ! done by the lowest priority high-level interrupt active.
1097 ! the accounting for the underlying interrupt thread.
1104 sub %o4, %o5, %o5 ! o5 has the interval
1146 call cmn_err ! %o2 has the %pil already
1162 ! call the handler
1302 ! We found another high-level interrupt active below the one that just
1303 ! returned. Store a starting timestamp for it in the CPU structure.
1305 ! Use cpu_intr_actv to find the cpu_pil_high_start[] offset for the
1308 ! at one level below the current PIL. Since %l2 contains the active
1318 ! ASSERT(%l1 != 0) (we didn't shift the bit off the right edge)
1339 ! done by the lowest priority high-level interrupt active.
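
Lines 989-1104 and 1302-1339 are the two halves of high-level interrupt time accounting. On entry, the clock of the highest active high-level interrupt below the new PIL is stopped and the interval added to its cumulative counter; on return, if such an interrupt is still active, its clock is restarted with a fresh %tick (time for ordinary, below-LOCK_LEVEL interrupts is charged through the interrupt thread's t_intr_start instead). The sketch below models both halves over a toy cpu structure; the field names, the cpu_pil_high_start[] indexing, and the LOCK_LEVEL boundary are assumptions patterned on the comments, not the exact kernel layout.

#include <stdint.h>

#define LOCK_LEVEL  10                 /* PILs above this are "high-level" (assumed) */
#define PIL_MAX     15
#define HIGH_LEVELS (PIL_MAX - LOCK_LEVEL)

typedef struct toy_cpu {
    uint32_t cpu_intr_actv;                       /* bit n set => PIL n active */
    uint64_t cpu_pil_high_start[HIGH_LEVELS];     /* start tick per high PIL (assumed layout) */
    uint64_t intrstat[PIL_MAX + 1];               /* cumulative ticks per PIL (simplified) */
} toy_cpu_t;

/* Highest active PIL strictly below 'pil', or 0 if there is none. */
unsigned int
highest_active_below(uint32_t actv, unsigned int pil)
{
    uint32_t mask = actv & ((1u << pil) - 1);
    unsigned int bit = 0;

    while (mask > 1) {                 /* walk up to the most significant set bit */
        mask >>= 1;
        bit++;
    }
    return (mask != 0 ? bit : 0);
}

/*
 * Entry: charge the elapsed time to the high-level interrupt we are
 * interrupting, i.e. the highest one active below the new PIL.
 */
void
high_intr_enter_account(toy_cpu_t *cp, unsigned int newpil, uint64_t now)
{
    unsigned int below = highest_active_below(cp->cpu_intr_actv, newpil);

    if (below > LOCK_LEVEL) {
        uint64_t start = cp->cpu_pil_high_start[below - (LOCK_LEVEL + 1)];

        cp->intrstat[below] += now - start;       /* the interval it ran for */
    }
}

/*
 * Return: if another high-level interrupt is still active below the one
 * that just finished, store a fresh starting timestamp for it.
 */
void
high_intr_return_account(toy_cpu_t *cp, unsigned int oldpil, uint64_t now)
{
    unsigned int below = highest_active_below(cp->cpu_intr_actv, oldpil);

    if (below > LOCK_LEVEL)
        cp->cpu_pil_high_start[below - (LOCK_LEVEL + 1)] = now;
}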
1406 * must dig it out of the save area.
1544 ! Put the request on the cpu's softint priority list and
1634 ! Put the request on the cpu's softint priority list and
1730 ! Verify the inumber received (should be inum < MAXIVNUM).
1738 ! Fetch data from intr_vec_table according to the inum.
1740 ! We have an interrupt number. Fetch the interrupt vector requests
1741 ! from the interrupt vector table for a given interrupt number and
1750 ! Verify the first intr_vec_t pointer for the given inum; it should
1752 ! cause spurious tick interrupts when the softint register is programmed
1753 ! with 1 << 0 at the end of this routine. Now we always check for a
1759 ! Traverse the intr_vec_t linked list, put each item onto the corresponding
1760 ! CPU softint priority queue, and compose the final softint pil mask.
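
Lines 1730-1760 outline the vectored-interrupt path: validate the incoming inum, look up its intr_vec_t chain in intr_vec_table, queue each request on the CPU's per-PIL softint list, and compose the PIL mask that is finally written to the softint register (checking the first pointer so a bad inum can never end up programming the register with 1 << 0). The sketch below models that traversal and mask composition; the table size and the iv_vec_next/iv_pil_next field names are assumptions.

#include <stddef.h>
#include <stdint.h>

#define MAXIVNUM   2048               /* assumed table size */
#define PIL_LEVELS 16

typedef struct intr_vec {
    struct intr_vec *iv_vec_next;     /* next request sharing this inum */
    struct intr_vec *iv_pil_next;     /* next request queued at the same PIL */
    unsigned int     iv_pil;
    unsigned int     iv_pending;
} intr_vec_t;

static intr_vec_t *intr_vec_table[MAXIVNUM];
static intr_vec_t *intr_head[PIL_LEVELS], *intr_tail[PIL_LEVELS];

/*
 * For one interrupt number, queue every registered request on the per-PIL
 * softint lists and return the mask of PILs that now need a softint posted
 * (the real code writes this mask to the softint register).  An out-of-range
 * or unregistered inum yields 0, so a broken source cannot cause a spurious
 * level-0 softint.
 */
uint32_t
vec_dispatch(unsigned int inum)
{
    intr_vec_t *iv;
    uint32_t pil_mask = 0;

    if (inum >= MAXIVNUM)                 /* verify the inumber received */
        return (0);

    for (iv = intr_vec_table[inum]; iv != NULL; iv = iv->iv_vec_next) {
        unsigned int pil = iv->iv_pil;

        if (!iv->iv_pending) {            /* queue each request only once */
            iv->iv_pending = 1;
            iv->iv_pil_next = NULL;
            if (intr_head[pil] == NULL)
                intr_head[pil] = iv;      /* empty list: becomes the head */
            else
                intr_tail[pil]->iv_pil_next = iv;
            intr_tail[pil] = iv;
        }
        pil_mask |= (1u << pil);          /* compose the final softint pil mask */
    }
    return (pil_mask);
}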
1957 tst %o3 ! see if any of the bits set
1980 * Table that finds the most significant bit set in a five-bit field.
1981 * Each entry is the high-order bit number + 1 of its index in the table.
1982 * This read-only data is in the text segment.
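
Lines 1980-1982 describe a 32-entry lookup table for finding the most significant set bit in a five-bit field, presumably so a wider PIL mask can be scanned a few bits at a time; each entry holds the high-order bit number plus one of its index. The small program below builds the equivalent table and uses it on a 10-bit mask:

#include <stdio.h>

/*
 * msbit_table[i] = (position of the highest set bit of i) + 1, or 0 when
 * i == 0.  For example msbit_table[0b10110] = 5, because bit 4 is the
 * highest set bit.
 */
static unsigned char msbit_table[32];

int
main(void)
{
    for (int i = 1; i < 32; i++) {
        int bit = 0;

        for (int v = i; v > 1; v >>= 1)    /* find the high-order bit number */
            bit++;
        msbit_table[i] = (unsigned char)(bit + 1);
    }

    /* Use the table to find the highest set bit of a 10-bit mask. */
    unsigned int mask = 0x2c0;             /* bits 6, 7, and 9 set */
    unsigned int hi5 = (mask >> 5) & 0x1f; /* upper five-bit chunk */
    unsigned int lo5 = mask & 0x1f;        /* lower five-bit chunk */
    unsigned int msb = hi5 ? msbit_table[hi5] + 5 : msbit_table[lo5];

    printf("highest set bit + 1 of 0x%x is %u\n", mask, msb);   /* prints 10 */
    return (0);
}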
2012 ! restore registers from the base of the stack of the interrupt thread.
2033 ! put registers into the save area at the top of the interrupted
2034 ! thread's stack, pointed to by %l7 in the save area just loaded.
2069 * much time has been spent handling the current interrupt. Such a function
2070 * is needed because higher level interrupts can arrive during the
2072 * the handler inaccurate. intr_get_time() only returns time spent in the
2080 * it returns the time since the interrupt handler was invoked. Subsequent
2081 * calls will return the time since the prior call to intr_get_time(). Time
2084 * not be the same across CPUs.
2091 * intrstat[pil][0] is a cumulative count of the number of ticks spent
2092 * handling all interrupts at the specified pil on this CPU. It is
2093 * exported via kstats to the user.
2095 * intrstat[pil][1] is always a count of ticks less than or equal to the
2096 * value in [0]. The difference between [1] and [0] is the value returned
2097 * by a call to intr_get_time(). At the start of interrupt processing,
2098 * [0] and [1] will be equal (or nearly so). As the interrupt consumes
2099 * time, [0] will increase, but [1] will remain the same. A call to
2100 * intr_get_time() will return the difference, then update [1] to be the
2101 * same as [0]. Future calls will return the time since the last call.
2102 * Finally, when the interrupt completes, [1] is updated to the same as [0].
2107 * "checkpoints" the timing information by incrementing intrstat[pil][0]
2109 * It then sets the return value to intrstat[pil][0] - intrstat[pil][1],
2110 * and updates intrstat[pil][1] to be the same as the new value of
2113 * In the normal handling of interrupts, after an interrupt handler returns
2114 * and the code in intr_thread() updates intrstat[pil][0], it then sets
2115 * intrstat[pil][1] to the new value of intrstat[pil][0]. When [0] == [1],
2116 * the timings are reset, i.e. intr_get_time() will return [0] - [1] which
2120 * interrupt, they update the lower pil's [0] to show time spent in the
2122 * between [0] and [1], which is returned the next time intr_get_time() is
2123 * called. Time spent in the higher-pil interrupt will not be returned in
2124 * the next intr_get_time() call from the original interrupt, because
2125 * the higher-pil interrupt's time is accumulated in intrstat[higherpil][].
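
The block comment at lines 2069-2125 fully specifies the intrstat[pil][0]/[1] bookkeeping behind intr_get_time(): [0] accumulates all ticks spent at the pil, [1] trails it, each call returns [0] - [1] and then snaps [1] up to [0], and the handler's completion does the same snap. The single-CPU model below reproduces just that arithmetic, with a plain counter standing in for %tick:

#include <stdint.h>
#include <stdio.h>

#define PIL_MAX 15

static uint64_t intrstat[PIL_MAX + 1][2];   /* [0]: total ticks, [1]: already reported */
static uint64_t tick;                       /* toy %tick */
static uint64_t start;                      /* when the current handler (re)started */

static void
checkpoint(unsigned int pil)
{
    /* Fold the ticks since 'start' into the cumulative counter. */
    intrstat[pil][0] += tick - start;
    start = tick;
}

/* Ticks spent in this handler since it started or since the previous call. */
static uint64_t
intr_get_time(unsigned int pil)
{
    uint64_t delta;

    checkpoint(pil);
    delta = intrstat[pil][0] - intrstat[pil][1];
    intrstat[pil][1] = intrstat[pil][0];    /* future calls report only new time */
    return (delta);
}

int
main(void)
{
    unsigned int pil = 6;

    start = tick = 100;                     /* handler invoked */
    tick = 130;
    printf("%llu\n", (unsigned long long)intr_get_time(pil));  /* 30: since invocation */
    tick = 175;
    printf("%llu\n", (unsigned long long)intr_get_time(pil));  /* 45: since prior call */
    tick = 200;
    checkpoint(pil);                        /* handler returns */
    intrstat[pil][1] = intrstat[pil][0];    /* [1] snaps to [0]; timings reset */
    return (0);
}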
2212 ! cpu_m.intrstat[pil][1], which is either when the interrupt was
2213 ! first entered, or the last time intr_get_time() was invoked. Then
2227 ld [%o5 + CPU_BASE_SPL], %o2 ! restore %pil to the greater