thread.c revision 5ac745124ea886136d288a418ca7b2640c840456
 * Min/Max stack sizes for stack size parameters.
 * default_stksize overrides lwp_default_stksize if it is set.

 * Forward declarations for internal thread specific data (tsd).

 * "struct _klwp" includes a "struct pcb", which includes a
 * "struct fpu", which needs to be 16-byte aligned on amd64.

 * Allocate thread structures from static_arena.  This prevents
 * issues where a thread tries to relocate its own thread
 * structure and touches it after the mapping has been suspended.

 * Originally, we had two parameters to set default stack
 * size: one for lwps (lwp_default_stksize), and one for
 * kernel-only threads (DEFAULTSTKSZ, a.k.a. _defaultstksz).
 * Now we have a third parameter, default_stksize, that
 * overrides both if it is set to a legal stack size.

 * Set up the first CPU's idle thread.
 * It runs whenever the CPU has nothing worthwhile to do.

 * Registering a thread in the callback table is usually
 * done in the initialization code of the thread.  In this
 * case, we do it right after thread creation to avoid
 * blocking the idle thread while it registers itself.  It also
 * avoids the possibility of reregistration in case a CPU
 * restarts its idle thread.

 * Finish initializing the kernel memory allocator now that
 * thread_create() is available.

 * thread_create() blocks for memory if necessary.  It never fails.
 * If stk is NULL, the thread is created at the base of the stack.

 * Every thread keeps a turnstile around in case it needs to block.
 * The only reason the turnstile is not simply part of the thread
 * structure is that we may have to break the association whenever
 * more than one thread blocks on a given synchronization object.
 * From a memory-management standpoint, turnstiles are like the
 * "attached mblks" that hang off dblks in the streams allocator.

 * Allocate both the thread and its stack in one segkp chunk.

 * The machine-dependent mutex code may require that
 * thread pointers (since they may be used for mutex owner
 * fields) have certain alignment requirements.
 * PTR24_ALIGN is the size of the alignment quanta.
 * XXX - assumes stack grows toward low addresses.

	    " too small to hold thread.");

#else	/* stack grows to larger addresses */
#endif	/* STACK_GROWTH_DOWN */

 * Initialize t_stk to the kernel stack pointer to use
 * upon entry to the kernel.

#endif	/* STACK_GROWTH_DOWN */

	/* set default stack flag */

 * p_cred could be NULL if thread_create is called before cred_init.

 * Callers who give us a NULL proc must do their own
 * stack initialization, e.g. lwp_create().

 * Put a hold on project0.  If this thread is actually in a
 * different project, then t_proj will be changed later in
 * lwp_create().  All kernel-only threads must be in project 0.

 * Add the thread to the list of all threads, and initialize
 * its t_cpu pointer.  We need to block preemption since
 * cpu_offline walks the thread list looking for threads
 * with t_cpu pointing to the CPU being offlined.  We want
 * to make sure that the list is consistent and that if t_cpu
 * is set, the thread is on the list.

 * Threads should never have a NULL t_cpu pointer, so assign it
 * here.  If the thread is being created with state TS_RUN, a
 * better CPU may be chosen when it is placed on the run queue.

 * We need to keep kernel preemption disabled when setting all
 * three fields to keep them in sync.  Also, always create in
 * the default partition since that's where kernel threads go
 * (if this isn't a kernel thread, t_cpupart will be changed
 * in lwp_create before setting the thread runnable).

 * For now, affiliate this thread with the root lgroup.
 * Since the kernel does not (presently) allocate its memory
 * in a locality aware fashion, the root is an appropriate home.
 * If this thread is later associated with an lwp, it will have
 * its lgroup re-assigned at that time.

 * Inherit the current cpu.  If this cpu isn't part of the chosen
 * lgroup, a new cpu will be chosen by cpu_choose when the thread
 * is ready to run.

 * Initialize thread state and the dispatcher lock pointer.
 * Need to hold onto pidlock to block allthreads walkers until
 * the state is set.

 * Free state will be used for intr threads.
 * The interrupt routine must set the thread dispatcher
 * lock pointer (t_lockp) if starting on a CPU
 * other than the current one.

	default:
		/* TS_SLEEP, TS_ZOMB or TS_TRANS */

 * Move the thread to project0 and take care of project reference counters.

	tsd_exit();		/* Clean up this thread's TSD */

 * No kernel thread should have called poll() without arranging
 * for pollcleanup() to be called here.

	door_slam();		/* in case thread did an upcall */

 * Remove the thread from the all-threads list so that
 * death-row can use the same pointers.

 * Check to see if the specified thread is active (defined as being on
 * the thread list).  This is certainly a slow way to do this; if there's
 * ever a reason to speed it up, we could maintain a hash table of active
 * threads indexed by their t_did.

 * Wait for the specified thread to exit.  Returns immediately if the thread
 * could not be found, meaning that it has either already exited or never
 * existed.

 * Make sure we check that the thread is on the thread list
 * before blocking on it; otherwise we could end up blocking on
 * a cv that's already been freed.  In other words, don't cache
 * the thread pointer across calls to cv_wait.
 *
 * The choice of loop invariant means that whenever a thread
 * is taken off the allthreads list, a cv_broadcast must be
 * performed on that thread's t_joincv to wake up any waiters.
 * The broadcast doesn't have to happen right away, but it
 * shouldn't be postponed indefinitely (e.g., by doing it in
 * thread_free which may only be executed when the deathrow
 * queue is processed).
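The join loop described above can be sketched in userspace with POSIX threads. Everything here (`registry_add`, `registry_remove`, `registry_join`, the `reg_*` variables) is a hypothetical stand-in for the allthreads list, t_did, and t_joincv; the point is the re-scan-after-every-wakeup loop and the broadcast-on-removal rule.

```c
#include <pthread.h>
#include <stdint.h>

/*
 * Userspace sketch of the thread_join() loop invariant: a registry
 * stands in for the allthreads list, and every removal is followed
 * by a broadcast, mirroring the rule that taking a thread off
 * allthreads requires a cv_broadcast on its t_joincv.
 * All names here are hypothetical.
 */
#define	REG_MAX	64

static pthread_mutex_t reg_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t reg_cv = PTHREAD_COND_INITIALIZER;
static uint64_t reg_ids[REG_MAX];
static int reg_count;

static int
reg_active_locked(uint64_t did)
{
	for (int i = 0; i < reg_count; i++)
		if (reg_ids[i] == did)
			return (1);
	return (0);
}

void
registry_add(uint64_t did)
{
	pthread_mutex_lock(&reg_lock);
	reg_ids[reg_count++] = did;
	pthread_mutex_unlock(&reg_lock);
}

void
registry_remove(uint64_t did)
{
	pthread_mutex_lock(&reg_lock);
	for (int i = 0; i < reg_count; i++) {
		if (reg_ids[i] == did) {
			reg_ids[i] = reg_ids[--reg_count];
			break;
		}
	}
	/* the broadcast that keeps joiners honest */
	pthread_cond_broadcast(&reg_cv);
	pthread_mutex_unlock(&reg_lock);
}

void
registry_join(uint64_t did)
{
	pthread_mutex_lock(&reg_lock);
	/* re-scan the list after every wakeup; never cache the result */
	while (reg_active_locked(did))
		pthread_cond_wait(&reg_cv, &reg_lock);
	pthread_mutex_unlock(&reg_lock);
}
```

Joining an id that was never registered returns immediately, matching the "already exited or never existed" behavior described above.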
t_lockp =
NULL;
/* nothing should try to lock this thread now */ panic(
"thread_free: turnstile still active");
 * Barrier for the clock thread.  The clock holds this lock to
 * keep the thread from going away while it's looking at it.

 * Free the thread struct and its stack.

	/* thread struct is embedded in stack */

 * Removes threads associated with the given zone from a deathrow queue.
 * tp is a pointer to the head of the deathrow queue, and countp is a
 * pointer to the current deathrow count.  Returns a linked list of
 * threads removed from the list.

 * Pull threads and lwps associated with zone off deathrow lists.

 * Clean up zombie threads that are on deathrow.

 * Register callback to clean up threads when zone is destroyed.

 * This is called by resume() to put a zombie thread onto deathrow.
 * The thread's state is changed to TS_FREE to indicate that it is reapable.
 * This is called from the idle thread so it must not block (just spin).

 * lwp_deathrow contains only threads with lwp linkage
 * that are of the default stacksize.  Anything else goes
 * onto thread_deathrow.

 * Install thread context ops for the current thread.

	void	(*fork)(void *, void *),
	void	(*free)(void *, int))
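The zone-removal routine described above amounts to unlinking matching nodes from a singly linked list while maintaining the count. A sketch under assumed simplifications (the two-field `kthread_t` and the `t_zoneid` stand-in are hypothetical):

```c
#include <stddef.h>

/*
 * Sketch of pulling zone-owned threads off a deathrow queue:
 * *tp heads the queue, *countp is the deathrow count, and the
 * removed threads come back as their own linked list.
 * The kthread layout here is a hypothetical simplification.
 */
typedef struct kthread {
	struct kthread	*t_forw;	/* deathrow link */
	int		t_zoneid;	/* owning zone (stand-in) */
} kthread_t;

kthread_t *
deathrow_remove_zone(kthread_t **tp, int *countp, int zoneid)
{
	kthread_t *removed = NULL;
	kthread_t **prevp = tp;

	while (*prevp != NULL) {
		kthread_t *t = *prevp;
		if (t->t_zoneid == zoneid) {
			*prevp = t->t_forw;	/* unlink from deathrow */
			t->t_forw = removed;	/* push onto result list */
			removed = t;
			(*countp)--;
		} else {
			prevp = &t->t_forw;
		}
	}
	return (removed);
}
```

Using a pointer-to-link (`prevp`) avoids special-casing removal of the list head.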
 * Remove thread context ops from the current thread.
 * (Or allow the agent thread to remove thread context ops from another
 * thread in the same process.)

	void	(*fork)(void *, void *),
	void	(*free)(void *, int))
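The install/remove pairs above manage a per-thread list of context ops; each entry carries an argument plus callbacks such as the fork and free operators in the signatures shown. A hypothetical userspace sketch (all structure and function names are assumptions, not the kernel's API):

```c
#include <stdlib.h>

/*
 * Sketch of a per-thread context-ops list in the spirit of
 * installctx()/removectx(), reduced to just the fork and free
 * operators.  All names here are hypothetical.
 */
typedef struct ctxop {
	struct ctxop	*next;
	void		*arg;
	void		(*fork)(void *parent, void *child);
	void		(*free)(void *arg, int isexec);
} ctxop_t;

static int frees_run;		/* demo counter for the free operator */

static void
count_free(void *arg, int isexec)
{
	(void)arg;
	(void)isexec;
	frees_run++;
}

void
installctx_sketch(ctxop_t **head, void *arg,
    void (*fork)(void *, void *), void (*free_op)(void *, int))
{
	ctxop_t *ctx = malloc(sizeof (ctxop_t));
	ctx->arg = arg;
	ctx->fork = fork;
	ctx->free = free_op;
	ctx->next = *head;
	*head = ctx;
}

int
removectx_sketch(ctxop_t **head, void *arg,
    void (*fork)(void *, void *), void (*free_op)(void *, int))
{
	for (ctxop_t **pp = head; *pp != NULL; pp = &(*pp)->next) {
		ctxop_t *ctx = *pp;
		if (ctx->arg == arg && ctx->fork == fork &&
		    ctx->free == free_op) {
			*pp = ctx->next;	/* unlink matching entry */
			free(ctx);
			return (1);
		}
	}
	return (0);		/* no matching context op installed */
}

/* run every free operator, as freectx() would at thread_free()/exec() */
void
freectx_sketch(ctxop_t **head, int isexec)
{
	ctxop_t *ctx;

	while ((ctx = *head) != NULL) {
		*head = ctx->next;
		if (ctx->free != NULL)
			ctx->free(ctx->arg, isexec);
		free(ctx);
	}
}
```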
 * There's a potential race for t_ctx between the agent thread
 * and the target thread when lwps are exiting (for example,
 * when the process is reacting to having been killed).  At
 * other times, the target thread will be TS_STOPPED while the
 * agent thread is inside this function.  However, from the
 * perspective of the cost of locking, it seems cheaper to take
 * a thread-specific lock every time we come through here.

 * Note that this operator is only invoked via the _lwp_create
 * system call.  The system may have other reasons to create lwps,
 * e.g. the agent lwp or the doors unreferenced lwp.

 * exitctx is called from thread_exit() and lwp_exit() to perform any actions
 * needed when the thread/LWP leaves the processor for the last time.  This
 * routine is not intended to deal with freeing memory; freectx() is used for
 * that purpose during thread_free().  This routine is provided to allow for
 * clean-up that can't wait until thread_free().

 * freectx is called from thread_free() and exec() to get
 * rid of old thread context ops.

 * Set the thread running; arrange for it to be swapped in if necessary.

 * Already on dispatcher queue.

 * All of the sending of SIGCONT (TC_XSTART) and /proc
 * (TC_PSTART) and lwp_continue() (TC_CSTART) must have
 * requested that the thread be run.
 * Just calling setrun() is not sufficient to set a stopped
 * thread running.  TP_TXSTART is always set if the thread
 * is not stopped by a jobcontrol stop signal.
 * TP_TPSTART is always set if /proc is not controlling it.
 * TP_TCSTART is always set if lwp_suspend() didn't stop it.
 * The thread won't be stopped unless one of these
 * three mechanisms did it.
 *
 * These flags must be set before calling setrun_locked(t).
 * They can't be passed as arguments because the streams
 * code calls setrun() indirectly and the mechanism for
 * doing so admits only one argument.  Note that the
 * thread must be locked in order to change t_schedflags.

 * Process is no longer stopped (a thread is running).

 * Strictly speaking, we do not have to clear these
 * flags here; they are cleared on entry to stop().
 * However, they are confusing when doing kernel
 * debugging or when they are revealed by ps(1).

 * Let the class put the process on the dispatcher queue.

 * Unpin an interrupted thread.
 *
 * When an interrupt occurs, the interrupt is handled on the stack
 * of an interrupt thread, taken from a pool linked to the CPU structure.
 *
 * When swtch() is switching away from an interrupt thread because it
 * blocked or was preempted, this routine is called to complete the
 * saving of the interrupted thread state, and returns the interrupted
 * thread pointer so it may be resumed.
 *
 * Called by swtch() only at high spl.

	int	i;		/* interrupt level */

 * Get state from interrupt thread for the one it interrupted.

	    "intr_passivate:level %d curthread %p (%T) ithread %p (%T)",
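The three-mechanism start condition described for setrun() reduces to an all-bits-set test on the scheduling flags. A sketch with hypothetical flag values (the kernel keeps these in t_schedflag; only the all-bits-set test matters here):

```c
/*
 * Sketch of the three-way start condition: a stopped thread may
 * run only when job control (TXSTART), /proc (TPSTART), and
 * lwp_suspend (TCSTART) have all released it.  The flag values
 * below are hypothetical.
 */
#define	TP_TXSTART	0x01	/* not stopped by a jobcontrol stop signal */
#define	TP_TPSTART	0x02	/* not stopped by /proc */
#define	TP_TCSTART	0x04	/* not stopped by lwp_suspend() */
#define	TP_ALLSTART	(TP_TXSTART | TP_TPSTART | TP_TCSTART)

static int
thread_may_run(int schedflag)
{
	/* the thread stays stopped until every mechanism has released it */
	return ((schedflag & TP_ALLSTART) == TP_ALLSTART);
}
```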
 * Dissociate the current thread from the interrupted thread's LWP.

 * Interrupt handlers above the level that spinlocks block must
 * not block.

 * Compute the CPU's base interrupt level based on the active
 * interrupts.

 * Create and initialize an interrupt thread.
 * Returns non-zero on error.
 * Called at spl7() or better.

 * Set the thread in the TS_FREE state.  The state will change
 * to TS_ONPROC only while the interrupt is active.  Think of these
 * as being on a private free list for the CPU.  Being TS_FREE keeps
 * inactive interrupt threads out of debugger thread lists.
 *
 * We cannot call thread_create with TS_FREE because of the current
 * checks there for ONPROC.  Fix this when thread_create takes flags.

 * Nobody should ever reference the credentials of an interrupt
 * thread so make it NULL to catch any such references.

 * Don't make a user-requested binding on this thread so that
 * the processor can be offlined.

	*(tp->t_stk) = 0;	/* terminate intr thread stack */

 * Link onto the CPU's interrupt pool.

 * TSD -- THREAD SPECIFIC DATA

	/* per-key destructor funcs */
	/* list of tsd_thread's */

 * Needed because a NULL destructor means that the key is unused.

 * Create a key (index into per-thread array).
 * Locks out tsd_create, tsd_destroy, and tsd_exit.
 * May allocate memory with lock held.

 * If the key is allocated, do nothing.

 * If there are no unused keys, increase the size of the destructor array.

 * Allocate the next available unused key.

 * Destroy a key -- this is for unloadable modules.
 * Assumes that the caller is preventing tsd_set and tsd_get.
 * Locks out tsd_create, tsd_destroy, and tsd_exit.
 * May free memory with lock held.

 * Protect the key namespace and our destructor lists.

 * For every thread with TSD, call the key's destructor.

 * No TSD for this key in this thread.

 * Call the destructor for the key.

 * Actually free the key (NULL destructor == unused).

 * Quickly return the per-thread value that was stored with the specified key.
 * Assumes the caller is protecting the key from tsd_create and tsd_destroy.

 * Set a per-thread value indexed with the specified key.

 * Like tsd_get(), except that the agent lwp can get the tsd of
 * another thread in the same process (the agent thread only runs when the
 * process is completely stopped by /proc), or syslwp is creating a new lwp.

 * Like tsd_set(), except that the agent lwp can set the tsd of
 * another thread in the same process, or syslwp can set the tsd
 * of a thread it's in the middle of creating.
 * Assumes the caller is protecting the key from tsd_create and tsd_destroy.
 * May lock out tsd_destroy (and tsd_create), may allocate memory with
 * lock held.

 * Link onto the list of threads with TSD.

 * Allocate thread local storage and set the value for the key.

 * Return the per-thread value that was stored with the specified key.
 * If necessary, create the key and the value.
 * Assumes the caller is protecting *keyp from tsd_destroy.

 * Called from thread_exit() to run the destructor function for each tsd.
 * Locks out tsd_create and tsd_destroy.
 * Assumes that the destructor *DOES NOT* use tsd.

 * Lock out tsd_create and tsd_destroy, call
 * the destructor, and mark the value as destroyed.

 * Remove from the linked list of threads with TSD.

 * Check to see if an interrupt thread might be active at a given ipl.
 * We must be conservative--it is ok to give a false yes, but a false no
 * will cause disaster.  (But if the situation changes after we check, it is
 * ok--the caller is trying to ensure that an interrupt routine has been
 * exited.)
 * This is used when trying to remove an interrupt handler from an autovector
 * list in avintr.c.

 * Return non-zero if an interrupt is being serviced.

	/* Are we an interrupt thread? */
	/* Are we servicing a high level interrupt? */

 * Change the dispatch priority of a thread in the system.
 * Used when raising or lowering a thread's priority.
 * (E.g., priority inheritance)
 *
 * Since threads are queued according to their priority, we
 * must check the thread's state to determine whether it
 * is on a queue somewhere.  If it is, we've got to:
 *
 *	o Change its effective priority.
 *
 * Assumptions: The thread whose priority we wish to change
 * must be locked before we call thread_change_(e)pri().
 * The thread_change(e)pri() function doesn't drop the thread
 * lock--that must be done by its caller.

 * If the inherited priority hasn't actually changed,
 * just return.

 * If it's not on a queue, change the priority with
 * impunity.

 * It's either on a sleep queue or a run queue.

 * Take the thread out of its sleep queue.
 * Change the inherited priority.
 * Each synchronization object exports a function
 * to do this in an appropriate manner.

 * The thread is on a run queue.
 * Note: setbackdq() may not put the thread
 * back on the same run queue where it originally
 * resided.
/* end of thread_change_epri */ * Function: Change the t_pri field of a thread. * Side Effects: Adjust the thread ordering on a run queue * or sleep queue, if necessary. * Returns: 1 if the thread was on a run queue, else 0. * If it's not on a queue, change the priority with * It's either on a sleep queue or a run queue. * If the priority has changed, take the thread out of * its sleep queue and change the priority. * Each synchronization object exports a function * to do this in an appropriate manner. * The thread is on a run queue. * Note: setbackdq() may not put the thread * back on the same run queue where it originally * We still requeue the thread even if the priority * is unchanged to preserve round-robin (and other) * effects between threads of the same priority.