thr.c revision bdf0047c9427cca40961a023475891c898579c37
/*
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each file.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 */

/*
 * Copyright 2010 Sun Microsystems, Inc.  All rights reserved.
 * Use is subject to license terms.
 */

/*
 * These symbols should not be exported from libc, but
 * components reference them.  These need to be fixed, too.
 */

/*
 * Between Solaris 2.5 and Solaris 9, __threaded was used to indicate
 * "we are linked with libthread".  The Sun Workshop 6 update 1 compilation
 * system used it illegally (it is a consolidation private symbol).
 * To accommodate this and possibly other abusers of the symbol,
 * we make it always equal to 1 now that libthread has been folded
 * into libc.  The new __libc_threaded symbol is used to indicate
 * the new meaning, "more than one thread exists".
 */

/*
 * thr_concurrency and pthread_concurrency are not used by the library.
 * They exist solely to hold and return the values set by calls to
 * thr_setconcurrency() and pthread_setconcurrency().
 * Because thr_concurrency is affected by the THR_NEW_LWP flag
 * to thr_create(), thr_concurrency is protected by link_lock.
 */

/* initial allocation, just enough for one lwp */

/*
 * The weak version is known to libc_db and mdb.
 */
	{ 0, },				/* tdb_hash_lock_stats */
	{ { 0 }, },			/* siguaction[NSIG] */
	0,				/* primary_map */
	0,				/* bucket_init */
	{ 0 },				/* uberflags */
	1,				/* hash_size: size of the hash table */
	0,				/* hash_mask: hash_size - 1 */
	10,				/* thread_stack_cache */
	0,				/* tdb_register_count */
	0,				/* tdb_hash_alloc_failed */
	0,				/* tdb_sync_alloc */
	{ 0, 0 },			/* tdb_ev_global_mask */
/*
 * The weak version is known to libc_db and mdb.
 */

/*
 * Insert the lwp into the hash table.
 */

/*
 * Delete the lwp from the hash table.
 */

/*
 * Retain stack information for thread structures that are being recycled
 * for new threads.  All other members of the thread structure should be
 * zeroed.
 */

/*
 * Answer the question, "Is the lwp in question really dead?"
 * We must inquire of the operating system to be really sure
 * because the lwp may have called lwp_exit() but it has not
 * yet completed the exit.
 */

/*
 * Attempt to keep the stack cache within the specified cache limit.
 */

/*
 * Now put the free ulwp on the ulwp freelist.
 */

/*
 * Find an unused stack of the requested size
 * or create a new stack of the requested size.
 * Return a pointer to the ulwp_t structure referring to the stack, or NULL.
 * thr_exit() stores 1 in the ul_dead member.
 * thr_join() stores -1 in the ul_lwpid member.
 */

/*
 * The stack is allocated PROT_READ|PROT_WRITE|PROT_EXEC
 * unless overridden by the system's configuration.
 */

/*
 * One megabyte stacks by default, but subtract off
 * two pages for the system-created red zones.
 */

/* Round up a non-zero stack size to a pagesize multiple. */

/* Round up the mapping size to a multiple of pagesize. */

/*
 * Note: mmap() provides at least one page of red zone
 * so we deduct that from the value of guardsize.
 */

/* The previous lwp is gone; reuse the stack. */

/* Remove the ulwp from the stack list. */

/*
 * None of the cached stacks matched our mapping size.
 * Reduce the stack cache to get rid of possibly
 * very old stacks that will never be reused.
 */

/* We have allocated our stack.  Now allocate the ulwp. */

	if (guardsize)		/* protect the extra red zone */

/*
 * Get a ulwp_t structure from the free list or allocate a new one.
 * Such ulwp_t's do not have a stack allocated by the library.
 */

/* LINTED pointer cast may result in improper alignment */

/*
 * If there is an associated stack, put it on the stack list and
 * munmap() previously freed stacks up to the residual cache limit.
 * Else put it on the ulwp free list and never call lfree() on it.
 */

/*
 * Find a named lwp and return a pointer to its hash list location.
 * On success, returns with the hash lock held.
 */

/*
 * Wake up all lwps waiting on this lwp for some reason.
 */

/*
 * Find a named lwp and return a pointer to it.
 * Returns with the hash lock held.
 */

/*
 * Enforce the restriction of not creating any threads
 * until the primary link map has been initialized.
 * Also, disallow thread creation to a child of vfork().
 */

/* initialize the private stack */

/* ulwp is not in the hash table; make sure hash_out() doesn't fail */

/* creating a thread: enforce mt-correctness in mutex_lock() */

/* per-thread copies of global variables, for speed */

/* new thread inherits creating thread's scheduling parameters */

/*
 * We cache several instructions in the thread structure for use
 * by the fasttrap DTrace provider.  When changing this, read the
 * comment in fasttrap.h for all the other places that must be changed.
 */

/*
 * Defer signals on the new thread until its TLS constructors
 * have been called.  _thrp_setup() will call sigon() after
 * it has called tls_setup().
 */

/*
 * Call enter_critical() to avoid being suspended until we
 * have linked the new thread into the proper lists.
 * This is necessary because forkall() and fork1() must
 * suspend all threads and they must see a complete list.
 */

/*
 * A special cancellation cleanup hook for DCE.
 * cleanuphndlr, when it is not NULL, will contain a callback
 * function to be called before a thread is terminated in
 * thr_exit() as a result of being cancelled.
 */

/*
 * _pthread_setcleanupinit: sets the cleanup hook.
 */

/*
 * We are the last non-daemon thread exiting.
 */
/*
 * Exit the process.  We retain our TSD and TLS so
 * that atexit() application functions can use them.
 */
	tsd_exit();		/* deallocate thread-specific data */
	tls_exit();		/* deallocate thread-local storage */

/* block all signals to finish exiting */
/* also prevent ourself from being suspended */

/*
 * We want to free the stack for reuse but must keep
 * the ulwp_t struct for the benefit of thr_join().
 * For this purpose we allocate a replacement ulwp_t.
 */

/* collect queue lock statistics before marking ourself dead */

/*
 * Having just changed the address of curthread, we
 * must reset the ownership of the locks we hold so
 * that assertions will not fire when we release them.
 */

/*
 * On i386, %gs still references the original, not the
 * replacement, ulwp structure.  Fetching the replacement
 * curthread pointer via %gs:0 works correctly since the
 * original ulwp structure will not be reallocated until
 * this lwp has completed its lwp_exit() system call (see
 * dead_and_buried()), but from here on out, we must make
 * no references to %gs:<offset> other than %gs:0.
 */

/*
 * Put non-detached terminated threads in the all_zombies list.
 */

/*
 * Notify everyone waiting for this thread.
 */

/*
 * Prevent any more references to the schedctl data.
 * We are exiting and continue_fork() may not find us.
 * Do this just before dropping link_lock, since fork
 * serializes on link_lock.
 */

	thr_panic("_thrp_exit(): _lwp_terminate() returned");

/*
 * Disable cancellation and call the special DCE cancellation
 * cleanup hook if it is enabled.  Do nothing else before calling
 * the DCE cancellation cleanup hook; it may call longjmp() and
 * never return here.
 */

/*
 * Block application signals while we are exiting.
 * We call out to C++, TSD, and TLS destructors while exiting
 * and these are application-defined, so we cannot be assured
 * that they won't reset the signal mask.  We use sigoff() to
 * defer any signals that may be received as a result of this
 * bad behavior.  Such signals will be lost to the process
 * when the thread finishes exiting.
 */

/*
 * If thr_exit is being called from the places where
 * C++ destructors are to be called such as cancellation
 * points, then set this flag.  It is checked in _t_cancel()
 * to decide whether _ex_unwind() is to be called or not.
 */

/*
 * _thrp_unwind() will eventually call _thrp_exit().
 */
	thr_panic("_thrp_exit_common(): _thrp_unwind() returned");
	for (;;)
		/* to shut the compiler up about __NORETURN */

/*
 * Called when a thread returns from its start function.
 * We are at the top of the stack; no unwinding is necessary.
 */

/*
 * We must hold link_lock to avoid a race condition with find_stack().
 */

/*
 * lwp_wait() found an lwp that the library doesn't know
 * about.  It must have been created with _lwp_create().
 * Just return its lwpid; we can't know its status.
 */

/*
 * Remove ulwp from the hash table.
 */

/*
 * Remove ulwp from all_zombies list.
 */

/*
 * We can't call ulwp_unlock(ulwp) after we set
 * ulwp->ul_ix = -1 so we have to get a pointer to the
 * ulwp's hash table mutex now in order to unlock it below.
 */

/*
 * pthread_join() differs from Solaris thr_join():
 * It does not return the departed thread's id
 * and hence does not have a "departed" argument.
 * It returns EINVAL if tid refers to a detached thread.
 */

	while ((c = *match++) != '\0') {

/*
 * Look for and evaluate environment variables of the form "_THREAD_*".
 * For compatibility with the past, we also look for environment
 * names of the form "LIBTHREAD_*".
 */
	if (c == '_' && strncmp(ev, "_THREAD_", 8) == 0)
	if (c == 'L' && strncmp(ev, "LIBTHREAD_", 10) == 0)
/* PROBE_SUPPORT begin */

/* same as atexit() but private to the library */
extern int _atexit(void (*)(void));
/* same as _cleanup() but private to the library */

/*
 * libc_init() is called by ld.so.1 for library initialization.
 * We perform minimal initialization; enough to work with the main thread.
 */

/*
 * For the initial stage of initialization, we must be careful
 * not to call any function that could possibly call _cerror().
 * For this purpose, we call only the raw system call wrappers.
 */

/*
 * Gather information about cache layouts for optimized
 * AMD and Intel assembler strfoo() and memfoo() functions.
 */

/*
 * Every libc, regardless of which link map, must register __cleanup().
 */

/*
 * We keep our uberdata on one of (a) the first alternate link map
 * or (b) the primary link map.  We switch to the primary link map
 * and stay there once we see it.  All intermediate link maps are
 * subject to being unloaded at any time.
 */

	atfork_init();	/* every link map needs atfork() processing */

/*
 * To establish the main stack information, we have to get our context.
 * This is also convenient to use for getting our signal mask.
 */

	thr_panic("cannot allocate thread structure for main thread");
/* LINTED pointer cast may result in improper alignment */

/*
 * Are the old and new sets different?
 * (This can happen if we are currently blocking SIGCANCEL.)
 * If so, we must explicitly set our signal mask, below.
 */

/*
 * We cache several instructions in the thread structure for use
 * by the fasttrap DTrace provider.  When changing this, read the
 * comment in fasttrap.h for all the other places that must be changed.
 */

/*
 * Retrieve all pointers to uberdata allocated
 * while running on previous link maps.
 * We would like to do a structure assignment here, but
 * gcc turns structure assignments into calls to memcpy(),
 * a function exported from libc.  We can't call any such
 * external functions until we establish curthread, below,
 * so we just call our private version of memcpy().
 */

/*
 * These items point to global data on the primary link map.
 */

/*
 * In every link map, tdb_bootstrap points to the same piece of
 * allocated memory.  When the primary link map is initialized,
 * the allocated memory is assigned a pointer to the one true
 * uberdata.  This allows libc_db to initialize itself regardless
 * of which instance of libc it finds in the address space.
 */

/*
 * Cancellation can't happen until:
 *	pthread_cancel() is called, or
 *	another thread is created.
 * For now, as a single-threaded process, set the flag that tells
 * the library that cancellation can't happen.
 */

#endif	/* __i386 || __amd64 */

/*
 * Now curthread is established and it is safe to call any
 * function in libc except one that uses thread-local storage.
 */

/* tls_size was zero when oldself was allocated */

/*
 * If the stack is unlimited, we set the size to zero to disable
 * stack checking.
 * XXX: Work harder here.  Get the stack size from /proc/self/rmap
 */

/*
 * Get the variables that affect thread behavior from the environment.
 */

/*
 * Make per-thread copies of global variables, for speed.
 */

/*
 * Tell the kernel to fix up ldx/stx instructions that
 * refer to non-8-byte aligned data instead of giving
 * the process an alignment trap and generating SIGBUS.
 * Programs compiled for 32-bit sparc with the Studio SS12
 * compiler get this done for them automatically (in _init()).
 * We do it here for the benefit of programs compiled with
 * other compilers, like gcc.
 *
 * This is necessary for the _THREAD_LOCKS_MISALIGNED=1
 * environment variable horrible hack to work.
 */

/*
 * When we have initialized the primary link map, inform
 * the dynamic linker about our interface functions.
 */

/*
 * Defer signals until TLS constructors have been called.
 */

/*
 * Make private copies of __xpg4 and __xpg6 so libc can test
 * them after this point without invoking the dynamic linker.
 */

/* PROBE_SUPPORT begin */

/*
 * We need to reset __threaded dynamically at runtime, so that
 * __threaded can be bound to __threaded outside libc which may not
 * have initial value of 1 (without a copy relocation in a.out).
 */

/*
 * If we are doing fini processing for the instance of libc
 * on the first alternate link map (this happens only when
 * the dynamic linker rejects a bad audit library), then clear
 * __curthread().  We abandon whatever memory was allocated by
 * lmalloc() while running on this alternate link-map but we
 * don't care (and can't find the memory in any case); we just
 * want to protect the application from this bad audit library.
 * No fini processing is done by libc in the normal case.
 */
/*
 * finish_init is called when we are about to become multi-threaded,
 * that is, on the first call to thr_create().
 * No locks needed here; we are single-threaded on the first call.
 * We can be called only after the primary link map has been set up.
 */

/*
 * Initialize self->ul_policy, self->ul_cid, and self->ul_pri.
 */

/*
 * Allocate the queue_head array if not already allocated.
 */

/*
 * Now allocate the thread hash table.
 */
	thr_panic("cannot allocate thread hash table");
/*
 * Set up the SIGCANCEL handler for thread cancellation.
 */

/*
 * Arrange to do special things on exit --
 *	- collect queue statistics from all remaining active threads.
 *	- dump queue statistics to stderr if _THREAD_QUEUE_DUMP is set.
 *	- grab assert_lock to ensure that assertion failures
 *	  and a core dump take precedence over _exit().
 * (Functions are called in the reverse order of their registration.)
 */

/*
 * Used only by postfork1_child(), below.
 */

/*
 * This is called from fork1() in the child.
 * Reset our data structures to reflect one lwp.
 */

/* daemon threads shouldn't call fork1(), but oh well... */

/*
 * Some thread in the parent might have been suspended
 * while holding udp->callout_lock or udp->ld_lock.
 * Reinitialize the child's copies.
 */

/* no one in the child is on a sleep queue; reinitialize */

/*
 * All lwps except ourself are gone.  Mark them so.
 * First mark all of the lwps that have already been freed.
 * Then mark and free all of the active lwps except ourself.
 * Since we are single-threaded, no locks are required here.
 */

/*
 * Do post-fork1 processing for subsystems that need it.
 */

		ts.tv_sec = 0;		/* give him a chance to run */
		ts.tv_nsec = 100000;	/* 100 usecs or clock tick */

		break;		/* so we are done */

/*
 * He is marked as being in the process of stopping
 * himself.  Loop around and continue him again.
 * He may not have been stopped the first time.
 */

/*
 * Suspend an lwp with lwp_suspend(), then move it to a safe point,
 * that is, to a point where ul_critical and ul_rtld are both zero.
 * On return, the ulwp_lock() is dropped as with ulwp_unlock().
 * If 'link_dropped' is non-NULL, then 'link_lock' is held on entry.
 * If we have to drop link_lock, we store 1 through link_dropped.
 * If the lwp exits before it can be suspended, we return ESRCH.
 */

/*
 * We must grab the target's spin lock before suspending it.
 * See the comments below and in _thrp_suspend() for why.
 */

/* thread is already safe */

/*
 * Setting ul_pleasestop causes the target thread to stop
 * itself in _thrp_suspend(), below, after we drop its lock.
 * We must continue the critical thread before dropping
 * link_lock because the critical thread may be holding
 * the queue lock for link_lock.  This is delicate.
 */

/* be sure to drop link_lock only once */

/*
 * The thread may disappear by calling thr_exit() so we
 * cannot rely on the ulwp pointer after dropping the lock.
 * Instead, we search the hash table to find it again.
 * When we return, we may find that the thread has been
 * continued by some other thread.  The suspend/continue
 * interfaces are prone to such race conditions by design.
 */

/*
 * Do another lwp_suspend() to make sure we don't
 * return until the target thread is fully stopped
 * in the kernel.  Don't apply lwp_suspend() until
 * we know that the target is not holding any
 * queue locks, that is, that it has completed
 * ulwp_unlock(self) and has, or at least is
 * about to, call lwp_suspend() on itself.  We do
 * this by grabbing the target's spin lock.
 */

/*
 * If some other thread did a thr_continue()
 * on the target thread we have to start over.
 */

/*
 * We can't suspend anyone except ourself while
 * some other thread is performing a fork.
 * This also allows only one suspension at a time.
 */
/*
 * After suspending the other thread, move it out of a
 * critical section and deal with the schedctl mappings.
 * safe_suspend() suspends the other thread, calls
 * ulwp_broadcast(ulwp) and drops the ulwp lock.
 */

/*
 * We are suspending ourself.  We must not take a signal
 * until we return from lwp_suspend() and clear ul_stopping.
 * This is to guard against siglongjmp().
 */

/*
 * Grab our spin lock before dropping ulwp_mutex(self).
 * This prevents the suspending thread from applying
 * lwp_suspend() to us before we emerge from
 * lmutex_unlock(mp) and have dropped mp's queue lock.
 */

/*
 * From this point until we return from lwp_suspend(),
 * we must not call any function that might invoke the
 * dynamic linker, that is, we can only call functions
 * private to the library.
 *
 * Also, this is a nasty race condition for a process
 * that is undergoing a forkall() operation:
 * Once we clear our spinlock (below), we are vulnerable
 * to being suspended by the forkall() thread before
 * we manage to suspend ourself in ___lwp_suspend().
 * See safe_suspend() and force_continue().
 *
 * To avoid a SIGSEGV due to the disappearance
 * of the schedctl mappings in the child process,
 * which can happen in spin_lock_clear() if we
 * are suspended while we are in the middle of
 * its call to preempt(), we preemptively clear
 * our own schedctl pointer before dropping our
 * spinlock.  We reinstate it, in both the parent
 * and (if this really is a forkall()) the child.
 */

/*
 * Somebody else continued us.
 * We can't grab ulwp_lock(self)
 * until after clearing ul_stopping.
 * force_continue() relies on this.
 */

/*
 * Suspend all lwps other than ourself in preparation for fork.
 */

/*
 * Move the stopped lwp out of a critical section.
 */

/*
 * Clear the schedctl pointers in the child of forkall().
 */

/*
 * Set all lwps that were stopped for fork() running again.
 */

/*
 * Exit a critical section, take deferred actions if necessary.
 * Called from exit_critical() and from sigon().
 */
/*
 * Don't suspend ourself or take a deferred signal while dying
 * or while executing inside the dynamic linker (ld.so.1).
 */

/*
 * Avoid a recursive call to exit_critical() in _thrp_suspend()
 * by keeping self->ul_critical == 1 here.
 */

/*
 * Guard against suspending ourself while on a sleep
 * queue.  See the comments in call_user_handler().
 */

/*
 * Clear ul_cursig before proceeding.
 * This protects us from the dynamic linker's
 * calls to bind_guard()/bind_clear() in the
 * event that it is invoked to resolve a symbol
 * like take_deferred_signal() below.
 */

/*
 * _ti_bind_guard() and _ti_bind_clear() are called by the dynamic linker
 * (ld.so.1) when it has to do something, like resolve a symbol to be called
 * by the application or one of its libraries.  _ti_bind_guard() is called
 * on entry to ld.so.1, _ti_bind_clear() on exit from ld.so.1 back to the
 * application.  The dynamic linker gets special dispensation from libc to
 * run in a critical region (all signals deferred and no thread suspension
 * or forking allowed), and to be immune from cancellation for the duration.
 */

	sigoff(self);		/* see no signals while holding ld_lock */

/*
 * Tell the dynamic linker (ld.so.1) whether or not it was entered from
 * a critical region in libc.  Return zero if not, else return non-zero.
 */

	return (level);		/* ld.so.1 hasn't (yet) called enter() */

/*
 * sigoff() and sigon() enable cond_wait() to behave (optionally) like
 * it does in the old libthread (see the comments in cond_wait_queue()).
 * Also, signals are deferred at thread startup until TLS constructors
 * have all been called, at which time _thrp_setup() calls sigon().
 *
 * _sigoff() and _sigon() are external consolidation-private interfaces to
 * sigoff() and sigon(), respectively, in libc.  These are used in libnsl.
 * Also, _sigoff() and _sigon() are called from dbx's run-time checking
 * (librtc.so) to defer signals during its critical sections (not to be
 * confused with libc critical sections [see exit_critical() above]).
 */

	if (new_level > 65536)		/* 65536 is totally arbitrary */
		new_level = 65536;

	if (new_level > 65536)
		/* 65536 is totally arbitrary */
		new_level = 65536;

/*
 * The remainder of this file implements the private interfaces to java for
 * garbage collection.  It is no longer used, at least by java 1.2.
 * It can all go away once all old JVMs have disappeared.
 */

/*
 * Get the available register state for the target thread.
 * Return non-volatile registers: TRS_NONVOLATILE
 */

/*
 * Set the appropriate register state for the target thread.
 */

/*
 * This is not used by java.  It exists solely for the MSTC test suite.
 */

/* do /proc stuff here? */

	yield();	/* give him a chance to stop */

/* "__gettsp(%u): can't read lwpstatus" w/o stdio */
	(void) strcat(buf, "): can't read lwpstatus");

/*
 * This tells java stack walkers how to find the ucontext
 * structure passed to signal handlers.
 */

/*
 * Mark a thread a mutator or reset a mutator to being a default,
 * non-mutator thread.
 */

/*
 * The target thread should be the caller itself or a suspended thread.
 * This prevents the target from also changing its ul_mutator field.
 */

/*
 * Establish a barrier against new mutators.  Any non-mutator trying
 * to become a mutator is suspended until the barrier is removed.
 */

/*
 * Wait if trying to set the barrier while it is already set.
 */

/*
 * Wakeup any blocked non-mutators when the barrier is removed.
 */

/*
 * Suspend the set of all mutators except for the caller.  The list
 * of actively running threads is searched and only the mutators
 * in this list are suspended.  Actively running non-mutators remain
 * running.  Any other thread is suspended.
 */

/*
 * Move the stopped lwp out of a critical section.
 */

/*
 * Suspend the target mutator.  The caller is permitted to suspend
 * itself.  If a mutator barrier is enabled, the caller will suspend
 * itself as though it had been suspended by thr_suspend_allmutators().
 * When the barrier is removed, this thread will be resumed.  Any
 * suspended mutator, whether suspended by thr_suspend_mutator(), or by
 * thr_suspend_allmutators(), can be resumed by thr_continue_mutator().
 */

/*
 * Resume the set of all suspended mutators.
 */

/*
 * Resume a suspended mutator.
 */

/* PROBE_SUPPORT begin */