// ThreadPoolExecutor.java, revision 38
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.  Sun designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Sun in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa Clara,
 * CA 95054 USA or visit www.sun.com if you need additional information or
 * have any questions.
 *
 * This file is available under and governed by the GNU General Public
 * License version 2 only, as published by the Free Software Foundation.
 * However, the following notice accompanied the original version of this
 * file:
 *
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain, as explained at
 *
 * An {@link ExecutorService} that executes each submitted task using
 * one of possibly several pooled threads, normally configured
 * using {@link Executors} factory methods.
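As a quick orientation for the class documented below, here is a minimal sketch (the class name `PoolDemo` is ours, not part of this file) showing the factory-method route alongside the equivalent direct construction:

```java
import java.util.concurrent.*;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        // Factory method: a fixed-size pool of 2 threads.
        ExecutorService fixed = Executors.newFixedThreadPool(2);

        // Equivalent direct construction: corePoolSize == maximumPoolSize,
        // zero keep-alive, unbounded work queue.
        ExecutorService direct = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());

        Future<Integer> f = fixed.submit(() -> 21 + 21);
        System.out.println(f.get()); // prints 42

        fixed.shutdown();
        direct.shutdown();
    }
}
```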
 *
 * <p>Thread pools address two different problems: they usually
 * provide improved performance when executing large numbers of
 * asynchronous tasks, due to reduced per-task invocation overhead,
 * and they provide a means of bounding and managing the resources,
 * including threads, consumed when executing a collection of tasks.
 * Each {@code ThreadPoolExecutor} also maintains some basic
 * statistics, such as the number of completed tasks.
 *
 * <p>To be useful across a wide range of contexts, this class
 * provides many adjustable parameters and extensibility
 * hooks.  However, programmers are urged to use the more convenient
 * {@link Executors} factory methods {@link
 * Executors#newCachedThreadPool} (unbounded thread pool, with
 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
 * (fixed size thread pool) and {@link
 * Executors#newSingleThreadExecutor} (single background thread), that
 * preconfigure settings for the most common usage
 * scenarios.  Otherwise, use the following guide when manually
 * configuring and tuning this class:
 *
 * <dt>Core and maximum pool sizes</dt>
 *
 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
 * pool size (see {@link #getPoolSize})
 * according to the bounds set by
 * corePoolSize (see {@link #getCorePoolSize}) and
 * maximumPoolSize (see {@link #getMaximumPoolSize}).
 *
 * When a new task is submitted in method {@link #execute}, and fewer
 * than corePoolSize threads are running, a new thread is created to
 * handle the request, even if other worker threads are idle.  If
 * there are more than corePoolSize but less than maximumPoolSize
 * threads running, a new thread will be created only if the queue is
 * full.  By setting corePoolSize and maximumPoolSize the same, you
 * create a fixed-size thread pool.
 * By setting maximumPoolSize to an
 * essentially unbounded value such as {@code Integer.MAX_VALUE}, you
 * allow the pool to accommodate an arbitrary number of concurrent
 * tasks.  Most typically, core and maximum pool sizes are set only
 * upon construction, but they may also be changed dynamically using
 * {@link #setCorePoolSize} and {@link #setMaximumPoolSize}. </dd>
 *
 * <dt>On-demand construction</dt>
 *
 * <dd>By default, even core threads are initially created and
 * started only when new tasks arrive, but this can be overridden
 * dynamically using method {@link #prestartCoreThread} or {@link
 * #prestartAllCoreThreads}.  You probably want to prestart threads if
 * you construct the pool with a non-empty queue. </dd>
 *
 * <dt>Creating new threads</dt>
 *
 * <dd>New threads are created using a {@link ThreadFactory}.  If not
 * otherwise specified, a {@link Executors#defaultThreadFactory} is
 * used, that creates threads to all be in the same {@link
 * ThreadGroup} and with the same {@code NORM_PRIORITY} priority and
 * non-daemon status.  By supplying a different ThreadFactory, you can
 * alter the thread's name, thread group, priority, daemon status,
 * etc.  If a {@code ThreadFactory} fails to create a thread when asked
 * by returning null from {@code newThread}, the executor will
 * continue, but might not be able to execute any tasks.  Threads
 * should possess the "modifyThread" {@code RuntimePermission}.
 * If
 * worker threads or other threads using the pool do not possess this
 * permission, service may be degraded: configuration changes may not
 * take effect in a timely manner, and a shutdown pool may remain in a
 * state in which termination is possible but not completed.</dd>
 *
 * <dt>Keep-alive times</dt>
 *
 * <dd>If the pool currently has more than corePoolSize threads,
 * excess threads will be terminated if they have been idle for more
 * than the keepAliveTime (see {@link #getKeepAliveTime}).  This
 * provides a means of reducing resource consumption when the pool is
 * not being actively used.  If the pool becomes more active later, new
 * threads will be constructed.  This parameter can also be changed
 * dynamically using method {@link #setKeepAliveTime}.  Using a value
 * of {@code Long.MAX_VALUE} {@link TimeUnit#NANOSECONDS} effectively
 * disables idle threads from ever terminating prior to shut down.  By
 * default, the keep-alive policy applies only when there are more
 * than corePoolSize threads.  But method {@link
 * #allowCoreThreadTimeOut(boolean)} can be used to apply this
 * time-out policy to core threads as well, so long as the
 * keepAliveTime value is non-zero. </dd>
 *
 * <dt>Queuing</dt>
 *
 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
 * submitted tasks.
 * The use of this queue interacts with pool sizing:
 *
 * <li> If fewer than corePoolSize threads are running, the Executor
 * always prefers adding a new thread
 * rather than queuing.</li>
 *
 * <li> If corePoolSize or more threads are running, the Executor
 * always prefers queuing a request rather than adding a new
 * thread.</li>
 *
 * <li> If a request cannot be queued, a new thread is created unless
 * this would exceed maximumPoolSize, in which case, the task will be
 * rejected.</li>
 *
 * There are three general strategies for queuing:
 *
 * <li> <em>Direct handoffs.</em> A good default choice for a work
 * queue is a {@link SynchronousQueue} that hands off tasks to threads
 * without otherwise holding them.  Here, an attempt to queue a task
 * will fail if no threads are immediately available to run it, so a
 * new thread will be constructed.  This policy avoids lockups when
 * handling sets of requests that might have internal dependencies.
 * Direct handoffs generally require unbounded maximumPoolSizes to
 * avoid rejection of new submitted tasks.  This in turn admits the
 * possibility of unbounded thread growth when commands continue to
 * arrive on average faster than they can be processed. </li>
 *
 * <li><em>Unbounded queues.</em> Using an unbounded queue (for
 * example a {@link LinkedBlockingQueue} without a predefined
 * capacity) will cause new tasks to wait in the queue when all
 * corePoolSize threads are busy.  Thus, no more than corePoolSize
 * threads will ever be created.  (And the value of the maximumPoolSize
 * therefore doesn't have any effect.)  This may be appropriate when
 * each task is completely independent of others, so tasks cannot
 * affect each other's execution; for example, in a web page server.
 * While this style of queuing can be useful in smoothing out
 * transient bursts of requests, it admits the possibility of
 * unbounded work queue growth when commands continue to arrive on
 * average faster than they can be processed. </li>
 *
 * <li><em>Bounded queues.</em> A bounded queue (for example, an
 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
 * used with finite maximumPoolSizes, but can be more difficult to
 * tune and control.  Queue sizes and maximum pool sizes may be traded
 * off for each other: Using large queues and small pools minimizes
 * CPU usage, OS resources, and context-switching overhead, but can
 * lead to artificially low throughput.  If tasks frequently block (for
 * example if they are I/O bound), a system may be able to schedule
 * time for more threads than you otherwise allow.  Use of small queues
 * generally requires larger pool sizes, which keeps CPUs busier but
 * may encounter unacceptable scheduling overhead, which also
 * decreases throughput. </li>
 *
 * <dt>Rejected tasks</dt>
 *
 * <dd>New tasks submitted in method {@link #execute} will be
 * <em>rejected</em> when the Executor has been shut down, and also
 * when the Executor uses finite bounds for both maximum threads and
 * work queue capacity, and is saturated.  In either case, the {@code
 * execute} method invokes the {@link
 * RejectedExecutionHandler#rejectedExecution} method of its {@link
 * RejectedExecutionHandler}.  Four predefined handler policies are
 * provided:
 *
 * <li> In the default {@link ThreadPoolExecutor.AbortPolicy}, the
 * handler throws a runtime {@link RejectedExecutionException} upon
 * rejection.</li>
 *
 * <li> In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
 * that invokes {@code execute} itself runs the task.  This provides a
 * simple feedback control mechanism that will slow down the rate that
 * new tasks are submitted.
 * </li>
 *
 * <li> In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
 * cannot be executed is simply dropped. </li>
 *
 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
 * executor is not shut down, the task at the head of the work queue
 * is dropped, and then execution is retried (which can fail again,
 * causing this to be repeated.) </li>
 *
 * It is possible to define and use other kinds of {@link
 * RejectedExecutionHandler} classes.  Doing so requires some care
 * especially when policies are designed to work only under particular
 * capacity or queuing policies. </dd>
 *
 * <dt>Hook methods</dt>
 *
 * <dd>This class provides {@code protected} overridable {@link
 * #beforeExecute} and {@link #afterExecute} methods that are called
 * before and after execution of each task.  These can be used to
 * manipulate the execution environment; for example, reinitializing
 * ThreadLocals, gathering statistics, or adding log
 * entries.  Additionally, method {@link #terminated} can be overridden
 * to perform any special processing that needs to be done once the
 * Executor has fully terminated.
 *
 * <p>If hook or callback methods throw exceptions, internal worker
 * threads may in turn fail and abruptly terminate.</dd>
 *
 * <dt>Queue maintenance</dt>
 *
 * <dd>Method {@link #getQueue} allows access to the work queue for
 * purposes of monitoring and debugging.  Use of this method for any
 * other purpose is strongly discouraged.  Two supplied methods,
 * {@link #remove} and {@link #purge} are available to assist in
 * storage reclamation when large numbers of queued tasks become
 * cancelled.</dd>
 *
 * <dt>Finalization</dt>
 *
 * <dd>A pool that is no longer referenced in a program <em>AND</em>
 * has no remaining threads will be {@code shutdown} automatically.
 * If
 * you would like to ensure that unreferenced pools are reclaimed even
 * if users forget to call {@link #shutdown}, then you must arrange
 * that unused threads eventually die, by setting appropriate
 * keep-alive times, using a lower bound of zero core threads and/or
 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
 *
 * <p><b>Extension example</b>. Most extensions of this class
 * override one or more of the protected hook methods. For example,
 * here is a subclass that adds a simple pause/resume feature:
 *
 * <pre> {@code
 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
 *   private boolean isPaused;
 *   private ReentrantLock pauseLock = new ReentrantLock();
 *   private Condition unpaused = pauseLock.newCondition();
 *
 *   public PausableThreadPoolExecutor(...) { super(...); }
 *
 *   protected void beforeExecute(Thread t, Runnable r) {
 *     super.beforeExecute(t, r);
 *     pauseLock.lock();
 *     try {
 *       while (isPaused) unpaused.await();
 *     } catch (InterruptedException ie) {
 *       t.interrupt();
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 *
 *   public void pause() {
 *     pauseLock.lock();
 *     try {
 *       isPaused = true;
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 *
 *   public void resume() {
 *     pauseLock.lock();
 *     try {
 *       isPaused = false;
 *       unpaused.signalAll();
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 * }}</pre>
 *
 * The main pool control state, ctl, is an atomic integer packing
 * two conceptual fields:
 *   workerCount, indicating the effective number of threads
 *   runState,    indicating whether running, shutting down etc
 *
 * In order to pack them into one int, we limit workerCount to
 * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
 * billion) otherwise representable. If this is ever an issue in
 * the future, the variable can be changed to be an AtomicLong,
 * and the shift/mask constants below adjusted. But until the need
 * arises, this code is a bit faster and simpler using an int.
 *
 * The workerCount is the number of workers that have been
 * permitted to start and not permitted to stop. The value may be
 * transiently different from the actual number of live threads,
 * for example when a ThreadFactory fails to create a thread when
 * asked, and when exiting threads are still performing
 * bookkeeping before terminating. The user-visible pool size is
 * reported as the current size of the workers set.
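The bit-packing just described can be sketched as a standalone illustration. The constants mirror the layout this class defines (29 count bits, run state in the high bits); the wrapper class name is ours, not the JDK's:

```java
public class CtlSketch {
    // Low 29 bits hold workerCount; high 3 bits hold runState.
    static final int COUNT_BITS = Integer.SIZE - 3;       // 29
    static final int CAPACITY   = (1 << COUNT_BITS) - 1;  // max workerCount

    // runState values, ordered so numeric comparison reflects lifecycle order.
    static final int RUNNING  = -1 << COUNT_BITS;
    static final int SHUTDOWN =  0 << COUNT_BITS;
    static final int STOP     =  1 << COUNT_BITS;

    static int runStateOf(int c)     { return c & ~CAPACITY; }
    static int workerCountOf(int c)  { return c & CAPACITY; }
    static int ctlOf(int rs, int wc) { return rs | wc; }

    public static void main(String[] args) {
        int c = ctlOf(RUNNING, 5);
        System.out.println(workerCountOf(c));          // prints 5
        System.out.println(runStateOf(c) == RUNNING);  // prints true
    }
}
```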
 * The runState provides the main lifecycle control, taking on values:
 *
 *   RUNNING:  Accept new tasks and process queued tasks
 *   SHUTDOWN: Don't accept new tasks, but process queued tasks
 *   STOP:     Don't accept new tasks, don't process queued tasks,
 *             and interrupt in-progress tasks
 *   TIDYING:  All tasks have terminated, workerCount is zero,
 *             the thread transitioning to state TIDYING
 *             will run the terminated() hook method
 *   TERMINATED: terminated() has completed
 *
 * The numerical order among these values matters, to allow
 * ordered comparisons. The runState monotonically increases over
 * time, but need not hit each state. The transitions are:
 *
 * RUNNING -> SHUTDOWN
 *    On invocation of shutdown(), perhaps implicitly in finalize()
 * (RUNNING or SHUTDOWN) -> STOP
 *    On invocation of shutdownNow()
 * SHUTDOWN -> TIDYING
 *    When both queue and pool are empty
 * STOP -> TIDYING
 *    When pool is empty
 * TIDYING -> TERMINATED
 *    When the terminated() hook method has completed
 *
 * Threads waiting in awaitTermination() will return when the
 * state reaches TERMINATED.
 *
 * Detecting the transition from SHUTDOWN to TIDYING is less
 * straightforward than you'd like because the queue may become
 * empty after non-empty and vice versa during SHUTDOWN state, but
 * we can only terminate if, after seeing that it is empty, we see
 * that workerCount is 0 (which sometimes entails a recheck -- see
 * below).

    // runState is stored in the high-order bits
    // Packing and unpacking ctl

 * Bit field accessors that don't require unpacking ctl.
 * These depend on the bit layout and on workerCount being never negative.
 *
 * Attempt to CAS-increment the workerCount field of ctl.
 *
 * Attempt to CAS-decrement the workerCount field of ctl.
 *
 * Decrements the workerCount field of ctl. This is called only on
 * abrupt termination of a thread (see processWorkerExit). Other
 * decrements are performed within getTask.
 *
 * The queue used for holding tasks and handing off to worker
 * threads.
 * We do not require that workQueue.poll() returning
 * null necessarily means that workQueue.isEmpty(), so rely
 * solely on isEmpty to see if the queue is empty (which we must
 * do for example when deciding whether to transition from
 * SHUTDOWN to TIDYING). This accommodates special-purpose
 * queues such as DelayQueues for which poll() is allowed to
 * return null even if it may later return non-null when delays
 * expire.
 *
 * Lock held on access to workers set and related bookkeeping.
 * While we could use a concurrent set of some sort, it turns out
 * to be generally preferable to use a lock. Among the reasons is
 * that this serializes interruptIdleWorkers, which avoids
 * unnecessary interrupt storms, especially during shutdown.
 * Otherwise exiting threads would concurrently interrupt those
 * that have not yet interrupted. It also simplifies some of the
 * associated statistics bookkeeping of largestPoolSize etc. We
 * also hold mainLock on shutdown and shutdownNow, for the sake of
 * ensuring workers set is stable while separately checking
 * permission to interrupt and actually interrupting.
 *
 * Set containing all worker threads in pool. Accessed only when
 * holding mainLock.
 *
 * Wait condition to support awaitTermination.
 *
 * Tracks largest attained pool size. Accessed only under
 * mainLock.
 *
 * Counter for completed tasks. Updated only on termination of
 * worker threads. Accessed only under mainLock.
 *
 * All user control parameters are declared as volatiles so that
 * ongoing actions are based on freshest values, but without need
 * for locking, since no internal invariants depend on them
 * changing synchronously with respect to other actions.
 *
 * Factory for new threads. All threads are created using this
 * factory (via method addWorker). All callers must be prepared
 * for addWorker to fail, which may reflect a system or user's
 * policy limiting the number of threads. Even though it is not
 * treated as an error, failure to create threads may result in
 * new tasks being rejected or existing ones remaining stuck in
 * the queue.
 * On the other hand, no special precautions exist to
 * handle OutOfMemoryErrors that might be thrown while trying to
 * create threads, since there is generally no recourse from
 * within this code.
 *
 * Handler called when saturated or shutdown in execute.
 *
 * Timeout in nanoseconds for idle threads waiting for work.
 * Threads use this timeout when there are more than corePoolSize
 * present or if allowCoreThreadTimeOut. Otherwise they wait
 * forever for new work.
 *
 * If false (default), core threads stay alive even when idle.
 * If true, core threads use keepAliveTime to time out waiting
 * for work.
 *
 * Core pool size is the minimum number of workers to keep alive
 * (and not allow to time out etc) unless allowCoreThreadTimeOut
 * is set, in which case the minimum is zero.
 *
 * Maximum pool size. Note that the actual maximum is internally
 * bounded by CAPACITY.
 *
 * The default rejected execution handler.
 *
 * Permission required for callers of shutdown and shutdownNow.
 * We additionally require (see checkShutdownAccess) that callers
 * have permission to actually interrupt threads in the worker set
 * (as governed by Thread.interrupt, which relies on
 * ThreadGroup.checkAccess, which in turn relies on
 * SecurityManager.checkAccess). Shutdowns are attempted only if
 * these checks pass.
 *
 * All actual invocations of Thread.interrupt (see
 * interruptIdleWorkers and interruptWorkers) ignore
 * SecurityExceptions, meaning that the attempted interrupts
 * silently fail. In the case of shutdown, they should not fail
 * unless the SecurityManager has inconsistent policies, sometimes
 * allowing access to a thread and sometimes not. In such cases,
 * failure to actually interrupt threads may disable or delay full
 * termination. Other uses of interruptIdleWorkers are advisory,
 * and failure to actually interrupt will merely delay response to
 * configuration changes so is not handled exceptionally.
 *
 * Class Worker mainly maintains interrupt control state for
 * threads running tasks, along with other minor bookkeeping.
 * This class opportunistically extends AbstractQueuedSynchronizer
 * to simplify acquiring and releasing a lock surrounding each
 * task execution. This protects against interrupts that are
 * intended to wake up a worker thread waiting for a task from
 * instead interrupting a task being run. We implement a simple
 * non-reentrant mutual exclusion lock rather than use ReentrantLock
 * because we do not want worker tasks to be able to reacquire the
 * lock when they invoke pool control methods like setCorePoolSize.
 *
 * This class will never be serialized, but we provide a
 * serialVersionUID to suppress a javac warning.

    /** Thread this worker is running in.  Null if factory fails. */
    /** Initial task to run.  Possibly null. */
    /** Per-thread task counter */

 * Creates with given first task and thread from ThreadFactory.
 * @param firstTask the first task (null if none)

    /** Delegates main run loop to outer runWorker */

    // The value 0 represents the unlocked state.
    // The value 1 represents the locked state.

 * Methods for setting control state
 *
 * Transitions runState to given target, or leaves it alone if
 * already at least the given target.
 *
 * @param targetState the desired state, either SHUTDOWN or STOP
 *        (but not TIDYING or TERMINATED -- use tryTerminate for that)
 *
 * Transitions to TERMINATED state if either (SHUTDOWN and pool
 * and queue empty) or (STOP and pool empty). If otherwise
 * eligible to terminate but workerCount is nonzero, interrupts an
 * idle worker to ensure that shutdown signals propagate. This
 * method must be called following any action that might make
 * termination possible -- reducing worker count or removing tasks
 * from the queue during shutdown. The method is non-private to
 * allow access from ScheduledThreadPoolExecutor.

    // else retry on failed CAS

 * Methods for controlling interrupts to worker threads.
 *
 * If there is a security manager, makes sure caller has
 * permission to shut down threads in general (see shutdownPerm).
 * If this passes, additionally makes sure the caller is allowed
 * to interrupt each worker thread. This might not be true even if
 * first check passed, if the SecurityManager treats some threads
 * specially.
 *
 * Interrupts all threads, even if active. Ignores SecurityExceptions
 * (in which case some threads may remain uninterrupted).
 *
 * Interrupts threads that might be waiting for tasks (as
 * indicated by not being locked) so they can check for
 * termination or configuration changes. Ignores
 * SecurityExceptions (in which case some threads may remain
 * uninterrupted).
 *
 * @param onlyOne If true, interrupt at most one worker. This is
 * called only from tryTerminate when termination is otherwise
 * enabled but there are still other workers. In this case, at
 * most one waiting worker is interrupted to propagate shutdown
 * signals in case all threads are currently waiting.
 * Interrupting any arbitrary thread ensures that newly arriving
 * workers since shutdown began will also eventually exit.
 * To guarantee eventual termination, it suffices to always
 * interrupt only one idle worker, but shutdown() interrupts all
 * idle workers so that redundant workers exit promptly, not
 * waiting for a straggler task to finish.
 *
 * Common form of interruptIdleWorkers, to avoid having to
 * remember what the boolean argument means.

    private static final boolean ONLY_ONE = true;
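The interrupt-only-idle-workers pattern described above can be sketched in a self-contained form. Note the simplifications: the real class uses a non-reentrant AbstractQueuedSynchronizer lock (as its comments explain) and workers hold their lock while running a task; here `Worker` extends `ReentrantLock` purely for brevity, and the names mirror but are not the JDK's implementation:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptSketch {
    // Simplified stand-in: real workers lock themselves while running a task,
    // so tryLock() succeeding means the worker is idle.
    static class Worker extends ReentrantLock {
        final Thread thread;
        Worker(Runnable r) { thread = new Thread(r); }
    }

    final ReentrantLock mainLock = new ReentrantLock();
    final Set<Worker> workers = new HashSet<>();

    void interruptIdleWorkers(boolean onlyOne) {
        mainLock.lock(); // serializes interrupts, avoiding interrupt storms
        try {
            for (Worker w : workers) {
                Thread t = w.thread;
                if (!t.isInterrupted() && w.tryLock()) { // idle check
                    try { t.interrupt(); }
                    catch (SecurityException ignore) { }
                    finally { w.unlock(); }
                }
                if (onlyOne)
                    break; // interrupt at most one worker
            }
        } finally {
            mainLock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        InterruptSketch pool = new InterruptSketch();
        Worker w = new Worker(() -> {
            try { Thread.sleep(5000); }
            catch (InterruptedException e) { System.out.println("interrupted"); }
        });
        pool.workers.add(w);
        w.thread.start();
        Thread.sleep(200);                // let the worker reach sleep()
        pool.interruptIdleWorkers(true);  // wakes the idle worker
        w.thread.join();                  // prints "interrupted"
    }
}
```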
 * Ensures that unless the pool is stopping, the current thread
 * does not have its interrupt set. This requires a double-check
 * of state in case the interrupt was cleared concurrently with a
 * shutdownNow -- if so, the interrupt is re-enabled.
 *
 * Misc utilities, most of which are also exported to
 * ScheduledThreadPoolExecutor.
 *
 * Invokes the rejected execution handler for the given command.
 * Package-protected for use by ScheduledThreadPoolExecutor.
 *
 * Performs any further cleanup following run state transition on
 * invocation of shutdown. A no-op here, but used by
 * ScheduledThreadPoolExecutor to cancel delayed tasks.
 *
 * State check needed by ScheduledThreadPoolExecutor to
 * enable running tasks during shutdown.
 *
 * @param shutdownOK true if should return true if SHUTDOWN
 *
 * Drains the task queue into a new list, normally using
 * drainTo. But if the queue is a DelayQueue or any other kind of
 * queue for which poll or drainTo may fail to remove some
 * elements, it deletes them one by one.
 *
 * Methods for creating, running and cleaning up after workers
 *
 * Checks if a new worker can be added with respect to current
 * pool state and the given bound (either core or maximum). If so,
 * the worker count is adjusted accordingly, and, if possible, a
 * new worker is created and started running firstTask as its
 * first task. This method returns false if the pool is stopped or
 * eligible to shut down. It also returns false if the thread
 * factory fails to create a thread when asked, which requires a
 * backout of workerCount, and a recheck for termination, in case
 * the existence of this worker was holding up termination.
 *
 * @param firstTask the task the new thread should run first (or
 * null if none). Workers are created with an initial first task
 * (in method execute()) to bypass queuing when there are fewer
 * than corePoolSize threads (in which case we always start one),
 * or when the queue is full (in which case we must bypass queue).
 * Initially idle threads are usually created via
 * prestartCoreThread or to replace other dying workers.
 *
 * @param core if true use corePoolSize as bound, else
 * maximumPoolSize. (A boolean indicator is used here rather than a
 * value to ensure reads of fresh values after checking other pool
 * state).
 * @return true if successful

    // Check if queue empty only if necessary.
    // else CAS failed due to workerCount change; retry inner loop
    // Recheck while holding lock.
    // Back out on ThreadFactory failure or if
    // shut down before lock acquired.
    // It is possible (but unlikely) for a thread to have been
    // added to workers, but not yet started, during transition to
    // STOP, which could result in a rare missed interrupt,
    // because Thread.interrupt is not guaranteed to have any effect
    // on a non-yet-started Thread (see Thread#interrupt).

 * Performs cleanup and bookkeeping for a dying worker. Called
 * only from worker threads. Unless completedAbruptly is set,
 * assumes that workerCount has already been adjusted to account
 * for exit. This method removes thread from worker set, and
 * possibly terminates the pool or replaces the worker if either
 * it exited due to user task exception or if fewer than
 * corePoolSize workers are running or queue is non-empty but
 * there are no workers.
 *
 * @param completedAbruptly if the worker died due to user exception

                return; // replacement not needed

 * Performs blocking or timed wait for a task, depending on
 * current configuration settings, or returns null if this worker
 * must exit because of any of:
 * 1. There are more than maximumPoolSize workers (due to
 *    a call to setMaximumPoolSize).
 * 2. The pool is stopped.
 * 3. The pool is shutdown and the queue is empty.
 * 4. This worker timed out waiting for a task, and timed-out
 *    workers are subject to termination (that is,
 *    {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
 *    both before and after the timed wait.
 *
 * @return task, or null if the worker must exit, in which case
 *         workerCount is decremented

        boolean timedOut = false; // Did the last poll() time out?

        // Check if queue empty only if necessary.
        boolean timed;      // Are workers subject to culling?
        // else CAS failed due to workerCount change; retry inner loop

 * Main worker run loop. Repeatedly gets tasks from queue and
 * executes them, while coping with a number of issues:
 *
 * 1. We may start out with an initial task, in which case we
 * don't need to get the first one. Otherwise, as long as pool is
 * running, we get tasks from getTask. If it returns null then the
 * worker exits due to changed pool state or configuration
 * parameters. Other exits result from exception throws in
 * external code, in which case completedAbruptly holds, which
 * usually leads processWorkerExit to replace this thread.
 *
 * 2. Before running any task, the lock is acquired to prevent
 * other pool interrupts while the task is executing, and
 * clearInterruptsForTaskRun called to ensure that unless pool is
 * stopping, this thread does not have its interrupt set.
 *
 * 3. Each task run is preceded by a call to beforeExecute, which
 * might throw an exception, in which case we cause thread to die
 * (breaking loop with completedAbruptly true) without processing
 * the task.
 *
 * 4. Assuming beforeExecute completes normally, we run the task,
 * gathering any of its thrown exceptions to send to
 * afterExecute. We separately handle RuntimeException, Error
 * (both of which the specs guarantee that we trap) and arbitrary
 * Throwables. Because we cannot rethrow Throwables within
 * Runnable.run, we wrap them within Errors on the way out (to the
 * thread's UncaughtExceptionHandler). Any thrown exception also
 * conservatively causes thread to die.
 *
 * 5. After task.run completes, we call afterExecute, which may
 * also throw an exception, which will also cause thread to
 * die. According to JLS Sec 14.20, this exception is the one that
 * will be in effect even if task.run throws.
 * The net effect of the exception mechanics is that afterExecute
 * and the thread's UncaughtExceptionHandler have as accurate
 * information as we can provide about any problems encountered by
 * user code.

    // Public constructors and methods

 * Creates a new {@code ThreadPoolExecutor} with the given initial
 * parameters and default thread factory and rejected execution handler.
 * It may be more convenient to use one of the {@link Executors} factory
 * methods instead of this general purpose constructor.
 *
 * @param corePoolSize the number of threads to keep in the pool, even
 *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
 * @param maximumPoolSize the maximum number of threads to allow in the
 *        pool
 * @param keepAliveTime when the number of threads is greater than
 *        the core, this is the maximum time that excess idle threads
 *        will wait for new tasks before terminating.
 * @param unit the time unit for the {@code keepAliveTime} argument
 * @param workQueue the queue to use for holding tasks before they are
 *        executed.  This queue will hold only the {@code Runnable}
 *        tasks submitted by the {@code execute} method.
 * @throws IllegalArgumentException if one of the following holds:<br>
 *         {@code corePoolSize < 0}<br>
 *         {@code keepAliveTime < 0}<br>
 *         {@code maximumPoolSize <= 0}<br>
 *         {@code maximumPoolSize < corePoolSize}
 * @throws NullPointerException if {@code workQueue} is null
 *
 * Creates a new {@code ThreadPoolExecutor} with the given initial
 * parameters and default rejected execution handler.
 *
 * @param corePoolSize the number of threads to keep in the pool, even
 *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
 * @param maximumPoolSize the maximum number of threads to allow in the
 *        pool
 * @param keepAliveTime when the number of threads is greater than
 *        the core, this is the maximum time that excess idle threads
 *        will wait for new tasks before terminating.
 * @param unit the time unit for the {@code keepAliveTime} argument
 * @param workQueue the queue to use for holding tasks before they are
 *        executed.  This queue will hold only the {@code Runnable}
 *        tasks submitted by the {@code execute} method.
 * @param threadFactory the factory to use when the executor
 *        creates a new thread
 * @throws IllegalArgumentException if one of the following holds:<br>
 *         {@code corePoolSize < 0}<br>
 *         {@code keepAliveTime < 0}<br>
 *         {@code maximumPoolSize <= 0}<br>
 *         {@code maximumPoolSize < corePoolSize}
 * @throws NullPointerException if {@code workQueue}
 *         or {@code threadFactory} is null
 *
 * Creates a new {@code ThreadPoolExecutor} with the given initial
 * parameters and default thread factory.
 *
 * @param corePoolSize the number of threads to keep in the pool, even
 *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
 * @param maximumPoolSize the maximum number of threads to allow in the
 *        pool
 * @param keepAliveTime when the number of threads is greater than
 *        the core, this is the maximum time that excess idle threads
 *        will wait for new tasks before terminating.
 * @param unit the time unit for the {@code keepAliveTime} argument
 * @param workQueue the queue to use for holding tasks before they are
 *        executed.  This queue will hold only the {@code Runnable}
 *        tasks submitted by the {@code execute} method.
     * @param handler the handler to use when execution is blocked
     *        because the thread bounds and queue capacities are reached
     * @throws IllegalArgumentException if one of the following holds:<br>
     *         {@code corePoolSize < 0}<br>
     *         {@code keepAliveTime < 0}<br>
     *         {@code maximumPoolSize <= 0}<br>
     *         {@code maximumPoolSize < corePoolSize}
     * @throws NullPointerException if {@code workQueue}
     *         or {@code handler} is null
     */

    /**
     * Creates a new {@code ThreadPoolExecutor} with the given initial
     * parameters.
     *
     * @param corePoolSize the number of threads to keep in the pool, even
     *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
     * @param maximumPoolSize the maximum number of threads to allow in the
     *        pool
     * @param keepAliveTime when the number of threads is greater than
     *        the core, this is the maximum time that excess idle threads
     *        will wait for new tasks before terminating.
     * @param unit the time unit for the {@code keepAliveTime} argument
     * @param workQueue the queue to use for holding tasks before they are
     *        executed.  This queue will hold only the {@code Runnable}
     *        tasks submitted by the {@code execute} method.
     * @param threadFactory the factory to use when the executor
     *        creates a new thread
     * @param handler the handler to use when execution is blocked
     *        because the thread bounds and queue capacities are reached
     * @throws IllegalArgumentException if one of the following holds:<br>
     *         {@code corePoolSize < 0}<br>
     *         {@code keepAliveTime < 0}<br>
     *         {@code maximumPoolSize <= 0}<br>
     *         {@code maximumPoolSize < corePoolSize}
     * @throws NullPointerException if {@code workQueue}
     *         or {@code threadFactory} or {@code handler} is null
     */

    /**
     * Executes the given task sometime in the future.  The task
     * may execute in a new thread or in an existing pooled thread.
     *
     * <p>If the task cannot be submitted for execution, either because this
     * executor has been shutdown or because its capacity has been reached,
     * the task is handled by the current {@code RejectedExecutionHandler}.
     * @param command the task to execute
     * @throws RejectedExecutionException at discretion of
     *         {@code RejectedExecutionHandler}, if the task
     *         cannot be accepted for execution
     * @throws NullPointerException if {@code command} is null
     */

    /*
     * Proceed in 3 steps:
     *
     * 1. If fewer than corePoolSize threads are running, try to
     * start a new thread with the given command as its first
     * task.  The call to addWorker atomically checks runState and
     * workerCount, and so prevents false alarms that would add
     * threads when it shouldn't, by returning false.
     *
     * 2. If a task can be successfully queued, then we still need
     * to double-check whether we should have added a thread
     * (because existing ones died since last checking) or that
     * the pool shut down since entry into this method.  So we
     * recheck state and if necessary roll back the enqueuing if
     * stopped, or start a new thread if there are none.
     *
     * 3. If we cannot queue the task, then we try to add a new
     * thread.  If that fails, we know we are shut down or saturated
     * and so reject the task.
     */

    /**
     * Initiates an orderly shutdown in which previously submitted
     * tasks are executed, but no new tasks will be accepted.
     * Invocation has no additional effect if already shut down.
     *
     * <p>This method does not wait for previously submitted tasks to
     * complete execution.  Use {@link #awaitTermination awaitTermination}
     * to do that.
     *
     * @throws SecurityException {@inheritDoc}
     */
            onShutdown();
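The orderly-shutdown contract described above is typically paired with {@code awaitTermination}, falling back to {@code shutdownNow} on timeout. A minimal standalone sketch (the class name and task bodies are illustrative, not part of this file):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        for (int i = 0; i < 8; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("task " + id));
        }
        pool.shutdown();                  // queued tasks still run; no new tasks accepted
        if (!pool.awaitTermination(10, TimeUnit.SECONDS)) {
            pool.shutdownNow();           // best-effort cancel via Thread.interrupt
        }
        System.out.println("terminated: " + pool.isTerminated());
    }
}
```

Calling {@code shutdownNow} only after a bounded wait gives running tasks a chance to finish before interruption is attempted.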
            // onShutdown() above is a hook for ScheduledThreadPoolExecutor

    /**
     * Attempts to stop all actively executing tasks, halts the
     * processing of waiting tasks, and returns a list of the tasks
     * that were awaiting execution.  These tasks are drained (removed)
     * from the task queue upon return from this method.
     *
     * <p>This method does not wait for actively executing tasks to
     * terminate.  Use {@link #awaitTermination awaitTermination} to
     * do that.
     *
     * <p>There are no guarantees beyond best-effort attempts to stop
     * processing actively executing tasks.  This implementation
     * cancels tasks via {@link Thread#interrupt}, so any task that
     * fails to respond to interrupts may never terminate.
     *
     * @throws SecurityException {@inheritDoc}
     */

    /**
     * Returns true if this executor is in the process of terminating
     * after {@link #shutdown} or {@link #shutdownNow} but has not
     * completely terminated.  This method may be useful for
     * debugging.  A return of {@code true} reported a sufficient
     * period after shutdown may indicate that submitted tasks have
     * ignored or suppressed interruption, causing this executor not
     * to properly terminate.
     *
     * @return true if terminating but not yet terminated
     */

    /**
     * Invokes {@code shutdown} when this executor is no longer
     * referenced and it has no threads.
     */

    /**
     * Sets the thread factory used to create new threads.
     *
     * @param threadFactory the new thread factory
     * @throws NullPointerException if threadFactory is null
     * @see #getThreadFactory
     */

    /**
     * Returns the thread factory used to create new threads.
     *
     * @return the current thread factory
     * @see #setThreadFactory
     */

    /**
     * Sets a new handler for unexecutable tasks.
     *
     * @param handler the new handler
     * @throws NullPointerException if handler is null
     * @see #getRejectedExecutionHandler
     */

    /**
     * Returns the current handler for unexecutable tasks.
     *
     * @return the current handler
     * @see #setRejectedExecutionHandler
     */

    /**
     * Sets the core number of threads.  This overrides any value set
     * in the constructor.  If the new value is smaller than the
     * current value, excess existing threads will be terminated when
     * they next become idle.
     * If larger, new threads will, if needed,
     * be started to execute any queued tasks.
     *
     * @param corePoolSize the new core size
     * @throws IllegalArgumentException if {@code corePoolSize < 0}
     * @see #getCorePoolSize
     */
            // We don't really know how many new threads are "needed".
            // As a heuristic, prestart enough new workers (up to new
            // core size) to handle the current number of tasks in
            // queue, but stop if queue becomes empty while doing so.

    /**
     * Returns the core number of threads.
     *
     * @return the core number of threads
     * @see #setCorePoolSize
     */

    /**
     * Starts a core thread, causing it to idly wait for work.  This
     * overrides the default policy of starting core threads only when
     * new tasks are executed.  This method will return {@code false}
     * if all core threads have already been started.
     *
     * @return {@code true} if a thread was started
     */

    /**
     * Starts all core threads, causing them to idly wait for work.  This
     * overrides the default policy of starting core threads only when
     * new tasks are executed.
     *
     * @return the number of threads started
     */

    /**
     * Returns true if this pool allows core threads to time out and
     * terminate if no tasks arrive within the keepAlive time, being
     * replaced if needed when new tasks arrive.  When true, the same
     * keep-alive policy applying to non-core threads applies also to
     * core threads.  When false (the default), core threads are never
     * terminated due to lack of incoming tasks.
     *
     * @return {@code true} if core threads are allowed to time out,
     *         else {@code false}
     */

    /**
     * Sets the policy governing whether core threads may time out and
     * terminate if no tasks arrive within the keep-alive time, being
     * replaced if needed when new tasks arrive.  When false, core
     * threads are never terminated due to lack of incoming
     * tasks.  When true, the same keep-alive policy applying to
     * non-core threads applies also to core threads.  To avoid
     * continual thread replacement, the keep-alive time must be
     * greater than zero when setting {@code true}.  This method
     * should in general be called before the pool is actively used.
     * @param value {@code true} if should time out, else {@code false}
     * @throws IllegalArgumentException if value is {@code true}
     *         and the current keep-alive time is not greater than zero
     */

    /**
     * Sets the maximum allowed number of threads.  This overrides any
     * value set in the constructor.  If the new value is smaller than
     * the current value, excess existing threads will be
     * terminated when they next become idle.
     *
     * @param maximumPoolSize the new maximum
     * @throws IllegalArgumentException if the new maximum is
     *         less than or equal to zero, or
     *         less than the {@linkplain #getCorePoolSize core pool size}
     * @see #getMaximumPoolSize
     */

    /**
     * Returns the maximum allowed number of threads.
     *
     * @return the maximum allowed number of threads
     * @see #setMaximumPoolSize
     */

    /**
     * Sets the time limit for which threads may remain idle before
     * being terminated.  If there are more than the core number of
     * threads currently in the pool, after waiting this amount of
     * time without processing a task, excess threads will be
     * terminated.  This overrides any value set in the constructor.
     *
     * @param time the time to wait.  A time value of zero will cause
     *        excess threads to terminate immediately after executing tasks.
     * @param unit the time unit of the {@code time} argument
     * @throws IllegalArgumentException if {@code time} less than zero or
     *         if {@code time} is zero and {@code allowsCoreThreadTimeOut}
     * @see #getKeepAliveTime
     */

    /**
     * Returns the thread keep-alive time, which is the amount of time
     * that threads in excess of the core pool size may remain
     * idle before being terminated.
     *
     * @param unit the desired time unit of the result
     * @return the time limit
     * @see #setKeepAliveTime
     */

    /* User-level queue utilities */

    /**
     * Returns the task queue used by this executor.  Access to the
     * task queue is intended primarily for debugging and monitoring.
     * This queue may be in active use.  Retrieving the task queue
     * does not prevent queued tasks from executing.
     *
     * @return the task queue
     */
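The runtime-tuning setters and the queue accessor documented above can be combined as in this minimal sketch (class name illustrative); note the queue reference is a live view meant for monitoring, not for manipulating tasks:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TuningDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

        pool.setKeepAliveTime(5, TimeUnit.SECONDS); // shrink the idle timeout at runtime
        pool.allowCoreThreadTimeOut(true);          // keep-alive now applies to core threads too

        // Snapshot some pool statistics; queue size is approximate while in use.
        System.out.println("keep-alive (s): " + pool.getKeepAliveTime(TimeUnit.SECONDS));
        System.out.println("queued tasks:   " + pool.getQueue().size());

        pool.shutdown();
    }
}
```

Because {@code allowCoreThreadTimeOut(true)} requires a positive keep-alive time, the timeout is set first here.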
    /**
     * Removes this task from the executor's internal queue if it is
     * present, thus causing it not to be run if it has not already
     * started.
     *
     * <p>This method may be useful as one part of a cancellation
     * scheme.  It may fail to remove tasks that have been converted
     * into other forms before being placed on the internal queue.  For
     * example, a task entered using {@code submit} might be
     * converted into a form that maintains {@code Future} status.
     * However, in such cases, method {@link #purge} may be used to
     * remove those Futures that have been cancelled.
     *
     * @param task the task to remove
     * @return true if the task was removed
     */

    /**
     * Tries to remove from the work queue all {@link Future}
     * tasks that have been cancelled.  This method can be useful as a
     * storage reclamation operation, that has no other impact on
     * functionality.  Cancelled tasks are never executed, but may
     * accumulate in work queues until worker threads can actively
     * remove them.  Invoking this method instead tries to remove them now.
     * However, this method may fail to remove tasks in
     * the presence of interference by other threads.
     */
            // Take slow path if we encounter interference during traversal.
            // Make copy for traversal and call remove for cancelled entries.
            // The slow path is more likely to be O(N*N).

    /**
     * Returns the current number of threads in the pool.
     *
     * @return the number of threads
     */
            // Remove rare and surprising possibility of
            // isTerminated() && getPoolSize() > 0

    /**
     * Returns the approximate number of threads that are actively
     * executing tasks.
     *
     * @return the number of threads
     */

    /**
     * Returns the largest number of threads that have ever
     * simultaneously been in the pool.
     *
     * @return the number of threads
     */

    /**
     * Returns the approximate total number of tasks that have ever been
     * scheduled for execution.  Because the states of tasks and
     * threads may change dynamically during computation, the returned
     * value is only an approximation.
     *
     * @return the number of tasks
     */

    /**
     * Returns the approximate total number of tasks that have
     * completed execution.
     * Because the states of tasks and threads
     * may change dynamically during computation, the returned value
     * is only an approximation, but one that does not ever decrease
     * across successive calls.
     *
     * @return the number of tasks
     */

    /**
     * Method invoked prior to executing the given Runnable in the
     * given thread.  This method is invoked by thread {@code t} that
     * will execute task {@code r}, and may be used to re-initialize
     * ThreadLocals, or to perform logging.
     *
     * <p>This implementation does nothing, but may be customized in
     * subclasses.  Note: To properly nest multiple overridings, subclasses
     * should generally invoke {@code super.beforeExecute} at the end of
     * this method.
     *
     * @param t the thread that will run task {@code r}
     * @param r the task that will be executed
     */

    /**
     * Method invoked upon completion of execution of the given Runnable.
     * This method is invoked by the thread that executed the task.  If
     * non-null, the Throwable is the uncaught {@code RuntimeException}
     * or {@code Error} that caused execution to terminate abruptly.
     *
     * <p>This implementation does nothing, but may be customized in
     * subclasses.  Note: To properly nest multiple overridings, subclasses
     * should generally invoke {@code super.afterExecute} at the
     * beginning of this method.
     *
     * <p><b>Note:</b> When actions are enclosed in tasks (such as
     * {@link FutureTask}) either explicitly or via methods such as
     * {@code submit}, these task objects catch and maintain
     * computational exceptions, and so they do not cause abrupt
     * termination, and the internal exceptions are <em>not</em>
     * passed to this method.
     * If you would like to trap both kinds of
     * failures in this method, you can further probe for such cases,
     * as in this sample subclass that prints either the direct cause
     * or the underlying exception if a task has been aborted:
     *
     * <pre> {@code
     * class ExtendedExecutor extends ThreadPoolExecutor {
     *   // ...
     *   protected void afterExecute(Runnable r, Throwable t) {
     *     super.afterExecute(r, t);
     *     if (t == null && r instanceof Future<?>) {
     *       try {
     *         Object result = ((Future<?>) r).get();
     *       } catch (CancellationException ce) {
     *           t = ce;
     *       } catch (ExecutionException ee) {
     *           t = ee.getCause();
     *       } catch (InterruptedException ie) {
     *           Thread.currentThread().interrupt(); // ignore/reset
     *       }
     *     }
     *     if (t != null)
     *       System.out.println(t);
     *   }
     * }}</pre>
     *
     * @param r the runnable that has completed
     * @param t the exception that caused termination, or null if
     *        execution completed normally
     */

    /**
     * Method invoked when the Executor has terminated.  Default
     * implementation does nothing.  Note: To properly nest multiple
     * overridings, subclasses should generally invoke
     * {@code super.terminated} within this method.
     */

    /* Predefined RejectedExecutionHandlers */

    /**
     * A handler for rejected tasks that runs the rejected task
     * directly in the calling thread of the {@code execute} method,
     * unless the executor has been shut down, in which case the task
     * is discarded.
     */

        /**
         * Creates a {@code CallerRunsPolicy}.
         */

        /**
         * Executes task r in the caller's thread, unless the executor
         * has been shut down, in which case the task is discarded.
         *
         * @param r the runnable task requested to be executed
         * @param e the executor attempting to execute this task
         */

    /**
     * A handler for rejected tasks that throws a
     * {@code RejectedExecutionException}.
     */

        /**
         * Creates an {@code AbortPolicy}.
         */

        /**
         * Always throws RejectedExecutionException.
         *
         * @param r the runnable task requested to be executed
         * @param e the executor attempting to execute this task
         * @throws RejectedExecutionException always
         */

    /**
     * A handler for rejected tasks that silently discards the
     * rejected task.
     */

        /**
         * Creates a {@code DiscardPolicy}.
         */

        /**
         * Does nothing, which has the effect of discarding task r.
         *
         * @param r the runnable task requested to be executed
         * @param e the executor attempting to execute this task
         */

    /**
     * A handler for rejected tasks that discards the oldest unhandled
     * request and then retries {@code execute}, unless the executor
     * is shut down, in which case the task is discarded.
     */

        /**
         * Creates a {@code DiscardOldestPolicy} for the given executor.
         */

        /**
         * Obtains and ignores the next task that the executor
         * would otherwise execute, if one is immediately available,
         * and then retries execution of task r, unless the executor
         * is shut down, in which case task r is instead discarded.
         *
         * @param r the runnable task requested to be executed
         * @param e the executor attempting to execute this task
         */
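The saturation behavior of these predefined handlers can be observed deterministically with one worker and a capacity-1 queue. A minimal sketch using {@code CallerRunsPolicy} (class name and latch-based task bodies are illustrative): the first task occupies the only worker, the second fills the queue, so the third submission is rejected and therefore runs in the submitting thread.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) throws InterruptedException {
        // One worker thread, one queue slot: a third concurrent
        // submission must be rejected and handed to CallerRunsPolicy.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());

        CountDownLatch hold = new CountDownLatch(1);
        pool.execute(() -> {                         // occupies the only worker
            try { hold.await(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        pool.execute(() -> {});                      // fills the single queue slot
        pool.execute(() ->                           // rejected -> executed by the caller
            System.out.println("ran in " + Thread.currentThread().getName()));
        // prints "ran in main"

        hold.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

Swapping in {@code AbortPolicy} would instead make the third {@code execute} throw {@code RejectedExecutionException}, and {@code DiscardPolicy} would drop it silently.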