/*
 * Copyright (c) 1998, 2012, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

// o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o
//
// Native Monitor-Mutex locking - theory of operations
//
// * Native Monitors are completely unrelated to Java-level monitors,
//   although the "back-end" slow-path implementations share a common lineage.
//   Native Monitors do *not* support nesting or recursion but otherwise
//   they're basically Hoare-flavor monitors.
//
// * A thread acquires ownership of a Monitor/Mutex by CASing the LockByte
//   in the _LockWord from zero to non-zero.  Note that the _Owner field
//   is advisory and is used only to verify that the thread calling unlock()
//   is indeed the last thread to have acquired the lock.
//
// * Contending threads "push" themselves onto the front of the contention
//   queue -- the cxq -- and then spin or park.
//   The _LockWord contains the LockByte as well as the pointer to the head
//   of the cxq.  Colocating the LockByte with the cxq precludes certain races.
//
// * Using a separately addressable LockByte allows for CAS:MEMBAR or CAS:0
//   idioms.  We currently use MEMBAR in the uncontended unlock() path, as
//   MEMBAR often has less latency than CAS.  If warranted, we could switch to
//   a CAS:0 mode, using timers to close the resultant race, as is done
//   with Java Monitors in synchronizer.cpp.
//
//   See the following for a discussion of the relative cost of atomics (CAS),
//   MEMBAR, and ways to eliminate such instructions from the common-case paths:
//
// * Overall goals - desiderata
//   1. Minimize context switching
//   2. Minimize lock migration
//   3. Minimize CPI -- affinity and locality
//   4. Minimize the execution of high-latency instructions such as CAS or MEMBAR
//   5. Minimize outer lock hold times
//   6. Behave gracefully on a loaded system
//
// * Thread flow and list residency:
//
//   Contention queue --> EntryList --> OnDeck --> Owner --> !Owner
//   [..resident on monitor list..]
//   [...........contending..................]
//
//   -- The contention queue (cxq) contains recently-arrived threads (RATs).
//      Threads on the cxq eventually drain into the EntryList.
//   -- Invariant: a thread appears on at most one list -- cxq, EntryList
//      or WaitSet -- at any one time.
//   -- For a given monitor there can be at most one "OnDeck" thread at any
//      given time but if need be this particular invariant could be relaxed.
//
// * The WaitSet and EntryList linked lists are composed of ParkEvents.
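//
//   Illustrative sketch (not the actual implementation): the _LockWord layout
//   and the uncontended fast-path acquire described above, expressed with
//   C++11 atomics.  The names SketchLockWord and kLockByte are assumptions
//   made for exposition only.
//
//     #include <atomic>
//     #include <cstdint>
//
//     static const uintptr_t kLockByte = 1 ;       // low-order "LockByte" bit(s)
//
//     struct SketchLockWord {
//       // (cxq-head-pointer | LockByte) colocated in one CASable word
//       std::atomic<uintptr_t> FullWord ;
//     } ;
//
//     // Uncontended acquire: CAS the LockByte from zero to non-zero.
//     static bool sketch_try_lock (SketchLockWord * lw) {
//       uintptr_t v = lw->FullWord.load() ;
//       if (v & kLockByte) return false ;          // already locked
//       return lw->FullWord.compare_exchange_strong(v, v|kLockByte) ;
//     }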
//   I use ParkEvents instead of threads as ParkEvents are immortal and
//   type-stable, meaning we can safely unpark() a possibly stale
//   list element in the unlock()-path.  (That's benign).
//
// * Succession policy - providing for progress:
//
//   As necessary, the unlock()ing thread identifies, unlinks, and unparks
//   an "heir presumptive" tentative successor thread from the EntryList.
//   This becomes the so-called "OnDeck" thread, of which there can be only
//   one at any given time for a given monitor.  The wakee will recontend
//   for ownership of the monitor.
//
//   Succession is provided for by a policy of competitive handoff.
//   The exiting thread does _not_ grant or pass ownership to the
//   successor thread.  (This is also referred to as "handoff succession").
//   Instead the exiting thread releases ownership and possibly wakes
//   a successor, so the successor can (re)compete for ownership of the lock.
//   Competitive handoff provides excellent overall throughput at the expense
//   of short-term fairness.  If fairness is a concern then one remedy might
//   be to add an AcquireCounter field to the monitor.  After a thread acquires
//   the lock it would decrement the AcquireCounter field.  When the count
//   reached 0 the thread would reset the AcquireCounter variable, abdicate
//   the lock directly to some thread on the EntryList, and then move itself to the
//   tail of the EntryList.
//
//   But in practice most threads engage or otherwise participate in resource
//   bounded producer-consumer relationships, so lock domination is not usually
//   a practical concern.  Recall too, that in general it's easier to construct
//   a fair lock from a fast lock, but not vice-versa.
//
// * The cxq can have multiple concurrent "pushers" but only one concurrent
//   detaching thread.  This mechanism is immune to ABA corruption.
//   More precisely, the CAS-based "push" onto the cxq is ABA-oblivious.
//   We use OnDeck as a pseudo-lock to enforce the at-most-one detaching
//   thread constraint.  (See the push sketch below.)
//
// * Taken together, the cxq and the EntryList constitute a single
//   logical queue of threads stalled trying to acquire the lock.
//   We use two distinct lists to reduce heat on the list ends.
//   Threads in lock() enqueue onto the cxq while threads in unlock() will
//   dequeue from the EntryList.  (cf. Michael Scott's "2Q" algorithm).
//   A key desideratum is to minimize queue & monitor metadata manipulation
//   that occurs while holding the "outer" monitor lock -- that is, we want to
//   minimize monitor lock hold times.
//
//   The EntryList is ordered by the prevailing queue discipline and
//   can be organized in any convenient fashion, such as a doubly-linked list or
//   a circular doubly-linked list.  If we need a priority queue then something
//   akin to Solaris' sleepq would work nicely.
//   Queue discipline is enforced at ::unlock() time, when the unlocking thread
//   drains the cxq into the EntryList, and orders or reorders the threads on the
//   EntryList accordingly.
//   Barring "lock barging", this mechanism provides fair cyclic ordering,
//   somewhat similar to an elevator-scan.
//
//   -- For a given monitor there can be at most one OnDeck thread at any given
//      instant.  The OnDeck thread is contending for the lock, but has been
//      unlinked from the EntryList and cxq by some previous unlock() operations.
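//
//   Illustrative sketch of the ABA-oblivious push mentioned above, reusing the
//   sketch types from the header comment (sketch_node stands in for a
//   ParkEvent; expository only).  A pusher only ever installs its own node at
//   the head, re-reading the head on each retry, so ABA cannot corrupt it:
//
//     struct sketch_node {
//       sketch_node * ListNext ;
//       // ... park/unpark and notification state elided ...
//     } ;
//
//     // Prepend Self to the cxq, preserving the LockByte bits.
//     static void sketch_push_onto_cxq (SketchLockWord * lw, sketch_node * self) {
//       uintptr_t v = lw->FullWord.load() ;
//       for (;;) {
//         self->ListNext = (sketch_node *) (v & ~kLockByte) ;    // anticipate success
//         uintptr_t u = ((uintptr_t) self) | (v & kLockByte) ;
//         if (lw->FullWord.compare_exchange_weak(v, u)) break ;  // v is reloaded on failure
//       }
//     }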
//      Once a thread has been designated the OnDeck thread it will remain so
//      until it manages to acquire the lock -- being OnDeck is a stable property.
//   -- Threads on the EntryList or cxq are _not allowed to attempt lock acquisition.
//   -- OnDeck also serves as an "inner lock" as follows.  Threads in unlock() will, after
//      having cleared the LockByte and dropped the outer lock, attempt to "trylock"
//      OnDeck by CASing the field from null to non-null.  If successful, that thread
//      is then responsible for progress and succession and can use CAS to detach and
//      drain the cxq into the EntryList.  By convention, only this thread, the holder of
//      the OnDeck inner lock, can manipulate the EntryList or detach and drain the
//      RATs on the cxq into the EntryList.  This avoids ABA corruption on the cxq as
//      we allow multiple concurrent "push" operations but restrict detach concurrency
//      to at most one thread.  Having selected and detached a successor, the thread then
//      changes OnDeck to refer to that successor, and then unparks the successor.
//      That successor will eventually acquire the lock and clear OnDeck.  Beware
//      that the OnDeck usage as a lock is asymmetric.  A thread in unlock() transiently
//      "acquires" OnDeck, performs queue manipulations, passes OnDeck to some successor,
//      and then the successor eventually "drops" OnDeck.  Note that there's never
//      any sense of contention on the inner lock, however.  Threads never contend
//      or wait for the inner lock.
//   -- OnDeck provides for futile wakeup throttling as described in section 3.3 of
//      Dice's relaxed-locks paper (USENIX JVM'01).
//      In a sense, OnDeck subsumes the ObjectMonitor _Succ and ObjectWaiter
//      TState fields.
//
// * Waiting threads reside on the WaitSet list -- wait() puts
//   the caller onto the WaitSet.  Notify() or notifyAll() simply
//   transfers threads from the WaitSet to either the EntryList or cxq.
//   Subsequent unlock() operations will eventually unpark the notifyee.
//   Unparking a notifyee in notify() proper is inefficient - if we were to do so
//   it's likely the notifyee would simply impale itself on the lock held
//   by the notifier.
//
// * The mechanism is obstruction-free in that if the holder of the transient
//   OnDeck lock in unlock() is preempted or otherwise stalls, other threads
//   can still acquire and release the outer lock and continue to make progress.
//   At worst, waking of already blocked contending threads may be delayed,
//   but nothing worse.  (We only use "trylock" operations on the inner OnDeck
//   lock).
//
// * Note that thread-local storage must be initialized before a thread
//   uses Native monitors or mutexes.  The native monitor-mutex subsystem
//   depends on Thread::current().
//
// * The monitor synchronization subsystem avoids the use of native
//   synchronization primitives except for the narrow platform-specific
//   park-unpark abstraction.  Put another way, this monitor implementation
//   depends only on atomic operations and park-unpark.  The monitor subsystem
//   manages all RUNNING->BLOCKED and BLOCKED->READY transitions while the
//   underlying OS manages the READY<->RUN transitions.
//
// * The memory consistency model provided by lock()-unlock() is at least as
//   strong as, or stronger than, the Java Memory Model defined by JSR-133.
//   That is, we guarantee at least entry consistency, if not stronger.
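//
//   Illustrative sketch of the unlock()-path inner-lock protocol described
//   above, again in terms of the sketch types.  The OnDeck variable and the
//   kOnDeckLocked marker are expository assumptions, not the real fields:
//
//     static sketch_node * const kOnDeckLocked = (sketch_node *) 1 ;
//     static std::atomic<sketch_node *> sketch_ondeck ;   // null, successor, or "locked"
//
//     static void sketch_unlock (SketchLockWord * lw) {
//       // Drop the outer lock: clear the LockByte with release semantics so
//       // critical-section accesses cannot leak past the releasing store.
//       lw->FullWord.fetch_and(~kLockByte, std::memory_order_release) ;
//       // Trylock the OnDeck inner lock by CASing null -> non-null.  Only on
//       // success is this thread responsible for succession; there is never
//       // any waiting on the inner lock.
//       sketch_node * expected = NULL ;
//       if (sketch_ondeck.compare_exchange_strong(expected, kOnDeckLocked)) {
//         // ... detach the cxq, select a successor w, store w into
//         //     sketch_ondeck, and unpark w ...
//       }
//     }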
// * Thread:: currently contains a set of purpose-specific ParkEvents:
//   _MutexEvent, _ParkEvent, etc.  A better approach might be to do away with
//   the purpose-specific ParkEvents and instead implement a general per-thread
//   stack of available ParkEvents which we could provision on-demand.  The
//   stack acts as a local cache to avoid excessive calls to ParkEvent::Allocate()
//   and ::Release().  A thread would simply pop an element from the local stack before it
//   enqueued or park()ed.  When the contention was over the thread would
//   push the no-longer-needed ParkEvent back onto its stack.
//
// * A slightly reduced form of ILock() and IUnlock() has been partially
//   model-checked (Murphi) for safety and progress at T=1,2,3 and 4.
//   It'd be interesting to see if TLA/TLC could be useful as well.
//
// * Mutex-Monitor is a low-level "leaf" subsystem.  That is, the monitor
//   code should never call other code in the JVM that might itself need to
//   acquire monitors or mutexes.  That's true *except* in the case of the
//   ThreadBlockInVM state transition wrappers.  The ThreadBlockInVM DTOR handles
//   mutator reentry (ingress) by checking for a pending safepoint, in which case it will
//   call SafepointSynchronize::block(), which in turn may call Safepoint_lock->lock(), etc.
//   In that particular case a call to lock() for a given Monitor can end up recursively
//   calling lock() on another monitor.  While distasteful, this is largely benign
//   as the calls come from the jacket that wraps lock(), and not from deep within lock() itself.
//
//   It's unfortunate that native mutexes and thread state transitions were convolved.
//   They're really separate concerns and should have remained that way.  Melding
//   them together was facile -- a bit too facile.  The current implementation badly
//   conflates the two concerns.
//
// * Wish-list:
//   -- Add DTRACE probes for contended acquire, contended acquired, contended unlock.
//      We should also add DTRACE probes in the ParkEvent subsystem for
//      Park-entry, Park-exit, and Unpark.
//   -- We have an excess of mutex-like constructs in the JVM, namely:
//      1. objectMonitors for Java-level synchronization (synchronizer.cpp)
//      2. low-level muxAcquire and muxRelease
//      3. low-level spinAcquire and spinRelease
//      4. native Mutex:: and Monitor::
//      5. jvm_raw_lock() and _unlock()
//      6. JVMTI raw monitors -- distinct from (5) despite having a confusingly
//         similar name.
//
// o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o

// CASPTR() uses the canonical argument order that dominates in the literature.
// Our internal cmpxchg_ptr() uses a bastardized ordering to accommodate Sun .il templates.

// Simplistic low-quality Marsaglia SHIFT-XOR RNG.
// Bijective except for the trailing mask operation.
// Useful for spin loops as the compiler can't optimize it away.

static inline jint MarsagliaXORV (jint x) {
  if (x == 0) x = 1 ;                    // avoid the all-zero fixed point
  x ^= x << 6 ;
  x ^= ((unsigned)x) >> 21 ;
  x ^= x << 7 ;
  return x & 0x7FFFFFFF ;                // trailing mask -- the one non-bijective step
}
  // Make this impossible for the compiler to optimize away,
  // but (mostly) avoid W coherency sharing on MP systems.
  if (v == 0x12345) rv = v ;
      if (v == u) return 1 ;    // the CAS succeeded -- this thread now holds the lock
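// Illustrative sketch of the TryLock idiom the fragment above belongs to,
// reusing the sketch types introduced in the header comment (expository
// only, not the actual code):
//
//   static int sketch_try_lock_loop (SketchLockWord * lw) {
//     uintptr_t v = lw->FullWord.load() ;
//     for (;;) {
//       if (v & kLockByte) return 0 ;      // busy -- the caller must enqueue
//       uintptr_t u = v ;
//       if (lw->FullWord.compare_exchange_strong(u, v|kLockByte)) return 1 ;
//       v = u ;                            // interference -- retry with the fresh value
//     }
//   }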
// Optimistic fast-path form ...
// Fast-path attempt for the common uncontended case.
// Avoid an RTS->RTO $ coherence upgrade on typical SMP systems.

// Polite TATAS spinlock with exponential backoff - bounded spin.
// Ideally we'd use processor cycles, time or vtime to control
// the loop, but we currently use iterations.
// All the constants within were derived empirically but work
// over the spectrum of J2SE reference platforms.
// On Niagara-class systems the back-off is unnecessary but
// is relatively harmless.  (At worst it'll slightly retard
// acquisition times).  The back-off is critical for older SMP systems
// where constant fetching of the LockWord would otherwise impair
// scalability.
//
// Clamp spinning at approximately 1/2 of a context-switch round-trip.

// Periodically increase Delay -- variable Delay form
// conceptually: delay *= 1 + 1/Exponent
// CONSIDER: Delay += 1 + (Delay/4); Delay &= 0x7FF ;

// Consider checking _owner's schedctl state: if OFFPROC, abort the spin.
// If the owner is OFFPROC then it's unlikely that the lock will be dropped
// in a timely fashion, which suggests that spinning would not be fruitful.

// Stall for "Delay" time units - iterations in the current implementation.
// Avoid generating coherency traffic while stalled.
// Possible ways to delay:
//   PAUSE, SLEEP, MEMBAR #sync, MEMBAR #halt,
//   wr %g0,%asi, gethrtime, rdstick, rdtick, rdtsc, etc. ...
// Note that on Niagara-class systems we want to minimize STs in the
// spin loop.  N1 and brethren write-around the L1$ over the xbar into the L2$.
// Furthermore, they don't have a W$ like traditional SPARC processors.
// We currently use a Marsaglia Shift-Xor RNG loop.

// Diagnostic support - periodically unwedge blocked threads
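// Illustrative sketch of the bounded TATAS spin with exponential backoff
// described above, reusing the sketch types from the header comment.  The
// constants and the SpinSink trick are expository assumptions (the real code
// stalls via the Marsaglia RNG in its Stall() helper):
//
//   static volatile int SpinSink = 1 ;    // defeats loop elision; written rarely
//
//   static int sketch_try_spin (SketchLockWord * lw, int bound) {
//     int Delay = 1 ;
//     for (int its = 0 ; its < bound ; its++) {
//       // Test-and-test-and-set: probe with plain loads, and CAS only when
//       // the lock looks free, avoiding an RTS->RTO upgrade on every probe.
//       if ((lw->FullWord.load(std::memory_order_relaxed) & kLockByte) == 0) {
//         if (sketch_try_lock(lw)) return 1 ;
//       }
//       // Stall for "Delay" iterations without issuing stores or atomics.
//       int x = Delay | 1 ;
//       for (int k = Delay ; --k >= 0 ; ) x = MarsagliaXORV(x) ;
//       if (x == 0x12345) SpinSink = x ;  // make this impossible to optimize away
//       Delay += 1 + (Delay/4) ;          // conceptually: Delay *= 1 + 1/Exponent
//       Delay &= 0x7FF ;                  // clamp the backoff
//     }
//     return 0 ;                          // spin exhausted -- caller should enqueue and park
//   }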
      if (u == v) return 1 ;    // indicate acquired
      // ... the lock was held: CAS Self onto the front of the cxq instead ...
      if (u == v) return 0 ;    // indicate pushed onto cxq
      // Interference - LockWord change - just retry

// ILock and IWait are the lowest-level primitive internal blocking
// synchronization functions.  The callers of IWait and ILock must have
// performed any needed state transitions beforehand.
// IWait and ILock may directly call park() without any concern for thread state.
// Note that ILock and IWait do *not* access _owner.
// _owner is a higher-level logical concept.

// As an optimization, spinners could conditionally try to set ONDECK to _LBIT

// Slow-path - the lock is contended.
// Either enqueue Self on the cxq or acquire the outer lock.
// LockWord encoding = (cxq,LOCKBYTE)

// Optional optimization ... try barging on the inner lock

// At any given time there is at most one OnDeck thread.
// OnDeck implies not resident on cxq and not resident on EntryList.
// Only the OnDeck thread can try to acquire -- contend for -- the lock.
// CONSIDER: use Self->OnDeck instead of m->OnDeck.

// Deschedule Self so that others may run.

// Self is now in the ONDECK position and will remain so until it
// manages to acquire the lock.
// CONSIDER: if ESelf->TryPark() && TryLock() break ...

// It's probably wise to spin only if we *actually* blocked.
// CONSIDER: check the lockbyte; if it remains set then
// preemptively drain the cxq into the EntryList.

// The best place and time to perform queue operations -- lock metadata --
// is _before having acquired the outer lock, while waiting for the lock to drop.
// Note that we currently drop the inner lock (clear OnDeck) in the slow-path
// epilog immediately after having acquired the outer lock.
// But instead we could consider the following optimizations:
// A. Shift or defer dropping the inner lock until the subsequent IUnlock() operation.
//    This might avoid potential reacquisition of the inner lock in IUnlock().
// B. While still holding the inner lock, attempt to opportunistically select
//    and unlink the next ONDECK thread from the EntryList.
//    If successful, set ONDECK to refer to that thread, otherwise clear ONDECK.
//    It's critical that the select-and-unlink operation run in constant-time as
//    it executes when holding the outer lock and may artificially increase the
//    effective length of the critical section.
// Note that (A) and (B) are tantamount to succession by direct handoff for
// the inner lock.

// Conceptually we need a MEMBAR #storestore|#loadstore barrier or fence immediately
// before the store that releases the lock.  Crucially, all the stores and loads in the
// critical section must be globally visible before the store of 0 into the lock-word
// that releases the lock becomes globally visible.  That is, memory accesses in the
// critical section should not be allowed to bypass or overtake the following ST that
// releases the lock.  As such, to prevent accesses within the critical section
// from "leaking" out, we need a release fence between the critical section and the
// store that releases the lock.  In practice that release barrier is elided on
// platforms with strong memory models such as TSO.
//
// Note that the OrderAccess::storeload() fence that appears after the unlock store
// provides for progress conditions and succession and is _not related to exclusion
// safety or lock release consistency.  (See the fence sketch below.)
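// Illustrative sketch of the two fences discussed above, in std::atomic
// terms (sketch types as before; expository only):
//
//   static void sketch_release_and_signal (SketchLockWord * lw) {
//     // Release fence: critical-section accesses may not overtake this store.
//     lw->FullWord.fetch_and(~kLockByte, std::memory_order_release) ;
//     // StoreLoad fence: order the releasing store before the subsequent
//     // re-examination of the lock word.  Needed for succession/progress,
//     // not for exclusion safety.
//     std::atomic_thread_fence(std::memory_order_seq_cst) ;
//     uintptr_t v = lw->FullWord.load() ;
//     // ... if the cxq/EntryList is non-empty and nobody else took the lock,
//     //     take responsibility for waking a successor ...
//     (void) v ;
//   }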
// Either we have a valid OnDeck thread or OnDeck is transiently "locked"
// by some exiting thread as it arranges for succession.  The LSBit of
// OnDeck allows us to discriminate the two cases.  If the latter, the
// responsibility for progress and succession lies with that other thread.
// For good performance, we also depend on the fact that redundant unpark()
// operations are cheap.  That is, repeated Unpark()ing of the ONDECK thread
// is inexpensive.  This approach provides implicit futile wakeup throttling.
// Note that the referent "w" might be stale with respect to the lock.
// In that case the following unpark() is harmless and the worst that'll happen
// is a spurious return from a park() operation.  Critically, if "w" _is stale,
// then progress is known to have occurred as that means the thread associated
// with "w" acquired the lock.  In that case this thread need take no further
// action to guarantee progress.
    return ;        // normal fast-path exit - cxq and EntryList both empty
// Optional optimization ...
// Some other thread acquired the lock in the window since this
// thread released it.  Succession is now that thread's responsibility.

// Slow-path exit - this thread must ensure succession and progress.
// OnDeck serves as a lock to protect the cxq and EntryList.
// Only the holder of OnDeck can manipulate the EntryList or detach the RATs from the cxq.
// Avoid ABA - allow multiple concurrent producers (enqueue via push-CAS)
// but only one concurrent consumer (detacher of RATs).
// Consider protecting this critical section with schedctl on Solaris.
// Unlike a normal lock, however, the exiting thread "locks" OnDeck,
// picks a successor and marks that thread as OnDeck.  That successor
// thread will then clear OnDeck once it eventually acquires the outer lock.

// Transfer the head of the EntryList to the OnDeck position.
// Once OnDeck, a thread stays OnDeck until it acquires the lock.
// For a given lock there is at most one OnDeck thread at any one instant.
// As a diagnostic measure consider setting w->_ListNext = BAD.
// w will clear OnDeck once it acquires the outer lock.

// Another optional optimization ...
// For heavily contended locks it's not uncommon that some other
// thread acquired the lock while this thread was arranging succession.
// Try to defer the unpark() operation - delegate the responsibility
// for unpark()ing the OnDeck thread to the current or subsequent owners.
// That is, the new owner is responsible for unparking the OnDeck thread.

// The EntryList is empty but the cxq is populated.
// Drain the RATs from the cxq into the EntryList:
// detach the RATs segment with CAS and then merge it into the EntryList.
// (See the detach sketch below.)

// optional optimization - if locked, the owner is responsible for succession

// Interference - LockWord changed - just retry
// We can see concurrent interference from contending threads
// pushing themselves onto the cxq or from lock-unlock operations.
// From the perspective of this thread, the EntryList is stable and
// the cxq is prepend-only -- the head is volatile but the interior
// of the cxq is stable.  In theory, if we encounter interference from threads
// pushing onto the cxq we could simply break off the original cxq suffix and
// move that segment to the EntryList, avoiding a 2nd or multiple CAS attempts
// on the high-traffic LockWord variable.  For instance, let's say the cxq is "ABCD"
// when we first fetch cxq above.  Between the fetch -- where we observed "A"
// -- and the CAS -- where we attempt to CAS null over A -- "PQR" arrive,
// yielding cxq = "PQRABCD".  In this case we could simply set A.ListNext to
// null, leaving cxq = "PQRA", and transfer the "BCD" segment to the EntryList.
// Note too, that it's safe for this thread to traverse the cxq
// without taking any special concurrency precautions.

// We don't currently reorder the cxq segment as we move it onto
// the EntryList, but it might make sense to reverse the order
// or perhaps sort by thread priority.  See the comments in
// synchronizer.cpp.

// cxq|EntryList is empty.
// w == NULL implies that cxq|EntryList == NULL in the past.

// Possible race - rare inopportune interleaving.
// A thread could have added itself to the cxq since this thread previously checked.
// Detect and recover by refetching the cxq.
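// Illustrative sketch of the detach-and-drain step described above
// (sketch types as before; expository only).  The detacher is unique
// because only the OnDeck/inner-lock holder attempts this:
//
//   // Detach the entire cxq segment with one CAS, preserving the LockByte.
//   static sketch_node * sketch_detach_cxq (SketchLockWord * lw) {
//     uintptr_t v = lw->FullWord.load() ;
//     for (;;) {
//       sketch_node * head = (sketch_node *) (v & ~kLockByte) ;
//       if (head == NULL) return NULL ;                     // nothing to drain
//       // Clear only the list pointer; the LockByte bits stay intact.
//       if (lw->FullWord.compare_exchange_weak(v, v & kLockByte)) {
//         return head ;        // the RATs segment, in prepend (LIFO) order
//       }
//     }
//   }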
// Resample LockWord/cxq to recover from a possible race.
// For instance, while this thread T1 held OnDeck, some other thread T2 might
// acquire the outer lock.  Another thread T3 might try to acquire the outer
// lock, but encounter contention and enqueue itself on the cxq.  T2 then drops the
// outer lock, but skips succession as this thread T1 still holds OnDeck.
// T1 is and remains responsible for ensuring succession of T3.
//
// Note that we don't need to recheck the EntryList, just the cxq.
// If threads moved onto the EntryList since we dropped OnDeck
// that implies some other thread forced succession.
    goto Succession ;       // potential race -- re-run succession
// Transfer one thread from the WaitSet to the EntryList or cxq.
// Currently we just unlink the head of the WaitSet and prepend it to the cxq.
// And of course we could just unlink it and unpark it, too, but
// in that case it'd likely impale itself on the reentry.

// interference - _LockWord changed -- just retry

// Note that setting Notified before pushing nfy onto the cxq is
// also legal and safe, but the safety properties are much more
// subtle, so for the sake of code stewardship ...

// Experimental code ... light up the wakee in the hope that this thread (the owner)
// will drop the lock just about the time the wakee comes ONPROC.

// Currently notifyAll() transfers the waiters one-at-a-time from the waitset
// to the cxq.  This could be done more efficiently with a single bulk transfer,
// but in practice notifyAll() for large numbers of threads is rare and not
// time-critical.  Beware, too, that we invert the order of the waiters.  Let's say
// that the waitset is "ABCD" and the cxq is "XYZ".  After a notifyAll() the waitset
// will be empty and the cxq will be "DCBAXYZ".  This is benign, of course.

// IWait() in outline form (see the sketch following this block):
// 1. Enqueue Self on WaitSet - currently prepend
// 2. unlock - drop the outer lock
// 3. wait for either notification or timeout
// 4. lock - reentry - reacquire the outer lock

// Ideally only the holder of the outer lock would manipulate the WaitSet -
// that is, the outer lock would implicitly protect the WaitSet.
// But if a thread in wait() encounters a timeout it will need to dequeue itself
// from the WaitSet _before it becomes the owner of the lock.  We need to dequeue
// as the ParkEvent -- which serves as a proxy for the thread -- can't reside
// on both the WaitSet and the EntryList|cxq at the same time.  That is, a thread
// on the WaitSet can't be allowed to compete for the lock until it has managed to
// unlink its ParkEvent from the WaitSet.  Thus the need for WaitLock.
// Contention on the WaitLock is minimal.
//
// Another viable approach would be to add another ParkEvent, "WaitEvent",
// to the thread class.  The WaitSet would be composed of WaitEvents.  Only the
// owner of the outer lock would manipulate the WaitSet.  A thread in wait()
// could then compete for the outer lock, and then, if necessary, unlink itself
// from the WaitSet only after having acquired the outer lock.  More precisely,
// there would be no WaitLock.  A thread in wait() would enqueue its WaitEvent
// on the WaitSet; release the outer lock; wait for either notification or timeout;
// reacquire the outer lock; and then, if needed, unlink itself from the WaitSet.
//
// Alternatively, a 2nd set of list link fields in the ParkEvent might suffice.
// One set would be for the WaitSet and one for the EntryList.
// We could also deconstruct the ParkEvent into a "pure" event and add a
// new immortal/TSM "ListElement" class that referred to ParkEvents.
// In that case we could have one ListElement on the WaitSet and another
// on the EntryList, with both referring to the same pure Event.
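// Illustrative sketch of the four-step IWait() outline above.  SketchMonitor,
// the Notified field, and the sketch_* helpers are assumptions for exposition;
// they stand in for the real WaitSet/ILock/IUnlock machinery, and the timeout
// path is elided:
//
//   struct SketchMonitor {
//     SketchLockWord Lock ;
//     sketch_node *  WaitSet ;
//   } ;
//
//   static void sketch_wait (SketchMonitor * m, sketch_node * Self) {
//     Self->Notified = 0 ;
//     sketch_waitset_prepend(m, Self) ;    // 1. enqueue Self on the WaitSet
//     sketch_unlock(&m->Lock) ;            // 2. unlock - drop the outer lock
//     while (Self->Notified == 0) {        // 3. wait for notification (timeout elided)
//       sketch_park(Self) ;
//     }
//     sketch_lock(m) ;                     // 4. reentry - reacquire the outer lock
//   }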
// Release the outer lock.
// We call IUnlock (RelaxAssert=true) as a thread T1 might
// enqueue itself on the WaitSet, call IUnlock(), drop the lock,
// and then stall before it can attempt to wake a successor.
// Some other thread T2 acquires the lock, and calls notify(), moving
// T1 from the WaitSet to the cxq.  T2 then drops the lock.  T1 resumes,
// and then finds *itself* on the cxq.  During the course of a normal
// IUnlock() call a thread should _never find itself on the EntryList
// or cxq, but in the case of wait() it's possible.

// Wait for either notification or timeout.
// Beware that in some circumstances we might propagate
// spurious wakeups back to the caller.

// Prepare for reentry - if necessary, remove ESelf from the WaitSet.
// On wakeup, ESelf is in one of three states:
// 1. Still on the WaitSet.  This can happen if we exited the loop by timeout.
// 2. On the cxq or EntryList.
// 3. Not resident on cxq, EntryList or WaitSet, but in the OnDeck position.

// ESelf is resident on the WaitSet -- unlink it.
// A doubly-linked list would be better here so we can unlink in constant-time.
// We have to unlink before we potentially recontend as ESelf might otherwise
// end up on the cxq|EntryList -- it can't be on two lists at once.
    if (_WaitSet == ESelf) {
      _WaitSet = ESelf->ListNext ;      // found at the head of the WaitSet
    } else {
      // found in interior -- walk to ESelf's predecessor and unlink ESelf
    }
    WasOnWaitSet = 1 ;      // We were *not* notified but instead encountered timeout
// Reentry phase - reacquire the lock.

// ESelf was previously on the WaitSet but we just unlinked it above
// because of a timeout.  ESelf is not resident on any list and is not OnDeck.

// A prior notify() operation moved ESelf from the WaitSet to the cxq.
// ESelf is now on the cxq, EntryList or at the OnDeck position.

// The following fragment is extracted from Monitor::ILock()

// ON THE VMTHREAD SNEAKING PAST HELD LOCKS:
// In particular, there are certain types of global lock that may be held
// by a Java thread while it is blocked at a safepoint but before it has
// written the _owner field.  These locks may be sneakily acquired by the
// VM thread during a safepoint to avoid deadlocks.  Alternatively, one should
// identify all such locks, and ensure that Java threads never block at
// safepoints while holding them (_no_safepoint_check_flag).  While it
// seems as though this could increase the time to reach a safepoint
// (or at least increase the mean, if not the variance), the latter
// approach might make for a cleaner, more maintainable JVM design.
//
// Sneaking is vile and reprehensible and should be excised at the 1st
// opportunity.  It's possible that the need for sneaking could be obviated
// as follows.  Currently, a thread might (a) while TBIVM, call pthread_mutex_lock
// or ILock(), thus acquiring the "physical" lock underlying Monitor/Mutex, and
// then (b) stall at the TBIVM exit point as a safepoint is in effect.  Critically,
// it'll stall at the TBIVM reentry state transition after having acquired the
// underlying lock, but before having set _owner and having entered the actual
// critical section.  The lock-sneaking facility leverages that fact and allows the
// VM thread to logically acquire locks that have already been physically locked by
// mutators, but where the mutators were known to be blocked at the reentry thread
// state transition.
//
// If we were to modify the Monitor-Mutex so that TBIVM state transitions tightly
// wrapped calls to park(), then we could likely do away with sneaking.  We'd
// decouple lock acquisition and parking.  The critical invariant for eliminating
// sneaking is to ensure that we never "physically" acquire the lock while TBIVM.
// An easy way to accomplish this is to wrap the park calls in a narrow TBIVM
// jacket, as in the sketch below.  One difficulty with this approach is that the
// TBIVM wrapper could recurse and call lock() deep from within a lock() call,
// while the MutexEvent was already enqueued.  Using a stack (N=2 at minimum) of
// ParkEvents would take care of that problem.
//
// But of course the proper ultimate approach is to avoid schemes that require explicit
// sneaking or dependence on any clever invariants or subtle implementation properties
// of Mutex-Monitor, and instead directly address the underlying design flaw.
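// A minimal sketch of the narrow-TBIVM-jacket proposal above.  The control
// flow is an assumption for exposition (the helper name is hypothetical);
// ThreadBlockInVM is the existing state-transition wrapper:
//
//   void sketch_lock_no_sneaking (Monitor * m, JavaThread * jt) {
//     if (m->TryLock()) return ;           // never physically acquire while TBIVM
//     ParkEvent * const ESelf = jt->_MutexEvent ;
//     if (m->AcquireOrPush(ESelf) == 0) {  // pushed onto the cxq
//       for (;;) {
//         { ThreadBlockInVM tbivm(jt) ;    // narrow jacket: covers only the park
//           ESelf->park() ;
//         }
//         if (m->TryLock()) break ;        // woke up -- recompete for the lock
//       }
//     }
//   }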
#ifdef CHECK_UNHANDLED_OOPS
  // Clear unhandled oops so we get a crash right away.  Only clear for non-vm
  // threads.
#endif // CHECK_UNHANDLED_OOPS

// The lock is contended ...

// A Java thread has locked the lock but has not entered the
// critical region -- let's just pretend we've locked the lock
// and go on.  We note this with _snuck so we can also
// pretend to unlock when the time comes.

// Try a brief spin to avoid passing through the thread state transition ...

// Horribile dictu - we suffer through a state transition

// Lock without safepoint check - a degenerate variant of lock().
// Should ONLY be used by safepoint code and other code
// that is guaranteed not to block while running inside the VM.  If this is called with
// thread state set to be in VM, the safepoint synchronization code will deadlock!
// Returns true if the thread succeeded in grabbing the lock, otherwise false.

// assert(!thread->is_inside_signal_handler(), "don't lock inside signal handler");

// Special case, where all Java threads are stopped.
// The lock may have been acquired but _owner is not yet set.
// In that case the VM thread can safely grab the lock.
// It strikes me this should appear _after the TryLock() fails, below.
  set_owner(Self) ;       // Do not need to be atomic, since we are at a safepoint

// Yet another degenerate version of Monitor::lock() or lock_without_safepoint_check().
// jvm_raw_lock() and _unlock() can be called by non-Java threads via JVM_RawMonitorEnter.
// There's no expectation that JVM_RawMonitors will interoperate properly with the native
// Mutex-Monitor constructs.  We happen to implement JVM_RawMonitors in terms of
// native Mutex-Monitors simply as a matter of convenience.  A simple abstraction layer
// over a pthread_mutex_t would work equally well, but would require more
// platform-specific code -- a "PlatformMutex".  Alternatively, a simple layer over
// muxAcquire-muxRelease would also suffice.
//
// Since the caller might be a foreign thread, we don't necessarily have a Thread.MutexEvent
// instance available.  Instead, we transiently allocate a ParkEvent on-demand if
// we encounter contention.  That ParkEvent remains associated with the thread
// until it manages to acquire the lock, at which time we return the ParkEvent
// to the global ParkEvent free list.  This is correct and suffices for our purposes.
//
// Beware that the original jvm_raw_unlock() had a "_snuck" test but that
// jvm_raw_lock() didn't have the corresponding test.  I suspect that's an
// oversight, but I've replicated the original suspect logic in the new code ...

// This can potentially be called by non-Java threads.  Thus, the ThreadLocalStorage
// might return NULL.  Don't call set_owner since it will break on a NULL owner.
// Consider installing a non-null "ANON" distinguished value instead of just NULL.

// slow-path - apparent contention

// Allocate a ParkEvent for transient use.
// The ParkEvent remains associated with this thread until
// the time the thread manages to acquire the lock.

// Either enqueue Self on the cxq or acquire the outer lock.

// At any given time there is at most one OnDeck thread.
// OnDeck implies not resident on cxq and not resident on EntryList.
// Only the OnDeck thread can try to acquire -- contend for -- the lock.
// CONSIDER: use Self->OnDeck instead of m->OnDeck.

// Nearly the same as Monitor::unlock() ...

// directly set _owner instead of using set_owner(null)

// as_suspend_equivalent logically implies !no_safepoint_check
// !no_safepoint_check logically implies java_thread
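// Illustrative sketch of the transient-ParkEvent scheme described above.
// ParkEvent::Allocate() and ParkEvent::Release() are the existing free-list
// interfaces; the simplified control flow around them is an expository
// assumption (the real path involves the OnDeck machinery):
//
//   // Foreign thread: no Thread::current(), so borrow an event on demand.
//   ParkEvent * const ESelf = ParkEvent::Allocate(NULL) ;
//   if (!TryLock()) {
//     if (AcquireOrPush(ESelf) == 0) {    // pushed onto the cxq
//       for (;;) {
//         ESelf->park() ;                 // wait to be made OnDeck and woken
//         if (TryLock()) break ;          // recompete for the lock
//       }
//     }
//   }
//   ParkEvent::Release(ESelf) ;           // lock held: return event to the free list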
  assert(least != this, "Specification of get_least_... call above") ;
  if (least != NULL && least->rank() <= special) {
    tty->print("Attempting to wait on monitor %s/%d while holding"
               " lock %s/%d -- possible deadlock",
               name(), rank(), least->name(), least->rank()) ;
    assert(false, "Shouldn't block(wait) while holding a lock of rank special") ;
  }
// conceptually set the owner to NULL in anticipation of
// abdicating the lock in wait

// Enter safepoint region - ornate and Rococo ...

// cleared by handle_special_suspend_equivalent_condition() or
// java_suspend_self()

// were we externally suspended while we were waiting?

// Our event wait has finished and we own the lock, but
// while we were waiting another thread suspended us.  We don't
// want to hold the lock while suspended because that
// would surprise the thread that suspended us.

// Conceptually reestablish ownership of the lock.
// The "real" lock -- the LockByte -- was reacquired by IWait().

// ----------------------------------------------------------------------------------

// In this case, we expect the held locks to be
// in increasing rank order (modulo any native ranks).

// Called immediately after lock acquisition or release as a diagnostic
// to track the lock-set of the thread and test for rank violations that
// might indicate exposure to deadlock.
// Rather like an EventListener for _owner (:>).

// This function is solely responsible for maintaining
// and checking the invariant that threads and locks
// are in a 1/N relation, with some locks unowned.
// It uses the Mutex::_owner, Mutex::_next, and
// Thread::_owned_locks fields, and no other function
// manipulates them.

// It is illegal to set the mutex from one non-NULL
// owner to another -- it must be owned by NULL as an
// intermediate state.

// the thread is acquiring this lock

// link "this" into the owned locks list
#ifdef ASSERT
// Thread::_owned_locks is under the same ifdef
// Mutex::set_owner_implementation is a friend of Thread

// Deadlock avoidance rules require us to acquire Mutexes only in
// a global total order.  For example, if m1 is the lowest-ranked mutex
// that the thread holds and m2 is the mutex the thread is trying
// to acquire, then deadlock avoidance rules require that the rank
// of m2 be less than the rank of m1.
// The rank Mutex::native is an exception in that it is not subject
// to the verification rules.
// Here are some further notes relating to mutex acquisition anomalies:
// . under Solaris, the interrupt lock gets acquired when doing
//   profiling, so any lock could be held.
// . it is also ok to acquire Safepoint_lock at the very end while we
//   already hold Terminator_lock - may happen because of periodic safepoints
      fatal(err_msg("acquiring lock %s/%d out of order with lock %s/%d -- "
                    "possible deadlock", this->name(), this->rank(),
                    locks->name(), locks->rank())) ;
// the thread is releasing this lock

// remove "this" from the owned locks list

// Factored out common sanity checks for locking mutex'es.  Used by lock() and try_lock().
    fatal(err_msg("VM thread using lock %s (not allowed to block on)",
                  name())) ;

    warning("VM thread blocked on lock") ;