Lines Matching refs:lock

60 //   is indeed the last thread to have acquired the lock.
82 // 2. Minimize lock migration
85 // 5. Minimize outer lock hold times
118 // a successor, so the successor can (re)compete for ownership of the lock.
123 // the lock it will decrement the AcquireCounter field. When the count
125 // the lock directly to some thread on the EntryList, and then move itself to the
129 // bounded producer-consumer relationships, so lock domination is not usually
131 // a fair lock from a fast lock, but not vice-versa.
136 // We use OnDeck as a pseudo-lock to enforce the at-most-one detaching
140 // single logical queue of threads stalled trying to acquire the lock.
142 // Threads in lock() enqueue onto cxq while threads in unlock() will
145 // that occurs while holding the "outer" monitor lock -- that is, we want to
146 // minimize monitor lock hold times.
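The cxq described above is a simple lock-free LIFO: contending threads push themselves on with a CAS, and the thread holding the OnDeck inner lock detaches the whole chain at once. A minimal sketch of that pattern, using `std::atomic` in place of HotSpot's own atomics and a hypothetical `Node`/`push_self`/`detach_all` naming (not the real HotSpot identifiers):

```cpp
#include <atomic>

// Hypothetical stand-in for the per-thread queue node (a ParkEvent in HotSpot).
struct Node { Node* next = nullptr; };

std::atomic<Node*> cxq{nullptr};

// lock() path: a contending thread pushes itself onto cxq with a CAS loop.
// No lock is needed -- only the head pointer is shared.
void push_self(Node* self) {
  Node* head = cxq.load(std::memory_order_relaxed);
  do {
    self->next = head;
  } while (!cxq.compare_exchange_weak(head, self,
                                      std::memory_order_release,
                                      std::memory_order_relaxed));
}

// unlock() path: the at-most-one detaching thread takes the entire chain
// in a single atomic exchange and can then drain it into the EntryList.
Node* detach_all() {
  return cxq.exchange(nullptr, std::memory_order_acquire);
}
```

Because the exchange empties the head in one step, the detacher never races with concurrent pushers over interior list nodes, which is what makes the single-detacher constraint cheap to enforce.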
158 // Barring "lock barging", this mechanism provides fair cyclic ordering,
163 // instant. The OnDeck thread is contending for the lock, but has been
166 // until it manages to acquire the lock -- being OnDeck is a stable property.
167 // -- Threads on the EntryList or cxq are _not allowed to attempt lock acquisition.
168 // -- OnDeck also serves as an "inner lock" as follows. Threads in unlock() will, after
169 // having cleared the LockByte and dropped the outer lock, attempt to "trylock"
173 // the OnDeck inner lock, can manipulate the EntryList or detach and drain the
178 // That successor will eventually acquire the lock and clear OnDeck. Beware
179 // that the OnDeck usage as a lock is asymmetric. A thread in unlock() transiently
182 // any sense of contention on the inner lock, however. Threads never contend
183 // or wait for the inner lock.
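The asymmetric OnDeck "inner lock" protocol above can be sketched as follows. This is an illustrative reduction, with hypothetical names (`try_ondeck`, `pass_ondeck`, `clear_ondeck`) and `std::atomic` standing in for HotSpot's atomics; the key points are the single non-blocking trylock attempt and the fact that the thread that acquires the inner lock is not the thread that releases it:

```cpp
#include <atomic>

struct Thread {};  // placeholder for the real Thread type

std::atomic<Thread*> OnDeck{nullptr};

// unlock() path: after dropping the outer lock, attempt to become the
// queue manipulator with one CAS. Never spin, never block -- if this
// fails, some other thread is already arranging succession.
bool try_ondeck(Thread* self) {
  Thread* expected = nullptr;
  return OnDeck.compare_exchange_strong(expected, self,
                                        std::memory_order_acquire);
}

// The asymmetry: the exiting thread overwrites OnDeck with the chosen
// successor instead of clearing it ...
void pass_ondeck(Thread* successor) {
  OnDeck.store(successor, std::memory_order_release);
}

// ... and the successor clears the field only after it finally
// acquires the outer lock.
void clear_ondeck() {
  OnDeck.store(nullptr, std::memory_order_release);
}
```

Since failed trylock attempts simply fall through, there is no waiting and hence no contention-management machinery for the inner lock itself.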
194 // it's likely the notifyee would simply impale itself on the lock held
198 // OnDeck lock in unlock() is preempted or otherwise stalls, other threads
199 // can still acquire and release the outer lock and continue to make progress.
202 // lock).
216 // * The memory consistency model provided by lock()-unlock() is at least as
239 // call SafepointSynchronize::block(), which in turn may call Safepoint_lock->lock(), etc.
240 // In that particular case a call to lock() for a given Monitor can end up recursively
241 // calling lock() on another monitor. While distasteful, this is largely benign
242 // as the calls come from a jacket that wraps lock(), and not from deep within lock() itself.
389 // If the owner is OFFPROC then it's unlikely that the lock will be dropped
472 // Slow-path - the lock is contended.
473 // Either Enqueue Self on cxq or acquire the outer lock.
478 // Optional optimization ... try barging on the inner lock
487 // Only the OnDeck thread can try to acquire -- contended for -- the lock.
495 // manages to acquire the lock.
504 // The best place and time to perform queue operations -- lock metadata --
505 // is _before having acquired the outer lock, while waiting for the lock to drop.
512 // Note that we currently drop the inner lock (clear OnDeck) in the slow-path
513 // epilog immediately after having acquired the outer lock.
515 // A. Shift or defer dropping the inner lock until the subsequent IUnlock() operation.
516 // This might avoid potential reacquisition of the inner lock in IUnlock().
517 // B. While still holding the inner lock, attempt to opportunistically select
521 // it executes when holding the outer lock and may artificially increase the
524 // the inner lock.
531 // before the store that releases the lock. Crucially, all the stores and loads in the
532 // critical section must be globally visible before the store of 0 into the lock-word
533 // that releases the lock becomes globally visible. That is, memory accesses in the
535 // releases the lock. As such, to prevent accesses within the critical section
537 // store that releases the lock. In practice that release barrier is elided on
542 // safety or lock release consistency.
543 OrderAccess::release_store(&_LockWord.Bytes[_LSBINDEX], 0); // drop outer lock
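The `release_store` above maps directly onto C++11 release semantics: every store in the critical section must become globally visible before the store of 0 into the lock byte does. A minimal sketch under that assumption, using `std::atomic` rather than HotSpot's `OrderAccess` (the names `LockByte`, `shared_data`, and `unlock_path` are illustrative):

```cpp
#include <atomic>
#include <cstdint>

std::atomic<uint8_t> LockByte{1};  // 1 = held, 0 = free
int shared_data = 0;               // protected by the lock

void unlock_path() {
  shared_data = 42;  // store inside the critical section

  // The release store orders the critical-section store before the
  // lock-word store; no later access may float above it either way
  // for a correct release. Analogous to OrderAccess::release_store.
  LockByte.store(0, std::memory_order_release);
}
```

On TSO machines (x86, SPARC) this release compiles to a plain store, which matches the remark above that the release barrier is elided there.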
556 // Note that the referent "w" might be stale with respect to the lock.
560 // with "w" acquired the lock. In that case this thread need take no further
572 // Some other thread acquired the lock in the window since this
579 // OnDeck serves as lock to protect cxq and EntryList.
584 // Unlike a normal lock, however, the exiting thread "locks" OnDeck,
586 // thread will then clear OnDeck once it eventually acquires the outer lock.
594 // Once OnDeck, a thread stays OnDeck until it acquires the lock.
595 // For a given lock there is at most one OnDeck thread at any one instant.
604 // w will clear OnDeck once it acquires the outer lock
608 // thread acquired the lock while this thread was arranging succession.
633 // pushing themselves onto the cxq or from lock-unlock operations.
665 _OnDeck = NULL ; // Release inner lock.
670 // acquire the outer lock. Another thread T3 might try to acquire the outer
671 // lock, but encounter contention and enqueue itself on cxq. T2 then drops the
672 // outer lock, but skips succession as this thread T1 still holds OnDeck.
717 // will drop the lock just about the time the wakee comes ONPROC.
743 // 2. unlock - drop the outer lock
745 // 4. lock - reentry - reacquire the outer lock
753 // Ideally only the holder of the outer lock would manipulate the WaitSet -
754 // That is, the outer lock would implicitly protect the WaitSet.
756 // from the WaitSet _before it becomes the owner of the lock. We need to dequeue
759 // on the WaitSet can't be allowed to compete for the lock until it has managed to
765 // owner of the outer lock would manipulate the WaitSet. A thread in wait()
766 // could then compete for the outer lock, and then, if necessary, unlink itself
767 // from the WaitSet only after having acquired the outer lock. More precisely,
769 // on the WaitSet; release the outer lock; wait for either notification or timeout;
770 // reacquire the inner lock; and then, if needed, unlink itself from the WaitSet.
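The five-step wait() protocol sketched above (enqueue on the WaitSet; release the outer lock; park until notification or timeout; reacquire; unlink if needed) corresponds to the familiar condition-variable idiom, where the CV's internal queue plays the role of the WaitSet and unlocking/relocking the mutex is the outer-lock release/reacquire. A hedged analogy, not the HotSpot code itself:

```cpp
#include <condition_variable>
#include <mutex>

std::mutex m;                 // the "outer lock"
std::condition_variable cv;   // its WaitSet plus park/unpark machinery
bool notified = false;

void wait_example() {
  std::unique_lock<std::mutex> lk(m);     // hold the outer lock
  cv.wait(lk, [] { return notified; });   // enqueue, drop the lock, park;
                                          // the lock is reacquired before
                                          // wait() returns
  // caller resumes here holding the outer lock
}

void notify_example() {
  { std::lock_guard<std::mutex> lk(m); notified = true; }
  cv.notify_one();  // move the waiter toward re-contention for m
}
```

The predicate loop also mirrors the document's point that a notified thread must re-compete for the lock rather than assume ownership on wakeup.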
784 // Release the outer lock
786 // enqueue itself on the WaitSet, call IUnlock(), drop the lock,
788 // Some other thread T2 acquires the lock, and calls notify(), moving
789 // T1 from the WaitSet to the cxq. T2 then drops the lock. T1 resumes,
841 // Reentry phase - reacquire the lock
865 // In particular, there are certain types of global lock that may be held
878 // or ILock() thus acquiring the "physical" lock underlying Monitor/Mutex.
881 // underlying lock, but before having set _owner and having entered the actual
882 // critical section. The lock-sneaking facility leverages that fact and allows the
888 // decouple lock acquisition and parking. The critical invariant to eliminating
889 // sneaking is to ensure that we never "physically" acquire the lock while TBIVM.
892 // call lock() deep from within a lock() call, while the MutexEvent was already enqueued.
899 void Monitor::lock (Thread * Self) {
920 // The lock is contended ...
924 // a java thread has locked the lock but has not entered the
925 // critical region -- let's just pretend we've locked the lock
948 void Monitor::lock() {
949 this->lock(Thread::current());
952 // Lock without safepoint check - a degenerate variant of lock().
969 // Returns true if the thread succeeded in grabbing the lock, otherwise false.
974 // assert(!thread->is_inside_signal_handler(), "don't lock inside signal handler");
977 // The lock may have been acquired but _owner is not yet set.
978 // In that case the VM thread can safely grab the lock.
988 // We got the lock
1008 // Yet another degenerate version of Monitor::lock() or lock_without_safepoint_check()
1021 // until it manages to acquire the lock, at which time we return the ParkEvent
1047 // the time the thread manages to acquire the lock.
1052 // Either Enqueue Self on cxq or acquire the outer lock.
1060 // Only the OnDeck thread can try to acquire -- contended for -- the lock.
1100 " lock %s/%d -- possible deadlock",
1102 assert(false, "Shouldn't block(wait) while holding a lock of rank special");
1108 // abdicating the lock in wait
1130 // Our event wait has finished and we own the lock, but
1132 // want to hold the lock while suspended because that
1142 // Conceptually reestablish ownership of the lock.
1143 // The "real" lock -- the LockByte -- was reacquired by IWait().
1259 bool Monitor::contains(Monitor* locks, Monitor* lock) {
1261 if (locks == lock)
1268 // Called immediately after lock acquisition or release as a diagnostic
1269 // to track the lock-set of the thread and test for rank violations that
1285 // the thread is acquiring this lock
1297 assert(this->rank() >= 0, "bad lock rank");
1307 // . under Solaris, the interrupt lock gets acquired when doing
1308 // profiling, so any lock could be held.
1319 fatal(err_msg("acquiring lock %s/%d out of order with lock %s/%d -- "
1329 // the thread is releasing this lock
1352 assert(found, "Removing a lock not owned");
1364 // Factored out common sanity checks for locking mutexes. Used by lock() and try_lock()
1370 fatal(err_msg("VM thread using lock %s (not allowed to block on)",
1380 warning("VM thread blocked on lock");
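The rank checks referenced above enforce an acquisition order that keeps the wait-for graph acyclic. A simplified sketch of the invariant (a new lock's rank must be strictly lower than every lock already held; the real HotSpot check has exceptions for special ranks, and the names `Lock` and `can_acquire` are hypothetical):

```cpp
#include <vector>

struct Lock { int rank; };

// Returns false when acquiring `next` would violate the rank order --
// the condition behind the "acquiring lock %s/%d out of order" fatal
// error in the listing above.
bool can_acquire(const std::vector<Lock*>& held, const Lock* next) {
  for (const Lock* l : held) {
    if (next->rank >= l->rank) return false;  // out-of-order acquisition
  }
  return true;
}
```

Because every thread acquires in strictly decreasing rank, no cycle of threads can each hold a lock the next one wants, ruling out deadlock by construction.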