/*
 * Copyright (c) 1997, 2012, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

// Every time a compiled IC is changed or its type is being accessed,
// either the CompiledIC_lock must be set or we must be at a safe point.

//-----------------------------------------------------------------------------
// Low-level access to an inline cache. Private, since they might not be
// MT-safe to use.

  // fix up the relocations

  // If we let the oop value here be initialized to zero...
         "no raw nulls in CompiledIC oops, because of patching races");
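// The following is an illustrative, self-contained sketch (not HotSpot code) of
// the idea behind the assert above: the cached word is never initialized to
// zero, but to a reserved sentinel that can never be a real object pointer.
// A concurrent reader therefore never has to interpret a raw NULL that might
// only mean "another thread is in the middle of patching this site".
// All names below (non_value_word, CachedSlot) are hypothetical.
#include <cassert>

static void* non_value_word() {
  static int sentinel;                       // an address never handed out as a real object
  return &sentinel;
}

struct CachedSlot {
  void* _data;

  CachedSlot() : _data(non_value_word()) {}  // never initialized to zero

  void set(void* p) {
    assert(p != nullptr && "store the sentinel instead of a raw NULL");
    _data = p;
  }

  void* get() const {
    assert(_data != nullptr && "no raw nulls, because of patching races");
    return (_data == non_value_word()) ? nullptr : _data;  // decode the sentinel as "empty"
  }
};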
// Returns native address of 'call' instruction in inline-cache. Used by
// the InlineCacheBuffer when it needs to find the stub.

//-----------------------------------------------------------------------------
// High-level access to an inline cache. Guaranteed to be MT-safe.

  // Can be different than method->vtable_index(), due to package-private etc.

  // We can't check this anymore. With lazy deopt we could have already
  // cleaned this IC entry before we even return. This is possible if
  // we ran out of space in the inline cache buffer trying to do the
  // set_next and we safepointed to free up space. This is a benign
  // race because the IC entry was complete when we safepointed so
  // cleaning it immediately is harmless.
  // assert(is_megamorphic(), "sanity check");

// true if destination is megamorphic stub
  // Cannot rely on cached_oop. It is either an interface or a method.

  // Use unsafe, since an inline cache might point to a zombie method. However, the zombie
  // method is guaranteed to still exist, since we only remove methods after all inline caches
  // have been cleaned up.

  // Check that the cached_oop is a klass for non-optimized monomorphic calls.
  // This assertion is invalid for compiler1: a call that does not look optimized (no static stub)
  // can be used for calling directly to the vep (verified entry point) without using the
  // inline cache (i.e., cached_oop == NULL).

  // Call to interpreter if destination is either calling to a stub (if it
  // is optimized), or calling to an I2C blob.

  // must use unsafe because the destination can be a zombie (and we're cleaning)
  // and the print_compiled_ic code wants to know if the site (in the non-zombie)
  // is to the interpreter

  // Check if we are calling into our own codeblob (i.e., to a stub)

  // A zombie transition will always be safe, since the oop has already been set to NULL, so
  // we only need to patch the destination.

  // Kill any leftover stub we might have too.

  // Unsafe transition - create stub.

  // We can't check this anymore. With lazy deopt we could have already
  // cleaned this IC entry before we even return. This is possible if
  // we ran out of space in the inline cache buffer trying to do the
  // set_next and we safepointed to free up space. This is a benign
  // race because the IC entry was complete when we safepointed so
  // cleaning it immediately is harmless.
  // assert(is_clean(), "sanity check");

  // Updating a cache to the wrong entry can cause bugs that are very hard
  // to track down - if the cache entry gets invalid, we just clean it. In
  // this way it is always the same code path that is responsible for
  // updating and resolving an inline cache.
  //
  // The above is no longer true. SharedRuntime::fixup_callers_callsite will change optimized
  // callsites. In addition, ic_miss code will update a site to monomorphic if it determines
  // that a monomorphic call to the interpreter can now be monomorphic to compiled code.
  // In both of these cases the only thing being modified is the jump/call target, and these
  // transitions are MT-safe.
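// Illustrative, self-contained sketch (not HotSpot code) of the state handling
// described above: an inline cache is either clean, monomorphic, or megamorphic,
// and whenever an entry is found to be stale it is simply set back to clean, so
// that the single resolver path performs every upgrade on the next miss.
// The type and member names (ICState, InlineCacheModel) are hypothetical.
#include <cstddef>

enum class ICState { clean, monomorphic, megamorphic };

struct InlineCacheModel {
  ICState     state      = ICState::clean;
  const void* cached_key = nullptr;   // e.g. the receiver klass of a monomorphic site

  // Resolver path: the only place that upgrades the state of the cache.
  void set_to_monomorphic(const void* key) { state = ICState::monomorphic; cached_key = key; }
  void set_to_megamorphic()                { state = ICState::megamorphic; cached_key = nullptr; }

  // Anyone who finds the entry invalid cleans it instead of guessing a new
  // target; the next miss re-resolves it through the path above.
  void set_to_clean() { state = ICState::clean; cached_key = nullptr; }
};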
  // Call to interpreter

  // the call analysis (callee structure) specifies that the call is optimized
  // (either because of CHA or the static target is final)
  // At code generation time, this call has been emitted as static call
  // Call via method-klass-holder

  // Call to compiled code

  // This is MT safe if we come from a clean-cache and go through a
  // non-verified entry point

  // We can't check this anymore. With lazy deopt we could have already
  // cleaned this IC entry before we even return. This is possible if
  // we ran out of space in the inline cache buffer trying to do the
  // set_next and we safepointed to free up space. This is a benign
  // race because the IC entry was complete when we safepointed so
  // cleaning it immediately is harmless.
  // assert(is_call_to_compiled() || is_call_to_interpreted(), "sanity check");

// is_optimized:  Compiler has generated an optimized call (i.e., no inline cache)
// static_bound:  The call can be static bound (i.e., no need to use inline cache)

  // Call to compiled code

  // Call to compiled code

  // Note: the following problem exists with Compiler1:
  //   - at compile time we may or may not know if the destination is final
  //   - if we know that the destination is final, we will emit an optimized
  //     virtual call (no inline cache), and need a methodOop to make a call
  //     to the interpreter
  //   - if we do not know if the destination is final, we emit a standard
  //     virtual call, and use CompiledICHolder to call interpreted code
  //     (no static call stub has been generated)
  //     However, in that case we will now notice it is static_bound
  //     and convert the call into what looks to be an optimized
  //     virtual call. This causes problems in verifying the IC because
  //     it looks vanilla but is optimized. Code in is_call_to_interpreted
  //     is aware of this and weakens its asserts.

  // static_bound should imply is_optimized -- otherwise we have a
  // performance bug (a statically-bindable method is called via a
  // dynamically-dispatched call). Note: the reverse implication isn't
  // necessarily true -- the call may have been optimized based on compiler
  // analysis (static_bound is only based on "final" etc.)
  //
  // can't check the assert because we don't have the CompiledIC with which to
  // find the address of the call instruction
  // (a standalone sketch of this check appears at the end of this section)
  //
  // CodeBlob* cb = find_blob_unsafe(instruction_address());
  // assert(cb->is_compiled_by_c1() || !static_bound || is_optimized, "static_bound should imply is_optimized");

  // Mergers please note: Sun SC5.x CC insists on an lvalue for a reference parameter.

// ----------------------------------------------------------------------------

  // Do not reset stub here: It is too expensive to call find_stub.
  // Instead, rely on caller (nmethod::clear_inline_caches) to clear
  // both the call and its stub.

  // It is a call to interpreted, if it calls to a stub. Hence, the destination
  // must be in the stub part of the nmethod that contains the call.

  // Update jump to call

  // Updating a cache to the wrong entry can cause bugs that are very hard
  // to track down - if the cache entry gets invalid, we just clean it. In
  // this way it is always the same code path that is responsible for
  // updating and resolving an inline cache.
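// Illustrative, self-contained sketch (not HotSpot code) of the implication
// discussed above: static_bound should imply is_optimized (otherwise a
// statically-bindable method is reached through a dynamically-dispatched call),
// while the reverse need not hold. The Compiler1 case described above is the
// known exception, so a flag for it weakens the check, mirroring the
// commented-out assert. The helper name check_call_shape is hypothetical.
#include <cassert>

inline void check_call_shape(bool is_optimized, bool static_bound, bool compiled_by_c1) {
  assert(compiled_by_c1 || !static_bound || is_optimized,
         "static_bound should imply is_optimized");
  (void)is_optimized; (void)static_bound; (void)compiled_by_c1;  // silence release-build warnings
}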
  // Call to interpreted code

  // Call to compiled code

// Compute settings for a CompiledStaticCall. Since we might have to set
// the stub when calling to the interpreter, we need to return arguments.

  // Callee is interpreted code. In any case entering the interpreter
  // puts a converter-frame on the stack to save arguments.

  // Find reloc. information containing this call-site.

  // We check here for opt_virtual_call_type, since we reuse the code
  // from the CompiledIC implementation.

//-----------------------------------------------------------------------------
// Non-product mode code

  // make sure code pattern is actually a call imm32 instruction
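// Illustrative, self-contained sketch (not HotSpot code) of what the last check
// can look like on x86, where a near call with a 32-bit relative displacement is
// encoded as the opcode byte 0xE8 followed by four displacement bytes.
// The helper names below are hypothetical.
#include <cassert>
#include <cstdint>
#include <cstring>

inline bool looks_like_call_imm32(const unsigned char* insn) {
  return insn[0] == 0xE8;                       // 0xE8 = x86 CALL rel32 opcode
}

inline const unsigned char* call_destination(const unsigned char* insn) {
  assert(looks_like_call_imm32(insn) && "code pattern is not a call imm32 instruction");
  int32_t disp;
  std::memcpy(&disp, insn + 1, sizeof(disp));   // little-endian 32-bit displacement
  return insn + 5 + disp;                       // target = end of the 5-byte call + displacement
}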