// compiledIC.cpp
/*
 * Copyright 1997-2006 Sun Microsystems, Inc.  All Rights Reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa Clara,
 * CA 95054 USA or visit www.sun.com if you need additional information or
 * have any questions.
 */

#include "incls/_precompiled.incl"

// Every time a compiled IC is changed or its type is being accessed,
// either the CompiledIC_lock must be set or we must be at a safe point.

//-----------------------------------------------------------------------------
// Low-level access to an inline cache. Private, since they might not be
// MT-safe to use.

  // fix up the relocations

  // If we let the oop value here be initialized to zero...
  assert(data != NULL || Universe::non_oop_word() == NULL,
         "no raw nulls in CompiledIC oops, because of patching races");
// Returns native address of 'call' instruction in inline-cache. Used by
// the InlineCacheBuffer when it needs to find the stub.

//-----------------------------------------------------------------------------
// High-level access to an inline cache. Guaranteed to be MT-safe.

  // Can be different from method->vtable_index(), due to package-private etc.

  // We can't check this anymore. With lazy deopt we could have already
  // cleaned this IC entry before we even return. This is possible if
  // we ran out of space in the inline cache buffer trying to do the
  // set_next and we safepointed to free up space. This is a benign
  // race because the IC entry was complete when we safepointed so
  // cleaning it immediately is harmless.
  // assert(is_megamorphic(), "sanity check");

  // true if destination is megamorphic stub

  // Cannot rely on cached_oop. It is either an interface or a method.

  // Use unsafe, since an inline cache might point to a zombie method. However, the zombie
  // method is guaranteed to still exist, since we only remove methods after all inline caches
  // are cleaned up.

  // Check that the cached_oop is a klass for non-optimized monomorphic calls.
  // This assertion is invalid for compiler1: a call that does not look optimized
  // (no static stub) can be used for calling directly to the vep without using
  // the inline cache (i.e., cached_oop == NULL)

  // Call to interpreter if destination is either calling to a stub (if it
  // is optimized), or calling to an I2C blob

  // must use unsafe because the destination can be a zombie (and we're cleaning)
  // and the print_compiled_ic code wants to know if the site (in the non-zombie)
  // is to the interpreter.

  // Check if we are calling into our own codeblob (i.e., to a stub)

  // A zombie transition will always be safe, since the oop has already been set to NULL, so
  // we only need to patch the destination

  // Kill any leftover stub we might have too

  // Unsafe transition - create stub.

  // We can't check this anymore. With lazy deopt we could have already
  // cleaned this IC entry before we even return. This is possible if
  // we ran out of space in the inline cache buffer trying to do the
  // set_next and we safepointed to free up space. This is a benign
  // race because the IC entry was complete when we safepointed so
  // cleaning it immediately is harmless.
  // assert(is_clean(), "sanity check");

  // Updating a cache to the wrong entry can cause bugs that are very hard
  // to track down - if a cache entry becomes invalid - we just clean it. In
  // this way it is always the same code path that is responsible for
  // updating and resolving an inline cache.
  //
  // The above is no longer true. SharedRuntime::fixup_callers_callsite will change optimized
  // callsites. In addition, ic_miss code will update a site to monomorphic if it determines
  // that a monomorphic call to the interpreter can now be monomorphic to compiled code.
  // In both of these cases the only thing being modified is the jump/call target and these
  // transitions are mt_safe.

  // the call analysis (callee structure) specifies that the call is optimized
  // (either because of CHA or the static target is final)

  // At code generation time, this call has been emitted as a static call

  // Call via method-klass-holder

  // This is MT safe if we come from a clean-cache and go through a
  // non-verified entry point
  //   ..., (safe) ? "" : "via stub");
  // We can't check this anymore. With lazy deopt we could have already
  // cleaned this IC entry before we even return. This is possible if
  // we ran out of space in the inline cache buffer trying to do the
  // set_next and we safepointed to free up space. This is a benign
  // race because the IC entry was complete when we safepointed so
  // cleaning it immediately is harmless.
  // assert(is_call_to_compiled() || is_call_to_interpreted(), "sanity check");

// is_optimized: Compiler has generated an optimized call (i.e., no inline
// cache). static_bound: The call can be statically bound (i.e., no need to
// use an inline cache).
//
// Note: the following problem exists with Compiler1:
//   - at compile time we may or may not know if the destination is final
//   - if we know that the destination is final, we will emit an optimized
//     virtual call (no inline cache), and need a methodOop to make a call
//   - if we do not know if the destination is final, we emit a standard
//     virtual call, and use CompiledICHolder to call interpreted code
//     (no static call stub has been generated)
//     However in that case we will now notice it is static_bound
//     and convert the call into what looks to be an optimized
//     virtual call. This causes problems in verifying the IC because
//     it looks vanilla but is optimized. Code in is_call_to_interpreted
//     is aware of this and weakens its asserts.

  // static_bound should imply is_optimized -- otherwise we have a
  // performance bug (statically-bindable method is called via
  // dynamically-dispatched call); note: the reverse implication isn't
  // necessarily true -- the call may have been optimized based on compiler
  // analysis (static_bound is only based on "final" etc.)
  //
  // We can't check the assert because we don't have the CompiledIC with which to
  // find the address of the call instruction.
  // CodeBlob* cb = find_blob_unsafe(instruction_address());
  // assert(cb->is_compiled_by_c1() || !static_bound || is_optimized, "static_bound should imply is_optimized");

// Mergers please note: Sun SC5.x CC insists on an lvalue for a reference parameter.

// ----------------------------------------------------------------------------

  // Do not reset stub here: It is too expensive to call find_stub.
  // Instead, rely on caller (nmethod::clear_inline_caches) to clear
  // both the call and its stub.

  // It is a call to interpreted code if it calls to a stub. Hence, the destination
  // must be in the stub part of the nmethod that contains the call.

  // Updating a cache to the wrong entry can cause bugs that are very hard
  // to track down - if a cache entry becomes invalid - we just clean it. In
  // this way it is always the same code path that is responsible for
  // updating and resolving an inline cache.

  // Call to interpreted code

// Compute settings for a CompiledStaticCall. Since we might have to set
// the stub when calling to the interpreter, we need to return arguments.

  // Callee is interpreted code. In any case entering the interpreter
  // puts a converter-frame on the stack to save arguments.

  // Find reloc. information containing this call-site.
  // We check here for opt_virtual_call_type, since we reuse the code
  // from the CompiledIC implementation.

//-----------------------------------------------------------------------------

  // make sure code pattern is actually a call imm32 instruction