cppInterpreter_x86.cpp revision 2117
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

// Routine exists to make tracebacks look decent in the debugger
// while we are recursed in the frame manager/c++ interpreter.
// We could use an address in the frame manager but having
// frames look natural in the debugger is a plus.

// c++ interpreter entry point: this holds that entry point label.

// default registers for state and sender_sp
// state and sender_sp are the same on 32bit because we have no choice.
// state could be rsi on 64bit but it is an arg reg and not callee save
// so r13 is the better choice.

// address AbstractInterpreter::_remove_activation_preserving_args_entry;

  case T_INT    : i = 4; break;
// Is this pc anywhere within code owned by the interpreter?
// This only works for pc that might possibly be exposed to frame
// walkers. It clearly misses all of the actual c++ interpreter
// implementation.

  case T_INT    : /* nothing to do */ break;

  __ pop(t);                            // remove return address first

  // Must return a result for interpreter or compiler. In SSE
  // mode, results are returned in xmm0 and the FPU stack must
  // be empty.

  // Store as float and empty fpu stack
  __ push(t);                           // restore return address

  // retrieve result from frame
  __ ret(0);                            // return from result handler

// tosca based result to c++ interpreter stack based result.
// Result goes to top of native stack.

// A result is in the tosca (abi result) from either a native method call or compiled
// code. Place this result on the java expression stack so the C++ interpreter can use it.

  __ pop(t);                            // remove return address first
  __ push(rdx);                         // pushes useless junk on 64bit

  // Result is in ST(0)/xmm0
  __ jmp(t);
  // return from result handler

// A result is in the java expression stack of the interpreted method that has just
// returned. Place this result on the java expression stack of the caller.
//
// The current interpreter activation in rsi/r13 is for the method just returning its
// result. So we know that the result of this method is on the top of the current
// execution stack (which is pre-pushed) and will be returned to the top of the caller's
// stack. The top of the caller's stack is the bottom of the locals of the current
// activation.
// Because of the way activations are managed by the frame manager the value of rsp is
// below both the stack top of the current activation and naturally the stack top
// of the calling activation. This enables this routine to leave the return address
// to the frame manager on the stack and do a vanilla return.
//
// On entry: rsi/r13 - interpreter state of activation returning a (potential) result
// On return: rsi/r13 - unchanged
//            rax - new stack top for caller activation (i.e. activation in _prev_link)

  // return top two words on current expression stack to caller's expression stack
  // The caller's expression stack is adjacent to the current frame manager's interpreterState
  // except we allocated one extra word for this interpreterState so we won't overwrite it
  // when we return a two word result.

// A result is in the java expression stack of the interpreted method that has just
// returned. Place this result in the native abi that the caller expects.
//
// Similar to generate_stack_to_stack_converter above. Called at a similar time from the
// frame manager except in this situation the caller is native code (c1/c2/call_stub)
// and so rather than return the result onto the caller's java expression stack we return
// the result in the expected location based on the native abi.
// On entry: rsi/r13 - interpreter state of activation returning a (potential) result
// On return: rsi/r13 - unchanged
//            Other registers changed [rax/rdx/ST(0) as needed for the result returned]

  // make it look good in the debugger

// On entry the "locals" argument points to locals[0] (or where it would be in case no locals in
// a static method). "state" contains any previous frame manager state which we must save a link
// to in the newly generated state object. On return "state" is a pointer to the newly allocated
// state object. We must allocate and initialize a new interpreterState object and the method
// expression stack. Because the returned result (if any) of the method will be placed on the caller's
// expression stack and this will overlap with locals[0] (and locals[1] if double/long) we must
// be sure to leave space on the caller's stack so that this result will not overwrite values when
// locals[0] and locals[1] do not exist (and in fact are return address and saved rbp). So when
// we are non-native we in essence ensure that locals[0-1] exist. We play an extra trick in
// non-product builds and initialize this last local with the previous interpreterState as
// this makes things look real nice in the debugger.

// Assumes locals == &locals[0]
// Assumes state == any previous frame manager state (assuming call path from c++ interpreter)
// Assumes rax = return address
// Modifies rcx, rdx, rax
// state == address of new interpreterState
// rsp == bottom of method's expression stack.

  // On entry sp is the sender's sp. This includes the space for the arguments
  // that the sender pushed. If the sender pushed no args (a static) and the
  // caller returns a long then we need two words on the sender's stack which
  // are not present (although when we return a restore full size stack the
  // space will be present). If we didn't allocate two words here then when
  // we "push" the result onto the caller's stack we would overwrite the return
  // address and the saved rbp. Not good. So simply allocate 2 words now
  // just to be safe. This is the "static long no_params() method" issue.
  // We don't need this for native calls because they return the result in a
  // register and the stack is expanded in the caller before we store
  // the results on the stack.

  // Now that we are assured of space for the stack result, setup typical linkage

  // initialize the "shadow" frame for use since the C++ interpreter is not directly
  // recursive. Simpler to recurse but we can't trim the expression stack as we call

  // entries run from -1..x where &monitor[x] ==

  // Must not attempt to lock method until we enter interpreter as gc won't be able to find the
  // initial frame. However we allocate a free monitor so we don't have to shuffle the expression stack

  // Allocate initial monitor and pre initialize it
  // get synchronization object

  // add space for monitor & lock

  // compute full expression stack limit
  const int extra_stack = 0; //6815692//methodOopDesc::extra_stack_words();

  // Allocate expression stack

  // Make sure stack is properly aligned and sized for the abi
  __ andptr(rsp, -16);                  // must be 16 byte boundary (see amd64 ABI)

// Helpers for commoning out cases in the various type of method entries.

// increment invocation count & check for overflow
//
// Note: checking for negative value instead of overflow
//       so we have a 'sticky' overflow test
//
// rcx: invocation counter

  // Update standard invocation counters

  // profile_method is non-null only for interpreted method so
  // profile_method != NULL == !native_call
  // BytecodeInterpreter only calls for native so code is elided.

// C++ interpreter on entry
// rsi/r13 - new interpreter state pointer
// rbp - interpreter frame pointer
//
// On return (i.e. jump to entry_point) [ back to invocation of interpreter ]
// rcx - rcvr (assuming there is one)
// top of stack return address of interpreter caller
// rsi/r13 - previous interpreter state pointer

  // InterpreterRuntime::frequency_counter_overflow takes one argument
  // indicating if the counter overflow occurs at a backwards branch (non-NULL bcp).
  // The call returns the address of the verified entry point for the method or NULL
  // if the compilation did not complete (either went background or bailed out).

  // for c++ interpreter can rsi really be munged?

// see if we've got enough room on the stack for locals plus overhead.
// the expression stack grows down incrementally, so the normal guard
// page mechanism will work for that.
//
// Registers live on entry:
//
// rdx: number of additional locals this frame needs (what we must check)
// rsi/r13: previous interpreter frame state object

  // NOTE: since the additional locals are also always pushed (wasn't obvious in
  // generate_method_entry), the guard should work for them too.

  // monitor entry size: see picture of stack set (generate_method_entry) and frame_i486.hpp

  // total overhead size: entry_size + (saved rbp, thru expr stack bottom).
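The `andptr(rsp, -16)` trick used above to satisfy the amd64 ABI can be sketched in plain C++. This is an illustrative sketch, not HotSpot code: masking an address with -16 (i.e. `~15`) clears the low four bits, which can only lower the address, so the aligned pointer never moves past the region already reserved.

```cpp
#include <cassert>
#include <cstdint>

// Align a stack-pointer value down to a 16-byte boundary,
// mirroring the effect of `and rsp, -16`.
inline uintptr_t align_down_16(uintptr_t sp) {
  return sp & ~uintptr_t(15);   // clear the low 4 bits
}
```

The same mask works for any power-of-two alignment; 16 is what the amd64 ABI requires at call sites.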
  // compute rsp as if this were going to be the last frame on
  // the stack before the red zone

  // save rsi == caller's bytecode ptr (c++ previous interp. state)
  // QQQ problem here?? rsi overload????

  // locals + overhead, in bytes
  // Always give one monitor to allow us to start interp if sync method.
  // Any additional monitors need a check when moving the expression stack
  const int extra_stack = 0; //6815692//methodOopDesc::extra_stack_entries();

  // verify that thread stack base is non-zero
  __ stop("stack base is zero");

  // verify that thread stack size is non-zero
  __ stop("stack size is zero");
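The overflow check described above (locals plus overhead compared against the thread's stack limit) can be sketched as ordinary pointer arithmetic. The layout and guard size here are assumptions for illustration, not the real HotSpot constants:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Sketch: the thread's usable stack runs from stack_base down to
// stack_base - stack_size. A new activation of frame_bytes is rejected
// if it would dip into the guard area of guard_bytes just above the
// true bottom (the "red zone" boundary).
inline bool frame_fits(uintptr_t stack_base, size_t stack_size,
                       uintptr_t sp, size_t frame_bytes, size_t guard_bytes) {
  uintptr_t bottom = stack_base - stack_size;   // lowest usable address
  uintptr_t limit  = bottom + guard_bytes;      // guard-zone boundary
  return sp - frame_bytes >= limit;             // frame must stay above it
}
```

This is why the code verifies that the thread's stack base and size are non-zero first: with either at zero the computed limit is meaningless.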
  // Add stack base to locals and subtract stack size

  // We should have a magic number here for the size of the c++ interpreter frame.
  // We can't actually tell this ahead of time. The debug version size is around 3k,
  // product is 1k and fastdebug is 4k.
  // Use the maximum number of pages we might bang.

  // Only need this if we are stack banging which is temporary while

  // check against the current stack bottom

  // throw exception; return address becomes throwing pc

  // all done with frame size check

// Find preallocated monitor and lock method (C++ interpreter)
// assumes state == rsi/r13 == pointer to current interpreterState
// minimally destroys rax, rdx|c_rarg1, rdi

  // find initial monitor i.e. monitors[-1]
  __ stop("method doesn't need synchronization");
  // get synchronization object
  __ stop("wrong synchronization object");
  // can destroy rax, rdx|c_rarg1, rcx, and (via call_VM) rdi!

// Call an accessor method (assuming it is resolved, otherwise drop into vanilla (slow path) entry
// rsi/r13: senderSP must be preserved for slow path, set SP to it on fast path

  // do fastpath for resolved accessor methods

  // If we need a safepoint check, generate full interpreter entry.

  // Code: _aload_0, _(i|a)getfield, _(i|a)return or any rewrites thereof; parameter size = 1
  //
  // Note: We can only use this code if the getfield has been resolved
  //       and if we don't have a null-pointer exception => check for
  //       these conditions first and use slow path if necessary.

  // check if local 0 != NULL and read field

  // read first instruction word and extract bytecode @ 1 and index @ 2
  // Shift codes right to get the index on the right.
  // The bytecode fetched looks like <index><0xb4><0x2a>

  // rcx: receiver - do not destroy since it is needed for slow path!
  // rdx: constant pool cache index
  // rdi: constant pool cache

  // check if getfield has been resolved and read constant pool cache entry
  // check the validity of the cache entry by testing whether _indices field
  // contains Bytecode::_getfield in b1 byte.
  // Note: constant pool entry is not valid before bytecode is resolved

  // Need to differentiate between igetfield, agetfield, bgetfield etc.
  // because they are different sizes.
  // Use the type from the constant pool cache

  // Make sure we don't need to mask rdx for tosBits after the above shift
  __ stop("what type is this?");
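The instruction-word decoding described above can be sketched in plain C++. This is an illustrative model of the layout the comments give (`<index><0xb4><0x2a>`, i.e. little-endian: `aload_0` = 0x2a in the low byte, `getfield` = 0xb4 next, then the 16-bit constant pool cache index); the struct and function names are my own, not HotSpot's:

```cpp
#include <cassert>
#include <cstdint>

struct AccessorWord {
  bool     is_accessor;  // word matches the aload_0 / getfield pattern
  uint16_t cp_index;     // constant pool cache index from the top 16 bits
};

// Decode the first 32-bit instruction word of a candidate fast
// accessor method: bytecode @ 1, index @ 2, as the comments describe.
inline AccessorWord decode_accessor(uint32_t word) {
  bool ok = (word & 0xFF)        == 0x2a    // _aload_0
         && ((word >> 8) & 0xFF) == 0xb4;   // _getfield
  return { ok, uint16_t(word >> 16) };      // shift right: index ends up on the right
}
```

The generated assembly does the same thing with shifts on the loaded word rather than byte loads, which is why the comment stresses shifting "to get the index on the right".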
// All the rest are a 32 bit wordsize

  // generate a vanilla interpreter entry as the slow path

// We will enter the c++ interpreter looking like it was
// called by the call_stub; this will cause it to return
// a tosca result to the invoker which might have been
// the c++ interpreter itself.

// C++ Interpreter stub for calling a native method.
// This sets up a somewhat different looking stack for calling the native method
// than the typical interpreter frame setup but still has the pointer to
// determine code generation flags
//
// rcx: receiver (unused)
// rsi/r13: previous interpreter state (if called from C++ interpreter) must preserve

  // get parameter size (always needed)
  // rcx: size of parameters

  // for natives the size of locals is zero

  // compute beginning of parameters/locals

  // initialize fixed part of activation frame

// Assumes rax = return address

// allocate and initialize new interpreterState and method expression stack
// IN(state)  -> previous frame manager state (NULL from stub/c1/c2)
// destroys rax, rcx, rdx
// OUT(state) -> new interpreterState
// OUT(rsp)   -> bottom of method's expression stack

  // start with NULL previous state

  // duplicate the alignment rsp got after setting stack_base
  __ andptr(rax, -16);                  // must be 16 byte boundary (see amd64 ABI)
  __ stop("broken stack frame setup in interpreter");
  // Since at this point in the method invocation the exception handler
  // would try to exit the monitor of synchronized methods which hasn't
  // been entered yet, we set the thread local variable
  // _do_not_unlock_if_synchronized to true. The remove_activation will
  // check this flag.

  // make sure method is native & not abstract
  __ stop("tried to execute non-native method as native");
  __ stop("tried to execute abstract method in interpreter");

  // increment invocation count & check for overflow

  // reset the _do_not_unlock_if_synchronized flag

  // check for synchronized native methods
  //
  // Note: This must happen *after* invocation counter check, since
  //       when overflow happens, the method should not be locked.

  // potentially kills rax, rcx, rdx, rdi

  // no synchronization necessary
  __ stop("method needs synchronization");
  // allocate space for parameters
  __ andptr(rsp, -16);                  // must be 16 byte boundary (see amd64 ABI)
  __ addptr(t, 2*wordSize);             // allocate two more slots for JNIEnv and possible mirror

  // call signature handler
  // The generated handlers do not touch RBX (the method oop).
  // However, large signatures cannot be cached and are generated
  // each time here. The slow-path generator will blow RBX
  // sometime, so we must reload it after the call.

  // result handler is in rax
  // get native function entry point

  // pass mirror handle if static call
  // copy mirror into activation object

  __ stop("Wrong thread state in native stub");
// Change state to native (we save the return address in the thread, since it might not
// be pushed on the stack when we do a stack traversal). It is enough that the pc()
// points into the right code segment. It does not have to be the correct return pc.

  // result potentially in rdx:rax or ST0

// The potential result is in ST(0) & rdx:rax
// With C++ interpreter we leave any possible result in ST(0) until we are in result handler and then
// we do the appropriate stuff for returning the result. rdx:rax must always be saved because just about
// anything we do here will destroy it, st(0) is only saved if we re-enter the vm where it would
// be destroyed.

// It is safe to do these pushes because state is _thread_in_native and return address will be found
// via _last_native_pc and not via _last_java_sp

  // Must save the value of ST(0)/xmm0 since it could be destroyed before we get to result handler
  // save rax:rdx for potential use by result handler.

  // Either restore the MXCSR register after returning from the JNI Call
  // or verify that it wasn't changed.

  // Either restore the x87 floating point control word after returning
  // from the JNI call or verify that it wasn't changed.

  // Write serialization page so VM thread can do a pseudo remote membar.
  // We use the current thread pointer to calculate a thread specific
  // offset to write to within the page. This minimizes bus traffic
  // due to cache line collision.

  // check for safepoint operation in progress and/or pending suspend requests;
  // threads running native code are expected to self-suspend
  // when leaving the _thread_in_native state. We need to check for
  // pending suspend requests here.

  // Don't use call_VM as it will see a possible pending exception and forward it
  // and never return here preventing us from clearing _last_native_pc down below.
  // Also can't use call_VM_leaf either as it will check to see if rsi & rdi are
  // preserved and correspond to the bcp/locals pointers.
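The thread-specific serialization-page offset described above can be sketched as a small hash of the thread pointer. The shift and page mask here are assumed illustration values, not HotSpot's actual constants:

```cpp
#include <cassert>
#include <cstdint>

// Sketch: derive a word-aligned, page-bounded offset from the thread
// pointer. Different threads land on different offsets, so their
// serialization writes tend to hit different cache lines, which is the
// bus-traffic point made in the comment above.
inline uintptr_t serialize_offset(uintptr_t thread_ptr,
                                  unsigned shift = 4,            // assumed
                                  uintptr_t page_mask = 0xFFF) { // 4K page, assumed
  return (thread_ptr >> shift)                    // drop allocation-alignment bits
       & page_mask                                // stay within the page
       & ~uintptr_t(sizeof(uintptr_t) - 1);       // keep the store word-aligned
}
```

The VM thread then protects the page to force a fault-based synchronization with all threads that have written to it, a pseudo remote membar.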
  // If result was an oop then unbox and save it in the frame

  // keep stack depth as expected by pushing oop which will eventually be discarded

  // QQQ Seems like for native methods we simply return and the caller will see the pending
  // exception and do the right thing. Certainly the interpreter will, don't know about
  // Seems that the answer to the above is no, this is wrong. The old code would see the exception
  // and forward it before doing the unlocking and notifying jvmdi that method has exited.
  // This seems wrong; need to investigate the spec.

  // handle exceptions (exception handling will handle unlocking!)

  // There are potential results on the stack (rax/rdx, ST(0)); we ignore these and simply
  // return and let the caller deal with the exception. This skips the unlocking here which
  // seems wrong but seems to be what the asm interpreter did. Can't find this in the spec.
  // Note: must preserve method in rbx

  // This skips unlocking!! This seems to be what the asm interpreter does but seems
  // very wrong. Not clear if this violates the spec.

  // do unlocking if necessary

  // the code below should be shared with interpreter macro assembler implementation

  // BasicObjectLock will be first in list, since this is a synchronized method. However, need
  // to check that the object has not been unlocked by an explicit monitorexit bytecode.

  // Entry already unlocked, need to throw exception

  // unlock can blow rbx so restore it for path that needs it below

  // the exception handler code notifies the runtime of method exits
  // too. If this happens before, method entry/exit notifications are
  // not properly paired (was bug - gri 11/22/99).

  // restore potential result in rdx:rax, call result handler to restore potential result in ST0 & handle result
  __ call(t);
  // call result handler to convert to tosca form

  // invocation counter overflow

  // Handle overflow of counter and compile method

// Generate entries that will put a result type index into rcx

// Generate entries that will put a result type index into rcx

  // deopt needs to jump to here to enter the interpreter (return a result)
  // deopt needs to jump to here to enter the interpreter (return a result)
  // deopt needs to jump to here to enter the interpreter (return a result)
  // deopt needs to jump to here to enter the interpreter (return a result)
  // deopt needs to jump to here to enter the interpreter (return a result)
  // deopt needs to jump to here to enter the interpreter (return a result)
  // deopt needs to jump to here to enter the interpreter (return a result)

  // an index is present in rcx that lets us move any possible result being
  // returned to the interpreter's stack

  // Because we have a full sized interpreter frame on the youngest
  // activation the stack is pushed too deep to share the tosca to
  // stack converters directly. We shrink the stack to the desired
  // amount and then push the result and then re-extend the stack.
  // We could have the code in size_activation layout a short
  // frame for the top activation but that would look different
  // than say sparc (which needs a full size activation because
  // the windows are in the way). Really it could be short? QQQ

  // setup rsp so we can push the "result" as needed.

  // Address index(noreg, rcx, Address::times_ptr);
  // __ movl(rcx, Address(noreg, rcx, Address::times_ptr, int(AbstractInterpreter::_tosca_to_stack)));

  // result if any on stack already

// Generate the code to handle a more_monitors message from the c++ interpreter

  // 1. compute new pointers
  // rsp: old expression stack top

  // 2. move expression stack contents

  // now zero the slot so we can find it.

// Initial entry to C++ interpreter from the call_stub.
// This entry point is called the frame manager since it handles the generation
// of interpreter activation frames via requests directly from the vm (via call_stub)
// and via requests from the interpreter. The requests from the call_stub happen
// directly thru the entry point. Requests from the interpreter happen via returning
// from the interpreter and examining the message the interpreter has returned to
// the frame manager. The frame manager can take the following requests:
//
// NO_REQUEST - error, should never happen.
// MORE_MONITORS - need a new monitor. Shuffle the expression stack on down and
//                 allocate a new monitor.
// CALL_METHOD - setup a new activation to call a new method. Very similar to what
//               happens during the entry via the call stub.
// RETURN_FROM_METHOD - remove an activation. Return to interpreter or call stub.
//
// rcx: receiver - unused (retrieved from stack as needed)
//
// [ return address ] <--- rsp
//
// We are free to blow any registers we like because the call_stub which brought us here
// initially has preserved the callee save registers already.

// Because we redispatch "recursive" interpreter entries thru this same entry point
// the "input" register usage is a little strange and not what you expect coming in.
// From the call_stub state is NULL but on "recursive" dispatches it is what you'd expect.
//
// rsi: current interpreter state (C++ interpreter) must preserve (null from call_stub/c1/c2)

// A single frame manager is plenty as we don't specialize for synchronized. We could and
// the code is pretty much ready. Would need to change the test below and for good measure
// modify generate_interpreter_state to only do the (pre) sync stuff for synchronized
// routines. Not clear this is worth it yet.

// Fast accessor methods share this entry point.
// This works because the frame manager is in the same codelet.

  // save sender sp (doesn't include return address)

  // const Address monitor_block_top (rbp, frame::interpreter_frame_monitor_block_top_offset * wordSize);
  // const Address monitor_block_bot (rbp, frame::interpreter_frame_initial_sp_offset * wordSize);
  // const Address monitor(rbp, frame::interpreter_frame_initial_sp_offset * wordSize - (int)sizeof(BasicObjectLock));

  // get parameter size (always needed)
  // rcx: size of parameters

  // see if we've got enough room on the stack for locals plus overhead.

  // c++ interpreter does not use stack banging or any implicit exceptions
  // leave for now to verify that check is proper.

  // compute beginning of parameters (rdi)

  // rdx - # of additional locals
  // allocate space for locals
  // explicitly initialize locals

// Assumes rax = return address

// allocate and initialize new interpreterState and method expression stack
// IN(state)  -> any current interpreter activation
// destroys rax, rcx, rdx, rdi
// OUT(state) -> new interpreterState
// OUT(rsp)   -> bottom of method's expression stack

  // c++ interpreter does not use stack banging or any implicit exceptions
  // leave for now to verify that check is proper.

  // Call interpreter; enter here if message is
  // set and we know stack size is valid

  // We can setup the frame anchor with everything we want at this point
  // as we are thread_in_Java and no safepoints can occur until we go to
  // vm mode. We do have to clear flags on return from vm but that is it.

  // state is preserved since it is callee saved

  // examine msg from interpreter to determine next action

  // Allocate more monitor space, shuffle expression stack....

  // uncommon trap needs to jump to here to enter the interpreter (re-execute current bytecode)

  // Load the registers we need.

//=============================================================================
// Returning from a compiled method into a deopted method. The bytecode at the
// bcp has completed.
// The result of the bytecode is in the native abi (the tosca
// for the template based interpreter). Any stack space that was used by the
// bytecode that has completed has been removed (e.g. parameters for an invoke)
// so all that we have to do is place any pending result on the expression stack
// and resume execution on the next bytecode.

// Current frame has caught an exception we need to dispatch to the
// handler. We can get here because a native interpreter frame caught
// an exception in which case there is no handler and we must rethrow.
// If it is a vanilla interpreted frame then we simply drop into the
// interpreter and let it do the lookup.

  // rdx: return address/pc that threw exception

  // restore state pointer.

  // Store exception where the interpreter will expect it

  // is current frame vanilla or native?

  // We drop thru to unwind a native interpreted frame with a pending exception
  // We jump here for the initial interpreter frame with exception pending
  // We unwind the current activation and forward it to our caller.

  // unwind rbp, return stack to unextended value and re-push return address

// Return point from a call which returns a result in the native abi
// A pending exception may be present in which case there is no result present

  // The FPU stack is clean if UseSSE >= 2 but must be cleaned in other cases
  for (int i = 1; i < 8; i++) {
    // free FPU stack slot i (loop body elided in this excerpt)
  }

  for (int i = 1; i < 8; i++) {
    // free FPU stack slot i (loop body elided in this excerpt)
  }
// Result if any is in tosca. The java expression stack is in the state that the
// calling convention left it (i.e. params may or may not be present)
// Copy the result from tosca and place it on the java expression stack.

  // Restore rsi/r13 as compiled code may not preserve it

  // restore stack to what we had when we left (in case i2c extended it)

  // If there is a pending exception then we don't really have a result to process

  // get method just executed

  // callee left args on top of expression stack, remove them

  // Address index(noreg, rax, Address::times_ptr);
  // __ movl(rcx, Address(noreg, rcx, Address::times_ptr, int(AbstractInterpreter::_tosca_to_stack)));

// An exception is being caught on return to a vanilla interpreter frame.
// Empty the stack and resume interpreter

  // Exception present, empty stack

// Return from interpreted method; we return result appropriate to the caller (i.e. "recursive"
// interpreter call, or native) and unwind this interpreter activation.
// All monitors should be unlocked.

  // Copy result to caller's java stack

  // Address index(noreg, rax, Address::times_ptr);
  // __ movl(rax, Address(noreg, rax, Address::times_ptr, int(AbstractInterpreter::_stack_to_stack)));

// returning to interpreter method from "recursive" interpreter call

  // result converter left rax pointing to top of the java stack for method we are returning
  // to. Now all we must do is unwind the state from the completed call

// Resume the interpreter. The current frame contains the current interpreter
// state == interpreterState object for method we are resuming

  // result if any on stack already

  // convert result and unwind initial activation
  // Address index(noreg, rax, Address::times_ptr);

  // BytecodeInterpreter object

  // return restoring the stack to the original sender_sp value
  // set stack to sender's sp

// OSR request, adjust return address to make current frame into adapter frame

  // We are going to pop this frame. Is there another interpreter frame underneath
  // us?

  // Move buffer to the expected parameter location

  // We know we are calling compiled so push specialized return
  // method uses specialized entry, push a return so we look like call stub setup
  // this path will handle the fact that the result is returned in registers and not
  // on the stack

  // set stack to sender's sp

// Call a new method. All we do is (temporarily) trim the expression stack,
// push a return address to bring us back to here and leap to the new entry.

  // stack points to next free location and not top element on expression stack
  // method expects sp to be pointing to topmost element

  // don't need a return address if reinvoking interpreter

  // Make it look like call_stub calling conventions

  // Get (potential) receiver

  // method uses specialized entry, push a return so we look like call stub setup
  // this path will handle the fact that the result is returned in registers and not
  // on the stack

  __ stop("Bad message from interpreter");
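The message dispatch that ends with the `stop("Bad message from interpreter")` guard above can be sketched as a switch over the request kinds the frame manager accepts. The enum values are taken from the comments earlier in this file; the dispatch function and its return strings are illustrative, not the generated code:

```cpp
#include <cassert>
#include <string>

// The requests the frame manager can receive from the interpreter,
// per the comments at the frame manager entry point.
enum class FrameManagerMsg {
  no_request,          // error, should never happen
  more_monitors,       // need a new monitor
  call_method,         // setup a new activation
  return_from_method   // remove an activation
};

inline std::string dispatch(FrameManagerMsg msg) {
  switch (msg) {
    case FrameManagerMsg::more_monitors:
      return "shuffle expression stack, allocate monitor";
    case FrameManagerMsg::call_method:
      return "set up new activation";
    case FrameManagerMsg::return_from_method:
      return "remove activation, return result";
    case FrameManagerMsg::no_request:
      break;
  }
  return "error: should never happen";   // the `stop(...)` case above
}
```

In the generated code the same decision is made by comparing the `_msg` field of the interpreterState after the interpreter returns.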
// Interpreted method "returned" with an exception; pass it on...
// We handle result (if any) differently based on return to interpreter or call_stub

// We will unwind the current (initial) interpreter frame and forward
// the exception to the caller. We must put the exception in the
// expected register and clear pending exception and then forward.

  // determine code generation flags

// Deoptimization helpers for C++ interpreter

// How much stack a method activation needs in words.
  const int stub_code = 4;  // see generate_call_stub

  // Save space for one monitor to get into the interpreted method in case
  // the method is synchronized

  // total static overhead size. Account for interpreter state object, return
  // address, saved rbp and 2 words for a "static long no_params() method" issue.

  const int extra_stack = 0; //6815692//methodOopDesc::extra_stack_entries();

// returns the activation size.

  // What about any vtable?

  // This gets filled in later but make it something recognizable for now
  // *current->register_addr(GR_Iprev_state) = (intptr_t) prev; // Make the prev callee look proper

  // Need +1 here because stack_base points to the word just above the first expr stack entry
  // and stack_limit is supposed to point to the word just below the last expr stack entry.
  // See generate_compute_interpreter_state.
  int extra_stack = 0; //6815692//methodOopDesc::extra_stack_entries();
  // ... "Stack top out of range");

// NOTE: this code must exactly mimic what InterpreterGenerator::generate_compute_interpreter_state()
// does as far as allocating an interpreter frame.
// If interpreter_frame != NULL, set up the method, locals, and monitors.
// The frame interpreter_frame, if not NULL, is guaranteed to be the right size,
// as determined by a previous call to this method.
// It is also guaranteed to be walkable even though it is in a skeletal state
// NOTE: return size is in words not bytes
// NOTE: tempcount is the current size of the java expression stack. For top most
//       frames we will allocate a full sized expression stack and not the curback
//       version that non-top frames have.

  // Calculate the amount our frame will be adjusted by the callee. For top frame
  // this is zero.

  // NOTE: ia64 seems to do this wrong (or at least backwards) in that it
  // calculates the extra locals based on itself. Not what the callee does
  // to it. So it ignores last_frame_adjust value. Seems suspicious as far
  // as getting sender_sp correct.

  // First calculate the frame size without any java expression stack
  // Now with full size expression stack
  int extra_stack = 0; //6815692//methodOopDesc::extra_stack_entries();
  // and now with only live portion of the expression stack

  // the size the activation is right now. Only top frame is full size

  /* Now fill in the interpreterState object */

  // The state object is the first thing on the frame and easily located

  // Find the locals pointer. This is rather simple on x86 because there is no
  // confusing rounding at the callee to account for. We can trivially locate
  // our locals based on the current fp().
  // Note: the + 2 is for handling the "static long no_params() method" issue.
  // (too bad I don't really remember that issue well...)

  // If the caller is interpreted we need to make sure that locals points to the first
  // argument that the caller passed and not in an area where the stack might have been extended,
  // because the stack to stack converter needs a proper locals value in order to remove the
  // arguments from the caller and place the result in the proper location. Hmm maybe it'd be
  // simpler if we simply stored the result in the BytecodeInterpreter object and let the c++ code
  // adjust the stack?? HMMM QQQ
  //
  // locals must agree with the caller because it will be used to set the
  // caller's tos when we return.
  // locals = caller->unextended_sp() + (method->size_of_parameters() - 1);
  // this is where a c2i would have placed locals (except for the +2)

  /* +1 because stack is always prepushed */

  // BytecodeInterpreter::pd_layout_interpreterState(cur_state, interpreter_return_address, interpreter_frame->fp());

#endif // CC_INTERP (all)
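The activation-size bookkeeping described above (stub code, fixed overhead, interpreter state object, one pre-allocated monitor, locals, and a full-size expression stack, all counted in words) can be sketched as a simple sum. The per-component word counts other than `stub_code` and the 2+2 overhead words named in the comments are parameters here, since the real values depend on build configuration:

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the activation-size arithmetic, in words.
inline size_t activation_size_words(size_t state_words,    // interpreter state object
                                    size_t monitor_words,  // one monitor, in words
                                    size_t locals,         // locals slots
                                    size_t max_stack) {    // full expression stack
  const size_t stub_code   = 4;  // see generate_call_stub
  const size_t overhead    = 2   // return address, saved rbp
                           + 2;  // "static long no_params() method" slack
  const size_t extra_stack = 0;  // 6815692
  return stub_code + overhead + state_words + monitor_words
       + locals + max_stack + extra_stack;
}
```

Only the top-most frame gets the full `max_stack`; as the comments note, non-top frames are laid out with just the live portion of the expression stack.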