Lines Matching refs:to

16  * 2 along with this work; if not, write to the Free Software Foundation,
257 case T_INT : /* nothing to do */ break;
258 case T_LONG : /* nothing to do */ break;
259 case T_VOID : /* nothing to do */ break;
260 case T_FLOAT : /* nothing to do */ break;
261 case T_DOUBLE : /* nothing to do */ break;
321 // Increment counter in methodOop (we don't need to load it, it's in ecx).
346 // Test to see if we should create a method data oop
350 // if no method data exists, go to profile_method
368 // On return (i.e. jump to entry_point) [ back to invocation of interpreter ]
370 // rdx is not restored. Doesn't appear to really be set.
390 // and jump to the interpreted entry.
415 // bottom). be sure to change this if you add/subtract anything
416 // to/from the overhead area
425 // then we need to verify there is enough stack space remaining
430 // compute rsp as if this were going to be the last frame on
455 // Add stack base to locals and subtract stack size
559 // r14: pointer to locals
593 __ push(0); // reserve word for pointer to expression stack bottom
609 // r13: senderSP must preserve for slow path, set SP to it on fast path
638 // Shift codes right to get the index on the right.
683 // Need to differentiate between igetfield, agetfield, bgetfield etc.
687 // Make sure we don't need to mask edx after the above shift
757 // * In the G1 code we do not check whether we need to block for
767 // of java.lang.Reference) and jump to the slow path if null. If the
769 // and so we don't need to call the G1 pre-barrier. Thus we can use the
770 // regular method entry code to generate the NPE.
776 // r13: senderSP must preserve for slow path, set SP to it on fast path
788 // If the receiver is null then it is OK to jump to the slow path.
799 // Generate the G1 pre-barrier code to log the value of
806 // Generate the G1 pre-barrier code to log the value of
817 __ mov(rsp, r13); // set sp to sender sp
829 // If G1 is not enabled then attempt to go through the accessor entry point
859 // we only add a handful of words to the stack
892 __ stop("tried to execute non-native method as native");
899 __ stop("tried to execute abstract method in interpreter");
905 // would try to exit the monitor of synchronized methods which hasn't
907 // _do_not_unlock_if_synchronized to true. The remove_activation will
998 assert(InterpreterRuntime::SignatureHandlerGenerator::to() == rsp,
1032 // pass handle to mirror
1060 // segment. It does not have to be the correct return pc.
1075 // Change state to native
1087 // NOTE: The order of these pushes is known to frame::interpreter_frame_result
1088 // in order to extract the result of a method call. If the order of these
1089 // pushes change or anything else is added to the stack then the code in
1107 // We use the current thread pointer to calculate a thread specific
1108 // offset to write to within the page. This minimizes bus traffic
1109 // due to cache line collision.
1129 // call_VM_leaf either as it will check to see if r13 & r14 are
1130 // preserved and correspond to the bcp/locals pointers. So we do a
1199 // restore r13 to have legal interpreter frame, i.e., bci == 0 <=>
1208 // Note: At some point we may want to unify this with the code
1230 // synchronized method. However, need to check that the object
1243 // Entry already unlocked, need to throw exception
1262 // restore potential result in edx:eax, call result handler to
1278 __ mov(rsp, t); // set sp to sender sp
1291 // Generic interpreted method entry to (asm) interpreter
1360 __ stop("tried to execute native method as non-native");
1367 __ stop("tried to execute abstract method in interpreter");
1373 // handler would try to exit the monitor of synchronized methods
1375 // _do_not_unlock_if_synchronized to true. The remove_activation
1446 // We have decided to profile this method in the interpreter
1486 // Assuming that we don't go to one of the trivial specialized entries
1487 // the stack will look like below when we are ready to execute the
1493 // the return address is moved to the end of the locals).
1551 // the compiled version to the intrinsic version.
1574 // bottom). be sure to change this if you add/subtract anything
1575 // to/from the overhead area
1600 // The frame interpreter_frame, if not NULL, is guaranteed to be the
1601 // right size, as determined by a previous call to this method.
1602 // It is also guaranteed to be walkable even though it is in a skeletal state
1613 // for the callee's params we only need to account for the extra
1674 // Restore sp to interpreter_frame_last_sp even though we are going
1675 // to empty the expression stack for the exception processing.
1679 __ restore_bcp(); // r13 points to call/send
1702 __ jmp(rax); // jump to exception handler (may be _remove_activation_entry!)
1732 // Check to see whether we are returning to a deoptimized frame.
1749 // Compute size of arguments for saving when returning to
1788 // mutations to those outgoing arguments to be preserved and other
1789 // constraints basically require this frame to look exactly as
1792 // last_sp) and the top of stack. Rather than force deopt to
1794 // fixup routine to move the mutated arguments onto the top of our
1809 // call profiling. We have to restore the mdp for the current bcp.
1850 __ jmp(rbx); // jump to exception
1970 // Call a little run-time stub to avoid blow-up for each bytecode.