Lines Matching defs:to

16  * 2 along with this work; if not, write to the Free Software Foundation,
336 // If 'top' is cached, declare it useful to preserve cached node
353 // list. Consider all non-useful nodes to be useless, i.e., dead nodes.
394 // Only need to remove this out-edge to the useless node
476 // Recompiling without allowing machine instructions to subsume loads
547 // This is a pretty expensive way to compute a size,
553 // may be shared by several calls to scratch_emit_size.
555 // expensive, since it has to grab the code cache lock.
663 tty->print_cr("PrintAssembly request changed to PrintOptoAssembly");
759 // This is done by a special, unique ReturnNode bound to root.
765 // to whatever caller is dynamically above us on the stack.
766 // This is done by a special, unique RethrowNode bound to root.
820 // This output goes directly to the tty, not the compiler log.
821 // To enable tools to match it up with the compilation activity,
822 // be sure to tag this tty output with the compile ID.
846 // Check if we want to skip execution of all compiled code.
1013 // First set TOP to NULL to give safe behavior during creation of RootNode
1016 // Now that you have a Root to point to, create the real TOP
1020 // Create Debug Information Recorder to record scopes, oopmaps, etc.
1128 // Calling Node::setup_is_top gives the nodes a chance to adjust
1152 // additional work that needs to be done to identify reachable nodes
1178 // Print the log message to tty
1189 // Print the log message to tty
1236 // Leave a bread crumb trail pointing to the original node:
1305 // space to include all of the array body. Only the header, klass
1357 const TypeInstPtr *to = tj->isa_instptr();
1358 if( to && _AliasLevel >= 2 && to != TypeOopPtr::BOTTOM ) {
1359 ciInstanceKlass *k = to->klass()->as_instance_klass();
1361 if (to->klass() != ciEnv::current()->Class_klass() ||
1366 tj = to = TypeInstPtr::make(TypePtr::BotPTR,to->klass(),false,0,offset);
1369 tj = to; // Keep NotNull and klass_is_exact for instance type
1370 } else if( ptr == TypePtr::NotNull || to->klass_is_exact() ) {
1374 tj = to = TypeInstPtr::make(TypePtr::BotPTR,to->klass(),false,0,offset);
1381 tj = to = TypeInstPtr::make(TypePtr::BotPTR, env()->Object_klass(), false, NULL, offset);
1386 if (to->klass() != ciEnv::current()->Class_klass()) {
1387 to = NULL;
1395 tj = to = TypeInstPtr::make(to->ptr(), canonical_holder, true, NULL, offset, to->instance_id());
1397 tj = to = TypeInstPtr::make(to->ptr(), canonical_holder, false, NULL, offset);
1403 // Klass pointers to object array klasses need some flattening
1407 // to assume the worst case of an Object. Both exact and
1408 // inexact types must flatten to the same alias class so
1426 // to the supertype cache alias index. Check for generic array loads from
1427 // the primary supertype array and also force them to the supertype cache
1428 // alias index. Since the same load can reach both, we need to merge
1449 // Flatten all to bottom for now
1454 case 1: // Flatten to: oop, static, field or array
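The flattening comments above (file lines 1403-1454) boil down to one rule: exact and inexact variants of a type must map to the same alias class, so loads and stores that can reach the same memory get the same alias index. A minimal sketch of that idea, with hypothetical names (`flatten`, `alias_index`) that are not taken from the HotSpot sources:

```cpp
// Hedged sketch: exact and inexact types must flatten to the same
// alias class. Names are illustrative, not C2's real API.
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Flattening drops exactness (and similar qualifiers), keeping only
// the (klass, offset) pair that determines which memory can alias.
std::pair<std::string, int> flatten(const std::string& klass,
                                    int offset, bool /*exact*/) {
    return {klass, offset};
}

// Alias indices are interned per flattened type: the same flattened
// type always yields the same index.
int alias_index(const std::pair<std::string, int>& flat) {
    static std::map<std::pair<std::string, int>, int> table;
    static int next = 1;
    auto it = table.find(flat);
    if (it != table.end()) return it->second;
    return table[flat] = next++;
}
```

Because the exactness flag is discarded before interning, a load typed with an exact klass and a store typed with the inexact version of the same klass at the same offset land in one alias class, which is the invariant the comments above describe.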
1625 // %%% (We would like to finalize JavaThread::threadObj_offset(),
1626 // but the base pointer type is not distinctive enough to identify
1658 // Might as well try to fill the cache for the flattened version, too.
1739 // If there is room, try to inline some more warm call sites.
1795 // remove useless nodes to make the usage analysis simpler
1948 // No more new expensive nodes will be added to the list from here
2022 assert( true, "Break here to ccp.dump_nodes_and_types(_root,999,1)");
2029 assert( true, "Break here to ccp.dump_old2new_map()");
2058 // at least to this point, even if no loop optimizations were done.
2099 // In debug mode can dump m._nodes.dump() for mapping of ideal to machine
2110 // In debug mode can dump m._nodes.dump() for mapping of ideal to machine
2145 // for example, to avoid taking an implicit null pointer exception
2158 // Prior to register allocation we kept empty basic blocks in case the
2159 // allocator needed a place to spill. After register allocation we
2183 // Convert Nodes to instruction bits in a buffer
2252 !n->is_Catch() && // Would be nice to print exception table targets
2310 // This class defines counters to help identify when a method
2352 // possible to eliminate some of the StoreCMs
2370 // Already converted to precedence edge
2400 // Swap to left input. Implements item (2).
2413 // Move "last use" input to left by swapping inputs
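The swap described at file lines 2400 and 2413 — putting a "last use" on the left input of a commutative node — helps two-address machines overwrite the dying operand's register with the result. A sketch of the decision, where the `Node` layout and `outcnt` field are simplified stand-ins for C2's real data structures:

```cpp
// Hedged sketch: swap commutative inputs so the operand that dies
// here ends up on the left, where a two-address instruction can
// clobber it in place. 'Node' and 'outcnt' are illustrative only.
#include <cassert>
#include <cstddef>

struct Node {
    Node* in[3];   // in[1], in[2] are the two value inputs
    int   outcnt;  // number of remaining uses of this node
};

void swap_commutative_inputs(Node* n) {
    Node* l = n->in[1];
    Node* r = n->in[2];
    // Right input is a last use, left input is not: swap them so the
    // register allocator can reuse the right input's register.
    if (r != nullptr && l != nullptr &&
        r->outcnt == 1 && l->outcnt > 1) {
        n->in[1] = r;
        n->in[2] = l;
    }
}
```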
2484 // Count call sites where the FP mode bit would have to be flipped.
2493 // Clone shared simple arguments to uncommon calls, item (1).
2565 // Check to see if address types have grounded out somehow.
2584 // Use addressing with narrow klass to load with offset on x86.
2587 // Do this transformation here since IGVN will convert ConN back to ConP.
2604 // Decode a narrow oop to match address
2636 // On other platforms (Sparc) we have to keep the new DecodeN node and
2637 // use it to do an implicit NULL check in the address:
2643 // Pin the new DecodeN node to the non-null path on these platforms (Sparc)
2644 // to record which NULL check the new DecodeN node
2645 // corresponds to, so it can be used as a value in implicit_null_check().
2658 // Do this transformation here to preserve CmpPNode::sub() and
2677 // This will allow generating a normal oop implicit null check.
2703 // At the end the code will be matched to
2874 // The cpu's shift instructions don't restrict the count to the
2875 // lower 5/6 bits. We need to do the masking ourselves.
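The comment at file lines 2874-2875 refers to Java's shift semantics: an int shift uses only the low 5 bits of the count and a long shift only the low 6, so on CPUs whose shift instructions do not truncate the count the compiler must emit the mask itself. A hedged sketch of that masking (plain helper functions, not C2's actual emission code):

```cpp
// Hedged sketch: the explicit shift-count masking Java semantics
// require when the hardware does not restrict the count itself.
#include <cassert>
#include <cstdint>

int32_t java_shl_int(int32_t x, int32_t n) {
    return x << (n & 0x1F);   // int shifts use the low 5 bits of n
}

int64_t java_shl_long(int64_t x, int32_t n) {
    return x << (n & 0x3F);   // long shifts use the low 6 bits of n
}
```

Masking also keeps the C++ sketch itself well-defined: shifting by a count at or above the operand width is undefined behavior in C++, while Java defines it via this truncation.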
2945 nstack.pop(); // Shift to the next node on stack
2953 // Go over safepoint nodes to skip DecodeN nodes for debug edges.
2969 // Is it safe to skip?
2992 // (1) Clone simple inputs to uncommon calls, so they can be scheduled late
2994 // optimizations to avoid GVN undoing the cloning. Clone constant
2995 // inputs to Loop Phis; these will be split by the allocator anyways.
2997 // (2) Move last-uses by commutative operations to the left input to encourage
2999 // on RISCs. Must come after regular optimizations to avoid GVN Ideal
3003 // forcing singles to memory (requires extra stores and loads after each
3006 // if the relative frequency of single FP ops to calls is low enough.
3011 // from time to time in other code (such as -Xcomp finalizer loops, etc.).
3024 // Expensive nodes have their control input set to prevent the GVN
3026 // no need to keep the control input. We want the expensive nodes to
3027 // be freely moved to the least frequent code path by gcm.
3036 // Allocate stack of size C->unique()/2 to avoid frequent realloc
3106 // No infinite loops, no reason to bail out.
3159 // Is not eager to return true, since this will cause the compiler to use
3160 // Action_none for a trap point, to avoid too many recompilations.
3208 // Walk the Graph and verify that there is a one-to-one correspondence
3214 // Call recursive graph walk to check edges
3246 // to backtrack and retry without subsuming loads. Other than this backtracking
3247 // behavior, the Compile's failure reason is quietly copied up to the ciEnv
3360 // Make sure all jump-table entries were sorted to the end of the
3389 // Align size up to the next section start (which is insts; see
3479 // We can use the node pointer here to identify the right jump-table
3497 // table_base_offset() we need to subtract the table_base_offset()
3498 // to get the plain offset into the constant table.
3584 // Take this opportunity to remove dead nodes from the list
3596 // Then sort the list so that similar nodes are next to each other
3617 // Sort to bring similar nodes next to each other and clear the