object BCodeHelpers
Source: BCodeHelpers.scala
Type Members
- final class InvokeStyle extends AnyVal
- final class TestOp extends AnyVal
Summary on the ASM analyzer framework --------------------------------------
Value
- represents the information the analysis computes for each local variable and each stack slot

Interpreter
- provides the merge function that computes the least upper bound of two values. Used by Frame.merge (see below).

Frame
- holds one value per local variable and per stack slot; a top index stores the index of the current stack top
- defines an execute(instruction) method
- defines a merge(otherFrame) method

Analyzer
- its merge function takes an instruction and a frame, merges the existing frame for that instr (from the frames array) with the new frame passed as argument. If the frame changed, it puts the instruction on the work queue (fixpoint).
- keeps a work queue of instructions (queue array, top index for next instruction to analyze)
- analyze(method): copies the state of frames[instr] into a local frame current, calls current.execute(instr, interpreter) (mutating the current frame), then merges the frame of each destination instruction with the current frame (this enqueues the destination instr if its frame changed) and invokes newControlFlowEdge (see below)
- newControlFlowEdge can be overridden to track control flow if required (see the sketch below)
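To illustrate the last point, here is a minimal sketch (class and method names are illustrative) of an Analyzer subclass that records control flow edges by overriding the otherwise empty newControlFlowEdge callback:

  import scala.tools.asm.tree.MethodNode
  import scala.tools.asm.tree.analysis.{Analyzer, BasicInterpreter, BasicValue}

  // sketch: collect the control flow edges visited during the fixpoint iteration
  class EdgeTrackingAnalyzer extends Analyzer[BasicValue](new BasicInterpreter) {
    val edges = collection.mutable.ListBuffer.empty[(Int, Int)]
    override def newControlFlowEdge(insnIndex: Int, successorIndex: Int): Unit =
      edges += ((insnIndex, successorIndex))
  }

  // usage: analyze stores one frame per instruction (null for unreachable code);
  // afterwards, edges contains the (from, to) instruction index pairs
  def controlFlowEdges(owner: String, m: MethodNode): List[(Int, Int)] = {
    val a = new EdgeTrackingAnalyzer
    a.analyze(owner, m)
    a.edges.toList
  }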
MaxLocals and MaxStack ----------------------
At the JVM level, long and double values occupy two slots, both as local variables and on the stack, as specified in the JVM spec 2.6.2: "At any point in time, an operand stack has an associated depth, where a value of type long or double contributes two units to the depth and a value of any other type contributes one unit."
For example, a method class A { def f(a: Long, b: Long) = a + b } has MAXSTACK=4 in the classfile. This value is computed by the ClassWriter / MethodWriter when generating the classfile (we always pass COMPUTE_MAXS to the ClassWriter).
For running an ASM Analyzer, long and double values occupy two local variable slots, but only a single slot on the call stack, as shown by the following snippet:
  import scala.tools.nsc.backend.jvm._
  import scala.tools.nsc.backend.jvm.opt.BytecodeUtils._
  import scala.collection.convert.decorateAsScala._
  import scala.tools.asm.tree.analysis._

  val cn = AsmUtils.readClass("/Users/luc/scala/scala/sandbox/A.class")
  val m = cn.methods.iterator.asScala.find(_.name == "f").head

  // the value is read from the classfile, so it's 4
  println(s"maxLocals: ${m.maxLocals}, maxStack: ${m.maxStack}") // maxLocals: 5, maxStack: 4

  // we can safely set it to 2 for running the analyzer.
  m.maxStack = 2

  val a = new Analyzer(new BasicInterpreter)
  a.analyze(cn.name, m)
  val addInsn = m.instructions.iterator.asScala.find(_.getOpcode == 97).get // LADD Opcode
  val addFrame = a.frameAt(addInsn, m)

  addFrame.getStackSize // 2: the two long values only take one slot each
  addFrame.getLocals    // 5: this takes one slot, the two long parameters take 2 slots each
While running the optimizer, we need to make sure that the maxStack value of a method is large enough for running an ASM analyzer. We don't need to worry if the value is incorrect from the JVM perspective: the value will be re-computed and overwritten in the ClassWriter.
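As a concrete illustration of that point, here is a minimal sketch (the serialize helper is illustrative, not compiler code) of writing a ClassNode with COMPUTE_MAXS, which recomputes maxStack and maxLocals regardless of the values stored in the MethodNodes:

  import scala.tools.asm.ClassWriter
  import scala.tools.asm.tree.ClassNode

  // with COMPUTE_MAXS, the maxStack / maxLocals stored in the MethodNodes are
  // ignored and recomputed during serialization, so values adjusted only for
  // running an analyzer never end up in the classfile
  def serialize(classNode: ClassNode): Array[Byte] = {
    val cw = new ClassWriter(ClassWriter.COMPUTE_MAXS)
    classNode.accept(cw)
    cw.toByteArray
  }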
Lessons learnt while benchmarking the alias tracking analysis -------------------------------------------------------------
Profiling
ASM analyzer insights
- A significant part of the analysis time is spent in calls into the interpreter, in particular interpreter.merge; the JIT compiler only optimizes these call sites well while they remain monomorphic. This can be observed easily by running a test program that either runs a BasicValue analysis only, versus a program that first runs a nullness analysis and then a BasicValue analysis: in one example, the time for the BasicValue analysis goes from 519ms to 1963ms, a 3.8x slowdown.
- To benchmark an analysis, instead of benchmarking it while it runs in the compiler backend, one can easily run it from a separate program (or the REPL). The bytecode to analyze can simply be parsed from a classfile. See the example at the end of this comment.
Nullness Analysis in Miguel's Optimizer ---------------------------------------
Miguel implemented alias tracking for nullness analysis differently [1]. Remember that every frame has an array of values. Miguel's idea was to represent aliasing using reference equality in the values array: if two entries in the array point to the same value object, the two entries are aliases in the frame of the given instruction.
While this idea seems elegant at first sight, Miguel's implementation does not merge frames correctly when it comes to aliasing. Assume in frame 1, values (a, b, c) are aliases, while in frame 2 (a, b) are aliases. When merging the second into the first, we have to make sure that c is removed as an alias of (a, b).
It would be possible to implement correct alias set merging in Miguel's approach. However, frame merging is the main hot spot of analysis. The computational complexity of implementing alias set merging by traversing the values array and comparing references is too high. The concrete alias set representation that is used in the current implementation (see class AliasingFrame) makes alias set merging more efficient.
[1] https://github.com/scala-opt/scala/blob/opt/rebase/src/compiler/scala/tools/nsc/backend/bcode/NullnessPropagator.java
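To make the merge requirement concrete, here is a small illustrative sketch (this is not the AliasingFrame representation; slots are simply identified by Int indices and alias sets by Set[Int]): after a merge, an alias set may only contain slots that are aliases in both input frames.

  // merging two alias partitions: keep only the pairwise intersections that
  // still contain more than one slot
  def mergeAliasSets(aliasesA: Set[Set[Int]], aliasesB: Set[Set[Int]]): Set[Set[Int]] =
    for {
      sa <- aliasesA
      sb <- aliasesB
      common = sa intersect sb
      if common.size > 1
    } yield common

  // example from the text, with a=1, b=2, c=3:
  // mergeAliasSets(Set(Set(1, 2, 3)), Set(Set(1, 2))) == Set(Set(1, 2))
  // i.e., c is no longer an alias of (a, b) after the merge

This pairwise comparison is exactly the kind of work that makes merging expensive when aliasing is only represented implicitly by reference equality; the concrete alias set representation in AliasingFrame is designed to make this merge cheaper.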
Complexity and scaling of analysis ----------------------------------
The time complexity of a data flow analysis depends on the size of the method (number of instructions) and on the number of local variables and stack slots.

I measured the running time of an analysis in two experiments, shown in the two tables below: one where the number of instructions and the number of locals grow together, and one where the number of locals is kept constant.

I measured a nullness analysis (which tracks aliases) and a SimpleValue analysis. Nullness runs roughly 5x slower (because of alias tracking) at every problem size; this factor doesn't change.

The numbers below are for nullness. Note that the last column is constant, i.e., the running time is proportional to #ins * #loc^2. Therefore we use this factor when limiting the maximal method size for running an analysis.
#insns  #locals  time (ms)  time / (#ins * #loc^2) * 10^6
  1305      156         34  1.07
  2610      311        165  0.65
  3915      466        490  0.57
  5220      621       1200  0.59
  6525      776       2220  0.56
  7830      931       3830  0.56
  9135     1086       6570  0.60
 10440     1241       9700  0.60
 11745     1396      13800  0.60
As a second experiment, nullness analysis was run with varying #insns but constant #locals. The last column shows linear complexity with respect to the method size (linearOffset = 2279):
#insns  #locals  time (ms)  (time + linearOffset) / #insns
  5220      621       1090  0.645
  6224      621       1690  0.637
  7226      621       2280  0.630
  8228      621       2870  0.625
  9230      621       3530  0.629
 10232      621       4130  0.626
 11234      621       4770  0.627
 12236      621       5520  0.637
 13238      621       6170  0.638
When running a BasicValue analysis, the complexity observation is the same (time is proportional to #ins * #loc^2).
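For illustration, a small sketch of how the #ins * #loc^2 factor can be used to bound analysis cost (the constant 0.6e-6 comes from the nullness measurements above; the helpers and the budget parameter are illustrative, not the compiler's actual limit):

  // time (ms) ≈ 0.6e-6 * #insns * #locals^2 for the nullness measurements above
  def estimatedNullnessMillis(insns: Int, locals: Int): Double =
    0.6e-6 * insns * locals.toDouble * locals

  // e.g. estimatedNullnessMillis(11745, 1396) ≈ 13700 ms, close to the measured 13800 ms
  def withinBudget(insns: Int, locals: Int, budgetMillis: Double): Boolean =
    estimatedNullnessMillis(insns, locals) <= budgetMillis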
Measuring analysis execution time ---------------------------------
See code below.
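A minimal sketch of such a standalone timing harness (the classfile path, the method name "f" and the object name are placeholders): parse a method from a classfile and time the analysis outside of the compiler backend.

  import java.nio.file.{Files, Paths}
  import scala.collection.JavaConverters._
  import scala.tools.asm.ClassReader
  import scala.tools.asm.tree.ClassNode
  import scala.tools.asm.tree.analysis.{Analyzer, BasicInterpreter, BasicValue}

  object TimeAnalysis {
    def main(args: Array[String]): Unit = {
      // parse the classfile into a ClassNode and pick the method to analyze
      val bytes = Files.readAllBytes(Paths.get("/path/to/A.class"))
      val classNode = new ClassNode()
      new ClassReader(bytes).accept(classNode, 0)
      val method = classNode.methods.asScala.find(_.name == "f").get

      // run the analysis and report wall-clock time
      val start = System.nanoTime()
      new Analyzer[BasicValue](new BasicInterpreter).analyze(classNode.name, method)
      val millis = (System.nanoTime() - start) / 1000000
      println(s"analysis of ${classNode.name}.${method.name} took $millis ms")
    }
  }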