
V8 (Google JavaScript engine)

Created: 2019-01-15 16:10:38 -0800 Modified: 2019-01-18 17:01:38 -0800

  • The garbage collector is not event-loop-aware; it will run whenever it feels like it (reference).
  • You can profile your NodeJS code by attaching Chrome's DevTools (e.g., start Node with --inspect and connect from chrome://inspect). Note that it doesn't seem to show functions that take less than 0.05 ms.
  • Deoptimizations themselves are very quick (<1 ms). They typically happen when the assumptions behind an optimization are proven false. For example, a simple "sum" function may get optimized if it only ever sees numbers; when it finally receives a string, it has to be deoptimized in order to follow JavaScript's more complex string-concatenation rules (see the sketch below). Optimizations, by contrast, can take a while (sometimes >10 ms), and a deoptimized function will usually get optimized again later.
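
Here's a minimal sketch of that "sum" scenario (the function body and loop count are illustrative, not from actual Bot Land code); run it with --trace-opt and --trace-deopt to watch the optimize/deoptimize cycle:

function sum(a, b) {
  return a + b;
}

// Warm up with numbers so V8 optimizes sum() for numeric addition.
for (let i = 0; i < 100000; i++) {
  sum(i, i + 1);
}

// A string argument invalidates the numeric assumption and triggers a deopt.
sum('a', 'b');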

Particulars of the codebase itself

  • Flags like "trace_opt" are defined with a macro, DEFINE_BOOL, which expands to FLAG, which expands to FLAG_FULL, which prefixes the flag's name with "FLAG_". So "trace_opt" becomes "FLAG_trace_opt" in the codebase. Flag names always use underscores rather than hyphens.
    • Some flags, like single_threaded, get defined as FLAG_single_threaded but are never used that way. Instead, simply having them set will set other flags/options, as shown here.
  • --trace-opt and --trace-deopt
    • Never run these while logging to stdout. Doing so seems to increase execution time to ~16X what it would normally be because of how noisy these flags are. Instead, redirect the output to a file (see the sample command after this list), and that performance hit goes away.
  • --trace-turbo
    • This will log to many JSON files (in addition to logging to stdout).
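
For example, a redirect like the following keeps the trace noise off of your terminal (assuming main.js is your entry point; 2>&1 captures anything written to stderr as well):

node --trace-opt --trace-deopt main.js > trace.log 2>&1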

Optimizations with respect to Bot Land


At one point, I was trying to investigate why I was seeing random spikes in execution times in Bot Land. Most code would run at X ms/turn on average, but every once in a while, I was seeing a jump to something like 200X. I'm nearly positive that it was a combination of the garbage collector and optimizations done by V8. However, the garbage collector itself seemed to run in 0.0 ms when I ran "--trace_gc", so I think that optimizations (and reopts and deopts) were causing the majority of the spikes.

This note includes some of the investigation I did about it.
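
For reference, a minimal way to surface spikes like that is to time each turn and log the outliers; this is just a sketch (runTurn and the 10 ms threshold are hypothetical, not from Bot Land's actual code):

function timeTurn(runTurn) {
  const start = process.hrtime.bigint();
  runTurn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  if (elapsedMs > 10) {
    // Log the spike so it can be correlated with --trace-opt/--trace-deopt output.
    console.warn(`Slow turn: ${elapsedMs.toFixed(2)} ms`);
  }
}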

Here’s some sample code to optimize all of the non-constructor methods in a class. This is not something you should ever copy/paste out of here without understanding all of the implications, the most significant being that this likely doesn’t do what you want it to do.

const _ = require('lodash');

function printStatus(fn) {
  const statusFlags = %GetOptimizationStatus(fn);
  let strRep = '[';
  // https://github.com/v8/v8/blob/31142ba49d197b01c1952a53b9a7b00c51a1d38f/src/runtime/runtime.h#L760
  if (statusFlags & 1) strRep += 'function,';
  if (statusFlags & 2) strRep += 'never_opt,';
  if (statusFlags & 4) strRep += 'always_opt,';
  if (statusFlags & 8) strRep += 'maybe_deopt,';
  if (statusFlags & 16) strRep += 'optimized,';
  if (statusFlags & 32) strRep += 'turbo_fan,';
  if (statusFlags & 64) strRep += 'interpreted,';
  if (statusFlags & 128) strRep += 'marked_for_opt,';
  if (statusFlags & 256) strRep += 'marked_for_conc_opt,';
  if (statusFlags & 512) strRep += 'opt_concurrently,';
  if (statusFlags & 1024) strRep += 'is_executing,';
  if (statusFlags & 2048) strRep += 'topmost_frame_turbo_fan,';
  strRep += ']';
  console.log('Optimization status: ' + strRep);
}

class Person {
  constructor(name) {
    this.name = name;
  }

  sayName() {
    console.log('My name is ' + this.name);
  }

  toString() {
    return this.name;
  }
}

const props = Object.getOwnPropertyNames(Person.prototype);
const adam = new Person('Adam');

_.forEach(props, (prop) => {
  if (prop === 'constructor') return;
  console.log(prop);
  printStatus(Person.prototype[prop]);
  Person.prototype[prop].call(adam);
  Person.prototype[prop].call(adam); // we need to call this twice due to this: https://github.com/NathanaelA/v8-Natives#notes
  %OptimizeFunctionOnNextCall(Person.prototype[prop]);
  Person.prototype[prop].call(adam);
  printStatus(Person.prototype[prop]);
  return false; // stop execution after one loop just to cover "sayName"
});

The above code needs to be run with "--allow-natives-syntax" in order for "%GetOptimizationStatus" and "%OptimizeFunctionOnNextCall" to work. It's a good idea to run with additional trace flags so you can see what V8 is doing:

node --trace_opt --trace_deopt --allow-natives-syntax main.js

Running the code will print this:

sayName
Optimization status: [function,]
My name is Adam
My name is Adam
[manually marking 0x03c7aa754b41 <JSFunction sayName (sfi = 00000114B5ADA2A9)> for non-concurrent optimization]
[compiling method 0x03c7aa754b41 <JSFunction sayName (sfi = 00000114B5ADA2A9)> using TurboFan]
[optimizing 0x03c7aa754b41 <JSFunction sayName (sfi = 00000114B5ADA2A9)> - took 1.213, 0.401, 0.042 ms]
My name is Adam
Optimization status: [function,optimized,turbo_fan,]

The times shown in the "optimizing" line that V8 prints come from OptimizedCompilationJob::RecordCompilationStats. They correspond to:

double ms_creategraph = time_taken_to_prepare.InMillisecondsF();
double ms_optimize = time_taken_to_execute.InMillisecondsF();
double ms_codegen = time_taken_to_finalize.InMillisecondsF();

From what I understand, if you sum all of those times, you get the total time spent optimizing (or reoptimizing) a single function. In this case, that function was one line of code (a console.log call), and it took 1.213 + 0.401 + 0.042 = 1.656 ms to optimize.
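
As a quick illustration, here's a hypothetical helper (not part of the original investigation) that extracts those three times from a --trace-opt "optimizing" line and sums them:

function totalOptimizeTimeMs(traceLine) {
  // Matches the "took <prepare>, <execute>, <finalize> ms" portion of the line.
  const match = traceLine.match(/took ([\d.]+), ([\d.]+), ([\d.]+) ms/);
  if (match === null) return null;
  return Number(match[1]) + Number(match[2]) + Number(match[3]);
}

const total = totalOptimizeTimeMs(
  '[optimizing 0x03c7aa754b41 <JSFunction sayName> - took 1.213, 0.401, 0.042 ms]'
);
console.log(total.toFixed(3)); // "1.656"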