V8 (Google JavaScript engine)
Created: 2019-01-15 16:10:38 -0800 Modified: 2019-01-18 17:01:38 -0800
Basics
- The garbage collector is not event-loop-aware; it will run whenever it feels like it (reference).
- You can profile your Node.js code by attaching Chrome DevTools (e.g. by running node with --inspect). Note that it doesn’t seem to show functions that take less than 0.05 ms.
- Deoptimizations themselves are very quick (<1 ms). They typically happen when assumptions around an optimization are proven false. For example, a simple “sum” function may get optimized if it only ever sees numbers. Then, when it finally gets a string, it needs to be deoptimized in order to follow JavaScript’s complex string addition rules. However, optimizations can take a while (sometimes >10ms), and deoptimizing a function usually means it will get optimized again later.
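As an illustrative sketch (my own example, not taken from V8’s docs), here’s the kind of one-line “sum” function that gets optimized for numbers and then deoptimized the first time it sees a string:

```javascript
// A function like this may get optimized by TurboFan after V8 has seen
// many number-only calls; the first string call breaks that assumption
// and forces a deopt so JavaScript's string-concatenation rules apply.
function sum(a, b) {
  return a + b;
}

// Warm up with numbers only; the optimizer will assume numeric inputs.
let total = 0;
for (let i = 0; i < 100000; i++) {
  total = sum(total, 1);
}
console.log(total); // 100000

// The first non-number call invalidates the numeric assumption (deopt),
// but the function can be optimized again later for the new input types.
console.log(sum('Hello, ', 'world')); // Hello, world
```

Running this under node --trace-opt --trace-deopt (piped to a file) should show sum being optimized during the loop and then deoptimized at the string call.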
Particulars of the codebase itself
- Flags like “trace_opt” are defined with a macro, DEFINE_BOOL, which expands to FLAG, which expands to FLAG_FULL, which prefixes the flag’s name with “FLAG_”. So “trace_opt” becomes “FLAG_trace_opt” in the codebase. Flags always have underscores rather than hyphens in the codebase.
- Some flags, like single_threaded, get defined as FLAG_single_threaded but never used that way. Instead, simply having them set will set other flags/options as shown here.
Various flags
- --trace-opt and --trace-deopt
  - Never run these while logging to stdout. Doing so seems to increase execution time to ~16X what it would normally be because of how noisy these flags are. Instead, pipe the output to a file, and that performance hit will be gone.
- --trace-turbo
  - This will log to many JSON files (in addition to logging to stdout).
Optimizations with respect to Bot Land
At one point, I was trying to investigate why I was seeing random spikes in execution times in Bot Land. Most code would run at X ms/turn on average, but every once in a while, I was seeing a jump to something like 200X. I’m nearly positive that it was a combination of the garbage collector and optimizations done by V8. However, the garbage collector itself seemed to run in 0.0 ms when I ran with --trace_gc, so I think that optimizations (and reopts and deopts) were causing the majority of the spikes.
This note includes some of the investigation I did about it.
Here’s some sample code to optimize all of the non-constructor methods in a class. This is not something you should ever copy/paste out of here without understanding all of the implications, the most significant being that this likely doesn’t do what you want it to do.
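The original snippet isn’t reproduced in this note, so here is a minimal sketch of what such code might look like (my own reconstruction; the Bot class and sayName method are made-up names for illustration). Because %-prefixed natives only parse under --allow-natives-syntax, they’re wrapped in eval here so the file stays loadable without the flag:

```javascript
// Hypothetical reconstruction: mark every non-constructor method on a
// class's prototype for optimization on its next call.
class Bot {
  constructor(name) {
    this.name = name;
  }
  sayName() {
    console.log(`My name is ${this.name}`);
  }
}

function optimizeAllMethods(klass) {
  for (const methodName of Object.getOwnPropertyNames(klass.prototype)) {
    if (methodName === 'constructor') continue;
    const fn = klass.prototype[methodName];
    console.log(methodName);
    try {
      // %OptimizeFunctionOnNextCall is V8 natives syntax; it only
      // parses when node runs with --allow-natives-syntax, so it's
      // eval'd here instead of written inline.
      eval('%OptimizeFunctionOnNextCall(fn)');
    } catch (e) {
      // Without the flag, the eval throws a SyntaxError; ignore it.
    }
  }
}

optimizeAllMethods(Bot);
const bot = new Bot('Adam');
bot.sayName(); // My name is Adam
```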
The above code needs to be run with --allow-natives-syntax in order for %OptimizeFunctionOnNextCall to work. It’s a good idea to run with additional trace statements to be able to see what V8 is doing:
node --trace_opt --trace_deopt --allow-natives-syntax main.js
Running the code will print this:
sayName
Optimization status: [function,]
My name is Adam
My name is Adam
[manually marking 0x03c7aa754b41 <JSFunction sayName (sfi = 00000114B5ADA2A9)> for non-concurrent optimization]
[compiling method 0x03c7aa754b41 <JSFunction sayName (sfi = 00000114B5ADA2A9)> using TurboFan]
[optimizing 0x03c7aa754b41 <JSFunction sayName (sfi = 00000114B5ADA2A9)> - took 1.213, 0.401, 0.042 ms]
My name is Adam
Optimization status: [function,optimized,turbo_fan,]
The times shown in the “optimizing” line that V8 prints come from OptimizedCompilationJob::RecordCompilationStats. They represent:
double ms_creategraph = time_taken_to_prepare.InMillisecondsF();
double ms_optimize = time_taken_to_execute.InMillisecondsF();
double ms_codegen = time_taken_to_finalize.InMillisecondsF();
From what I understand, if you sum up all of those times, you’ll get the total time spent reoptimizing a single function. In this case, that function was one line of code (“console.log”) and took 1.656 ms to optimize.
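To make that arithmetic concrete, here’s a small helper (my own, not part of V8) that pulls the three numbers out of an “[optimizing …]” trace line and sums them:

```javascript
// Parse the "took X, Y, Z ms" portion of a V8 "[optimizing ...]" trace
// line; the three numbers are graph creation, optimization, and code
// generation times, and their sum is the total time spent compiling
// that one function.
function totalOptimizationTime(traceLine) {
  const match = traceLine.match(/took ([\d.]+), ([\d.]+), ([\d.]+) ms/);
  if (!match) throw new Error('not an [optimizing ...] trace line');
  const [createGraph, optimize, codegen] = match.slice(1).map(Number);
  return createGraph + optimize + codegen;
}

// This line is copied from the trace output earlier in this note.
const line =
  '[optimizing 0x03c7aa754b41 <JSFunction sayName (sfi = 00000114B5ADA2A9)> - took 1.213, 0.401, 0.042 ms]';
console.log(totalOptimizationTime(line).toFixed(3)); // 1.656
```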