Do browsers parse javascript on every page load?

Tags: Javascript, Browser Cache, Javascript Engine

Javascript Problem Overview


Do browsers (IE and Firefox) parse linked javascript files every time the page refreshes?

They can cache the files, so I'm guessing they won't try to download them each time, but as each page is essentially separate, I expect them to tear down any old code and re-parse it.

This is inefficient, although perfectly understandable, but I wonder if modern browsers are clever enough to avoid the parsing step within sites. I'm thinking of cases where a site uses a javascript library, like ExtJS or jQuery, etc.

Javascript Solutions


Solution 1 - Javascript

These are the details that I've been able to dig up. It's worth noting first that although JavaScript is usually considered to be interpreted and run on a VM, this isn't really the case with the modern interpreters, which tend to compile the source directly into machine code (with the exception of IE).


Chrome : V8 Engine

V8 has a compilation cache. This stores compiled JavaScript using a hash of the source for up to 5 garbage collections. This means that two identical pieces of source code will share a cache entry in memory regardless of how they were included. This cache is not cleared when pages are reloaded.

Source
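
To make the behaviour above concrete, here is a minimal conceptual sketch of a source-keyed compilation cache in plain JavaScript. The function and variable names are invented for the example; V8's real cache is internal to the engine and not exposed to scripts, and the hashing and the "5 garbage collections" eviction policy are simplified away. It only models the idea that identical source text maps to one cached entry.

```javascript
// A minimal conceptual model of a source-keyed compilation cache.
const compilationCache = new Map();

function compileOnce(source) {
  const key = source;                       // stand-in for V8's hash of the source text
  if (compilationCache.has(key)) {
    return compilationCache.get(key);       // cache hit: the compile step is skipped
  }
  const compiled = new Function(source);    // stand-in for "compile to machine code"
  compilationCache.set(key, compiled);
  return compiled;
}

// Two identical pieces of source share one cache entry,
// regardless of how they were included in the page.
const first = compileOnce("return 1 + 1;");
const second = compileOnce("return 1 + 1;");
console.log(first === second);              // true: the second request reused the entry
```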


Update - 19/03/2015

The Chrome team have released details about their new techniques for JavaScript streaming and caching.

  1. Script Streaming

> Script streaming optimizes the parsing of JavaScript files. [...]
>
> Starting in version 41, Chrome parses async and deferred scripts on a separate thread as soon as the download has begun. This means that parsing can complete just milliseconds after the download has finished, and results in pages loading as much as 10% faster.

  2. Code caching

> Normally, the V8 engine compiles the page’s JavaScript on every visit, turning it into instructions that a processor understands. This compiled code is then discarded once a user navigates away from the page as compiled code is highly dependent on the state and context of the machine at compilation time.

> Chrome 42 introduces an advanced technique of storing a local copy of the compiled code, so that when the user returns to the page the downloading, parsing, and compiling steps can all be skipped. Across all page loads, this allows Chrome to avoid about 40% of compile time and saves precious battery on mobile devices.
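
As a rough illustration of which scripts the streaming optimisation applies to, the sketch below inserts a script that Chrome 41+ can parse off the main thread while it downloads. The file path is a placeholder, and the equivalent HTML attributes are shown in comments.

```javascript
// Equivalent HTML that qualifies for script streaming in Chrome 41+:
//   <script src="/js/large-library.js" async></script>
//   <script src="/js/large-library.js" defer></script>

// The same thing done from JavaScript. Dynamically inserted scripts are
// async by default, so they are also candidates for off-thread parsing.
const script = document.createElement("script");
script.src = "/js/large-library.js";   // placeholder path
script.async = true;                    // explicit, though already the default here
document.head.appendChild(script);

// For the Chrome 42 code cache to pay off on a return visit, the script
// must also still be available from the HTTP cache, i.e. served with a
// suitably long Cache-Control lifetime.
```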


Opera : Carakan Engine

> In practice this means that whenever a script program is about to be compiled, whose source code is identical to that of some other program that was recently compiled, we reuse the previous output from the compiler and skip the compilation step entirely. This cache is quite effective in typical browsing scenarios where one loads page after page from the same site, such as different news articles from a news service, since each page often loads the same, sometimes very large, script library.

Therefore JavaScript is cached across page reloads; two requests to the same script will not result in re-compilation.

Source


Firefox : SpiderMonkey Engine

SpiderMonkey uses Nanojit as its native back-end, a JIT compiler. The process of compiling the machine code can be seen here. In short, it appears to recompile scripts as they are loaded. However, if we take a closer look at the internals of Nanojit, we see that the higher-level monitor, jstracer, which is used to track compilation, can transition through three stages during compilation, providing a benefit to Nanojit:

> The trace monitor's initial state is monitoring. This means that spidermonkey is interpreting bytecode. Every time spidermonkey interprets a backward-jump bytecode, the monitor makes note of the number of times the jump-target program-counter (PC) value has been jumped-to. This number is called the hit count for the PC. If the hit count of a particular PC reaches a threshold value, the target is considered hot.
>
> When the monitor decides a target PC is hot, it looks in a hashtable of fragments to see if there is a fragment holding native code for that target PC. If it finds such a fragment, it transitions to executing mode. Otherwise it transitions to recording mode.

This means that for hot fragments of code the native code is cached, so it will not need to be recompiled. It is not made clear whether these hashed native sections are retained between page refreshes, but I would assume that they are. If anyone can find supporting evidence for this, then excellent.
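
As a small sketch of what "hot" means in practice, the loop below contains the kind of backward jump the trace monitor counts. The threshold value, and whether the recorded fragment survives a page refresh, are engine internals that the snippet cannot show.

```javascript
// Each backward jump at the end of this loop body bumps the hit count for
// the loop's target PC. Once the count crosses the tracer's threshold, the
// loop is recorded and later iterations run the cached native fragment
// instead of being interpreted.
function sumSquares(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {   // the backward jump the monitor counts
    total += i * i;               // simple numeric code that traces well
  }
  return total;
}

// Early iterations are interpreted; once the loop is hot, the native code
// in the fragment hashtable is reused for the rest of this call and for
// later calls on the same page.
console.log(sumSquares(1000000));
console.log(sumSquares(1000000));
```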

EDIT: It's been pointed out that Mozilla developer Boris Zbarsky has stated that Gecko does not cache compiled scripts yet. Taken from this SO answer.


Safari : JavaScriptCore/SquirrelFish Engine

I think that the best answer for this implementation has already been given by someone else.

> We don't currently cache the bytecode (or the native code). It is an option we have considered, however, currently, code generation is a trivial portion of JS execution time (< 2%), so we're not pursuing this at the moment.

This was written by Maciej Stachowiak, the lead developer of Safari. So I think we can take that to be true.

I was unable to find any other information but you can read more about the speed improvements of the latest SquirrelFish Extreme engine here, or browse the source code here if you're feeling adventurous.


IE : Chakra Engine

There is no current information regarding IE9's JavaScript Engine (Chakra) in this field. If anyone knows anything, please comment.

This is quite unofficial, but for IE's older engine implementations, Eric Lippert (a MS developer of JScript) states in a blog reply here that:

> JScript Classic acts like a compiled language in the sense that before any JScript Classic program runs, we fully syntax check the code, generate a full parse tree, and generate a bytecode. We then run the bytecode through a bytecode interpreter. In that sense, JScript is every bit as "compiled" as Java. The difference is that JScript does not allow you to persist or examine our proprietary bytecode. Also, the bytecode is much higher-level than the JVM bytecode -- the JScript Classic bytecode language is little more than a linearization of the parse tree, whereas the JVM bytecode is clearly intended to operate on a low-level stack machine.

This suggests that the bytecode does not persist in any way, and thus bytecode is not cached.
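
To illustrate what "a linearization of the parse tree" could look like, here is a small sketch. The tree shape and instruction names are invented for the example; JScript's actual bytecode is proprietary and not shown here.

```javascript
// Parse tree for the statement:  total = a + b * 2
const tree = {
  op: "store", name: "total",
  value: {
    op: "add",
    left: { op: "load", name: "a" },
    right: {
      op: "mul",
      left: { op: "load", name: "b" },
      right: { op: "const", value: 2 },
    },
  },
};

// A post-order walk over the tree yields a flat, high-level instruction
// list -- "little more than a linearization of the parse tree".
function linearize(node, out = []) {
  switch (node.op) {
    case "const": out.push(["PUSH", node.value]); break;
    case "load":  out.push(["LOAD", node.name]); break;
    case "store": linearize(node.value, out); out.push(["STORE", node.name]); break;
    default:
      linearize(node.left, out);
      linearize(node.right, out);
      out.push([node.op.toUpperCase()]);
  }
  return out;
}

console.log(linearize(tree));
// [["LOAD","a"], ["LOAD","b"], ["PUSH",2], ["MUL"], ["ADD"], ["STORE","total"]]
```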

Solution 2 - Javascript

Opera does it, as mentioned in the other answer. (source)

Firefox (SpiderMonkey engine) does not cache bytecode. (source)

WebKit (Safari, Konqueror) does not cache bytecode. (source)

I'm not sure about IE[6/7/8] or V8 (Chrome); I think IE might do some sort of caching, while V8 may not. IE is closed source, so I'm not sure, but in V8 it may not make sense to cache "compiled" code, since it compiles straight to machine code.

Solution 3 - Javascript

As far as I am aware, only Opera caches the parsed JavaScript. See the section "Cached compiled programs" here.

Solution 4 - Javascript

It's worth noting that Google Dart (http://www.dartlang.org/) explicitly tackles this problem via "Snapshots" - the goal is to speed up the initialization and loading time by loading a pre-parsed version of the code.

InfoQ has a good writeup @ http://www.infoq.com/articles/google-dart

Solution 5 - Javascript

I think that the correct answer would be "not always." From what I understand, both the browser and the server play a role in determining what gets cached. If you really need files to be reloaded every time, then I think you should be able to configure that from within Apache (for example). Of course, I suppose that the user's browser could be configured to ignore that setting, but that's probably unlikely.

So I would imagine that in most practical cases, the javascript files themselves are cached, but are dynamically re-interpreted each time the page loads.
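
The answer mentions Apache; as an assumed substitution for illustration only, here is the same idea sketched with Node's built-in http module. The script path and cache lifetime are placeholders.

```javascript
// Minimal sketch of the server's role in caching: the headers sent with the
// script decide whether the browser re-downloads it, but parsing still
// happens after the page itself is (re)loaded.
const http = require("http");
const fs = require("fs");

http.createServer((req, res) => {
  if (req.url === "/js/app.js") {                 // placeholder script path
    res.writeHead(200, {
      "Content-Type": "application/javascript",
      // To force a re-download on every request, use "no-cache" instead.
      "Cache-Control": "max-age=604800",          // let the browser reuse it for a week
    });
    res.end(fs.readFileSync("./js/app.js"));      // assumes the file exists locally
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(8080);
```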

Solution 6 - Javascript

The browser definitely makes use of caching, but yes, browsers parse the JavaScript every time a page refreshes. Whenever a page is loaded, the browser builds two trees: a content tree and a render tree.

The render tree holds the information about the visual layout of the DOM elements. So whenever a page loads, the JavaScript is parsed, and any dynamic change it makes, such as positioning a DOM element, showing or hiding an element, or adding or removing an element, will cause the browser to recreate the render tree. Modern browsers like Firefox and Chrome handle this slightly differently: they use incremental rendering, so dynamic changes like those above cause only the affected elements to be re-rendered and repainted.
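
For concreteness, the snippet below performs the kinds of dynamic changes mentioned above; it assumes the page contains an element with id="box". In a browser with incremental rendering, each mutation should only invalidate the affected parts of the render tree.

```javascript
const box = document.getElementById("box");  // assumes <div id="box"> exists

box.style.left = "120px";       // repositioning: layout/repaint around the box
box.style.display = "none";     // hiding: the box drops out of the render tree
box.style.display = "block";    // showing: it is added back

const note = document.createElement("p");    // adding an element
note.textContent = "Added after load";
document.body.appendChild(note);
note.remove();                               // removing it again
```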

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | Steve Jones | View Question on Stackoverflow |
| Solution 1 - Javascript | Jivings | View Answer on Stackoverflow |
| Solution 2 - Javascript | cha0site | View Answer on Stackoverflow |
| Solution 3 - Javascript | gsnedders | View Answer on Stackoverflow |
| Solution 4 - Javascript | igrigorik | View Answer on Stackoverflow |
| Solution 5 - Javascript | Zachary Murray | View Answer on Stackoverflow |
| Solution 6 - Javascript | Abhidev | View Answer on Stackoverflow |