Question

Many of the big players recommend slightly different techniques, differing mostly in where the new <script> element gets inserted.

Google Analytics:

(function() {
  var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
  var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();

Facebook:

(function() {
  var e = document.createElement('script'); e.async = true;
  e.src = document.location.protocol +
    '//connect.facebook.net/en_US/all.js';
  document.getElementById('fb-root').appendChild(e);
}());

Disqus:

(function() {
    var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
    dsq.src = 'http://' + disqus_shortname + '.disqus.com/embed.js';
    (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();

(post others and I'll add them)

Is there any rhyme or reason for these choices or does it not matter at all?


Solution

These are all effectively the same approach in spirit. The idea is to defer the scripts so they don't block each other or hold up the rest of the document from finishing.

It's common practice to load extra outside resources after your site's own content. When doing so, you want to a) avoid blocking the onload event so your page is "finished" faster, and b) load the resources in parallel — which all of the snippets above do.
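
For illustration, the pattern shared by all three snippets can be wrapped in one small helper. This is only a sketch; loadAsync, its callback parameter, and the example URL are my own, not from any of the vendors above:

function loadAsync(src, callback) {
  // Create a script element that the browser fetches in parallel,
  // without blocking HTML parsing.
  var script = document.createElement('script');
  script.type = 'text/javascript';
  script.async = true;
  script.src = src;
  if (callback) {
    script.onload = callback; // runs once the external file has executed
  }
  // Insert before the first existing script tag, which always exists
  // (this very snippet is inside one).
  var first = document.getElementsByTagName('script')[0];
  first.parentNode.insertBefore(script, first);
}

// e.g. loadAsync('//example.com/widget.js');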

Steve Souders claims that "progressive enhancement" is the most important concept for site performance today. This concept suggests that you deliver your base page as fast as possible and then deliver extra content/services as needed, either on the load event or when the user asks for it.
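
A rough sketch of the "deliver extras on the load event" half of that idea (the script path is a placeholder, and older IE would need attachEvent instead of addEventListener):

window.addEventListener('load', function () {
  // The base page has finished loading; now fetch the non-essential extras.
  var s = document.createElement('script');
  s.async = true;
  s.src = '/js/extras.js'; // placeholder path
  document.body.appendChild(s);
});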

There are loader frameworks nowadays that help with this; see http://headjs.com/
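
A minimal sketch of the head.js style, assuming its documented head.load() entry point (check the project site for the exact current API; the file paths and app.init are placeholders):

head.load('/js/jquery.js', '/js/app.js', function () {
  // Both files have been loaded and executed at this point.
  app.init(); // placeholder initializer
});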

OTHER TIPS

I don't think this is about the particular way you add the script to your page; rather, a script that retrieves content from another domain can only be added this way, i.e. by dynamically creating the script tag and appending it to your document. All of the snippets above do the same thing in their own way, so you can do it however you like.

There are a few different reasons for doing this. First, some browsers will download dynamically added script tags asynchronously, so they don't block the rest of the page. Second, the script can detect whether the hosting page is http or https and pick the matching protocol, avoiding mixed-content errors.
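
The Google Analytics snippet above does that protocol check explicitly; a protocol-relative URL is a simpler way to get the same effect (example.com is a placeholder host):

var s = document.createElement('script');
s.async = true;
// '//' makes the browser reuse the current page's protocol,
// so secure pages fetch over https and plain pages over http.
s.src = '//example.com/widget.js';
(document.getElementsByTagName('head')[0] || document.body).appendChild(s);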

As joe mentioned, head.js is useful, as is domain splitting of your own scripts. For your own script resources, it's best to design your site with as little JS as possible at the top (html5shiv and browser/JS feature tagging for CSS), then put your JS in good old fashioned <script src=""> tags right before the closing body element.
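
For example, a skeleton of that layout (the file names are placeholders):

<!DOCTYPE html>
<html>
<head>
  <!-- Only what must run before render, e.g. html5shiv for old IE -->
  <script src="/js/html5shiv.js"></script>
  <link rel="stylesheet" href="/css/site.css">
</head>
<body>
  <!-- page content -->

  <!-- Main scripts last, so the content above renders first -->
  <script src="/js/site.js"></script>
</body>
</html>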

Browsers will download the page's essential content first and the rest later, which gives the fastest perceived load in a non-blocking fashion. Modularizing your scripts so related code ships together, and initializing in-page only what actually needs to run, also allows for better use of caching.

Keep your script resources at or under 6 JS files, and as close to the same size as each other as is reasonable.

Souders' book "Even Faster Websites" is a great read on this.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow