Tell HN: Your social widgets are losing you visitors right now
306 points by rarestblog on Oct 8, 2010 | 73 comments
Recently a lot of sites have been getting slow for me. The reason? They add a lot of social submit widgets that use non-async scripts.

If you see the substring <script src="http:// or <script src="https:// in your HTML source, you are slowly killing your site.

Using Twitter's official ReTweet button? You've slowed your site down by 60 seconds per page. (I don't know how many people are affected by this hiccup, which has lasted more than 5 days for me now, but you can easily fix it for everybody; see below.)

Just to be clear: I'm on a 35Mbps line in Russia near Moscow (4.3MBytes/s - very fast! 4ms ping to the national traffic exchange point in Russia).

Yet some sites take 2-5 minutes to load for me. Why?

According to Chrome Dev Tools, I receive the main blog content, including all images, within 1-2 seconds (it's 35Mbps!), but I don't see anything from your site on screen (even though it has finished loading), because...

"platform.twitter.com" responds in 49-62 seconds! Uses <script src="http://... for their "retweet" button. Your site is STUCK until "platform.twitter.com" loads (1 minute).

Facebook's CDN responds within 30-50 seconds. Your site doesn't finish loading until it does.

"www.stumbleupon.com"'s button loads in 20 seconds.

I'm not sure what the problem is, but I can tell you for sure: it takes minutes to load some sites with those buttons, and it takes less than a blink after I add "127.0.0.1 platform.twitter.com" and the others to /etc/hosts. (That's not a way to solve it, it's a way to diagnose the problem; see below for the solution.)

Many of you use a lot of those buttons in the hope that they will bring you visitors. But while they load, they lose you the visitors who have to wait 2 minutes for your page to load.

WordPress social submit plugins often have the same effect on your site.

The solution? Use async code and ask your plugin developer to move to async code.

It's not some futuristic HTML5 goodie that works only in modern browsers. It works everywhere.

Facebook has async code - use it! Google Analytics has async - use it!

Twitter doesn't give out async code, but it's easy to write your own, based on Facebook's and Google Analytics' async snippets:

  <a href="http://twitter.com/share" 
  class="twitter-share-button" data-count="horizontal" 
  data-via="WhitePostsCom">Tweet</a> 
  
  <script> 
    (function() {
    var src = document.createElement('script'); 
    src.async = true;
    src.src = document.location.protocol + '//platform.twitter.com/widgets.js';
    document.getElementsByTagName('head')[0].appendChild(src);
    }());                        
  </script> 
Replace the WhitePostsCom parts in the anchor tag with your own details.

StatCounter doesn't give out async code either; adapt it from the code StatCounter gives you. (There was a day when StatCounter didn't load for 2 minutes! Your site is stuck again if you don't go async.)

  <!-- Start of StatCounter Code -->
  <script type="text/javascript">
  var sc_project=[YOUR CODE HERE];
  var sc_invisible=[YOUR CODE HERE];
  var sc_security="[YOUR CODE HERE]";
  (function() {
      // Same async-injection pattern as the Google Analytics snippet:
      var sc = document.createElement('script');
      sc.type = 'text/javascript'; sc.async = true;
      sc.src = document.location.protocol + '//www.statcounter.com/counter/counter.js';
      var s = document.getElementsByTagName('script')[0];
      s.parentNode.insertBefore(sc, s);
  })();
  </script>
  <noscript><div class="statcounter"><a title="web analytics"
  href="http://statcounter.com/" target="_blank"><img class="statcounter"
  src="[YOUR CODE HERE]" alt="web analytics"></a></div></noscript>
  <!-- End of StatCounter Code -->
StumbleUpon? Adapt its code the same way - see the generic helper below.
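
As a starting point, here's the pattern boiled down to a generic helper. This is a minimal sketch based on the snippets above; the loadAsync name is mine, not something any of these services ship:

  <script>
    // Load any third-party widget script without blocking rendering.
    function loadAsync(url) {
      var s = document.createElement('script');
      s.async = true;
      s.src = url;
      document.getElementsByTagName('head')[0].appendChild(s);
    }
    // Example: loadAsync(document.location.protocol + '//platform.twitter.com/widgets.js');
  </script>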

Seeing someone asking you to insert '<script src="http://' into your code? Tell them to do better engineering and stop slowing down your site.

P.S. The reasons for the hiccups at Twitter's and FB's CDNs might be poor peering, bad servers, anything really. You can't fix Twitter's and Facebook's software and servers, but you can let your visitors see your site without depending on how well the engineers at those companies do their jobs.

P.P.S. There is a good question in comments about how to diagnose your own site's problems: http://news.ycombinator.com/item?id=1771755

Problematic hosts list (the srcs that sometimes cause huge slowdowns for me) - just in case you need it:

  widgets.digg.com
  platform.twitter.com
  static.ak.fbcdn.net
  www.stumbleupon.com
  i.ixnp.com



In general I find these buttons obnoxious as hell (they are tasteful on some sites like the NYTimes). This just adds insult to injury.


Or perhaps this adds injury to insult.


I wonder if injuries and insults commute...


Incredibly annoying. I have a quad core machine, 16GB RAM, an SSD, and 100Mbit broadband, yet it takes 14 seconds for TechCrunch to fully load due to the Post Up widget they use. Once it loads, the page scrolling is not smooth at all. It definitely makes me less likely to visit.


I just adblock that shit. I know I should just not visit their site or whatever, but if everything I wanted to read had to live up to my standards, there would be nothing to read.

(Incidentally, my own blog didn't live up to my own standards, and it's down pending me fixing that :)


This is my primary reason for running adblock. I can't tell you how many clients have e-mailed me saying 'My site is down!' or 'My site takes a few minutes to load', even though the site loads in about a second in Safari. When I switch over to my (un-extended) Firefox install, I suddenly see the problem - literally dozens of Javascripts, image requests, tracking pixels, and other junk.

It's ridiculous. At one company we had a client who spent the last year contracting us to add widgets and pixels and add-ons and buttons and everything, until suddenly they found out that Google would be factoring page load times into their search results. Suddenly it was a rock and a hard place - do they keep adding content without any proof of its value? But that would hurt their search rankings - but but but, it adds value! It encourages engagement!

It's ridiculous. They didn't even have any metrics showing that any of the stuff they were adding was helping, or even being used. Likewise with the search widgets. People are told that these things will 'drive user engagement' and 'encourage social interaction', so they throw it in and assume that it's made their site better.


I have written a very simple tester for these problems that you can use to diagnose your own site: http://whiteposts.com/not-async

It now also shows the code that needs to be replaced to make the calls async.

If you like the tool - please upvote it at http://news.ycombinator.com/item?id=1772169


Google and Amazon have published research to the effect that even small increases in page load time have a massive effect on traffic.

Social sharing is intended to boost traffic virally, but I wonder if these stupid widgets are actually reducing traffic by slowing the page load so much.

Many of these submit tools could be replaced (with less functionality) with an image button or text link.


Absolutely. When I set up caching and a CDN on one of my sites, I saw a 25% increase in visits literally overnight. When I asked the developer of the caching plugin, he said that many people report the same thing.


Use good URLs (this is not equivalent to shortened URLs) if you want people to refer other users to your site.


Warning: while the above async loading code works for Twitter, it won't work for any JavaScript containing calls to document.write(), as those will, depending on the browser, append the data to the end of the page (or even replace the whole page!).

All is not lost though, as you can patch document.write to do the right thing and write to an element's .innerHTML or equivalent, as sketched below. If you are a JavaScript API provider, please take heed and don't use document.write(). I'm looking at you, PollDaddy!
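
Roughly, a patch like this (just a sketch: the URL and element id are hypothetical, older IE needs onreadystatechange instead of onload, and any <script> tags inside the buffered markup won't execute via innerHTML):

  <div id="widget-target"></div>
  <script>
    (function() {
      // Buffer everything the third-party script tries to document.write.
      var buffer = [];
      var originalWrite = document.write;
      document.write = function(html) { buffer.push(html); };

      var s = document.createElement('script');
      s.async = true;
      s.src = 'http://widgets.example.com/button.js'; // hypothetical widget
      s.onload = function() {
        // Flush the buffered markup into the target, then restore.
        document.getElementById('widget-target').innerHTML = buffer.join('');
        document.write = originalWrite;
      };
      document.getElementsByTagName('head')[0].appendChild(s);
    })();
  </script>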


How would you handle waiting simultaneously on multiple scripts that all want to use document.write?

I use some ad networks which want to call document.write to insert the ads. First I patch document.write to a function that writes to the innerHTML of the first ad tag; then, after that ad loads, I change the function and start loading the next one.

Obviously slower than doing it simultaneously (see the sketch below). I suppose I could look at how the function is called to determine which ad it came from, but did you have a better idea?
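
Roughly, what I do looks like this (names and URLs are made up for illustration):

  function loadAdInto(targetId, scriptUrl, onDone) {
    // Each call installs a fresh buffer for this ad's document.write calls.
    var buffer = [];
    document.write = function(html) { buffer.push(html); };
    var s = document.createElement('script');
    s.src = scriptUrl;
    s.onload = function() {
      document.getElementById(targetId).innerHTML = buffer.join('');
      if (onDone) onDone();
    };
    document.getElementsByTagName('head')[0].appendChild(s);
  }

  // The second ad only starts loading after the first one has finished:
  loadAdInto('ad-slot-1', 'http://ads.example.com/tag1.js', function() {
    loadAdInto('ad-slot-2', 'http://ads.example.com/tag2.js');
  });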


That's a tricky one. I haven't tested this, but I think you should be able to listen for the 'load' event on each of the script elements that you inject, and patch document.write differently each time.

Let me know how you get on! I plan on doing something like this soon for some of my sites that have AdSense (another document.write culprit).


I just pushed a class I've been using to do this up to GitHub. It's a little rough around the edges, but it works in Firefox and IE. I've trialed it with AddThis. http://github.com/joshduck/Injector.js


Yes, thank you, I forgot to mention that. It does work for StatCounter and Twitter, though.


I've experienced this before, and let me just say that it was hell to debug! TribalFusion seems to have done the same - using document.write() and document.writeln().


Meebo gave a presentation at Velocity this year about truly asynchronous loading, among other performance secrets of the Meebo bar. (FF can still block using the suggested technique.)

Well worth the watch: http://www.youtube.com/watch?v=b7SUFLFu3HI


There is another widget which shows a bottom toolbar that looks like the Facebook one (and it offers sharing and chats and other crap). It makes websites fucki*gly slow, and it sometimes interferes with the site's own JavaScript, adding nuisances like bad scrolling and broken text selection...

I end up closing these sites as soon as I open them. I would also recommend that everyone who has a blog use the most minimalistic widgets, less JavaScript, and a simple design. The purpose of a blog is to be read; if I want to chat, I'll open Skype.

Coding Horror is a good example.


AdBlock does a good job on these. I have meebo.com and wibiya.com completely blocked, as well as facebook.com, fbcdn.net, tynt.com, etc. I use a different browser installation altogether for things like Facebook when I actually want to download their stuff.

Basically, any time a website takes ages to come up, I turn on Firebug's Net panel and monitor the laggy requests; then, if warranted, I kill them off with AdBlock. Similarly for offensive widgets.


Have you considered using NoScript? It might be easier to whitelist the few widgets you care about, vs. tracking down miscreants.


With the relentless XSS attacks against large websites - including the PayPal XSS the other day and the recent Twitter XSS attacks - why are there any techies left not using NoScript?

The web is not safe to use without NoScript.


I uninstalled NoScript after it interfered with my e-commerce transactions one too many times. Lack of JavaScript caused the transaction to halt abruptly, sometimes causing me to lose the ticket I was booking (some travel tickets are very time-sensitive here). Even after I added my own bank to the whitelist, the middleman sites between the retailer and the bank were still getting blocked.

I just installed the RequestPolicy addon after going through this thread, and am hoping it will be a good tradeoff. (The other reasons for installing it instead of NoScript are those annoying ad-filled pages NoScript shows after its frequent updates, and its author's attempt to fiddle with ABP some time back, which makes his integrity questionable.)


I uninstalled it myself for the same reasons. It does cause problems sometimes, but after you've fine-tuned it those times become very infrequent. You can even synchronise your settings between Firefox installations now by telling NoScript to store its config in a bookmark.

I reinstalled NoScript a while back. IMO, the problems it prevents outweigh the problems it causes.


Techies using Google Chrome perhaps? (not me!)


I don't use Chrome, but I do use some non-Firefox browsers (Arora, w3m, sometimes uzbl), so I take a more barbaric approach: a giant /etc/hosts file pointing various offensive domains at 255.255.255.255, plus iftop, Firebug, WebKit's inspector, and suspicious cookies to figure out where the horrors are coming from.


I tried it once, but the whitelisting got too tedious; almost every website I interact with needs scripting. I've also never been hit with an XSS attack; similarly, I haven't gotten a virus since a boot-sector one on a floppy 15 years ago. I haven't found the risks to be high.


I have my NoScript set to allow %site.com and *.%site.com by default. It leaves most sites usable but blocks most of the bs. I only rarely have to whitelist a somethingsomethingCDN.com (fewer than five sites) - that, googleapis, jquery, and not much else. Read the docs and adjust the settings accordingly.


Is it the Meebo bar?

http://bar.meebo.com/


Meebo is very proud of this bar, for reasons I don't entirely understand. I can see a fair amount of use for it on some websites, but I really wish they would optimize it so that those who opt to implement it on their pages don't take such a pageload performance hit.


Yeah, that one!


The bar from www.wibiya.com is even worse. Not only is it slow to load, but it has notification pop-ups and a bunch of other crap.

I'm sick and tired of all the sites that have decided to add bars to the top and bottom of their pages and a billion social network buttons all over the page. Even though I'm blocking most of them via AdBlock, new ones keep showing up, so I just give up and stop going back to sites that use them.


LABjs is a useful JavaScript library that lets you load JavaScript on-demand and specify dependencies: http://labjs.com/


LABjs is great for making JavaScript not block the rendering of your page. It's not just a simple switch, though. Depending on how your code is structured and the various assumptions you've made, there may need to be some code changes.

The biggest thing is that if you load everything through LABjs - including jQuery - then the document.ready event fires BEFORE ALL THE JAVASCRIPT IS LOADED/EXECUTED. This is incredibly important because many, many people code with $(document).ready as the signal to run all sorts of stuff. With LABjs the meaning of document.ready changes, so you can't assume that things like plugins have been declared when document.ready fires.
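
A minimal sketch of the LABjs chaining pattern (the script paths and the plugin call are placeholders):

  <script src="LAB.min.js"></script>
  <script>
    $LAB
      .script('/js/jquery.min.js').wait()   // jQuery must execute first
      .script('/js/jquery.plugin.js')       // then the plugin can load
      .wait(function() {
        // Everything above is loaded and executed by now, so it's safe
        // to use the plugin - don't rely on $(document).ready alone.
        $(function() { $('#widget').plugin(); });
      });
  </script>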


This is really good advice. I'm surprised these big sites aren't using something like CloudFront to serve their files.


Well, they kind of do. It's just that sometimes their CDNs have serious hiccups, and ALL sites that use non-async code are affected (when they shouldn't be).

I'm really surprised at Twitter. They have good engineers, so why are they giving you non-async code when the above code does exactly the same thing?


Did I miss something, or why are these in your head tag? Why not put them in the body tag so they don't load until after the browser has rendered the page?


Sorry, but you aren't right. Those are mostly located in the body tag. They are requested as soon as the browser encounters them, and because they might issue "document.write", execution of the rest of the HTML is delayed until that "src" is loaded. That's exactly what causes the problem. Putting a script into the body doesn't defer its execution.

I'm not sure how browsers handle <script src in the head, but it's probably the same.


It works the same in the head. Any script not set to load asynchronously will prevent further rendering of the page until the script has finished loading and running.


Oh, OK, thanks! I would have assumed that this would have fixed it and been frustrated.


Good point, but you'll still benefit from using async script loading if you have multiple third-party scripts loading at the end of your page. If the Twitter script is included before the others, and it lags, it will hold them up too.


I've experienced similar issues from here in the UK. While the internet is typically pretty fast for me, it seems that some of the CDNs that sites use have verrrrry flaky POPs in Europe. Tumblr is an example - frequently I can't load any images on Tumblr sites even though the rest of the site is speedy and fine.


I've just spent several days improving the startup time of my ASP.NET-based business site. Some things to note:

* Install both the 'Google PageSpeed' and 'YSlow' plugins for Firefox. They provide great metrics and tell you what's actually slowing down your page.

* Ensure that all images are sent out with a long expiry time. This is not the default setting for IIS. Just setting a long expiry for images, CSS, and JS will easily give you a performance boost.

* Minify JS and CSS using the 'Chirpy' plugin.

* Make sure to retrieve any library code from its respective CDN (e.g. Microsoft AJAX, Facebook, etc.).

* And of course, as above, make sure your plug-ins load asynchronously. The default code given to me for the 'AddThis' plug-in was synchronous and took about 1.5 seconds to retrieve. Quite silly that they don't give you async code by default for these plugins!


I'm pretty sure you don't need to write the element with JavaScript. You should be able to do the following:

  <script async src="//..."></script>
(Use 'async="async"' for XHTML.)


It's not that simple - it won't work in most browsers. Also, if the script has document.write inside, it will fail.

http://stackoverflow.com/questions/1834077/browser-support-f...


More and more support for it every day:

http://webkit.org/blog/1395/running-scripts-in-webkit/

There's also "defer," which is better supported. The document.write issue also affects JavaScript element building, but the assumption is that you won't blindly use these methods without testing first.
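
For reference, the two attribute forms look like this (the widgets.js URL is just the example from above; check browser support before relying on either):

  <script async src="//platform.twitter.com/widgets.js"></script>
  <script defer src="//platform.twitter.com/widgets.js"></script>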


It turns out (I just read an article in Russian) that there was recently a huge DDoS attack (100+ Gbit/s) that dropped major providers in Europe, DARPA (.mil), and a major Russian search engine. 100 Gbit/s is enough to fill major landlines and backbones (even in the US - darpa.mil went down too). That might have been the cause of the delays. Nobody is sure whether the attack is over, and major datacenters and DDoS protection providers have failed to do anything against it.

Still, making your site independent with async scripts is a good idea.


Those kinds of sites haven't gotten slow for me at all, and I'm only on 8Mbps broadband. Thanks for bringing it to everyone's attention though, hopefully it'll get fixed.


Broadband penetration in the US was at 20% in 2007. 8Mbps is positively luxurious. (When I lived in Ohio I was at 3Mbps advertised, which of course translates to significantly less.)

Now I'm on a shared satellite link which is something like 8Mbps for 30 users.


Delays of 30-90 seconds are rarely at the network level. Asynchronous is generally better than synchronous, but it seems more detective work should be done before declaring war on synchronous scripts.

Are other users in Russia affected? Are users on other ISPs affected? How widespread is this problem? Are your ping times the same to "www.digg.com" as to "widgets.digg.com"?

If there is any truth to this post, the problem should be fixable.


It's not at the network level; it's probably at the service level. The service takes too long to respond (overloaded, not enough bandwidth, real-time replication, deadlocks - you name it). I don't know how widespread the problem is, but I'm on a major provider and my connection to everything else is fine.

The ping is fine, BTW - 50ms for widgets.digg.com, 200ms for digg.com. Which just helps prove that it's a service problem.


I built some fairly clumsy code to fix this on one of the blogs I run. It loaded the buttons in the footer of the page (so you can be reading the article before all the nonsense has loaded), then moved them to the correct place (the foot of the article) once ready. By the time you came to retweet or what have you, it was loaded. A rough sketch is below.

It's not that dignified, but it did what I needed it to do.
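
Something along these lines (a rough reconstruction, not the original code; the ids and the window.onload trigger are illustrative):

  <!-- where the buttons should end up, at the foot of the article -->
  <div id="share-target"></div>

  <!-- at the bottom of the page, after all the content -->
  <div id="share-loader">
    <a href="http://twitter.com/share" class="twitter-share-button">Tweet</a>
    <script async src="http://platform.twitter.com/widgets.js"></script>
  </div>
  <script>
    // Once everything has loaded, move the rendered buttons up into place.
    window.onload = function() {
      var loader = document.getElementById('share-loader');
      document.getElementById('share-target').appendChild(loader);
    };
  </script>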


I have problems with our adserver tags loading much slower than our pages. I often use a CSS hack so that the adserver code loads at the bottom of the page but is positioned by CSS at the top (sketched below). However, this can't be applied in 100% of situations...

Has anyone managed to load adserver code with this method successfully? I'll be trying it myself this weekend and will report back!
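
The hack, roughly (the ad tag URL, class name, and dimensions are hypothetical):

  <style>
    /* The ad slot sits last in the markup but is positioned at the top. */
    .top-ad { position: absolute; top: 0; left: 0; width: 728px; height: 90px; }
    body { padding-top: 90px; } /* reserve the space so content doesn't jump */
  </style>

  <!-- ... all the page content renders first ... -->

  <div class="top-ad">
    <script src="http://ads.example.com/tag.js"></script> <!-- hypothetical tag -->
  </div>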


I have these problems when developing with backend Facebook calls. In theory, Facebook's servers and our servers shouldn't have a load of latency between them, but it really makes the site noticeably slower. Definitely slow enough that I would have to remove all the backend calls on page load if we were to push Facebook Connect more.


Does page caching help? I have that set up for my personal WordPress blog. I really don't wanna have to modify all the JavaScript on my page. I've got bigger fish to fry.

I also don't want crap-slow page loads. It loads for me in under a second.


Page caching doesn't matter if you're making a request to an external server.


Even worse, the buttons hosted on wd.sharethis.com set frequent timeouts that use up a measurable amount of CPU (and laptop battery life).

(This is particularly bad when I open a large number of articles in tabs, to read later.)


I've seen some metrics drop by as much as 10% because of this issue. The drop was pretty bewildering at first, but then I A/B tested including (but not using) the JavaScript and saw the difference.


Do the ads load quickly? If the ads load quickly but the content loads slowly, people are more likely to click the ads.

Of course, they'll never visit your site again, but hey... that's what "social media" is for!


Is there a service that lets you check your site's loading time and whether you are using asynchronous loading? It would be useful for solving this problem.


Download Google Chrome, open your site, hit Ctrl+Shift+J, click Resources, look for loooooong lines, and see what they are. Everything to the right of the blue line is mostly safe (async).

Also open your HTML source and search for src="http, and check whether any of those are <script tags. Scripts loaded asynchronously don't have that src="http part in the source. (A quick console check is sketched below.)
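
A quick-and-dirty console check (paste into the Dev Tools console; note that the async property only reflects the attribute in browsers that support it, so treat the output as a hint, not proof):

  var scripts = document.getElementsByTagName('script');
  for (var i = 0; i < scripts.length; i++) {
    if (scripts[i].src && !scripts[i].async && !scripts[i].defer) {
      console.log('potentially blocking:', scripts[i].src);
    }
  }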

Update: http://whiteposts.com/not-async


If you're using Firefox, the Firebug extension's Net panel (when active) shows the Gantt chart of downloads.


You can also team Firebug up with Yahoo's fantastic benchmarking utility YSlow.


Safari lets you profile a web site too. It even has nice little graphs to show you what is so slow.



Are the delays in Russia due to a satellite? Those seem like very long delays. I am usually annoyed by 5-15 second delays in the US.


No. I have a stable 180ms ping to news.YC and a stable 40ms ping to Google. It's not a line problem.

Chrome Dev Tools show me that I receive content within 300-400ms of hitting the page, with all the images loading in about 2-3 seconds... and then there's one huge 1-minute line associated with "platform.twitter.com", "static.ak.fbcdn.net", or "www.stumbleupon.com" (everything else is a short tick with a 50-100ms receive time - it's a real 35Mbps line).

Any site that I measure with Chrome Dev Tools, in any part of the world, is fast (seconds, milliseconds); the only parts that are quite often slow are those guys - T, FB, SU.

And yes - those are very long delays. Maybe Twitter and FB have some peering problems - I don't know, and I don't need to know. The point is that this can be fixed easily on your side, so that we don't have to find out why T/FB have peering problems.


For what it's worth, I often see this problem in Canada - it's not always the offenders you list, but third-party widgets do significantly slow down page loads relatively often.


platform.twitter.com runs on Akamai. Maybe there is a problem with the Russian Akamai mirror(s)? I don't know if Facebook and StumbleUpon use Akamai too.


Or just put them after </body> but before </html>


In most cases, for social media buttons supplied by these web services, you can't. You typically have to reference a JS library at the position in the document where you want the button to render. Put them after </body> and they'll render in that position in the document. The above tip is smart and works around that by asynchronously updating the DOM when the scripts are ready (i.e., downloaded).

If you meant "put the OP's scripts after the body element" - yeah, you could do that (at least with the GA async code, for sure), and it would be a good idea so that you serve and render CSS first, making your pages feel more responsive. But that's not what your comment reads as.


Could it be China's Great Firewall? These sites all load snappily for me, and I am on a 45Mbps local Ethernet loop in San Francisco. It sounds like a network edge problem.


I'm not behind the Chinese firewall. I'm in Russia (near Moscow).


Yes, and there is no direct connection between the US and Russia; the internet will take the shortest traced route. So if one site traces well through China, then a nearby IP might actually start the connection using that route. It's hard for me to believe that Russia has opted not to allow any traffic whatsoever to route through China.



