
Inline JavaScript Experiment
07-13-2010, 12:27 AM
Post: #21
RE: Inline JavaScript Experiment
That was my fear with the Google crawler that I was talking about. I wasn't sure whether it was true, but if it is, saving 100 milliseconds on average simply isn't worth Google not crawling the entire document. Do you have any idea exactly how far Google will read into a document before it stops? Does it make a difference that all the JavaScript is on one line? Does it make a difference if the main document is gzipped?
07-13-2010, 12:49 AM (This post was last modified: 07-13-2010 01:01 AM by jklein.)
Post: #22
RE: Inline JavaScript Experiment
I'm not sure how far the Googlebot will crawl, but minifying the JS and gzipping the page will definitely help. I assume the bot bails after a certain number of characters (or KB), and in either case you would want to put the JS on one line and gzip the page. Gzipping also lets the bot fetch the page faster, which means it will crawl more pages on your site, which is always a good thing.
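If you happen to have Apache in front of the ColdFusion server, the gzip part is usually just a couple of mod_deflate lines - a sketch only, since I don't know your actual stack, and IIS has its own compression settings:

<IfModule mod_deflate.c>
    # Compress the HTML (with its inlined JS) plus external CSS/JS.
    AddOutputFilterByType DEFLATE text/html text/css text/javascript application/x-javascript application/javascript
</IfModule>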

If you look in webmaster tools you can see how much time the Googlebot spent downloading your pages and how many pages per day the bot is crawling. On our sites we have seen a very clear inverse relationship between those two graphs - as time spent downloading a page goes down the number of pages that the bot crawls goes up significantly.

Speaking of SEO, we have found that having dashes in your domain name is extremely negative from Google's perspective. You may want to consider purchasing greenwatch.org and moving your site over to that domain. We are clearly not alone in this conclusion:

http://www.webcopywriter.com.au/2008/09/...main-name/
07-13-2010, 05:42 AM
Post: #23
RE: Inline JavaScript Experiment
FWIW, I made a switch on a test system to flush the DNS instead of disabling the cache, but because of how you have the chains set up I can't tell if it is actually working: http://www.webpagetest.org/result/100712...1/details/

images9 and images10 are both done in parallel, so the fact that they are chained together won't buy you anything. Since they are lower in the chain than cdn and images8, they won't buy you any savings either.

Do you have the sharding as a toggleable setting? It might be interesting to see how much it actually buys you, because managing it (particularly sharded this heavily) is quite a pain in the butt.
07-13-2010, 08:46 AM
Post: #24
RE: Inline JavaScript Experiment
I just noticed that style.cfm is downloaded in a different place when comparing IE7 vs IE8:

IE7 - http://www.webpagetest.org/result/100712...1/details/
IE8 - http://www.webpagetest.org/result/100712...2/details/

I know this has to be because of the JavaScript preloading I am doing right now.

Is it possible to preload a CSS file? I might play around with this idea later tonight.
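One idea I might try (a totally untested sketch - the stylesheet path below is just a placeholder, not the real file on this site): fetch the CSS with an async XHR purely to warm the browser cache before the real link tag requests it, with the ActiveXObject branch covering IE6/7.

<script type="text/javascript">
(function () {
    // Untested sketch: pre-warm the browser cache for a same-origin stylesheet.
    // "/css/style.cfm" is a placeholder path, not the real file on this site.
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject("Microsoft.XMLHTTP");
    xhr.open("GET", "/css/style.cfm", true); // async, so it never blocks parsing
    xhr.send(null);                          // response is discarded; we only want it cached
})();
</script>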

Sharding is not a toggleable setting, unfortunately. I will create some test pages later tonight as well.
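For reference, if the URLs were generated by a helper, a toggle would be easy to bolt on - something like this (purely hypothetical sketch with placeholder hostnames; my pages don't actually work this way):

<script type="text/javascript">
// Purely hypothetical sketch of a toggleable shard helper; hostnames are placeholders.
var SHARDING_ENABLED = true; // flip to false for an unsharded test run
var SHARD_HOSTS = ["images1.example.com", "images2.example.com"];

function shardUrl(path) {
    if (!SHARDING_ENABLED) {
        return "http://www.example.com" + path; // serve everything from the base host
    }
    // Cheap deterministic hash so a given path always maps to the same host,
    // which keeps browser caching effective from page to page.
    var bucket = 0;
    for (var i = 0; i < path.length; i++) {
        bucket = (bucket * 31 + path.charCodeAt(i)) % SHARD_HOSTS.length;
    }
    return "http://" + SHARD_HOSTS[bucket] + path;
}

// e.g. document.write('<img src="' + shardUrl("/img/logo.png") + '" />');
</script>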

I have to run for now. Bowling league tonight :)
07-13-2010, 09:09 AM
Post: #25
RE: Inline JavaScript Experiment
Given how parallel some of the new browsers are (up to 16 connections and no blocking JavaScript for Firefox 3.6), I think the domain chaining is going to be too difficult to get working consistently (unless your image domain is just a CNAME for the same host as your base page).
07-13-2010, 07:32 PM
Post: #26
RE: Inline JavaScript Experiment
(07-13-2010 12:19 AM)jklein Wrote:  Couple of things:
2. Sharding static content across 10 sub-domains is probably overkill. Depending on what browsers people are typically using on your site this could cause way too much thrashing. A browser like Firefox that makes 8 connections per domain doesn't need that level of sharding (in fact it will likely make your site load SLOWER). The recommendation from Yahoo is to shard across two domains and no more. That post is also old, written when many more people were using IE 6 and 7 and we needed to be sharding more aggressively.

The Yahoo blog post cited two things that limit parallel downloads/sharding: CPU utilisation and DNS lookups.

The CNAME hack should hopefully mitigate the DNS lookup penalty, though the data coming back in this thread has been very interesting regarding its limitations. Am I right in assuming that this is the cause of the slowdown for websites with aggressive amounts of sharding?

With respect to the CPU utilisation aspect, the blog poster was seeing 25% CPU utilisation downloading 2 resources in parallel and 40% for 4. Modern browsers have aggressively upped the number of parallel connections to 8 or more, so if CPU utilisation from parallel downloads were going to be an issue I would expect it to be a hotter topic than it is. That is to say, I don't think it is a hot topic at the moment, but I am happy to be proved wrong. I am assuming this is the thrashing you are referring to in your post?

Since modern browsers are very capable, and growing more so, my feeling is that a technique like this may be more applicable for improving matters for the army of the living dead (IE6 users), IE7 users and other old-browser users, who are limited to a small number of parallel connections when accessing pages with a large number of resources. While modern browsers may not perform quite as well as they could in a perfect world, due to potentially incurring more DNS lookups, they will still be faster than the old browsers when accessing resource-heavy pages via the CNAME hack, and overall your users should be getting a better experience. A lot of ecommerce sites have around 80-100 resources on a page, so there is plenty of scope there for parallel downloads.

Potentially the CNAME hack is only advisable for sharding across up to 4-5 domains (similar to the old sharding limit but without the added DNS penalty), and only if you are sharding on the same IP that is serving the main page. Maybe there is a trick that can force the browser to prefetch the DNS results for a CNAME-hacked, sharded CDN when it grabs the main page? Is this a case for adding a blocking resource to a page just to perform DNS resolution, for the benefit of the non-blocking resources on the rest of the page? <chuckle>
08-05-2010, 05:10 AM
Post: #27
RE: Inline JavaScript Experiment
(07-13-2010 07:32 PM)calumfodder Wrote:  With respect to the CPU utilisation aspect, the blog poster was seeing 25% CPU utilisation downloading 2 resources in parallel and 40% for 4. Modern browsers have aggressively upped the number of parallel connections to 8 or more, so if CPU utilisation from parallel downloads were going to be an issue I would expect it to be a hotter topic than it is. That is to say, I don't think it is a hot topic at the moment, but I am happy to be proved wrong. I am assuming this is the thrashing you are referring to in your post?

Yes, that was my concern with sharding across ~10 domains. I think the reason we don't hear about it is that very few people are sharding, and those that are usually only shard across 2-4 domains, where CPU utilization is probably not an issue. Not to mention the fact that you basically have no visibility into the CPU of your clients' machines unless you are using something like WebPagetest.

(07-13-2010 07:32 PM)calumfodder Wrote:  Since modern browsers are very capable, and growing more so, my feeling is that a technique like this may be more applicable for improving matters for the army of the living dead (IE6 users), IE7 users and other old-browser users, who are limited to a small number of parallel connections when accessing pages with a large number of resources. While modern browsers may not perform quite as well as they could in a perfect world, due to potentially incurring more DNS lookups, they will still be faster than the old browsers when accessing resource-heavy pages via the CNAME hack, and overall your users should be getting a better experience. A lot of ecommerce sites have around 80-100 resources on a page, so there is plenty of scope there for parallel downloads.

I think one important thing to know is what percentage of your users are the "living dead", as you say. If you have 80%+ using IE8/FF3+/Chrome/Safari, like most sites, then you really don't need to shard that much. Even if you run a complex ecommerce site, 80-100 resources per page is on the high side - I work at a large online retailer and our most complex pages are in the 50-70 range. By doing too much sharding you are probably hurting the newer browsers as you help the older ones, which could be a pretty bad plan if most of your users have upgraded. As you stated, the CNAME hack has some limitations - you can run into a race condition where the nested CNAME lookup from your first DNS lookup is racing to finish while the page is racing to download the next resource and make its own DNS lookup. This would be more of a problem for newer browsers that download 6+ resources at once. In this case you really might not see the benefit from the CNAME hack.

(07-13-2010 07:32 PM)calumfodder Wrote:  Is this a case for adding a blocking resource to a page just to perform DNS resolution, for the benefit of the non-blocking resources on the rest of the page? <chuckle>

I don't have any data on this, but I'm guessing that intentionally adding a blocking resource just to allow a DNS lookup to be performed isn't a great idea. If you already have a blocking file at the top of the page though then it probably wouldn't hurt to use the CNAME hack and make that first DNS lookup fetch back the rest of the static domains on your page.
08-11-2010, 04:59 AM
Post: #28
RE: Inline JavaScript Experiment
(08-05-2010 05:10 AM)jklein Wrote:  Yes, that was my concern with sharding across ~10 domains. I think the reason we don't hear about it is that very few people are sharding, and those that are usually only shard across 2-4 domains, where CPU utilization is probably not an issue. Not to mention the fact that you basically have no visibility into the CPU of your clients' machines unless you are using something like WebPagetest.
There is some visibility into a browser's CPU usage (though not at the web-page level) through the browser vendors' reputation and forums. Take, for example, the issues that get raised with Mozilla Firefox about memory usage and stability. Whilst I appreciate that extreme sharding is going to be rare, I cannot (anecdotally) remember seeing CPU usage raised as a major concern for browsers. The CPU utilisation for creating connections and handling the streams from downloads (even 50-60 of them) is, I would suggest, minor compared with parsing and executing the JS and CSS for the page. I had thought there used to be an option on WebPagetest to set the number of connections a browser makes for a test cycle, and I was going to run a test to see the CPU load, but I can't find it anymore... memory obviously playing tricks... time for a check :-)
(08-05-2010 05:10 AM)jklein Wrote:  Even if you run a complex ecommerce site, 80-100 resources per page is on the high side
I agree that it is high, but it is something I've seen all too often.

(08-05-2010 05:10 AM)jklein Wrote:  I don't have any data on this, but I'm guessing that intentionally adding a blocking resource just to allow a DNS lookup to be performed isn't a great idea. If you already have a blocking file at the top of the page though then it probably wouldn't hurt to use the CNAME hack and make that first DNS lookup fetch back the rest of the static domains on your page.
If you are using a single host to serve content, or are hiding behind a load balancer, then the HTML document is your blocking resource that can grab all the DNS aliases when utilising the CNAME hack.
Some people are placing dummy link elements in their HTML head to try and force a browser DNS prefetch, though this won't work very well in a non-blocking, parallel-download world.
I was thinking along the lines of a script element that attempts to download an image from a webserver that serves a 204 response. You would pay the price of the DNS lookup and of creating a connection, but you wouldn't get hit by TCP slow start, as the response would fit in the first TCP window. Placing this as the first element to be downloaded would hopefully initiate the desired DNS fetch, making the records/aliases available for the other connections down the line. You could flush the head of the HTML document to mitigate the blocking nature of the element.
Again, the future is brighter, with browser DNS prefetch coming along in future browsers; I think it is there to some degree in some of the current ones already.
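To make that concrete, the head of the page might carry something like this (hostnames are placeholders, and the /blank URL returning a 204 is an assumed endpoint, not an existing one):

<!-- Prefetch hints for browsers that honour them; hostnames are placeholders. -->
<link rel="dns-prefetch" href="//images1.example.com" />
<link rel="dns-prefetch" href="//images2.example.com" />

<script type="text/javascript">
    // Fallback: request a tiny resource that the server answers with 204 No Content,
    // so the only cost is the DNS lookup and TCP connect, and the reply fits in the
    // first TCP window. The timestamp query string keeps the warm-up off the cache.
    (new Image()).src = "http://images1.example.com/blank?" + (new Date()).getTime();
</script>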
08-11-2010, 05:05 AM
Post: #29
RE: Inline JavaScript Experiment
(08-11-2010 04:59 AM)calumfodder Wrote:  I had thought there used to be an option on WebPagetest to set the number of connections a browser makes for a test cycle, and I was going to run a test to see the CPU load, but I can't find it anymore... memory obviously playing tricks... time for a check :-)

You're not going (completely) insane. It used to be there, but it was a hassle to maintain in the pagetest code, so I removed it 6-10 months ago. I can put it back in if there is enough demand, but I was largely using it for testing IE8 vs IE7 with an equivalent number of connections.