07-03-2009, 11:12 PM
If you got a 500 or something like that, it will show up as the result code for the page (normally 0 for success or 99999 for a content error), and a screen shot could be taken because it was an error (I assume you've figured out how to grab screen shots on errors). If the page returned a 200 but failed gracefully then it wouldn't.
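If you want to pull those error pages out of the logs in bulk, something along these lines would do it. This is just a minimal sketch assuming a tab-delimited page-level log with a header row; the file name and the "Result Code" and "URL" column names are hypothetical, so adjust them to whatever your actual logs use:

#!/usr/bin/env python3
# Flag pages whose result code indicates an error.
# Sketch only: assumes a tab-delimited page-level log with a header row;
# "pagetest_pages.log", "Result Code" and "URL" are hypothetical names.
import csv

LOG_FILE = "pagetest_pages.log"

with open(LOG_FILE) as f:
    for row in csv.DictReader(f, delimiter="\t"):
        code = int(row["Result Code"])
        # 0 = success, 99999 = content error; anything else (e.g. a 500)
        # is the result code recorded for the page itself.
        if code not in (0, 99999):
            print(f"error {code}: {row.get('URL', '<unknown>')}")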
Right now pagetest can only grab screen shots on errors or do a full dump of screen shots and graphics (used for one-off testing). It wouldn't be hard to add the ability to grab a screen shot for every page, but be warned that the storage requirements could get out of hand pretty quickly if you're crawling a complicated site.
Are you talking about the page-level or the request-level data? For bulk processing of the request-level data I wrote an app that parses the log files and splits the results out by domain so we could see what all of the broken requests were for a given domain, regardless of property. Most of the bulk analysis has been targeted at looking for specific things (broken content, missing gzip, etc.), so it wasn't too hard to throw together a script that could just look for all instances. A rough sketch of that kind of script is below.
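This isn't the actual app, just a sketch of the same idea: it assumes a tab-delimited request-level log with "URL" and "Result Code" header columns (both hypothetical names), and treats anything 400 and up as broken:

#!/usr/bin/env python3
# Group broken requests by domain from a request-level log.
# Sketch only: file name and column headers are hypothetical.
import csv
from collections import defaultdict
from urllib.parse import urlparse

broken_by_domain = defaultdict(list)

with open("pagetest_requests.log") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        code = int(row["Result Code"])
        if code >= 400:  # treat 4xx/5xx responses as broken
            domain = urlparse(row["URL"]).netloc
            broken_by_domain[domain].append((code, row["URL"]))

# One report per domain, regardless of which property the page came from.
for domain, requests in sorted(broken_by_domain.items()):
    print(f"{domain}: {len(requests)} broken requests")
    for code, url in requests:
        print(f"  {code}  {url}")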
For non-crawled testing we put the results in a database and have a front-end for plotting them and doing drill-downs. That's a fairly large and complex system though.
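At its simplest the databasing side boils down to something like this toy sketch using SQLite. The schema, file names, and column headers here are all hypothetical (the real system is far more elaborate):

#!/usr/bin/env python3
# Load page-level results into SQLite so a front-end can plot them.
# Sketch only: schema, file names, and column headers are hypothetical.
import csv
import sqlite3

conn = sqlite3.connect("results.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS page_results (
        run_date    TEXT,
        url         TEXT,
        result_code INTEGER,
        load_time   INTEGER   -- milliseconds
    )
""")

with open("pagetest_pages.log") as f:
    rows = [
        (r["Date"], r["URL"], int(r["Result Code"]), int(r["Load Time"]))
        for r in csv.DictReader(f, delimiter="\t")
    ]
conn.executemany("INSERT INTO page_results VALUES (?, ?, ?, ?)", rows)
conn.commit()

# The kind of drill-down query a plotting front-end might issue:
for row in conn.execute(
    "SELECT run_date, AVG(load_time) FROM page_results "
    "WHERE result_code = 0 GROUP BY run_date ORDER BY run_date"
):
    print(row)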