Comparing results from specific test runs
09-08-2010, 08:47 AM
Post: #1
Comparing results from specific test runs
When doing a visual comparison of multiple tests, is there a way to tell WebPageTest to use a specific run from each test? Right now, it seems to be picking a random run, which may not necessarily be the one closest to the average.
09-08-2010, 09:00 AM
Post: #2
RE: Comparing results from specific test runs
It currently picks the median run, but you can specify the exact run to use by modifying the URL. I need to get around to documenting it, but the tests to be compared are comma-separated, and for each one you can specify the run, label, and cached/first-view information.

For example:

http://www.webpagetest.org/video/compare...00905_47E1
This compares the industry benchmark AOL and Yahoo portals. They are displayed in the order they are listed (100905_47E0 is the test ID for the AOL test, for example).

You can specify the test run to use with -r:X, where X is the run number:
http://www.webpagetest.org/video/compare...5_47E1-r:3

To specify repeat view, use -c:1 (-c:0 is the default and selects first view). The options can be combined:
http://www.webpagetest.org/video/compare...E1-r:3-c:0

To specify the label, use -l:<label> (make sure to URL-encode any spaces or other special characters):
http://www.webpagetest.org/video/compare...c:0-l:Them

(The industry benchmark tests only keep one video, so changing the run number will not work in these samples, but it will work fine for any tests you run manually.)
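
Putting the pieces together, here is a rough Python sketch that assembles one of these comparison URLs from the modifiers described above. Note that the /video/compare.php?tests= base path is assumed (the example URLs in this post are truncated), and the runs and labels are only placeholders:

Code:
# Rough sketch: build a WebPageTest comparison URL from the modifiers
# described above. The /video/compare.php?tests= base path is assumed
# (the example URLs in this post are truncated); the test IDs are the
# benchmark IDs mentioned above, and the runs/labels are placeholders.
from urllib.parse import quote

def test_spec(test_id, run=None, cached=None, label=None):
    """Build one comma-separated entry: TESTID[-r:RUN][-c:0|1][-l:LABEL]."""
    spec = test_id
    if run is not None:
        spec += "-r:{}".format(run)      # -r:X picks test run X
    if cached is not None:
        spec += "-c:{}".format(cached)   # -c:0 first view (default), -c:1 repeat view
    if label is not None:
        spec += "-l:" + quote(label)     # URL-encode spaces/special characters
    return spec

specs = [
    test_spec("100905_47E0", run=3, cached=0, label="AOL"),
    test_spec("100905_47E1", run=3, cached=0, label="Yahoo"),
]
print("http://www.webpagetest.org/video/compare.php?tests=" + ",".join(specs))
# -> ...compare.php?tests=100905_47E0-r:3-c:0-l:AOL,100905_47E1-r:3-c:0-l:Yahoo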

Thanks,

-Pat
11-25-2010, 06:15 PM
Post: #3
RE: Comparing results from specific test runs
Can you compare videos of scripted test runs?
11-26-2010, 01:12 AM
Post: #4
RE: Comparing results from specific test runs
Yep, they behave exactly like normal tests. The (rather large) caveat is that you can only capture (and compare) a single step of a script, so you can't string together a multi-stage transaction and compare the full sequence.

I have an idea for treating the full sequence as a single step that would be easy to implement, so I'll see if I can get it done in the next week or so. There will be 2 seconds between individual step actions, but it will be consistent for each step (this is the time it takes to detect that network activity is done; it is usually removed from the end of a step, but I won't be able to do that if I string the steps together).
11-26-2010, 04:05 AM (This post was last modified: 11-26-2010 05:34 AM by hdtvrocks.)
Post: #5
RE: Comparing results from specific test runs
Thanks, Patrick. Personally, I'm not really interested in comparisons between sites, but in single-step comparisons between different locations.

Actually, side-by-side waterfall comparisons from different locations would be great too. Like an sdiff :D