
Poll: What would you like to see implemented next?
Compare multiple tests against each other
Zoom in on waterfall
Simpler optimization results
Custom Headers and Cookies
Commenting on tests
Add more optimization checks
Improve the documentation
Something else (comment below)
What do you want to see next?
04-07-2012, 08:20 AM
Post: #61
RE: What do you want to see next?
For a Private Instance setup: the ability to turn on Firebug, YSlow, PageSpeed, and dynaTrace on the Firefox test agent and configure them to automatically send their results to a private ShowSlow server?

We are working on automating tests through WPT, and it would be great if the WPT agents also ran the other tools and reported those results to ShowSlow ...
09-14-2012, 08:32 PM
Post: #62
RE: What do you want to see next?
Something I'd like to see is a more straightforward config system, as I've managed to screw it up at least once on every install. Perhaps something along these lines (sketched below)...
- A single config file for the agent covering both urlblast and wptdriver.
- A JSON config for the server, especially for locations.ini.
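
Purely as a sketch, a locations.json might look something like this (the key names are just guesses that mirror the fields in today's locations.ini, so treat the whole shape as hypothetical):

Code:
{
  "locations": {
    "Local_Dev": {
      "label": "Local Dev",
      "default": true,
      "browsers": [
        {"id": "Local_Dev_IE", "browser": "IE 9", "label": "Local Dev - IE 9"},
        {"id": "Local_Dev_Chrome", "browser": "Chrome", "label": "Local Dev - Chrome"}
      ]
    }
  }
}

Nesting the browsers under each location would replace the numbered-key indirection in the INI, which is the part I keep getting wrong.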

Andy

Using WebPageTest - http://usingwpt.com/
09-15-2012, 01:48 AM
Post: #63
RE: What do you want to see next?
Thanks Andy. The plan is to eliminate urlblast and have wptdriver support all 3 browsers (it was working for a while but I must have broken something, and I just need to finish getting it to feature parity). That way a single agent codebase could support all 3 browsers and the functionality would be identical.

JSON is a good idea for locations.ini. The tree structure has been a pain in the ass to explain and gets very confusing. I should be able to implement a JSON file with a fallback to the INI so it could be a seamless change.
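
The fallback could be as simple as this (a Python sketch just to show the idea; the server itself is PHP, and the file names and returned shape here are placeholders):

Code:
import configparser
import json
import os

def load_locations(settings_dir):
    # Prefer locations.json if it exists; otherwise fall back to the
    # existing locations.ini tree so current installs keep working.
    # Sketch only: file names and the returned shape are placeholders.
    json_path = os.path.join(settings_dir, "locations.json")
    if os.path.exists(json_path):
        with open(json_path) as f:
            return json.load(f)
    ini = configparser.ConfigParser()
    ini.read(os.path.join(settings_dir, "locations.ini"))
    return {section: dict(ini[section]) for section in ini.sections()}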

One thing I am considering that may help is a "test server install" page that checks the PHP version, the GD library, and filesystem permissions, dumps a tree of the locations from the config, and shows the locations where agents have connected. That should catch all of the common configuration errors.
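
For the locations part, the check could walk the tree and flag dangling references, something like this (a Python sketch of the logic only; the real page would be PHP and would also cover the PHP version, GD library, and permission checks):

Code:
import configparser

def dump_location_tree(ini_path="settings/locations.ini"):
    # Print the tree of numbered keys under [locations] and flag any
    # section that is referenced but never defined (a common config error).
    cfg = configparser.ConfigParser()
    cfg.read(ini_path)
    for key, loc in cfg["locations"].items():
        if not key.isdigit():
            continue  # skip 'default' and other non-index keys
        if loc not in cfg:
            print(loc + "  <-- missing section")
            continue
        print(loc)
        for k, child in cfg[loc].items():
            if k.isdigit():
                flag = "" if child in cfg else "  <-- missing section"
                print("    " + child + flag)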
09-16-2012, 07:03 PM
Post: #64
RE: What do you want to see next?
Thanks Pat,

Sounds good...

I'm currently writing up the process I go through to create 'all in one' instances for someone and will post it up when it's done.

I've also got some custom waterfall options that I need to finish (options to control how the URLs are presented, e.g. remove the domain but keep the URL path) and will contribute them back when I'm done with them.

Andy

Using WebPageTest - http://usingwpt.com/
03-08-2013, 08:58 AM
Post: #65
RE: What do you want to see next?
I am trying to get my customers to use WebPageTest rather than Keynote or Gomez, and the biggest pain point they have is that WPT runs one-off tests rather than scheduled monitoring, so I am going to write a cron job that runs API-based tests against my private instances (roughly the sketch below).
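
Roughly what I have in mind for the cron side (a sketch only; the instance URL and test URL are placeholders, and the parameters are the standard runtest.php ones):

Code:
#!/usr/bin/env python3
import json
import urllib.parse
import urllib.request

WPT = "http://wpt.example.com"      # placeholder private instance
URLS = ["http://www.example.com/"]  # pages to test on a schedule

def submit(url):
    # f=json makes runtest.php return the test ID as JSON
    api = "%s/runtest.php?f=json&runs=3&url=%s" % (
        WPT, urllib.parse.quote(url, safe=""))
    with urllib.request.urlopen(api) as resp:
        result = json.load(resp)
    if result["statusCode"] != 200:
        raise RuntimeError(result.get("statusText", "submit failed"))
    return result["data"]["testId"]

for u in URLS:
    print(u, "->", submit(u))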

I guess I am asking for a Keynote/Gomez alternative that actually runs WPT agents so that I can get all the goodness.
03-08-2013, 09:02 AM
Post: #66
RE: What do you want to see next?
Something like this: http://www.wptmonitor.org/ ? I believe there are also some commercial services that use WPT agents under the covers.

It was built on top of the WPT API.
03-19-2013, 03:14 AM
Post: #67
RE: What do you want to see next?
(03-08-2013 09:02 AM)pmeenan Wrote:  Something like this: http://www.wptmonitor.org/ ? I believe there are also some commercial services that use WPT agents under the covers.

It was built on top of the WPT API.

Hello Pat,

I went looking into wptmonitor to test it and see what the best practices might be for implementing those kinds of services in our own control panel, but the releases don't seem to be available anymore. The SVN seems to be unavailable.

Are there any other examples for scheduling WPT tests?
03-21-2013, 10:50 PM
Post: #68
RE: What do you want to see next?
You should install from SVN - that path is correct. The tar files from the other SVN repository are ANCIENT.

There are also code examples for using the API here: http://webpagetest.googlecode.com/svn/trunk/batchtool/ and here: http://webpagetest.googlecode.com/svn/trunk/bulktest/ but they are more for one-time tests and they aren't packaged apps.
03-26-2013, 09:37 AM
Post: #69
RE: What do you want to see next?
Great work. Some suggestions on the stats:

1. With multiple runs, the main results page states:
"Performance Results (Median Run)"
The results appear (?) to be arithmetic means, not median values. Perhaps change the title?

2. I see that there is an attempt to select one of the (10) test results that represents the "mean", but how is this determined? On what metric (or aggregation) is this run selected? Perhaps provide a doc explaining the method?

3. If "Visually Complete" is available across all runs, promote that up to this summary...averaged.

4. For the timings, it would be nice to get further statistics here, though I'm not sure how to determine these, or what type of distributions these all are. For example:
- mean, median, std. dev, variance, etc...conformant to the expected distribution. For last mile, all things being equal, distributions are relatively 'normal' due to the wide range of independent variables. The problem, though, is that 10 trials is probably insufficient to overcome the variance...erg.

5. For the 'static values' (bytes, request #, DOM elements), it might be nice to get a range.
03-27-2013, 12:22 AM
Post: #70
RE: What do you want to see next?
(03-26-2013 09:37 AM)echorink Wrote:  Great work. Some suggestions on the stats:

1. With multiple runs, the main results page states:
"Performance Results (Median Run)"
The results appear (?) to be arithmetic means, not median values. Perhaps change the title?

The results are the values from the run that had the median load time (floor() in the case of an even number of tests). It isn't the median for each value independently.

Quote:2. I see that there is an attempt to select one of the (10) test results that represents the "mean", but how is this determined? On what metric (or aggregation) is this run selected? Perhaps provide a doc explaining the method?

It picks the median run based on load time (though you can change the metric used through a query param if you want). It gets a little less "correct" in the case of an even number of runs, in which case it picks the run on the faster side of the technical median.
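
The selection logic is basically this (a Python sketch of the behavior described above, not the actual PHP code):

Code:
def median_run(runs, metric="loadTime"):
    # Sort by the chosen metric and take the middle run; for an even
    # count, (len - 1) // 2 lands on the faster of the two middle runs,
    # matching the floor() behavior described above.
    ordered = sorted(runs, key=lambda r: r[metric])
    return ordered[(len(ordered) - 1) // 2]

runs = [{"run": 1, "loadTime": 3200}, {"run": 2, "loadTime": 2900},
        {"run": 3, "loadTime": 3100}, {"run": 4, "loadTime": 3500}]
print(median_run(runs)["run"])  # -> 3 (3100 ms, the faster middle run)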

Quote:3. If "Visually Complete" is available across all runs, promote that up to this summary...averaged.

4. For the timings, it would be nice to get further statistics here, though I'm not sure how to determine these, or what type of distributions these all are. For example:
- mean, median, std. dev, variance, etc...conformant to the expected distribution. For last mile, all things being equal, distributions are relatively 'normal' due to the wide range of independent variables. The problem, though, is that 10 trials is probably insufficient to overcome the variance...erg.

If you access the results through the XML or JSON API, you get averages and standard deviations for all of the metrics.
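
For example (the test ID here is a made-up placeholder):

Code:
import json
import urllib.request

# Fetch the JSON results for a completed test
url = "http://www.webpagetest.org/jsonResult.php?test=130326_XX_ABC"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)["data"]

avg = data["average"]["firstView"]
std = data["standardDeviation"]["firstView"]
print("loadTime: %d ms avg, %d ms std dev" % (avg["loadTime"], std["loadTime"]))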

Quote:5. For the 'static values' (bytes, request #, Dom Elements), it might be nice to get a range.

Below the table you can click "plot full results", which will plot all of the metrics across all of the runs. It's not quite the same, but it gives you a quick way to check the variability.