WebPagetest Forums

Full Version: TTFB slow
Hi all, I hope someone can help me.
I am a tree surgeon and have built my website myself, so please go gently Blush
My website is hosted by Vidahost, who are very helpful. I had a slow TTFB of 1.4s, so I migrated my site to their cloud. However, my TTFB is now almost 2.5s. Does anyone have any suggestions? (It was only migrated this morning, so maybe I should leave it for a bit?)
I have attached the .csv file, is this helpful?
thanks,
martin
I guess the slow TTFB is related to WordPress generating the webpage on the fly each time someone visits your website. So the speed of your site is primarily dependent on [1] the build quality of the template you are using (or WordPress in general) and [2] the hardware (processing power to build the webpage) Vidahost is using.

[1] The template loads 23+ CSS and JS files. Some can probably be eliminated. If you have some technical skills, maybe you can edit the template and remove unused CSS and JS files.
Downloading all those files separately also has a performance impact. You can concatenate all those files into one file (I guess WP has some plugins for that), or use the Vidahost Let's Encrypt SSL support and go HTTPS + HTTP/2 (if Vidahost supports this), which handles downloading multiple files more efficiently.
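As a sketch of the concatenation idea (the file names below are stand-ins for the theme's real stylesheets, not files from this site; order matters, since later CSS rules override earlier ones):

```shell
#!/bin/sh
# Combine several stylesheets into one file to cut the request count.
# Stand-in stylesheets -- in a real theme these already exist:
printf 'body{margin:0}\n' > base.css
printf 'h1{color:green}\n' > layout.css

# Concatenate in load order, since later rules override earlier ones.
cat base.css layout.css > combined.css

# One file, one request -- link only combined.css from the page.
wc -c combined.css
```

The same idea applies to JS files; a minification step afterwards would shrink the result further.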

[2] Find yourself a WordPress caching plugin so the pages don't have to be generated every time someone visits a page. Your TTFB will be (blazing) fast Smile
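Once a caching plugin is in place, you can watch the TTFB change from the command line with curl's write-out timer. A small sketch; the file:// URL just keeps the demo self-contained, and your real page URL goes in its place:

```shell
#!/bin/sh
# Print time-to-first-byte (seconds) for a URL using curl's timing variables.
ttfb() {
    curl -s -o /dev/null -w '%{time_starttransfer}\n' "$1"
}

# Self-contained demo against a local file; in practice:
#   ttfb "https://your-site.example/"
printf 'hello\n' > demo.html
ttfb "file://$(pwd)/demo.html"
```

Run it a few times before and after enabling the cache; the first hit primes the cache, so the second and later hits are the interesting numbers.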
http://www.webpagetest.org/result/170504_77_11Z5 - asset #1 is slow to serve, at nearly 2 seconds.

And this is still very fast compared with most client sites I work on.

http://www.webpagetest.org/result/170426_8J_13D8 - shows how fast asset #1 is served on a site I host.

1) Removing files will help some + won't affect time to serve asset #1.

Since first visit is slow to serve asset #1 + subsequent visits are fast, this suggests...

Your WordPress caching may or may not be working, you'll have to test this + see.

Same with your PHP Opcache. You'll have to test to ensure it's working correctly + has enough memory to work in all cases.
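One low-tech way to test whether the cache is actually serving pages: many caching plugins leave a visible footprint in the HTML (WP Super Cache, for instance, appends an HTML comment). A sketch, with the page content and marker text as stand-ins -- check your own plugin's docs for its exact marker:

```shell
#!/bin/sh
# Return success if the saved page carries a cache-plugin footprint.
is_cached() {
    grep -qi 'Cached page generated by' "$1"
}

# Stand-in page; in practice capture the real one with:
#   curl -s https://your-site.example/ > page.html
printf '<html>...</html>\n<!-- Cached page generated by WP-Super-Cache on 2017-05-05 -->\n' > page.html

if is_cached page.html; then
    echo "served from cache"
else
    echo "generated fresh"
fi
```

If the marker never appears, the plugin is misconfigured or something (a cookie, a query string) is bypassing the cache.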

2) Your hosting provider has Keep Alive turned off, so request they fix this.

If they say no, switch hosting. You can Google why Keep Alive is essential.
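For reference, on servers where you control the Apache config yourself (shared hosts usually don't allow this, hence the advice to ask), Keep-Alive is a few standard directives in httpd.conf. A sketch using Apache's usual default values:

```apache
# httpd.conf -- persistent connections let one TCP connection
# serve many requests instead of reconnecting per asset
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
```

You can confirm it's working from the Keep-Alive grade on the WPT report card, or by fetching the same URL twice in one `curl -v` invocation and looking for curl's "Re-using existing connection" message.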

3) If your WordPress + PHP caching is correct, then next tuning will target Apache + MySQL/MariaDB.

Since your site appears to be a virtual site running with many other sites, this means your site speed + stability (able to serve fast under load) will be affected by other sites on this machine.

If your site's generating small profits, just leave it as is.

If your site's generating large profits, switch to WordPress optimized hosting.

4) Get rid of all http://myzone.96agsbqcsnyrel6wl.maxcdn-edge.com references.

CDNs tend to slow down well tuned sites.

In this case, maxcdn is serving assets very slowly.

Look at the time required to serve asset #3 off maxcdn - 500ms (so 1/2 a second).

Look at the time required to serve the same file off one of my servers... I just copied this file from maxcdn to one of my servers for a quick speed test.

http://www.webpagetest.org/result/170504_RF_12QQ shows the difference.

So maxcdn == 500ms.

My server == 236ms.

If you run this time reduction across all your assets, you can see moving to Hosting tuned for WordPress sites will make a huge difference.
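To run that comparison across assets yourself, a small sketch that times the same file from multiple URLs and sorts by serve time (the file:// URLs are offline stand-ins; substitute the real http:// asset URLs):

```shell
#!/bin/sh
# Time each URL's time-to-first-byte and list them fastest-first.
compare() {
    for url in "$@"; do
        t=$(curl -s -o /dev/null -w '%{time_starttransfer}' "$url")
        echo "$t  $url"
    done | sort -n
}

# Offline demo; in practice pass the CDN URL and the origin URL
# of the same asset and see which wins.
printf 'x' > a.css
printf 'y' > b.css
compare "file://$(pwd)/a.css" "file://$(pwd)/b.css"
```

Run it several times and from several locations; a single sample can mislead.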

And keep in mind, when you're serving static files like the .css file I chose, you're really testing Linux filesystem tuning rather than anything to do with WordPress.

If the underlying Linux filesystem tuning is slow, then all assets will tend to serve slowly.
(05-05-2017 02:26 AM)dfavor Wrote: 4) Get rid of all http://myzone.96agsbqcsnyrel6wl.maxcdn-edge.com references.

CDNs tend to slow down well tuned sites.

I'd like to nuance this statement about CDNs.

A CDN can certainly help make your site faster, especially when you have visitors from all over the world. The CDN will then serve your assets from an edge node closer to the user than your hosting.

In your example your self-hosted asset is indeed faster if you test US <-> US. treeiup's asset is hosted in the UK and tested from the US, so you can expect a longer TTFB.
If I were to test your self-hosted asset from the EU, I would also get a longer TTFB.
If you had a CDN, the assets would be cached in the EU and served more quickly to subsequent users.

But if your users are mainly located in the UK and your hosting is too, it won't make that much of a difference.
Best approach to all site tooling, including CDNs, is to understand what problem a specific technology tends to address.

If you think about CDNs + the entire "Edge Server Proposition", all an "Edge Server" can possibly do is to reduce the latency of connections.

If you're using HTTP2, then all assets multiplex over HTTP2, if the HTTP2 config is correct + Keepalive is enabled + working correctly. WPT has a special report card slot just for Keepalive, which is a good indicator Keepalive is working.

This means all you can possibly save is a few milliseconds of time for each of the 5 connections a browser makes (all major browsers currently use 5 threads/connections).

This only applies to first visit.

Subsequent visits, for correctly tooled sites, will only serve the HTML component, as all other assets should be cached from first visit (.css + .js + common images). If there are other assets, then you might have the same few milliseconds saved for each of your other 4 connections.
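Whether assets really are "cached from first visit" depends on the caching headers the server sends. A quick sketch that checks a saved response for Cache-Control or Expires headers (the header file here is a stand-in; in practice capture the real headers with `curl -sI`):

```shell
#!/bin/sh
# Return success if the saved response headers allow browser caching.
has_cache_headers() {
    grep -Eqi '^(cache-control|expires):' "$1"
}

# Stand-in headers; in practice:
#   curl -sI https://your-site.example/style.css > headers.txt
printf 'HTTP/1.1 200 OK\nCache-Control: max-age=31536000\n' > headers.txt

if has_cache_headers headers.txt; then
    echo "cacheable"
else
    echo "no cache headers"
fi
```

Static assets (.css, .js, images) generally want a long max-age; the HTML itself usually wants a short one or none.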

Considering the headaches CDNs cause (your view + visitors views tend to differ), CDNs tend to be a debugging nightmare.

Especially, when you have high traffic + the CDN slows down or glitches out... and they do... a lot of the time.

Problem is trying to debug CDN related issues, when conversions drop or zero out for no apparent reason.

Rather than using a CDN, it's better to tune your LAMP stack till your site is blazing fast.

When I take on new clients, one of the first activities I go through is removing all cruft - CDN + Proxy (NGINX, Varnish, Squid, etc.) + load balancers + DOS/DDOS hardware mitigators.

All this can be done far better at the LAMP level. Better meaning setups are stable + can be debugged by mere mortals when conversions circle the drain.
(05-11-2017 02:21 AM)dfavor Wrote: Best approach to all site tooling, including CDNs, is to understand what problem a specific technology tends to address. [...]

I understand that HTTP/2 removes a lot of RTT latency with multiplexing, but while multiple assets can be sent and received over the same connection, the download time (traveling distance) will still play a (little) role here, I think.

But you are absolutely right that using a CDN isn't solving the whole problem, and not having one is far from the biggest problem for slow sites. And for small, local sites it's probably not necessary. LAMP, good caching (headers), good coding, etc. will gain much more.

PS: I'm not trying to argue just for the sake of it, I just think it's an interesting discussion Smile