Accessibility and Usability Forum

    
Usability: How many page loading seconds are acceptable?
Is an FBIB of 2-3 seconds OK?
zoltan
msg:1583173 - 7:11 am on Mar 14, 2006 (gmt 0)

Many sites are resource intensive and generated dynamically with PHP, Perl or another scripting language plus a database. Some of them also use Apache's gzip or deflate feature to compress the content, which definitely reduces the overall page loading time.
The problem is with FBIB (first byte in browser). What is the acceptable FBIB time? Is 2-3 seconds OK?

On my site, the FBIB of my script/database/gzip-generated pages is 1 second or less in 98% of cases. But in the remaining 2%, the FBIB is 2-3 seconds or sometimes (at peak times) even more. Is this too much?
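
(If you want to put a number on FBIB rather than eyeball it, here is a rough sketch using PHP's cURL extension; the URL is just a placeholder for one of your own dynamic pages.)

<?php
// Rough FBIB check with PHP's cURL extension.
// The URL is a placeholder; point it at one of your own dynamic pages.
$ch = curl_init('http://www.example.com/some-dynamic-page.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // keep the body out of the output
curl_setopt($ch, CURLOPT_ENCODING, 'gzip');     // ask for gzip, like a real browser
curl_exec($ch);
$info = curl_getinfo($ch);
curl_close($ch);

// starttransfer_time = seconds until the first byte arrived (DNS + connect + generation)
printf("First byte: %.3f s, full page: %.3f s\n",
       $info['starttransfer_time'], $info['total_time']);
?>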

 

percentages
msg:1583174 - 7:22 am on Mar 14, 2006 (gmt 0)

>Usability: How many page loading seconds are acceptable?

My rule of thumb is that you have 7 seconds to sell a prospect that they have reached a page of interest to them. You don't have to make a sale in 7 seconds, you just have 7 seconds to keep them for a while longer.

Here is the skinny: if your initial page takes 6 seconds to load, you have 1 second of selling time. If your initial page takes 1 second or less to load, you have 6-7 seconds of selling time.

Therefore to maximize sales you should give yourself maximum selling time. That means pages that load as fast as possible.

According to Alexa, my slowest page to load is 0.88 seconds... works for me! That gives me a good 6 seconds to sell the prospect on staying a while longer!

Gosh, don't we live in a rapid world these days? Just 10 years ago I would have been very happy to have waited 20 seconds for someone to answer the phone, now all I have is seven seconds to sell them that I have what they need!

twist
msg:1583175 - 8:08 am on Mar 14, 2006 (gmt 0)

Some of them also use Apache's gzip or deflate feature to compress the content, which definitely reduces the overall page loading time.

Talking about gzip is a big can of worms, but I'll give you my 2 cents on the subject.

Let's say one of my pages is 100,000 bytes, or 800,000 bits.
Let's say the average broadband user gets 3 Mbps, or 3,000,000 bits per second.
It will take them ~0.27 seconds to download the page.
Now compress the same page down to 300,000 bits.

The same broadband user can now download the page in ~0.1 seconds, but you've added 1-2 seconds to his wait time while your server compresses the page.
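
(Back-of-the-envelope version of that arithmetic; the 1.5-second compression delay below is just a stand-in for the 1-2 seconds claimed above, not a measured number.)

<?php
// Numbers from the example above; the compression delay is a stand-in, not a measurement.
$page_bits        = 100000 * 8; // 100,000-byte page = 800,000 bits
$gzipped_bits     = 300000;     // the same page after compression
$broadband_bps    = 3000000;    // 3 Mbps connection
$compress_seconds = 1.5;        // hypothetical server-side compression delay

$plain = $page_bits / $broadband_bps;                        // ~0.27 s
$gzip  = $gzipped_bits / $broadband_bps + $compress_seconds; // ~1.6 s
printf("Uncompressed: %.2f s, gzipped: %.2f s\n", $plain, $gzip);
?>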

The idea of compression was to help the 56k users. The problem is that many of them are now using the so-called "high-speed" 56k, which just means that their ISP is compressing the pages for them. They're getting their pages compressed anyway, so now they also have to wait for your server to compress unnecessarily.

So the only people that compression actually benefits are the few left who still use standard 56k modems. It actually slows down your pages for everyone else.

If your problem is server load and page generation time, you could try focusing on the following:

If your pages are updated rarely, server-side caching can make a big difference, and make sure your databases have well-designed indexes.
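
(A minimal file-based caching sketch in PHP; the cache directory, cache key and lifetime are made up for illustration.)

<?php
// Minimal file-based page cache. Assumes a writable ./cache directory;
// the cache key and the 10-minute lifetime are made up for illustration.
$cache_file = __DIR__ . '/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
$max_age    = 600;

if (is_file($cache_file) && (time() - filemtime($cache_file)) < $max_age) {
    readfile($cache_file); // cheap: no database work at all
    exit;
}

ob_start();
// ... expensive page generation (database queries, templating) goes here ...
$html = ob_get_flush();                // send the page to the browser
file_put_contents($cache_file, $html); // and keep a copy for the next visitor
?>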

I was having a problem with a slow page recently. I had a page that needed to grab 3 columns from ~12,000 rows, which wasn't really a problem by itself. Then I decided to do a LEFT JOIN on 2 columns from another table with ~2,500 rows, and that shot my page generation time up to 5-8 seconds. I restructured both tables with new indexes, and the exact same query now takes 0.2-0.4 seconds.
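
(For anyone curious, the fix amounts to indexing the columns used for filtering and joining. The table and column names below are hypothetical, and an open MySQL connection is assumed.)

<?php
// Hypothetical table/column names; assumes an open MySQL connection.
// The point is simply to index the columns used in the WHERE clause and the join.
mysql_query("CREATE INDEX idx_articles_category ON articles (category_id)");
mysql_query("CREATE INDEX idx_tags_article ON tags (article_id)");

// The query that was slow before the indexes existed:
mysql_query("
    SELECT a.title, a.summary, t.tag
    FROM articles AS a
    LEFT JOIN tags AS t ON t.article_id = a.id
    WHERE a.category_id = 7
");
?>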

zoltan
msg:1583176 - 9:17 am on Mar 14, 2006 (gmt 0)

No problem when you are only talking about 12,000-15,000 rows; the problems arise when you have hundreds of thousands of rows.
Indexes are used wherever possible and needed.

Anyway, MySQL or db optimization is another topic. The question is: how much is too much in terms of FBIB?

lammert
msg:1583177 - 1:42 pm on Mar 14, 2006 (gmt 0)

An FBIB of 2 to 3 seconds is long. If I encounter such a website, my mouse is already hovering over the back button when the screen stays white for more than 2 seconds. A few years ago I wouldn't have had any problem waiting 15 seconds for a page to open, but I now expect a remote website to open faster than my local word processor can open a new document.

Besides the FBIB, there are some more statistics worth watching for your pages:

  • When does the first real content arrive in the browser? I am not talking about menus or ads, but about content which will keep the reader busy while the rest of the page loads. As soon as the visitor can start reading, the remaining time to load images etc. no longer feels annoying. (Getting that first chunk out early is sketched just below this list.)
  • When are all external .js files loaded? Many websites rely on JavaScript menus for internal navigation. When these menus are loaded from an external .js file, it can take a few extra seconds before the site is navigable.
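
(A minimal PHP sketch of flushing the first real content out before the slow part of the page is generated. The markup and the sleep() are placeholders; note that output buffering or compression on the server can hold everything back until the end instead.)

<?php
// Push the first real content to the browser before the slow work happens.
// Markup and sleep() are placeholders; ob_* buffering or zlib compression
// on the server would swallow the flush and send the page in one lump.
header('Content-Type: text/html; charset=utf-8');

echo '<html><head><title>Example article</title></head><body>';
echo '<div id="content"><h1>Example article</h1><p>First paragraph of real content...</p>';
flush(); // the visitor can start reading now

sleep(2); // stand-in for database queries, related links, ads, navigation...

echo '</div><div id="nav">menu goes here</div></body></html>';
?>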

pageoneresults
msg:1583178 - 4:01 pm on Mar 14, 2006 (gmt 0)

When does the first real content arrive in the browser?

Years ago when I switched over to using CSS-P (Absolute Positioning) and SOC (Source Ordered Content), the problem of page load time became pretty much non-existent. Why? Well, my content is source ordered meaning that whatever I have there right after the <body> tag is going to be the first thing the visitor sees while the rest of the page is loading. In my case, it is my entire content area that comes first. Then comes navigation, etc. I plan my pages so that if a user is surfing with CSS turned off, they get core content in their face and the rest of the fluff is below the fold (BTF).

P.S. Google AdSense is a culprit in page load times. Sometimes that external JavaScript call is delayed and, fortunately for me, I have my AdSense absolutely positioned and it comes in almost at the end of the page. So, while the reader is viewing core content, everything else around it is loading in the background.

twist
msg:1583179 - 5:47 pm on Mar 14, 2006 (gmt 0)

That's a great idea using absolute positioning, pageoneresults. I would still prefer to load my header first, because I don't use images in the headers, so it should load almost instantly, unlike some of my content pages.

As for the FBIB question,

If I were looking through a site and had to stare at a blank white screen for 1-2 seconds before each page loaded, I would leave too.

SlimKim
msg:1583180 - 12:03 pm on Mar 18, 2006 (gmt 0)

I wish someone at MSN would ask this same question about their new product at live.com.

I live in a rural area of Arkansas where dial-up is the only option, and after 3 minutes of waiting for live.com to fully load, I closed the window.

There are more people still stuck on dial-up connections than you might think, and I'd say we are good for 5 to 8 second load times with no problems.

The best advice, as was mentioned above, is to get your header or at least some minimal graphics loaded quickly; that may greatly reduce your abandonment rate.

If you can just say "loading" and show one of those page load progress bars, you will keep almost everybody unless you have an awfully long load time.

If you need me to test your load time, just PM me. If it's bearable for me, then it's likely OK for most.

Another consideration: my new computers are twice as fast (even on dial-up) as those four years old.

Hope it helps

twist
msg:1583181 - 6:52 pm on Mar 18, 2006 (gmt 0)

The best advice, as was mentioned above, is to get your header or at least some minimal graphics loaded quickly; that may greatly reduce your abandonment rate.

This is the exact reason that compression is a bad idea: the page is sent as one big lump.

pageoneresults
msg:1583182 - 7:20 pm on Mar 18, 2006 (gmt 0)

This is the exact reason that compression is a bad idea: the page is sent as one big lump.

Are you sure about that? My understanding of HTTP Compression leads me to believe otherwise.

Would you care to tell WebmasterWorld why their use of HTTP Compression is a bad idea? :)

HTTP Performance Overview
[w3.org...]

Speed Web delivery with HTTP compression
[www-128.ibm.com...]

HTTP compression is defined in RFC 2616 specification as a negotiation between the Web browser and the Web server. If a Web browser sends a request with an "Accept-Encoding: gzip" header to the server, it tells the server that it understands compressed (encoded) responses. If the server is capable of sending compressed data, it marks its response with a "Content-Encoding: gzip" header. When a Web browser receives data from a server with this header information, it decompresses the data transparently and displays the uncompressed content.

HTTP Compression saves transfer data volume and speeds up Web page load time.
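
(In PHP the whole negotiation can be left to ob_gzhandler, which checks the browser's Accept-Encoding header itself and falls back to plain output when gzip isn't accepted. A minimal sketch; note the trade-off discussed elsewhere in this thread: with an output buffer, nothing leaves the server until the whole page is generated.)

<?php
// ob_gzhandler does the Accept-Encoding negotiation for you: it compresses
// the output and sets Content-Encoding: gzip only for browsers that ask for it.
// Trade-off: the whole page is buffered before anything leaves the server.
ob_start('ob_gzhandler');

echo '<html><body>';
echo '<p>This page is compressed only for browsers that advertise gzip support.</p>';
echo '</body></html>';

ob_end_flush();
?>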

twist
msg:1583183 - 7:47 pm on Mar 18, 2006 (gmt 0)

I must have missed the boat; I see that WW is now using gzip.

[webmasterworld.com...]

Brett Tabke:
Yes, gzip is installed on the host, but we do not take advantage of it at this time.

>Google could crawl WW about 4X faster

That is one very good reason I do consider going to gzip. I totally salute the idea of using gzip - I really wish we could make it work here for users.

Some of the issues:

System overhead: the page must be generated in Perl, in memory, and then compressed by gzip before the server can begin transmitting it to the user. This can add a significant amount of time to page generation.

In the current system, the first part of the page has started to be delivered to the user before the last part of the page is generated. The BestBBS software generates pages on the fly: by the time this sentence is put in your browser, the part of the page above it has already been generated and is out the door. The last half of this page has yet to be determined. The software has no idea what the next message is, or what kind of code will need to be generated. There may be files yet to be opened, seeks to be made, and user files to be updated. All of that takes a significant amount of additional time, and all of that time would be "at the top of the page" if we used gzip.

For example (numbers are all relative guesses):

Nongzip:

Time Slices: Action
0.5 find required files.
0.5 find user files.
0.5 generate header.
0.1 send out top of page.
0.5 find post
0.5 generate start of post.
0.1 send out top of post.
0.5 find posters user files.
...

Total time slices or approximate time to generate message to browser: 4
Time to put something Visible In Browser: 1.5

With gzip

Time Slices: Action
0.5 find required files.
0.5 find user files.
0.5 generate header.
0.5 find post
0.5 generate start of post.
0.5 find posters user files.
1.0 gzip post.
x.x send out page.

Total time slices or approximate time to generate page to browser: 5
Time to put something Visible In Browser: 5

We go from 1.5 to 5 time slices before something is put in your browser. So if a page has 4 seconds of page generation time, without gzip we spread that generation out over the life of the page delivery, whereas with gzip the entire 4 seconds of generation happens before anything arrives. The moral is that, in a dynamic environment, the slow part is not page delivery but page generation.

What we are doing here is taking advantage of network latency. We can pump the code out onto the web, where it is going to run through a massive set of routers and switches (hey, you've done a tracert, right?). Apache doesn't sit there and wait for that code to show up in your browser; it sends it out on its merry way and then gets back to the rest of the code, while your ISP's network does the rest. That is twice as true if you are on an ISP that uses transparent caching, such as AOL. So while the first 10k of a page is being delivered to your browser, the next 20k is being worked on by Apache.

Add in the additional gzip overhead (it is not a massive load, but it becomes significant in such a page-view-heavy environment).

What does gzip look like on a system like this in the real world? It looks like lag in the browser: a delay before the browser does anything and starts to move. We would get complaints asking why WebmasterWorld had slowed down.

To my way of thinking, I want to make that little download indicator do something in your browser as fast as possible. I removed every last bit of overhead code that I could from BestBBS before that first byte is generated. I do everything I can to generate and start sending that first 1.5 to 4k (the general size of most network buffers) as fast as possible. It is the key to the constant stream of comments that BestBBS is the fastest forum software on the web today. We may only be in the top 300 Alexa sites, but I would put our "bytes per box" ratio up against anyone on the web today - no one gets more bang per box than we do. Other comparable forums are using networks of 8 servers to our 1.

Can you sum up what changed his mind and exactly how WW is implementing gzip?

pageoneresults
msg:1583184 - 7:49 pm on Mar 18, 2006 (gmt 0)

Can you sum up what changed his mind and exactly how WW is implementing gzip?

Bandwidth. Faster page loads. Quicker browsing through the forums. All sorts of benefits!

The rest of the question will need to be answered by Brett or one of his assigns. ;)

twist
msg:1583185 - 7:54 pm on Mar 18, 2006 (gmt 0)

I am also wondering about shared servers. Would the extra CPU consumed doing compression be more time-consuming than just sending the page out uncompressed?

What about the popular "high-speed" 56k connections where the ISP does the compression at no cost to you? Wouldn't using your own resources to compress be redundant?

What about FBIB; how did WW overcome this issue? It's hard to notice anything on broadband; to me it all pretty much seems the same on WW.

On my site, which is on a shared server, I noticed a huge improvement in FBIB after I stopped using compression. My pages loaded in sections again instead of going from white straight to the full page.

I'm not against gzip at all, but is there a new way to implement it? Because I would love to try it again.

encyclo
msg:1583186 - 7:59 pm on Mar 18, 2006 (gmt 0)

I can't answer for the technicalities of gzip here at WebmasterWorld, but I do know that one big difference was using Apache 2.x rather than Apache 1.3, as the former's compression module (mod_deflate, replacing the third-party mod_gzip used with 1.3) is far more efficient. However, there are still a large number of hosting companies out there using Apache 1.3. If you have a dedicated server, moving to Apache 2.x is highly recommended, not just for compression but for overall performance.

twist
msg:1583187 - 8:51 pm on Mar 18, 2006 (gmt 0)

there are still a large number of hosting companies out there using Apache 1.3

Then I'll wait until my host upgrades before I try gzip again. Thanks for the info.
