Following recommendations here, I was concerned that my E-Commerce website pages may be too large. They average 100K in size, which works out to roughly a 22-second download on a 56K modem.
So I added logging elements to detect when pages were abandoned before being completely loaded.
This works well: I can detect the fact that the page completed loading through the log entry, and I can measure the time the page took to load by comparing the log entry's timestamp with that of its immediate predecessor (which logged the page being accessed).
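Something along these lines is what I mean by a logging element (the /logger.gif URL and its query string here are just illustrative, not my real logging script):

<script type="text/javascript">
  // onLoad fires only after every image, frame and script on the page has finished loading
  window.onload = function () {
    // Request a tiny tracking image so the "fully loaded" event lands in the server log
    var beacon = new Image();
    beacon.src = '/logger.gif?page=' + escape(location.pathname) + '&t=' + new Date().getTime();
  };
</script>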
1. My logs seem to show that 94% of people completely load the pages, suggesting that page size is NOT a factor in my website conversion ratio.
2. The average page load time is 14.5 seconds, which implies an average link speed exceeding 56Kbps.
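For reference, the arithmetic behind that figure (assuming 100K means 100 kilobytes and ignoring HTTP and modem overhead) works out roughly like this:

// Rough check of the link speed implied by the measured load time
var pageBytes = 100 * 1024;                              // average page size (100K)
var loadSeconds = 14.5;                                  // average measured load time
var impliedKbps = (pageBytes * 8) / loadSeconds / 1000;  // roughly 56.5 Kbps

// That sits right at the nominal ceiling of a 56K modem, which is what
// makes the measurement look implausible for a dial-up audience.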
Any comments on this? Specifically - will onLoad do what I am expecting here reliably? And are there other explanations for the apparent speed implausibility?
I can imagine one way that they "think" they can do it - by placing some code at the end of the HTML that calls the function. However, I do not believe there is any way to control which objects are loaded first or last, so the script doesn't necessarily run last just because it appears last in the code.
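To illustrate the difference, here is a rough sketch (logEvent and the image path are just placeholders for whatever logging mechanism is in use):

<script type="text/javascript">
  // Placeholder logger: requests a tiny image so the event shows up in the server log
  function logEvent(name) {
    new Image().src = '/logger.gif?event=' + name + '&t=' + new Date().getTime();
  }
</script>

<img src="/images/big-product-shot.jpg" alt="product">

<script type="text/javascript">
  // Runs as soon as the parser reaches this point - the image above may still be downloading
  logEvent('end-of-html-reached');
</script>

<script type="text/javascript">
  // window.onload, by contrast, only fires once every image and object has loaded
  window.onload = function () {
    logEvent('everything-loaded');
  };
</script>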
The reason I am digging into this is that your numbers do not sound right to me. I very much doubt your average user can download at that speed.
Denmark (and Scandinavia generally) has one of the highest broadband penetration rates, and we don't get anywhere close to that (measured at the network level).
Also, even if most users do download your entire pages, do not expect spiders to do so. 100K is a lot - multiply that by 2.5 billion pages... and growing! You should not expect spiders to eat all that :)
I have so far only seen two ways to determine that - and only one of them is bulletproof.
You can compare the number of bytes the server logged as sent with the actual file sizes. If the figure in the log is smaller than the file on disk, you know they didn't get all of it - but you don't know why (did the session terminate? Did they hit stop? Or did something else happen...?)
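A rough sketch of that first approach, assuming an Apache-style access log in Common Log Format (the log path and document root below are placeholders):

// Compare bytes actually sent (per the access log) with the file size on disk
var fs = require('fs');
var path = require('path');

var docRoot = '/var/www/html';
var lines = fs.readFileSync('/var/log/apache2/access.log', 'utf8').split('\n');

lines.forEach(function (line) {
  // Common Log Format: ... "GET /page.html HTTP/1.1" 200 12345
  var m = line.match(/"GET ([^ ]+) HTTP\/[0-9.]+" 200 (\d+)/);
  if (!m) return;

  var filePath = path.join(docRoot, m[1].split('?')[0]);
  var bytesSent = parseInt(m[2], 10);

  if (fs.existsSync(filePath)) {
    var actualSize = fs.statSync(filePath).size;
    if (bytesSent < actualSize) {
      // The client received less than the full file - aborted, or something else went wrong
      console.log('Partial download: ' + m[1] + ' (' + bytesSent + ' of ' + actualSize + ' bytes)');
    }
  }
});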
Using network packet sniffing you can catch the stop/abort and also analyse the raw data packets to compare what is requested with what is returned - not what the server returns, but what is actually transmitted as data to the client.
I believe the last solution is the best (but also very pricey!)
When there are no more open GET requests, the page is fully loaded.