function Over(x,y,n) {
document.getElementById('selection').style.left = x;
document.getElementById('selection').style.top = y;
document.playerportrait.src = portrait[n].src;
document.playername.src = name[n].src;
if (n==20){ document.getElementById('gillselect').style.visibility = '';}
else { document.getElementById('gillselect').style.visibility = 'hidden';}
}
<a href="sean/sean.html" onMouseover="Over(422,232,5);"
style="position:absolute; left:422px; top:232px; width:82px; height:56px; z-index:4">
</a>
The nbsp is in very large font and is a hack to fix IE6 (it works fine without it in IE5, Opera 5+, Moz, etc)
There is a kind of grid image in the background that provides the locations and visual references for the links (it's essentially a page that just serves sets of images, so we're really not too bothered about accessibility).
The link above is one of about 20 on the page. When mouseover occurs, a small animated circle ('selection') is moved to the given location (x,y), and highlights the target area.
playerportrait and playername are then swapped to images from an array (indexed by n). The latter is a small image, and the former is medium-sized: a 485x246-pixel PNG (about 20kb), which gets resized by the browser to 844x492. It's one of the centerpieces of the page.
Although most of the function is carried out pretty instantly, it's this slightly larger portrait image that takes all the time (well over a second, even after it's loaded). There are other images behind and in front of it, and it's all a complex layout.
So is there anything I can add/delete/change in order to speed things up a little? Rollovers like this have been around for donkey's years, so is there something simple I've overlooked? I've thought about using one huge image with tiles and then moving it into place, but don't know how much it would help...
Thanks for any advice.
[Added: I think I've seen mention here somewhere that using browser resizing of graphics isn't ideal, either....]
browser resizing of graphics isn't ideal, either
I'd say that depends on the graphic. If the image is a gif/png with only vertical and horizontal elements, and no curves or diagonals, then any resize is clean and not pixelated.
About the rollover script - I don't see any preload for the images there. That would create a lag when the user hovers, since each new image requires another trip to the server. On the other hand, if there are 20 images being preloaded in another chunk of script somewhere, that would also slow things down. I would not include a large preload script in the head section - I would do that at the very end of the html document.
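For reference, a minimal preload loop of the kind described above might look like this - 'portraitSrcs' and the file names are placeholders for your real URLs, and the 'portrait' array matches the one used in Over():

```javascript
// Hypothetical source list - substitute your real image URLs.
var portraitSrcs = ['portraits/player0.png', 'portraits/player1.png'];
var portrait = [];

function preloadPortraits() {
  for (var i = 0; i < portraitSrcs.length; i++) {
    portrait[i] = new Image();          // the browser starts fetching
    portrait[i].src = portraitSrcs[i];  // as soon as .src is assigned
  }
}

// Trigger it at the very end of the document (or via onload) so it
// doesn't delay the initial render:
// window.onload = preloadPortraits;
```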
As far as images go, there is also time added after the download for decompressing a jpg image. Depending on the degree of compression and the speed of the processor, it can be noticeable - it can be 5 seconds or more after the packets are received and the image still isn't rendered.
vkaryl - yes, there is a line of CSS setting the link to font-size: 56px; and removing the underline. Each link is essentially a rectangular area, and I want the text to be invisible (it sits over a grid-like image behind it, and that's what the user points at). The nbsp isn't even needed in Opera/Mozilla, since they display the empty link, drawn at the size specified in the <a>. I think if I used extra <p>s I would have problems with overflow or whatever. And I figured that after the links are drawn, nothing else ever happens to them (they stay the same size on hover, and are not processed in any way).
As for the large images, yes, they are highly detailed in both directions (drawn human faces). They are all loaded small (via a preload loop script onLoad, which I'm pretty sure is triggered when everything else is finished?), and then stretched to double their size (i.e. almost to fill the screen). I'm quite happy with the way they look - the only problem is that even after they have all loaded (about 450k), there is a very noticeable delay (e.g. if you keep switching back and forth between two images, they both take a while).
'time added after the download for de-compressing a jpg image'. Yeah, this is the sort of thing that I'd forgotten about... Hidden overheads with image display. Maybe the processing for the resizing doesn't help, either (I guess it's quicker if it's a neat factor of two?). I suppose there's no way around this. In general, PNG is quicker to decode than GIF and JPG, right?
Would the idea work of having one large tiled image that gets moved around to display one part of it? I think that's how skinned GUI toolbar buttons work in Mozilla...
Or would it help to display them all at once in the same place, and then just change visibility/z-index?
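The visibility idea above could be sketched something like this - all 20 portraits stacked in the same absolutely-positioned spot, with only one visible at a time. The 'portrait' + n ids are hypothetical; adapt them to the actual markup:

```javascript
// Track which stacked portrait is currently visible; -1 means none yet.
var currentPortrait = -1;

function showPortrait(n) {
  if (n === currentPortrait) return;  // already showing this one
  if (currentPortrait >= 0) {
    document.getElementById('portrait' + currentPortrait).style.visibility = 'hidden';
  }
  document.getElementById('portrait' + n).style.visibility = 'visible';
  currentPortrait = n;
}
```

Since every image is already rendered (just hidden), switching should avoid the decode/resize work on each hover - at the cost of rendering them all up front.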
one large tiled image that gets moved around
How large would that large image be, filesize-wise? It's obvious there is little concern for users who have not got Javascript enabled, so this might actually be an option for you.
If the aggregate size of the individual images is well below that of the combined 'large' image - say under 20%, off the top of my head - then the extra download time of the large image may not be worth the benefit. However, if the filesizes are close, then one image, loaded and clipped, is definitely faster re: displaying a portion at the proper size. There's no resize rendering overhead, at least.
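A minimal sketch of the clipping approach: one tall strip image holds all the portraits stacked vertically inside a container with overflow:hidden, and you slide the strip to reveal the right one. The 'portraitStrip' id and ROW_HEIGHT value are assumptions about the layout:

```javascript
// Height of one portrait at display size - an assumed value.
var ROW_HEIGHT = 492;

function stripOffset(n) {
  return -(n * ROW_HEIGHT);  // negative top shifts the strip upwards
}

function showFromStrip(n) {
  // The container clips everything outside one ROW_HEIGHT window,
  // so moving the strip swaps the visible portrait.
  document.getElementById('portraitStrip').style.top = stripOffset(n) + 'px';
}
```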
I suppose the large image would be somewhere around 400kb, maybe a little less (I haven't made it, I assume it's linear with dimension? They're quite low color depth, so maybe a lot less). There are 20 images, and most of them are about 20kb, so I suppose some of the overheads would be reduced with only one file. However, the large tiled image would still need to be resized, unless it was saved at double-resolution. I guess then you have a tradeoff between download time and render time...
PS: I just checked with Linux Mozilla, and it's slightly different to Win/Moz: the image is not displayed all at once- As they change, they appear in two or three parts from top to bottom. Nothing too drastic, but something I could do without...
I just checked with Linux Mozilla, and it's slightly different to Win/Moz: the image is not displayed all at once- As they change, they appear in two or three parts from top to bottom.
Are these jpg files? It sounds like a "progressive" jpg - some browsers (including IE) display them all at once, instead of progressively as was the original intent of the file format. I haven't been paying attention to this issue lately -- since I know IE has it wrong, I started saving all my jpg files as standard format so that something renders early on for dial-up users.
However, I assumed that Moz/Win had it right. From what you say, it sounds like only Moz/Linux does -- even though you don't like it
function Over(x,y,n) {
document.getElementById('selection').style.left = x;
document.getElementById('selection').style.top = y;
document.playerportrait.src = portrait[n].src;
document.playername.src = name[n].src;
if (n==20){ document.getElementById('gillselect').style.visibility = '';}
else { document.getElementById('gillselect').style.visibility = 'hidden';}
}
// global vars - note: these lookups must run after the elements exist
// (e.g. from a script at the end of the body, or in an onload handler)
var el_sel = document.getElementById('selection');
var el_gillsel = document.getElementById('gillselect');
function Over(x,y,n) {
el_sel.style.left = x;
el_sel.style.top = y;
document.playerportrait.src = portrait[n].src;
document.playername.src = name[n].src;
if (n==20){ el_gillsel.style.visibility = '';}
else { el_gillsel.style.visibility = 'hidden';}
}
Whether this is a useful speed gain remains to be seen, but that change does also make the code slightly easier to maintain and read. (Although global variables are frowned upon by some people.)
You can make similar optimisations throughout your code, replacing any instance of getElementById with a reference to a global object that is computed once at page load. (This obviously doesn't apply if you are passing a variable into getElementById: if the id is dynamic, you quite obviously can't keep a wide-scoped reference to it so easily. Did I need to say that?)
Furthermore, you should also gain some performance (again, it may be trivial) by passing fewer parameters into functions. Each parameter passed costs you one push to the stack before entering the function, and one pop from the stack once inside it; an index into an array will likely cost less. i.e. you could do something like:
// global vars
var el_sel = document.getElementById('selection');
var el_gillsel = document.getElementById('gillselect');
var x_coords = [10,20,30,40,50,60,70,80];
var y_coords = [10,20,30,40,50,60,70,80];
function Over(n) {
el_sel.style.left = x_coords[n];
el_sel.style.top = y_coords[n];
document.playerportrait.src = portrait[n].src;
document.playername.src = name[n].src;
if (n==20){ el_gillsel.style.visibility = '';}
else { el_gillsel.style.visibility = 'hidden';}
}
Now, obviously, whilst both of these optimisations may appear sound in theory, I've not made any tests of them. To ensure that any optimisations you make actually make the code faster (and not slower due to some strange interpreter quirk), you should always perform some real-world benchmarks to confirm your changes are for the better.
The hassles of properly benchmarking JavaScript are beyond the scope of this post (luckily for me).
btw, regarding the rest of the thread, I think using a single image is probably the way to go.
Hope some of this helps?
[In case you can't guess, I'm a programmer / software engineer! fwiw, I learnt to optimise code when programming games (10+ years experience), and since then I spent 3yrs at a company who produced a server-side scripting language (which taught me about parsing, execution of interpreted code, VMs and all that kind of stuff). Probably not everyone's cup of tea.]
Actually, you can probably go one step further with the globally cached object references, by doing something like this instead:
var elSelStyle = document.getElementById('selection').style;
...
elSelStyle.left = x;
elSelStyle.top = y;
...
One would probably need to do some benchmarks to see the difference -- and it might not be very much of a gain.
I've found it can speed up the final display, when pre-loading images into the cache, if I use the final dimensions in the preload script, e.g. image01 = new Image(844,492).
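Wrapped up as a small helper, that tip might look like this - the dimensions are the final on-screen size from earlier in the thread, and the file name is a placeholder:

```javascript
// Preload an image at its final display size, so the browser has the
// dimensions up front; 844x492 is the assumed on-screen size.
function preloadAtFinalSize(src) {
  var img = new Image(844, 492);  // (width, height) constructor arguments
  img.src = src;
  return img;
}

// e.g. preloadAtFinalSize('portraits/sean.png');
```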