|Inside the Coming Chrome Browser - Accelerating the Graphics Processor|
Google's Matt Cutts tweeted today about this design doc. It lays out how Chrome is going to accelerate on-screen rendering using graphics processing units (GPUs) rather than just depending on the computer's CPU.
It's a fascinating look into a world I barely know, browser code, and it comes complete with nicely colored flow charts.
|Traditionally, web browsers relied entirely on the CPU to render web page content. With capable GPUs becoming an integral part of even the smallest of devices and with rich media such as video and 3D graphics playing an increasingly important role to the web experience, attention has turned on finding ways to make more effective utilization of the underlying hardware to achieve better performance and power savings. |
Google is not the first; Microsoft announced the same a few weeks ago, as discussed here [webmasterworld.com...]
Hardware acceleration is the next logical step to speed up browser rendering. For graphics-specific workloads, GPUs have tens to thousands of times more processing power than ordinary CPUs. The main problem is that the instruction set of these graphics processors is limited, and there are hundreds of different cards, each with its own processor count, capabilities, and so on.
Thanks lammert - I'm definitely a fish out of water in this territory. Is the area of graphics processors and instructions a potential area for future standardization?
Hardware acceleration has been in use for a number of years in software such as CAD programs, and it is also supported by most video playback software. But those programs are often limited to a specific set of graphics cards on which they will work. With a universal free program like a browser, Google can't dictate which hardware people put in their computers. Instead they have to rely on hardware detection routines to decide which activities can safely run on the graphics processor and which computations should be performed by the CPU. Every graphics card developer uses a proprietary GPU design, and designs and capabilities change with almost every new graphics card brought to market. We are not yet at the point where full standardization is taking place.
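The detection logic described above can be sketched roughly like this - a minimal, hypothetical example of falling back to the CPU for known-bad hardware combinations. The interface, blocklist entries, and function names are all illustrative, not Chrome's actual implementation:

```typescript
// Hypothetical sketch: decide whether GPU acceleration is safe to enable,
// based on a vendor/driver blocklist. Names and entries are made up.
interface GpuInfo {
  vendor: string;
  driverVersion: string;
}

// Illustrative blocklist: drivers at or below maxBadDriver are known to crash.
const BLOCKLIST = [
  { vendor: "ExampleVendor", maxBadDriver: "6.14" }, // hypothetical entry
];

function canAccelerate(gpu: GpuInfo): boolean {
  // Fall back to CPU rendering for any blocklisted vendor/driver combination.
  return !BLOCKLIST.some(
    (entry) =>
      entry.vendor === gpu.vendor && gpu.driverVersion <= entry.maxBadDriver
  );
}
```

In practice a real browser would key such a list on vendor ID, device ID, driver version, and OS, but the shape of the decision - "run it on the GPU only when this combination is known to be safe" - is the same.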
Thankfully the two large GPU manufacturers, Nvidia and ATI, have released software development kits (CUDA and ATI Stream) for their graphics systems, for everyone who wants to develop GPU-enabled applications. This is a sign that some stabilization is taking place on the GPU development front and that we may see more standardization in this area in the near future. It could also be the reason that Microsoft and Google are only now jumping on the hardware acceleration bandwagon: it may have been too risky in the past due to the many different hardware designs, often with only partly documented features.
From what I can tell, Microsoft was the first to start working with 3D acceleration, and I suspect they were already a few months into working on IE9 before IE8 went RTM. I know the other browser vendors did not delay in trying to add support for 3D acceleration either.
I don't know much about how software is written, though I know there are a lot of subsystems in browsers that can affect all kinds of performance. For example, while Opera 10.6 is the fastest at the SunSpider benchmark, it takes a backseat to Firefox 0.7~3.6 (I am not joking about those version numbers) on my (currently private) benchmark, which tests what I've been told is a wide array of subsystems. I've seen the first build of Opera 10.5 take Opera from a total loss to running half of the benchmark, with varying degrees of increased performance through to the latest 10.7 builds. Firefox 4 goofed something up and the whole benchmark fails. I once took a look at the source code for Firefox...it's simply overwhelming. While IE9 is hardware accelerated, it still utterly fails my benchmark - though then again, so do Chrome and Safari.
How much a browser can take advantage of a GPU will depend on many things. One example from gaming is the GPU's frame buffer: the more GDDR memory, the greater the radius around the player within which various levels of detail can be displayed. As an out-of-thin-air example, let's say 512 MB could render high detail up to 150 feet from where your character is standing and 1,024 MB up to 240 feet. Somewhere the software has been programmed to decide when objects should be rendered on the GPU, and browsers will have their own share of determining the capabilities of the resources available - if they actually end up implementing such detection, and in a lot of ways browsers do not.
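To make the out-of-thin-air numbers above concrete, here is a small sketch that interpolates a detail radius from the two example points (512 MB -> 150 ft, 1,024 MB -> 240 ft). The linear scaling is purely illustrative, not a real engine's formula:

```typescript
// Hypothetical sketch: map frame-buffer size (MB) to a high-detail radius
// (feet) by linear interpolation between the two example points from the post.
function detailRadiusFeet(frameBufferMB: number): number {
  const m0 = 512,  r0 = 150; // first example point: 512 MB -> 150 ft
  const m1 = 1024, r1 = 240; // second example point: 1,024 MB -> 240 ft
  return r0 + ((frameBufferMB - m0) * (r1 - r0)) / (m1 - m0);
}
```

A game (or, hypothetically, a browser) would query the available video memory once and use a mapping like this to pick how much work to push to the GPU.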
At one point CSS2's positioning properties were cutting edge, and at some point 3D acceleration will be commonplace. It's hardly the end-all; it will come down to browser software having numerous performance optimizations across an exhaustive list of benchmarkable tests. Large steps are still ultimately steps. ;)
sounds like goog is following, not leading.
IE was the first browser to come with hardware acceleration. You can download the IE9 platform preview and test the results at [ie.microsoft.com...]
Mozilla Firefox has also released a hardware-accelerated browser.
Google is no. 3 now..
Sorry, this is just spin. Browsers don't have direct access to hardware, which means they have to go through the OS - and rightly so. In order to access the hardware directly, they would have to install their own device drivers, which might then stop games working properly, cause crashes, etc.
What they can do is make better use of acceleration features that the OS provides. However, this shouldn't really be necessary, since the OS itself should do this by redirecting calls to old, slow functions to newer, faster ones - but, surprise, surprise, Windows is weak in this area. For example...
Suppose I want to draw some smooth text - I just call the old functions and it magically comes out smooth, but suppose I want to draw a smooth circle - for this I have to call new functions because calling old ones will result in a jagged circle.
So, what they really mean is that they will use more new drawing functions and fewer old drawing functions but if they actually said that people would ask "why are you using old functions when newer ones are available?" SPIN, SPIN, SPIN.
@kaled The OS does not interpret the needs of the programmer; the OS interprets the commands of the programmer. If the programmer doesn't take advantage of the logic provided, it is neither the fault nor the purpose of the OS to make an inference on their behalf.
That's a reasonable point of view, however, my main point was that talk of using hardware acceleration is just spin, and I stand by that for the reasons given.
In the example I gave, it seems unlikely that using sub-pixel anti-aliasing to draw circles, lines, etc. would cause any more problems than using sub-pixel anti-aliasing for text. Incidentally, using it for text created many a display bug, because redrawing text without clearing the background first causes the text to become increasingly bold.