It's the kind of signal that can disambiguate a page. The more clearly a page is disambiguated semantically, the more easily it can be assigned a proper place in the taxonomy of document types, and then matched appropriately to the right taxonomy of query types.
...from somewhere which is not inside their heads. What is that somewhere?
A huge pile of data, computed automatically on a regular (but not daily) basis from actual language use, and distributed across Google's million-plus servers. The phrase-based indexing patents go into a lot of detail about how the related computations work.
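To make the idea concrete, here's a toy sketch of the kind of co-occurrence statistic the patents describe: two phrases are "related" when they appear together far more often than chance would predict. The corpus, phrases, and the exact formula here are all my own simplifications, not Google's actual implementation.

```python
from collections import Counter
from itertools import combinations

# Hypothetical five-document "corpus"; the real system works on the whole web.
docs = [
    {"rolls royce", "luxury car", "engine"},
    {"rolls royce", "luxury car", "chauffeur"},
    {"bread rolls", "bakery", "oven"},
    {"bread rolls", "bakery", "flour"},
    {"rolls royce", "engine", "chauffeur"},
]

n_docs = len(docs)
phrase_freq = Counter(p for d in docs for p in d)
pair_freq = Counter(pair for d in docs for pair in combinations(sorted(d), 2))

def information_gain(a, b):
    """Ratio of actual to expected co-occurrence rate; > 1 suggests the
    phrases are related, 0 means they never appear together."""
    expected = (phrase_freq[a] / n_docs) * (phrase_freq[b] / n_docs)
    actual = pair_freq[tuple(sorted((a, b)))] / n_docs
    return actual / expected if expected else 0.0

# "rolls royce" co-occurs with "luxury car" more than chance predicts...
print(information_gain("rolls royce", "luxury car"))
# ...but never with "bakery", so the two senses of "rolls" separate cleanly.
print(information_gain("rolls royce", "bakery"))
```

Run over the whole web, statistics like these let the engine tell a page about a Rolls (the car) from a page about bread rolls without anyone hand-labeling either one.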
This is one area where Google has really taken some giant steps. Earlier in the process I saw some rather funny growing pains, like when a page about a "Rolls" (the car) ranked well for a query about "bread".
There have only been a handful of SEOs following this development... a small handful. And as I've said before, it's not the kind of thing you can manipulate directly, nor should you try. But it can really free up your content to be more human and more directly serve your market.
Forget about the "is it as strong as a PR 5 backlink?" thinking. That's too linear to give you an appropriate mental model. I've seen one situation where an SEO tried to get every related phrase he could find onto his page - and the rankings took a nosedive. The patents explain why... the number of related phrases on the page went too far beyond the statistical norm. And unless you're analyzing all the text on the web the way Google does, you'll never be able to compute that norm.
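That "beyond the statistical norm" check can be sketched in a few lines. Everything here is hypothetical - the counts, the cutoff, and the z-score approach are illustrations of the principle, not the patents' actual math - and the key point is that the baseline numbers come from a corpus only Google has.

```python
import statistics

# Hypothetical related-phrase counts seen on ordinary pages about some topic.
# Google derives the real distribution from the full web; outsiders can't.
typical_counts = [4, 6, 5, 7, 5, 6, 4, 8, 5, 6]

mean = statistics.mean(typical_counts)
std = statistics.stdev(typical_counts)

def looks_stuffed(count, z_cutoff=3.0):
    """Flag a page whose related-phrase count sits far above the norm."""
    return (count - mean) / std > z_cutoff

print(looks_stuffed(6))    # a normal page: within the expected range
print(looks_stuffed(40))   # every related phrase crammed in: flagged
```

Which is exactly why writing naturally for your market works and phrase-stuffing backfires: natural writing lands inside the distribution by default.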