So, I'm [slowly] trying to wrap my head around the concept of PubSubHubbub, and the only reason I looked at a protocol with a name like this is because tedster recommended it :)
But perhaps both MC (at 2011 Pubcon) and tedster had something other than the default implementation in mind when they suggested it might help establish your site as the content author, as opposed to your scrapers (especially those scrapers with some authority), if you immediately "fat ping" Google, i.e. push the entire content of your new post through a hub.
Perhaps I am not getting some important detail here, but it looks like the most popular WP plugin for "fat pings" (not "fat pigs", silly spell checker!) pushes your content to two "default" hubs: the demo hub on Google App Engine and SuperFeedr.
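For what it's worth, the "ping" these plugins send is nothing exotic; per the PubSubHubbub 0.3 spec it's just a form-encoded POST telling the hub a topic (feed) has new content. A minimal sketch, with placeholder URLs (substitute your own feed and whichever hub you actually use):

```python
import urllib.parse
import urllib.request

# Placeholder URLs for illustration only.
HUB_URL = "https://pubsubhubbub.appspot.com/"  # the Google App Engine demo hub
TOPIC_URL = "https://example.com/feed/"        # your site's feed

def build_publish_ping(hub_url, topic_url):
    """Build the form-encoded POST that notifies a PuSH hub of new content."""
    body = urllib.parse.urlencode({
        "hub.mode": "publish",
        "hub.url": topic_url,
    }).encode("utf-8")
    return urllib.request.Request(hub_url, data=body, method="POST")

req = build_publish_ping(HUB_URL, TOPIC_URL)
# Actually sending it (urllib.request.urlopen(req)) should return
# 204 No Content on success, per the spec.
```

Note the ping itself is "thin": it only names the feed URL. The "fat" part happens when the hub fetches the updated feed and pushes the full entries out to subscribers.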
Both hubs (as well as any other PuSH hub, for that matter) are completely open, and anyone can subscribe to your pings just like you hope Google will. In other words, whether Google will actually subscribe to read your "fat pings" through the open hub is an open question, but you can be sure the scrapers would LOVE to get your full article content the second it gets published. All the more so considering that you normally publish only excerpts in the RSS feed, which is what they used to scrape before.
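To see why "completely open" matters: subscribing is just as trivial as publishing. A hypothetical scraper only needs a callback URL of its own and one POST to the hub, sketched here with placeholder URLs (the `hub.verify` parameter is from the 0.3 spec):

```python
import urllib.parse
import urllib.request

def build_subscribe_request(hub_url, topic_url, callback_url):
    """Build a PuSH 'subscribe' request. After verifying the callback,
    the hub starts POSTing full-content ('fat') pings to it."""
    body = urllib.parse.urlencode({
        "hub.mode": "subscribe",
        "hub.topic": topic_url,        # the feed being subscribed to
        "hub.callback": callback_url,  # where the hub delivers new entries
        "hub.verify": "async",         # verification mode per spec 0.3
    }).encode("utf-8")
    return urllib.request.Request(hub_url, data=body, method="POST")

# Google or a scraper alike; an open hub can't tell friendly intent apart:
req = build_subscribe_request(
    "https://pubsubhubbub.appspot.com/",
    "https://example.com/feed/",
    "https://scraper.example.net/callback",
)
```

The hub does verify that the callback URL agrees to the subscription, but nothing in the protocol restricts *who* may subscribe to a public topic.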
So, I think I'm missing an important bit of info here: how do you make sure that Google gets the fat ones and the scrapers (pardon, aggregators) don't?
Can anyone more experienced with fat pigs chime in?