This discussion seems related to TrustRank, which improves on PageRank by starting from a seed set of manually vetted, trusted sites and propagating trust outward from them; a high score indicates a page is likely spam-free. So earning links from sites in high-trust buckets can help pull your own site into those trusted buckets.
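The propagation idea can be sketched as a seed-biased PageRank: instead of teleporting uniformly to all pages, the random jump goes only to the trusted seeds, so trust flows outward along links. This is a minimal illustration, not the production algorithm; the toy graph, seed set, damping factor, and iteration count are all assumptions.

```python
def trustrank(graph, seeds, damping=0.85, iterations=50):
    """Propagate trust from a seed set of vetted pages.

    graph: dict mapping page -> list of pages it links to
    seeds: set of manually vetted, trusted pages
    """
    pages = list(graph)
    # Teleport vector: random jumps land only on trusted seeds,
    # not uniformly on all pages (the key difference from PageRank).
    teleport = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(teleport)
    for _ in range(iterations):
        new = {p: (1 - damping) * teleport[p] for p in pages}
        for p in pages:
            links = graph[p]
            if not links:
                continue
            share = damping * trust[p] / len(links)
            for q in links:
                new[q] = new.get(q, 0.0) + share
        trust = new
    return trust

# Toy graph: seed "a" cites "b"; isolated "d" links only to itself.
g = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["d"]}
scores = trustrank(g, seeds={"a"})
# "b", reachable from a trusted seed, ends up with positive trust,
# while "d", which no trusted page cites, stays at zero.
```

Note how "d" can never gain trust no matter how many links it manufactures among untrusted pages, which is the spam-resistance argument for seeding.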
But there is also a concept called Topical TrustRank, in which the seed set is segregated into topical buckets and trust is calculated per bucket. This reportedly yields a substantial improvement in weeding out spam sites. What it implies for us is that obtaining inbound links from topically related sites that sit in high-trust sets is important.
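One way to picture the topical variant: run the same seed-biased propagation once per topical seed bucket, so every page ends up with a per-topic trust vector rather than one global score. This is a hedged sketch under that assumption; the graph, the bucket names, and the helper functions are illustrative, not the published algorithm.

```python
def biased_rank(graph, seeds, damping=0.85, iterations=50):
    """Seed-biased PageRank: teleport only to the given seed pages."""
    pages = list(graph)
    teleport = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(teleport)
    for _ in range(iterations):
        new = {p: (1 - damping) * teleport[p] for p in pages}
        for p, links in graph.items():
            if links:
                share = damping * trust[p] / len(links)
                for q in links:
                    new[q] += share
        trust = new
    return trust

def topical_trustrank(graph, topic_seeds):
    """topic_seeds: dict topic -> set of seed pages for that topic.

    Returns one trust score per (topic, page) pair.
    """
    return {topic: biased_rank(graph, seeds)
            for topic, seeds in topic_seeds.items()}

# Toy graph: "health-hub" is a health seed, "fin-hub" a finance seed.
g = {
    "health-hub": ["clinic"], "clinic": ["health-hub"],
    "fin-hub": ["bank"], "bank": ["fin-hub"],
}
topic_scores = topical_trustrank(
    g, {"health": {"health-hub"}, "finance": {"fin-hub"}})
# "clinic" accrues trust only in the health bucket, none in finance.
```

This is why a topically related link matters: a link from a trusted health site raises your health-bucket trust but does nothing for an unrelated bucket.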
Then there are algorithms that model statistically typical link patterns, meant to identify pages with abnormal sets of inbound links. So "quality" takes on the added nuance of natural, freely given citations: the goal becomes either to mimic the natural pattern of inbound links or to actually earn a set of natural inbound links.
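A crude version of that statistical check might look like this: pick one feature of a page's inbound-link profile (here, the share of links coming from its single biggest referring domain, purely an assumed feature) and flag pages whose value is an outlier against the population. The feature choice and z-score threshold are illustrative assumptions, not a known production algorithm.

```python
from statistics import mean, stdev

def top_domain_share(inlinks):
    """Fraction of a page's inbound links that come from its
    single biggest referring domain."""
    counts = {}
    for domain in inlinks:
        counts[domain] = counts.get(domain, 0) + 1
    return max(counts.values()) / len(inlinks)

def flag_abnormal(pages, threshold=2.0):
    """pages: dict page -> list of referring domains.

    Flags pages whose top-domain share is more than `threshold`
    standard deviations above the population mean.
    """
    shares = {p: top_domain_share(links) for p, links in pages.items() if links}
    mu, sigma = mean(shares.values()), stdev(shares.values())
    return {p for p, s in shares.items() if sigma and (s - mu) / sigma > threshold}

# Ten pages with links spread across five domains (share 0.2 each),
# and one page whose links all come from a single link farm.
pages = {f"page{i}": ["d1.com", "d2.com", "d3.com", "d4.com", "d5.com"]
         for i in range(10)}
pages["spam"] = ["linkfarm.com"] * 5
flagged = flag_abnormal(pages)
# → {"spam"}: its concentrated inbound links stand out statistically.
```

The point for link building follows directly: a set of genuinely earned links tends to land in the typical region of these distributions without any effort to mimic it.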
That kind of statistical analysis might also take in social signals. In that case "quality" would additionally mean a statistically typical volume of social/sharing signals. But won't those signals need to be filtered for spam too?