While it can be fun to think about how a SE might handle 'super spammy' backlinks beyond their stated behaviour of 'ignoring' them, i.e. dampening or devaluing some or all of their values (a link is almost certainly not a singular input), I think it important to realise that often such links are NOT ignored but treated as legitimate. How much of this is simply statistics, false positives and false negatives, only the SE can answer.
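To make the 'not a singular input' point concrete, here is a purely speculative sketch; the signal names and weights are my own invention, not anything a SE has disclosed:

    # Hypothetical: a link as a bundle of signals, each of which an SE
    # might dampen or zero independently rather than 'ignore' wholesale.
    link_signals = {
        "pagerank_pass": 0.8,    # invented names/values, illustration only
        "anchor_relevance": 0.6,
        "trust": 0.5,
    }

    # A spam verdict might zero the PR pass-through but only dampen the rest.
    dampening = {"pagerank_pass": 0.0, "anchor_relevance": 0.3, "trust": 0.5}

    devalued = {k: v * dampening[k] for k, v in link_signals.items()}
    print(devalued)  # {'pagerank_pass': 0.0, 'anchor_relevance': 0.18, 'trust': 0.25}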
And while I agree that link sabotage is not 'common', neither is it unknown. Several years back an SEO company targeted me on behalf of a client. The effect was not site wide, only affecting specific pages, and many webdevs without similar analytics experience, data history, and methods may well have misattributed the cause. Further, I was uncommonly fortunate in being able to bring sufficient pressure to bear to have the SE take a manual look and make appropriate changes.
That said, the biggest problem that a SE has in regard to links is attribution. By that I mean that, theoretically, a site links out to another as a benefit to its visitors and itself, the link as thoughtful, truthful testimonial being, in essence, the very foundation of PageRank. While that still exists, the hype around PR and the value of links, aka testimonial, aka popularity, means that links as a value signal are widely gamed. Unfortunately, it can be difficult to discern who is doing the gaming and with what motives.
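For anyone who has not seen it, the original PageRank recurrence makes the 'link as testimonial' idea concrete: a page's score is the sum of the scores of the pages linking to it, each divided by that linker's outbound link count. A toy version, greatly simplified and ignoring every signal layered on since:

    # Toy PageRank by power iteration; 'graph' maps a page to the pages it links to.
    def pagerank(graph, d=0.85, iters=50):
        n = len(graph)
        pr = {p: 1.0 / n for p in graph}
        for _ in range(iters):
            new = {p: (1 - d) / n for p in graph}
            for p, outlinks in graph.items():
                for q in outlinks:
                    new[q] += d * pr[p] / len(outlinks)
            pr = new
        return pr

    # Three pages; C is 'testified to' by both A and B, so it scores highest.
    print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))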
That crap with crap backlinks floats, even clings, to the top of query results is the other side of the coin from my anxious experience: the SE is unable to differentiate senseless generated/mashed content from the substantive, or garbage backlinks from the legitimate. Statistics is a hard master: 99% success still means 10 million exceptions in a billion pages. Algorithms, even backed by machine learning, have no intelligence, no insight, and often miss even the truly egregious.
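The arithmetic is worth spelling out, because the absolute numbers stay ugly even as the percentages look impressive:

    # Exceptions left over at various classifier accuracies, over a billion pages.
    pages = 1_000_000_000
    for accuracy in (0.99, 0.999, 0.9999):
        misses = round(pages * (1 - accuracy))
        print(f"{accuracy:.2%} accurate -> {misses:,} misjudged pages")
    # 99.00% accurate -> 10,000,000 misjudged pages
    # 99.90% accurate -> 1,000,000 misjudged pages
    # 99.99% accurate -> 100,000 misjudged pages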
Thus, even with the best of intentions, if a spammer points 100,000 links at a site, at 99% accuracy 1,000 will be accepted as legitimate. Of course, if they were all from the same referrer, or all from footers, or all from other-language sites, or bore some other recognised flag, I'd expect the algo to toss them all. That it apparently cannot do such recognition on its own, without human categorisation input, is surprising at this point (although it may only be a requirement for human checking and authorisation; if so, it is some slow).
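The kind of recognition I mean is not exotic; even a naive pass over a link dump can cluster on a shared flag and toss whole clusters. A hypothetical sketch, field names invented:

    from collections import defaultdict

    # Hypothetical: group inbound links by a shared trait (referrer domain,
    # placement, page language) and discard any suspiciously uniform cluster.
    def toss_uniform_clusters(links, key, threshold=1000):
        clusters = defaultdict(list)
        for link in links:
            clusters[link[key]].append(link)
        # Anything past the threshold sharing one trait gets tossed wholesale.
        return [l for group in clusters.values() if len(group) < threshold for l in group]

    links = [{"referrer": "spamfarm.example", "placement": "footer"}] * 5000
    links += [{"referrer": "blog.example", "placement": "body"}]
    print(len(toss_uniform_clusters(links, key="referrer")))  # 1 survivor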
So, while I think, as I said up front, it is fun to think about how spam links might be handled, the real problem is not what is done upon identification but how, and to what degree, the identification is correctly made. After that, a simple ignore removes the need for considering attribution. Of course, if attribution can be determined, then future proactive behaviour may be possible... not that I'd like that to be determined by algo...
Note: the advent of the webdev-generated disavow list was (1) a sign of how serious their failure in identification was, and (2) a sign of their need for outside 'clean' training data. The subsequent admonishments against broad 'scatter gun' disavowals showed that the uploaded data was not actually 'clean' enough for training. Rather amusing.
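For anyone who never had to file one, the disavow upload itself was (and is) just a plain text list, one URL or domain per line, which is exactly why 'scatter gun' submissions were so easy to produce:

    # Comment lines start with a hash.
    # Disavow a single spammy page:
    http://spam.example.com/stuff/comments.html
    # Or write off an entire domain:
    domain:spamfarm.example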