|Self-Publishing - Post-Pinterest Internet|
| 2:40 pm on Oct 11, 2013 (gmt 0)|
Given the following unfortunate developments on the web:
- Pinterest creates pages aggregating all the images pinned from your website, making any graphics-based website obsolete.
- Following Pinterest's ground-breaking, copyright-annihilating practices, search engines now hotlink your images at full size, bypassing your website entirely.
- Monetary incentives to create graphics and graphic-based websites are shrinking every day.
The following measures should be routinely taken.
- Think twice before embarking on an AdSense-model, graphics-heavy website or a webcomic. Consider alternatives such as print sales and PDFs.
- Keep all your "money" images in a separate folder from which you exclude search engines entirely. You may adopt a strategy of allowing search engines a minuscule fraction of your images to draw the occasional visitor.
- Treat worthless navigation graphics and logos as before.
- Create headers that can be easily modified across the whole site (use a PHP include or similar), because you'll constantly have to add new proprietary meta tags to exclude pinners and other crowdscraping volunteer workforces.
- Make sure the vast majority of your content is textual. Note that Pinterest has plans to grab the text along with the graphic in the future; eHow has been doing this for over a year. It's only a matter of time before text is no longer sacred, either.
- I am currently working on a recipe website. In anticipation of greater breaches of copyright in the future, the long-form content will be downloadable as PDF, with the page being a teaser for the PDF. Even if PDFs are easily downloadable, they don't get crowdscraped as easily. I'm planning for the future, and I see a lot of PDFs in it.
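The image-exclusion and shared-header measures above might look like the following sketch. The folder name /images-premium/ and the file name header.php are placeholder assumptions; the "nopin" meta tag is Pinterest's documented opt-out.

```
# robots.txt -- bars all crawlers from the hypothetical "money" image folder
User-agent: *
Disallow: /images-premium/
```

A shared header file then carries the opt-out tags:

```php
<?php // header.php -- edit once, changes take effect everywhere it's included ?>
<meta name="pinterest" content="nopin" />
```

Each page starts with <?php include 'header.php'; ?>, so adding the next proprietary opt-out tag is a one-file change rather than a site-wide edit.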
| 3:14 pm on Oct 11, 2013 (gmt 0)|
|Make sure the vast majority of your content is textual. |
Text hasn't been safe from scraping since the beginning of scrapers. The only way you can "watermark" text is by filling it with references to yourself and/or your brand, and even then it's not perfect. No, I would modify this advice: make sure the vast majority of your value is hard to scrape, like tools.
For something like recipes, copyright protection is thin: a bare list of ingredients isn't copyrightable, and how many apple pie recipes are there? So recipes are a likely target for scraping anyway. However, if you were to build a tool that suggested recipes for people based on their dietary needs and preferences, that would be genuinely useful, and impossible to replicate without a ton of work, because the source code stays hidden on the server. I don't think anyone has built something like that yet, but it would be incredibly link-worthy if they did.
Code, not content, is where added value can be both created and protected. We're swimming in content, but we're really short of ways to intelligently filter that content for different niches.
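The kind of filtering tool described above could start as small as this sketch; the recipe data and the field names are invented for illustration, not taken from any real site.

```python
# Hypothetical recipe-suggestion tool: the server-side logic a scraper
# can't copy just by grabbing the rendered page.

RECIPES = [
    {"name": "Lentil soup", "tags": {"vegan", "gluten-free"}, "sodium_mg": 300},
    {"name": "Apple pie", "tags": {"vegetarian"}, "sodium_mg": 200},
    {"name": "Beef stew", "tags": set(), "sodium_mg": 900},
]

def suggest(recipes, required_tags=frozenset(), max_sodium_mg=None):
    """Return names of recipes carrying all required tags and under the sodium limit."""
    results = []
    for recipe in recipes:
        # The recipe must carry every tag the visitor requires.
        if not set(required_tags) <= recipe["tags"]:
            continue
        # Optional per-visitor dietary limit.
        if max_sodium_mg is not None and recipe["sodium_mg"] > max_sodium_mg:
            continue
        results.append(recipe["name"])
    return results

print(suggest(RECIPES, required_tags={"vegan"}))   # ['Lentil soup']
print(suggest(RECIPES, max_sodium_mg=500))         # ['Lentil soup', 'Apple pie']
```

The filtering rules would grow with the niche (allergens, medical restrictions, ingredient substitutions), and that accumulated logic is the part no crowdscraper can pin.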
| 11:51 pm on Oct 11, 2013 (gmt 0)|
Yes, the recipes are geared towards a specific medical condition and are unusual. It is shaping up to be a ton of work but the need is there.
I don't want to dwell too much on this particular website idea, other than to say that PDF downloads are one way to discourage traditional scraping and crowdscraping as they are practiced today.
My observation is that there is currently much less text scraping than image crowdscraping. The latter is a veritable plague and has removed much of the old incentive to create graphics-oriented websites.
Pinterest garbage is beginning to crowd out original websites in SERPs, and search engines themselves are bypassing websites when displaying graphical content.
Neither happens to text; in fact, search engines are quite good at detecting duplicate text content and drowning it in the gutters of the SERPs.
Back to more words, fewer pictures, and pictures out of the reach of search engine bots.