Specifically, I want the search engine to tell me which websites have a reference like this:
It's a way to know how many websites are using your COMPETITOR'S web service :-)
Anyway, search Google for [code search] and you'll find a few code search engines. However, these usually don't search through the source code of ordinary web pages; in particular, Google's own "codesearch" does not.
What about these options:
1. Link searchers. Maybe there are search engines that do "page-link searches", and these may process <script src=>?
2. Companies that do custom crawling of the internet?
It gives you programmatic access to lots of the data Alexa has crawled.
You can use that and write a program that goes through each page and looks for a pattern.
I don't know if they give you access to the raw HTML, which you will need, or only to pre-parsed text.
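A minimal sketch of that pattern search, assuming you already have a page's raw HTML; the URL "competitor-widget.js" below is a made-up placeholder, not something from this thread:

```python
import re

# Match any <script ... src="..."> tag and capture the src URL.
SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

def script_srcs_matching(html, needle):
    """Return the src values of <script> tags whose URL contains needle."""
    return [src for src in SCRIPT_SRC.findall(html) if needle in src]

# Hypothetical page referencing a competitor's web service.
page = '<head><script type="text/javascript" src="http://example.com/competitor-widget.js"></script></head>'
print(script_srcs_matching(page, "competitor-widget"))
# -> ['http://example.com/competitor-widget.js']
```

A regex like this is fine as a first pass over crawled HTML, though it will miss unusual markup; a real HTML parser is more robust.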
They have a "links" field, but from the example they gave it "smells" like the normal anchor link, so there's no concrete reason to put the effort into testing it. (Also, in Alexa, link-search did not give me what I wanted.)
Thanks. Any more ideas, anyone?
Your program could then search each page for your pattern, and save the URL of each page that has it.
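That "save the URL of each matching page" step could look like the sketch below, assuming you already have (url, html) pairs from whatever crawl source you use; the URLs and the widget path are invented for illustration:

```python
def urls_using_service(pages, pattern):
    """Collect the URL of every crawled page whose HTML contains pattern."""
    return [url for url, html in pages if pattern in html]

# Hypothetical crawl results: (url, raw html) pairs.
crawl = [
    ("http://a.example/", '<script src="http://svc.example/widget.js"></script>'),
    ("http://b.example/", "<p>no script tags at all</p>"),
]
print(urls_using_service(crawl, 'src="http://svc.example/widget.js"'))
# -> ['http://a.example/']
```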
For example, this sample
[alexa.com...] uses AWS and Ruby to access image headers and make them searchable.
More samples here: [alexa.com...]
You could write your own program along those lines to create such a service.
This has been suggested by others (see "http://tadhg.com/wp/2007/01/30/walking-the-html-dom-without-a-browser/")
but nobody seems to have done this yet?
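Walking the HTML without a browser, as the linked post describes, doesn't need much machinery; one possible sketch using Python's stdlib parser, collecting every <script src> value as it goes:

```python
from html.parser import HTMLParser

class ScriptSrcCollector(HTMLParser):
    """Record the src attribute of every <script> tag encountered."""

    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            for name, value in attrs:
                if name == "src" and value:
                    self.srcs.append(value)

collector = ScriptSrcCollector()
# Hypothetical page; "/js/tracker.js" is a placeholder path.
collector.feed('<html><head><script src="/js/tracker.js"></script></head></html>')
print(collector.srcs)
# -> ['/js/tracker.js']
```

A parser like this tolerates messy real-world markup better than a regex, which matters when you're scanning pages you don't control.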
Other examples: get an estimate of the number of pages that run a specific script or a specific ad programme, use "nofollow" on links, have the word "sex" in their meta descriptions, or have an inline style class called "joe".
Really, there is an enormous amount of useful stuff you could do with a source search engine.