| 10:44 am on Nov 30, 2001 (gmt 0)|
the challenge is not where the links come from, but rather how they are formatted. You can build a page from a database and include links quite easily, and if they are hard linked, as in somesite.com/somepage.htm, then you are OK. But if they are formatted as yoursite.com/link.asp?ID=number, then they are less effective.
Some spiders may follow links containing "?" and "=", but they will probably not apply any linking factor in the search algorithm. As an example, have a look at either site in my profile: all the links in the directory sections are generated from a database in real time, but they are formatted as hard links.
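To make the distinction concrete, here's a minimal sketch in Python (the thread is about ASP, but the idea is language-neutral). The row data and field names are made up for illustration; the point is that the same database rows can be rendered either way:

```python
# Hypothetical sketch: rendering database-driven links as "hard" static-style
# URLs instead of query strings. Rows and field names are invented examples.
rows = [
    {"id": 17, "slug": "ski-resorts", "title": "Ski Resorts"},
    {"id": 42, "slug": "snow-reports", "title": "Snow Reports"},
]

def query_string_link(row):
    # The less spider-friendly form: /link.asp?ID=42
    return '<a href="/link.asp?ID=%d">%s</a>' % (row["id"], row["title"])

def hard_link(row):
    # The spider-friendly form: looks like a plain static page
    return '<a href="/%s.htm">%s</a>' % (row["slug"], row["title"])

for row in rows:
    print(hard_link(row))
```

Both functions pull from the same database row; only the URL format changes, which is exactly the difference the spiders care about.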
| 5:02 pm on Dec 3, 2001 (gmt 0)|
Sorry it's taken so long to get back to you.
| 2:15 pm on Dec 4, 2001 (gmt 0)|
An ASP page can be a combination of HTML code and script. VBScript starts with <% and ends with %>.
If you have your links in the HTML portions of the page, search engines don't seem to have much difficulty in finding them.
If you are generating them on the fly based upon input from cookies, the search engine will probably not find them.
| 6:04 pm on Dec 4, 2001 (gmt 0)|
>>But if they are formatted as yoursite.com/link.asp?ID=number, then that is less effective.
Not necessarily. I was actually just investigating how multiple pages of a client's ASP site got crawled by Google.
Try this search [google.com] on Google - you'll see toward the end of the second page URLs ending with .asp?AppID=number, and multiple pages of these were spidered & indexed.
The next step, though, is to make the content easier for searchers to find. I'm thinking that when these types of items are input into the database, the <TITLE> tag and meta description should somehow be filled in automatically, so they reflect the content of the page more accurately.
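That auto-fill step could look something like the following Python sketch (field names are hypothetical; in practice this would live in whatever script builds the page head):

```python
import html

def head_tags(record):
    """Build a <TITLE> tag and meta description from a database record,
    so each dynamically generated page describes its own content.
    The "name" and "summary" field names are invented for this example."""
    title = html.escape(record["name"])
    # Keep the description short; escape so stray quotes/ampersands
    # in database text can't break the markup.
    desc = html.escape(record["summary"][:150])
    return ('<title>%s</title>\n'
            '<meta name="description" content="%s">' % (title, desc))

print(head_tags({"name": "Acme Widgets", "summary": "Widgets for every need."}))
```

The key design point is that the tags are derived from the same record that generates the body, so they can never drift out of sync with the page content.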
| 1:49 am on Dec 5, 2001 (gmt 0)|
Skiguide, you are correct in that Google does spider "?" URLs; in fact, some of my pages with "?" come up higher than those without, based purely on content. However, some of the other engines definitely do have problems.
| 3:57 pm on Dec 5, 2001 (gmt 0)|
the ability of SE spiders to read dynamic pages all depends on how you build the site to write its files, so that dynamic content appears to be static (i.e., pulling it into HTML pages in a short amount of time).
Spiders are stupid, but that doesn't mean FAST and Inktomi lack the ability to read past the "?" - it is just a matter of resource consumption. That's why you'll see that using pay-for-inclusion (PFI) programs like Ink's or FAST's will get more dynamic content spidered.
The only reason Google will do it is if they deem the pages a 'quality' resource.
| 3:59 pm on Dec 5, 2001 (gmt 0)|
Or in some cases directly submitted.
| 12:15 pm on Dec 21, 2001 (gmt 0)|
Most of the guys here build dynamically driven websites and find ways to flummox the engines into thinking that those pages are static.
Simply replace the poisonous symbols in the URL.
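One way to sketch that replacement, again in Python as a language-neutral illustration (the URL scheme and function names are my own invention, not anyone's production setup): rewrite the query string into path segments, with a matching reverse mapping the server would perform.

```python
def static_url(dynamic_url):
    # Turn /link.asp?ID=42&cat=7 into /link/ID/42/cat/7.htm,
    # so the "poisonous" ?, & and = never appear in the link.
    path, _, query = dynamic_url.partition("?")
    base = path.rsplit(".", 1)[0]          # drop the .asp extension
    parts = [base]
    for pair in query.split("&"):
        key, _, value = pair.partition("=")
        parts += [key, value]
    return "/".join(parts) + ".htm"

def restore_query(static_path):
    # Reverse mapping the server would do (e.g. in a rewrite filter or
    # custom 404 handler) before running the real ASP page.
    trimmed = static_path[:-4] if static_path.endswith(".htm") else static_path
    segs = trimmed.split("/")              # ["", "link", "ID", "42", ...]
    base = "/".join(segs[:2]) + ".asp"
    pairs = segs[2:]
    query = "&".join("%s=%s" % (k, v) for k, v in zip(pairs[::2], pairs[1::2]))
    return base + ("?" + query if query else "")
```

The spiders only ever see the .htm form; the server quietly translates it back before the page is generated.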
| 1:45 pm on Dec 21, 2001 (gmt 0)|
I'm going to have to spend a bit of time learning this subject properly over the Christmas break.
Have a good Christmas