
Google SEO News and Discussion Forum

Hijacking - Some Advice for Webmasters
Some advice on Google hijacking: how to find it, what to do, etc.
Pirates




msg:3184967
 10:50 am on Dec 11, 2006 (gmt 0)

Protecting yourself against hijacks

The best way, in my opinion, to stop a site being hijacked is to ensure pages are only served one way, by dealing with any canonical issues. There is some great advice on this from Matt Cutts:
[mattcutts.com...] Basically, redirect non-www to www, and "/index.html" to "/".
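For Apache users, here's a minimal .htaccess sketch of those two redirects (assuming mod_rewrite is available; www.example.com is a placeholder for your own canonical host):

RewriteEngine On

# Redirect non-www requests to the www host
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

# Redirect direct requests for index.html to the directory URL
# (the THE_REQUEST check avoids looping on internal DirectoryIndex subrequests)
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /([^?\ ]*/)?index\.html[\ ?]
RewriteRule ^(.*/)?index\.html$ http://www.example.com/$1 [R=301,L]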

What is a hijack?

"A page hijack is a technique exploiting the way search engines interpret certain commands that a web server can send to a visitor. In essence, it allows a hijacking website to replace pages belonging to target websites in the Search Engine Results Pages ("SERPs"). "
from [clsc.net...]
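In the classic 302 variant, for example, the hijacking URL simply responds with a redirect pointing at the target page, something like this simplified sketch (www.targetsite.com is a placeholder):

HTTP/1.1 302 Found
Location: http://www.targetsite.com/page.html

If the search engine then treats the hijacker's URL as the canonical address for that content, the target's pages can end up listed in the SERPs under the hijacker's domain.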

How can I tell if I have been hijacked?

Use a sitemap generator (such as the Google Sitemaps creator) to crawl your site, and make a note of the number of pages it finds. Now run site:www.yoursite.com on Google. If the number of pages in Google is vastly higher than the actual number of pages on your site, you "may" have some hijacked content.

How can I find who's hijacking me?

You can use your logs to do this. Because a hijack relies on mirroring your pages, the hijacking sites will start coming up as referrers to your site. So have a good look at the sites referring traffic to you and compare them to a few months ago. Then check out the new referrers by visiting each one. In my experience, a hijacker site often uses a numeric or gibberish domain name, typically with a .com or .net extension. Also look for adult content sites among your referrers and treat them as suspect.

OK, I have my list. What do I do next?

Report them. Explain what tests you have made, and list the referrers you find suspicious, to [google.com...]


 

Pirates




msg:3188172
 1:01 am on Dec 14, 2006 (gmt 0)

Some more things you can do to protect against hijackers, as mentioned in the article at
[clsc.net...]

Use the <base href=""> element on all your pages

Change any 302 redirects on your site to 301 redirects
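If your 302s come from mod_alias Redirect directives, the fix can be as simple as changing the status code (the paths here are placeholders):

# Before: a temporary redirect that hijackers can exploit
Redirect 302 /old-page.html http://www.example.com/new-page.html

# After: a permanent redirect
Redirect 301 /old-page.html http://www.example.com/new-page.html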

Things to do in .htaccess

OK, so you have found hijackers in your referrals and reported them. What else can you do?

Sorry, this only works on Apache, but hopefully a Windows expert can post the equivalent...

Create a .htaccess file in root.

Now use a DNS lookup tool to find the ip address of the site hijacking you.

Once you have that, put this in .htaccess:

# Block the hijacker's server by IP, allow everyone else
order allow,deny
deny from {IP address of hijacker site - without the brackets}
allow from all

But also remember there is probably a cron job on the hijacker's server copying your pages, so let's deny that site access as well.

RewriteEngine on
# Refuse any request that arrives with the hijacker's domain as the referrer
RewriteCond %{HTTP_REFERER} hijacker\.com [NC]
RewriteRule .* - [F]

OK, but maybe there are some desktop tools being used to scrape the site. No worries...

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [OR]
RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:craftbot@yahoo.com [OR]
RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [OR]
RewriteCond %{HTTP_USER_AGENT} ^Custo [OR]
RewriteCond %{HTTP_USER_AGENT} ^DISCo [OR]
RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [OR]
RewriteCond %{HTTP_USER_AGENT} ^eCatch [OR]
RewriteCond %{HTTP_USER_AGENT} ^EirGrabber [OR]
RewriteCond %{HTTP_USER_AGENT} ^EmailSiphon [OR]
RewriteCond %{HTTP_USER_AGENT} ^EmailWolf [OR]
RewriteCond %{HTTP_USER_AGENT} ^Express\ WebPictures [OR]
RewriteCond %{HTTP_USER_AGENT} ^ExtractorPro [OR]
RewriteCond %{HTTP_USER_AGENT} ^EyeNetIE [OR]
RewriteCond %{HTTP_USER_AGENT} ^FlashGet [OR]
RewriteCond %{HTTP_USER_AGENT} ^GetRight [OR]
RewriteCond %{HTTP_USER_AGENT} ^GetWeb! [OR]
RewriteCond %{HTTP_USER_AGENT} ^Go!Zilla [OR]
RewriteCond %{HTTP_USER_AGENT} ^Go-Ahead-Got-It [OR]
RewriteCond %{HTTP_USER_AGENT} ^GrabNet [OR]
RewriteCond %{HTTP_USER_AGENT} ^Grafula [OR]
RewriteCond %{HTTP_USER_AGENT} ^HMView [OR]
RewriteCond %{HTTP_USER_AGENT} HTTrack [NC,OR]
RewriteCond %{HTTP_USER_AGENT} ^Image\ Stripper [OR]
RewriteCond %{HTTP_USER_AGENT} ^Image\ Sucker [OR]
RewriteCond %{HTTP_USER_AGENT} Indy\ Library [NC,OR]
RewriteCond %{HTTP_USER_AGENT} ^InterGET [OR]
RewriteCond %{HTTP_USER_AGENT} ^Internet\ Ninja [OR]
RewriteCond %{HTTP_USER_AGENT} ^JetCar [OR]
RewriteCond %{HTTP_USER_AGENT} ^JOC\ Web\ Spider [OR]
RewriteCond %{HTTP_USER_AGENT} ^larbin [OR]
RewriteCond %{HTTP_USER_AGENT} ^LeechFTP [OR]
RewriteCond %{HTTP_USER_AGENT} ^Mass\ Downloader [OR]
RewriteCond %{HTTP_USER_AGENT} ^MIDown\ tool [OR]
RewriteCond %{HTTP_USER_AGENT} ^Mister\ PiX [OR]
RewriteCond %{HTTP_USER_AGENT} ^Navroad [OR]
RewriteCond %{HTTP_USER_AGENT} ^NearSite [OR]
RewriteCond %{HTTP_USER_AGENT} ^NetAnts [OR]
RewriteCond %{HTTP_USER_AGENT} ^NetSpider [OR]
RewriteCond %{HTTP_USER_AGENT} ^Net\ Vampire [OR]
RewriteCond %{HTTP_USER_AGENT} ^NetZIP [OR]
RewriteCond %{HTTP_USER_AGENT} ^Octopus [OR]
RewriteCond %{HTTP_USER_AGENT} ^Offline\ Explorer [OR]
RewriteCond %{HTTP_USER_AGENT} ^Offline\ Navigator [OR]
RewriteCond %{HTTP_USER_AGENT} ^PageGrabber [OR]
RewriteCond %{HTTP_USER_AGENT} ^Papa\ Foto [OR]
RewriteCond %{HTTP_USER_AGENT} ^pavuk [OR]
RewriteCond %{HTTP_USER_AGENT} ^pcBrowser [OR]
RewriteCond %{HTTP_USER_AGENT} ^RealDownload [OR]
RewriteCond %{HTTP_USER_AGENT} ^ReGet [OR]
RewriteCond %{HTTP_USER_AGENT} ^SiteSnagger [OR]
RewriteCond %{HTTP_USER_AGENT} ^SmartDownload [OR]
RewriteCond %{HTTP_USER_AGENT} ^SuperBot [OR]
RewriteCond %{HTTP_USER_AGENT} ^SuperHTTP [OR]
RewriteCond %{HTTP_USER_AGENT} ^Surfbot [OR]
RewriteCond %{HTTP_USER_AGENT} ^tAkeOut [OR]
RewriteCond %{HTTP_USER_AGENT} ^Teleport\ Pro [OR]
RewriteCond %{HTTP_USER_AGENT} ^VoidEYE [OR]
RewriteCond %{HTTP_USER_AGENT} ^Web\ Image\ Collector [OR]
RewriteCond %{HTTP_USER_AGENT} ^Web\ Sucker [OR]
RewriteCond %{HTTP_USER_AGENT} ^WebAuto [OR]
RewriteCond %{HTTP_USER_AGENT} ^WebCopier [OR]
RewriteCond %{HTTP_USER_AGENT} ^WebFetch [OR]
RewriteCond %{HTTP_USER_AGENT} ^WebGo\ IS [OR]
RewriteCond %{HTTP_USER_AGENT} ^WebLeacher [OR]
RewriteCond %{HTTP_USER_AGENT} ^WebReaper [OR]
RewriteCond %{HTTP_USER_AGENT} ^WebSauger [OR]
RewriteCond %{HTTP_USER_AGENT} ^Website\ eXtractor [OR]
RewriteCond %{HTTP_USER_AGENT} ^Website\ Quester [OR]
RewriteCond %{HTTP_USER_AGENT} ^WebStripper [OR]
RewriteCond %{HTTP_USER_AGENT} ^WebWhacker [OR]
RewriteCond %{HTTP_USER_AGENT} ^WebZIP [OR]
RewriteCond %{HTTP_USER_AGENT} ^Wget [OR]
RewriteCond %{HTTP_USER_AGENT} ^Widow [OR]
RewriteCond %{HTTP_USER_AGENT} ^WWWOFFLE [OR]
RewriteCond %{HTTP_USER_AGENT} ^Xaldon\ WebSpider [OR]
RewriteCond %{HTTP_USER_AGENT} ^Zeus
RewriteRule ^.* - [F,L]

Source: [javascriptkit.com...]

[edited by: Pirates at 1:37 am (utc) on Dec. 14, 2006]

tedster




msg:3188200
 2:03 am on Dec 14, 2006 (gmt 0)

Thanks for the info, Pirates. In recent times I see much less hijacking than in the past - and often what I do see is accidental, not malicious. Do you have a sense of why the hijacks that still work are getting through?

People may find some interesting reading in the following thread from last year. Particularly note the messages from GoogleGuy beginning at message #732728 (about number 108) -- this exchange, for instance:

crobb305:
Are provisions being made to allow sites to return to the serps after being penalized for problems beyond their control?

GoogleGuy:
...in many of the cases that I've examined, a spam penalty comes first. That spam penalty causes the PageRank of a site to decrease. Since one of the heuristics to pick a canonical site was to take PageRank into account, the declining PageRank of a site was usually the root cause of the problem. That's what happened with your site, crobb305. So the right way forward for people who still have problems is to send us a reinclusion report.

[webmasterworld.com...]

<added later>
Here's another good thread
about proxy hijacks [webmasterworld.com]

[edited by: tedster at 1:40 am (utc) on June 25, 2007]

Pirates




msg:3188212
 2:30 am on Dec 14, 2006 (gmt 0)

I think you hit the nail on the head, tedster. A site without a penalty is probably strong enough to withstand a hijack. However, if the hijacker has got in early, they will start to create a penalty with the hijacked pages by adding spam to them. The hijacker is then able to penalise the site through the hijacked content. And guess what: now that the site is penalised, they can and will hijack any page they want.

Tami




msg:3188392
 8:16 am on Dec 14, 2006 (gmt 0)

Pirates,

I noticed that in the list of scrapers for the .htaccess file, most have the ^ but a few do not. Is this on purpose?

Also, to get it straight: I can copy and paste your list, as is, into my .htaccess and it will prevent at least those scrapers from scraping my site?

Thanks,
Tami

Pirates




msg:3188910
 8:48 pm on Dec 14, 2006 (gmt 0)

I noticed that in the list of scrapers for the .htaccess file, most have the ^ but a few do not. Is this on purpose?

Also, to get it straight: I can copy and paste your list, as is, into my .htaccess and it will prevent at least those scrapers from scraping my site?

Hi Tami

Yes, you can cut and paste it straight into .htaccess and it will block these scrapers. The few that are missing the "^" are that way on purpose. Basically, "^HTTrack" matches only when the user-agent string starts with that text, while a plain "HTTrack" matches anywhere in the user-agent string.

Here's what "Superman" said way back in 2002 on this....

99 percent of these things always use the exact useragent name. If there are anomalies (like HTTrack), then by all means make it case-insensitive. Same with the ^ character. They always start the same way.

Here are the three threads on WebmasterWorld that discuss the code in great detail.

A Close to perfect .htaccess ban list (part 1 and 2 and 3)
[webmasterworld.com...]
[webmasterworld.com...]
[webmasterworld.com...]

romerome




msg:3189208
 5:19 am on Dec 15, 2006 (gmt 0)

I am trying to figure out if someone is trying to hijack my page. When I look through my log files I find a lot of entries like:

Referrer: http://stupid_spam_url.com
referred to -> www.mysite.com/stupid_spam_text.html

Now, stupid_spam_text.html does not exist on my server.

Also, http://stupid_spam_url.com does not have a link to me. It is frequently an adult site or a bogus pill-selling page.

[edited by: tedster at 6:41 am (utc) on Dec. 15, 2006]

ashear




msg:3189233
 6:21 am on Dec 15, 2006 (gmt 0)

Great post! I love to see new thinking that's out of the box.

tedster




msg:3189244
 6:45 am on Dec 15, 2006 (gmt 0)

Welcome to the forum, romerome.

To me, what you describe sounds more like plain old referrer spam than a hijack attempt.

mcskoufis




msg:3189374
 11:31 am on Dec 15, 2006 (gmt 0)

Hello to all...

I had one of my sites heavily hijacked... I did over a trillion things to find a way out of it, as Google traffic fell from several hundred referrals a day to just 5.

It took me months to recover the site...

One thing I want to add is that it is ULTRA important to add a base href tag to all your pages.

With that in place, I think the hijacker gets penalised, as Google sees that he is trying to present content as his when in fact the base URL is my site and not his.

I think this, together with a proper 404 page (one actually outputting a 404 server response code), can do the job.

However, because there are a gazillion things I've tried in order to recover, I can not be 100% certain that this is what made things right.

Also, I don't know (and nobody here does) what Google's counter-measures on this are. I've notified them several times with examples of both hijackers and of 302 spam, so I am not at all sure what helped with my site.

I can say that the 404 page is an important measure... Make sure it is proper... I have also added a "noindex, nofollow" on my 404 page...

If you think about it, hijacking is presenting the content of a legit site as if it were the hijacker's... If you think a bit more, you might find that those most interested in this are premium domain sites... Check the domains sending you traffic. Not just the pages that appear in your logs, but their homepage as well... Does it have a link back to a domain auction page/site? Does the offending domain include some of your targeted keywords?

Things like this... Once you've researched, file a DETAILED spam report to Google, but try to identify the origins of the attack...
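On the 404 point, here is a minimal sketch of what I mean, assuming Apache and PHP (the filename notfound.php is just an example):

In .htaccess:

ErrorDocument 404 /notfound.php

And in notfound.php:

<?php
// Send a real 404 status code instead of the default 200
header("HTTP/1.0 404 Not Found");
?>
<html>
<head>
<title>Page not found</title>
<meta name="robots" content="noindex, nofollow">
</head>
<body>
<p>Sorry, that page does not exist on this site.</p>
</body>
</html>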

caryl




msg:3189493
 2:22 pm on Dec 15, 2006 (gmt 0)

Does anyone know how to do the .htaccess stuff on Windows Servers?

RichTC




msg:3189636
 4:12 pm on Dec 15, 2006 (gmt 0)

"ULTRA important to add a base href meta tag to all your pages"

Would you mind expanding on what a "base href meta tag" is, how you code for this and where exactly this would go your page?

i.e Page starts:-

<html>
<head>
<title>My Widget Site</title>
<meta name="description" content="Widget Site on the net">
<meta name="keywords" content="Widgets, More Widgets">

What does that line look like, and does it go after <meta name="keywords"> or is it the last line before </html>?

Many thanks in advance

Rich

Jordo needs a drink




msg:3189662
 4:28 pm on Dec 15, 2006 (gmt 0)

Would you mind expanding on what a "base href tag" is, how you code for it, and where exactly it would go on your page?

For details [w3.org...]

In your example:

<html>
<head>
<title>My Widget Site</title>
<BASE href="http://www.example.com/">
<meta name="description" content="Widget Site on the net">
<meta name="keywords" content="Widgets, More Widgets">

where example.com is your site's URL

[edited by: Jordo_needs_a_drink at 4:29 pm (utc) on Dec. 15, 2006]

mattg3




msg:3189673
 4:36 pm on Dec 15, 2006 (gmt 0)

I use a Squid reverse proxy to prevent hijack attempts.

Something like this (the acl also needs a matching http_access line to actually deny the request):

acl unwantedbrowser browser ^[Ww][Gg][Ee][Tt].*$
http_access deny unwantedbrowser

and so on.

I would also deny lynx, as it's easy to integrate into a neat shell script for the medium-talented script kiddie.

tedster




msg:3189829
 6:30 pm on Dec 15, 2006 (gmt 0)

Note this about the base element:

Relative URIs are resolved according to a base URI, which may come from a variety of sources. The BASE element allows authors to specify a document's base URI explicitly.

I bring this up because I have seen sites make errors here that generate spidering issues. If the URLs in a page's links are all root-relative [that is, they all begin with a slash], then plugging in the domain name for a base href seems to work well. The issues come in with completely relative URLs that occur within a sub-directory.

Let's look at what happens if you have a document at this address:
http://www.example.com/directory/page1.htm

If a link on that page points to page2.htm the intended full address of such a target page is:
http://www.example.com/directory/page2.htm

But if you set the base href to be http://www.example.com/, then you are telling the spider to go to:
http://www.example.com/page2.htm. See what happened? The directory name got dropped from the URL, and that means that the link breaks!

For this reason, the best practice for a base element is to make the href point to the full absolute URL of the page itself.

See the base element examples on the W3C [w3.org] for more clarification.
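Applied to the example above, the safe version on that page would be:

<head>
<title>Page One</title>
<base href="http://www.example.com/directory/page1.htm">
</head>

Now a relative link to page2.htm resolves to http://www.example.com/directory/page2.htm, as intended.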

jdMorgan




msg:3189865
 6:59 pm on Dec 15, 2006 (gmt 0)

Basically "^HTTrack" would look for an exact match and "HTTrack" would look anywhere in the query string.

This is standard regular-expressions notation. In the example code above,
"^HTTRACK" would match any user-agent whose name starts with "HTTRACK"
"HTTRACK$" would match any user-agent whose name ends with "HTTRACK"
"^HTTRACK$" would match only a user-agent whose complete name exactly matches "HTTRACK"

The [NC] on the end of a RewriteCond makes the comparison case-insensitive. The way the example code is structured, the [OR] is required on all except the last RewriteCond. Any mod_rewrite code will not work properly if [OR] is included on the last RewriteCond.
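For instance, a correctly structured final pair of conditions looks like this (using two of the user-agents from the list above; note there is no [OR] on the last condition):

RewriteCond %{HTTP_USER_AGENT} ^WebZIP [OR]
RewriteCond %{HTTP_USER_AGENT} ^Wget
RewriteRule ^.* - [F,L]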

Does anyone know how to do the .htaccess stuff on Windows Servers?

See the ISAPI Rewrite module for IIS.

Jim

Pirates




msg:3190133
 12:36 am on Dec 16, 2006 (gmt 0)

Some more things you can do to prevent and break hijacks...

First of all, thanks for all the posts, and keep them coming. I have been spending too much time working in this area of late, and to be honest it's not work I enjoy. I like building websites that I hope are of value to the internet and the client. I would prefer this problem didn't exist at all, so with that in mind I am going to continue to post info until we've got this one solved. Thanks, Rich, for keeping quiet, but I've decided to share this now.

The random factor: how to do it, how it works...

OK, we all know that hijacks rely on copying your content. So let's introduce a random factor into your content that makes it a lot harder to copy. There are various ways this can be achieved; however, after much thought and experimentation, I am going to use the server time.

In the footer of your site, include the time on your server, for instance: <p>Current time on this site (include the server time, with seconds)</p>

How is this achieved?

There are various ways to achieve this; here's how I do it in PHP:

<?php echo date('l dS \o\f F Y h:i:s A'); ?>

Got a static site that's been hijacked? No worries, you can use this too...

In .htaccess:

# Parse .htm and .html files through PHP so they can contain PHP code
RemoveHandler .html .htm
AddType application/x-httpd-php .php .htm .html

If you're running a control panel such as Plesk or cPanel, you should consult your host before you make this change.

Then you can include the PHP in a static site.

How does it work?

The hijacker relies on copying your content exactly. Now that you have introduced a random factor, the hijacker not only has to match your content, he has to match it to the second at the moment Google crawled each page of your site. And every time Google crawls the site.

It's checkmate to hijackers, I think, if Google uses an exact match on your site.
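So in a static .html page (now parsed as PHP after the .htaccess change above), the footer might contain something like this - the wording is just an example:

<p>Current time on this site: <?php echo date('l dS \o\f F Y h:i:s A'); ?></p>

which outputs something like: Current time on this site: Saturday 16th of December 2006 05:18:42 PM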

mcskoufis




msg:3190146
 1:17 am on Dec 16, 2006 (gmt 0)

Hey Tedster.... Thanks for bringing this up...

The system I deploy to build my sites has the base href element in a variable, thus adjusting it accordingly for directories...

mcskoufis




msg:3190175
 1:56 am on Dec 16, 2006 (gmt 0)

Pirates, I am not sure that will help that much on this front. My hijacked site had continuously updated content on every page: 10 links aggregated from various newsfeeds on the web, plus a list of the recently published headlines on my (hijacked) site.

What you refer to here is scraping (unless I didn't understand properly). Scraping is another story (I think).

Scraping can be done by duplicating your pages' code, but it can also be done by copying portions of a page of yours.

I've come across several pages on various spammy sites copying a paragraph from my site and quoting me as a source, which is totally legit...

Those pages don't have only my page's paragraph, but also paragraphs from other sites.

Another recent trend is for scrapers to open a blog, grab a Google News RSS feed, and feature the top 2 articles in a blog post, but without linking to those articles.

A really valuable tool to monitor such activity is Google Alerts. Each time a site mentions my example.com, or just example (provided it is kind of unique to your site), either in a blog post or a webpage, I get an email.

Each time I gather a relatively good list of scrapers (the same with spammers of any kind), I do a reverse-IP lookup to see what other domains are hosted on that IP address, and together with the IP block owner information, whois records, and whatever else I can't remember right now, I fill in a spam report to Google, with evidence (some extracts from my server logs).

Then again, the problem is that the spammers will always try, and will always discover other ways to fool Google's (and any search engine's) algos. It is just a matter of the webmaster being vigilant and following up any leads that look suspicious. Weird referrers, inflated pageviews from a single IP address, sites mentioning your site's name... stuff like that.

Sending feedback to Google about any suspicious activity is a good step forward (in my opinion)... They might not pick up on it the next day, but I think they must be paying attention to these reports. After all, they get some invaluable information out of these spam reports and dissatisfied results.

Sorry for the long post, folks. This issue took me about 9 months to get to grips with. We are talking about a long period of intense paranoia about anything moving on my site(s). I lost too much money, and a project into which we had invested loads of positive energy, research, and effort went into the waste bin.

And the funny thing is that you still can not be certain about a damn thing unless you work for Google. And despite their efforts and goodwill, a bit of advice from them wouldn't disclose so much about their algos. You know... things like "what to do if you are in that trouble", instead of the usual "there is ALMOST nothing a competitor can do...".

This is despite the fact that I am still thankful for Google's SINCERE efforts, and have congratulated them on their Sitemaps as a significant step forward. One thing I say to them: so much talent and positive energy is wasted on uncovering #%$#%&*# scum like these, let alone limited funding going to waste. It is a clear SHAME that for the sake of an algo Google is allowing this to happen (intentionally).

Pirates




msg:3190183
 2:14 am on Dec 16, 2006 (gmt 0)

mcskoufis

Read what I have said, and wake up, mate. There is no need to be in the pit of depression you're in now. These guys can be beaten by each and every website affected. That's the reason I started the thread.

mcskoufis




msg:3190344
 9:16 am on Dec 16, 2006 (gmt 0)

Pirates,

I used to have problems... All such problems are a thing of the past for me and for all the sites I am operating...

I am fully awake... Believe me... I've spent endless hours bringing back Google traffic...

And some of my comments are for the community as well and not specific to your site.

None of my pages are static. They are all dynamic, with continuously updated content (on each and every page)...

If you don't want to listen to someone who recovered from hijacks/302s/spam referrals/scraping, then that's up to you...

The end result for me was something like a 3000% increase in Google traffic (according to Analytics) in just a few weeks. It has not stopped increasing since then (October), and I now rank in the top 40 for a generic and highly competitive single word in Google, among some huge and long-established sites (like the BBC).

Pirates




msg:3190593
 5:18 pm on Dec 16, 2006 (gmt 0)

Mcskoufis, I think you make some excellent points. Using Google Alerts to keep tabs on your content is a great idea, and making use of Google Sitemaps is also good advice.

Making use of Google Webmaster Tools to fix problems on your website...
As mcskoufis points out, Google has made webmaster tools available that can help iron out problems on your site.

https://www.google.com/webmasters/tools/docs/en/about.html

Some of the features of Google Webmaster Tools include site verification, creating a Google Sitemap, and setting whether the site uses www or non-www. It will also show crawl information and help pinpoint bad links, 404 pages on your site, etc.

Google Alerts

[google.com...]
As mcskoufis points out, if you use your website name as a topic of choice, you can receive email notifications of where it is being used in Google's index. A great way to get an early warning of content theft.

mcskoufis




msg:3190629
 6:32 pm on Dec 16, 2006 (gmt 0)

Just an attribution note...

mcskoufis learned all this from the invaluable advice published repeatedly across numerous threads, primarily by g1smd and tedster, plus theBear and all the others I can't remember right now... Just to keep the facts right :)

Pirates




msg:3190876
 4:14 am on Dec 17, 2006 (gmt 0)

g1smd and theBear publish some great stuff, but tedster I've never heard of...

Just joking, Ted. For me too, your advice has been invaluable.

Thank you

kidder




msg:3190924
 8:25 am on Dec 17, 2006 (gmt 0)

Funny this has just come up, as I was about to post about something that sounds like this.

I found my site copied on another site. An exact copy: the links, the colours, everything. The copied site comes up as supplemental results, so from what I can tell Google has already dealt with it. The site that copied us is some type of web directory. Our site is pretty much running page for page (the HTML part) in one of their sub-directories, which they have called "proxy". It's pretty disgusting. We did take a big hit in the SERPs a few months back, so this may have been part or all of the reason.

I am not quite sure what to do next.

mcskoufis




msg:3190958
 10:01 am on Dec 17, 2006 (gmt 0)

For the queries that show this site as supplemental, I would click on the "Dissatisfied" link at the bottom and let Google know. Then I would also submit a spam report at:

[google.com...]

The fact that it is supplemental doesn't mean that Google penalises them for this. It could be for other reasons...

I would also submit a spam report to the other major search engines...

wrkalot




msg:3191084
 2:27 pm on Dec 17, 2006 (gmt 0)

I need to know if this will work, or whether it is the correct way to output the base URL.

In ASP:

<%
' Build the base URL from the current page's path on the server
Dim BaseURL
BaseURL = "http://www.mydomain.com" & Request.ServerVariables("URL")
%>
<base href="<%=BaseURL%>"/>

The entire URL is written:
<base href="http://www.mydomain.com/default.asp"/>
<base href="http://www.mydomain.com/shop/itemdetail.asp"/>
etc...

It writes the entire path but doesn't include any query strings, like <base href="http://www.mydomain.com/shop/itemdetail.asp?ItemID=386726348"/>

Does this look acceptable, or should it just show the directory and not the actual page?

wrkalot




msg:3191943
 1:33 pm on Dec 18, 2006 (gmt 0)

Anyone?

mcskoufis




msg:3191968
 2:05 pm on Dec 18, 2006 (gmt 0)

Personally I have never used ASP, so I can't help you on this one...

Maybe try posting this in the related forum...

wrkalot




msg:3191985
 2:33 pm on Dec 18, 2006 (gmt 0)

The ASP part doesn't matter; it's the way the base URL is displayed that I'm asking about.

This is how I show the base URL now:
<base href="http://www.mydomain.com/info/about-widgets.asp"/>
<base href="http://www.mydomain.com/shop/itemdetail.asp"/>
etc...

Is the above OK or does it need to be at the directory level like this:
<base href="http://www.mydomain.com/info/"/>
<base href="http://www.mydomain.com/shop/"/>
etc...
