
Google SEO News and Discussion Forum

How to Remove Hijacker Page Using Google Removal Tool
8,058,044,651 pages indexed (now minus 1)
Idaho
msg:756514
 6:19 pm on Mar 17, 2005 (gmt 0)

Continued from: [webmasterworld.com...]


With the help of posts from crobb305 and others, I was able to remove a hijacker's page from the Google index.

My site was doing very well in the SERPs. For over 2 years it had been on the first page for a competitive term (1.2 million listings). Then during the first week in January my site disappeared and traffic tanked for no obvious reason.

When searching for "site:www.mydomain.com" I noticed that my index page often wasn't listed, or it appeared around page 3 or 4 of the results, after all my supplemental pages.

A search for "allinurl:mysite.com" often didn't show my index page at all but instead showed somebody else's domain (located in Turkey). When I clicked on this link, my site came up. When I clicked on the cached version of the site, it showed a very old cache of the page. This same site also showed up after all my results when doing a "site:www.mydomain.com" search.

Using a header checker tool on the site's URL, I was able to see it was issuing a 302 redirect to my site.
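You can see the same thing without a web-based tool using curl's -I option, which fetches only the response headers (the hijacker URL here is made up for illustration):

-----
$ curl -I "http://www.badguy.xyz/redirect.cgi?id=123"
HTTP/1.1 302 Found
Location: http://www.mydomain.com/
-----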

Last night, after reading some posts by crobb305 and others, I went to Google.com and clicked on "About Google," then "Webmaster Info," then "I need my site information removed," then "remove individual pages," where I found instructions on how to remove the page.

(Here's the exact page where I ended up. If mod needs to remove then snip away:) [google.com...]

I then clicked on the "urgent" link.

Then:
1. I signed up for an account with Google and replied back to them from an email they sent me;
2. I added the "noindex" meta tag according to their instructions and uploaded it to my site (the tag itself is shown just after this list);
3. Using the instructions to remove a single page from the Google index, I added the hijacker's URL that was pointing to my site. (copy and paste from the result found on "allinurl" search)

This didn't work the first time because I had to remove a space from the url to get it to work.

4. I got a message back saying that the request would be taken care of within 24 hours. The URL that I entered showed on the upper right-hand part of the screen saying "removal of (hijacker's URL) pending."
5. I then removed the "noindex" meta tag from my page and re-uploaded it to my site.
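For reference, the standard robots meta tag the removal tool looks for goes in the page's <head>; this is the generic form (Google's instructions may have shown slightly different capitalization):

-----
<META NAME="ROBOTS" CONTENT="NOINDEX">
-----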

This morning the google account still shows the url removal as "pending" but when I do "site:" and "allinurl" searches the offending URL is gone and my index URL is back.

Conclusions and Speculations:
At some point last September, Google cached the hijack page's URL pointing to my site. In January, Google penalized my site for duplicate content because it found both URLs and compared them. Mine got penalized because it was the only page that really existed; the hijacker's page didn't get penalized because it only existed as a redirect to my site.

Because my index page was now penalized, it dropped almost completely from the SERPs. Some of my supplemental pages still showed up for obscure searches, but none of my money terms.

Because I haven't been able to get a response from the hijacker's webmaster, the 302 is still in place but it is buried deep in his site and the last Google cache of the page was sometime in September. Therefore with some luck Google won't re-index it any time soon.

Will my site return to the SERPs? I don't know. Any thoughts?

 

Reid
msg:756754
 8:05 pm on Mar 24, 2005 (gmt 0)

Adding to that: I really frown on any type of redirect to my site using a blank META-refresh page. I think that might be the key to the hijack problem.

A normal tracking 302 just runs through a script. Someone clicks a link on notbadguy.xyz, the script takes the id#, swaps in your URL, and records the click. Nothing wrong there.
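A minimal sketch of that kind of benign tracking script in classic ASP (the two helper functions are hypothetical):

-----
<%
' out.asp?id=123 - look up the stored target URL for this id and log the click
Dim id, target
id = Request.QueryString("id")
target = LookupUrl(id)   ' hypothetical helper: map the id# to the stored URL
RecordClick id           ' hypothetical helper: record the click-thru
Response.Redirect target ' classic ASP sends a 302 here
%>
-----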

When the script sends the user to a blank page with a META refresh (without a noarchive tag), that's when Google assigns the victim's content to that blank page.

accidentalGeek
msg:756755
 8:16 pm on Mar 24, 2005 (gmt 0)

Thank you for the quick replies and for taking the time to think about the proposed solution.

aleksl, you're right. My solution would break down in the scenario you describe, but how likely is it? By sheer coincidence, the same googlebot (or a second googlebot with the same user-agent and same IP address) would have to hit victim.xyz first and badguy.xyz second at very close to the same time.

Reid, thank you for the description of how googlebots work in concert. This is new information to me. I don't understand how it affects the solution I proposed, however. The filter on victim.xyz does not know why it is being hit by a robot (whether from an immediately followed 302 redirect, a stored 302 redirect, or some other unknown reason) and it does not care. It cares only whether it has seen this robot very recently. If it does not recognize the robot, it will issue the 301 pointed at itself to ensure that the robot has the correct URL. Am I missing something important?

accidentalGeek
msg:756756
 8:26 pm on Mar 24, 2005 (gmt 0)

At risk of getting out of sync, here's a reply to Reid's concern about meta-tags.

I was not thinking about implementing this solution using any sort of HTML. I agree with you that this would be clumsy and problematic for all sorts of reasons. I was thinking of a pure HTTP solution implemented by the web server itself. It has nothing to do with content.

HTTP 301 redirects are very common on the Web. Most of us don't think about them because they happen transparently. The most common example is when a client requests a directory without the trailing slash: the client asks for "http://mysite.xyz/somedir" and the server responds with a 301 to "http://mysite.xyz/somedir/". I found a whole bunch of these in an Apache httpd access log yesterday. One of them involved a googlebot which immediately followed the 301 to the correct URL.
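You can watch this happen with curl (hypothetical host and directory):

-----
$ curl -I http://mysite.xyz/somedir
HTTP/1.1 301 Moved Permanently
Location: http://mysite.xyz/somedir/
-----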

g1smd
msg:756757
 9:07 pm on Mar 24, 2005 (gmt 0)

>> If it does not recognize the robot, it will issue the 301 pointed at itself to ensure that the robot has the correct URL. Am I missing something important? <<

Your "correct URL", however many times you "correct" it, has the same content as the URL that is throwing out the 302 redirect, and you are about to be dropped as a duplicate of it.

accidentalGeek
msg:756758
 9:59 pm on Mar 24, 2005 (gmt 0)

g1smd, if I understand you, you're saying that [badguy.xyz...] produces the same content as [victim.xyz...]. From the standpoint of an ordinary web browser operated by a human being, that certainly is true. The user types "http://www.badguy.xyz" and sees the content from victim.xyz.

However, there are some HTTP conversations going on beneath the surface that (I believe) a robot will find significant. The difference lies in the distinction between a 301 and a 302, the same distinction on which the exploit is based.

For easy reference, here's a link to the official definitions of the http status codes: [w3.org ]

HTTP 301 indicates that the content has been moved permanently and "any future references to this resource SHOULD use one of the returned URIs." In other words, clients are expected to stop looking for this content using the old URL.

HTTP 302 indicates that the content has been moved temporarily and clients are expected to continue looking for the content using the old URL.

The solution that I'm proposing attempts to ensure that robots will always be given a 301 redirect to the full, proper name of the web host before they are permitted to read content from it. This tells the robot that it should update its records and always use the full, correct URL for future requests. My hope is that the robot will recognize this and start indexing under the correct URL (victim.xyz) rather than the incorrect one (badguy.xyz).

-------------
robot: [badguy.xyz...]

badguy: 302 Location: [victim.com...]

robot: [victim.com...]

victim: 301 Location: [victim.com...]

robot: [victim.com...]

victim: 200 <HTML...
-----------------
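For what it's worth, here is a minimal .htaccess sketch of that "always 301 to the canonical host" idea, assuming Apache with mod_rewrite (the hostname is a placeholder):

-----
RewriteEngine On
# If the Host header isn't the full, correct name, 301 to it
RewriteCond %{HTTP_HOST} !^www\.victim\.xyz$ [NC]
RewriteRule ^(.*)$ http://www.victim.xyz/$1 [R=301,L]
-----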

g1smd
msg:756759
 10:09 pm on Mar 24, 2005 (gmt 0)

Yes, that is all very well, but the search engine spider is not coming from the bad domain when it looks at your site. It is just running through a list of URLs to spider, a list that has been previously stored in its "to do list" database by other bots.

The indexer already has your content attributed to the bad domain. When it sees the same content at a different URL, then that duplicate content gets marked down. You can redirect at your site as much as you like, you're still a duplicate of some other remote URL.

accidentalGeek
msg:756760
 10:28 pm on Mar 24, 2005 (gmt 0)

Thanks for continuing to bear with me as I muddle through my explanations. I think we're getting closer.

but the search engine spider is not coming from the bad domain when it looks at your site.

I don't see why that makes any difference. The filter sees only that it cannot verify where the request is coming from, so it issues a 301.

The indexer already has your content attributed to the bad domain.

Where did it find the content? It certainly didn't get it from victim.com unless the filter was confident that the robot had already been through its 301 hoop.

To answer my own question, I suppose that badguy could have copied (or, if he's really clever, mirrored) the page from victim that he's trying to take over. In fact, he could copy (or mirror) the victim's entire web site. If he did that, there's no way in the world for the robot to tell which site is the original and which is the copy. This approach would be far worse than the current HTTP 302-based one. There would be no defense at all against it. But I digress...

accidentalGeek
msg:756761
 10:33 pm on Mar 24, 2005 (gmt 0)

Or, if he's really really clever, badguy could set himself up as a reverse http proxy to victim's web site. This would require no additional storage on his part and the two sites would essentially be identical.

See what happens when I start thinking like a badguy?

aleksl
msg:756762
 10:53 pm on Mar 24, 2005 (gmt 0)

I have a feeling Google is trying to fix this, and the fix is breaking everyone else.

Here's an example. We have a site that doesn't have an index page; it used to be 302-redirected to /somefolder/index (so that folder's page was made the "main" page of the site). I suspect that there I created "duplicate content" against myself.

The "duplicate content" filter clearly has to be temporarily turned off until the 302 issue is fixed. As I recall, copyright is enforced by copyright owners, which they can go back to doing. I'd rather see 10-15 copies of our pages on the net than have my site completely disappear from the SERPs.

accidentalGeek, think about what happened after line 2 of your conversation. GBot thinks that badguy.xyz/someredirectscript is actually a page that has been "temporarily moved" to victim.com. After that, redirect all you want: as long as GBot eventually fetches the page, it thinks that badguy.xyz/someredirectscript is a copy of that page.

accidentalGeek
msg:756763
 11:12 pm on Mar 24, 2005 (gmt 0)

aleksl,

I suspect you're right about Google trying to fix this. I'd be willing to wager that their new batch of competitors is doing the same, in the hope that there will be some well-publicized incident that can be used to shift some users their way.

As far as the issue you raise about my proposal, it really comes down to how a particular robot implements that all-important "SHOULD" in the HTTP 1.1 specification for response code 301. If the robot behaves like a good citizen, it will scrap the old reference it had on file (badguy.xyz), replace it with the new one (victim.xyz), and the defense will be successful. However, in RFC speak, SHOULD is only a strong recommendation. It is not a requirement. The client is free to do whatever it pleases, and there's no way that the server can control what the client does with the information it gathers.

Right now, however, I'm getting more and more nervous about the ramifications of that reverse proxy idea I threw out. Part of me worries that I said too much. Another part argues that someone more clever and more evil than me has already thought of it and is in the process of hijacking bank accounts.

idoc
msg:756764
 11:16 pm on Mar 24, 2005 (gmt 0)

I am hesitant to say anything because, as of right now, the whole world does *not* apparently know exactly how this is done, and I am simplifying here for brevity's sake. But integral to this working, IMHO, is a link on site"c" of the form site"b"/?id=foo. The bot spiders that link from the respected, high-PR site"c". Some time later (it really doesn't matter how much later) the bot calls site"b"/?id=foo to index it. The link exists just as site"c" said it does... except that when that link is called from the bot's IP address range, site"b" issues a refresh or other server-side directive that redirects the bot to site"a". Site"a" delivers the content with a 200 response code just as if it were any other GET request. The webmaster of site"a" is none the wiser from analyzing his logs. The bot attributes content from site"a" as belonging to site"b". When a surfer clicks the actual link on site"b", it goes wherever the webmaster of site"b" wants it to go. It need not be the destination page that was indexed.
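Laid out in the same style as accidentalGeek's trace above, the exchange idoc describes looks roughly like this (a sketch; the sites are hypothetical):

-------------
bot: GET site"b"/?id=foo (link found on high-PR site"c")

site"b": refresh/302 -> site"a" (served only to the bot's IP range)

bot: GET site"a"

site"a": 200 <HTML... (content now attributed to site"b"/?id=foo)
-----------------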

Maybe you could have Apache *always* issue a 301 redirect to itself no matter how a page is called... I think that would cause a lot of grief to the site in the long run. I still maintain, IMHO, that having at least *some* absolute URLs in your site pointing back to the site index helps to immunize against this. I can't explain why convincingly, as I am not privy to the inner workings of the bot. The only thing I know is that *if* this page is spidered and attributed to some site"b", it will contain an absolute link to a site with original and duplicate content. I think that is poison to that URL for site"b" with the bot.

aleksl
msg:756765
 11:28 pm on Mar 24, 2005 (gmt 0)

accidentalGeek, trying again, here's how your example will not work:

a = badguy.xyz
b = victim.xyz/somepage
c = victim.xyz/somepage (or other page, this is where you do 301).

a --302--> b --301--> c

GBot will store URL (a) because it is 302-ed. After that, do whatever you want, even a 301: GBot will then drop URL (b) but retain URL (c).

See, it's like in math: if a=b and b=c, then a=c. So a and c remain, and there's nothing you can do about a. :)

Not to knock you down. Funny, this was the first idea that came to my head too when I heard about the issue.

GuinnessGuy
msg:756766
 11:55 pm on Mar 24, 2005 (gmt 0)

Greetings,

I believe that I too have fallen victim to the 302 jacker mess. I contacted one of the culprits who now directs to somewhere else on his site, not mine. I suppose that means I'm outta luck until Google follows that link again and updates the cache.

Question: Is there any way to force, or at least encourage, Google to update the cache so that it doesn't contain my index page? Today, in an attempt to do this, I did an addurl through Google with the thought that it would force a spidering and, hopefully, a refresh of the cache. The cache as it stands now is dated 3 Nov '04.

I have one other 302 that shows up when I do a site:mysite.com search in Google. It is still active, so I MAY try to remove it via Idaho's method. The one question I have: can I use a noindex, nofollow meta tag that applies only to Googlebot? I'm really terrified that during the minute or so this noindex tag is active, the Yahoo bot will come around.

On a more pleasant note, I did send a note to Google with the 'canonicalpage' term in the subject and explained that I thought our site had been hijacked. I got a three-sentence reply back saying that they were passing my email on to the engineers for investigation. That doesn't exactly sound like a canned response to me. If anyone with experience dealing with Google has an opinion about that, I'd like to hear it. This debacle has caused much grief and not a little bit of poverty. :(

GuinnessGuy

Jim_at_SFE
msg:756767
 11:58 pm on Mar 24, 2005 (gmt 0)

>>> I had a domain hijacked, by a far lower PR page. <<<

Contrary to what "GoogleGuy" said at the other forum, it is clearly possible for that to happen. Every page that was hijacked from my site was replaced in the main Google index by the hijacking page. With those pages removed and credited to another site, my site's page rankings dropped across the board.

The reason this affects Google is that many of the 302 redirects are created innocently by people working with CMS programs that use them to track click-thrus on links. The more popular a site, the more 302-redirect links it will likely get, and the farther it will drop in the SERPs -- essentially turning a significant part of Google's ranking system on its head. Links that were once counted in favor of a site's ranking are now counted against it when they're made as 302 redirects. As a result, Google's search results are far less accurate. For example, some searches that used to bring up pages from my site now bring up their titles and descriptions but the links go to some directory in Belgium that is using 302 redirects. Google's users will eventually get sick of that and go elsewhere if Google doesn't fix the problem.

accidentalGeek
msg:756768
 12:04 am on Mar 25, 2005 (gmt 0)

quoth idoc:

Maybe you could have Apache *always* issue a 301 redirect to itself no matter how a page is called... I think that would cause a lot of grief to the site in the long run.

Not to mention the self-inflicted DDOS attack as every client gets caught in an infinite loop and pounds the daylight out of the server ;)

Always issuing a 301 was the first idea I blurted out at slashdot. Ten minutes later I realized what I had done.

GuinnessGuy
msg:756769
 12:11 am on Mar 25, 2005 (gmt 0)

Greetings,

Speaking of 301s, can anyone tell me how to 301 from non-www to www using ASP? Or is this even a job for ASP? Basically, our site is running on an IIS server. There have been plenty of code snippets for Apache offered here, but none for ASP/IIS. Is no one using IIS anymore?

GuinnessGuy

accidentalGeek
msg:756770
 12:17 am on Mar 25, 2005 (gmt 0)

quoth aleksl:

GBot will store URL (a) because it is 302-ed. After that, do whatever you want, even a 301: GBot will then drop URL (b) but retain URL (c).

Rats. I think I'm starting to see it now. Googlebot treats a 302 as though the *content* it receives from the redirection were at the original URL, regardless of how it might obtain that content. I suppose this makes sense considering that 302 means "this stuff usually lives here but it's out for a while." Googlebot takes the 302 at its word and presumes that the content will come home eventually.

I'm also starting to see how difficult this will be for Google (and every other search engine) to correct without breaking all sorts of other things.

Thanks for setting me straight.

accidentalGeek
msg:756771
 12:21 am on Mar 25, 2005 (gmt 0)


Is no one using IIS anymore?

Only in my dreams.

But seriously, a redirect is an HTTP header, which you should be able to control using the Response object from ASP (or at least that's the way it was when I was an ASP guru back in the dark ages). It's as simple as setting the response status to 301 and adding a Location: header that points to the new location.
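In other words, something like this minimal classic-ASP sketch (the target URL is a placeholder):

-----
<%
' Send a permanent redirect and stop processing the page
Response.Status = "301 Moved Permanently"
Response.AddHeader "Location", "http://www.yourdomain.com/newpage.asp"
Response.End
%>
-----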

g1smd
msg:756772
 12:44 am on Mar 25, 2005 (gmt 0)

>> Speaking of 301's, can anyone tell me how to 301 from non-www to www using ASP? <<

A Google search for 301 Redirect ISAPI Rewrite may find what you need.
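If that search leads you to the ISAPI_Rewrite filter, the non-www to www rule looks something like this (from memory of the 2.x syntax, so treat it as a sketch and check the product docs; the domain is a placeholder):

-----
[ISAPI_Rewrite]
# 301 any non-www host to the www hostname
RewriteCond Host: ^yourdomain\.com$
RewriteRule (.*) http\://www\.yourdomain\.com$1 [I,RP]
-----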

idoc
msg:756773
 12:49 am on Mar 25, 2005 (gmt 0)

"self-inflicted DDOS attack as every client gets caught in an infinite loop" ;)

You could limit the redirects to one per GET, but still, I don't think a 301 to yourself buys you anything in and of itself. Maybe if you randomize the page... I had looked at redirecting the bot's IP only to a subdomain that I disallowed for all other bots. But a funny thing happened... I don't have the problem anymore. I am seeing server logs 2-3 times their former size these past days and traffic I haven't seen in over a year.

"contacted one of the culprits who now directs to somewhere else on his site, not mine."

That's the thing... the link likely *never* pointed to what was indexed from your site to begin with. The refresh or programmatic redirect creates a "phantom URL". That is why the bots can't deal with this so easily. It's not that they don't care to. It's a tough spot for everybody concerned. Except, of course, the site"b" guys.

Lorel
msg:756774
 1:10 am on Mar 25, 2005 (gmt 0)


I did send a note to Google with the 'canonicalpage' term in the subject and explained that I thought our site had been hijacked. I got a three-sentence reply back saying that they were passing my email on to the engineers for investigation. That doesn't exactly sound like a canned response to me. If anyone with experience dealing with Google has an opinion about that, I'd like to hear it.

I wrote a similar message back in December and got the same reply. However, my case involved a shared IP address, with one person having a 302 from one of his sites to another, and it being applied to my client's site.

john316
msg:756775
 2:37 am on Mar 25, 2005 (gmt 0)

Maybe it would just be better to d/l a tracker.php and use it for your own site navigation, use 302s for linking on your site, throw up a few dozen mirrors, and maybe the dupe-zapping process at $G will let *your* hijacked content win. The more mirrors you have, the better the odds.

ARE YOU FEELING LUCKY?

GuinnessGuy
msg:756776
 2:44 am on Mar 25, 2005 (gmt 0)

Lorel,

You kind of left me hanging with that post. Did you have any evidence that the Google engineers did anything?

One other thing I might point out about the situation with our site: a site:mysite.com search in Google turns up about five entries of the form:

www.mysite.com/?referer=x

where 'x' takes different values. These all point to one file... our index page. All but one have a cache link in the entry, and when an entry has a cache link it also says 'Supplemental Result'. Our legit index page only shows as:

www.mysite.com/

with no title, description, or cache, unlike the entries with the referer=x.

Does this look like a different problem, apart from hijacking? There are certainly two 302 links from other sites, one being at the top of the site:mysite.com results, so I'm a bit confused. It seems possible that Google is doing what Yahoo did around this time last year, when it penalized us for duplicate content because it interpreted each of those

www.mysite.com/?referer=x

as unique URLs. Is Google now known to have this problem? If so, is there anything I can do about it? The cache for these entries is about 5 months old, so I get the feeling that Google doesn't spider them anymore. We've added a script to the index page such that if any request for these pages comes in with the referer=x, it returns a wholly different version of the index page (we no longer use variables like this, as we no longer use affiliates). But if Google isn't going to spider and refresh the cache, then this action won't do us a bit of good, I'm afraid.
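An alternative worth considering, in line with the 301 discussion earlier in the thread: instead of serving different content, 301 those stale ?referer=x URLs to the canonical home page so the duplicates collapse into one URL. A sketch in classic ASP, assuming the parameter name and domain from the post above (a suggestion, not something tested here):

-----
<%
' At the very top of the index page: permanently redirect
' any ?referer=x request to the canonical URL
If Request.QueryString("referer") <> "" Then
    Response.Status = "301 Moved Permanently"
    Response.AddHeader "Location", "http://www.mysite.com/"
    Response.End
End If
%>
-----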

GuinnessGuy

Emmett
msg:756777
 2:49 am on Mar 25, 2005 (gmt 0)


Speaking of 301s, can anyone tell me how to 301 from non-www to www using ASP? Or is this even a job for ASP? Basically, our site is running on an IIS server. There have been plenty of code snippets for Apache offered here, but none for ASP/IIS. Is no one using IIS anymore?

GuinnessGuy

This is what I use for the non-www to www redirect in ASP:

-----
' If the request came in without "www.", send a 301 to the
' same path on the www hostname (note: query strings are not preserved)
PathInfo = Request.ServerVariables("PATH_INFO")
ServerName = Request.ServerVariables("SERVER_NAME")
IsWWW = InStr(ServerName, "www.yourdomain.com")

If IsWWW < 1 Then
    NewLocation = "http://www.yourdomain.com" & PathInfo
    Response.Status = "301 Moved Permanently"
    Response.AddHeader "Location", NewLocation
    Response.End ' stop here so only the redirect is sent
End If

GuinnessGuy
msg:756778
 3:03 am on Mar 25, 2005 (gmt 0)

Hi Emmett,

Thanks loads.

One more thing...care to tell this newbie where that bit of code resides?

GuinnessGuy

Lorel
msg:756779
 3:05 am on Mar 25, 2005 (gmt 0)


You kind of left me hanging with that post. Did you have any evidence that the Google engineers did anything?

No, they just replied that they would look into it, like they did with your message. I'm trying to get the client to upgrade to a dedicated IP, which would solve this problem.

Reid
msg:756780
 4:57 am on Mar 25, 2005 (gmt 0)

GuinnessGuy - it looks to me like you are getting jacked by five pages. Think about it: they have your title and description, but you don't. You are the "temporary location" of those five pages, and if Google ever gets around to deciding who wins the dup-content race, you are the first one out of the algo.

To remove a page from Google without disturbing other bots:
<META NAME="GOOGLEBOT" CONTENT="NOINDEX, NOFOLLOW">

Kimkia
msg:756781
 5:21 am on Mar 25, 2005 (gmt 0)

I just used the google remove url tool to take down 3 hijackers of my home page. I've been avoiding this all week, because years of work have gone into my site, and the risk associated with NOINDEX, NOFOLLOW scares me silly.

I worked fast -- uploaded my homepage with the NOINDEX, NOFOLLOW, then quickly entered three hijacking urls in the google tool. As soon as that was done, uploaded my home page again, with meta changed back to INDEX, FOLLOW.

But, holy cow - I had to put my entire site (a huge part of my life) at risk to deal with hijackers, one of which had #3 spot in my allinurl:site.com search, with a PR5, same as my site.

Hijacker #3 was a commercial site that sells items appealing to my readers. I do NOT sell these items - I provide original content, how to's, free directions for people to learn how to make, do, and create - and income for me comes from AdSense and Fastclick mostly...like a magazine subsidized by advertising.

Hijacker #3 also gets income from AdSense. How did they get PR5? Maybe it's because their directory hijacks site after site with 302 redirects... you "preview" every site in their directory... they pull URLs into a small window box that displays the home page of every site thus victimized.

Another one truly disturbs me... the site title includes mine, preceded by: "The fastest way to get your sites spidered by all major search engines..."

Find this site, and you get a directory of hijacked sites, surrounded by Searchfeed ads and a search box that looks like a Google search, plus a Google AdSense skyscraper. No information about search engine spiders... just an invite to add your link to their directory, which is composed of... what else? 302 redirects.

This piece of work has a PR4.

From my perspective, four years of solid hard work on a site that offers solid original content is threatened by these hijackers.

Emmett
msg:756782
 5:30 am on Mar 25, 2005 (gmt 0)


One more thing...care to tell this newbie where that bit of code resides?

I just put it in an ASP include file that I include at the top of all my pages with a <!--#include virtual="/inc/filename.asp"--> statement. Just make sure it's got <% %> tags around it in the code file. Also, it's a good idea to test it on one page before you upload it to all.

Net_Warrior
msg:756783
 6:28 am on Mar 25, 2005 (gmt 0)

Okay, so my site was hijacked by a site that pulled an absolute URL of my homepage and put it inside a frame with the hijacker's URL. Of course google cached it and it appears in allinurl:mysite.com

My question is this: can I successfully delete the link if I temporarily change the name of my site's public_html folder, so that my site temporarily does not exist, and then do a remove-dead-link request in Google?

What are the risks of doing it this way?

Thanks for any advice. If this doesn't work, what are my options?

Reid
msg:756784
 7:34 am on Mar 25, 2005 (gmt 0)

Net_Warrior - as long as the link returns a 404 'page not found' when you use the removal tool, you will get a 'successful' result. It then takes 24 hours for it to disappear.

The risk of taking your site offline: all robots (and visitors) who visit during this time will get a 404.

Try adding the META tag in my previous post to the page the hijack points at, and then remove the tag after you remove the URLs. The only risk is that googlebot will be turned away (for that page) during this time if it happens to be crawling your site at the same time.

What this tag does: before the removal tool reports 'successful', it sends googlebot to see whether the page exists. This tag will stop googlebot in its tracks and act like a 404.
