How Search Engines See Site and Page Duplication
If content with keywords is good, then twice as much content is better, and three times as much is better still, right? Wrong! Some site developers have duplicated pages and even entire sites, making virtual photocopies and adding the pages to the site or placing duplicated sites at different domain names.
Sometimes called mirror pages or mirror sites, these duplicate pages are intended to help a site gain more than one or two entries in the top positions. If you can create three or four web sites that rank well, you can dominate the first page of the search results.
Some people who use this trick try to modify each page just a little to make it harder for search engines to recognize duplicates. But search engines have designed tools to find duplication and often drop a page from their indexes if they find it’s a duplicate of another page at the same site.
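To see why lightly modifying a copied page doesn't fool duplicate detection, consider a simple near-duplicate check based on word shingles (overlapping runs of words) and Jaccard similarity. This is only an illustrative sketch with made-up sample text, not how any particular search engine actually works; production systems use far more scalable fingerprinting techniques, but the underlying idea is similar:

```python
# Illustrative near-duplicate detector: compare pages by the overlap
# of their word shingles (k-word sliding windows). A page with a few
# words swapped still shares most shingles with the original.

def shingles(text, k=4):
    """Return the set of k-word shingles from the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical page text for demonstration:
original = "acme widgets are the best widgets money can buy today"
tweaked = "acme widgets are the finest widgets money can buy today"
unrelated = "our bakery sells fresh sourdough bread every morning"

# The lightly edited copy still shares shingles with the original;
# the unrelated page shares none.
print(jaccard(shingles(original), shingles(tweaked)))    # well above zero
print(jaccard(shingles(original), shingles(unrelated)))  # zero
```

Even though only one word changed, the copy shares most of its shingles with the original, so it scores far closer to the original than any genuinely different page does, which is exactly how slightly tweaked duplicates get caught.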
Duplicate pages found across different sites are often acceptable (which is why content syndication can work well when done right), but search engines frown on entire duplicate sites.
Here are a couple of variations on the duplication theme:
Page swapping: In this now little-used technique, one page is placed at a site and then, after the page has attained a good position, it's removed and replaced with a less optimized page. One serious problem with this technique is that major search engines often reindex pages very quickly, and you can't predict when a search engine will return, so the swapped-in page may lose its position almost immediately.
Page jacking: Some truly unethical search engine marketers have copied other people's high-ranking web pages and passed them off as their own, in effect stealing pages that perform well, at least for a while.