#1 – Paraphrasing Mike Grehan, author of The Search Engine Marketing Book
If you create a page about a unique word or phrase, you can easily rank for that term.
As an example, if you were to create a site about xandrough, Google would want to return that site when a searcher used that term, because your site would be the most relevant one for it. The problem, of course, is that nobody is likely to type that word into any search engine, because it does not exist, so ranking for it won't do any good.
However, how about a keyphrase like:
Landscape Design Sussex County, NJ
Now, the possibilities of this concept seem pretty exciting for landscapers located in northwestern NJ. The sites returned in the organic results lack a sense of relevancy. Is that because, like xandrough, the phrase is not likely to be typed into the engine? I don’t think so; as you can see, many landscapers are competing for the phrase in AdWords. It actually looks to me like a phrase that sits juicily close to a conversion worth thousands of dollars.
Try a search like this for your business plus a regional term. If you don’t find many sites that match that query, there may be an opportunity.
#2 – Search Engines Return Pages Not Sites on the Results Page
With this in mind, for the example above the page has to be about Landscape Design in Sussex County, not the site as a whole. In other words, if one page on your site is about landscape design and another page is about Sussex County, no page of your site will come up for the keyphrase ‘landscape design sussex county’.
This can be used to the advantage of the linkless site, making it possible to come up in the results for multiple search queries. For example, one page could be for Lawn Maintenance, while another could be Landscape Design.
#3 – The Order and Proximity of The Keywords Matter
Write a sentence or phrase that contains both the service and the geographic term, keeping those terms in close proximity.
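Here is a rough sketch of what I mean, using the landscaping example (the business name is just a placeholder, not a recommendation):

<html>
<head>
<title>Landscape Design in Sussex County, NJ | Example Landscaping</title>
</head>
<body>
<h1>Landscape Design in Sussex County, NJ</h1>
<p>We provide landscape design for homes throughout Sussex County, New Jersey.</p>
</body>
</html>

The service and the geographic term sit right next to each other in the title, the main heading and the first sentence of the page.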
#4 – Time
It will take some time before the search engines trust a linkless site. So, you may just have to wait. However, if your site is a couple of years old and you re-write it, applying some of these principles, you may be able to rank for your terms as soon as the next time Googlebot visits your site.
To speed things up even more, you may want to add your business listing to Google Maps and verify it. After filling out the information you will receive an automated telephone call, and you will have to enter the numbers they provide you.
Also, you could fill out the business profile on superpages.com. Don’t just set up a listing; make sure you add your URL to the business profile… Google only spiders business profiles on superpages.com, not the rest of the site.
The above options are free… the links they provide won’t have any value in ranking your site in search results, but they may help your site get indexed more quickly by Google. And both get traffic in their own right and could score you some work.
Sunday, September 19, 2010
Competitive Keywords and Domain Names
Highly competitive keywords and phrases
Keyphrases such as personal loan and cheap mortgage are highly competitive. The competition is very fierce to rank highly in search engines for these keywords and phrases, as they are part of a very lucrative market, especially in the current economic climate. The top websites (those shown in the first 30 results for personal loan) will have spent a lot of time and money reaching their high ranking and will be proactive in staying there. In this case, other methods of promotion could offer a cheaper and quicker source of targeted visitors to your site. These methods include:
1. Viral marketing
2. Conventional forms of advertising such as radio
3. E-mail marketing
4. Paid-for Internet advertising such as Google AdWords
Domain name
If possible, purchase a domain name that sums up your products and services. There is no proof that this helps within search engines but it does help in many other areas. For example, some people looking for specific information may just enter a domain name into a browser address bar and hope to strike lucky instead of using a search engine.
An example would be www.bouncycastlehire.co.uk. The domain name sums up exactly what this company provides, is very easy to remember and stands out in a page of search engine results. Standing out in a page of SERPs is important, as you will have 9 other websites to compete against.
Imagine you have a family and are looking to buy a new car. At the showroom you are faced with a row of 10 cars, all with the same technical specification, but one has a sign in it that states “ideal for families”. Would you request details about this car first? If you can create the perception that your website has the information required by the domain name alone, then you have already generated interest.
Try to stay away from hyphenated domain names if possible. Although this has no negative effect in the search engines, many people forget to include the hyphen and may reach one of your competitors’ websites instead. Research what other domain names with similar spelling there are and check to see if they are competitors. Deliberately purchasing a domain name very similar to a direct competitor’s (e.g. the same words but spelt or arranged differently) can attract a negative legal response from the other owner, especially if you provide similar products or services.
Having to type www.InternetConsultancyandManagement.com into a browser would be unusually cruel!
Sunday, September 12, 2010
What The Search Engines Say - Teoma
To truly understand what sets Teoma apart from the competition, you first need to know the 3 primary techniques used by search engines today:
Text Analysis - determines a site’s relevance by the text on the page. This technique was fine when the Web was small and spammers couldn’t artificially increase their rankings.
Popularity - determines that the more links there are to a site, the more popular it is. However, this is not necessarily the best judge of relevance.
Status - goes beyond popularity by analyzing the importance or "status" of the sites providing the incoming links. But this lacks context because it doesn’t calculate whether the incoming links are related to the search subject.
Subject-Specific Popularity - this is the Teoma difference. Teoma uses elements of the above three techniques, but it does more. Rather than rely on the recommendations of the entire Web audience to determine relevance, Teoma technology uses Subject-Specific Popularity to connect with the authorities - the experts within specific interest groups that guide it to the best resources for a subject.
The Teoma Web Crawler FAQ
The Teoma Crawler is Ask Jeeves' Web-indexing robot (or crawler/spider, as they are typically referred to in the search world). The crawler collects documents from the Web to build the ever-expanding index for our advanced search functionality at Ask Jeeves (Ask.com and Ask.co.uk) and Teoma.com (among other Web sites that license the proprietary Teoma search technology).
Teoma is unlike any other search technology because it analyzes the Web as it actually exists - in subject-specific communities. This process begins by creating a comprehensive and high-quality index. Web crawling is an essential tool for this approach, and it ensures that we have the most up-to-date search results.
Q: What is a Web crawler/Web spider?
A: A Web crawler (or, spider or robot) is a software program designed to follow hyperlinks throughout a Web site, retrieving and indexing pages to document the site for searching purposes. The crawlers are innocuous and cause no harm to an owner's site or servers.
Q: Why does Teoma use Web crawlers?
A: Teoma utilizes Web crawlers to collect raw data and gather information that is used in building our ever-expanding search index. Crawling ensures that the information in our results is as up-to-date and relevant as it can possibly be. Our crawlers are well designed and professionally operated, providing an invaluable service that is in accordance with search industry standards.
Q: How does the crawler work?
A: The crawler goes to a Web address (URL) and downloads the HTML page. The crawler then follows hyperlinks from the page, which may point to URLs on the same site or on different sites. The crawler adds new URLs to its list of URLs to be crawled. It continually repeats this function, discovering new URLs, following links, and downloading them.
The crawler excludes some URLs if it has downloaded a sufficient number from the Web site or if it appears that the URL might be a duplicate of another URL already downloaded. The files of crawled URLs are then built into a search catalog. These URLs are displayed as part of search results on the sites powered by Teoma's technology when a relevant match is made.
Q: How frequently will the Teoma Crawler download pages from my site?
A: The crawler will download only one page at a time from your site (specifically, from your IP address). After it receives a page, it will pause a certain amount of time before downloading the next page. This delay time may range from 0.1 second to hours. The quicker your site responds to the crawler when it asks for pages, the shorter the delay.
Q: How can I tell if the Teoma crawler has visited my site/URL?
A: To determine whether the Teoma crawler has visited your site, check your server logs. Specifically, you should be looking for the following user-agent string:
User-Agent: Mozilla/2.0 (compatible; Ask Jeeves/Teoma)
Q: How did the Teoma Web crawler find my URL?
A: The Teoma crawler finds pages by following links (HREF tags in HTML) from other pages. When the crawler finds a page that contains frames (i.e., it is a frameset), the crawler downloads the component frames and includes their content as part of the original page. The Teoma crawler will not index the component frames as URLs themselves unless they are linked via HREF from other pages.
Q: What types of links does the Teoma crawler follow?
A: The Teoma crawler will follow HREF links, SRC links and re-directs.
Q. Does the Teoma crawler include dynamic URLs?
A. We include a select number of dynamic URLs in our index. However, they are screened to detect likely duplicates before downloading.
Q: Why has the Teoma crawler not visited my URL?
A: If the Teoma crawler has not visited your URL, it is because we did not discover any link to that URL from other pages (URLs) we visited.
Q: How do I register my site/URL with Teoma so that it will be indexed?
A: We appreciate your interest in having your site listed on Ask Jeeves and the Teoma search engine. It is important to note that we no longer offer a paid Site Submission program. As a result of some recent enhancements to Teoma, we're confident that we're indexing even more Web pages than ever, and that your site should appear in our Search index as a result of our ongoing "crawling" of the Web for new and updated sites and content.
If you are the owner/webmaster of a site in question, you may also want to research some online resources that provide tips and helpful information on how to best create your Web site and set up your Web server to optimize how search engines look at Web content, and how they index and trigger based upon different types of search keywords.
Q: Why aren't the pages the Teoma crawler has crawled showing up in the search results at Teoma.com?
A: If you don't see your pages indexed in our search results, don't be alarmed. Because we are so thorough about the quality of our index, it takes some time for us to analyze the results of a crawl and then process the results for inclusion into the database. Teoma does not necessarily include every site it has crawled in its index.
Sunday, September 5, 2010
What The Search Engines Say - MSN Part II
Items and techniques discouraged by MSN Search
The following items and techniques are not appropriate uses of the index. Use of these items and techniques may affect how your site is ranked within MSN Search and may result in the removal of your site from the MSN Search index.
- Loading pages with irrelevant words in an attempt to increase a page's keyword density. This includes stuffing ALT tags that users are unlikely to view.
- Using hidden text or links. You should use only text and links that are visible to users.
- Using techniques to artificially increase the number of links to your page, such as link farms.
About site ranking
MSN Search site ranking is completely automated. The MSN Search ranking algorithm analyzes factors such as page contents, the number and quality of sites that link to your pages, and the relevance of your site’s content to keywords. The algorithm is complex and never human-mediated. You cannot pay to boost your site’s relevance ranking; however, we do offer advertising options for site owners.
Each time the index is updated, you may notice a shift in your site’s ranking. As new sites are added and some sites become obsolete, previous relevance rankings are revised.
Although you cannot directly change your site’s ranking, you can optimize its design and technical implementation to enable appropriate ranking by most search engines.
About your site description
As the MSN Search web crawler MSNBot crawls your site, it analyzes the content on indexed pages and generates keywords to associate with each page. Then MSNBot extracts page content that is highly relevant to those keywords (often sentence segments that contain the keywords, or information in the description meta tag) to construct the site description displayed in search results. The page title and URL are also extracted and displayed in search results.
Updating your site description
Site descriptions are extracted from the content of your page each time MSNBot crawls your site and indexes its pages. If you change the contents of a page, you may see a change in the description the next time our index is updated.
Since the descriptions are extracted from your indexed web pages, the best way to affect your site description is to ensure that your web pages effectively deliver the information you want to see in search results.
Excellent content design and effective use of terms that target your message are the best ways to affect the site description that MSNBot extracts from your site. Effective strategies include:
Placing descriptive content near the top of each page.
Making sure each page has a clear topic and purpose.
Adding a site description to the description meta tag.
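To put that last point into practice, the description meta tag is a single line in the page's <head>. A minimal sketch (the wording is just a placeholder, and MSNBot may still build the description from your visible text instead):

<head>
<title>Lawn Maintenance in Sussex County, NJ</title>
<!-- One or two plain sentences that match the visible content of the page -->
<meta name="description" content="Weekly lawn maintenance and seasonal clean-ups for homes in Sussex County, NJ.">
</head>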
Sunday, August 29, 2010
What The Search Engines Say - MSN Part I
Technical recommendations for your website
Use only well-formed HTML code in your pages. Ensure that all tags are closed, and that all links function properly. If your site contains broken links, MSNBot may not be able to index your site effectively, and people may not be able to reach all of your pages.
If you move a page, set up the page's original URL to direct people to the new page, and tell them whether the move is permanent or temporary.
Make sure MSNBot is allowed to crawl your site, and is not on your list of web crawlers that are prohibited from indexing your site.
Use a robots.txt file or meta tags to control how MSNBot and other web crawlers index your site. The robots.txt file tells web crawlers which files and folders they are not allowed to crawl (a small meta-tag example follows these recommendations). The Web Robots Pages provide detailed information on the robots.txt Robots Exclusion standard.
Keep your URLs simple and static. Complicated or frequently changed URLs are difficult to use as link destinations. For example, the URL www.example.com/mypage is easier for MSNBot to crawl and for people to type than a long URL with multiple extensions. Also, a URL that doesn't change is easier for people to remember, which makes it a more likely link destination from other sites.
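As promised above, here is what the meta-tag option looks like. This is the standard robots meta tag, not anything MSN-specific; it asks crawlers not to index a single page or follow its links (a robots.txt file does the equivalent job for whole folders):

<head>
<title>Printer-friendly version</title>
<!-- Ask crawlers not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">
</head>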
Content guidelines for your website
The best way to attract people to your site, and keep them coming back, is to design your pages with valuable content that your target audience is interested in.
In the visible page text, include words users might choose as search query terms to find the information on your site.
Limit all pages to a reasonable size. We recommend one topic per page. An HTML page with no pictures should be under 150 KB.
Make sure that each page is accessible by at least one static text link.
Create a site map that is fairly flat (i.e., each page is only one to three clicks away from the home page). Links embedded in menus, list boxes, and similar elements are not accessible to web crawlers unless they appear in your site map.
Keep the text that you want indexed outside of images. For example, if you want your company name or address to be indexed, make sure it is displayed on your page outside of a company logo.
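For example (the business name and address here are made up), keep the logo as an image with a short alt attribute and put the details you actually want indexed in plain text next to it:

<body>
<!-- Crawlers can read the alt text, but not words drawn inside the graphic -->
<img src="logo.png" alt="Example Landscaping logo">
<!-- The name and address appear as ordinary, visible text so they can be indexed -->
<p>Example Landscaping<br>
123 Main Street, Newton, NJ</p>
</body>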
Sunday, August 15, 2010
Adsense Golden Rules
Google AdSense: scam or not? If you are one of those who tried Google AdSense and failed, or got banned when you were near your payout, it is disappointing, and even more so when Google does not tell you the reason. Why did Google kick you out? Well, most of the time we never know. Once you have been kicked out, you cannot join back. So, to prevent this, I have listed below a very, very abbreviated version of the Google AdSense T&Cs and policies. If you follow these rules you should have no problems.
* Do not encourage users to click your ads.
* Do not put ads on pages with no content, pop-up, pop-under, error page, registration or similar pages.
* Do not overlap ads with content in a way that makes it hard for users to distinguish between the two.
* Do not use automated bots to increase clicks.
* Do not encourage or participate in ‘click groups’ that click each other’s ads.
* Make sure you don’t display more than the maximum number of ads on a page. Check the Google AdSense rules.
* Do not create more than one AdSense account. You CAN have more than one site for a single AdSense account.
* Do not edit or modify the AdSense code (this does not include changing properties).
* Do not redirect users away from any advertiser’s page.
* Do not click your own ads (not even to test them).
* Do not display pornographic, hatred or any other banned content.
* Do not buy banned sites (typically MFA, or made-for-AdSense, sites) from others.
Enjoy!
Google.Com
Monday, August 9, 2010
What The Search Engines Say - Lycos
How do I improve the ranking of my web pages in search engines?
Although we cannot guarantee your placement within search results for particular keywords, the following tips will help you to ensure that your pages are spider friendly:
• Write great content that human searchers would understand and do not try to trick the Search Engine's algorithms.
• Use keywords that searchers use to find your web site in the meta-data. Use your web logs to determine the keywords. Don't guess!
• Make sure your web content mentions those keywords near the top of the page. For instance, place the keywords in the headline or in the first paragraph on the page.
• Repeat keywords more than once within your web page, but don't overdo it. Too much repetition is considered spam.
How can I make my site spider-friendly?
• Speed: If your site is slow, it will affect the length of time it takes to spider the web site. Try to build pages with few and small graphics.
• Title: Spiders won't index the information if TITLE tags are the same on every page. (TITLE tags are displayed at the very top of the browser.)
• Descriptions: META description tags can be included for each web page. These can provide a better search result description than a spider-created excerpt.
• Registration: Spiders can't traverse a site if there is a username/password in their way. If you must have users log in, set up a separate site where the spider can access the content.
• Search-based Sites: Spiders function by following hyperlinks. Purely search-based sites cannot be spidered. Therefore, create a "spider.html" file (i.e. a list of URLs on the site for the spider to traverse).
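A "spider.html" file of the kind described in the last point is nothing fancy - just a plain page of static links so the spider can reach your content. A minimal sketch (the filenames are placeholders):

<html>
<head><title>Site contents</title></head>
<body>
<ul>
<li><a href="index.html">Home</a></li>
<li><a href="services.html">Services</a></li>
<li><a href="contact.html">Contact</a></li>
</ul>
</body>
</html>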
Thursday, August 5, 2010
What The Search Engines Say - Yahoo!
Yahoo! strives to provide the best search experience on the Web by directing searchers to high-quality and relevant web content in response to a search query.
Pages Yahoo! Wants Included in its Index
• Original and unique content of genuine value
• Pages designed primarily for humans, with search engine considerations secondary
• Hyperlinks intended to help people find interesting, related content, when applicable
• Metadata (including title and description) that accurately describes the contents of a web page
• Good web design in general
Unfortunately, not all web pages contain information that is valuable to a user. Some pages are created deliberately to trick the search engine into offering inappropriate, redundant or poor-quality search results; this is often called "spam." Yahoo! does not want these pages in the index.
What Yahoo! Considers Unwanted
Some, but not all, examples of the more common types of pages that Yahoo! does not want include:
• Pages that harm accuracy, diversity or relevance of search results
• Pages dedicated to directing the user to another page
• Pages that have substantially the same content as other pages
• Sites with numerous, unnecessary virtual hostnames
• Pages in great quantity, automatically generated or of little value
• Pages using methods to artificially inflate search engine ranking
• The use of text that is hidden from the user
• Pages that give the search engine different content than what the end-user sees
• Excessively cross-linking sites to inflate a site's apparent popularity
• Pages built primarily for the search engines
• Misuse of competitor names
• Multiple sites offering the same content
• Pages that use excessive pop-ups, interfering with user navigation
• Pages that seem deceptive, fraudulent or provide a poor user experience
YST's Content Quality Guidelines are designed to ensure that poor-quality pages do not degrade the user experience in any way. As with Yahoo's other guidelines, Yahoo reserves the right, at its sole discretion, to take any and all action it deems appropriate to ensure the quality of its index.
Friday, July 23, 2010
What The Search Engines Say - Google - Part II
When your site is ready:
• Once your site is online, submit it to Google at http://www.google.com/addurl.html.
• Make sure all the sites that should know about your pages are aware your site is online.
• Submit your site to relevant directories such as the Open Directory Project and Yahoo!.
• Periodically review Google's webmaster section for more information.
Quality Guidelines - Basic principles:
• Make pages for users, not for search engines. Don't deceive your users, or present different content to search engines than you display to users.
• Avoid tricks intended to improve search engine rankings. A good rule of thumb is whether you'd feel comfortable explaining what you've done to a website that competes with you. Another useful test is to ask, "Does this help my users? Would I do this if search engines didn't exist?"
• Don't participate in link schemes designed to increase your site's ranking or PageRank. In particular, avoid links to web spammers or "bad neighbourhoods" on the web as your own ranking may be affected adversely by those links.
• Don't use unauthorized computer programs to submit pages, check rankings, etc. Such programs consume computing resources and violate our terms of service. Google does not recommend the use of products such as Webposition Gold™ that send automatic or programmatic queries to Google.
Quality Guidelines - Specific recommendations:
• Avoid hidden text or hidden links.
• Don't employ cloaking or sneaky redirects.
• Don't send automated queries to Google.
• Don't load pages with irrelevant words.
• Don't create multiple pages, subdomains, or domains with substantially duplicate content.
• Avoid "doorway" pages created just for search engines, or other "cookie cutter" approaches such as affiliate programs with little or no original content.
These quality guidelines cover the most common forms of deceptive or manipulative behaviour, but Google may respond negatively to other misleading practices not listed here (e.g., tricking users by registering misspellings of well-known web sites). It's not safe to assume that just because a specific deceptive technique isn't included on this page, Google approves of it. Webmasters who spend their energies upholding the spirit of the basic principles listed above will provide a much better user experience and subsequently enjoy better ranking than those who spend their time looking for loopholes they can exploit.
If you believe that another site is abusing Google's quality guidelines, please report that site at http://www.google.com/contact/spamreport.html. Google prefers developing scalable and automated solutions to problems, so we attempt to minimize hand-to-hand spam fighting. The spam reports we receive are used to create scalable algorithms that recognize and block future spam attempts.
Sunday, July 18, 2010
What The Search Engines Say - Google - Part I
Following these guidelines will help Google find, index, and rank your site, which is the best way to ensure you'll be included in Google's results. Even if you choose not to implement any of these suggestions, we strongly encourage you to pay very close attention to the "Quality Guidelines," which outline some of the illicit practices that may lead to a site being removed entirely from the Google index. Once a site has been removed, it will no longer show up in results on Google.com or on any of Google's partner sites.
Design and Content Guidelines:
• Make a site with a clear hierarchy and text links. Every page should be reachable from at least one static text link.
• Offer a site map to your users with links that point to the important parts of your site. If the site map is larger than 100 or so links, you may want to break the site map into separate pages.
• Create a useful, information-rich site and write pages that clearly and accurately describe your content.
• Think about the words users would type to find your pages, and make sure that your site actually includes those words within it.
• Try to use text instead of images to display important names, content, or links. The Google crawler doesn't recognize text contained in images.
• Make sure that your TITLE and ALT tags are descriptive and accurate.
• Check for broken links and correct HTML.
• If you decide to use dynamic pages (i.e., the URL contains a '?' character), be aware that not every search engine spider crawls dynamic pages as well as static pages. It helps to keep the parameters short and the number of them small (see the brief example after this list).
• Keep the links on a given page to a reasonable number (fewer than 100).
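To illustrate the point about dynamic pages, here are two made-up links to the same catalogue page; the parameter names and values are invented for the example, but the idea is that the first is far friendlier to a spider than the second:

<!-- Easier to crawl: one short parameter -->
<a href="catalog.html?cat=lawn">Lawn care products</a>

<!-- Harder to crawl: long, session-style parameters that make every visit look like a new URL -->
<a href="catalog.html?category=lawn&amp;sessionid=8f3a2c71&amp;sort=priceasc">Lawn care products</a>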
Technical Guidelines:
• Use a text browser such as Lynx to examine your site, because most search engine spiders see your site much as Lynx would. If fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site.
• Allow search bots to crawl your sites without session IDs or arguments that track their path through the site. These techniques are useful for tracking individual user behaviour, but the access pattern of bots is entirely different. Using these techniques may result in incomplete indexing of your site, as bots may not be able to eliminate URLs that look different but actually point to the same page.
• Make sure your web server supports the If-Modified-Since HTTP header. This feature allows your web server to tell Google whether your content has changed since we last crawled your site. Supporting this feature saves you bandwidth and overhead.
• Make use of the robots.txt file on your web server. This file tells crawlers which directories can or cannot be crawled. Make sure it's current for your site so that you don't accidentally block the Googlebot crawler. Visit http://www.robotstxt.org/wc/faq.html for a FAQ answering questions regarding robots and how to control them when they visit your site.
• If your company buys a content management system, make sure that the system can export your content so that search engine spiders can crawl your site.