Friday, March 31, 2006

What Makes the Perfect SEO Firm

SEO companies come in all shapes and sizes. You've got your solo SEOs, who either a) do everything themselves or b) sub-contract out many aspects of each campaign while maintaining tight control over the quality and results of the project.

Then you have your big SEO firms that employ 20+ people to handle various aspects of your account. These firms can often turn into SEO factories and lack the ability to treat each client individually, because everything is done in bulk.

A third type of SEO firm employs a small handful of people and may also sub-contract a few aspects of the campaign, but overall each client is treated with a personal, hands-on approach, with most of the campaign handled in-house under strict quality control measures.

The perfect SEM firm must have most, if not all, of the following:

Project Manager

The project manager is responsible for overseeing every aspect of the SEM campaign. Their job is to ensure that each aspect of the project is given to the appropriate person to complete, and to double-check all work so that the highest standards are met before it is passed on to the client. The Project Manager is the primary contact with the client and bears the responsibility of ensuring that the client knows their campaign is a top priority.

Lead Search Engine Optimizer

The Lead SEO spends much of their time doing nothing but research: researching SEO trends, search engine functionality, patents, white papers, etc. Their job is to stay on the cutting edge of the technological changes coming down the pike, and to apply this knowledge to each and every optimization campaign, from keyword research and organization to implementation of the on-page optimization factors. The Lead SEO also oversees any optimization techniques implemented by sub-SEOs, link builders, etc. who may be managing the day-to-day tracking and tweaking of any site.

Search Engine Optimizer

All other SEOs work directly under the Lead SEO and implement his "vision" of the perfectly optimized page. Each should do their own research and knowledge building, but any implementation should follow the outline presented by the Lead SEO. These SEOs are responsible for the day-to-day optimization changes, as well as continuous research to ensure that clients' sites are free from potential, currently unknown search engine roadblocks.

Coder/Programmer

While all SEOs must have significant knowledge of HTML, XHTML and CSS coding, it is helpful to have an additional person who is primarily responsible for the code optimization aspects. Optimizing and streamlining code can be a significantly time-consuming process, especially when ensuring each optimized page has full cross-browser compatibility. When not working on optimization code, such a person would be developing and improving in-house tools and reporting systems, as well as publicly available tools.

Copywriter

The importance of a professional copywriter cannot be overstated. SEO methods that simply take your keywords and try to place them into text are becoming less and less effective, not only for search engine placement, but for ensuring a quality user experience. Each optimized page should have its content re-written by a professional copywriter following project-specific keyword guidelines established by the Lead SEO. The SEO copywriter should have a marketing background and experience with ad testing and writing press releases, as well as general article content.

Link Researchers

This may be controversial to some, but it is increasingly important to have full-time link researchers continuously working on each campaign. Link researchers are responsible for seeking out other quality, related sites to do one or more of the following: link to important sites, request links from important and/or related sites, and submit links to quality, relevant directories.

As SEO becomes more marketing oriented, many more positions must be added for a firm to properly service their clients' needs. Speaking strictly from an SEO perspective, each of these roles is important to achieving success with your optimization campaign.

Friday, March 24, 2006

Working With the Robots.txt File

Topics:
What is the robots.txt file?
Working with the robots.txt file
Advantages of the robots.txt file
Disadvantages of the robots.txt file
Optimization of the robots.txt file
Using the robots.txt file

What is the robots.txt file?

The robots.txt file is an ASCII text file that gives search engine robots specific instructions about content they are not allowed to index. These instructions are a deciding factor in how a search engine indexes your website's pages. The universal address of the robots.txt file is www.example.com/robots.txt. This is the first file that a robot visits: it picks up instructions for indexing the site content and follows them. The file contains two text fields.

Let's study this example:

User-agent: *
Disallow:

The User-agent field specifies the robot name to which the access policy in the Disallow field applies. The Disallow field specifies URLs that the named robots may not access. An example:

User-agent: *
Disallow: /

Here "*" means all robots and "/ " means all URLs. This is read as, "No access for any search engine to any URL". Since all URLs are preceded by "/ " so it bans access to all URLs when nothing follows after "/ ". If partial access has to be given, only the banned URL is specified in the Disallow field. Lets consider this example:

# Full access for Googlebot.
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /concepts/new/

Here we see that both fields have been repeated. Multiple commands can be given for different user agents on different lines. The above commands mean that all user agents are banned from /concepts/new/ except Googlebot, which has full access. Characters following # are ignored up to the line termination, as they are considered comments.

Working with the robots.txt file

1. The robots.txt file is always named in all lowercase (e.g., Robots.txt or robots.Txt is incorrect).

2. Wildcards are not supported in either field. Only * can be used in the User-agent field's command syntax, because it is a special character denoting "all". Googlebot is the only robot that now supports some wildcard patterns for file extensions (see the example below).

Ref: http://www.google.com/webmasters/remove.html
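
As an illustration only (this pattern syntax is a Google-specific extension rather than part of the original standard, and the .pdf extension here is just a hypothetical example), a wildcard rule aimed at Googlebot might look like this:

User-agent: Googlebot
Disallow: /*.pdf$

Robots that follow only the original standard may not interpret the "*" and "$" characters this way, so do not rely on such patterns for anything other than Googlebot.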

3. The robots.txt file is an exclusion file meant for search engine robot reference and not obligatory for a website to function. An empty or absent file simply means that all robots are welcome to index any part of the website.

4. Only one file can be maintained per domain.

5. Website owners who do not have administrative rights sometimes cannot create a robots.txt file. In such situations, the Robots Meta Tag can be configured to serve the same purpose (an example is shown below). Keep in mind, however, that questions have lately been raised about robot behavior regarding the Robots Meta Tag: some robots might skip it altogether. Protocol makes it obligatory for all robots to start with the robots.txt file, making it the default starting point for all robots.
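
For reference, the Robots Meta Tag is placed in a page's <head> section; the values shown here (noindex, nofollow) are just one possible combination:

<meta name="robots" content="noindex, nofollow">

This asks compliant robots not to index the page and not to follow its links, and it works page by page without requiring access to the server root.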

6. Separate lines are required for specifying access for different user agents, and the Disallow field should not carry more than one command per line in the robots.txt file. There is no limit to the number of lines, though; both the User-agent and Disallow fields can be repeated with different commands any number of times. Blank lines will not work within a single record set of the two commands, however.

7. Use lower-case for all robots.txt file content. Please also note that filenames on Unix systems are case sensitive. Be careful about case sensitivity when specifying directories or files for Unix-hosted domains.

Advantages of the robots.txt file

1. Protocol demands that all search engine robots start with the robots.txt file. This is the default entry point for robots if the file is present. Specific instructions can be placed in this file to help index your site on the web. Major search engines respect the Standard for Robots Exclusion.

2. The robots.txt file can be used to keep out unwanted robots such as email harvesters, image strippers, etc. For example:
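
A minimal sketch of such a record follows; the user-agent name is purely illustrative, so substitute the actual name reported by the robot you want to exclude:

User-agent: EmailCollector
Disallow: /

Note that this only works for robots that honor the standard; a truly malicious harvester can simply ignore the file.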

3. The robots.txt file can be used to specify the directories on your server that you don't want robots to access and/or index e.g. temporary, cgi, and private/back-end directories.

4. An absent robots.txt file generates a 404 error, which can send the robot to your default 404 error page. It has been observed that sites with no robots.txt file but a customized 404 error page would serve that page to the robots. The robot may then treat it as the robots.txt file, which can confuse its indexing.

5. The robots.txt file is used to direct select robots to relevant pages to be indexed. This especially comes in handy where the site has multilingual content or where the robot is searching for only specific content.

6. The robots.txt file is also necessary to stop robots from deluging servers with rapid-fire requests or re-indexing the same files repeatedly. If you have duplicate content on your site for any reason, it can be prevented from getting indexed. This will help you avoid any duplicate content penalties.

Disadvantages of the robots.txt file

Careless handling of directory and file names can let hackers snoop around your site by studying the robots.txt file, since you may list filenames and directories that hold sensitive content. This is not a serious issue, as deploying effective security checks on the content in question can take care of it. For example, if you keep your traffic log at a URL such as www.example.com/stats/index.htm and you do not want robots to index it, you would add a command to your robots.txt file. As an example:

User-agent: *
Disallow: /stats/

However, it is easy for a snooper to guess what you are trying to hide, and simply typing the URL www.example.com/stats into a browser would give access to it. This calls for one of the following remedies:

1. Change file names:

  • Change the stats filename from index.htm to something different, such as stats-new.htm, so that your stats URL becomes www.example.com/stats/stats-new.htm.
  • Place a simple text file containing the text, "Sorry, you are not authorized to view this page", and save it as index.htm in your /stats/ directory.

This way the snooper cannot guess your actual filename and get to your banned content.

2. Use login passwords:

  • Password-protect the sensitive content listed in your robots.txt file.

Optimization of the robots.txt file

1. The right commands: Use correct commands. The most common errors include putting the value meant for the "User-agent" field in the "Disallow" field and vice versa.

  • Please note that there is no "Allow" command in the standard robots.txt protocol. Content not blocked in the "Disallow" field is considered allowed. Currently, only two fields are recognized: the "User-agent" field and the "Disallow" field. Experts are considering the addition of more robot-recognizable commands to make the robots.txt file more webmaster and robot friendly.
  • Please also note that Google is the only search engine experimenting with certain new robots.txt commands. There are indications that Google now recognizes the "Allow" command (see the example below). Please refer to: http://www.google.com/webmasters/remove.html.
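
A minimal sketch of how such an "Allow" command might be combined with "Disallow" for Googlebot follows; the directory and file names are hypothetical, and the behavior is a Google-specific extension:

User-agent: Googlebot
Disallow: /private/
Allow: /private/overview.html

Robots that recognize only the two standard fields may ignore the Allow line, so do not depend on it for anything critical.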

2. Bad Syntax: Do not put multiple file URLs in one Disallow line in the robots.txt file. Use a new Disallow line for every directory that you want to block access to. Incorrect example:

User-agent: *
Disallow: /concepts/ /links/ /images/

Correct example:

User-agent: *
Disallow: /concepts/
Disallow: /links/
Disallow: /images/

3. Files and directories: If a specific file has to be disallowed, end the entry with the file extension and without a forward slash at the end. Study the following examples:

For file:

User-agent: *
Disallow: /hilltop.html

For Directory:

User-agent: *
Disallow: /concepts/

Remember, if you have to block access to all files in a directory, you don't have to specify each and every file in robots.txt. You can simply block the directory as shown above. Another common error is leaving out the slashes altogether, which would convey a very different instruction than intended.

4. The right location: No robot will access a badly placed robots.txt file. Make sure that the location is http://www.example.com/robots.txt.

5. Capitalization: Never capitalize your syntax commands. Directory and file names are case sensitive on Unix platforms. The only capitals used per the standard are the leading letters of "User-agent" and "Disallow".

6. Correct Order: If you want to block access for all but one or a few robots, then the specific ones should be mentioned first. Let's study this example:

User-agent: *
Disallow: /

User-agent: MSNBot
Disallow:

In the above case, MSNBot might simply leave the site without indexing after reading the first record. The correct order is:

User-agent: MSNBot
Disallow:

User-agent: *
Disallow: /

7. Presence: Not having a robots.txt file at all generates a 404 error for search engine robots, which could send the robot to the default 404 error page or your customized 404 error page. If this happens seamlessly, it is up to the robot to decide whether the target file is a robots.txt file or an HTML file. Typically this would not cause many problems, but you may not want to risk it. It's always better to put a standard robots.txt file in the root directory than not to have one at all.

The standard robots.txt file for allowing all robots to index all pages is:

User-agent: *
Disallow:

8. Using # carefully in the robots.txt file: Adding "#" comments after the syntax commands on the same line is not a good idea. Although it is acceptable per the robots exclusion standard, some robots might misinterpret the line. Separate lines are always preferred for comments, as in the example below.
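
For example, the preferred form puts the comment on a line of its own (the directory name here is just an example):

# Block the drafts directory
User-agent: *
Disallow: /drafts/

rather than appending it to the command itself, as in "Disallow: /drafts/ # block drafts".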

Using the robots.txt file

1. Robots are configured to read text. Too much graphic content could render your pages invisible to the search engine. Use the robots.txt file to block irrelevant and graphic-only content.

2. Indiscriminate access to all files, it is believed, can dilute the relevance of your site's content once it is indexed by robots. This could seriously affect your site's ranking with search engines. Use the robots.txt file to direct robots to content relevant to your site's theme by blocking the irrelevant files or directories.

3. The file can be used on multilingual websites to direct robots to the relevant content for each language. This ultimately helps the search engines present relevant results for specific languages. It also helps the search engine in its advanced search options where language is a variable.

4. Some robots could cause severe server loading problems by rapid-firing too many requests at peak hours. This could affect your business. By excluding robots that are irrelevant to your site in the robots.txt file, this problem can be taken care of. It is really not a good idea to let malevolent robots use up precious bandwidth to harvest your emails, images, etc.

5. Use the robots.txt file to block out folders with sensitive information, text content, demo areas or content yet to be approved by your editors before it goes live.

The robots.txt file is an effective tool to address certain issues regarding website ranking. Used in conjunction with other SEO strategies, it can significantly enhance a website's presence on the net.

Thursday, March 23, 2006

SEO Link Building With Web Content Secrets

It's the timeless question: how do you get other sites to link to you? The most commonly discussed ways are reciprocal linking (swapping links) and buying links. Yet there's another important tool for building links that should be a part of your toolbox: distributing content in exchange for one-way inbound links.

Comparison with Other Linking Methods
  • Reciprocal Linking: The big advantage of content distribution over swapping links is that the links built are one-way, and therefore presumably more valuable. Of course, reciprocal links still have value, but relying primarily on them might hamper your SEO efforts.
  • Indirect Reciprocal Links: I link my site A to your site, so you link your site to my site B. The problems are that this can be a lot of work, and also, Google can detect indirect links if you do it more than once with the same group of sites, which might make your linking arrangements look like a link farm.
  • Paid Links: The problems with paid links are that 1) the costs add up; and 2) search engines are getting better and better at discounting paid links. According to Matt Cutts' blog, "I wouldn't be surprised if search engines begin to take stronger action against link buying in the near future...link-selling sites can lose their ability to give reputation (e.g. PageRank and anchortext)."

Kinds of Content to Distribute

  1. Articles: This is the essential kind of content distribution, to the point that many people consider content distribution simply as "article marketing." However, you're missing out on a few other sources of links if you only do articles.
  2. News blurbs: A lot of news-style sites will only reprint pieces of a couple of paragraphs. The good news is that often enough the whole point of these news blurbs is to include links to other sites, in a sort of "look what we've found" kind of way, a la Slashdot.org
  3. Press Releases: There are some sites that aggressively reprint press releases. A press release is like an article, only in a very specific press release format, and frankly that's not that enjoyable to read. I don't know why some sites are so head-over-heels over press releases, but, hey, that's their business. The good news is that even if you can't write and don't want to hire a writer, press releases (at least basic ones) are pretty easy to do.
  4. Tools, games and other webware: Sites with popular tools, software, Flash games and other webware often let other sites use them in exchange for a link. The big potential downside is technical support.
  5. Images: Images, especially charts and photographs, are important forms of content on the web. If you have great images on your site and people ask you to use them on their sites, require a backlink in exchange. The problem with images is that they are so easily stolen. Stolen words can be uncovered with a web search. You could try to watermark images with a copyright symbol, URL, and the link requirement. But in the process you'd make the image much less desirable.
  6. Web Design Templates: These have been freely distributed for a long time. Yet they are even more easily stolen than images. Also, if you embed a link in the footer of a web template, what you'll get back are sitewide links, which are often thought to be filtered out by search engines.

Maximizing Content Distribution

Links' Effectiveness: Anchor Text

You need optimized anchor text to rank high for any competitive keyword. That means you need your target keyword in the anchor text, and very importantly, variants of the target keyword (too many links with the exact same anchor text may be filtered). The problem is that some sites by default don't let you choose the anchor text of the link to your site. So you need to: 1) look for sites that do reprint content with optimized anchor text; 2) specifically ask for your target anchor text to be used. Also, do keep in mind that a true natural linking structure will require you to have a number of links that are not anchor-text-optimized, typically with the URL as the anchor text.
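
In plain HTML terms, the anchor text is simply the clickable text inside the link tag; the URL and keyword phrase below are placeholders:

<a href="http://www.example.com/widgets/">blue widget reviews</a>

A natural link profile mixes keyword variants like this with plain-URL anchors such as <a href="http://www.example.com/">www.example.com</a>.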

How to Find Sites

Finding sites to submit content to is the biggest challenge. You can start by asking any other webmasters you already have a relationship with. Next, do a web search. The classic method is "submit article" + [keyword]. Most of the sites you find this way won't be good candidates, which is why this can be a bit labor-intensive. I use offshore labor for this step, as well as a program that sorts and stores all the search results in a spreadsheet; otherwise it might not be worth it. Then again, the same would be true for finding reciprocal linking partners.

Ethical Issues & Best Practices

Golden rule: remember that there's a human being who has to approve your article for submission.

  • Read and adhere to all submission guidelines.
  • Avoid automation. There's almost always some detail of submission that requires a human eye: a multitude of html formatting requirements, changing site themes, etc.
  • Don't submit by email unless specifically instructed. Using a contact form prevents possible sp@m accusations.
  • Only approach websites that request content submissions.
  • Don't misrepresent reprint content as original.
  • Don't submit the same content too often. After about two hundred reprints, a lot of people will be seeing the same thing over and over again and possibly complaining.

In short, as SEO gets more competitive, having more and more linking methods at your disposal gets more and more important. Don't overlook this important tool.

SEO for Traffic with Content vs. Ranking with Links

How do you grow your search engine traffic without adding a single new link or making any changes to your existing webpages?

It's simple. Just add content.

Simply having keyword-optimized pages of content on your site won't rank you high for competitive search engine keywords – that's a fact of life. But keyword-optimized content can really bring in the traffic for low-competition and unique keywords. The low-competition and unique keywords are typically longer multi-word variants of the keyword. For instance, instead of "search engine ranking," "ranking for search engine traffic niche keywords."

If you have lots of pages of optimized content, and you optimize well, all the search engine traffic from these low-competition keywords will really add up. Plus, you'll usually get more repeat visitors and type-in traffic, too.

Just picture this realistic example of traffic-building with content vs. ranking-building with links. Company A invests $5,000 in link-building in order to rank for a competitive keyword. Company B invests the same amount, only in content. Company A and Company B each start out on equal SEO footing: equally old websites with the same amount and quality of content, the same content management systems, the same PageRank, and the same quantity, quality, and relevance of inbound links.

Company A's research reveals that $5000 is just the amount needed to get on the first page of Google for a target keyword that should deliver 100 unique visitors per day if the site ends up in the first position. They dutifully get inbound links optimized for that keyword, following all SEO best practices. Three months and $5,000 later, the site is stuck somewhere toward the bottom of the second page of Google search results for the target keyword. Six months later, they've actually sunk a bit lower in the SERPs. The good news is that the site is getting some traffic from the links built and from the lowly search engine position, but nowhere near the 100 visitors/day they were hoping for from search results.

Company B, meanwhile, had content written around a long list of keywords with little or no competition in the search engines, using up-to-date search engine copywriting techniques. They've been enjoying a growing stream of visitors to their site almost since the first page of content was added. Three months later, the site's search engine traffic has grown by a hundred unique visitors per day, or 3,000 per month. Moreover, Company B's repeat visitor traffic has also jumped. Type-in traffic has increased, presumably as visitors forward the URLs of useful pages to their friends. Page views are up, too, not only from more repeat visitors and type-in visitors, but also from first-time search visitors staying longer and browsing more pages. Six months later, the website's content has built a loyal following on the net, generating even more repeat visitors. The search engine traffic is as good as it ever was.

What happened?

Pitfalls of Link-Building for Search Engine Ranking
Company A thought it had a fairly sure thing: build enough optimized links for the keyword, taking care not to trigger search engine penalties. Yet as they've discovered, there is no sure thing when it comes to search engine rankings:
  • Over-optimization penalty minefield. The search engines, particularly Google and Yahoo!, are very risk-averse when it comes to ranking sites well for competitive keywords. On the whole, they are perfectly willing to risk dropping several good sites from top rankings in order to try to keep one bad site out. They are constantly tweaking their algorithms to identify sites whose link structures are not indicative of a quality site. In the process, plenty of good sites with good SEO also get swept up. This risk of failure is the inherent risk of SEO. True, most of the time, a good site with good SEO does move to the top. But in a large minority of cases, quality goes unrewarded.
  • Competition and the moving target. As Company A was moving up the search engine results for its competitive target keyword, so were the other sites. There is no rest for the victorious when it comes to SEO. The top sites for highly competitive keywords are constantly building new optimized links. That's why any SEO effort has to aim to do at least ten percent better than the site currently in the position it's targeting.
  • Lack of keyword diversity. Too often, websites with modest SEO budgets (and $5,000 is modest when it comes to a competitive keyword) aim for just a few keywords. Given all the potential pitfalls of an SEO campaign, you need to be going after ten or more target competitive keywords, and at least another ten related but less competitive keywords. That way, failure for a few keywords won't scuttle the whole project. Meanwhile, search engines look for diversity in targeted keywords, so you get much more out of targeting a larger group of keywords. If you can't afford to do this, you're really better off not going after competitive keywords. Sure, you might get those rankings. But what happens if you've spent your budget and still have little to show for it?

Meanwhile, the fundamental advantage of pursuing low-competition keywords is that, by definition, it's much closer to being a sure thing.

Advantages of Web Content SEO
  • Greater certainty. Not only is a page of content extremely likely to bring in search engine traffic; unlike a similar investment in links, it also won't suddenly disappear. The sites linking to you might stop at any time, or do something that stops their links from passing search engine value, such as adding the "nofollow" attribute (see the example after this list) or switching to a search-engine-unfriendly content management system.
  • Cost. Traditionally, copywriting has been more expensive than link-building. But that's changed. As "nofollow" link-Scrooge-ry becomes more and more common, and as paid and reciprocal links get downgraded, the real cost of obtaining quality links increases. Meanwhile, the copywriting market has increasingly adapted to the needs of search engine marketing. To get a search engine visitor, you don't need a Pulitzer-prize winning essay or a killer sales letter. You simply need highly focused, readable, keyword-optimized, information-packed pages of around 250 words each — and more and more copywriting and SEO firms are delivering this service cost-effectively. Blogs, meanwhile, let you and your employees add content easily. Bulletin boards (modified to be search-engine-friendly) let site visitors add content, too. In fact, "natural content" from blogs and bulletin boards is now much more viable than natural link building.
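
For clarity, the "nofollow" mentioned above is an attribute added to an individual link; a link carrying it looks like this (the URL is a placeholder):

<a href="http://www.example.com/" rel="nofollow">Example Site</a>

Search engines that honor the attribute will not pass ranking credit through such a link.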

In conclusion, when you look at SEO, don't forget that your number-one goal is not to rank high for a certain keyword, but to get more search engine traffic. In some less competitive sectors, high rankings may still be a realistic and effective proposition. But increasingly, ranking high for competitive keywords is no longer the best way to get traffic.

Predicting Search Engine Algorithm Changes

With moderate search engine optimization knowledge, some common sense, and a resourceful and imaginative mind, one can keep his or her web site in good standing with search engines even through the most significant algorithm changes. The recent Google update of October/November 2005, dubbed "Jagger", is what inspired me to write this, as I saw some web sites that previously ranked in the top 20 results for extremely competitive keywords suddenly drop down to the 70th page. Yes, the ebb and flow of search engine rankings is nothing to write home about, but when a web site doesn't regain many ranking spots after such a drop it can tell us that the SEO done on the site may have had some long-term flaws. In this case, the SEO team had not done a good job predicting the direction a search engine would take with its algorithm.

Impossible to predict, you say? Not quite. The ideas behind Google's algorithm come from the minds of fellow humans, not supercomputers. I'm not suggesting that it's easy to "crack the code" so to speak because the actual math behind it is extremely complicated. However, it is possible to understand the general direction that a search engine algorithm will take by keeping in mind that any component of SEO which is possible to manipulate to an abnormal extent will eventually be weighted less and finally rendered obsolete.

One of the first such areas of a web site that started to get abused by webmasters trying to raise their rankings was the keywords meta tag. The tag allows a webmaster to list the web site's most important keywords so the search engine knows when to display that site as a result for a matching search. It was only a matter of time until people started stuffing the tag with irrelevant words that were searched for more frequently than relevant words in an attempt to fool the algorithm. And they did fool it, but not for long. The keywords meta tag was identified as an area that was too susceptible to misuse and was subsequently de-valued to the point where the Google algorithm today doesn't even recognize it when scanning a web page.
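
For those who have never seen it, the tag sits in the page's <head> and looks like this (the keyword list is just an illustration):

<meta name="keywords" content="nfl jerseys, football jerseys, team apparel">

As described above, major search engines now give this tag little or no weight.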

Another early tactic which is all but obsolete is repeating keywords at the bottom of a web page and hiding them by changing the color of the text to match the background color. Search engines noticed that this text was not relevant to the visitor and red-flagged sites that employed this method of SEO.

This information is quite basic, but the idea behind the aforementioned algorithm shifts of several years ago is still relevant today. With the Jagger update in full swing, people in the SEO world are taking notice that reciprocal links may very well be going the way of the keywords meta tag (i.e., extinct). Webmasters across the world have long been obsessed with link exchanges, and many profitable web sites exist offering services that help webmasters swap links with ease. But with a little foresight, one can see that link trading has its days numbered, as web sites have obtained thousands of incoming links from webmasters who may have never even viewed the web site they are trading with. In other words, web site popularity is being manipulated by excessively and unnaturally using an SEO method.

So with keyword meta tags, keyword stuffing within content, and now link exchanges simply a part of SEO history, what will be targeted in the future? Well, let's start with what search engines currently look at when ranking a web site and go from there:

On-page Textual Content
In the future, look for search engines to utilize ontological analysis of text. In other words, not only will your main keywords play a factor in your rankings, but also words that relate to them. For example, someone trying to sell NFL jerseys online would naturally mention the names of teams and star players. In the past, algorithms might have skipped over those names, deeming them irrelevant to a search for "NFL jerseys." But in the future, search engines will reward those web sites with higher rankings than sites that excessively repeat just "NFL jerseys." With ontological analysis, web sites that speak of not only the main keywords but also other relevant words can expect higher rankings.
The Conclusion: Write your web site content for your visitors, not search engines. The more naturally written sites can expect to see better results in the future.

Offering Large Amounts of Content
This can frequently take the form of dynamic pages. Even now, search engines can have a difficult time with dynamic content on web sites. These pages usually have lengthy URLs consisting of numbers and characters such as "&", "=", and "?". The common problem is that the content changes so frequently on these dynamic pages that the page becomes "old" in the search engine's database, leaving searchers with results that contain outdated information. Since many dynamic pages are created by web sites displaying hundreds or thousands of products they sell, and the number of people selling items on the Internet will obviously increase in the coming years, you can expect that search engines will improve their technology and do a better job of indexing dynamic content in the future.
The Conclusion: Put yourself ahead of the game if you are selling products online and invest in database and shopping cart software that is SEO-friendly.
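
As one sketch of what "SEO-friendly" can mean in practice, a site running on Apache might use mod_rewrite in its .htaccess file to present dynamic product pages under clean, static-looking URLs; the paths and parameter name below are hypothetical:

RewriteEngine On
# Map /products/123/ to the underlying dynamic script
RewriteRule ^products/([0-9]+)/?$ /product.php?id=$1 [L]

With a rule like this, search engines see /products/123/ instead of a long query string, while the underlying shopping cart software continues to work unchanged.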

Incoming Links
Once thought to be very difficult to manipulate, incoming links to one's web site have been abused by crafty SEOs and webmasters the world over. It is finally at a point where Google is revamping what constitutes a "vote from [one site to another]", as they explain it in their webmaster resources section. Link exchanges are worth less now than ever, to the point where the only real value in obtaining them is to make sure a new web site gets crawled by search engine spiders.
Over the years, many web sites reached the top spot for competitive keywords by flexing their financial muscle and buying thousands of text links pointing to their site with keywords in the anchor text. Usually these links would appear like advertisements along sidebars or navigation areas of web sites. Essentially this was an indirect way of paying for high Google rankings, something which Google is no doubt trying to combat with each passing algorithm update. One school of thought is that different areas of a web page, from a visual point of view, will be weighted differently. For example, if a web site adds a link to your site within the middle of its page text, that link should count for more than one at the bottom of the site near the copyright information.
This brings up the value of content distribution. By writing articles, giving away free resources, or offering something else of value to people, you can create a significant amount of content on other web sites that will include a link back to your own.
The Conclusion: It all starts with useful content. If you are providing your web site visitors with useful information, chances are many other sites will want to do the same. SEO doesn't start with trying to cheat the algorithm; it starts with an understanding of what search engines look for in a quality web site.