Search Engine Traffic Guide
Google.com is certainly the Internet's largest search engine. It serves 200 million requests per day and runs on more than 15,000 servers distributed worldwide, making it arguably one of the most scalable Internet services ever provided to the general public. It is not the case, however, that one server handles one user's request. If it were, each computer would have to trawl through thousands of terabytes of data looking for a search term, and it would take weeks to return a single query. Instead, the servers are divided into six groups: Web servers, document servers, index servers, spell check servers, advertisement servers, and Googlebot servers, each performing its own task. The last of these runs Google's spider, Googlebot. This piece of software, running on thousands of PCs simultaneously, trawls the Web continuously, completing a full round-trip in approximately one month. Googlebot requests pages in an ordered fashion, following links...
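The division of labor described above can be sketched in miniature: a front end fans a query out to several index-server shards and merges the partial results. The following is a toy illustration of that idea only, not Google's actual implementation; the data and function names are invented.

```python
# Toy sketch of fan-out search across index shards (illustrative only;
# not Google's actual design). Each "index server" holds a slice of the
# document index; the front end queries every slice and merges results.

shards = [
    {"apple": [1, 7], "banana": [2]},    # index server 1's slice
    {"apple": [12], "cherry": [9]},      # index server 2's slice
    {"banana": [20], "apple": [31]},     # index server 3's slice
]

def search(term):
    """Query every shard in turn and merge the partial hit lists."""
    hits = []
    for shard in shards:
        hits.extend(shard.get(term, []))
    return sorted(hits)

print(search("apple"))   # document IDs gathered from all three shards
```

Because each shard only scans its own slice, the shards can be searched in parallel, which is what keeps response times low at scale.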
A Web request requires two components, a Web server and a client. The client is (currently) most often a browser, but it could be another type of program: a spider (a program that walks Web links, gathering information), an agent (a program tasked with finding specific information, often using search engines), a standard executable application, a wireless handheld device, or a request from a chip embedded in an appliance, such as a refrigerator. In this book, you'll focus mostly, but not exclusively, on browser clients; therefore, you can think of the words browser and client as essentially the same thing for most of the book. I'll make it a point to warn you when the terms are not interchangeable.
All applications have an intended audience, but when you initially conceive of an application, you're often unclear exactly who that audience is. For example, suppose you imagine the perfect search engine, one that not only finds relevant information but also contains links to every piece of related information across the entire Internet. By paying a small fee, users could improve their search experience. But wait: who's the audience? Is there an audience for a pay-per-search service when there are so many free search services already available? It turns out that people are willing to pay for search services, but only when those services are extremely targeted.
In Chapter 4, you built a search engine capable of searching through the list of accounts in the Accounts table. When the list of accounts grows into the hundreds or thousands, you will find that searches take noticeably longer. It is not unheard of for users to wait 10 seconds or more for a search to complete. This is, of course, unacceptable in cases where the application is time critical.
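One common remedy is to put a database index on the searched column, so that lookups no longer scan every row. The sketch below uses SQLite and an invented Accounts schema (the chapter's actual table layout may differ) to show the idea:

```python
import sqlite3

# Hypothetical Accounts table; the real schema from Chapter 4 may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Accounts (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO Accounts (name) VALUES (?)",
                 [(f"account{i}",) for i in range(10000)])

# Without an index, the search below scans all 10,000 rows.
# An index on the searched column turns it into a fast B-tree lookup.
conn.execute("CREATE INDEX idx_accounts_name ON Accounts (name)")

row = conn.execute("SELECT id FROM Accounts WHERE name = ?",
                   ("account9999",)).fetchone()
print(row[0])
```

The trade-off is that the index consumes storage and slows inserts slightly, which is usually a good bargain for a search-heavy table.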
As more and more Web services are created and deployed by numerous organizations on the Internet, it will become increasingly difficult for consumers to find these services. This is analogous to the difficulty Web users would experience in finding a specific page of information amongst the billions of pages currently published on the Web were it not for Web search engines. Similar to the search engine approach used to query and locate Web pages, the Universal Description, Discovery, and Integration (UDDI) specification defines a logically centralized but physically distributed, XML-based registry database and API that let companies find each other and the Web services they offer. The UDDI registry API offers support for querying as well as updating the registry database. The UDDI Web site (located at http://www.uddi.org) provides a browser-based interface for registering your company and services as well as the capability to look up potential business...
In cases where you do not know anything about where a Web service is located (or even if it exists), you will need the services of Universal Description, Discovery, and Integration (UDDI). This technology is essentially a universal search engine for Web services that is available via the Internet as both an interactive Web site and a set of programmable Web services.
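For concreteness, a UDDI inquiry is an XML message posted to a registry's inquiry endpoint over SOAP. A find_business query along these lines (a sketch based on the UDDI 1.0 inquiry API; the company name is a placeholder) asks the registry for businesses matching a name:

```xml
<!-- Sketch of a UDDI 1.0 find_business inquiry, sent as the body
     of a SOAP request to a registry's inquiry endpoint. -->
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
  <Body>
    <find_business generic="1.0" xmlns="urn:uddi-org:api">
      <name>Example Corp</name>
    </find_business>
  </Body>
</Envelope>
```

The registry answers with a businessList document describing matching companies and the services they have registered.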
Most users do not have a lot of time to spend wading through myriad stories just to find the one they find interesting. A smart user will go to a Web site that helps her find the information. A simple search engine often will do just fine, but it would be better in many cases for a user to be able to come back day after day and see new, updated information relating to previous searches and have it show up as the top story. This is the role of personalization.
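A minimal sketch of that idea (the data and function names here are invented for illustration): store each user's past search terms, then rank today's stories by how many saved terms they match, so relevant items float to the top.

```python
# Toy personalization: rank new stories by overlap with a user's
# saved search terms. Users, terms, and stories are illustrative only.

saved_searches = {"alice": {"python", "xml"}}

stories = [
    "New XML parser released",
    "Stock markets rally",
    "Python 2.2 adds generators",
]

def top_stories(user):
    """Sort stories so those matching the user's past searches come first."""
    terms = saved_searches.get(user, set())
    def score(story):
        words = {w.lower().strip(".,") for w in story.split()}
        return len(words & terms)
    return sorted(stories, key=score, reverse=True)

print(top_stories("alice")[0])
```

A real system would weight recency and click history as well, but the core move is the same: persist the user's interests and reuse them on every visit.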
This section of the chapter will acquaint you with some of the most interesting new features that Windows XP has to offer. In some cases, these new features are also available to Windows 2000 users in the form of a Windows Update or other Microsoft download. We won't cover every Windows XP feature because some are implemented automatically and others are in the esoteric category. For example, we won't discuss either the new search engine or the Remote Assistance feature, even though these features are unique to Windows XP. Windows XP comes in two versions, and we'll look at both in the sections that follow. You can obtain a list of Home Edition features at and Professional Edition features at
The title element names your Web page. The title usually appears on the colored bar at the top of the browser window, and will also appear as the text identifying your page if a user adds your page to their list of Favorites or Bookmarks. The title is also used by search engines for cataloging purposes, so picking a meaningful title can help search engines direct a more focused group of people to your site.
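A minimal page showing where the title element sits (the page content is an invented example):

```html
<html>
  <head>
    <!-- Shown in the browser's title bar, saved with Favorites/Bookmarks,
         and read by search engines when cataloging the page. -->
    <title>Acme Widgets - Hand-Finished Widgets Since 1952</title>
  </head>
  <body>
    <h1>Welcome to Acme Widgets</h1>
  </body>
</html>
```

Note that a descriptive title like this one works much harder for you than a generic "Home Page" would.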
Listing 5-9. WebForm1.aspx.vb: a Visual Basic .NET-generated code-behind file for WebForm1.aspx, showing how to manipulate a...
WebForm1.aspx next has a number of Meta tags placed by Visual Studio .NET. Meta HTML elements convey hidden information about the document to both the server and the client. Search engines commonly read Meta tags to index pages. The remainder of Listing 5-8 is similar to previous examples, defining ASP.NET server controls for the page. Some bells and whistles are used for the title label (a dotted border and a background color), but the rest of the controls are declared with the minimum number of attributes. WebForm1.aspx.vb, in Listing 5-9, begins with the opening of a class declaration, as follows
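The listing itself isn't reproduced here, but the opening that Visual Studio .NET typically generates for a VB code-behind class looks something like this (a sketch; the designer-generated comments and region it emits are omitted):

```vb
Public Class WebForm1
    Inherits System.Web.UI.Page

    ' Designer-generated initialization code normally appears here,
    ' inside a "Web Form Designer Generated Code" region.

End Class
```

Inheriting from System.Web.UI.Page is what gives the class access to the page lifecycle and to the server controls declared in WebForm1.aspx.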
Finding information on the Internet can be a challenge at best. Typically when you are looking for a piece of information, you turn to a Web-based search engine. By simply typing in a few keywords, the search engine provides you with a list of hyperlinks to some related Web pages. Without these search engines, you would be lost in a vast sea of information. Even if I told you that the piece of information you want is on Web server X, you would still be hard-pressed to find that needle in the haystack. But if I told you that there was a useful XML Web service on server X, you would have all you need to immediately start using that service in your applications.
Crystal Decisions has recently revamped its support Web site with a new comprehensive search engine that allows you to search multiple categories of documents, including the Knowledge Base. The Crystal Decisions Knowledge Base provides a comprehensive selection of articles across the range of Crystal Decisions products and is updated twice a week, with over 100 documents relating to Crystal Reports .NET. To search for articles related to Crystal Reports .NET, search on the following keywords: dotnet, .NET, VB .NET, or CSHARP.
A lot of CMSs use third-party search engines to do their searches for them. This makes sense because it allows the CMS people to specialize in the thing they do best, content management, while a different group that specializes in searching handles that part. Some CMSs have their own built-in search engines. These often are not as advanced as what's available from a third party, but they save the user money by not forcing him to buy a search program and then integrate it.
A large part of the .NET strategy is the concept of web services. In the world of web services, programmers write useful little black boxes of code and expose them to other programmers via the Internet. Need the shipping status of a package displayed on your e-tailer site? Just call your shipper's .NET web service (such a service does not exist at the time of this writing, but Microsoft is banking on the fact that it will in the .NET future). Want to incorporate a search engine into your own site? Just hook up to Yahoo or AltaVista or Google's web service.
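In spirit, consuming such a service is just an HTTP call plus response parsing. The sketch below separates the two steps so they can be shown without a live server; the endpoint URL and XML response format are entirely invented for illustration, and no such public service is assumed to exist.

```python
import urllib.parse
import xml.etree.ElementTree as ET

# Hypothetical search web service; endpoint and XML schema are invented.
ENDPOINT = "http://example.com/searchservice"

def build_query_url(terms):
    """Build the request URL for the hypothetical search service."""
    return ENDPOINT + "?" + urllib.parse.urlencode({"q": terms})

def parse_results(xml_text):
    """Pull result URLs out of the service's (invented) XML response."""
    root = ET.fromstring(xml_text)
    return [hit.get("url") for hit in root.findall("hit")]

# A canned response stands in for a live HTTP call.
canned = ('<results>'
          '<hit url="http://a.example/"/>'
          '<hit url="http://b.example/"/>'
          '</results>')
print(build_query_url("asp.net hosting"))
print(parse_results(canned))
```

In production you would fetch the URL (or make a SOAP call) instead of using a canned string, but the shape of the integration is the same: build a request, send it, parse the structured response.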
SEO Article Copywriting
Ghost Writing and Its Link to Internet Marketing. From 1996 to 2000, SEO copywriting had not yet been formulated. To optimize Websites, operators and owners simply needed to create Meta tags or titles and submit the tags and the whole Website to directories and search engines so that the search listings would include the Website.