When the World Wide Web became public in 1991, things were simple, yet wondrous enough to make early adopters enthusiastic about the future. As the WWW grew, search engines became necessary to index web pages and enable users to find information quickly.
A brief history of search engines
The first search engine was called “Archie” – its name derived from “archive.” It was created in 1990 by Alan Emtage, then a student at McGill University, as a project to help find File Transfer Protocol (FTP) files.
Then, in 1991, Mark McCahill and his team at the University of Minnesota created the Gopher protocol, an early alternative to the World Wide Web. Gopherspace came with its own search engine, Veronica. Gopher faded away in the 2000s, according to the Gopher history by Tim Gihring, but you can still access Gopherspace by following the instructions provided on the Floodgap Public Gopher Proxy. The public search engine called Gopher today, which identifies itself as “The search engine that protects your privacy,” is not the same thing.
The World Wide Web Wanderer followed in 1993. It was a bot – a web crawler whose mission was to measure the size of the World Wide Web – and the data it gathered became the Wandex index. It was created at the Massachusetts Institute of Technology by Matthew Gray, who is now a software engineer at Google.
Before Yahoo! arrived in 1994, a string of primitive search engines followed. Names like Martijn Koster’s ALIWEB – as in “Archie-Like Indexing of the Web” – JumpStation, the World Wide Web Worm, and Excite each represented a step forward in the evolution of search at the time. Then the web directories were born: first VLib, Tim Berners-Lee’s Virtual Library, then the EINet Galaxy. Yahoo!, which started out as a directory, was founded in April 1994 by David Filo and Jerry Yang. Initially, it only included their favorite web pages with short text descriptions of the URLs they listed. As the directory grew, a “search” option became imperative. The Yahoo! Directory is no longer a stand-alone entity: Yahoo! announced its demise in 2014, and what remains is now part of Yahoo! Small Business.
After the Yahoo! Directory came WebCrawler, the first engine to index whole pages, and it’s still in use. Lycos, which was created in 1994, is still around too. Steve Kirsch’s Infoseek was popular enough in its time to be purchased by The Walt Disney Company in 1998.
It is also worth mentioning the Open Directory Project (ODP), created in 1998 by a team led by Rich Skrenta. It would later become the holy grail for many SEOs in the early 2000s. You may still remember the ODP as DMOZ. It closed down in 2017 because AOL, which owned it, no longer wanted to support the project. A copy of DMOZ as it was when it closed is still available at dmoztools.net. DMOZ matters because Google itself decided to mirror the project as the Google Directory, reusing its listings but ranking them with PageRank. The Google Directory was only available until 2011, when the company decided that search was a better way for users to find information.
In 1995, AltaVista became the first search engine to offer virtually unlimited bandwidth for its time and to allow natural language queries. It also had other advanced features, like allowing users to add and delete their own URLs. Yahoo! purchased AltaVista in 2003 and shut it down in 2013.
Finally, advancing to 1996, a search engine that used backlinks to find information made its way onto the WWW. It counted the number of links pointing to a site as a signal of “authority” and ranked websites using the logic of academic citation analysis. Does it sound familiar? It was called BackRub – perhaps not the most fortunate of names. Two brilliant Stanford University students worked on the project: Larry Page and Sergey Brin. Yes, BackRub is the predecessor of Google.
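To make the citation idea concrete, here is a toy sketch in Python of the “links as votes” logic that BackRub popularized. It is only an illustration of the general PageRank concept on a made-up four-page web – the link graph, the damping factor of 0.85, and the iteration count are illustrative assumptions, not Google’s actual implementation.

```python
# Toy sketch of the "links as votes" idea behind BackRub/PageRank.
# Each page splits its current score among the pages it links to;
# the damping factor models a surfer who occasionally jumps to a random page.

links = {            # hypothetical mini-web: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}           # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)             # split this page's vote
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

print(pagerank(links))   # "C" attracts the most links, so it ends up ranked highest
```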
Later in 1996, Inktomi pioneered the paid inclusion model, and in 1997, Ask Jeeves launched as a natural language search engine. Microsoft’s MSN Search launched in 1998 using search results from Inktomi. It later also took listings from Looksmart and then, in 1999, from AltaVista. Microsoft eventually replaced these sources with its own technology, which it kept updating, improving, and rebranding: to Windows Live Search in 2006, Live Search in 2007, and eventually Bing in 2009.
1998 is also the year when Google officially entered the market. By the end of 1999, Brin and Page were trying to sell their search engine to Excite for 1 million dollars, but Excite CEO George Bell rejected the offer. In hindsight, that was fortunate for Google: Excite went bankrupt in 2001, and Infospace bought it for $10 million. By 2001, Google was a force to be reckoned with. Another interesting aspect of Google’s early development is that it started selling text-based ads tied to keyword searches as early as 2000, contrary to an earlier statement by Brin and Page that Google would stay ad-free. Following its 2004 IPO, Google continued to develop and improve, releasing update after update that changed and shaped the way we do SEO today.
Of course, Google was not the last search engine to hit the Web. It was followed by a number of others, some still around, but many defunct: Cuil, Powerset, and Hakia are the first to come to mind. Powerset was acquired by Microsoft in 2008 for about $100 million and integrated into Bing. Hakia shut down in 2014, but its technology still powers other sites.
DuckDuckGo launched in 2008 and is currently one of the most reliable search engines that does not track or store any personal information about its users. Gigablast, launched in 2000, survives the competition too, supporting specialized searches and Boolean operators. Wolfram Alpha, around since 2009, is a very reliable computational knowledge engine. International search engines like Yandex (Russia) and Baidu (China) are potent players too.
There’s much more to add, but this is enough background to show what shaped the evolution of SEO. While the first search engines had limited capabilities, today’s are complex, algorithm-driven systems that use machine learning, can decipher natural human speech, and are capable of discovering almost everything the Web has to offer. And because the indexed Web counted at least 4.62 billion pages as of March 2016, it’s fairly obvious why websites optimized according to the SEO criteria defined by Google would, ideally, get found faster. Google also offers a Search Engine Optimization (SEO) Starter Guide that you should check out to learn how to keep your site in line with the rules of the world’s most famous search engine.
Credit: https://searchengineland.com
Looking back at the dawn of SEO
SEO started pretty much as soon as the first search engines allowed users to submit websites for indexing, a feature AltaVista pioneered. Then BackRub decided to count links, and after Google launched its PageRank toolbar in 2000, link trading became one of the leading SEO strategies to rely on. This was also the primitive stage of SEO, when keyword stuffing was the annoying trend that made pages read as though they had been written by preschoolers with fluency disorders.
PageRank was not the only thing Google launched in 2000. This is also the year it started selling text-based ads to monetize the search engine, which was not a problem at first – but after the launch of AdSense in 2003, “made for AdSense” (MFA) sites plagued the Web like the Spanish flu. For years, MFA and scraper sites hurt the quality of search results, raising copyright concerns for the many publishers and site owners whose content was stolen.
But let’s look back at the linking strategies that worked so well in the infancy of SEO. At that moment in time, software like Axandra’s iBusinessPromoter was a favorite tool for helping webmasters and SEOs submit sites to hundreds of web directories in bulk. PageRank demanded links, and directory links – especially from the Yahoo! Directory, DMOZ, and the Google Directory – carried a lot of link juice. Therefore, directory submissions worked wonders for fast rankings. Alternatively, sites like Entireweb, which in its infancy was a web directory too, offered free submissions to all the major search engines plus hundreds of others too irrelevant to mention. This was a fast way to get a site listed by search engines, and therefore in front of potential customers and readers, and it worked fine.
Until Google began the algorithm update dance. Moz warns that Google changes its search algorithm around 500–600 times every year but only announces its major updates. “Boston” was the first update Google announced, in early 2003. It was shortly followed by Cassandra, an update aimed at curbing hidden content and keyword stuffing – but only the Florida update in November of that year significantly impacted this black hat SEO technique, making keyword spam useless for search engine rankings. Cassandra also punished sites engaged in massive link exchanges. A later update, known as Dominic, targeted link farms.
With the Brandy update in February 2004, latent semantic indexing (LSI) was introduced, and SEOs had to learn how to optimize sites for their specific intent (theme) rather than keyword density.
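To illustrate what optimizing for a theme rather than keyword density means, here is a small Python sketch of latent semantic analysis, the family of techniques LSI belongs to. The toy documents and the use of scikit-learn’s TfidfVectorizer and TruncatedSVD are my own assumptions for the demo; this is a conceptual illustration, not Google’s implementation.

```python
# Conceptual sketch of latent semantic analysis: documents are projected from
# raw keyword space into a few latent "topic" dimensions, so pages about the
# same theme end up close together even without identical keyword stuffing.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "cheap flights and airline tickets to rome",           # travel theme
    "compare airline deals and book cheap plane tickets",  # travel theme
    "chocolate cake recipe with cocoa and dark frosting",  # baking theme
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
topics = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# The two travel pages should come out far more similar to each other
# than either is to the recipe page.
print(cosine_similarity(topics).round(2))
```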
Google, Yahoo!, and MSN Search introduced the “nofollow” attribute for backlinks in 2005 to fight webspam, and SEOs faced yet another challenge in adapting to the new rules. Google went after unnatural links again with the Jagger update in October 2005, followed by many other smaller updates until one of the most significant the search engine giant has ever released: Panda.
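For reference, the attribute itself is just rel="nofollow" on a link, telling crawlers not to pass credit to the target. The sketch below shows one plausible way a site might add it to external links in user-generated content; BeautifulSoup, the placeholder domain, and the sample comment are assumptions for illustration, not any search engine’s official tooling.

```python
# Sketch: mark external links in user-generated content as "nofollow"
# so they pass no ranking credit. BeautifulSoup and the domain are placeholders.
from bs4 import BeautifulSoup

OWN_DOMAIN = "example.com"   # hypothetical site domain

comment_html = '<p>Great post! Visit <a href="http://spammy-site.test">my site</a>.</p>'
soup = BeautifulSoup(comment_html, "html.parser")

for link in soup.find_all("a", href=True):
    if OWN_DOMAIN not in link["href"]:   # only touch external links
        link["rel"] = "nofollow"         # tell crawlers not to follow or credit it

print(soup)   # the external link now carries rel="nofollow"
```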
Initially affecting English-language search results in the United States, Panda, released in 2011, was aimed at low-quality sites. It hit content farms, but also sites with a lot of duplicate content, pages with too many ads, and sites that did not bring any real value to users. A Panda update in 2014 helped sites with strong editorial content improve their rankings significantly, while sites with too many affiliate links and automatically generated content were penalized. Google also launched a page layout algorithm that demoted sites with little content “above the fold.”
Links came back into focus in 2012 with the release of the Penguin update, which targeted sites that buy and sell links, along with other link schemes Google advises against. Penguin now runs in real time as part of the core algorithm Google uses to rank sites.
Good news for publishers followed in August 2012 with the DMCA penalty update against sites that violate copyrights.
Hummingbird in 2013 was the update that finally convinced the SEO community to focus on natural language and context: with Hummingbird, Google became able to recognize the intent behind a query. Then, Pigeon in 2014 focused on delivering better local search results. Shortly after, HTTPS everywhere became a big deal, and Google added HTTPS to its ranking signals.
2015 was the year of mobile: both Google and Bing released algorithm updates designed to benefit mobile-friendly pages in mobile search results. 2016 and 2017 followed with further updates that hint at what SEO will look like in the future.
The future of SEO – a story in progress
The snippet length increase of November 2017 means up to around 300 characters are now recommended for meta descriptions, which points to a new SEO trend for 2018. If Google wants more content in meta descriptions, it will undoubtedly appreciate more text on web pages too. But remember the updates discussed above and keep in mind that Google values content created for users and a functional UI.
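If you want to audit your own pages against that rough 300-character guideline, a quick script along the lines of the sketch below can help. The URL is a placeholder, the limit is approximate rather than an official cut-off, and the requests/BeautifulSoup approach is just one possible way to run the check.

```python
# Sketch: check a page's meta description length against the roughly
# 300-character snippet guideline. URL and limit are illustrative assumptions.
import requests
from bs4 import BeautifulSoup

SNIPPET_LIMIT = 300   # approximate figure after the late-2017 snippet change

def check_meta_description(url: str) -> None:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("meta", attrs={"name": "description"})
    content = (tag.get("content", "") if tag else "").strip()
    if not content:
        print(f"{url}: no meta description found")
        return
    verdict = "ok" if len(content) <= SNIPPET_LIMIT else "may be truncated in snippets"
    print(f"{url}: {len(content)} characters - {verdict}")

check_meta_description("https://example.com")   # placeholder URL
```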
Mobile-first is not a passing trend but a fact of life: going forward, every site will need to be mobile-ready to rank.
With voice search the trend of 2018, there are new SEO strategies to focus on – and we will discuss them in depth in a follow-up article. For the future, you should also consider optimizing for customer intent, and Think with Google is a great resource to help you with this.
Plus, it’s time to consider artificial intelligence – Facebook is already actively looking into it and using it in meaningful ways. And because AI is “no magic, just code,” it makes sense for SEOs to ask how to optimize for what’s coming, because there’s no doubt that AI and machine learning technologies will influence the future of SEO. Google already has RankBrain, a machine-learning system that helps it figure out the context behind queries and the content featured on websites. Are you ready to adapt? AI is where it will all happen next in SEO.