Some SEO practices haven’t kept pace with changes in search engines and may now be self-defeating, leading to content that fails to rank. Here are six SEO practices that hinder ranking and suggestions for more effective approaches.
The word redundant means no longer effective: not necessary, superfluous. The following are three redundant SEO practices.
For example, some SEOs think that buying expired domains is a relatively new tactic, but it's actually well over twenty years old. Old school SEOs stopped buying them in 2003, when Google figured out how to reset the PageRank on expired domains. Everyone holding expired domains at the time noticed when they stopped working.
This is the announcement in 2003 about Google’s handling of expired domains:
“Hey, the index is going to be coming out real soon, so I wanted to give people some idea of what to expect for this index. Of course it’s bigger and deeper (yay!), but we’ve also put more of a focus on algorithmic improvements for spam issues. One resulting improvement with this index is better handling of expired domains–the authority for a domain will be reset when a domain expires, even though dangling links to the expired domain are still out on the web. We’ll be rolling this change in over the next few months starting with this index.”
In 2005 Google became domain name registrar #895 in order to gain access to domain name registration information and "increase the quality" of the search results. Becoming a registrar gave Google real-time access to when domain names were registered, who registered them, and which web hosting address they pointed to.
It surprises relatively new SEOs when I say that Google has a handle on expired domains, but it's not news to those of us who were among the very first SEOs to buy them. Buying expired domains for ranking purposes is an example of a redundant SEO practice.
Another example is paid links. I know for a fact that some paid links will push a site to rank better; this has been the case for many years and still is. But those rankings are temporary. Most sites don't get a manual action, they just stop ranking.
A likely reason is that Google's infrastructure and algorithms can neutralize the PageRank flowing from paid links, allowing the site to rank where it's supposed to rank without Google disrupting its business by penalizing the site. That wasn't always the case.
The recent HCU updates are a bloodbath. But the 2012 Google Penguin algorithm update was cataclysmic on a scale orders of magnitude larger than what many are experiencing today. It affected big brand sites, affiliate sites, and everything in between. Thousands and thousands of websites lost their rankings; nobody was spared.
The paid link business never returned to the mainstream status it formerly enjoyed when so-called white hats endorsed paid links based on the rationalization that paid links weren’t bad because they’re “advertising.” Wishful thinking.
Insiders at the paid link sellers informed me that a significant amount of paid links didn’t work because Google was able to unravel the link networks. As early as 2005 Google was using statistical analysis to identify unnatural link patterns. In 2006 Google applied for a patent on a process that used a Reduced Link Graph as a way to map out the link relationships of websites, which included identifying link spam networks.
If you understand the risk, have at it. Everyone else, anyone who isn't interested in burning a domain and building another one, should avoid it. Paid links are another form of redundant SEO.
The epitome of redundant SEO is the use of “follow, index” in the meta robots tag.
<meta name="robots" content="index, follow" />
This is why "index, follow" is redundant: Google's Special Tags documentation specifically says that those values aren't needed because crawling and indexing are the default behavior.
“The default values are index, follow and don’t need to be specified.”
Here's the part that's a head-scratcher: some WordPress SEO plugins add the "index, follow" robots meta tag by default. So if you use one of these SEO plugins, it's not your fault if "index, follow" is on your web page. SEO plugin makers should know better.
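If you want to check whether your pages carry this redundant directive, here's a minimal sketch using only Python's standard library `html.parser`. The function name and the exact set of values treated as "default-only" are my own illustrative assumptions, not from Google's documentation or any plugin API:

```python
from html.parser import HTMLParser


class RobotsMetaChecker(HTMLParser):
    """Collects the content attribute of every <meta name="robots"> tag."""

    def __init__(self):
        super().__init__()
        self.robots_values = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            self.robots_values.append((attrs.get("content") or "").lower())


def redundant_robots_directives(html: str) -> list[str]:
    """Return robots meta values that only restate Google's defaults.

    Per Google's documentation, "index" and "follow" are the default
    behavior, so a tag containing only those values adds nothing.
    """
    parser = RobotsMetaChecker()
    parser.feed(html)
    defaults = {"index", "follow", "all", ""}
    return [
        value for value in parser.robots_values
        if {token.strip() for token in value.split(",")} <= defaults
    ]
```

For example, `redundant_robots_directives('<meta name="robots" content="index, follow" />')` flags the tag, while a `noindex` directive, which actually changes behavior, is left alone.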
Scraping Google's search features like People Also Ask and People Also Search For can be a way to find related topics to write about. I'm not saying to avoid using Google's search features for research; that's fine. What this is about is using that data verbatim "because it's what Google likes." I've audited many sites that were hit by Google's recent updates that exact-match these keywords across their entire website, and while that's not the only thing wrong with the content, I feel it generates a signal that the site was made for search engines, something that Google warns about. In my opinion it's probably not a good idea to exact-match those keywords across an entire website or even an entire web page.
It feels like keyword spamming and building web pages for search engines, two negative signals that Google says it uses.
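To make that concrete, here is a hedged sketch of how you might measure how widely a scraped phrase is repeated verbatim across a site. The function name, the page map, and the idea of a "coverage" fraction are illustrative assumptions, not anything from Google's guidelines:

```python
def exact_match_coverage(phrase: str, pages: dict[str, str]) -> float:
    """Return the fraction of pages containing the phrase verbatim,
    case-insensitively. A value near 1.0 means the exact wording is
    repeated site-wide, the pattern the article cautions against."""
    if not pages:
        return 0.0
    needle = phrase.lower()
    hits = sum(1 for text in pages.values() if needle in text.lower())
    return hits / len(pages)


# Hypothetical example: a People Also Ask phrase pasted onto most pages.
site = {
    "/": "How do I clean white sneakers? Our guide covers it all.",
    "/guide": "How do I clean white sneakers? Step one...",
    "/about": "We write about footwear care.",
}
coverage = exact_match_coverage("how do i clean white sneakers", site)
print(f"{coverage:.2f}")  # prints 0.67 (two of three pages)
```

A high coverage number doesn't prove a problem, but it's a quick way to spot the verbatim repetition that feels like building pages for search engines rather than readers.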
Many SEO strategies begin with keyword research and end with adding keywords to content. That’s an old school way of content planning that ignores the fact that Google is a natural language search engine.
If the content is about the keyword, then yes, put your keywords in there. Use headings to describe what the content is about and titles to say what the page is about. Because Google is a natural language search engine, it should recognize your phrasing as meaning what a reader is asking about. That's what BERT is about: understanding what a user means.
The decades old practice of regarding headings and titles as a dumping ground for keywords is deeply ingrained. It’s something I encourage you to take some time to think about because a hard focus on keywords can become an example of SEO that gets in the way of SEO.
A commonly accepted SEO tactic is to analyze the competitors' top-ranked content, then use insights about that content to create the exact same content, but better. On the surface it sounds reasonable, but it doesn't take much thinking to recognize the absurdity of a strategy predicated on copying someone else's content and "doing it better." And then people ask why Google discovers their content but declines to index it.
Don't overthink it. Overthinking leads to unnecessary things like the whole author bio E-E-A-T thing the industry recently cycled through. Just use your expertise, your experience, and your knowledge to create content that you know will satisfy readers and make them buy more stuff.
When a publisher acts on the belief that ‘this is what Google likes,’ they’re almost certainly headed in the wrong direction. One example is a misinterpretation of Google’s Information Gain patent which they think means Google ranks sites that contain more content on related topics than what’s already in the search results.
That's a poor understanding of the patent, but more to the point, doing what's in a patent is generally naïve because ranking is a multi-system process; focusing on one thing will generally not be enough to get a site to the top.
The context of the Information Gain patent is ranking web pages in AI chatbots. The invention of the patent, what makes it new, is anticipating what the next natural language question will be and then having those answers ready to show in the AI search results, or showing those additional results after the original answers.
The key point about that patent is that it’s about anticipating what the next question will be in a series of questions. So if you ask an AI chatbot how to build a bird house, the next question the AI Search can anticipate is what kind of wood to use. That’s what information gain is about. Identifying what the next question may be and then ranking another page that answers that additional question.
The patent is not about ranking web pages in the regular organic search results. That's a misinterpretation caused by cherry-picking sentences out of context.
Publishing content that’s aligned with your knowledge, experience and your understanding of what users need is a best practice. That’s what expertise and experience is all about.
One of the longtime bad practices in SEO, going back decades, is when an SEO does a study of millions of search results and then draws conclusions about ranking factors in isolation. Drawing conclusions about links, word counts, structured data, and third-party domain rating metrics ignores the fact that there are multiple systems at work to rank web pages, including some systems that completely re-rank the search results.
Here’s why SEO “research studies” should be ignored:
A. Isolating one factor in a “study” of millions of search results ignores the reality that pages are ranked due to many signals and systems working together.
B. Examining millions of search results overlooks the ranking influence of natural language-based analysis by systems like BERT and the influence they have on the interpretation of queries and web documents.
C. Search results studies present their conclusions as if Google still ranks ten blue links. Search features with images, videos, featured snippets, and shopping results are generally ignored by these correlation studies, making them more obsolete than at any other time in SEO history.
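Point A can be illustrated with a toy simulation. Every number here is hypothetical, not real ranking data: in this model only a "link signal" drives the ranking score, yet word count still shows a strong correlation with the score simply because it co-varies with links. A single-factor study would wrongly conclude that word count matters:

```python
import random

random.seed(0)  # deterministic toy data

# Toy model: the ranking score is driven entirely by link signals, but
# word count happens to co-vary with links on these simulated pages.
pages = []
for _ in range(1000):
    links = random.gauss(0, 1)
    word_count = 1500 + 400 * links + random.gauss(0, 200)  # co-varies with links
    score = 10 * links + random.gauss(0, 1)                 # only links drive score
    pages.append((word_count, score))


def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    norm_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    norm_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (norm_x * norm_y)


r = pearson([p[0] for p in pages], [p[1] for p in pages])
print(f"word count vs score correlation: {r:.2f}")
# Strongly positive, even though word count has no causal effect in this model.
```

The correlation is an artifact of the shared dependence on links, which is exactly why isolating one factor across millions of results says little about how pages are actually ranked.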
It's time the SEO industry stuck a fork in search results correlation studies, then snapped the handle off.
SEO is subjective. Everyone has an opinion. It’s up to you to decide what is reasonable for you.
Featured Image by Shutterstock/Roman Samborskyi