Zentury Spotlight – Google Shares Tips for Technical SEO Checks

Google Shares Tips for Technical SEO Checks

Google shared a new video offering three tips on using Search Console to identify technical issues that could affect indexing or ranking. Here are the three technical SEO checks:

  • Check whether the page can be indexed.
  • Check whether a page is a duplicate or whether another page is the canonical version.
  • Examine the rendered HTML for coding errors.

Failing to check whether a URL can be indexed is a common mistake, and a crucial one to avoid.

The URL Inspection tool in Google Search Console is quite helpful for debugging whether Google has indexed a page. The tool tells you whether a page is indexed and whether it is indexable; if it cannot be indexed, it explains why Google might be having difficulties with it.
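For a quick spot check outside of Search Console, two common indexability blockers are a noindex robots meta tag and a noindex X-Robots-Tag response header. The following is a minimal Python sketch, not the tool from the video; it assumes the requests library is installed, and the URL and helper name are illustrative:

    import requests
    from html.parser import HTMLParser

    class RobotsMetaParser(HTMLParser):
        """Collects the content of any <meta name="robots"> tags."""
        def __init__(self):
            super().__init__()
            self.directives = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
                self.directives.append(attrs.get("content") or "")

    def check_indexability(url):
        # Hypothetical helper: reports common indexability blockers for a URL.
        response = requests.get(url, timeout=10)
        # Blocker 1: an X-Robots-Tag response header containing "noindex".
        header = response.headers.get("X-Robots-Tag", "")
        if "noindex" in header.lower():
            print(f"Blocked by X-Robots-Tag header: {header}")
        # Blocker 2: a robots meta tag containing "noindex".
        parser = RobotsMetaParser()
        parser.feed(response.text)
        for directive in parser.directives:
            if "noindex" in directive.lower():
                print(f"Blocked by robots meta tag: {directive}")

    check_indexability("https://example.com/some-page")  # illustrative URL

Note that this only catches explicit noindex signals; the URL Inspection tool also reports robots.txt blocks, crawl errors, and Google’s own indexing decision.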

Google then suggests checking whether a page is a duplicate or the canonical version. According to the video, it’s usually acceptable if Google chooses a different page as the canonical version.
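To spot-check the canonical signal your own pages declare, a small sketch like the one below extracts the rel=canonical link from a page’s HTML so you can compare it with the URL you expect to be indexed. This is an illustrative Python sketch (the URL is hypothetical), and it only shows the canonical declared in the source; Google’s chosen canonical, which the URL Inspection tool reports, can differ:

    import requests
    from html.parser import HTMLParser

    class CanonicalParser(HTMLParser):
        """Collects the href of any <link rel="canonical"> tags."""
        def __init__(self):
            super().__init__()
            self.canonicals = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
                self.canonicals.append(attrs.get("href") or "")

    url = "https://example.com/some-page"  # illustrative URL
    parser = CanonicalParser()
    parser.feed(requests.get(url, timeout=10).text)
    print(parser.canonicals or "No rel=canonical link found")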

Google also cautions against confusing inspection of the HTML source code with inspection of the rendered HTML. Rendered HTML is the HTML as it exists after the browser or Googlebot has executed the page’s JavaScript and built the page.

Examining the rendered HTML shows you what the browser and Googlebot actually see at the code level, which is helpful if you’re trying to determine whether there’s a problem with the HTML.
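One way to see the difference yourself is to fetch the same page twice: once as the raw source the server sends, and once through a headless browser that executes the page’s JavaScript. Here is a minimal sketch assuming the requests and Playwright packages are installed (the URL is hypothetical):

    import requests
    from playwright.sync_api import sync_playwright

    url = "https://example.com/some-page"  # illustrative URL

    # Source HTML: what the server sends, before any JavaScript runs.
    source_html = requests.get(url, timeout=10).text

    # Rendered HTML: the DOM after a headless browser executes the page.
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        rendered_html = page.content()
        browser.close()

    print(f"Source: {len(source_html)} chars, rendered: {len(rendered_html)} chars")

On JavaScript-heavy sites the two can differ substantially, which is exactly why inspecting only the source HTML can be misleading.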

Google Accused of “Stealing” Publisher’s Content

A publisher took to Twitter to share a situation in which they saw little to no value for themselves in what they perceived as Google effectively stealing their content.

The publisher shared a screenshot of a branded search feature that let users look for activities in Denver using content taken straight from their website.

The publisher was mainly upset by Google’s use of their original content, including personal photographs, without driving traffic to their site.

Publishers and SEOs were probably unprepared for Google’s response to the situation.

In response, Danny Sullivan (Google’s SearchLiaison) explained the situation, describing how the rich result that uses the publisher’s material also includes a link to the publisher’s webpage.

SearchLiaison wisely chose not to argue that Google was correct. Rather, they responded with empathy for the publisher’s situation.

Unlike many Google employees, Danny Sullivan was a publisher for many years before joining the company, so SearchLiaison probably understood the publisher’s sentiments. He has perhaps more experience of being on the other side of Google’s fence than any other employee.

Legal definitions of fairness exist, and within them Google may be able to use content from websites in ways that outrank publishers with their own content while leaving them feeling “ripped off.”

However, there’s also an individual, common-sense interpretation of fair play that comes from the heart. Perhaps it’s the sense of injustice that many publishers experience when Google uses their content in a way that seems to favor Google over the publisher.

Google Updates Guidelines For Ranking And Spam Systems

Google has updated its web search spam policy and its guide to ranking systems to clarify how it handles sites that host a significant volume of non-consensual explicit images and accumulate requests for their removal.

The updated policy specifically names websites that charge for the removal of unfavorable content, and the guidance adds that Google will also demote content on other websites that follow the same pattern of behavior.

As a result, when one website is reported, other websites that use similar exploitative removal techniques may be demoted.

In a similar modification to Google’s Guide to Search Ranking Systems, the identical phrase about the “automatic protections” was removed entirely, most likely because it was redundant.

The final statement, which outlines the criteria for removal or demotion from Google’s search results, was also reworded.

Sites accumulating numerous requests for “non-consensual explicit imagery removals” are also subject to potential demotion.

Google’s Take On SEO Impact of Double Slashes in URLs

In an Office Hours hangout, Google’s Gary Illyes answered the question of whether a double forward slash in a URL affects a website’s search engine optimization.

A double forward slash can have several causes, most commonly a coding error in the CMS or in the .htaccess file, and it can result in the same page being served at multiple URLs.

It’s a bothersome issue, and rewriting the URL with an .htaccess rule to strip the extra forward slash typically doesn’t resolve it, because that doesn’t address the root cause.

The only practical solution is to find the precise point at which the web server introduces the double forward slashes into the URLs.
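As an illustration of the kind of root cause to look for, one common culprit is naive string concatenation between a base URL that ends in a slash and a path that begins with one. This hypothetical Python sketch shows the bug and a fix at the source:

    from urllib.parse import urljoin

    BASE_URL = "https://example.com/"   # hypothetical site root, ends in a slash
    path = "/things-to-do/denver"       # path from the CMS, starts with a slash

    # Buggy: naive concatenation produces a double slash in the path.
    print(BASE_URL + path)          # https://example.com//things-to-do/denver

    # Fixed at the source: urljoin collapses the duplicate separator.
    print(urljoin(BASE_URL, path))  # https://example.com/things-to-do/denver

Fixing the URL construction itself, rather than rewriting the bad URLs afterward, removes the duplicate URLs at their origin.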

Website usability is crucial because poor usability annoys and frustrates users, which can erode a site’s popularity over time. This can then indirectly affect visibility: people stop visiting the website, no one recommends it, and other websites become reluctant to link to it.

One may argue that crawler confusion directly affects SEO. Making a website simple to navigate and crawl is good practice, so anything that throws off crawlers should be fixed right away.

Google’s Insights on Domain Age Impacting Ranking

SEOs have long observed that higher rankings correlate with older domain names.

A question about whether domain age affects search rankings was posed on X (formerly Twitter), and Google’s John Mueller explained the correlation.

SEOs have believed for nearly twenty years that domain age is significant, and even a ranking factor.

The idea may have been inspired by a patent Google filed on information retrieval based on historical data.

The patent discussed domains with reference to historical data. However, SEOs misunderstood what the patent actually said and interpreted it completely wrong.

An entire section of the patent, labeled “Domain-Related Information,” describes using domain-related information to identify spam sites.

Identifying spam sites is not the same as giving additional ranking points to a domain that has been registered for a long period.

The patent goes on to note that, in contrast to conventional sites, throwaway domains are not often registered for lengthy periods of time.

This is the part of the patent that SEOs misinterpret: the registration data is used to locate spam websites, not to rank “legitimate” domains.

John Mueller is telling the truth: domain age isn’t taken into account in ranking, and no patent suggests otherwise.
