Zentury Spotlight - Google Shares Advice on Linking From Non-English to English Language Websites

Google Shares Advice on Linking From Non-English to English Language Websites

If you have ever wondered whether linking from non-English to English-language websites is harmful, Google’s John Mueller has shared his opinion on the subject.

He joined an insightful conversation on Mastodon regarding linking between sites in different languages and whether the language difference could impact rankings.

A common belief in the SEO world is that links from a non-English-language site may affect an English-language website’s ability to rank regionally in English-speaking countries. To start the conversation, a website owner asked whether such links could help his English sites rank.

In response, John Mueller stated that links from sites published in a different language are not harmful, dismissing the notion outright. He added that he has heard for years that links from sites in other languages do work, and that it makes sense for a link from a site in a different language to be beneficial, because such links occur naturally.

Google does not allow naturally occurring linking patterns to have a detrimental impact on rankings. The reciprocal link, in which two sites link to each other, is one example of such a naturally occurring pattern.

Google Announced Updates for Search Generative Experience

Google released a new YouTube video encouraging more people to try the Search Generative Experience via Search Labs. Google SearchLiaison also tweeted that Google has made various enhancements to SGE in the last two weeks.

Google’s Search Generative Experience (SGE) will continue to grow, broadening and diversifying its source materials and pointing to a more generative future for search experiences.

The shift from relying mostly on publisher websites to also incorporating brand websites in search results gives consumers a greater range of information and more direct access to it, allowing them to make better decisions.

The incorporation of social media as a source in SGE responses illustrates Google’s understanding of these platforms’ relevance in supplying real-time, user-generated data to give more relevant answers.

Given Google’s continuous emphasis on monetizing search results, including sponsored results above SGE answers isn’t altogether surprising.

Despite these revisions, source accuracy and the projected performance gains aren’t uniform across all queries, indicating that more work remains to be done.

Google SGE’s future is one of continuous evolution, with the goal of making search experiences more comprehensive, engaging, and quicker.

Google on the “Index Bloat” Theory

The phrase “index bloat” refers to the circumstance in which search crawlers index pages that are unsuitable for search results.

These include filtered product pages, internal search results, printer-friendly versions of pages, and similar low-value URLs.

Proponents of the index bloat theory say that these pages make it harder for search engines to interpret a website, lowering its search rankings.

Index bloat supporters frequently mention causes such as unintentional page duplication, improper robots.txt files, and poorly performing or thin content.

Google, on the other hand, maintains that these aren’t sources of a non-existent “index bloat,” but rather common SEO issues that webmasters and SEO specialists should be aware of.
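
Whether or not one calls it index bloat, keeping low-value pages out of search results is routine SEO housekeeping. A minimal sketch of the usual approach, in which the /search/ path and filter parameter are illustrative assumptions rather than anything Google prescribed:

    # robots.txt — example paths; adjust to your site's URL structure
    User-agent: *
    # Keep internal search result pages out of the crawl
    Disallow: /search/
    # Keep filtered product listings out of the crawl
    Disallow: /*?filter=

Printer-friendly pages are better handled with a robots meta tag such as <meta name="robots" content="noindex"> in the page’s head, because robots.txt only blocks crawling, and a blocked URL can still be indexed if other pages link to it.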

John Mueller, a Google Search Advocate, dismisses the index bloat theory, which holds that extensive indexing of pointless pages can harm a site’s search performance.

Mueller claims that Google does not put an artificial cap on the number of pages indexed per site.

Mueller also added that it is more efficient to spend time creating valuable content rather than worrying about pages being left out of Google’s index.

Google on Googlebot’s Interaction With Your Website

On the most recent episode of the “Search Off The Record” podcast, Google’s Search Relations team answered several questions about webpage indexing.

The issues covered included how to stop Googlebot from visiting a website entirely as well as how to stop Googlebot from crawling specific sections of a page.

When asked how to prevent Googlebot from crawling particular portions of a web page, such as “also bought” sections on product pages, Mueller responded that it is impossible.

He went on to outline two possible workarounds, neither of which, he said, is ideal. Mueller suggested using the HTML data-nosnippet attribute to prevent text from showing up in a search snippet. As an alternative, you could load the content with JavaScript or from an iframe whose source is blocked by robots.txt, though he advised against that.
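
As a rough illustration of the first approach, the markup below is a hypothetical “also bought” block, not code from the podcast:

    <!-- data-nosnippet keeps this text out of Google's search snippets,
         though Googlebot still crawls and may index it -->
    <section data-nosnippet>
      <h2>Customers also bought</h2>
      <ul>
        <li><a href="/products/widget-2">Widget 2</a></li>
      </ul>
    </section>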

He reassured everyone that if the content in question is simply repeated across several pages, it is not an issue that needs to be fixed.

Illyes offered a simple solution for blocking Googlebot from a website entirely. He emphasized that the easiest way is robots.txt: if “you add a disallow: / for the Googlebot user agent, Googlebot will leave your site alone for as long as you keep that rule there.”
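
Written out, the rule Illyes describes is just two lines of robots.txt:

    # Googlebot stops crawling the entire site while this rule is in place
    User-agent: Googlebot
    Disallow: /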

Google Shares Whether Security Headers Have a Role in Ranking

One of the leading questions on a recent Google SEO Office Hours was whether security headers have any impact on rankings.

It’s not as far-fetched as it initially seems: HTTPS is a minor Google ranking factor, and security headers such as the HSTS header are crucial to ensuring a secure HTTPS connection.

According to John Mueller, Googlebot doesn’t use the HSTS header; it is an instruction to browsers. Nevertheless, whether or not they have a ranking impact, good security practices are something every website should adhere to.

Major browsers default to HTTPS for known sites by consulting an HSTS preload list maintained by the Chrome project, which is hard-coded into the browser.
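
For reference, HSTS is delivered as an HTTP response header. A policy along the following lines is what the preload list expects, with a max-age of at least one year; the exact requirements are documented at hstspreload.org:

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload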
