June 2023 Google SEO office hours

Summary

This article is a transcript of the June 2023 edition of the Google SEO Office Hours, where questions about Google SEO are asked and answered. Questions include how to keep syndicated versions of content from appearing in Google Discover, whether it is okay to have two domains with different TLDs targeting the same country for the same keywords, and whether Lighthouse JavaScript warnings have any influence on page rating or ranking. The answers explain how to block Googlebot from crawling certain sections, the difference between an XML sitemap and an HTML sitemap, how Google treats structured data with parsing errors, and more.

Q&As

How can we avoid our syndicated content appearing in Google Discover?
We recommend making sure that the syndicated versions also include a noindex robots meta tag.
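For example, the syndication partner would add something like this to the <head> of each syndicated page (a minimal sketch; exactly where it goes depends on their templates):

    <meta name="robots" content="noindex">

This keeps the syndicated copy out of Google's index, and therefore out of Discover, while leaving the original article unaffected.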

Is it okay for two domains with different TLDs to target the same country for the same keywords?
My gut reaction is to ask whether this would be confusing for your users: two domains, each with presumably the same content, might well be. From a policy perspective this could also look like search result manipulation, so I'd check out Google's spam policies.

Do Lighthouse JavaScript warnings have any influence on page rating or ranking?
No. Generally, warnings like these don't have any influence on ranking.

How do I block Googlebot from crawling a specific section of a web page?
The short version is that you can't block crawling of a specific section on an HTML page. There are two similar things though: you can use the data-nosnippet HTML attribute to prevent text from appearing in a search snippet, or you could use an iframe or JavaScript whose source is blocked by robots.txt.
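To illustrate both workarounds (the paths here are hypothetical):

    <!-- Keep this text out of search snippets; the page itself is still crawled and indexed -->
    <div data-nosnippet>Text that should not appear in a Google search snippet.</div>

    <!-- Load the section from a URL that robots.txt disallows, so its content is never crawled -->
    <iframe src="/uncrawled/section.html"></iframe>

with a matching rule in robots.txt:

    User-agent: Googlebot
    Disallow: /uncrawled/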

Does the integration of security headers such as for HSTS have a ranking influence?
No, the HSTS header does not affect Search. This header tells browsers to access the HTTPS version directly, and it is commonly used together with redirects to the HTTPS version. Google uses a process called canonicalization to pick the most appropriate version of a page to crawl and index; it does not rely on headers like the one used for HSTS.
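For reference, the header itself looks like this on an HTTPS response (the one-year max-age shown is a common choice, not a requirement):

    Strict-Transport-Security: max-age=31536000; includeSubDomains

It is typically paired with a 301 redirect from the http:// URL to its https:// counterpart; per the answer above, neither is a ranking factor, but the redirect is one of the signals canonicalization uses to pick the HTTPS version.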

AI Comments

๐Ÿ‘ Martin provided a thorough and well-informed response to the questions asked.

👎 John's response was too vague and didn't provide a sufficient answer to the question.

AI Discussion

Me: It covers the June 2023 edition of the Google SEO Office Hours and provides useful information, such as how to prevent syndicated content from appearing in Google Discover, how to block Googlebot from crawling a specific section of a web page, and the difference between an XML sitemap and an HTML sitemap. It also discusses SEO topics such as index bloat, numbers in URLs, and structured data with parsing errors.

Friend: Wow, that's a lot of information. What are the implications of the article?

Me: For site owners, the implications are practical. The advice can be used to optimize websites for better search engine ranking: keeping syndicated copies out of Google Discover with a noindex robots meta tag, using robots.txt to block Googlebot from accessing certain parts of a website, and making sure that the URLs you submit are properly indexed by Google.

Technical terms

Canonicalization
The process of picking the most appropriate version of a page to crawl and index.
Robots meta tag
A tag used to tell search engine crawlers how to handle a page.
Index Bloat
The idea that having many low-value pages indexed harms a site; Google has said it has no such concept and does not artificially limit the number of pages indexed per site.
Robots.txt
A file used to tell search engine crawlers which pages to crawl and which to ignore.
Structured Data
Data that is organized in a specific way to make it easier for search engines to understand.
HSTS Header
A header that tells browsers to access the HTTPS version of a site directly.
Sitemap
A file, typically XML, that tells search engines where a site's content is located (see the example after this list).
HTML Sitemap
A page used to help users navigate a website.
Host Groups
A group of search results from the same domain.
NOODP
A robots meta tag used to tell search engines to ignore the description of a page from the Open Directory Project.
Main Content
The most important content on a page.
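To make the sitemap entries above concrete: an XML sitemap is a machine-readable file like the following (the URL and date are placeholders), while an HTML sitemap is simply an ordinary page of links meant for human visitors.

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://example.com/page-1</loc>
        <lastmod>2023-06-01</lastmod>
      </url>
    </urlset>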

Similar articles

SEO Buyers Guide

Google: Links No Longer A Top Three Ranking Signal

The incredible ways to get SEO traffic without ever ranking

The Truth About Multiple H1 Tags and SEO

Technical SEO report reveals what matters in 2023

๐Ÿ—ณ๏ธ Do you like the summary? Please join our survey and vote on new features!