Google Confirms Alt Text Is Not Primarily An SEO Decision

Google's John Mueller shared Jeffrey Zeldman's Bluesky post reminding publishers and SEOs of proper alt text usage, including a link to the W3C decision tree for guidance. The most important takeaway is that the decision process for alt text is not primarily an SEO decision.

The W3C (World Wide Web Consortium) is an international standards-making body for the web. Much of the guidance Google provides about how Googlebot crawls HTML and treats server response codes is based on web standards developed by the W3C, so it's always a good idea to go straight to the source to understand exactly how to deploy HTML features like alt text; doing it the right way will very likely align with the same standards Google is using.

A decision tree is a decision-making tool or diagram that asks a yes-or-no question. If the answer is "no," the tree leads to another branch; answering "yes" leads to a node that advises what to do. The purpose of the W3C alt text decision tree is to guide publishers and SEOs on the proper use of alt text, which is for accessibility.

The decision tree that Zeldman linked to has five questions:

1. Does the image contain text?
2. Is the image used in a link or a button, and would it be hard or impossible to understand what the link or the button does if the image weren't there?
3. Does the image contribute meaning to the current page or context?
4. Is the image purely decorative or not intended for users?
5. Is the image's use not listed above, or is it unclear what alt text to provide?

John Mueller reposted it on Bluesky with the additional insight that the decision-making process for alt text is not "primarily" an SEO decision, meaning accessibility should be the first consideration when deciding how to use alt text.

Stay connected for more updates! #Googleupdates #SEO #LatestUpdate #Newsfeed #DigitalMarketing
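As a rough illustration of that accessibility-first logic in practice, here is a minimal Python sketch (assuming the requests and beautifulsoup4 libraries; the URL is a placeholder) that flags images whose alt handling may need a manual review against the decision-tree questions:

```python
# Minimal sketch: flag <img> elements whose alt handling may need review,
# following the decision-tree logic (descriptive alt for meaningful images,
# empty alt="" only for purely decorative ones). Assumes requests + beautifulsoup4.
import requests
from bs4 import BeautifulSoup

def audit_alt_text(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for img in soup.find_all("img"):
        src = img.get("src", "(no src)")
        if not img.has_attr("alt"):
            # Missing alt entirely: fails both accessibility and the decision tree.
            print(f"MISSING alt: {src}")
        elif img["alt"].strip() == "":
            # Empty alt is correct only for purely decorative images.
            print(f"Empty alt (decorative? verify): {src}")
        elif img.find_parent("a") is not None:
            # Images inside links should describe the link's destination or function.
            print(f"Linked image, alt='{img['alt']}': {src}")

audit_alt_text("https://example.com/")  # placeholder URL
```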
Time Booster Marketing’s Post
More Relevant Posts
How can I fix a page that has a canonical to a different URL?

When a URL is "canonicalised" to another location, search engines are being signalled that it is not the preferred version of the page and that its indexing and linking signals should be consolidated to the URL given in the canonical tag. To address pages with a canonical tag pointing to a different URL, follow these steps:

1. Verify canonical tags: Check that canonical tags are correctly implemented on your web pages and that each page has a canonical tag pointing to the preferred URL.
2. Identify correct canonical URLs: Determine the correct canonical URL for each page. This should typically be the URL you want search engines to index and rank.
3. Update canonical tags: If the canonical tags point to incorrect URLs, update them to point to the correct canonical URLs. Ensure consistency across your website.
4. 301 redirects (if necessary): If multiple URLs serve the same content, implement 301 redirects to send users and search engines to the canonical URL. This helps consolidate indexing and linking signals to the preferred URL.
5. XML sitemap update: After updating the canonical tags and implementing redirects, update your XML sitemap to reflect the changes. This helps search engines discover the correct URLs efficiently.
6. Internal linking: Review internal links within your website to ensure they point to the canonical URLs. Internal linking helps search engines understand the structure of your website and reinforces the authority of canonical URLs.
7. Monitor and test: Regularly monitor your website's performance in search results and use tools like Google Search Console to identify any remaining issues. Test different search queries to confirm that the correct URLs are being indexed and displayed.
8. Address external links (if applicable): If external websites link to non-canonical URLs, consider reaching out and asking them to update their links to the canonical URLs. Alternatively, use redirects to ensure external traffic reaches the correct URLs.

#seofix #seo #digitalmarketer
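For the verification step, a minimal Python sketch of a canonical check (assuming requests and beautifulsoup4; the URL list is a placeholder) that reports which pages are canonicalised to a different URL:

```python
# Minimal sketch: report each URL's rel=canonical target so pages that are
# canonicalised to a different URL can be reviewed. Assumes requests + beautifulsoup4.
import requests
from bs4 import BeautifulSoup

def canonical_report(urls):
    for url in urls:
        resp = requests.get(url, timeout=10)
        soup = BeautifulSoup(resp.text, "html.parser")
        tag = soup.find("link", rel="canonical")
        canonical = tag.get("href") if tag else None
        if canonical is None:
            print(f"{url} -> no canonical tag")
        elif canonical.rstrip("/") != url.rstrip("/"):
            print(f"{url} -> canonicalised to {canonical}")
        else:
            print(f"{url} -> self-canonical (OK)")

canonical_report([
    "https://example.com/page-a",            # placeholder URLs
    "https://example.com/page-a?ref=nav",
])
```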
New Post: The Expert SEO Guide To URL Parameter Handling via @sejournal, @jes_scholz - https://lnkd.in/gkwX-WHF - URL parameters can be an SEO nightmare. Read how to handle them to improve crawling, indexing and organic performance on Google surfaces. The post The Expert SEO Guide To URL Parameter Handling appeared first on Search Engine Journal. - #news #business #world #jobs #school #passion
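As a rough sketch of one common parameter-handling tactic, here is a standard-library Python snippet that collapses tracking and reordered parameters to a single URL (the parameter blocklist is an illustrative assumption, not an official or exhaustive list):

```python
# Minimal sketch: strip known tracking parameters and sort the rest so that
# parameter variants collapse to one crawl-friendly URL.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "gclid", "fbclid"}

def normalise_url(url):
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k not in TRACKING_PARAMS]
    query.sort()  # stable ordering so ?a=1&b=2 and ?b=2&a=1 normalise identically
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(query), ""))

print(normalise_url("https://example.com/shoes?utm_source=news&size=9&color=red"))
# -> https://example.com/shoes?color=red&size=9
```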
Understanding Crawled but Not Indexed Errors: Insights from Google's Gary Illyes

At the SERP Conf 2024 conference in Bulgaria, Google's Gary Illyes shed light on a common issue many webmasters face: "Crawled but not Indexed" errors in Google Search Console. Although the interview took place in May, its insights remain highly relevant, especially since it went underreported until Olesia Korobka recently highlighted it on Facebook.

What is "Crawled but Not Indexed"? This error indicates that while Google has crawled a page, it has chosen not to index it. This can be perplexing for site owners looking to improve their SEO, and understanding the reasons behind it helps in debugging and fixing the issue.

Key reasons for crawled but not indexed:

- Duplicate content: One primary reason Gary highlighted is content similarity. If Google finds similar content already indexed, it might skip indexing the new content. This includes both content on the same site and syndicated content that exists on other sites with stronger signals.
- Site quality issues: The overall quality of the site plays a significant role. A high number of "crawled but not indexed" pages could indicate general quality issues, and Google's perception of a site's quality can change, affecting the indexing rate of its URLs.
- Technical issues: Technical errors can also cause this problem. For instance, if a site mistakenly serves the same content across multiple URLs, Google may choose not to index those pages. Other causes include server errors or incorrect configurations.

Takeaways for webmasters. Understanding these reasons can guide you in addressing the "crawled but not indexed" issue:

- Ensure unique content: Avoid duplicate content on your site. If you syndicate content, make sure your site offers unique value.
- Improve site quality: Regularly audit your site for quality, focusing on both content and technical performance.
- Monitor technical health: Regularly check for and fix any technical issues that might affect how Google perceives and indexes your site.

Gary Illyes' insights emphasize the need for a comprehensive approach to SEO that considers both content and technical factors. By addressing these areas, you can improve your chances of getting your pages indexed and ranked in Google.

Watch the full interview for more in-depth insights on Serpact's YouTube channel: Gary Illyes Answers Questions About Google and SEO, SERP Conf. 2024.

#SEO #GoogleSearchConsole #DigitalMarketing #SEOTips #WebDevelopment #ContentMarketing #TechnicalSEO #SiteQuality #IndexingIssues #DuplicateContent #TechErrors #GaryIllyes #SERPConf2024 #SearchEngineOptimization #Webmasters #SiteCrawl #GoogleBot #SEOInsights #WebsiteManagement #digitaldaddy
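As a first-pass illustration of the duplicate-content point, a minimal Python sketch (assuming requests and beautifulsoup4; URLs are placeholders) that hashes page text to spot identical content served from multiple URLs. Exact-hash matching only catches identical text, so treat it as a starting point rather than a full similarity check:

```python
# Minimal sketch: hash the visible text of a set of URLs to spot identical
# pages that Google might crawl but decline to index as duplicates.
import hashlib
import requests
from bs4 import BeautifulSoup

def find_exact_duplicates(urls):
    seen = {}
    for url in urls:
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        text = " ".join(soup.get_text(separator=" ").split())
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:
            print(f"Duplicate text: {url} matches {seen[digest]}")
        else:
            seen[digest] = url

find_exact_duplicates([
    "https://example.com/article",            # placeholder URLs
    "https://example.com/article?print=1",    # same content, different URL
])
```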
Google Confirms That Alt Text Is Not Primarily an SEO Decision ☝️ Google's John Mueller explains how alt text decisions should be made, noting that they are not primarily about SEO. Read the details in the article 👈
💥Google's Cache Update💥 #Google has recently made a substantial change to its cache feature in #search #results. While this update might seem minor at first glance, it has significant implications for both #SEO professionals and users who relied on cached pages to view older versions of websites. What's Changed❓ 1️⃣ No Instant Access to Older Versions: Accessing older snapshots of webpages through cached results is now more difficult. This means that users won't be able to quickly view previous versions of a website. 2️⃣ SEO Auditing Becomes More Manual: Tracking changes or diagnosing indexing issues will require more manual effort, as quick access to older cached content is reduced. 3️⃣ Stay Updated: Website owners need to ensure their pages are fully #optimized and up-to-date to stay visible in #search results without relying on cached versions. How Will This Affect Your SEO Strategy❓ The #SEO landscape is constantly evolving. This update underscores the importance of: ✔Regular Content Updates: Keep your website's content fresh and relevant. ✔Thorough SEO Audits: Conduct regular #audits to identify and address any issues. ✔Monitoring Search Engine Algorithm Changes: Stay informed about the latest updates and adjust your strategy accordingly. By adapting to these changes, you can ensure that your website remains competitive and visible in search results.
Google On The SEO Impact Of 503 Status Codes via @sejournal, @MattGSouthern Google says brief 503 downtimes are acceptable, but extended unavailability can impact crawling. The post Google On The SEO Impact Of 503 Status Codes appeared first on Search Engine Journal. 𝗔𝗴𝗿𝗲𝗲 𝗼𝗿 𝗱𝗶𝘀𝗮𝗴𝗿𝗲𝗲? 𝗖𝗼𝗺𝗺𝗲𝗻𝘁 𝘆𝗼𝘂𝗿 𝗽𝗲𝗿𝘀𝗽𝗲𝗰𝘁𝗶𝘃𝗲!
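For context, a minimal Flask sketch of the common practice of returning a temporary 503 with a Retry-After header during short planned maintenance (Flask is assumed; the retry window is a placeholder value):

```python
# Minimal sketch: serve planned maintenance as a 503 so crawlers treat the
# outage as temporary; Retry-After hints when to come back.
from flask import Flask, Response

app = Flask(__name__)
MAINTENANCE = True  # toggle during deploys

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def serve(path):
    if MAINTENANCE:
        return Response("Down for maintenance", status=503,
                        headers={"Retry-After": "3600"})
    return "Normal content"

if __name__ == "__main__":
    app.run()
```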
Google has just 𝐮𝐩𝐝𝐚𝐭𝐞𝐝 𝐭𝐡𝐞 𝐜𝐚𝐜𝐡𝐞 𝐟𝐞𝐚𝐭𝐮𝐫𝐞 in search results!🔍

This update might seem #small, but it holds #significant implications for both SEO #professionals and users who relied on #cached pages to view older versions of #websites. So, what #changes can we expect?

1. No Instant Access to Older Versions⏳: Accessing older #snapshots of #webpages through cached results will now be more #challenging.
2. SEO Auditing Becomes Manual-Heavy🛠️: Tracking changes or diagnosing #indexing issues will require more #manual effort, as quick access to older cached content is reduced.
3. Stay Updated🔄: Website owners need to ensure their pages are fully #optimized and up-to-date to stay visible in search results without relying on #cached versions.

The #SEO landscape continues to evolve. How will this affect your approach?

#googleupdate #seo #searchengineoptimization #cachefeature #digitalmarketing #seotrends #googlealgorithm #websiteoptimization #seostrategy #technews #searchresults #webdevelopment #seoupdates #googlesearch #websiteaudit
🚨 SEO Alert 🚨

Struggling with "Crawled but Not Indexed" errors in Google Search Console? You're not alone! Back in May, Google's Gary Illyes shared valuable insights at SERP Conf 2024 in Bulgaria. He explained the main reasons why your pages might be crawled but not indexed, and how to fix them:

1. Duplicate Content: Similar content already exists in the index.
2. Site Quality Issues: Poor overall site quality can impact indexing.
3. Technical Errors: Server errors or incorrect configurations might be to blame.

Understanding these issues can help you debug and improve your site's performance. 🌐🔧 Check out the detailed illustration for more insights!

#SEO #GoogleSearchConsole #WebDevelopment #DigitalMarketing #SEOTips #IndexingIssues #DuplicateContent #SiteQuality #TechnicalErrors #SERPConf2024
📢 🔍 Common Technical SEO Problems:

1. Page Speed
Problem: Pages load slowly, leading to a poor user experience and lower rankings (Google prioritizes fast-loading pages).

2. Mobile-Friendly Website
Problem: Your site isn't responsive or optimized for mobile devices, which can lead to poor rankings in Google's mobile-first index.

3. Duplicate Content
Problem: Identical or similar content appears on multiple pages, confusing search engines and diluting rankings.

4. Broken Links (404 Errors)
Problem: Internal or external links lead to non-existent pages, hurting user experience and crawlability.

5. Improper URL Structure
Problem: URLs are overly complex, stuffed with parameters, or lack keywords, making them hard for users and search engines to understand.

6. Poor Crawlability
Problem: Search engines cannot fully crawl your website due to issues like blocked resources or poorly structured sitemaps.

7. Indexing Issues
Problem: Certain pages are not being indexed by search engines.

8. Lack of HTTPS
Problem: Your site uses HTTP instead of HTTPS, making it less secure and potentially lowering your rankings.

9. Missing or Improper Structured Data
Problem: Search engines can't understand your content well enough to display rich snippets.

10. Thin Content
Problem: Pages with little or no valuable content negatively affect rankings.

11. Improper Linking
Problem: A poor internal link structure makes it hard for search engines to understand the hierarchy and importance of pages.

12. Pagination Issues
Problem: Improper handling of pagination makes it hard for search engines to crawl all pages.

13. Overlooked Meta Tags
Problem: Missing or duplicate title tags and meta descriptions can harm rankings.

#OnlineMarketing #technicalseoproblem #technicalseo #technicalseoseolving #technicalseoservices
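A minimal Python sketch of a first-pass check for a few of the items above, such as HTTPS, status codes, and meta tags (assuming requests and beautifulsoup4; the URL list is a placeholder, and this is not a substitute for a full audit):

```python
# Minimal sketch: flag a handful of basic technical SEO issues per URL.
import requests
from bs4 import BeautifulSoup

def quick_audit(urls):
    for url in urls:
        issues = []
        if not url.startswith("https://"):
            issues.append("not served over HTTPS")
        resp = requests.get(url, timeout=10)
        if resp.status_code >= 400:
            issues.append(f"returns HTTP {resp.status_code}")
        soup = BeautifulSoup(resp.text, "html.parser")
        if not soup.title or not soup.title.get_text(strip=True):
            issues.append("missing <title>")
        if not soup.find("meta", attrs={"name": "description"}):
            issues.append("missing meta description")
        print(url, "->", "; ".join(issues) or "no basic issues found")

quick_audit(["https://example.com/", "http://example.com/old-page"])  # placeholders
```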