If Google cannot crawl your product pages, collection pages, or blog posts, your robots.txt file might be the reason. This guide shows you exactly where to look, what to change, and how to confirm the fix is working.
How do you fix Shopify robots.txt blocking important pages?
- Visit yourstore.com/robots.txt and read every Disallow rule to see if any of them match your product, collection, or blog page URLs.
- Check Google Search Console under Indexing and then Pages for any URLs showing the status “Blocked by robots.txt.”
- On Shopify Plus, edit the robots.txt.liquid file in your theme code to remove or correct the problematic rule. On standard Shopify plans, contact the app that added the rule or use theme code to adjust it.
- After fixing the file, use the URL Inspection tool in Google Search Console to confirm access is restored and request indexing for each affected page.
Your Shopify store can look completely fine from the outside while Google is quietly blocked from reaching some of your most important pages. There are no error messages in your admin dashboard. Products load normally for customers. Everything seems to be working. But somewhere in your robots.txt file, a Disallow rule is telling Google’s crawler to stay away from URLs that should absolutely be indexed.
This problem is more common than most store owners realize, and it tends to get worse over time as more apps get installed and more rules get added to the file without anyone reviewing them. This guide walks you through exactly how to find the problem, understand what it means, and fix it the right way, whether you are on a standard Shopify plan or Shopify Plus.
What robots.txt does and why it matters for Shopify SEO
Robots.txt is a plain text file that lives at the root of your domain. When Google’s crawler visits your store for the first time or returns for a regular crawl, one of the first things it does is fetch this file and read the instructions inside. Those instructions tell the crawler which pages and directories it is allowed to access and which ones it should skip entirely.
The file uses a simple format. A rule that starts with “Disallow:” followed by a URL path tells the crawler not to visit any URL that begins with that path. A rule that starts with “Allow:” explicitly permits access to a path even if a broader Disallow rule would otherwise block it. The rules apply to the crawlers listed under the “User-agent:” line above them.
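For example, a minimal file using all three directives could look like this (the paths are purely illustrative):

```txt
# These rules apply to every crawler, including Googlebot
User-agent: *
# Block any URL whose path begins with /private/
Disallow: /private/
# Explicitly re-permit one path the Disallow above would otherwise cover
Allow: /private/press-kit/
```

Note that rules match by prefix: `Disallow: /private/` blocks every URL whose path starts with `/private/`, not just a single page.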
Why this creates problems on Shopify stores
Shopify generates a robots.txt file automatically for every store. The default rules are generally sensible. They block crawlers from accessing things like the checkout, the cart, the account login pages, and various internal admin paths. None of those pages need to be in Google’s index, so blocking them is the right call.
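Shopify's auto-generated file contains rules along these lines (an illustrative excerpt, not the exact contents, which vary by store and change over time):

```txt
User-agent: *
Disallow: /admin
Disallow: /cart
Disallow: /checkout
Disallow: /account
Sitemap: https://yourstore.com/sitemap.xml
```

All of these defaults point at transactional or private paths, which is why the stock file rarely causes indexing problems on its own.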
The problem starts when the file changes without you knowing about it. Shopify apps are a very common cause of this. When you install a new app, it sometimes writes its own Disallow rules into your robots.txt to protect its internal pages from being crawled. That is reasonable in theory, but the rules occasionally end up broader than intended and accidentally block paths that contain your real store content.
Theme updates can also modify robots.txt behavior, especially on Shopify Plus where the robots.txt is editable through the theme editor. If someone on your team edited the file to try to solve one problem and introduced a new rule in the process, you would not necessarily see the impact right away. The damage tends to show up gradually as crawl coverage drops and pages fall out of the index one by one.
How to tell if robots.txt is blocking your pages
There are two reliable ways to find out whether your robots.txt file is responsible for your pages not being crawled or indexed. Using both of them together gives you a complete picture.
Method one: Read the robots.txt file directly
Open a browser and go to yourstore.com/robots.txt, replacing “yourstore” with your actual domain. Read through every line of the file. The rules that matter most are the ones listed under “User-agent: *” because the asterisk means they apply to all crawlers, including Googlebot.
For each Disallow rule you see, ask yourself whether the path it blocks could match any of your important pages. For example, a rule that says Disallow: /collections/ would block Google from crawling all of your collection pages. A rule that says Disallow: /blogs/ would block all of your blog posts. A rule that appears harmless because it targets an app-specific path like /apps/someplugin/ is usually fine, but a rule that blocks a broader path like /pages/ could be cutting off pages you rely on for organic traffic.
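Instead of matching rules against paths by eye, you can test them programmatically with Python's standard-library robots.txt parser. A sketch, using made-up robots.txt content and URLs (paste in your store's real file and real page URLs):

```python
from urllib import robotparser

# Illustrative robots.txt content -- substitute the actual output of
# yourstore.com/robots.txt for your own store.
ROBOTS_TXT = """\
User-agent: *
Disallow: /checkout
Disallow: /cart
Disallow: /collections/
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Pages that should stay crawlable on a typical Shopify store.
urls = [
    "https://yourstore.com/products/example-product",
    "https://yourstore.com/collections/all",
    "https://yourstore.com/blogs/news/example-post",
]

for url in urls:
    status = "crawlable" if parser.can_fetch("Googlebot", url) else "BLOCKED"
    print(url, "->", status)
```

With the sample rules above, the collection URL comes back blocked because of the overly broad `Disallow: /collections/` line, while the product and blog URLs remain crawlable.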
Method two: Use Google Search Console
Log into Google Search Console and go to Indexing in the left sidebar, then click Pages. Scroll down to the section that breaks down why pages are not indexed. Look specifically for a category called “Blocked by robots.txt.” Click on it to see the full list of URLs that Google found but could not crawl because of your robots.txt rules.
This view is especially useful because it shows you the actual impact of the problem. You may have a robots.txt rule that technically applies to a certain path, but if Google never tried to crawl a URL matching that path, there is no harm done. The Search Console report shows you exactly which real pages on your store are being blocked right now.
What Shopify robots.txt should and should not block
Understanding the difference between pages that should be blocked and pages that should be crawlable will help you evaluate every rule in your robots.txt file with confidence. Here is a clear breakdown.
| Page Type | Should Be Blocked | Should Be Crawlable | Reason |
|---|---|---|---|
| Checkout pages (/checkout/) | ✓ | | No SEO value, private transaction data |
| Cart page (/cart) | ✓ | | Session-specific, no ranking value |
| Account pages (/account) | ✓ | | Private user data, login-gated |
| Admin and app internal pages | ✓ | | Not meant for public indexing |
| Product pages (/products/) | | ✓ | Core ranking pages for ecommerce |
| Collection pages (/collections/) | | ✓ | Category pages that drive organic traffic |
| Blog posts (/blogs/) | | ✓ | Content marketing and long-tail keywords |
| Standard pages (/pages/) | | ✓ | About, contact, policy pages need indexing |
| Homepage | | ✓ | Most important page on the entire store |
If a Disallow rule in your robots.txt touches any of the paths that should be crawlable (products, collections, blogs, standard pages, or the homepage), that rule needs to be removed or narrowed so it no longer affects those pages. The rule may have been added for a legitimate reason related to an app or internal page, but if it is written too broadly, the cure is worse than the disease.
> "The robots.txt file is one of the first things we check in any Shopify SEO audit. Store owners are often shocked to find that a filter app or review widget they installed months ago quietly added a Disallow rule that has been blocking Google from an entire section of their store ever since."
>
> LeanScaleMedia SEO Audit Team
Step-by-step: How to fix the blocking rules
Once you have identified the specific Disallow rule causing the problem, the fix depends on where the rule came from and which Shopify plan you are on. Work through the steps below in order.
If you are on Shopify Plus
- Step 1: Open the theme code editor. In your Shopify admin, go to Online Store and then Themes. Click the three-dot menu next to your current theme and select Edit code. In the file list on the left, look for a file called robots.txt.liquid. This file controls your entire robots.txt output.
- Step 2: Locate the problematic Disallow rule. Read through the robots.txt.liquid file carefully. Find the specific line or section that is generating the Disallow rule you want to remove or change. Rules added by apps may appear as Liquid code blocks that output Disallow lines dynamically, or they may be written as plain text lines directly in the file.
- Step 3: Remove or narrow the rule. If the rule is blocking an entire path that should be crawlable, delete that line entirely. If the rule is broadly written but serves a purpose for a specific subdirectory, rewrite it to be more specific. For example, if you see Disallow: /collections/ but you only want to block a particular filtered path, change it to something like Disallow: /collections/some-specific-path/ so that your main collection pages remain accessible.
- Step 4: Save the file and verify. Save the robots.txt.liquid file. Then open a new browser tab and visit yourstore.com/robots.txt to confirm that the rule you removed is no longer showing in the output. If it is still there, go back and look for another place in the file where the same rule might be generated.
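If the unwanted Disallow line comes from Shopify's default rule set rather than from plain text someone typed into the file, the usual pattern is to loop over the defaults and skip the one rule you do not want. A sketch, assuming the rule to drop is Disallow: /collections/ (substitute your own path, and verify the `robots.default_groups` objects against Shopify's current Liquid documentation before relying on this):

```liquid
{%- comment -%}
  Sketch: re-emit Shopify's default robots.txt groups while skipping
  one overly broad rule. The /collections/ path is an example only.
{%- endcomment -%}
{%- for group in robots.default_groups -%}
  {{ group.user_agent }}
  {%- for rule in group.rules -%}
    {%- unless rule.directive == 'Disallow' and rule.value == '/collections/' %}
      {{ rule }}
    {%- endunless -%}
  {%- endfor %}
  {%- if group.sitemap != blank %}
    {{ group.sitemap }}
  {%- endif %}
{%- endfor -%}
```

Skipping a default rule this way is safer than hand-writing the whole file, because every other default stays in sync with whatever Shopify generates.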
If you are on a standard Shopify plan
- Step 1: Identify which app added the rule. Standard Shopify plans do not give you direct access to a robots.txt.liquid file, but apps installed on your store can still influence the file. Look at your installed apps and think about which ones were added around the time the problem started. Common culprits are SEO apps, review apps, wishlist apps, and loyalty program apps.
- Step 2: Check the app settings. Open the settings page for each suspected app. Some apps have a setting that lets you control whether they add anything to robots.txt. If you find the option, turn it off or adjust it to only block the paths the app actually needs protected.
- Step 3: Contact the app developer. If you cannot find the setting yourself, contact the app developer's support team. Explain that one of their Disallow rules is blocking important pages on your store and ask them to help you narrow it down or remove it. Most reputable app developers will respond quickly because a misconfigured robots.txt rule reflects poorly on their product.
- Step 4: Uninstall the app if necessary. If the app developer cannot or will not help, and the problem is causing meaningful harm to your SEO, consider uninstalling the app. When an app is uninstalled, any rules it added to robots.txt are typically removed along with it. Verify this by checking your robots.txt file again after the uninstall.
How to verify the fix and get your pages indexed
Fixing robots.txt is only the first half of the job. You also need to confirm the fix actually worked and then take action to get your previously blocked pages crawled and indexed as quickly as possible.
Confirming the fix in Google Search Console
- Step 1: Run the URL Inspection tool on a previously blocked page. Go to Google Search Console and use the URL Inspection tool. Enter the URL of one of the pages that was showing as “Blocked by robots.txt.” The inspection result should now show that the URL is allowed to be crawled. If it still shows as blocked, Google may not have fetched your updated robots.txt yet. Wait a few hours and try again.
- Step 2: Request indexing for each affected page. Once the URL Inspection tool confirms access is restored, click the Request Indexing button. Do this for each of your most important pages that were previously blocked. Google has a daily limit on manual indexing requests, so prioritize your highest-value pages first, such as your top collection pages and best-selling product pages.
- Step 3: Monitor the Pages report over the following weeks. Go back to the Pages report in Search Console over the next two to four weeks. The number of pages showing as “Blocked by robots.txt” should decrease as Google re-crawls those URLs and finds them accessible. At the same time, your total indexed page count should start to grow as those pages make it into Google’s index.
- Step 4: Check your sitemap for consistency. While you are doing this cleanup, also verify that your sitemap at yourstore.com/sitemap.xml does not contain any URLs that are still blocked in robots.txt. Having a URL in your sitemap and also blocked in robots.txt sends conflicting signals to Google. All URLs in your sitemap should be fully accessible.
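The sitemap cross-check can be scripted with the Python standard library. A sketch using sample inline data (in practice you would fetch yourstore.com/sitemap.xml and yourstore.com/robots.txt over HTTP first):

```python
import xml.etree.ElementTree as ET
from urllib import robotparser

# Sample stand-ins for the live files -- replace with real downloads.
SITEMAP_XML = """\
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://yourstore.com/products/example-product</loc></url>
  <url><loc>https://yourstore.com/collections/all</loc></url>
</urlset>
"""

ROBOTS_TXT = """\
User-agent: *
Disallow: /collections/
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Every sitemap URL should be crawlable; collect any that are not.
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(SITEMAP_XML)
conflicts = [
    loc.text.strip()
    for loc in root.findall(".//sm:loc", ns)
    if not parser.can_fetch("Googlebot", loc.text.strip())
]

for url in conflicts:
    print("In sitemap but blocked by robots.txt:", url)
```

An empty `conflicts` list means your sitemap and robots.txt agree; any URL it prints is sending Google the conflicting signals described above.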
How to prevent this problem from happening again
Once your robots.txt is clean and your pages are back in the index, put a simple system in place to make sure the problem does not quietly come back over time.
- Check your robots.txt file every time you install a new app on your store. This takes less than a minute and can catch a bad rule before it has time to cause damage.
- Keep a saved copy of what your robots.txt looked like when everything was working correctly. If a future change introduces a problem, comparing the two versions will immediately show you what changed.
- Review the Pages report in Google Search Console once a month. The “Blocked by robots.txt” category should ideally show zero URLs. If it starts growing, investigate right away.
- When evaluating new apps before installation, look at reviews that mention SEO or crawling issues. Sometimes other merchants have already discovered that an app causes robots.txt problems and have left notes about it.
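The comparison against a saved known-good copy can be done with any diff tool or a few lines of Python. A sketch with sample contents (the file names are placeholders; in practice, read your saved copy from disk and fetch the live file from yourstore.com/robots.txt):

```python
import difflib

# Sample contents standing in for the saved and live files.
known_good = ["User-agent: *", "Disallow: /checkout", "Disallow: /cart"]
current = [
    "User-agent: *",
    "Disallow: /checkout",
    "Disallow: /cart",
    "Disallow: /collections/",  # a new rule someone (or some app) added
]

diff = list(difflib.unified_diff(
    known_good, current,
    fromfile="robots.known-good.txt",
    tofile="robots.current.txt",
    lineterm="",
))
print("\n".join(diff))
```

Any line starting with `+` is a rule that was added since everything was last working, which is exactly what you want to investigate first.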
Want us to audit your Shopify robots.txt and technical SEO?
We help Shopify brands find hidden crawl blocks and indexing problems that are costing them organic traffic. Book a free call and we will dig into your store’s setup.
Book a free strategy call →